Neural Networks Overview

Neural networks are an advanced technique at the core of the field of deep learning. As we know, machine learning involves working with algorithms that try to predict a target variable or segment data to find relevant patterns without human intervention.

In contrast, a deep learning architecture stacks more than one layer of these algorithms; together they form a network of algorithms that does not necessarily need labeled data to make predictions. Below we can visualize the main components of a neural network.

Neural Network Architecture

A neural network is composed of an input layer, hidden layers, and an output layer. The input layer is the beginning of the workflow in a neural network and receives the initial data, i.e. the features of a dataset.

Hidden layers identify the important information from the inputs, leaving out the redundant information. This makes the network more efficient, and the important patterns in the data are passed on to the next layer, which is the output layer.
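To make the layered structure concrete, here is a minimal sketch of a forward pass through a network with two inputs, one hidden layer of three neurons, and one output neuron. The weight and bias values are made up purely for illustration, and the sigmoid activation (introduced below) is assumed:

```python
import math

def sigmoid(x):
    # Squashes any real input into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each neuron computes a weighted sum of its inputs plus a bias,
    # then applies the activation function.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Illustrative parameters: 2 inputs -> 3 hidden neurons -> 1 output.
W_hidden = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]
b_hidden = [0.0, 0.1, -0.1]
W_output = [[0.7, -0.5, 0.2]]
b_output = [0.05]

features = [1.0, 0.5]                          # input layer: raw features
hidden = layer(features, W_hidden, b_hidden)   # hidden layer
output = layer(hidden, W_output, b_output)     # output layer
print(output)
```

Each call to `layer` corresponds to one arrow-to-arrow hop in the diagram: the inputs feed the hidden neurons, whose outputs in turn feed the output neuron.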

Between the hidden layers and the output layer, the network applies an activation function. This activation function transforms the inputs into the desired output and can take many forms, such as the sigmoid function (as in the binary classification of logistic regression) or other types of functions (ReLU, tanh, softmax, among others that are beyond the scope of this post).

The goal of the activation function is to capture non-linear relationships between the inputs and, at the same time, to convert them into a more useful output.
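The activation functions mentioned above can be written from scratch in a few lines; this sketch shows the sigmoid, ReLU, tanh, and softmax forms for illustration:

```python
import math

def sigmoid(x):
    # Squashes any real input into (0, 1); used for binary classification.
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    # Passes positive inputs through unchanged and zeroes out negatives.
    return max(0.0, x)

def tanh(x):
    # Squashes inputs into (-1, 1), centred at zero.
    return math.tanh(x)

def softmax(xs):
    # Turns a list of scores into probabilities that sum to 1.
    exps = [math.exp(x - max(xs)) for x in xs]  # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]

print(sigmoid(0.0))   # 0.5
print(relu(-2.0))     # 0.0
```

Note that all of these are non-linear: that is precisely what lets a stack of layers model relationships a single linear transformation cannot.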

The figure above shows a network architecture called a “feed-forward network”, as the input signals flow in only one direction (from inputs to outputs). There are other flows of information, such as backpropagation, where the error signal travels backwards through the network, inspecting every connection to check how the output would change with a change in that connection’s weight. The key of the model is to find the optimal weight values W that minimize the prediction error. Just like other machine learning algorithms, neural networks also have a loss function that is minimized.
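The loss-minimization idea can be sketched on the smallest possible case: a single sigmoid neuron trained by gradient descent on a squared-error loss. The toy data and learning rate below are made-up values for illustration; the gradient step is the chain rule applied backwards through the neuron, which is backpropagation in miniature:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy data: the target y is 1 for positive x and 0 for negative x.
data = [(-2.0, 0.0), (-1.0, 0.0), (1.0, 1.0), (2.0, 1.0)]

w, b = 0.0, 0.0   # start from arbitrary weights
lr = 0.5          # learning rate (illustrative choice)

for epoch in range(2000):
    for x, y in data:
        pred = sigmoid(w * x + b)
        # Gradient of the squared error 0.5*(pred - y)^2 with respect to
        # the pre-activation, via the chain rule (sigmoid' = pred*(1-pred)).
        grad = (pred - y) * pred * (1.0 - pred)
        w -= lr * grad * x   # adjust the weight against its gradient
        b -= lr * grad       # adjust the bias against its gradient

loss = sum(0.5 * (sigmoid(w * x + b) - y) ** 2 for x, y in data)
print(w, b, loss)
```

After training, the loss is close to zero: gradient descent has found weight values W that minimize the prediction error, which is exactly the objective described above. Real networks repeat this same update for every weight in every layer.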
