Before going to artificial neural networks, let me first introduce neural networks in general. A neural network is an interconnected group of neurons; the best-known example is the human brain. In technical fields, the term usually refers to an Artificial Neural Network (ANN), or neural net. ANNs are widely used in artificial intelligence and in medical applications, and they are considered one of the fastest growing fields in artificial intelligence, with a promising future.
Our brain is a collection of about 10 billion interconnected neurons. Each neuron is a cell that uses biochemical reactions to receive, process, and transmit information. A neuron's dendritic tree is connected to thousands of neighboring neurons. When one of these neurons fires, a positive or negative charge is received by one of the dendrites. The strengths of all received charges are added together through the process of temporal or spatial summation. Spatial summation combines several weak signals into one large signal; temporal summation converts a rapid series of weak pulses from one source into one large signal. The aggregate input is then passed to the soma (cell body).
Artificial Neural Network Architecture
Neurons are arranged in layers, and neurons in the same layer normally behave in the same manner. The main factors that determine a neuron's behavior are its activation function and the pattern of weighted connections over which it sends and receives signals. The arrangement of neurons into layers and the connection patterns within and between the layers is called the net architecture.
Based on the number of layers and how connections are established in the network, the structure is broadly classified into 2 categories. They are
a) Single layer network
b) Multi layer network
a) Single layer network
It has only one layer of connection weights. The input units receive signals from the outside world, and the response to a signal can be read from the output units. The input units are directly connected to the output units, and such networks are generally used for pattern classification.
b) Multilayer network
A multilayer network has one or more layers of nodes (hidden units) between the input units and output units. Typically there is a layer of weights between each 2 adjacent levels of units. Multilayer nets can solve more complicated problems than single-layer nets, but their training is more difficult.
Here each node is a processing element or unit, and it may be in either of 2 states (Black = Active, White = Inactive). Units are connected to each other with weighted symmetric connections. A positive weighted connection indicates that the 2 units tend to activate each other. A negative connection allows an active unit to deactivate a neighboring unit.
Working: A random unit is chosen. If any of its neighbors are active, the unit computes the sum of the weights on the connections to those active neighbors. If the sum is positive, the unit becomes active; otherwise it becomes inactive.
Feature: Fault-tolerance: if a processing element fails completely, the network can still function properly.
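The random-unit update rule above can be sketched in a few lines of Python. This is a minimal illustration, not a complete implementation: the network size, the symmetric weight values, and the function names are all made up for the example.

```python
import random

# Illustrative 4-unit network; states are 1 (active) or 0 (inactive).
# Weights are symmetric: (a, b) and (b, a) carry the same value.
weights = {
    (0, 1): 1.0, (1, 0): 1.0,
    (0, 2): -1.0, (2, 0): -1.0,
    (1, 3): 0.5, (3, 1): 0.5,
}
states = [1, 0, 1, 0]

def update_random_unit(states, weights):
    """Pick a random unit, sum the weights on connections to its
    active neighbors, and set the unit active if the sum is positive."""
    i = random.randrange(len(states))
    total = sum(w for (a, b), w in weights.items()
                if a == i and states[b] == 1)
    states[i] = 1 if total > 0 else 0

for _ in range(10):
    update_random_unit(states, weights)
print(states)
```

Repeating the update many times lets the network settle into a stable pattern of active and inactive units.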
ARTIFICIAL NEURAL NETWORK
It is composed of a number of nodes or units connected by links, where each link is associated with a numeric weight. Each unit has a set of inputs and a single output value, called its activation level, and learning proceeds by updating the weights on the links.
Constructing a neural network for the desired task
Depending on the required task, the number of units, the kind of units, and the type of network are decided, and the network is constructed. Next, the weights of the network are initialized and then adjusted using a learning algorithm, and outputs are derived for a set of training samples.
Units in neural networks
Neural networks are composed of nodes or units connected by links. A link from unit j to unit i propagates the activation aj from j to i. Every link also has a numeric weight Wj,i, which denotes the strength and sign of the connection. Unit i first computes a weighted sum of its inputs: ini = Σj Wj,i aj.
Then it applies an activation function g to this sum to derive the output. The activation function g serves 2 purposes. First, it makes the unit active (near +1) when the right inputs are given and inactive (near 0) when the wrong inputs are given. Second, it makes the activation nonlinear; otherwise the entire neural network would collapse into a simple linear function.
Common choices for g include two functions: 1) the threshold function and 2) the sigmoid function. The threshold function outputs 1 when the input is positive and 0 otherwise.
Activation functions are also called transfer functions. An activation function is applied to the net input of a node before the result is fed to the next layer. There are various types of activation functions in use, and some are
1) Binary Step Function
Single layer networks use a step function to convert the net input, which is a continuously valued variable, into an output that is a binary or bipolar signal. This is the threshold function mentioned earlier.
2) Binary Sigmoid function
Sigmoid functions are useful activation functions; the logistic function and the hyperbolic tangent function are the most common ones in this category. They are very useful in neural networks trained by backpropagation. The logistic function ranges from 0 to 1 and is usually used as the activation function for neural networks in which the desired output values are either binary or between 0 and 1. To emphasize this range, we call it the binary sigmoid.
3) Bipolar sigmoid function
The logistic sigmoid function can be scaled to have any range of values appropriate for a given problem. The most common range is from -1 to 1, and we call this the bipolar sigmoid function.
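The three activation functions described above can be written out directly. This is a straightforward sketch; the threshold value `theta` is an illustrative parameter.

```python
import math

def binary_step(x, theta=0.0):
    # binary step: 1 when the net input exceeds the threshold theta, else 0
    return 1 if x > theta else 0

def binary_sigmoid(x):
    # logistic function, range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def bipolar_sigmoid(x):
    # logistic function rescaled to the range (-1, 1):
    # equal to 2 * binary_sigmoid(x) - 1
    return 2.0 / (1.0 + math.exp(-x)) - 1.0

print(binary_step(0.3), binary_sigmoid(0.0), bipolar_sigmoid(0.0))
# 1 0.5 0.0
```

Note that the bipolar sigmoid is just the binary sigmoid shifted and scaled, so the two share the same smooth S-shape.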
Backpropagation
Backpropagation is a general purpose learning algorithm, which is powerful but expensive in terms of computational requirements for training. Training a network using backpropagation involves 3 stages.
1) Feedforward of the input training pattern
2) Backpropagation of the associated error.
3) Adjustment of weights.
In feedforward, every input unit receives an input signal and sends it to each of the hidden units. Each hidden unit computes its activation and passes it on, and each output unit computes its activation to form the response of the network for the given input pattern.
While training, each output unit compares its computed activation with its target value to determine the error associated with that pattern at that unit. The mathematical basis for the backpropagation algorithm is the optimization technique known as gradient descent.
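The three stages, feedforward, backpropagation of the error, and weight adjustment, can be sketched for a tiny network. This is a simplified illustration: the network shape (2 inputs, 2 hidden units, 1 output), the learning rate, and the omission of bias terms are all choices made for the example, not prescribed by the text.

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(0)
# small random initial weights: input->hidden (W1) and hidden->output (W2)
W1 = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(2)]
W2 = [random.uniform(-0.5, 0.5) for _ in range(2)]
lr = 0.5  # learning rate

def train_step(x, target):
    # 1) feedforward the input training pattern
    h = [sigmoid(sum(W1[j][i] * x[i] for i in range(2))) for j in range(2)]
    y = sigmoid(sum(W2[j] * h[j] for j in range(2)))
    # 2) backpropagate the associated error
    #    (the sigmoid's derivative is y * (1 - y))
    delta_out = (target - y) * y * (1 - y)
    delta_hid = [delta_out * W2[j] * h[j] * (1 - h[j]) for j in range(2)]
    # 3) adjust the weights by gradient descent
    for j in range(2):
        W2[j] += lr * delta_out * h[j]
        for i in range(2):
            W1[j][i] += lr * delta_hid[j] * x[i]
    return (target - y) ** 2  # squared error before this update

errors = [train_step([1.0, 0.0], 1.0) for _ in range(100)]
print(errors[0], errors[-1])
```

Running the same pattern through many training steps shows the squared error shrinking as gradient descent adjusts the weights.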
Choice of activation functions
An activation function for a backpropagation network should have several important characteristics.
It should be continuous, differentiable, and monotonically non-decreasing. It is also very advantageous if its derivative is easy to compute.
Choice of initial weights and biases
The choice of initial weights will influence whether the network reaches a global or only a local minimum of the error. The update of the weight between 2 units depends on both the derivative of the upper unit's activation function and the activation of the lower unit. For this reason, it is important to avoid choices of initial weights that would make it likely that either the activations or the derivatives of the activations are zero. The initial weights must not be too large, or the initial input signals to each hidden or output unit will likely fall in the region where the derivative of the sigmoid function has a very small value. On the other hand, if the initial weights are too small, the net input to a hidden or output unit will be close to 0, which also causes extremely slow learning.
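A common way to satisfy both constraints is to draw the initial weights uniformly from a small symmetric interval. The interval (-0.5, 0.5) below is one illustrative choice, not the only valid one.

```python
import random

def init_weights(n_in, n_out, scale=0.5):
    """Initialize a weight matrix uniformly in (-scale, scale):
    small enough that sigmoid derivatives stay away from zero,
    but not so small that net inputs are nearly 0."""
    return [[random.uniform(-scale, scale) for _ in range(n_in)]
            for _ in range(n_out)]

W = init_weights(3, 2)
print(W)
```

Each row of `W` holds the incoming weights for one unit in the next layer.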
How long to train the network
Since the common interest in applying a backpropagation network is to achieve a balance between correct responses to training patterns and good responses to new input patterns, it is not necessarily advantageous to continue training until the total squared error reaches a minimum. As long as the error for both training and testing patterns decreases, training continues. When the error on testing patterns begins to increase, the network is starting to memorize the training patterns, and at this point training can be terminated.
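This stopping rule, often called early stopping, can be sketched as a simple loop. The function names and the dummy error sequence below are made up for illustration; in practice `validation_error` would evaluate the network on held-out testing patterns.

```python
def train_with_early_stopping(train_step, validation_error, max_epochs=1000):
    """Train until the error on testing (validation) patterns rises,
    which signals that the network is starting to memorize the
    training patterns."""
    prev = float("inf")
    for epoch in range(max_epochs):
        train_step()
        err = validation_error()
        if err > prev:
            return epoch  # error increased: terminate training here
        prev = err
    return max_epochs

# Dummy run: the error falls, then rises between the 4th and 5th readings.
errs = iter([0.9, 0.5, 0.3, 0.2, 0.25, 0.1])
stopped = train_with_early_stopping(lambda: None, lambda: next(errs))
print(stopped)  # 4
```

A more robust variant would tolerate a few bad epochs before stopping, but the core idea is the same: watch the testing error, not the training error.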
Applications of Neural Networks
Neural networks are a fast growing field, and most complex automated systems have their base in them. Artificial intelligence work is extended by using the functionality of neural networks. The main applications are:
Robotics, function approximation, time series prediction, pattern recognition, novelty detection, fitness approximation
Other applications include:
Text to Speech
A neural network can be used to pronounce written text. This is achieved by mapping the text stream to phonemes (basic sound elements) and then passing the phonemes to an electronic speech generator. It is one of the most impressive applications of neural networks, and it works with approximately correct pronunciation.
Handwritten Character Recognition
An artificial neural network can be trained to recognize handwritten characters.
Autonomous Land Vehicle In a Neural Network
It can learn to steer a vehicle along a road by observing the performance of a human.
Computer data processing
It involves entering, analyzing, summarizing, and converting data into other forms that are desirable for a user.
Artificial neural networks summary
Artificial neural networks are studied by modeling the human brain and its neural networks. In the computing field, artificial neural networks have a promising future, and research is ongoing to extend their use.
If you want to know more about artificial neural networks, feel free to ask your questions in the comment section below.