Artificial Neural Networks




Neural networks are one of the main kinds of knowledge-based systems, alongside symbolic artificial intelligence (AI), fuzzy systems, and hybrid systems. Neural networks try to mimic human-like reasoning: they are inspired by the processing elements of the human brain, the neurons. Neural networks have learning and self-regulating abilities, but they are not well suited for symbolic reasoning. Information processing using neural networks is called neurocomputation.

The basic construction of a neural network is as follows:

Input signals - each carries a load, or weight, and these weights form the memory of the system.

Input function - calculates the combined net input signal to a neuron from all of its inputs.

Activation (signal) function - calculates the activation level of a neuron as a function of its net input signal and, possibly, of its previous state.

Output signal - equal to the activation value, emitted through the output (the axon) of the neuron.
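The four parts above can be sketched in code. This is a minimal illustration, assuming a single neuron with a weighted-sum input function and a simple threshold activation; all names and values are made up for the example.

```python
def net_input(xs, ws):
    """Input function: combine all input signals into one net value."""
    return sum(x * w for x, w in zip(xs, ws))

def activation(u, threshold=0.5):
    """Activation function: map the net input to an activation level."""
    return 1.0 if u >= threshold else 0.0

def output_signal(a):
    """Output function: here the output simply equals the activation."""
    return a

xs = [1.0, 0.0, 1.0]   # input signals
ws = [0.4, 0.9, 0.3]   # weights (the "memory" of the system)
o = output_signal(activation(net_input(xs, ws)))
```

Here the net input is 0.4 + 0.3 = 0.7, which exceeds the threshold, so the neuron fires.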

Neural networks are also known as connectionist models, because the knowledge of the system is stored in its connections. Some of the main characteristics of neural networks are as follows:

Learning - a network can start with no knowledge and be trained using data examples, e.g. input-output pairs, or using input data alone, also known as unsupervised training. Learning may require repeated presentation of the data.

Generalization - if a new input signal differs from the examples already seen, the network generates the best output it can based on the inputs it has learned from.

Massive potential parallelism - a multitude of neurons can fire together.

Robustness - if a certain number of neurons fail, the other neurons will still successfully perform their tasks.

Associative storage of information

Spatiotemporal information processing

Partial match - in many cases, new input data need not perfectly match previously stored patterns for the network to respond.

Neural networks can deal with noisy, missing, and corrupted data, making the proper adjustments and still producing good output.
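This tolerance for imperfect data can be illustrated with a single threshold neuron whose weighted sum stays on the same side of the threshold when one input is noisy or missing. The weights and threshold below are illustrative, not taken from any particular network.

```python
def fire(xs, ws, threshold=1.0):
    """Fire (output 1) when the weighted sum reaches the threshold."""
    return 1 if sum(x * w for x, w in zip(xs, ws)) >= threshold else 0

ws = [0.6, 0.6, 0.6]
clean = [1.0, 1.0, 1.0]     # weighted sum 1.8: fires
noisy = [1.0, 0.8, 1.1]     # noisy readings, sum 1.74: still fires
missing = [1.0, 0.0, 1.0]   # one input missing, sum 1.2: still fires
```

Because the decision depends on the combined evidence of all inputs, no single corrupted value flips the output.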

A neuron has input connections x1, x2, …, xn, with weights w1, w2, …, wn attached to those input connections. One input to the neuron, called the bias, has a constant value of 1 and is represented as a separate input, x0.

The input function f calculates the net input signal to the neuron, u = f(x, w), where x is the input-signal vector and w is the weight vector. Here f is the summation function:

u = Σ(i = 1..n) xi wi

Activation (signal) function - the function s calculates the activation level of the neuron: a = s(u).

Output function - determines the output signal value, which is emitted through the output (axon) of the neuron: o = g(a).

The output signal is assumed to be equal to the activation level of the neuron, o = a.
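As a worked example of these formulas: the inputs and weights below are made up for illustration, the bias input x0 = 1 appears as its own term, and a sigmoid is assumed as the activation function s.

```python
import math

x = [1.0, 2.0, -1.0]    # input signals x1..x3
w = [0.5, -0.25, 1.0]   # weights w1..w3
x0, w0 = 1.0, 0.1       # bias input (constant 1) and its weight

u = x0 * w0 + sum(xi * wi for xi, wi in zip(x, w))  # net input u
a = 1.0 / (1.0 + math.exp(-u))                      # activation a = s(u)
o = a                                               # output o = a
```

Here u = 0.1 + 0.5 - 0.5 - 1.0 = -0.9, and the sigmoid squashes this to an activation of about 0.29, which the neuron emits as its output.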

A neural network has four parameters: the type of its neurons, also known as nodes; its connectionist architecture, which is the organization of the connections between neurons; its learning algorithm; and its recall algorithm. A simple neural network might contain four input nodes, two intermediate nodes, and one output node.

The types of neurons and connections define the network's topology. Neurons can be fully connected or partially connected. Two major connectionist architectures can be distinguished by the number of input and output sets of neurons and by the layers of neurons used. In an autoassociative architecture, the input neurons and output neurons are the same set; in a heteroassociative architecture, there are separate sets of input neurons and output neurons.

A feed-forward architecture has no connections back from the output neurons to the input neurons, so the network keeps no memory of its previous output values and activation states. A feedback architecture has connections from the output neurons back to the input neurons; such a network keeps a memory of its previous states, and its next state depends on the input signals as well as on the previous states of the network. The most attractive characteristic of neural networks is their ability to learn, which is exercised by applying a learning (training) algorithm.
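A feed-forward network of the simple 4-2-1 shape mentioned earlier (four inputs, two intermediate neurons, one output) can be sketched as below. The weights are arbitrary illustrative values, and a sigmoid activation is assumed; note that signals only flow forward, so nothing is remembered between presentations.

```python
import math

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def layer(inputs, weights):
    """Each row of weights feeds one neuron in the next layer."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)))
            for row in weights]

w_hidden = [[0.2, -0.5, 0.1, 0.4],   # weights into intermediate neuron 1
            [0.7, 0.3, -0.2, 0.6]]   # weights into intermediate neuron 2
w_out = [[0.5, -0.9]]                # weights into the single output neuron

x = [1.0, 0.0, 1.0, 1.0]             # input vector
y = layer(layer(x, w_hidden), w_out) # forward pass: input -> hidden -> output
```

A feedback (recurrent) variant would additionally pass part of y back in with the next input, which is what gives such networks their memory of previous states.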

Supervised training is when the input vector x and the corresponding output vector y are supplied together. Training is performed until the neural network learns to pair each input vector x with its output vector y.
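A minimal sketch of supervised training, assuming a single threshold neuron and the classic perceptron rule (one common choice, not the only one): weights are nudged after each example until the network pairs every input vector with its target output. The AND data set below is illustrative.

```python
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # (x, y) pairs
w = [0.0, 0.0]
bias, rate = 0.0, 0.1

def predict(x):
    u = bias + sum(xi * wi for xi, wi in zip(x, w))
    return 1 if u >= 0 else 0

for _ in range(20):                   # repeated presentation of the data
    for x, y in data:
        err = y - predict(x)          # 0 when the pairing is already correct
        w = [wi + rate * err * xi for wi, xi in zip(w, x)]
        bias += rate * err
```

After a few passes over the data the error is zero on every pair, which is the stopping condition the paragraph above describes.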

Unsupervised training is when only the input vector x is supplied. Reinforcement learning, also called reward-penalty learning, is a combination of the two paradigms above: an input signal is presented to the network, and the output signal is observed. If the output is considered acceptable, the connection weights are increased; otherwise, the connection weights are decreased.
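The reward-penalty idea can be sketched as a single update rule: strengthen the connections that were active when the output was judged acceptable, and weaken them otherwise. The update factor and weight values are illustrative assumptions.

```python
def reinforce(w, x, acceptable, rate=0.1):
    """Increase weights of active inputs on reward, decrease on penalty."""
    sign = 1 if acceptable else -1
    return [wi + sign * rate * xi for wi, xi in zip(w, x)]

w = [0.5, 0.2]
w = reinforce(w, [1.0, 0.0], acceptable=True)    # reward: first weight grows
w = reinforce(w, [1.0, 1.0], acceptable=False)   # penalty: both weights shrink
```

Only the connections carrying a nonzero input are adjusted, so credit and blame fall on the neurons that actually contributed to the output.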












More about this author: Michael Mackie
