
# Artificial Neural Networks: Distributed Computing, Backpropagation and Self-Organisation

The artificial neural network is a computational model inspired by the real neural networks found in the human brain. It is an approach within the field of Artificial Intelligence (AI) - the area of study, arising in the 1950s, that tries to get computers and machines to do the things people do: using language, recognising faces, finding patterns, solving problems, and learning, all with less than perfect information.

The artificial neural network is known as a "sub-symbolic" approach to AI because it does not rely upon a set of well-defined rules - unlike "symbolic" AI, where the rules for processing information can be read directly from the computer program. Rather, the sub-symbolic approach uses a distributed model (the network) whose parameters are adjusted according to pre-defined learning rules. You can never read off the "knowledge" held in the network as a set of rules, yet the network is able to process information in powerful ways.

The inspiration for the artificial neural network comes from the human brain - itself a highly complex network of distributed processing units, each very simple (taking a pulse and passing it on) but together producing incredible results: processing "input" such as language and vision, and controlling "output" such as motor functions and speech. The neural network in the human brain is characterised by learning, and not surprisingly the artificial neural network is a computational model that learns.

Rather as a statistical model adjusts its parameters to fit the data in a sample, the artificial neural network adjusts its internal parameters according to the data it encounters - its training set. This is the "training" or "learning" phase of the network. Without training, an artificial neural network is useless; by analogy, the human brain may be full of potential, but until it has learned, it can do nothing.
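The idea of adjusting parameters to fit training data can be sketched in a few lines of Python. The model, data and learning rate below are invented for illustration: a one-weight model is nudged towards its targets by repeated small corrections.

```python
# Minimal sketch of a "training" phase: a one-parameter model
# y = w * x whose weight w is adjusted to fit a small training set.
# Data, initial weight, and learning rate are illustrative choices.
train = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target), target = 2 * input
w = 0.0    # initial parameter: the untrained model knows nothing
lr = 0.05  # learning rate: size of each correction

for epoch in range(200):
    for x, t in train:
        y = w * x            # the model's prediction
        error = y - t        # signed error against the known target
        w -= lr * error * x  # gradient step on the squared error

# w converges towards 2.0, the relationship hidden in the data
```

After training, the "knowledge" (here, that outputs are twice the inputs) lives only in the value of `w` - there is no explicit rule anywhere in the program, which is the sub-symbolic point made above.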

There are two main approaches to training an artificial neural network: supervised and unsupervised. These roughly correspond to classifying data, where the category of each training item is already known (hence supervised), and clustering data, where the classes are unknown and the data is "sorted" or "arranged" according to the characteristics of the examples themselves (hence unsupervised). Supervised training methods require you to present a training example to the network and tell it the response it should give; unsupervised training methods require you to present a training example and let the network organise and change its internal structure according to its own rules.
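The two regimes can be contrasted with a toy sketch; all the points, rates and prototype positions below are invented. The supervised half is a perceptron-style update that is told each label; the unsupervised half is a winner-take-all update in which two prototype vectors arrange themselves around the same points with no labels at all.

```python
import math

# Supervised: each example carries its known label, and the weight
# vector is nudged whenever the network's answer disagrees with it.
labelled = [((2.0, 1.0), 1), ((1.5, 2.0), 1),
            ((-1.0, -1.5), -1), ((-2.0, -0.5), -1)]
w = [0.0, 0.0]
for _ in range(20):
    for (x1, x2), label in labelled:
        guess = 1 if w[0] * x1 + w[1] * x2 >= 0 else -1
        if guess != label:          # the network is told the right answer
            w[0] += label * x1
            w[1] += label * x2

# Unsupervised: the same points arrive without labels; two prototype
# vectors compete, and the nearest one moves towards each input.
points = [p for p, _ in labelled]
protos = [[2.0, 2.0], [-2.0, -2.0]]
for _ in range(20):
    for x1, x2 in points:
        dists = [math.dist((x1, x2), p) for p in protos]
        win = dists.index(min(dists))            # winner-take-all
        protos[win][0] += 0.2 * (x1 - protos[win][0])
        protos[win][1] += 0.2 * (x2 - protos[win][1])

# The prototypes drift towards the two natural clusters in the data,
# even though nothing ever told the network the clusters existed.
```

Note the asymmetry: the supervised rule needs an external teacher supplying `label`, while the unsupervised rule needs only the inputs and its own internal competition.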

A common approach to supervised training is the "backpropagation" method, developed by various researchers and popularised in 1986 by Rumelhart, Hinton and Williams (in the journal Nature). A common approach to unsupervised training is "self-organisation", developed in the early 1980s by Kohonen (see his 2001 book Self-Organizing Maps). There are many others.
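Backpropagation can be illustrated with a minimal network trained on the XOR function, a classic task that no single-layer network can solve. The network size, learning rate and random seed below are arbitrary choices for the sketch, not the original authors' setup; the essential idea is that the output error is propagated backwards through the network to give a correction for every weight.

```python
import math
import random

random.seed(0)

def sig(z):
    """Sigmoid activation: a smooth, differentiable 'neuron' output."""
    return 1.0 / (1.0 + math.exp(-z))

# XOR truth table: not linearly separable, so a hidden layer is needed.
data = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0),
        ((1.0, 0.0), 1.0), ((1.0, 1.0), 0.0)]

H = 3  # hidden units (an illustrative choice)
wh = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]  # [w_x1, w_x2, bias]
wo = [random.uniform(-1, 1) for _ in range(H + 1)]                  # hidden->output + bias
lr = 0.5

for _ in range(20000):
    for (x1, x2), t in data:
        # Forward pass: inputs flow through hidden units to the output.
        h = [sig(u[0] * x1 + u[1] * x2 + u[2]) for u in wh]
        y = sig(sum(wo[j] * h[j] for j in range(H)) + wo[H])
        # Backward pass: the output error is propagated back, scaled
        # by each unit's local derivative (the "backpropagation" step).
        dy = (y - t) * y * (1.0 - y)
        dh = [dy * wo[j] * h[j] * (1.0 - h[j]) for j in range(H)]
        # Gradient-descent updates for every weight in the network.
        for j in range(H):
            wo[j] -= lr * dy * h[j]
            wh[j][0] -= lr * dh[j] * x1
            wh[j][1] -= lr * dh[j] * x2
            wh[j][2] -= lr * dh[j]
        wo[H] -= lr * dy

def predict(x1, x2):
    h = [sig(u[0] * x1 + u[1] * x2 + u[2]) for u in wh]
    return sig(sum(wo[j] * h[j] for j in range(H)) + wo[H])
```

After training, `predict` gives values near 1 for mismatched inputs and near 0 for matched ones; as with the earlier example, that "knowledge" of XOR exists only as a pattern of weights, not as any readable rule.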

Artificial neural networks have been used successfully for tasks that would traditionally require human intelligence. They are especially useful when no ready-made rules represent the 'knowledge' but a set of examples does; when the data is noisy or incomplete; when the problem is non-linear and unsuited to analytic methods; and when the problem is closer to human perception and skill than to traditional computing. Tasks with well-defined steps are generally better suited to conventional programming techniques or to the symbolic methods of AI.
