Neural wiring for silicon brains
© Copyright 1994-2002, Rishab Aiyer Ghosh. All rights reserved.
Electric Dreams #7

All the supercomputers in the world together don't have the visual processing power of a house-fly. That enormous computational power, so good at simulating the exciting life of each dust particle in a storm, is helpless when faced with the seemingly trivial tasks of telling a smile from a grimace, an endearment from a threat, a poem from a lawyer's brief.

It is deceptive to think that this is because cold, lifeless computers cannot feel, and are incapable of the emotional responses required to feel threatened. It would be useful to human-computer communication, of course, if they provided some responses to at least the natural emotional signs present in ordinary conversation. Nevertheless, that is another issue altogether. You don't need to feel threatened to recognize a grimace from a smile; computers find it hard to distinguish between the appearances, let alone the concepts. Traditional, algorithmic step-by-detailed-step approaches to problem-solving cannot cope with most natural actions or processes, which are actually remarkably complex, however simple they seem. Just try to write down instructions to determine that the face in a photograph is smiling, using a ruler and a pair of dividers, and you'll see what I mean.

Whether computers are to do our thinking for us, or simply protect life in a fast-paced world from a flood of meaningless information, the least they can do is think (a little) like us. To solve the large class of problems that encompasses far more than facial expressions, a method gaining wide acceptance is to simulate the working of the brain, and the way humans, for instance, solve such problems. We do it without being told how; we learn. Not surprisingly, the software and hardware systems that learn instead of being programmed, modelled as they are on the jumble of nerves in our brains, are called neural networks.

Neural networks, both natural and simulated, work on the power of the collective. Neurons are not incredibly complex as individual information processing units, but as they come together in intricate networks of billions in the human brain, they manage to perform tasks that baffle the powerful single processing units of supercomputers. For the near future it will remain impractical to build artificial neural networks consisting of even a million processors. However, the key to a neural network's success lies not so much in sheer numbers as in the capacity for organization. The brain is constantly adapting to changing circumstances, through a process of learning; these changes are reflected in the changing connections between neurons and their responses to stimuli.
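Just how simple a single simulated neuron can be is worth seeing. A minimal sketch, in Python for illustration (the names here are made up for this column, not any particular system's): each neuron merely adds up its weighted stimuli and fires if the total crosses a threshold.

```python
def neuron(stimuli, weights, threshold):
    """A single simulated neuron: fire (return 1) if the
    weighted sum of incoming stimuli exceeds the threshold."""
    activation = sum(w * s for w, s in zip(weights, stimuli))
    return 1 if activation > threshold else 0
```

All the interesting behaviour comes from wiring many such trivial units together and adjusting the weights -- the 'changing connections' -- not from any cleverness inside one unit.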

In an artificial neural network, each 'neuron' is simulated by some program mapping stimuli (inputs) to responses (outputs). Every neuron -- which may be a piece of software or even hardware, as part of a specialized neural chip -- keeps track of its state, a set of values that determine its responses. Unless the neural network is engaged in the process of learning something, these states are used only as auxiliary inputs to calculate outputs. Most artificial neural networks have two distinct phases of operation -- training, and performing. It is while a network is being trained that it learns, and while performing that it uses its knowledge (natural brains don't usually make this distinction). The process of learning is what separates neural networks from traditional programming methods. While learning, a network is typically exposed to a data set representative of the real data it will later encounter. Along with the data set are the correct responses; over a period of training using large data sets, neural nets figure out how to extrapolate their 'understanding' to other data. For example, if you feed a series of faces into a network, along with their expressions, it will learn to tell whether a new face, given to it in the later stage of performance, is smiling.
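The training/performing split can be sketched in a few lines of Python. This is a toy single-layer perceptron using the classic perceptron learning rule -- one of the simplest ways such networks learn, chosen here for brevity, and not the only one. The network is shown a tiny data set with its correct responses, nudges its state (the weights) after every mistake, and only then 'performs' on input:

```python
def train(examples, rate=0.1, epochs=100):
    """Training phase: nudge weights toward the correct
    responses until the net gets every example right."""
    n = len(examples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        mistakes = 0
        for stimuli, target in examples:
            out = 1 if sum(w * s for w, s in zip(weights, stimuli)) + bias > 0 else 0
            err = target - out
            if err:
                mistakes += 1
                weights = [w + rate * err * s for w, s in zip(weights, stimuli)]
                bias += rate * err
        if mistakes == 0:   # learned the whole data set
            break
    return weights, bias

def perform(weights, bias, stimuli):
    """Performing phase: apply the learned state to new input."""
    return 1 if sum(w * s for w, s in zip(weights, stimuli)) + bias > 0 else 0

# A trivial 'data set with correct responses': fire only
# when both stimuli are present.
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(examples)
```

Nobody wrote down a rule for when to fire; the network found its own weights from examples -- which is the whole point.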

There is some traditional programming involved -- the metalearning skills. Neural networks, built for different purposes, are programmed to learn in different ways. Primarily good at forms of pattern recognition, these neurocomputing systems, though not yet clever enough to hunt for news on Bosnia and compose critical articles, are being used in diverse areas, from predicting the rise and fall of oil futures, to recognizing -- you guessed it -- faces.

Someone once said that for a computer to do the work of a human brain, it would have to be the size of the Washington Monument with the Niagara Falls for cooling. That may or may not be true; in any case it is becoming very clear that without highly intelligent, brain-like but automated knowledge processing, we might drown in the information revolution rather than benefit from it.
