On Functional Neuron Modeling

C. E. Hendrix

Space-General Corporation
El Monte, California

There are two very compelling reasons why mathematical and physical models of the neuron should be built. First, model building, while widely used in the physical sciences, has been largely neglected in biology; yet if experience in the physical sciences is any guide, there can be little doubt that building neuron models will increase our understanding of the function of real neurons. Secondly, neuron models are extremely interesting in their own right as new technological devices. Hence the interest in, and the reason for, symposia on self-organizing systems.

We should turn our attention to the properties of real neurons, and see which of them are the most important ones for us to imitate. Obviously, we cannot hope to imitate all the properties of a living neuron, since that would require a complete simulation of a living, metabolizing cell, and a highly specialized one at that; but we can select those functional properties which we feel are the most important, and then try to simulate those.

The most dramatic aspect of neuron function is, of course, the axon discharge. It is this which gives the neuron its “all-or-nothing” character, and it is this which provides it with a means for propagating its output pulses over a distance. Hodgkin and Huxley (1) have developed a very complete description of this action. Their model is certainly without peer in describing the nature of the real neuron.

On the technological side, Crane’s “neuristors” (2) represent a class of devices which imitate the axonal discharge in a gross sort of way, without all the subtle nuances of the Hodgkin-Huxley model. Crane has shown that neuristors can be combined to yield the various Boolean functions needed in a computer.

However, interesting as such models of the axon are, there is some question as to their importance in the development of self-organizing systems. The pulse generation, “all-or-nothing” part of the axon behavior could just as well be simulated by a “one-shot” trigger circuit. The transmission characteristic of the axon is, after all, only Nature’s way of sending a signal from here to there. It is an admirable solution to the problem, when one considers that it evolved, and still works, in a bath of salt water. There seems little point, however, in a hardware designer limiting himself in this way, especially if he has an adequate supply of insulated copper wire.

If the transmission characteristic of the axon is deleted, the properties of the neuron which seem to be the most important in the synthesis of self-organizing systems are:

a. The neuron responds to a stimulus with an electrical pulse of standard size and shape. If the stimulus continues, the pulses occur at regular intervals with the rate of occurrence dependent on the intensity of stimulation.

b. There is a threshold of stimulation. If the intensity of the stimulus is below this threshold, the neuron does not fire.

c. The neuron is capable of temporal and spatial integration. Many subthreshold stimuli arriving at the neuron from different sources, or at slightly different times, can add up to a sufficient level to fire the neuron.

d. Some inputs are excitatory, some are inhibitory.

e. There is a refractory period. Once fired, there is a subsequent period during which the neuron cannot be fired again, no matter how large the stimulus. This places an upper limit on the pulse rate of any particular neuron.

f. The neuron can learn. This property is conjectural, since learning has apparently not yet been clearly demonstrated in isolated living neurons. However, the learning property is basic to all self-organizing models.
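Properties (a) through (e) can be sketched as a minimal discrete-time simulation. All parameter values (threshold, leak factor, refractory duration, stimulus level) are illustrative assumptions, not values from the text:

```python
# Minimal discrete-time sketch of neuron properties (a)-(e).
# All parameter values are illustrative assumptions.

class ModelNeuron:
    def __init__(self, threshold=1.0, leak=0.8, refractory_steps=3):
        self.threshold = threshold                  # property (b): firing threshold
        self.leak = leak                            # decay factor for temporal integration (c)
        self.refractory_steps = refractory_steps    # property (e): refractory period
        self.potential = 0.0
        self.refractory = 0

    def step(self, inputs):
        """inputs: list of (signal, polarity) pairs; polarity +1 excites, -1 inhibits (d)."""
        if self.refractory > 0:                     # cannot fire during refractory period (e)
            self.refractory -= 1
            self.potential = 0.0
            return 0
        # spatial integration (c): sum over all input sources
        drive = sum(sig * pol for sig, pol in inputs)
        # temporal integration (c): leaky accumulation of sub-threshold stimulation
        self.potential = self.leak * self.potential + drive
        if self.potential >= self.threshold:
            self.potential = 0.0
            self.refractory = self.refractory_steps
            return 1                                # standard-size output pulse (a)
        return 0

neuron = ModelNeuron()
# A sustained sub-threshold stimulus produces pulses at regular intervals (a);
# the refractory period caps the maximum pulse rate (e).
train = [neuron.step([(0.6, +1)]) for _ in range(20)]
print(train)
```

With the values assumed here, the sustained stimulus yields a pulse every five time steps, illustrating rate coding under a fixed stimulus intensity.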

Neuron models with the above characteristics have been built, although none seems to have incorporated all of them in a single model. Harmon (3) at Bell Labs has built neuron models which have characteristics (a) through (e), and with them he has built extremely interesting devices which simulate portions of the peripheral nervous system.

Various attempts at learning elements have been made, perhaps best exemplified by those of Widrow (4). These devices are capable of “learning,” but they are static and lack all the temporal characteristics listed in (a) through (e). Such devices can deal with temporal patterns only by a mapping technique, in which a temporal pattern is converted to a spatial one.

Having listed what seem to be the important properties of a neuron, it is possible to synthesize a simple model which has all of them.

A number of input stimuli are fed to the neuron through a resistive summing network which establishes the threshold and accomplishes spatial integration. The voltage at the summing junction triggers a “one-shot” circuit, which, by its very nature, accomplishes pulse generation and exhibits temporal integration and a refractory period. The polarity of an individual input determines whether it shall be excitatory or inhibitory. This much of the circuitry is very similar to Harmon’s model.

Learning is postulated to take place in the following way: when the neuron fires, an outside influence (the environment, or a “trainer”) determines whether the result of firing was desirable. If it was desirable, the threshold of the neuron is lowered, making it easier to fire the next time. If the result was not desirable, the threshold is raised, making it more difficult for the neuron to fire the next time.
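This postulated learning rule amounts to a one-line threshold update. In the sketch below, the step size and the clamping limits are assumptions introduced only to keep the adjustment bounded:

```python
# Hedged sketch of the postulated learning rule: reward lowers the
# threshold (easier to fire), punishment raises it (harder to fire).
# DELTA and the clamping limits are illustrative assumptions.
DELTA = 0.05

def adjust_threshold(threshold, pr_signal):
    """pr_signal: +1 = punish, 0 = no action, -1 = reward."""
    threshold += DELTA * pr_signal
    return min(max(threshold, 0.1), 2.0)   # keep the threshold in a workable range

t = 1.0
t = adjust_threshold(t, -1)   # reward: threshold drops
t = adjust_threshold(t, +1)   # punish: threshold rises back
print(t)
```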

In a self-organizing system, many model neurons would be interconnected. A “punish-reward” (P-R) signal would be connected to all neurons in common. However, means would be provided so that only those neurons which have recently fired are susceptible to the effects of the P-R signal. Thus only those which had taken part in a recent response are modified. This idea is due to Stewart (5), who applies it to his electrochemical devices rather than to an electronic device.

The mechanization of the circuitry is rather straightforward. A portion of the output of the pulse generator is routed through a “pulse-stretcher,” or short-term memory, which temporarily records the fact that the neuron has recently fired. The pulse-stretcher output controls a gate, which either accepts or rejects the P-R signal. The P-R signal can take on only three values, a positive level, zero, or a negative level, depending on whether the signal is “punish,” “no action,” or “reward.” Finally, the gate output controls a variable resistor, which is part of the resistive summing network. Figure 1 is a block diagram of the complete model.
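The pulse-stretcher-and-gate path can be sketched in a few lines. The memory duration (how long a recent firing keeps the gate open) is an assumed value, and the P-R encoding follows the three-level convention just described:

```python
# Sketch of the P-R gating path: a "pulse stretcher" remembers a recent
# firing for a few time steps, and only while that memory persists does
# the punish/reward signal pass through to the threshold-adjusting element.
# STRETCH (the memory duration) is an illustrative assumption.
STRETCH = 5

class PRGate:
    def __init__(self):
        self.memory = 0                    # pulse-stretcher state (steps remaining)

    def step(self, fired, pr_signal):
        """fired: 1 if the neuron fired this step; pr_signal: +1 punish, 0 none, -1 reward."""
        if fired:
            self.memory = STRETCH          # record the recent firing
        gated = pr_signal if self.memory > 0 else 0   # gate passes P-R only after firing
        self.memory = max(self.memory - 1, 0)
        return gated

gate = PRGate()
# A reward arriving shortly after a firing passes the gate; a punish signal
# arriving after the short-term memory has expired is rejected.
out = [gate.step(f, pr)
       for f, pr in [(1, 0), (0, -1), (0, 0), (0, 0), (0, 0), (0, +1), (0, +1)]]
print(out)
```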

Note that this device differs from the usual “Perceptron” configuration in that the threshold resistor is the only variable element, instead of each input resistor being a variable weighting element. This simplification could lead to a situation where more of these single-variable neurons would be required to perform a specified task than would multivariable ones. This possible disadvantage is at least partially offset by the very simple control algorithm, which is contained in the design of the model and is not the matter of great concern that it seems to be for most multivariable models.

Figure 1—Block diagram of neuron model

Hand simulations of the action of this type of model suggest that a certain amount of randomness would be desirable. It appears that a self-organizing system built of these elements, and of sufficient complexity to be interesting, would have a fair number of recirculating loops, so that spontaneous activity would be maintained in the absence of input stimulus. If this is the case, then randomness could easily be introduced by adding a small amount of noise from a random noise generator to the signal on the P-R bus. Thus, any neurons which spontaneously fire would be continually having their thresholds modified.
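The proposed noise injection amounts to adding a small random term to the P-R bus level. In this sketch the noise amplitude is an assumed value, and a fixed seed is used only to make the example repeatable:

```python
import random

# Illustrative sketch: a small random term added to the P-R bus, so that
# spontaneously firing neurons continually have their thresholds perturbed
# even when no punish/reward action is being applied.
# NOISE_AMPLITUDE is an assumed value.
NOISE_AMPLITUDE = 0.02
random.seed(42)   # fixed seed so the sketch is repeatable

def pr_bus_level(pr_signal):
    """pr_signal: -1 (reward), 0 (no action), or +1 (punish)."""
    return pr_signal + random.uniform(-NOISE_AMPLITUDE, NOISE_AMPLITUDE)

# Even with the P-R signal at "no action," the bus carries small
# nonzero levels that randomly nudge the thresholds of active neurons.
levels = [pr_bus_level(0) for _ in range(3)]
print(levels)
```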

The mechanization of the model is not particularly complex, and its component count can be estimated as follows: the one-shot pulse generator would require two transistors, the pulse stretcher one more. The bi-directional gate would require a transistor and at least two diodes.

Several candidates for the electrically-controllable variable resistor are available (6). Particularly good candidates appear to be the “Memistor,” or plating cell, developed by Widrow (7), the solid-state version of it by Vendelin (8), and the “solion” (9). All are electrochemical devices in which the resistance between two terminals is controlled by the net charge flow through a third terminal. All are adaptable to this particular circuit.

Of the three, however, the solion appears at first glance to have the most promise in that its resistance is of the order of a few thousand ohms (rather than the few ohms of the plating cells) which is more compatible with ordinary solid-state circuitry. Solions have the disadvantage that they can stand only very low voltages (less than 1 volt) and in their present form require extra bias potentials. If these difficulties can be overcome, they offer considerable promise.

In summary, it appears that a rather simple neuron model can be built which can mimic most of the important functions of real neurons. A system built of these could be punished or rewarded by an observer, so that it could be trained to give specified responses to specified stimuli. In some cases, the observer could be simply the environment, so that the system would learn directly from experience, and would be therefore a self-organizing system.

REFERENCES

1. Hodgkin, A. L., and Huxley, A. F., “A Quantitative Description of Membrane Current and its Application to Conduction and Excitation in Nerve,” J. Physiol. 117:500-544 (Aug. 1952)
2. Crane, H. D., “Neuristor—A Novel Device and System Concept,” Proc. IRE 50:2048-2060 (Oct. 1962)
3. Harmon, L. D., Levinson, J., and Van Bergeijk, W. A., “Analog Models of Neural Mechanism,” IRE Trans. on Information Theory IT-8:107-112 (Feb. 1962)
4. Widrow, B., and Hoff, M. E., “Adaptive Switching Circuits,” Stanford Electronics Lab Tech. Report 1553-1, June 1960
5. Stewart, R. M., “Electrochemical Wave Interactions and Extensive Field Effects in Excitable Cellular Structures,” First Pasadena Invitational Symposium on Self-Organizing Systems, Calif. Institute of Technology, Pasadena, Calif., 14 Nov. 1963
6. Nagy, G., “A Survey of Analog Memory Devices,” IEEE Trans. on Electronic Computers EC-12:388-393 (Aug. 1963)
7. Widrow, B., “An Adaptive Adaline Neuron Using Chemical Memistors,” Stanford Electronics Lab Tech. Report 1553-2, Oct. 1960
8. Vendelin, G. D., “A Solid State Adaptive Component,” Stanford Electronics Lab Tech. Report 1853-1, Jan. 1963
9. “Solion Principles of Electrochemistry and Low-Power Electrochemical Devices,” Dept. of Comm., Office of Tech. Serv. PB 131931 (U. S. Naval Ord. Lab., Silver Spring, Md., Aug. 1958)

Selection of Parameters for
Neural Net Simulations

R. K. Overton

Autonetics Research Center
Anaheim, California

Research of high quality has been presented at this Symposium. Of particular interest to me were the reports of the Aeronutronic group and the Librascope group. The Aeronutronic group was commendably systematic in its investigations of different arrangements of linear threshold elements, and the Librascope data, presenting the effects of attaching different values to the parameters of simulated neurons, are both systematic and interesting.

Unfortunately, however, interest in such research can obscure a more fundamental question which seems to merit study. That question concerns the parameters, or attributes, which describe the simulated neuron. Specifically, which parameters or attributes should be selected for simulation? (For example, should a period of supernormal sensitivity be simulated following an absolutely refractory period?)

Some selection obviously has to be made. Librascope, which is trying to simulate neurons more or less faithfully, plans to build a net of ten simulated neurons. In contrast, General Dynamics/Fort Worth, with roughly the same degree of effort, is working with 3900 unfaithfully-simulated neurons. This comparison is not a criticism of either group; the Librascope team has simply selected many more parameters for simulation than has the General Dynamics group. Each can make the selections it prefers, because the parameters of real neurons which are necessary and sufficient for learning have not been exhaustively identified.

From the point of view of one whose interests include real neurons, this lack of identification is unfortunate. I once wrote a book which included some guesses about the essential attributes of neurons. Since that time, many neuron simulation programs have been written. But these programs, although interesting and worthwhile in their own right, have done little to answer the question of the necessary parameters. That is, they do not make much better guesses possible. And yet better guesses would also make for more “intelligent” machines.