Neurobiological Principles and A.I.
Most A.I. models bear no genuinely deep connection to the biological neural networks of the human brain. In this article, we review some interesting work on neural networks that inherit working principles and various characteristics of their biological counterparts.
Despite our use of the term ‘neural networks’, many A.I. models have operational principles that are neither closely related to nor derived directly from features of biological neural networks, beyond superficial similarities. Yet there are notable examples of models that are inspired by them, and increasing interest in biomimetic A.I. models has led to the growth of fields such as neuromorphic computing.
Spiking neural networks are a class of models built on a prominent feature of neurobiology: neurons communicate with one another by exchanging electrical pulses called action potentials, which are spike-like bursts in the electrostatic potential across the neuron’s membrane. Typically, multiple neuronal inputs, which can be inhibitory or excitatory in nature, contribute to a single spiking output. In neuroscience, there are various models quantifying how these action potentials develop in time, from the more complicated Hodgkin-Huxley models to simpler Integrate-and-Fire models. A spiking neural network is a finite directed graph \( (V,E) \), where the vertices \( V \) are the neurons and the edges \( E \) are the synapses. It is often trained via a learning rule that contains some discretized version of the ordinary differential equation describing ‘Leaky Integrate-and-Fire’ models, as follows.
\[ C \frac{du}{dt} = -\frac{u}{R} + i_{ext} (t) + \sum_j w_j i_j (t) \]
where \( u \) is the membrane potential, \( i_j (t) \) are the pre-synaptic input currents weighted by \( w_j \), and \( i_{ext} (t) \) is the external current accounting for other background inputs to the neuron. For a great example of a code implementation, see e.g. this PyTorch implementation of spiking neural networks based on the wonderful work by Eshraghian et al.
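To make the discretization concrete, here is a minimal sketch (not the Eshraghian et al. implementation) of a single leaky integrate-and-fire update in PyTorch, obtained from a forward-Euler step of the equation above together with a hard threshold-and-reset rule; the time step, threshold, and parameter values are illustrative assumptions.

```python
import torch

# Illustrative LIF parameters (assumed values, chosen only for the sketch)
C, R = 1.0, 5.0     # membrane capacitance and resistance
dt = 0.1            # integration time step
u_thresh = 1.0      # firing threshold; membrane resets to 0 after a spike

def lif_step(u, i_ext, i_syn, w):
    """One forward-Euler step of C du/dt = -u/R + i_ext + sum_j w_j i_j.

    u     : membrane potentials, shape (batch, n_neurons)
    i_ext : external background current, same shape as u
    i_syn : pre-synaptic input currents, shape (batch, n_pre)
    w     : synaptic weights, shape (n_pre, n_neurons)
    """
    du = (-u / R + i_ext + i_syn @ w) * (dt / C)
    u = u + du
    spikes = (u >= u_thresh).float()   # emit a spike where the threshold is crossed
    u = u * (1.0 - spikes)             # hard reset of the neurons that spiked
    return u, spikes

# Toy usage: 4 input neurons driving 3 LIF neurons for 100 time steps
w = 0.5 * torch.rand(4, 3)
u = torch.zeros(1, 3)
for t in range(100):
    i_syn = (torch.rand(1, 4) < 0.3).float()   # random input spike pattern
    u, spikes = lif_step(u, i_ext=torch.zeros(1, 3), i_syn=i_syn, w=w)
```

In a full training setup the non-differentiable threshold is usually handled with a surrogate gradient, which is the approach taken in the work referenced above.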
The structural topology of a spiking neural network can be of the feedforward type (i.e. no feedback connections) or the recurrent type (i.e. with reciprocal/feedback connections). The feedforward type resembles the structural properties of neurobiological systems found closer to the brain periphery, which generally reflect low-level sensory processing such as vision. Recurrent networks, on the other hand, resemble biological circuits associated with the formation of memories.
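As a rough illustration of the two topologies, the sketch below (with hypothetical layer sizes, continuing the LIF update above) contrasts a purely feedforward input current with a recurrent one in which the spikes the layer emitted at the previous time step feed back into the same layer.

```python
import torch

n_in, n_hidden = 4, 3
w_in  = 0.5 * torch.rand(n_in, n_hidden)      # feedforward synapses (input -> layer)
w_rec = 0.2 * torch.rand(n_hidden, n_hidden)  # recurrent synapses (layer -> itself)

def feedforward_input(in_spikes):
    # Input current from the upstream layer only: no feedback edges in the graph
    return in_spikes @ w_in

def recurrent_input(in_spikes, prev_out_spikes):
    # Input current also includes the layer's own spikes from the previous step
    return in_spikes @ w_in + prev_out_spikes @ w_rec
```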
[Figure: left, an artificial neural network architecture paralleling the biological neuron’s synaptic connections sketched in the central diagram (cf. Fig. 2 of the review paper by Schmidgall et al.); right, the spiking neural network model designed by Zenke and Ganguli (see their paper ‘SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks’).]
Another major class of neural networks which does inherit some neurobiological operational principles is the convolutional neural network (CNN). In the seminal deep learning textbook written by Ian Goodfellow et al., the authors explain how CNNs share several features with the biological visual system, in particular with the primary visual cortex (V1). According to them, the parallels are roughly as follows. First, V1 is arranged in a two-dimensional spatial map mirroring the structure of the image on the retina, just as a CNN organizes its features in two-dimensional maps. Second, V1 contains simple cells whose responses are approximately linear functions of the image within a small, localized receptive field, which the detector (convolution) units of a CNN emulate. Third, V1 contains complex cells that respond to features while being invariant to small shifts in their position, a property that inspires the pooling units of a CNN.
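To make these parallels concrete, here is a minimal PyTorch sketch (layer sizes and kernel sizes are illustrative assumptions) in which the convolutional layer plays the role of simple cells with local receptive fields and the pooling layer plays the role of complex cells whose responses tolerate small shifts in feature position.

```python
import torch
import torch.nn as nn

# Minimal sketch: convolution ~ simple cells (local, roughly linear receptive
# fields replicated across the visual field); max pooling ~ complex cells
# (responses tolerant to small shifts in a feature's position).
v1_like = nn.Sequential(
    nn.Conv2d(in_channels=1, out_channels=8, kernel_size=5, padding=2),  # local receptive fields
    nn.ReLU(),                                                           # firing-rate-like nonlinearity
    nn.MaxPool2d(kernel_size=2),                                         # shift-tolerant pooling
)

image = torch.randn(1, 1, 28, 28)               # a single grayscale "retinal" image
shifted = torch.roll(image, shifts=1, dims=-1)  # the same image shifted by one pixel

out, out_shifted = v1_like(image), v1_like(shifted)
print(out.shape)                                 # torch.Size([1, 8, 14, 14])
print((out - out_shifted).abs().mean().item())   # pooling damps the effect of the 1-pixel shift
```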
Despite these similarities and parallels, there are also many essential differences between biological and artificial neural networks that continue to inspire new developments in our understanding of both fields. Two crucial differences are:
There is also growing interest in developing actual hardware that implements neurobiology-inspired artificial neural networks, bringing us closer to constructing more interesting artificial analogues of the human brain (or of aspects of it). Brain-inspired computing, or neuromorphic computing, is a field concerned with creating hardware that mimics the structure and functionality of the biological brain, e.g. Intel’s Loihi and IBM’s TrueNorth chips. See the nice review ‘Brain Inspired Computing: A systematic survey and future trends’ by Li et al. for a great discussion covering current trends and achievements in this domain.