Our mushy brains seem a far cry from the solid silicon chips in computer processors, but scientists have a long history of comparing the two. As Alan Turing put it in 1952: "We are not interested in the fact that the brain has the consistency of cold porridge." In other words, the medium does not matter, only the computational power.
Today, the most powerful artificial intelligence systems rely on a type of machine learning called deep learning. Their algorithms learn by processing huge amounts of data through hidden layers of interconnected nodes, known as deep neural networks. As the name implies, deep neural networks were inspired by the actual neural networks in the brain, with their nodes modeled after real neurons, or at least after what neuroscientists knew about neurons in the 1950s, when an influential neuron model called the perceptron was born. Since then, our understanding of the computational complexity of single neurons has expanded dramatically, and biological neurons are now known to be more complex than artificial ones. But by how much?
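To make the 1950s-era model concrete, here is a minimal sketch of a perceptron: it takes a weighted sum of its inputs and "fires" if the sum crosses a threshold. The weights and threshold below are illustrative, not taken from the paper.

```python
def perceptron(inputs, weights, threshold):
    """The classic perceptron: weighted sum of inputs, then a hard
    threshold deciding whether the 'neuron' fires (1) or not (0)."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Example: with these (illustrative) weights, the unit behaves like
# a logical AND gate over two binary inputs.
print(perceptron([1, 1], [0.6, 0.6], 1.0))  # fires: 1
print(perceptron([1, 0], [0.6, 0.6], 1.0))  # silent: 0
```

The entire "computation" is one sum and one comparison, which is exactly what makes the contrast with a real neuron so striking.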
To find out, David Beniaguev, Idan Segev and Michael London of the Hebrew University of Jerusalem trained an artificial deep neural network to mimic the computations of a simulated biological neuron. They showed that a deep neural network requires between five and eight layers of interconnected "neurons" to represent the complexity of a single biological neuron.
Even the authors did not anticipate such complexity. "I thought it would be simpler and smaller," said Beniaguev. He expected that three or four layers would be enough to capture the computations performed within the cell.
Timothy Lillicrap, who designs decision-making algorithms at the Google-owned AI company DeepMind, said the new findings suggest that the old tradition of loosely comparing a neuron in the brain to a neuron in the context of machine learning may need to be reconsidered. "This paper really helps force the issue of thinking about that more carefully, and grappling with how far you can push those analogies," he said.
The most basic analogy between artificial and real neurons involves how they handle incoming information. Both kinds of neurons receive incoming signals and, based on that information, decide whether to send their own signal to other neurons. While artificial neurons rely on a simple calculation to make this decision, decades of research have shown that the process is far more complicated in biological neurons. Computational neuroscientists use an input-output function to model the relationship between the inputs received by a biological neuron's long treelike branches, called dendrites, and the neuron's decision to send out a signal.
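The "simple calculation" an artificial neuron performs can be written in a couple of lines: a weighted sum of inputs passed through a fixed nonlinearity (a sigmoid in this sketch; the numbers are illustrative). A biological neuron's dendritic input-output function is far richer than this one-line mapping, which is precisely the gap the study measures.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """A modern artificial neuron: weighted sum plus bias, passed
    through a sigmoid activation. The output is a value in (0, 1)
    that downstream neurons receive as input."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative call with made-up inputs and weights.
out = artificial_neuron([0.5, -1.0, 2.0], [0.8, 0.1, 0.4], 0.0)
```

Unlike the hard-threshold perceptron, the smooth activation makes the neuron's decision differentiable, which is what lets deep networks learn by gradient descent.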
This input-output function is what the authors of the new work taught an artificial deep neural network to imitate in order to determine its complexity. They started by creating a massive simulation of the input-output function of a type of neuron with distinct trees of dendritic branches at its top and bottom, known as a pyramidal neuron, from a rat's cortex. They then fed the simulation into a deep neural network with 256 artificial neurons in each layer. They kept increasing the number of layers until the network achieved 99 percent accuracy at the millisecond level in predicting the simulated neuron's output from its input. The deep neural network successfully predicted the behavior of the neuron's input-output function with at least five, but no more than eight, artificial layers. In most of the networks, that equated to about 1,000 artificial neurons for just one biological neuron.
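The search procedure described above can be sketched as a loop over network depths: train a network of each depth (256 units per hidden layer) on the simulated neuron's data, and stop at the first depth reaching the 99 percent target. This is a hedged structural sketch, not the authors' pipeline; `train_and_evaluate` is a hypothetical stand-in for their actual training and evaluation code.

```python
def find_minimal_depth(train_and_evaluate, max_depth=10,
                       units_per_layer=256, target_accuracy=0.99):
    """Sweep over network depths and return the first depth whose
    trained network reaches the target accuracy, or None if none do.
    `train_and_evaluate(depth, units)` is assumed to train a network
    and return its accuracy on held-out data."""
    for depth in range(1, max_depth + 1):
        accuracy = train_and_evaluate(depth, units_per_layer)
        if accuracy >= target_accuracy:
            return depth
    return None  # no tested depth reached the target

# Toy stand-in: pretend accuracy improves with depth and crosses the
# threshold at depth 5, mirroring the reported five-layer result.
mock_accuracy = lambda depth, units: min(0.99, 0.80 + 0.04 * depth)
print(find_minimal_depth(mock_accuracy))  # 5
```

The stand-in accuracy curve is invented purely to exercise the loop; in the actual study each candidate network was trained on the simulated neuron's millisecond-level input-output data.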
"[The result] builds a bridge from biological neurons to artificial neurons," said Andreas Tolias, a computational neuroscientist at Baylor College of Medicine.
But the study's authors caution that this is not yet a straightforward correspondence. "The relationship between how many layers you have in a neural network and the complexity of the network is not obvious," said London. So we cannot really say how much more complexity is gained by going from, say, four layers to five. Nor can we say that the need for 1,000 artificial neurons means a biological neuron is exactly 1,000 times more complex. Ultimately, it is possible that using exponentially more artificial neurons within each layer would eventually yield a deep neural network with a single layer, but it would likely require much more data and time for the algorithm to learn.