How Computationally Complex Is a Single Neuron?


Our mushy brains seem a far cry from the solid silicon chips in computer processors, but scientists have a long history of comparing the two. As Alan Turing put it in 1952: “We are not interested in the fact that the brain has the consistency of cold porridge.” In other words, the medium doesn’t matter; what counts is the capacity for computation.

Today’s most powerful artificial intelligence systems employ a type of machine learning called deep learning. Their algorithms learn by processing massive amounts of data through hidden layers of interconnected nodes, known as deep neural networks. As their name suggests, deep neural networks were inspired by the real neural networks in the brain, with nodes modeled after real neurons, or at least after what neuroscientists knew about neurons back in the 1950s, when an influential neuron model called the perceptron was born. Since then, our understanding of the computational complexity of single neurons has dramatically expanded, and biological neurons are known to be more complex than artificial ones. But by how much?

To find out, David Beniaguev, Idan Segev and Michael London, all at the Hebrew University of Jerusalem, trained an artificial deep neural network to mimic the computations of a simulated biological neuron. They showed that a deep neural network requires between five and eight layers of interconnected “neurons” to represent the complexity of a single biological neuron.

Even the authors did not anticipate such complexity. “I thought it would be simpler and smaller,” said Beniaguev. He had expected that three or four layers would be enough to capture the computations performed within the cell.

Timothy Lillicrap, who designs decision-making algorithms at Google-owned AI company DeepMind, said the new result suggests it may be necessary to rethink the old tradition of loosely comparing a neuron in the brain to a neuron in the context of machine learning. “This paper really helps force the issue of thinking about that more carefully and grappling with to what extent you can make those analogies,” he said.

The most basic analogy between artificial and real neurons involves how they handle incoming information. Both kinds of neurons receive incoming signals and, based on that information, decide whether to send their own signal to other neurons. While artificial neurons rely on a simple calculation to make this decision, decades of research have shown that the process is far more complicated in biological neurons. Computational neuroscientists use an input-output function to model the relationship between the inputs received by a biological neuron’s long treelike branches, called dendrites, and the neuron’s decision to send out a signal.
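To make the contrast concrete, the “simple calculation” an artificial neuron performs can be written in a few lines. The sketch below is a hypothetical Python illustration, not code from the study: the weights, bias and ReLU-style activation are arbitrary choices for demonstration. It sums weighted inputs and produces an output only if the total crosses a threshold, whereas a biological neuron’s input-output function depends on voltage dynamics spread across an entire dendritic tree and has no such compact form.

```python
import numpy as np

def artificial_neuron(signals, weights, bias):
    # The simple calculation: a weighted sum of incoming signals plus a bias,
    # passed through a threshold-like nonlinearity (here a ReLU).
    total = np.dot(weights, signals) + bias
    return max(0.0, total)  # nonzero output only if the weighted sum exceeds the threshold

# Illustrative values only: three incoming signals and arbitrary weights.
signals = np.array([0.2, -1.0, 0.5])
weights = np.array([0.8, 0.1, 1.3])
print(artificial_neuron(signals, weights, bias=-0.1))
```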

This input-output function is what the authors of the new work taught an artificial deep neural network to imitate in order to determine its complexity. They started by creating a massive simulation of the input-output function of a type of neuron with distinct trees of dendritic branches at its top and bottom, known as a pyramidal neuron, from a rat’s cortex. They then fed the simulation into a deep neural network with up to 256 artificial neurons in each layer, and kept increasing the number of layers until the network could predict the simulated neuron’s output from its inputs with 99% accuracy at the millisecond level. The deep neural network successfully reproduced the neuron’s input-output function with at least five, but no more than eight, artificial layers. In most of the networks, that equated to about 1,000 artificial neurons for just one biological neuron.
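A minimal sketch of that depth search might look like the code below. This is not the authors’ model or data; it is a simplified Python/PyTorch illustration in which random placeholder arrays stand in for the simulated pyramidal neuron’s millisecond-by-millisecond input-output record, and fully connected networks with 256 units per hidden layer are trained at increasing depth until one crosses the 99% accuracy threshold.

```python
import torch
import torch.nn as nn

def make_network(n_layers, n_inputs, width=256):
    # Fully connected stack with `width` artificial neurons per hidden layer.
    layers, size = [], n_inputs
    for _ in range(n_layers):
        layers += [nn.Linear(size, width), nn.ReLU()]
        size = width
    layers.append(nn.Linear(size, 1))  # one output: does this millisecond contain a spike?
    return nn.Sequential(*layers)

def fit_and_score(model, x, y, epochs=200):
    # Train on (input, output) pairs and report how often the predicted
    # spike / no-spike label matches the target.
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x).squeeze(-1), y)
        loss.backward()
        opt.step()
    predictions = (model(x).squeeze(-1) > 0).float()
    return (predictions == y).float().mean().item()

# Placeholder data standing in for the simulated neuron's input-output record:
# each row is one millisecond of synaptic input, each label is spike / no spike.
x = torch.randn(2048, 128)
y = (torch.rand(2048) > 0.5).float()

# Keep adding hidden layers until the fit crosses the 99% mark.
for depth in range(1, 9):
    accuracy = fit_and_score(make_network(depth, n_inputs=128), x, y)
    print(f"{depth} hidden layers: accuracy {accuracy:.3f}")
    if accuracy >= 0.99:
        break
```

In practice the accuracy would be measured on held-out data from the simulation, and the inputs would encode the recent history of synaptic activity rather than a single feature vector; the loop above only illustrates the idea of searching for the shallowest network that matches the neuron.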

Neuroscientists now know that the computational complexity of a single neuron, like the pyramidal neuron at left, relies on its treelike dendritic branches, which are bombarded with incoming signals. These signals produce local voltage changes, represented by the neuron’s changing colors (red means high voltage, blue means low voltage), before the neuron decides whether to send its own signal, called a “spike.” This one spikes three times, as shown by the traces of individual branches at right, where the colors represent the locations of the dendrites from top (red) to bottom (blue).

Video: David Beniaguev

“[The result] forms a bridge from biological neurons to artificial neurons,” said Andreas Tolias, a computational neuroscientist at Baylor College of Medicine.

But the study’s authors caution that it is not a straightforward correspondence yet. “The relationship between how many layers you have in a neural network and the complexity of the network is not obvious,” said London. So we cannot really say how much complexity is gained by moving from, say, four layers to five. Nor can we say that the need for 1,000 artificial neurons means a biological neuron is exactly 1,000 times as complex. Ultimately, it is possible that using vastly more artificial neurons within each layer would eventually yield a much shallower network that matches the neuron, but it would likely require far more data and time for the algorithm to learn.


