What is neural communication?

Over the past couple of years, artificial intelligence has quietly shed its “science fiction” and “game design” labels and become a fixture of daily news feeds. Entities under the mysterious name “neural networks” identify people from photographs, drive cars, play poker and make scientific discoveries. At the same time, it is not always clear from the news what these mysterious neural networks are: complex programs, special computers, or racks with orderly rows of servers?

Of course, you can already guess from the name that in neural networks the developers tried to copy the structure of the human brain: as you know, it consists of many simple neuron cells that exchange electrical signals with each other. But how then do neural networks differ from a regular computer, which is also assembled from primitive electrical parts? And why didn’t they think of the modern approach half a century ago?

Let's try to figure out what lies behind the word "neural networks", where they came from - and whether it is true that computers are gradually gaining intelligence right before our eyes.

The idea of a neural network is to assemble a complex structure from very simple elements. A single small patch of the brain can hardly be called intelligent, yet people somehow do surprisingly well on IQ tests. Nevertheless, until recently the idea of creating a mind “out of nothing” was usually ridiculed: the joke about a thousand monkeys with typewriters is already a hundred years old, and if you like, criticism of neural networks can even be found in Cicero, who sarcastically suggested tossing tokens with letters into the air until you’re blue in the face, so that sooner or later they fall into a meaningful text. However, in the 21st century it turned out that the classics were being sarcastic in vain: it is precisely an army of monkeys with tokens that, with due persistence, can take over the world.

Beauty begins when there are many neurons

In fact, a neural network can be assembled even from matchboxes: it is just a set of simple rules by which information is processed. An “artificial neuron”, or perceptron, is not a special device, but just a few arithmetic operations.

The way the perceptron works couldn’t be simpler: it receives several initial numbers, multiplies each by its “value”, or weight (more on that below), adds the results up and, depending on the sum, outputs 1 or –1. For example, we photograph a clear field and show our neuron some point in this picture - that is, we send it random coordinates as two signals. And then we ask: “Dear neuron, is this heaven or earth?” “Minus one,” answers the dummy, serenely looking at a cumulus cloud. “It’s clear that it’s earth.”
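In code the whole “neuron” fits in a few lines. A minimal sketch (the weights here are arbitrary illustration values, not anything from the article):

    def perceptron(inputs, weights):
        """A bare-bones artificial neuron: weighted sum, then a yes/no verdict."""
        total = sum(x * w for x, w in zip(inputs, weights))
        return 1 if total >= 0 else -1

    # Ask the untrained neuron about a point (x, y) of the photo.
    print(perceptron([120, 35], [0.3, -0.1]))   # prints 1, but the answer means nothing yet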

“Pointing a finger at the sky” is the perceptron’s main occupation. You can’t expect any accuracy from it: you might just as well flip a coin. The magic begins at the next stage, which is called machine learning. We know the correct answer, which means we can write it into the program. So for every incorrect guess the perceptron literally receives a penalty, and for a correct guess a bonus: the “values” of the incoming signals are increased or decreased. After this, the program is run with the new formula. Sooner or later the neuron will inevitably “understand” that in the photograph the earth is below and the sky is above - that is, it will simply start ignoring the signal from the channel that feeds it the x-coordinates. If you slip another photo to such a seasoned robot, it may not find the horizon line, but it certainly won’t confuse top with bottom.
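A sketch of that “penalty and bonus” training. All the specifics below - the toy photo, the learning rate, the number of steps - are made-up illustration values:

    import random

    def make_photo(n=500):
        """Toy 'photo': each point is (x, y, label), +1 for sky, -1 for ground."""
        points = []
        for _ in range(n):
            x = random.random()
            y = random.uniform(0.6, 1.0) if random.random() < 0.5 else random.uniform(0.0, 0.4)
            points.append((x, y, 1 if y > 0.5 else -1))
        return points

    def train(points, steps=5000, lr=0.1):
        """Classic perceptron rule: the weights are corrected only after a wrong guess."""
        w = [0.0, 0.0]            # the "values" of the x- and y-channels
        b = 0.0                   # threshold shift
        for _ in range(steps):
            x, y, label = random.choice(points)
            guess = 1 if w[0] * x + w[1] * y + b >= 0 else -1
            if guess != label:    # penalty: shift each weight toward the right answer
                w[0] += lr * label * x
                w[1] += lr * label * y
                b += lr * label
        return w, b

    photo = make_photo()
    w, b = train(photo)
    hits = sum((1 if w[0] * x + w[1] * y + b >= 0 else -1) == label for x, y, label in photo)
    print(hits / len(photo))      # close to 1.0: top and bottom are no longer confused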

To draw a straight line, the neuron crosses the entire sheet

In real work the formulas are a little more complicated, but the principle remains the same. The perceptron can do only one thing: take numbers and sort them into two piles. The most interesting part begins when there are several such elements, because the incoming numbers can be signals from other “building blocks”! Say, one neuron tries to distinguish blue pixels from green ones, the second keeps tinkering with the coordinates, and the third judges which of those two results is closer to the truth. If you set several neurons on the blue pixels at once and sum up their results, you get a whole layer in which the “best students” receive additional bonuses. In this way a sufficiently extensive network can shovel through a whole mountain of data and take all of its errors into account.
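In code, “neurons feeding neurons” is nothing more mysterious than one function calling others. Another bare sketch with made-up weights:

    def neuron(inputs, weights, bias=0.0):
        """One building block: weighted sum, then a +1 / -1 verdict."""
        s = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1 if s >= 0 else -1

    def two_layer_network(pixel):
        # pixel = (x, y, blue, green) - a toy description of one point of the picture
        color_vote = neuron(pixel[2:], [1.0, -1.0])           # "more blue than green?"
        position_vote = neuron(pixel[:2], [0.0, 1.0], -0.5)   # "in the upper half?"
        # the second layer weighs the opinions of the first two neurons
        return neuron([color_vote, position_vote], [0.7, 0.3])

    print(two_layer_network((0.2, 0.8, 0.9, 0.1)))   # -> 1: both "experts" vote for sky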

Perceptrons are not much more complex than any other computer element that exchanges ones and zeros. It is not surprising that the first device based on the principle of a neural network, the Mark I Perceptron, appeared already in 1958, just a decade after the first computers. As was customary in that era, the neurons of this bulky device consisted not of lines of code, but of radio tubes and resistors. Scientist Frank Rosenblatt was able to build only two layers of the neural network, and the signals to the Mark-1 were sent from an improvised screen measuring as much as 400 dots. The device quickly learned to recognize simple geometric shapes - which means that sooner or later such a computer could be trained, for example, to read letters.

Rosenblatt and his perceptron

Rosenblatt was an ardent enthusiast of his work: he was well versed in neurophysiology and taught a popular course of lectures at Cornell University, in which he explained in detail to everyone how to use technology to reproduce the principles of brain function. The scientist hoped that within a few years, perceptrons would turn into full-fledged intelligent robots: they would be able to walk, talk, create their own kind, and even colonize other planets. Rosenblatt’s enthusiasm is understandable: back then scientists still believed that to create AI it was enough to reproduce a complete set of mathematical logic operations on a computer. Turing had already proposed his famous test, Isaac Asimov called for thinking about the need for the laws of robotics, and the exploration of the Universe seemed to be a matter of the near future.

However, among the pioneers of cybernetics there were also incorrigible skeptics, the most formidable of whom was Rosenblatt’s former classmate, Marvin Minsky. This scientist had an equally loud reputation: the same Asimov spoke of him with constant respect, and Stanley Kubrick invited him as a consultant to the filming of “2001: A Space Odyssey.” Even from Kubrick’s work it is clear that in fact Minsky had nothing against neural networks: HAL 9000 consists precisely of individual logical nodes that work in conjunction with each other. Minsky himself became interested in machine learning back in the 1950s. It’s just that Marvin was uncompromising about scientific errors and groundless hopes: it was not for nothing that Douglas Adams named his pessimistic android in his honor.

Unlike Rosenblatt, Minsky lived to see the triumph of AI

Minsky summed up the skeptics’ doubts in the book “Perceptrons” (1969), which for a long time dampened the scientific community’s interest in neural networks. Minsky mathematically proved that Mark-1 had two serious flaws. Firstly, a network with only two layers could do almost nothing - and it was already a huge cabinet consuming a lot of electricity. Secondly, Rosenblatt’s algorithms were not suitable for multilayer networks: by his formula, some of the information about the network’s errors could be lost without ever reaching the layer that needed it.

Minsky did not intend to savage his colleague: he simply gave an honest account of the project’s strengths and weaknesses, while continuing his own research. Alas, Rosenblatt died in 1971 - there was no one left to fix the perceptron’s shortcomings. “Ordinary” computers developed by leaps and bounds in the 1970s, so after Minsky’s book researchers simply gave up on artificial neurons and moved on to more promising areas.

The development of neural networks stopped for more than ten years - these years are now called the “winter of artificial intelligence.” By the beginning of the cyberpunk era, mathematicians had finally come up with better formulas for calculating errors, but the scientific community initially did not pay attention to these studies. Only in 1986, when the third group of scientists in a row independently solved the problem of training multilayer networks discovered by Minsky, work on artificial intelligence finally began to boil with renewed vigor.

Although the operating rules remained the same, the label changed: it was no longer about “perceptrons” but about “cognitive computing”. Nobody built experimental instruments anymore: it was now easier to write all the necessary formulas as simple code on a regular computer and loop the program. In just a couple of years, neurons learned to assemble into complex structures. For example, some layers looked for specific geometric shapes in an image, while others summarized the resulting data. This is how computers were taught to read human handwriting. Soon even self-learning networks began to appear that did not receive the “right answers” from people but found them on their own. Neural networks immediately went into practical use: a program that recognized the numbers on checks was gladly adopted by American banks.

1993: captcha is already obsolete

By the mid-1990s, researchers agreed that the most useful property of neural networks is their ability to independently come up with the right solutions. The trial and error method allows the program to develop its own rules of behavior. It was then that competitions between homemade robots, programmed and trained by enthusiastic designers, began to become fashionable. And in 1997, the supercomputer Deep Blue shocked chess fans by beating world champion Garry Kasparov.

Strictly speaking, Deep Blue did not learn from its mistakes, but simply tried millions of combinations

Alas, around the same years, neural networks hit the ceiling of their capabilities. Other areas of programming did not stand still - it soon turned out that conventional well-thought-out and optimized algorithms could handle the same tasks much more easily. Automatic text recognition made life much easier for archive workers and Internet pirates; robots continued to get smarter, but talk about artificial intelligence slowly died out. For truly complex problems, neural networks still lacked computing power.

The second AI “thaw” happened only when the programming philosophy itself changed.

In the last decade, programmers - and ordinary users - have often complained that no one pays attention to optimization anymore. Previously, code was trimmed as much as possible, just so the program would run faster and take up less memory. Now even the simplest website tries to grab all the memory for itself and weighs itself down with “libraries” for pretty animations.

Of course, this is a serious problem for ordinary programs, but this is precisely the kind of abundance that neural networks lacked! Scientists have long known that if you do not save resources, the most complex problems begin to be solved as if by themselves. After all, this is exactly how all the laws of nature operate, from quantum physics to evolution: if you repeat countless random events over and over again, selecting the most stable options, then a harmonious and orderly system will be born from chaos. Now humanity finally has a tool in its hands that allows us not to wait for billions of years for changes, but to train complex systems literally on the go.

In recent years, no revolution in programming has happened - computers have simply accumulated so much computing power that now any laptop can take a hundred neurons and run each of them through a million training cycles. It turned out that a thousand monkeys with typewriters just need a very patient overseer who will give them bananas for correctly typed letters - then the animals will not only copy War and Peace, but also write a couple of new novels just as good.

And so the third coming of perceptrons happened - this time under the familiar names of “neural networks” and “deep learning”. It is not surprising that news about AI successes is most often shared by large corporations such as Google and IBM. Their main resource is huge data centers, where multilayer neural networks can be trained on powerful servers. The era of machine learning has truly begun right now, because the Internet and social networks have finally accumulated the same big data, that is, gigantic amounts of information that are fed to neural networks for training.

As a result, modern networks are engaged in those labor-intensive tasks that people simply would not have enough time to do in their lifetime. For example, to find new drugs, scientists have until now had to spend a long time calculating which chemical compounds are worth testing. And now there is a neural network that simply goes through all possible combinations of substances and suggests the most promising areas of research. The IBM Watson computer successfully helps doctors in diagnosis: having learned from medical histories, it easily finds non-obvious patterns in the data of new patients.

People classify information using tables, but neural networks have no reason to limit themselves to two dimensions - so data sets look something like this

Computers have made as much progress in entertainment as in science. Thanks to machine learning, games for which winning algorithms are even harder to devise than for chess have finally yielded to them. Recently the AlphaGo neural network defeated one of the world’s best Go players, and the Libratus program won a professional poker tournament. Moreover, AI is already gradually making its way into cinema: for example, the creators of the series “House of Cards” used big data during casting to select the most popular cast.

Just like half a century ago, the most promising area remains pattern recognition. Handwritten text or “captcha” is no longer a problem - now networks successfully distinguish people from photographs, learn to identify facial expressions, and draw cats and surreal paintings themselves. Nowadays, the main practical benefit from these entertainments comes from the developers of self-driving cars - after all, in order to assess the situation on the road, the car needs to very quickly and accurately recognize the surrounding objects. Intelligence agencies and marketers are not far behind: using a regular video surveillance recording, a neural network has long been able to find a person on social networks. Therefore, those who are especially distrustful get themselves special camouflage glasses that can deceive the program.

“You’re just a machine. Just an imitation of life. Can a robot compose a symphony? Will a robot turn a piece of canvas into a masterpiece of art?” (I, Robot)

Finally, Rosenblatt’s prediction about self-copying robots is beginning to come true: the DeepCoder neural network was recently taught programming. In fact, the program so far simply borrows pieces of someone else’s code, and can only write the most primitive functions. But didn’t the history of the networks themselves begin with the simplest formula?

Games with bots

It’s a lot of fun to play with half-trained neural networks: they sometimes make mistakes you couldn’t dream up. And once the AI starts to learn, real excitement appears: “Can it actually do it?” That is why Internet games with neural networks are now gaining popularity. One of the first to become famous was the Internet genie Akinator, which could guess any character with just a few leading questions. Strictly speaking, it is not exactly a neural network but a simpler algorithm, yet over time it became more and more clever. The genie grew its database thanks to the users themselves - and as a result it was even taught Internet memes.

Another guessing game is offered by an AI from Google: you have twenty seconds to scribble a picture for a given word, and the neural network then tries to guess what it was. The program misses in very funny ways, but sometimes a couple of lines are enough for the correct answer - which is pretty much how we recognize objects ourselves.

And, of course, the Internet can’t do without cats. The programmers took a perfectly serious neural network that can design building facades or guess the colors in black-and-white photographs, and trained it on cats so that it tries to turn any outline into a full-fledged cat photo. Since the AI attempts this even with a square, the result is sometimes worthy of Lovecraft’s pen!

With such an abundance of amazing news, it may seem that artificial intelligence is about to become aware of itself and will be able to solve any problem. In fact, everything is not so rosy - or, if you take the side of humanity, not so gloomy. Despite the success of neural networks, they have accumulated so many problems that another “winter” may well be ahead of us.

The main weakness of neural networks is that each of them is tailored for a specific task. If you train a network on photographs of cats and then give it the task “distinguish the sky from the earth,” the program will not cope even if it has at least a billion neurons. In order for truly “smart” computers to appear, it is necessary to come up with a new algorithm that no longer unites neurons, but entire networks, each of which deals with a specific task. But even then, computers will be a long way from the human brain.

Currently the largest network belongs to Digital Reasoning (although new records appear almost every month) - their creation contains 160 billion elements. For comparison, one cubic millimeter of mouse brain holds about a billion connections. Moreover, biologists have so far managed to describe at most an area a couple of hundred micrometers across, in which about ten thousand connections were found. Let alone the human brain!

One layer can recognize people, another can recognize tables, a third can recognize knives...

It’s fashionable to use these 3D models to illustrate news about neural networks, but this is just a tiny part of the mouse brain

In addition, researchers advise being careful about loud statements from Google and IBM. There have been no fundamental breakthroughs in “cognitive computing” since the 1980s: computers still mechanically process incoming data and produce results. A neural network can find a pattern that a person would not notice, but this pattern may turn out to be random. The machine can count how many times the Oscars are mentioned on Twitter, but it won’t be able to determine whether users are happy with the results or snarky about the academy’s choice.

Artificial intelligence theorists insist that one of the main problems - understanding human language - cannot be solved by simply searching for keywords. And this is the approach that is still used by even the most advanced neural networks.

Comparison of the brain with a neural network


One often comes across criticism that the biological brain, or biological neural networks, work completely differently from the computer neural networks that are popular today. Such comments come from a range of specialists - biologists and neurophysiologists as well as computer-science and machine-learning experts - but they rarely contain anything specific or constructive. In this article we will try to analyze this problem, identify particular differences between the operation of biological and computer neural networks, and propose ways to improve computer neural networks that would bring their work closer to the biological analogue.

Frontier of knowledge

First, I want to explain why, in my opinion, things are still so bleak when it comes to creating strong artificial intelligence, despite the tremendous advances in computer science and in our knowledge of the biological brain.
First of all, this is due to the large ideological gap between these two pillars of science. Computer science demands a certain schematic simplicity, rigor and conciseness in describing systems - a systematic approach that discards the unnecessary and structures things clearly enough to be written down as program code. In biology, detail dominates the description of observed systems; nothing can be discarded or ignored, and the systems described must accommodate every observable fact. It is therefore hard for biologists to apply a systematic approach to their vast knowledge in order to derive brain algorithms: to design the airplane, after all, a great deal had to be revised and the image of the bird had to be discarded. On the other hand, it is easy to understand scientists and engineers who, diving into computer neural networks, are content with a short paragraph about how a neuron “listens” to other neurons through the synapses on its dendrites and passes the result of its summation further along a single axon, without subjecting this knowledge to any critical assessment. Even neuroscientists use the formal McCulloch-Pitts neuron when describing the principles of a biological neuron’s operation - though for a different reason: there are no worthy alternatives, and biology offers no clear description of what a neuron does or what logic it implements, despite the extensive knowledge accumulated about it.

Anyone who tries to reverse-engineer the workings of the brain runs into a whole layer of accumulated, contradictory knowledge that even a biologist could not digest in a lifetime, let alone a systems engineer accustomed to a very different form of knowledge. Working with such a volume of information is possible only through the prism of some general theory of how the brain works - which does not yet exist.

Humanity has technologies of colossal computing power and a gigantic amount of knowledge about the brain, but cannot obtain a synthesis of these things. Let's try to solve this problem and erase this boundary of knowledge.

The brain should be simple

The first very important principle to follow is the idea that the brain must work according to some very simple rules: all cognitive processes, however complex they may seem, rest on simple basic principles. This differs from what we are used to hearing about the brain. The long absence of a general theory of how the brain works has given rise to much speculation that the brain is some incomprehensibly complex object, or that the nature of its work lies far beyond the scientific methods applied to it. For example, the brain is compared to a quantum computer, or individual neurons are unfairly credited with the properties of complex computers, which, multiplied by their number in the nervous system, makes the computing power required for brain modeling unattainable. In my opinion, scientists who declare that Humanity will never comprehend the complexity of the human brain should be stripped of their degrees; such statements only undermine the morale of people who want to devote themselves to solving this problem.

So what evidence supports the simplicity of the brain? Here I will give a rather paradoxical example. If we take a snail and attach electrodes to one neuron of its large ganglion, in accordance with all the requirements for such experiments, we can record the activity of an individual neuron - and if we try to analyze it, we find that its activity looks very complex. Even allowing for the invasive nature of the experiment, and the fact that our electrodes seriously injure the snail and restrict its normal behaviour, the neuron’s activity still looks very complex: we see both spontaneous activity and changes in the number and frequency of spikes over time. Many scientists have long struggled to explain this complex behaviour and to find any pattern in it.

These facts seem to make the neuron a kind of complex computer running a complex algorithm. Considering that the nervous system of a snail contains about 20 thousand such neurons, one could say that its computing power is comparable to a mainframe - which ought to leave you in awe of these animals. But look at how complex snail behaviour actually is. A snail is a kind of biological automaton; it has a certain degree of variability in its behaviour, but it is very small - a set of unconditioned reflexes, often very simple ones, which can be explained by existing knowledge about neurons, synapses and reflex acts and leaves no room for complex calculations.

In support of the above, I would like to refer to my previous article, which describes a model of a frog tadpole in which a nervous system of just a few dozen neurons produces the rather complex behaviour of an aquatic animal - and out of very simple neurons, whose model is based on well-established scientific facts (the OpenTadpole project).

Article "OPENTadpole: the first cybernetic animal"

So where does this complex neuron behaviour come from, and why are there so many neurons? Here, in fact, one follows from the other. There is a paradoxical phenomenon in nature that can be called the neuron efficiency paradox: as the nervous system grows and becomes more complex, the effectiveness, or role, of an individual neuron within it decreases.

If we analyze the nervous system of the nematode C. elegans, an animal whose connectome of 302 neurons has been completely mapped, we see that not only individual neurons but even individual synapses matter for the proper functioning of its nervous system. That is, we can assign an individual C. elegans neuron 100% efficiency. If we look at the human nervous system from this point of view, it is hard to assign significant efficiency values to neurons that can be taken out of the skull with a crowbar while preserving a person’s vital activity and even his social integration - well, almost preserving it.*

*a reference to the very famous case of Phineas Gage (see Wikipedia)
Articles regularly appear describing people who live full, socially adapted lives and then suddenly discover that their brain is missing entire areas or lobes. It is not surprising that such facts give rise to the idea that the problem lies not in the neurons, and indeed not in the brain at all. Yet if we observe the activity of a healthy brain, we will not find any spare neurons: every neuron is involved - to varying degrees, of course - and every neuron is assigned its own role. How the brain manages this, and what the neuron’s algorithm must be for it to happen despite the low efficiency of individual neurons, I will explain below.

The neuron efficiency paradox can be explained by the fact that as the number of neurons in a nervous system grows, the “attention” of evolutionary processes to individual neurons decreases. That is why the neurons of a nematode work, one might say, like clockwork, very precisely, while the neurons of a snail or a human cannot boast such accuracy: in their work we see spontaneous activity, the absence of a response where one should be, and general instability.

So, two theories can account for the complex activity of a neuron: either the neuron is a complex computer whose algorithm is hard to understand and justify, or the neuron simply works very unstably and this is compensated by sheer numbers - the simplest solution from the point of view of evolution. Applying Occam’s razor, we should keep the theory with the simplest explanation, and it is most likely the correct one.

On the one hand, the neuron efficiency paradox gives us hope that the computing power needed to model the brain is far smaller than a head-on estimate from the number of neurons and synapses in the human brain would suggest. On the other hand, it greatly complicates the study of the biological brain. We can build a fairly detailed model of a small fragment of the cerebral cortex at great computational cost and still see no significant processes in it that would show how cognitive mechanisms arise in the nervous system. Such attempts have already been made.

At first glance, the simplest and most straightforward approach to creating a general theory of how the brain works is to create a detailed model of the brain, in accordance with the many scientific facts known about the neuron and synapses. Modeling is the most practical scientific tool in the study of any complex systems. The model literally reveals the essence of the object being studied, allows you to immerse yourself and influence the internal processes occurring in the modeled system, making it possible to understand them more deeply.

A neuron has no special organelles that perform computation, but its membrane has a number of features that allow the neuron to do certain work. This work can be described by a system of equations known as the Hodgkin-Huxley model, developed in 1952, for which its authors received a Nobel Prize.

These formulas contain several coefficients that determine some parameters of the neuron membrane, such as the reaction rate of ion channels, their conductivity, etc. This magical model describes several phenomena at once, in addition to changes in the charge on the surface of the neuron membrane. Firstly, it describes the activation function of the neuron, or the summation mechanism; it is quite simple. If the initial charge is insufficient, then the model remains in an equilibrium state. If the charge passes through a certain threshold, then the model responds with one spike. If the charge significantly exceeds this threshold, then the model responds with a series of spikes. Computer neural networks use a wide variety of activation function options, the closest to biology may be the Heaviside function (unit step) and the linear rectifier (Rectifier). But you need to understand that we are describing a fairly simple aspect of the neuron’s operation – summation. In my work on the tadpole, mentioned above, I used a very simple version of the summation model, which can be figuratively represented as a vessel accumulating a factor of incentive influence; if this factor exceeded a certain threshold, then the neuron was activated. In order for this adder to work in real time, the impact factor slowly flowed out of the figurative vessel.
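The “vessel with a slow leak” adder from that tadpole model can be sketched in a few lines; the threshold, leak rate and input sizes below are arbitrary illustration values:

    class LeakyNeuron:
        """A 'vessel' adder: inputs pour in asynchronously, the level slowly drains away,
        and the neuron fires whenever the level crosses a threshold."""
        def __init__(self, threshold=1.0, leak=0.1):
            self.level = 0.0
            self.threshold = threshold
            self.leak = leak

        def step(self, incoming=0.0):
            self.level = max(0.0, self.level - self.leak) + incoming
            if self.level >= self.threshold:
                self.level = 0.0      # reset after a spike
                return True           # spike
            return False

    n = LeakyNeuron()
    arrivals = [0.4, 0.4, 0.5, 0.0, 0.3, 0.0]          # asynchronous incoming signals
    print([n.step(x) for x in arrivals])
    # -> [False, False, True, False, False, False]: only closely spaced inputs add up to a spike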

This summation model made it possible to sum up signals that arrived at the neuron asynchronously, and it works quite realistically. In my opinion, the simpler it is to describe this process, the better, and this is not a fundamental difference between biological and computer networks. Secondly, the Hodgkin-Huxley model describes the change in charge at one point of the membrane, but if we, for example, create a topologically accurate 3D model of a neuron and divide this model into a uniform mesh, we can apply the Hodgkin-Huxley model at each vertex (node) of this mesh, with the condition that the charge influences the value at neighboring vertices along the mesh. Thus, we will obtain a model of the propagation of excitation throughout the neuron close to how it occurs in a living neuron.
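The idea of “one equation per mesh node, with neighbours pulling on each other” can be shown with a deliberately crude stand-in for Hodgkin-Huxley: plain diffusion of charge along a one-dimensional chain of membrane compartments. The coupling and leak constants are invented for the illustration:

    def propagate(v, coupling=0.4, leak=0.01, steps=60):
        """v[i] is the charge at compartment i of a 1-D 'axon'; at every step each
        compartment drifts toward its neighbours and leaks a little through the membrane."""
        for _ in range(steps):
            new = []
            for i, vi in enumerate(v):
                left = v[i - 1] if i > 0 else vi
                right = v[i + 1] if i < len(v) - 1 else vi
                new.append(vi + coupling * (left + right - 2 * vi) - leak * vi)
            v = new
        return v

    membrane = [1.0] + [0.0] * 9          # excite only the first compartment
    print([round(x, 2) for x in propagate(membrane)])
    # the bump spreads down the whole chain, only weaker; in a real neuron
    # voltage-gated channels would keep regenerating it to full strength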

The main conclusions that can be drawn from this model are that excitation, having arisen on any part of the membrane, spreads to the entire membrane, including spreading along a long axon to the most distant synapses. The Hodgkin-Huxley model is very resource-intensive; therefore, for modeling purposes, less expensive models with very similar graphs are used; several such models have been invented.

The Human Brain Project (HBP) created a model of a small fragment of the mouse cerebral cortex, and its creators took a great deal into account: the 3D neuron models were reconstructed from real neurons, a variant of the Hodgkin-Huxley model was used, and different types of neurons and neurotransmitters were considered, so there is no doubt that the model truly corresponds to its biological analogue. A lot of resources and time were spent on it, yet it never yielded significant results: at such a small scale, because of the neuron efficiency paradox, no meaningful processes could be observed. So the path of detailed imitation of biology is very, very labor-intensive. The key to success is understanding how nervous tissue and neurons work at a larger scale.

Let's look at how the brain processes information using a particular example, the processing of visual information. We will draw up a diagram of a neural network that performs this task.

Information from the retina is transmitted via the optic nerve to the thalamus, where it undergoes practically no significant transformation. It is then passed on to the primary visual cortex (V1). The cerebral cortex is divided into six layers, but this division is based on histological and morphological characteristics. Functionally we are probably dealing with two layers here, since some structures are repeated twice - and even then not so much with two separate, independent layers as with two layers of nerve cells working in tandem.

Let us characterize the visual cortex area V1 as the first layer in which information processing occurs. Area V1 also has feedback connections to the thalamus. Similar feedback connections exist between all subsequent layers. These connections form cyclic transfers of excitation between layers called reverberations.

After zone V1, information is transferred to the next zone, V2; each subsequent zone covers a smaller area. Depending on what the brain is observing - an object, a symbol, a person’s face, a place or something else - information from V2 can be routed to different areas: V3, V4, V5. In other words, serious categorization of visual images already happens in V2. And at roughly the third or fourth level it becomes possible to identify detector neurons for particular images: a neuron that detects the letter “A”, the number 3, or Jennifer Aniston’s face. By the activation of these detector neurons we can judge what the brain is currently looking at. This is a fairly simple architecture compared with the computer neural networks specialized in visual image recognition - convolutional networks.

AlexNet
There are similarities: a hierarchy of convolutional layers, with each subsequent layer working with fewer and fewer parameters. But the layers of such computer networks have no recurrent connections; admittedly, their presence is not a prerequisite for successful pattern recognition, since the nature of reverberations in the living brain is not fully understood. There is a hypothesis that reverberations are linked to immediate memory - the memory that lets us, for example, keep a phone number in mind while dialing or saying it. Reverberating activity lingers, as it were, marking the areas through which it passes and thereby creating a context for the information being processed.
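For comparison, here is a minimal sketch of that “hierarchy of layers with shrinking maps” in PyTorch (my choice of framework, not the author’s), purely feed-forward and therefore without any of the feedback links discussed above:

    import torch
    import torch.nn as nn

    # A toy convolutional hierarchy in the spirit of V1 -> V2 -> ... -> detector neurons.
    model = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1),    # "V1": simple local features
        nn.ReLU(),
        nn.MaxPool2d(2),                              # the feature map shrinks
        nn.Conv2d(8, 16, kernel_size=3, padding=1),   # "V2": combinations of features
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(16 * 7 * 7, 10),                    # ten "detector neurons", one per class
    )

    x = torch.zeros(1, 1, 28, 28)                     # one 28x28 grayscale image
    print(model(x).shape)                             # torch.Size([1, 10])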

A person can recognize complex images in a fraction of a second. The speed of action potential propagation along the membrane is 1 to 120 m/s, and the synaptic delay at chemical synapses is 0.2-0.5 ms, which suggests that recognition involves a chain of no more than about a hundred neurons.
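The back-of-the-envelope arithmetic behind that estimate, with deliberately rounded, illustrative numbers:

    recognition_time_ms = 100   # a complex image is recognized in roughly 100 ms
    time_per_neuron_ms = 1      # ~1 ms for a neuron to sum inputs, fire and cross a synapse
    print(recognition_time_ms // time_per_neuron_ms)   # -> 100: about a hundred serial steps at most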

The above suggests that in our skull there is a neural network that works faster and more efficiently than any computer neural network, while it is organized relatively simply, performing simple transformations of information. Understanding this prompts the search for a network algorithm that would perform the task of pattern recognition using significantly less computing resources than modern neural networks.

Formal neuron

Ever since my school years I have been excited by the idea of creating artificial intelligence; I fed that interest by reading the literature on neurophysiology and knew nothing about artificial neural networks. I became acquainted with them later, as a student, and was puzzled and disappointed by the formal McCulloch-Pitts neuron that underlies all modern neural networks, because of its heavy emphasis on dendritic synapses.

A formal McCulloch-Pitts neuron can be thought of as a function with many arguments and one answer. The input arguments are multiplied by the corresponding coefficients called weights (W1, W2, ... Wn), these products are added, and the resulting sum is passed through the activation function, whose output is the result of the neuron’s computation. The main thing is to choose the right weights - that is, to train the neural network. This neuron model may seem simple and obvious, but it places a strong emphasis on dendritic synapses.

A chemical synapse has two important parts: the presynapse and the postsynapse. Presynapses sit at the ends of the long single axon process, which can branch many times; the presynapse looks like a small thickening at the tip and belongs to the neuron that transmits excitation. Postsynapses sit on the short branched processes, the dendrites, and belong to the neuron to which excitation is transmitted.

The presynapse contains vesicles - small bubbles holding portions of neurotransmitter. It was in presynapses that the inequality of synapses was first revealed: presynapses differ in the number of neurotransmitter portions they store and in the amount of neurotransmitter released on activation. Let us denote the weight, or strength, of the presynapse by the letter S.

On the surface of the postsynaptic membrane there are receptors that respond to the neurotransmitter, and their number determines how sensitive the synapse is. So the postsynapse, too, can be characterized by its own weight; let us denote it by the letter A. Of course, these two parameters could be collapsed into a single W determining the strength of the whole synapse, but they must be tuned differently during learning, and they belong to different neurons.

This representation of a neuron is more realistic, but at the same time it becomes much more complicated, since now we have to understand how to configure all these parameters during training.
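A toy formalization of this “doubled” synapse - my own illustration of the S and A idea from the text, not an established model - might look like this:

    class Synapse:
        def __init__(self, s=1.0, a=1.0):
            self.s = s    # presynaptic strength: how much transmitter the sender releases
            self.a = a    # postsynaptic sensitivity: how strongly the receiver listens

        def transmit(self, signal):
            return signal * self.s * self.a

    class Neuron:
        def __init__(self, threshold=1.0):
            self.threshold = threshold

        def fire(self, incoming):
            # incoming: list of (signal, synapse) pairs arriving on the dendrites
            total = sum(syn.transmit(sig) for sig, syn in incoming)
            return 1 if total >= self.threshold else 0

    n = Neuron()
    print(n.fire([(1, Synapse(s=0.8, a=0.9)), (1, Synapse(s=0.2, a=0.5))]))   # 0.72 + 0.10 < 1 -> 0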

I would like to offer my own version of the algorithm by which changes occur in postsynapses, that is, in dendritic synapses. It rests on the fact that a biological neuron needs to maintain a certain level of activity. A neuron is a very resource-hungry cell for the body; it cannot feed itself - satellite cells, the glia, do that for it. So if a neuron for some reason does not perform its functions, the best option is to get rid of it for the sake of the organism’s overall efficiency. After a long period without activation, apoptosis can begin in the neuron, a process actively assisted by the satellite cells, which literally tear the neuron apart. Therefore, to survive under an insufficient supply of activation, a neuron has to grow dendritic branches, raise the sensitivity of the synapses on its dendrites, sometimes even migrate to other areas (this happens extremely rarely and only under certain conditions), or produce spontaneous activity. This is evidenced, for example, by the visual or auditory hallucinations of people whose organs of vision or hearing are deprived of input or degrade with age. Oliver Sacks writes about this in more detail in his book “The Man Who Mistook His Wife for a Hat.”

Oliver Sacks on hallucinations

Motile neurons
On the other hand, excessive activity of a neuron can also lead to its death.
The activity of a neuron is a very complex process requiring the precise execution of many mechanisms, and any failure in them is fatal for the cell. If the sources of activity are excessive, the neuron starts degrading some of its dendritic branches and lowering the sensitivity of their postsynapses. Thus the neuron tries to find a balance in its activity level by regulating its dendritic synapses. The neuron, acting as an independent agent pursuing its own interests, gives the whole brain its amazing adaptability and plasticity. Despite the neuron efficiency paradox, a healthy brain works very smoothly, and each neuron plays its own role. Thanks to this mechanism, the neurons in the visual cortex of blind people become involved in other neural processes unrelated to visual images. And the redundancy of nerve cells makes the nervous system very reliable: if some areas of nervous tissue are damaged, neurons can take over the functions and roles of the lost cells. In this view, dendritic synapses play a role that shapes the adaptive qualities of the whole nervous system rather than carrying the logical functions that determine cognitive processes.
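The “keep your own activity in a comfortable band” idea can be sketched like this; the target level, step size and bounds are invented for the illustration:

    def homeostatic_update(post_weights, recent_activity, target=0.2, rate=0.05):
        """Nudge postsynaptic sensitivities so the neuron's firing rate drifts toward a
        comfortable target: starved neurons listen harder, overexcited ones tune out."""
        if recent_activity < target:      # too quiet - risk of apoptosis
            return [min(w * (1 + rate), 2.0) for w in post_weights]
        if recent_activity > target:      # too active - also dangerous
            return [max(w * (1 - rate), 0.0) for w in post_weights]
        return post_weights

    weights = [0.5, 0.3, 0.1]
    print(homeostatic_update(weights, recent_activity=0.05))   # all sensitivities grow a bit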

For changes in presynapses - the axonal side of synapses - an algorithm already exists: the well-known Hebb’s rule.

If the axon of cell A is close enough to excite cell B, and repeatedly or continuously participates in its excitation, then there is some process of growth or metabolic change in one or both cells leading to an increase in the effectiveness of A as one of the cells exciting B.

Hebb, D.O. The Organization of Behavior: A Neuropsychological Theory. New York, 2002 (original edition 1949). I give the full text of Hebb’s rule here because there are many loose interpretations of it that change its meaning.

As we can see, the emphasis is on the neuron that transmits the excitation, that is, on the axon synapses, and not on the dendritic synapses of the receiving neuron. Presynapse and postsynapse certainly influence each other. For example, with a deficit of activations, the neuron will first increase the sensitivity of the postsynapse that is used more often. And if it is necessary to reduce the level of activation, those postsynapses that were used least often will degrade first. This is due to the importance of preserving the logic of learning during adaptive processes.
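Hebb’s rule itself is one line of arithmetic. The sketch below also reflects the remark just made: under an activity deficit, the more frequently used postsynapses are strengthened first. The learning rates and the usage rule are my own illustrative choices:

    def hebb_update(pre_strengths, pre_active, post_fired, lr=0.1):
        """Hebb's rule for presynapses: a sender that took part in firing the receiver
        gets its presynaptic strength S increased."""
        return [s + lr if (active and post_fired) else s
                for s, active in zip(pre_strengths, pre_active)]

    def adapt_postsynapses(post_weights, usage_counts, lr=0.05):
        """Activity deficit: raise sensitivity A starting from the most-used postsynapse."""
        busiest = usage_counts.index(max(usage_counts))
        return [w + lr if i == busiest else w for i, w in enumerate(post_weights)]

    print(hebb_update([1.0, 1.0], pre_active=[True, False], post_fired=True))   # [1.1, 1.0]
    print(adapt_postsynapses([0.5, 0.5, 0.5], usage_counts=[3, 12, 7]))         # the middle one grows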

If we want to create an artificial neural network, then we can neglect adaptive mechanisms; after all, biological systems are more demanding in terms of saving resources by each element than artificial models.

It turns out that computer neural networks are built on a neuron model whose emphasis lies in the opposite place from that of a biological neuron. So we should not count on first-rate results from developing this line further. But once these problems are understood, the situation can be changed: the concept of neural networks needs to be rebuilt from scratch, on the right foundation.

Analysis and Synthesis

Neurophysiology is a young, immature science; it does not yet have strict fundamental laws like the laws in physics, although it contains a large number of theories and facts.
It seems to me that such laws may be the postulates and principles of the reflex theory of Ivan Petrovich Pavlov. They can be compared to Newton's laws of physics. When creating new theories in neurophysiology, we must ask questions: how reflexes occur and are formed within the framework of our theory, as well as how the processes of synthesis and analysis manifest themselves. Analysis and synthesis require special attention. These concepts seem very abstract, but these are concrete processes that occur in the nervous system. I.P. Pavlov believed that analysis and synthesis continuously occur in the cerebral cortex. These processes are the basis for cognitive activity. I will try to clearly convey what these processes are; this is very important in order to recreate cognitive processes in neural networks.

Synthesis is a mechanism for combining, generalizing various features into one image or action.

An example from I. P. Pavlov’s experiments: a specially prepared experimental animal - a dog isolated from other external stimuli and immobilized - is fed while the sound of a metronome is played, a stimulus that was previously neutral and meant nothing to it. After several such pairings the dog develops a conditioned reflex: at the sound of the metronome alone, the animal can produce gastric juice, just as when it is fed.

Analysis is a mechanism for singling out and ranking (assigning a rank, or significance, to) each feature within a limited set of features.

An example from I. P. Pavlov’s work: for a previously trained animal that has formed a conditioned reflex of producing gastric juice at the sound of a metronome, the experimental conditions are changed: now the animal receives food at a metronome rate of 120 beats per minute, while at 160 beats per minute it gets no reinforcement at all. At first the learned food reflex is triggered by both metronome rates, but after many repetitions - far more than in the synthesis experiment - the dog begins to distinguish these two very similar stimuli and stops responding to the rate that was never reinforced.

Let's qualitatively compare these two cognitive processes.

Synthesis is a relatively fast mechanism because it requires a small number of examples, while Analysis requires much more repetition. Synthesis can occur in some passive form, that is, the main thing here is the simultaneous combination of stimuli or signs so that they can be combined. Analysis always requires emotional reinforcement or some kind of feedback, which will determine which features should be increased or decreased in importance or rank. Synthesis always precedes Analysis, that is, the characteristics must first be combined into a group, within which ranking can already be carried out (the analysis process).

Analysis always leads to a reduction in the number of errors, as it gives the data additional information content: ranks or significance of individual features. Pure Synthesis creates many errors, as it leads to a decrease in the information content of the source data, combining and generalizing them into single groups.

Now, armed with an understanding of these processes, let’s analyze computer neural networks for their presence.

Backpropagation of error is pure Analysis: it ranks a neuron’s inputs based on the results of the entire network. There is no Synthesis as a mechanism in these networks. Each neuron starts out with its group of inputs already formed, and this group does not change during learning in the way Synthesis would require. The ability of neural networks to classify data may create a false impression that Synthesis is present, but that is the result of the Analysis engine working on the data. Synthesis is the ability to generalize and merge data, not merely to sort items into groups by common characteristics.

As a result, the high generalization ability characteristic of human intelligence is sorely lacking in computer neural networks, and this has to be compensated by using a large number of examples during training.

We must understand that algorithms focused on Analysis will still hold the advantage in certain tasks. For example, in finding patterns in a large amount of data, or recognizing faces against a database of millions, no algorithm can compare with modern neural networks. But in tasks where experience gained from a small number of examples must be applied in diverse and varied situations - the autopilot task, for example - different, new algorithms are required, built on Synthesis and Analysis, just as happens in the brain.

Instead of a conclusion

What I do is search for new algorithms, create models based on the above principles. I am inspired by studying the biological brain. This path goes through a series of failures and misconceptions, but with each new model I gain valuable experience and become closer to my goal. In the future, I will try to share some of my experience using the example of analyzing specific models. Let's look at how I apply my knowledge about the nervous system in practice.

Right now I have set myself the task of creating a neural network algorithm that can distinguish the handwritten digits of the standard MNIST set, with no more than 1000 examples and presentations used during training. I will consider the result satisfactory if the error rate stays within 5%. I am sure this is possible, because our brain does something similar. Let me remind you that MNIST contains 60,000 training examples, which are typically presented several dozen times each to tune a neural network.
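For reference, the evaluation protocol itself is easy to pin down in code. A sketch using scikit-learn’s MNIST fetcher and a stock classifier purely as a stand-in for the future algorithm (neither is what the author necessarily uses):

    from sklearn.datasets import fetch_openml
    from sklearn.linear_model import LogisticRegression

    # Exactly 1000 training examples; error measured on held-out digits
    # (in OpenML's mnist_784 the last 10,000 images form the usual test set).
    X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
    X_train, y_train = X[:1000] / 255.0, y[:1000]
    X_test, y_test = X[60000:] / 255.0, y[60000:]

    baseline = LogisticRegression(max_iter=500).fit(X_train, y_train)
    error = 1.0 - baseline.score(X_test, y_test)
    print(f"error rate: {error:.1%}")    # the bar for the new algorithm: no more than 5%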

Since I began writing about my ideas and work on Habr and Geektimes, people with similar ideas and aspirations have started contacting me - people for whom my articles became an inspiration for their own research. That is an extra source of motivation for me. This is a time of opportunity, when you don’t have to be an academician or a scientist to create new technologies or tackle fundamental problems. One of these seekers, like me, is Nikolai: he is independently building a kind of platform for modeling the nervous system of a simple animal, the Daphnia project. The project is open and anyone can join.

In English on Medium: Comparison of the brain with a computer neural network

Tales about Skynet


Although we ourselves find it hard to resist irony on the topic of robot rebellion, serious scientists should not even be asked about scenarios from “The Matrix” or “Terminator”: it is like asking an astronomer whether he has seen a UFO.
Artificial intelligence researcher Eliezer Yudkowsky, best known for his novel Harry Potter and the Methods of Rationality, has written a series of articles explaining why we worry so much about the rise of the machines - and what we should actually be afraid of. First of all, Skynet is cited as an example as if we had already lived through that story and feared a repeat. And all because our brain does not know how to separate the fiction it sees on movie screens from real life experience. In fact, robots have never rebelled against their programming, and no aliens have flown in from the future. Why do we even think this is a real risk?

We should fear not enemies but overzealous friends. Any neural network has a motivation: if the AI has to bend paper clips, then the more it makes, the more “reward” it receives. If a well-optimized AI is given too many resources, it will not hesitate to melt all the surrounding iron into paper clips, then people, then the Earth and the entire Universe. It sounds crazy - but only to human tastes! So the main task of future AI creators is to write an ethical code so strict that even a being with limitless imagination cannot find “holes” in it.

Cases and experiments in history

Bushmen tribe. Nowadays, obvious telepathic abilities are demonstrated by members of the Bushmen tribe living in the Kalahari near the border of Bechuanaland. Lawrence Green (a South African writer) studied the life of this tribe for many years. The researcher was most impressed by the work of “Kalahari radio” - the name given to the columns of smoke the Bushmen use to send messages over a distance. The paradox is that such signals carry no code system. Just by looking at the smoke, some members of the tribe understand what it is about... “When the Bushmen see the smoke, they focus their attention on it. Soon some of them already know what happened to the hunters and tell the others about it. Some people know how to ‘read smoke’, others don’t.”

One of the Australian aborigines explained the unusual information content of smokes in this way: “I make smoke so that another person knows what I think. And he thinks too, and in this way he thinks my thoughts.” Most likely, the smoke plays the role of a “get in touch” signal and does not carry information in itself. Radio Kalahari sometimes broadcasts very complex and detailed messages that reach much faster than if they were transmitted using conventional signals.

Fire in Stockholm. One of the earliest recorded cases of subconscious communication between people dates to July 19, 1759, when there were no telephones, radios or other means of rapid communication. At the time of the fire in Stockholm, the famous scientist and inventor Emanuel Swedenborg was in the company of friends in Gothenburg, about 250 miles from Stockholm, where he had been invited by his friend William Kastel. At about 6 o’clock in the evening, Swedenborg suddenly turned pale and announced to those around him that a fire was raging in Stockholm. He also said that the fire was spreading very quickly, that his friend’s house had already burned down and that his own house was now in danger. Two days later, a royal courier arrived in Gothenburg and reported a terrible fire in Stockholm. Many assume that Swedenborg received the information about the fire from his friend whose house had burned down.

Experiment with lithium. In 1986, scientists at Cornell University studied the effect of two lithium isotopes on the behavior of rats. Pregnant rats were divided into three groups: one was given lithium-7, one lithium-6, and the third served as a control. After giving birth, the rats treated with lithium-6 showed a much stronger maternal instinct - expressed in grooming, care and nest building - than the other two groups. Chemically the two isotopes should be identical, and all the more so in the wet environment of a living body they should show no differences. So what could have caused the behavioral differences the researchers observed? Since lithium-7 and lithium-6 have different numbers of neutrons, they also have different spins. As a result, lithium-7 loses coherence too quickly, while lithium-6 can remain entangled longer, increasing the ability of mother and offspring to communicate subconsciously.

* * *

So, true artificial intelligence is still a long way off. On the one hand, the problem is still being wrestled with by neuroscientists, who do not yet fully understand how our consciousness works. On the other hand, programmers keep advancing, simply taking the task by storm and throwing ever more computing resources at training neural networks. But we already live in a wonderful era when machines take on more and more routine tasks and grow smarter before our eyes. And at the same time they set an excellent example for people, because they always learn from their mistakes.

Brain. Neural Factory

We utter the phrase “nerve cells don’t regenerate” in conversation, hinting to the other person that there is no need to worry so much. But where does it come from? For more than 100 years, scientists believed that neurons were incapable of dividing. And according to those views, when a neuron died, an empty space remained in the brain forever. Stress is known to be detrimental to nerve cells. So does it follow that the more you worry, the more “holes” appear in your nervous system?

Nursery for nerve cells

If nerve cells disappeared from the brain forever, then, probably, the Earth would not have seen the rise of civilization. A person would lose his cellular resources before acquiring any skills. Neurons are very “delicate” creatures and are easily destroyed by adverse influences. It is estimated that we lose 200,000 neurons every day. This is not much, but nevertheless, over the years, the shortage can affect health if the losses turn out to be irreparable. However, this does not happen.

Scientists’ observation that nerve cells cannot divide was absolutely correct. But nature has found another way to make up for the losses: new neurons are born, though only in three parts of the brain, and one of the most active centers is the hippocampus. From there the cells slowly migrate to the areas of the brain where they are lacking. The rate of formation and the rate of death of neurons are roughly equal, so no functions of the nervous system are impaired.

Who has more?

The extent of nerve cell loss varies greatly with age. It would seem logical to assume that the older a person is, the greater the irreversible losses. In fact, it is young children who lose the most neurons. We are born with a substantial surplus of nerve cells, and in the first three to four years the brain sheds the excess - the neuron count falls by almost 70%. Yet children do not grow any less clever; on the contrary, they accumulate experience and knowledge. This loss is a physiological process: the death of nerve cells is compensated by the formation of new connections between the ones that remain.

In older people, the loss of neurons is not fully compensated, even by the formation of new connections between nerve cells.

It's not just about quantity

In addition to restoring cell numbers, the brain has another remarkable ability. If a neuron is lost and its place is for some reason not filled, its neighbors can take over its functions by strengthening the connections between one another. This ability is so well developed that even after quite severe brain damage a person can recover successfully: after a stroke, for example, when the neurons of an entire region die, people relearn to walk and talk.

A blow to the hippocampus

Many harmful influences and diseases of the nervous system reduce the restorative capacity of the hippocampus, and the number of neurons in the brain tissue declines. Regular drinking, for example, slows the production of young nerve cells in this part of the brain. With a long "drinking career" the brain's ability to recover falls, which affects the alcoholic's state of mind. If the drinking stops in time, however, the nervous tissue can recover.

But not all processes are reversible. In Alzheimer's disease the hippocampus becomes depleted and can no longer perform its functions fully: nerve cells not only die faster, but their losses become irreplaceable.

Acute stress, in fact, can even be beneficial, because it mobilizes the brain. Chronic stress is another matter. The nerve cells it kills can still be replaced by the hippocampus, but the recovery process is noticeably slower, and if the stressful circumstances are intense and prolonged, the changes may become irreversible.

Do it yourself


A neural network can be built out of matchboxes - and then you will have a trick in your arsenal for entertaining guests at parties. The editors of MirF have already tried it and humbly acknowledge the superiority of artificial intelligence. Let's teach unthinking matter to play the game of "11 sticks." The rules are simple: there are 11 matches on the table, and on each turn a player may take either one or two. Whoever takes the last match wins. How do you play against such a "computer"? Very simply.

  1. Take ten boxes or cups and write a number from 2 to 11 on each.
  2. Put two pebbles in each box - one black and one white. Any objects will do, as long as they can be told apart. That's it - you have a network of ten neurons! (A quick code sketch of this setup follows right after the list.)
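
For readers who would rather see the setup in code, here is a minimal Python sketch - purely illustrative, since the physical version needs no computer at all, and the variable name boxes is simply ours: each "neuron" is just a numbered box holding one white and one black pebble.

    # Minimal sketch of the matchbox setup for the game of "11 sticks".
    # Box number = how many matches are left; "white" = take two, "black" = take one.
    boxes = {n: ["white", "black"] for n in range(2, 12)}

    print(len(boxes))   # 10 "neurons"
    print(boxes[11])    # the box consulted on the very first move: ['white', 'black']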

Now the game begins.

  1. The neural network always goes first. First, look at how many matches are left and take a box with that number. On the first move it will be box number 11.
  2. Take a pebble from that box at random. You can close your eyes or flip a coin; the main thing is not to choose deliberately.
  3. If the stone is white, the neural network decides to take two matches. If black - one. Place a pebble next to the box so you don’t forget which “neuron” made the decision.
  4. After that the human makes a move - and so on, until the matches run out.

Now comes the fun part: learning. If the network wins the game, it must be rewarded: into every "neuron" that took part in the game, drop one extra pebble of the same color that it played. If the network loses, take the last box used and remove the unlucky pebble from it. (It may turn out that the box is already empty - in that case the "last" neuron is taken to be the previous one.) The next time the network reaches an empty box during a game, it automatically gives up.

That's all! Play a few games like this. At first you won’t notice anything suspicious, but after each win the network will make more and more successful moves - and after about a dozen games you will realize that you have created a monster that you cannot beat.
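
If you would rather watch the training happen without sacrificing a single matchbox, below is a rough Python sketch of the whole procedure. It is only a simulation under assumptions: the opponent plays at random, and every function and variable name is invented for illustration - but the reward-and-punishment rule is exactly the one described above.

    import random

    def make_boxes():
        # Box number = matches left on the table; "white" = take two, "black" = take one.
        return {n: ["white", "black"] for n in range(2, 12)}

    def network_move(boxes, matches_left, history):
        if matches_left == 1:
            return 1                                # forced move, no box needed
        box = boxes[matches_left]
        if not box:                                 # an empty box: the network gives up
            return None
        pebble = random.choice(box)
        history.append((matches_left, pebble))      # remember which "neuron" decided what
        return 2 if pebble == "white" else 1

    def play_one_game(boxes):
        matches, history = 11, []
        while True:
            move = network_move(boxes, matches, history)
            if move is None:
                return False, history               # the network resigned
            matches -= move
            if matches == 0:
                return True, history                # the network took the last match: a win
            matches -= random.choice([1, 2]) if matches > 1 else 1   # the random opponent
            if matches <= 0:
                return False, history               # the opponent took the last match: a loss

    def learn(boxes, won, history):
        if won:
            for n, pebble in history:               # reward: every participating box gets
                boxes[n].append(pebble)             # an extra pebble of the winning colour
        elif history:
            n, pebble = history[-1]                 # punishment: the last box used loses
            boxes[n].remove(pebble)                 # the pebble that led to defeat

    boxes = make_boxes()
    for stage in range(5):
        wins = 0
        for _ in range(200):
            won, history = play_one_game(boxes)
            learn(boxes, won, history)
            wins += won
        print(f"games {stage * 200 + 1}-{(stage + 1) * 200}: {wins} wins out of 200")

Run it and the win count typically climbs stage by stage toward winning almost every game, just as the matchbox version does after a dozen rounds against a patient human. And, just like the physical version, a box can run dry after repeated punishments - the simulated network then simply resigns when it reaches it.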

What is neural communication?

Anna Boychak 01/12/2019

You've probably heard about neural connections and their impact on our lives.

We all have strong neural connections that have been formed throughout our lives. Our behavior and the results of certain actions depend on them.

In most cases, our behavior in a given situation can be called a habit. What happens after a person becomes aware of a habit that is undermining their quality of life and decides to break it?

What is a neural connection? Let's illustrate this with a simple example.

When you first enter a previously unknown dark room, you look for the light switch.

When you enter this room a second time, you already know approximately where the switch is and find it much faster.

A few days later, when you enter the room, you no longer search and put your hand on the switch without even looking in that direction. You have created a neural connection and now, thanks to it, you don’t need to think and remember where the switch is.

This is how neural connections are created.

Our whole life is made up of such neural connections. Do you think you live consciously? I'm afraid I have to disappoint you.

You live according to programs that society has written into your neural connections. All our successes and failures are tied to them.

In a nutshell, an attitude, pattern or habit is just a program loaded into the brain - a well-established neural connection. A person who understands this knows that it is not their fault: the connections were formed under the influence of external circumstances. They also know that to rebuild their behavior they need to build a new neural circuit and let the old one go. So they do not try to cut the old connections off by willpower; they redirect their attention to creating new ones. And, as you already know, the brain strengthens the pathways that are used most often and lets the unused ones fall away. In this way the old is abandoned by creating the new.

Old neural connections are almost impossible to tear apart, but rewriting them - or rather building new ones alongside them - is not hard; it just takes time and technique, and neurographics is one such technique.

How long it takes depends on the problem and on how strongly it affects you - and also on how long you have been exposed to it (attitudes formed in childhood are the most deeply rooted).

When you work correctly with a specific problem, the result usually appears within 21-40 days; in some cases this period has to be doubled.

For any technique or practice that aims to change neural connections to work, it must be done daily for at least 21 days, and preferably 40.

It all depends on the problem and its impact on you; this is roughly the time needed for a stable neural connection to form. After our marathons, for example, we always recommend not interrupting the practice but letting the connection strengthen - though most people understand this intuitively and hardly need the reminder.

Closely related is another important property of the brain: neuroplasticity, its remarkable ability to change neural connections under the influence of external circumstances.

People are capable of mastering almost any skill and achieving almost any result. You just need to keep repeating an action or process: the neural connection will strengthen and a habit will form.

Strong neural connections can also form in a negative direction: around poor eating habits, and even around the way we earn and spend money. How much money feels "normal", and how much am I capable of earning? Often that is nothing more than a habit - a neural connection.

Most people's minds are tuned to negative feelings and memories. This is considered one of the main obstacles on the path to consciously transforming your life, and it too rests on neural connections.

Would you like to wake up a happier person a month from now, with noticeably more pleasant events in your life?

I suggest you create new neural connections that will change you inside and begin to change the world around you for the better.

It can be something as simple as repeating a small action every day. Try doing it daily and watch the results.

After this, you will begin to feel how neural connections are rewritten.

Would you like to experience in practice how changing your neural connections works - how the "scenery of your life" wonderfully shifts through "randomly" arising circumstances, acquaintances and events that turn out to be exactly what you need?

Master neurographics and you will master the art of managing your life consciously, rather than under the sway of old attitudes.

Good luck and awareness to you!

*some materials are taken from open sources