MARCH 15, 2012
“THE INTEGRATION OF THE HUMAN BRAIN INTO CONTROL SYSTEMS FOR ANTHROPOMORPHIC DEVICES USING BRAIN-COMPUTER INTERFACE TECHNOLOGY”

 

 

Alexander Kaplan, Dr., psychophysiologist, founder of the first Russian BCI laboratory.


 

Good afternoon, dear colleagues, friends and guests of this wonderful, remarkable and, I would say, landmark congress. Support, including support for the activity I am involved in (the creation and development of neural interface technologies), is a great thing, and here not only financial support is required but also, so to speak, neural-network support; in other words, it should be something that people need. I hope that this congress will be the first of its kind and will help both scientists and everyone else to understand whether it is necessary to create a system that allows the brain to have direct contact with the outside world.

 

Today the outside world is to a large extent digital, and we are becoming increasingly immersed in it; the brain, too, works on information and digital principles, so why should these two worlds not come into direct contact? Will this bring us any benefit, will it help us in any way? I would like to cover two simple aspects in 20 minutes. The first (very briefly) is the basic ideas, the present and the near future of neural interface systems. I am a direct participant in the development and creation of these systems, so I know them at first hand. The second aspect, which I would also like to emphasize, perhaps a little simplified, is the ethical and philosophical side of the need for these systems.

 

I will now begin with the second, ethical-philosophical aspect. This is a doll taken out of a dusty chest of the 18th century. Its creators were the father and son Pierre and Henri-Louis Jaquet-Droz; the son, in fact, was the originator of androids. This doll was capable of drawing and writing; another doll, a girl, played the harpsichord, and in particularly emotional passages her lips would move and her chest would rise, while the boy, who wrote with a pen, would sometimes write the phrase: “I think, therefore I am”. Does he exist? Yes, this doll exists right now; it has lived for 200 years and will live another 200 years, and perhaps it will outlive many of us. But this is life!

 

How are we to deal with this issue? Are we to shut down neural interface studies, or speed them up? I believe we must try to move full speed ahead, as previous speakers have said, for example Vitaly Lvovich Dunin-Barkovsky. Why? Look at this scheme. In this hall there is no one who is more than a year old by the biological age of their cells. Cells constantly renew themselves: some every five days, some over the course of a year, but in any case, over an average human lifespan they renew themselves many dozens of times, copying themselves as they go. The exception is the brain cells. We won't go into detail here about why this is so (clearly because these are the cells of a large information system), but it is also important that when people pass away, it is mainly for reasons not connected with the activity of their nerve cells. Nerve cells are the last to die, and in the vast majority of cases they die not because they are diseased, but because of disorders in the organs that allow them to function. Evidently, nerve cells are long-lived; we simply do not give them the chance to live out the lifespan allotted to them. No one knows how long this lifespan is. It may well be 200 to 300 years, twice a human lifespan, perhaps three or four times longer. We are not even talking about an order of magnitude, but for everyone present in this hall, 300 years of life is almost an eternity. So let me take the near-term perspective: can we concentrate on extending the life of the brain by two or three times? The personality preserved in the brain would then exist two or three times longer. People who were alive two centuries ago could be talking to us here on this stage, and they would have gained all that experience thanks to their healthy brain cells.

 

How can this be done? Here at the congress we may already state that any human organ can be replaced with an artificial one. This can be done right now: all the technology is ready, and it is indeed being done, only no attempt is being made to combine it all into an anthropomorphic body. But let us imagine that we actually do this. What will we do about the brain? The hypothesis that a brain can be recorded onto an artificial medium and thus installed in this anthropomorphic body seems dubious to me at present. I may be mistaken, but it is doubtful whether this can be realized within a realistic, reasonable time frame. In principle it is theoretically possible without violating the physical laws of nature, but in the near future? In the near future what we can realistically do is preserve the brain for longer than the body.

 

And here comes the moral and ethical problem. If society is prepared to have a living brain attached to an artificial body, two requirements arise: firstly, this body must be anthropomorphic (simply because the brain, throughout its cognitive life, is tuned to certain motor patterns, to the body's movements and to the body schema in general, and so it needs to be able to rely on the capabilities of the previous human body), and secondly, the brain must control this body. But how can it, when there are no nerves or muscles? The brain-computer interface is the key module that must be built into an anthropomorphic robot containing a natural brain. As fantastic as this may sound, I can demonstrate that this is working right now in our laboratory, and in dozens of laboratories in other countries around the world.

 

The first problem that arises here is the extent to which the brain contains volitional impulses for controlling movements, behavior and desires, for the Cartesian brain was practically a reflex machine: something appeared at the input, special logical schemes synthesized in the course of phylogeny and ontogenesis processed it, and there was an output: a button was pushed and a bell rang. What interface with a computer could be made here? None would be needed: just analyze the input and you can program the output. Ivan Mikhailovich Sechenov was the first to show experimentally that internal volitional programs and impulses are an important regulator of human behavior. There is internal activity of the human brain, in modern terms its psyche, and it is these volitional impulses that a brain-computer interface must pick up.

 

This is the main question: can these impulses be picked up by some recognizing, decoding system? Neal Miller was one of the first to show that such approaches exist, that an animal or a person can find these volitional impulses in themselves and use them to change something in the environment. He attached a device for measuring blood pressure to the caudal artery of a rat, and connected it in such a way that when the rat's blood pressure rose, it received food. The rat quickly adapted to this situation and, if one can say this about an animal, changed the blood pressure in its caudal artery at will, depending on how the food-dispensing sensor was connected. Miller did the same thing with other physiological indicators, with gut motility and with heart rhythm, and instead of dispensing food he stimulated the reward center. The rat eagerly went along with all the experimenter's tricks and always found the physiological indicator that was connected with stimulation of the lateral hypothalamus (the reward center).

 

Thus the volitional impulse, the impulse of need, really did find an outlet through a physiological indicator along a channel that nature had not originally designed: we simply created a biotechnical system, a new output channel from the brain, from the centers of desire, need and intention, to external control systems, to external actuating devices. This is the precursor of the neurocomputer interface mentioned at the beginning.

 

What physiological indicator can be selected as the output from the brain, which we can begin to decode and decipher in order to achieve direct contact between the brain and reality? The most obvious one is the electrical activity of the brain, because, firstly, it is easy to register by placing sensors on the scalp and, secondly, it is an inertia-free record of the informational activity of the brain: roughly speaking, the nerve cells communicate with each other by electrical impulses, and we pick up their echoes through the sensors on the scalp. This means we can use this physiological indicator as the one closest to the informational activity of the brain.

 

We worked with this indicator for a long time in order to decipher and diagnose various diseases and various states of the brain; these are quite complex graphs, as you can see now. We tried to divide these graphs, like an alphabet, into the separate patterns that exist in them, for example into A, B, C, D. And here the idea arises: what about making these individual patterns into commands? As soon as such a pattern appears in the electrical activity, could it turn into a command and cause a certain action in the environment, or print a certain letter? Can we use this approach to create communication systems, neurocommunicators?
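
To make the idea of turning patterns into "letters" concrete, here is a minimal illustrative sketch, not the laboratory's actual system: each pre-labelled pattern template corresponds to a letter, and a new epoch of electrical activity is assigned the letter of the template it correlates with most strongly. The template names, the epoch length and the correlation criterion are assumptions for illustration only.

```python
# Illustrative sketch only: template matching of EEG epochs to "letters".
# The templates and the correlation rule are assumptions, not the real system.
import numpy as np

def classify_epoch(epoch, templates):
    """Return the label of the template most correlated with the epoch."""
    best_label, best_corr = None, -np.inf
    for label, template in templates.items():
        corr = np.corrcoef(epoch, template)[0, 1]
        if corr > best_corr:
            best_label, best_corr = label, corr
    return best_label

# Hypothetical templates for patterns "A".."D" (in practice, learned from data).
rng = np.random.default_rng(0)
templates = {letter: rng.standard_normal(256) for letter in "ABCD"}

# A new epoch: pattern "B" plus noise. The classifier should print "B".
epoch = templates["B"] + 0.3 * rng.standard_normal(256)
print("detected pattern:", classify_epoch(epoch, templates))
```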

 

I won't go into detail about all of these technologies right now. I would only like to say that the main problem is this: it is all very well to have these discrete neurodynamic patterns, and other indicators that we extract from the encephalogram, but what matters most is whether a person can change them at will. If a person cannot do this, then the patterns simply follow the brain's own workings and cannot serve as a channel of communication with the external environment. Learning to control these patterns is an artificial undertaking, because by nature they are intended for completely different purposes. But if a person can nevertheless learn to regulate their electrical activity, then it becomes a control channel.

 

Joe Kamiya, who is sitting here today, in the center, was the first to discover the human capability of controlling brain rhythms. Whether this is good or bad, it was discovered in 1958, and in 1968 the article “Conscious Control of Brain Waves” was published: ten years were required to understand the meaning of this discovery. But today we already know this, and here the logical scheme of a brain-computer interface arises. Just look: a mental effort causes a specific change in the encephalogram; this change is captured by a computing system, which must detect it, register it, classify it and turn it into a command for an external executing mechanism. The person sees the result of their effort in the outside world and sees whether it succeeded or not; thus feedback is formed, and essentially we get a closed loop in which the person learns to control an external object by changing their own electrical activity.
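
The closed loop just described can be sketched in a few lines of code. This is a hypothetical toy model, not the laboratory's software: the EEG is simulated, mental effort is assumed to boost the alpha rhythm, the feature is a crude FFT band power, and a single threshold stands in for the classifier.

```python
# Toy closed-loop BCI sketch: simulated EEG -> feature -> command -> feedback.
# All numbers (sampling rate, amplitudes, threshold) are assumptions.
import numpy as np

FS = 250  # sampling rate in Hz (assumption)

def alpha_power(eeg):
    """Crude 8-12 Hz band power of a one-second epoch via FFT."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / FS)
    return spectrum[(freqs >= 8) & (freqs <= 12)].sum()

def decode(power, threshold):
    """Classify the feature into one of two commands."""
    return "MOVE" if power > threshold else "STOP"

def simulate_epoch(effort, rng):
    """Simulated EEG: mental effort boosts the alpha rhythm (assumption)."""
    t = np.arange(FS) / FS
    alpha = (2.0 if effort else 0.5) * np.sin(2 * np.pi * 10 * t)
    return alpha + rng.standard_normal(FS)

rng = np.random.default_rng(1)
threshold = 20000.0  # would normally be calibrated per user
position = 0
for step, effort in enumerate([False, True, True, False]):
    command = decode(alpha_power(simulate_epoch(effort, rng)), threshold)
    position += 1 if command == "MOVE" else 0
    print(f"step {step}: command={command}, device position={position}")  # feedback
```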

The problem here is: with what degree of differentiation can a person control electroencephalographic patterns? This must be learned, and for this there are more developed decoding schemes and completely different types of classifiers. The core of such a system is the classification of the brain patterns that correspond to various desires and needs. Take simple needs: I want to turn the wheels of the car to the right or to the left, forwards or backwards. How can this be deciphered? In the end it is achieved with a certain degree of error: various classifiers are used, and detection takes place with a certain percentage of errors (for example, 3%): I wanted to turn right, but it moved left. For systems that are only just coming into being, this is quite good. In medicine this is already being used for patients with damaged motor functions and for paralyzed patients (they can already control their wheelchairs), but we are interested in the healthy person, because if we are to move into the future, then we need to apply these interfaces not after the line has already been drawn, but before that happens.
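
As an illustration of the classification step and its error rate, here is another toy sketch, again not the actual classifier used in the laboratory: four simulated feature clusters stand in for the EEG patterns of the four driving commands, a nearest-centroid rule assigns the command, and the empirical error percentage is reported. The feature dimensions and noise levels are assumed.

```python
# Toy sketch of command classification with an empirical error rate.
# The clusters, dimensions and noise are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(2)
commands = ["left", "right", "forward", "back"]
centroids = {c: rng.uniform(-3, 3, size=8) for c in commands}  # "true" pattern centers

def classify(feature_vector):
    """Assign the command whose centroid is nearest to the feature vector."""
    return min(centroids, key=lambda c: np.linalg.norm(feature_vector - centroids[c]))

errors, trials = 0, 400
for _ in range(trials):
    intended = rng.choice(commands)
    observed = centroids[intended] + 0.8 * rng.standard_normal(8)  # noisy EEG features
    if classify(observed) != intended:
        errors += 1  # e.g. "I wanted to turn right, but it moved left"
print(f"error rate: {100 * errors / trials:.1f}%")
```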

 

Can a healthy person learn to control such a car? We built labyrinths through which a car had to move, turning correctly at each junction. All of these are test models for a device that registers, decodes and so on, and they show that control of a car in this way proves to be possible. We don't work with children officially, but one of my graduate students took the system home. We made a game in which a puzzle is assembled using this device; here we have children who are good at putting puzzles together: on the left the picture is still broken up into random pieces, and on the right a tiger has been almost completely assembled. It turns out that at the age of seven a person can learn to do this in three minutes. You can see that this is entertaining; it is not about coercing a person, it is an amusing task. When a person sees that an action takes place through the power of thought alone, this is a special feeling, because it has not been given to us by nature: we work with our hands, but here it is the power of thought. At a young age this looks especially natural. All of this involves technologies that work through a neural interface.

 

There is another technology that is fundamental here. You see, all of these systems require focused attention; our thinking must be engaged, we cannot be distracted by other things but must work with this interface channel. Can we do this unconsciously? We connected three indicators of the electrical activity of the brain (an article has been published about this, and details can be found on my website) to the sliders of an RGB monitor driver (red, green, blue). We know that at least half of all psychologists claim that people have certain color preferences. The brain was connected, and the test subject did not know about this. But even if a person does not know that they can control the monitor screen, the brain is already connected: will the brain start to control the monitor unconsciously if it prefers a certain color? It did a great job. The 15 lines are the 15 test subjects; this column shows the RGB codes before the connection with the monitor was turned on, and here the connection is turned on: many test subjects began to stabilize a certain color on the screen, and when the connection was turned off, it disappeared again. In other words, the brain reaches an agreement with the external environment without the control of consciousness; this is a separate effect.
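
The colour experiment can be sketched as follows. This is a hypothetical reconstruction of the setup, not the published protocol: three EEG band-power indicators (the bands themselves are assumed here) are rescaled and written into the red, green and blue channels of the displayed colour.

```python
# Hypothetical sketch: three EEG band-power indicators drive the RGB sliders.
# The band boundaries and the normalisation are assumptions, not the real mapping.
import numpy as np

FS = 250  # sampling rate in Hz (assumption)
BANDS = {"red": (4, 8), "green": (8, 13), "blue": (13, 30)}  # theta/alpha/beta, assumed

def band_powers(eeg):
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / FS)
    return {name: spectrum[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

def to_rgb(powers):
    """Rescale the three indicators to 0-255 colour sliders."""
    total = sum(powers.values()) or 1.0
    return tuple(int(255 * powers[name] / total) for name in ("red", "green", "blue"))

# One second of simulated EEG with a strong alpha rhythm -> a greenish colour.
rng = np.random.default_rng(3)
t = np.arange(FS) / FS
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(FS)
print("RGB sent to the monitor:", to_rgb(band_powers(eeg)))
```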

 

That article was published some time ago, and here we have the case from October of last year, which David Izrailevich Dubrovsky mentioned: if you show a person a film, you can approximately decode what the person sees from magnetic resonance imaging data. I am somewhat critical of these data, because people who for the most part have not read the article build up great expectations about how images can be seen. But if you read the article, you will see that this is not about seeing images at all, only certain correlations: when a patch is light you get one tomographic picture, when the patch is dark you get another, and on the basis of a synthesis of many such impressions you get some patches on the screen. You can see something, but not images.

 

Unfortunately, what you are seeing is not a girl, only an advertisement for a film: the anthropomorphic robot promoted in the film “Surrogates”. Here is a photo from that film, where the problem with which I began my talk is raised once again: will this be suitable for a person? My personal opinion is that if we are talking of the prospect of 200 to 300 years, then this is quite a normal prospect for human life. We may also be able to make interface systems for anthropomorphic robots, and thus give people a choice when the appropriate situation arises: to live in this form, with many sensors attached to the body correcting all its parameters (the pulse, the blood pressure, the heart), or in that form: to live at your own leisure, and not only at your leisure but also to work, preserving the work of the brain. Thank you for your attention.

 

