Nick Bostrom, PhD, Professor at the University of Oxford, co-founder (with David Pearce) of the World Transhumanist Association
I think it is time that we try to start to wake up and begin to discuss and wrestle with the really big question for humanity in this century, which is how we can ensure that we will still be around in the next century and that we will reap the rewards of this enormous potential that we have to transform the human condition for the better. And I think, once you start to think seriously about this, not in the sense of figuring out how you can more effectively argue your point of view to convince others, but in the sense of actually trying to figure out how it works and what you should do, you will soon realize that it is a lot more complicated than it might seem at first. But the stakes are so high that we just need to keep trying to get even that one per cent increase in clarity in how these issues might pan out.
There are just a couple of different possible paths towards extreme longevity: a biological path and a digital path. The biological path would mean that we would invent better life extension technologies, initially by curing diseases, then by slowing down the aging process, ending up with an open-ended but still biological life span. The digital path would be if we could eventually develop the technology to do human whole brain emulation, where we would create a very detailed model of a particular human brain and then emulate that in a computer. There we would have an indefinite life span potential; we could make backup copies and so forth. I think in the long run the digital path is the more likely of those two. The biological path might be a stopgap measure: if one could make these kinds of breakthroughs in bio-gerontology, it would give a little bit of extra time to the people who are alive now, so they could benefit until the digital technology comes along. But ultimately the potential in computing technology is far greater than that of the biological computing matter that we use now to implement ourselves.
My view is that more attention focused on these issues is generally a good thing. I think more brains being concentrated on the problem makes it more likely that we will find solutions. It is interesting to consider the possibility that there might be instances in which information is actually harmful or dangerous. I call these “information hazards”: cases where true information, or the dissemination of true information, can cause a risk. An example of this might be if you have a new way of, say, modifying a virus that increases its virulence, and it is very easy to do this modification; if the virus would become sufficiently dangerous, there might be a strong case that we would be better off without that information. This kind of information hazard, I think, will be an important source of risk in some of these technological areas. And right now we have very poor mechanisms for dealing with this. The issue arose recently with the modification of the bird flu virus, which could be made infective among humans, at least so it seems in the animal model that was used, through a fairly simple process. There was a controversy about whether this research should be published, and eventually the recommendation was that there should be redactions: the general result could be published, but not the technical details of how they went about it. But the recommendation is still really just a voluntary option for the researchers. They could, if they wanted, put it online or take it to another scientific journal, and the research had already been funded and done without really any serious attention to the potential risks that could result from it. I think, as the power of new technologies grows, our sophistication in how we deal with these information hazards will also need to increase radically.
And partly here there is a problem of coordination: even if one research group holds back, or even if one country holds back, if there is somebody else who is going to go ahead and do it anyway, the cat will be out of the bag. So, if one looks at the spectrum of existential risk, I think a fair part of that spectrum will arise from these kinds of information hazards, where somebody will discover the thing sooner or later anyway, and it is hard really to put the genie back into the bottle once it is out. So, becoming better able to deal with those things would be something that would seriously reduce existential risk.
Now, of course, some methods of doing it would create an existential risk of their own. If you made a very ubiquitous program of surveillance, which might also be enabled by some of these technologies that have been developed (recognition software and small cameras, data mining technologies), that in itself could create new existential risks in the form of totalitarian and oppressive governments, which historically have been one of the major sources of human misery and stagnation. So, in many cases this is a double-edged sword, when you have something that reduces some existential risks but at the cost of creating others. And it becomes really difficult, once you start to try to figure out how this all plays out, even to know sometimes which direction is up and which direction is down, even to know what would be the desirable thing to be aiming for. So, what always strikes me when I am surveying the field is how many people there are who are trying to push some particular agenda, and other people who are opposing them, and everybody seems to be either pushing or pulling or something, but few people who actually stand back and try to figure out which direction we ought to be pushing or pulling in in the first place. So, my hope would be that we will allocate slightly more of our resources and attention to this reflective task of trying to figure out how the different factors interact and which direction it would be desirable to move in, even if it comes at the expense of going slightly slower.
Right now it seems that issues of technology risk are not at the center of the political agenda, unlike all sorts of other things: unemployment, the financial crisis, terrorism and what not. There might be many reasons for this. Many technologies tend to be developed and deployed relatively slowly and gradually over many years; it is not like a sudden, very severe explosion or crisis, so it is easier to ignore them and leave them for the next administration, or whoever comes after, to deal with. Also, many policy makers, at least in the West, have a background in law: they often have law degrees, and some come from business, but they are rarely scientists or technologists. It is interesting how China is different in that regard, where in their highest decision-making body until recently every single person was an engineer by training. And now that is no longer true: apparently only one person there has an engineering background. Whereas in the US Congress I think there are something like a couple of people with a science background among hundreds. So, I think that also might help shape which kinds of issues are brought to their attention, and I think it would be desirable to have more technical competence at the highest levels of government, as technology becomes an increasingly important shaper of human destiny.
I think what has tended to happen so far is that, while ideas are still at the periphery, speculative, uncertain and controversial, they have a special label, which might be transhumanism, and there is a small group of people discussing these new ideas on the Internet. As some of these ideas become adopted into the mainstream (for example, the idea of in vitro fertilization for infertile couples was once very controversial; now it is common practice), it is no longer seen as something distinctive, and transhumanism is just, you know, something that you do, unless you particularly have a big objection to it. And I think that is going to be the general tendency: while ideas are still new and in a speculative domain, they might have a special label and particularly strange people promoting and discussing them. Once they enter common practice, I think that label will no longer be seen as particularly illuminating. It will just enter the normal political give and take. And I think it is probably a healthier thing for it to be like that. I think sometimes these ideological labels can polarize people and make it harder to make the nuanced distinctions that ultimately are necessary when you actually start using something in the real world. Everything is more complicated than it initially looks like it should be, once you actually try to put it into practice, and at that point I think the big banners might sort of have lost their purpose.
So, one might look perhaps for analogues. The environmental movement has generated political parties in many countries, green parties, where the root was a particular kind of concern for the environment, and they have had to develop a more general set of issues and policies and other things, and it is possible that something similar could happen with transhumanist concerns. But probably there would first have to be a large sort of popular movement and interest in these issues that might then create the ground from which somebody could organize politically and then create a more formal political structure on top of that. And I think maybe that could work. It is hard to predict how these kinds of political realities will play out.
Now, one concept that is useful, I think, in thinking about long-term futures for humanity is the notion of the singleton, which would be a world order where at the highest level of organization there is only one decision-making entity. Now, that decision-making entity could be any of a wide range of structures: it could be a world democratic government, it could be a dictator, it could be a super-intelligent machine, it could perhaps even be a universal moral code that had provisions for its own enforcement; both good and bad structures could count as a singleton. But the unifying feature would be that they all have the ability to solve global coordination problems, for example, to avoid arms races, or to solve global commons problems, like when we have different countries spewing out pollutants into the atmosphere or over-fishing the oceans: all these kinds of coordination problems that arise from the lack of a single decision-making entity at the top. And future scenarios for humanity might be grouped by whether they lead to the creation of a singleton, which could then solve these coordination problems, or not, in which case you would have some kind of competitive scenario. And I think a singleton could be either very good or very bad, but provided you had a singleton that adequately represented the interests of the different parts that participated in it, it might be something that could reduce existential risk by eliminating some of these failures of coordination; it could resolve dangerous arms races, for example.
It may seem unrealistic today, because we are so far from that kind of unified world, but if one takes a long historical view of this, there has been an increasing scale of political integration in human history. We started out hunting together as bands of maybe forty or sixty people, which were the largest units of organization; then we had larger tribes; we had city-states, like in ancient Greece; then larger nations; and now we have regional entities like the European Union and some global organizations that increasingly weave the world together in partial ways, like the World Trade Organization; the UN is weak, but it is more than nothing. So, if one extrapolates this from tribe to city-state, to nation, to EU, then the logical continuation of that would be a singleton. So, I think over a longer time span it remains fairly likely, perhaps more likely than not, that humanity will in one way or another end up as a singleton.