Science Non-Fiction II

I fear none of the existing machines; what I fear is the extraordinary rapidity with which they are becoming something very different to what they are at present. No class of beings have in any time past made so rapid a movement forward. Should not that movement be jealously watched, and checked while we still can check it? (…) Are there not probably more men engaged in tending machinery than in tending men? Are we not ourselves creating our successors in the supremacy of the earth? Daily adding to the beauty and delicacy of their organization, daily giving them greater skill and supplying more and more of that self-regulating, self-acting power which will be better than any intellect?
Samuel Butler, Erewhon, 1872.

Every generation thinks that the challenges it faces are new, but they rarely are. That is also true of the challenges we are facing with the development of artificial intelligence (AI), reviewed in the first part of Science Non-Fiction. It seems they already saw it coming 150 years ago, during the Industrial Revolution in England. It sure gives you the willies to read these reflections in a novel written in the nineteenth century.

This second part is devoted to exploring the factors behind today’s apparent general lack of concern about the possible consequences of AI development. We will also see who the main actors in this development are.

We closed the first part with Nick Bostrom wondering what we can do to increase our chances of survival as a species before the arrival of a superintelligent system. Some experts say that this is still far away, while others say that we could be on the verge of that scenario, which could play out within the next 20 to 40 years. Considering that up to 30% of these experts think that, when it arrives, the outcome will be either bad or very bad for us, this is surely something we should be talking about. Having a 30% chance of belonging to an endangered species is definitely not cool. 1 or 2%, ok, I can live with that, but... 30%??

The debate around the possible effects of AI development is centered today on very serious issues that are already here and that certainly deserve that debate. We already mentioned some of them in the first part: the social credit system being deployed in China, lethal autonomous weapons, or the bias in facial recognition software used by the police. However, the fact that within a few years the future of Terminator could come true, only this time with no John Connor, no resistance and no hope at all, doesn’t seem to bother us much.

It’s rather strange. Shouldn’t we all be very scared? Or put another way: why is there almost no one talking about this?

There seem to be several reasons. For starters, there is the “science fiction” effect: a tendency to consider those who dare to make such radical predictions unscientific or flippant. There are also reasons that have to do with human nature: we are programmed to worry about direct and concrete threats, not about future and abstract ones. In addition, there is a “wishful thinking” effect built into our nature: if they tell us that with AI the future will be wonderful, why worry? Let’s leave this in the hands of the experts, they surely know what they’re doing. Ring a bell? Another possible reason is a certain fatigue with technological progress: we have seen so many changes in recent years that revolutionary things like AlphaZero barely make headlines in tech media. Finally, perhaps one of the key reasons is the “arms race” effect: the actors leading this revolution try to make sure that concerns over AI don’t put any additional hurdles in their way, since that would delay them and benefit their competitors.

Let’s take a look first at that “science fiction” effect. Until not that long ago, most experts considered systems with human-level intelligence, let alone the arrival of an artificial superintelligence, to be unreal, or at least something to worry about only in the very distant future. That has changed radically in the last two decades, however. When Ray Kurzweil claimed in his book The Age of Spiritual Machines that a machine would pass the Turing test by 2029, the general reaction among experts was one of disbelief, to put it mildly. Many critics considered him not to be a serious scientist; others dismissed the prediction as plain nuts. Most asserted that if it were ever to happen, which was very doubtful indeed, hundreds of years would pass first. Kurzweil published that book in 1999. As mentioned in the first part, in a 2013 survey, that is, less than 15 years later, the consensus among these same experts had dropped from “after hundreds of years” to “by the 2040s.” Kurzweil says, rightly, that if they keep up that rhythm they will end up criticizing him for being too conservative. (In fact, a 2018 survey of the general population in the US put the probability of it happening by 2028 at more than 50%.)

You have to be very careful about dismissing predictions that look like science fiction, because sometimes all it takes for fiction to become non-fiction is talent combined with very high incentives. Stuart Russell, another leading researcher in the field of AI, tells the following story: Lord Ernest Rutherford, the scientist who discovered that atoms contained large reserves of energy, nevertheless said in 1933 that “Anyone who expects a source of power from the transformation of these atoms is talking moonshine.” The next day another scientist read the comment and did not like it one bit. Irritated, he went for a walk and, out of the blue, came up with the idea of a nuclear chain reaction. A few years later, with the incentive of winning a world war, that idea was put into practice and the United States built the atomic bomb. Russell uses this story as an example from which we could learn to be cautious. He points out that, as for incentives, billions of dollars are being poured into the development of AI; unthinkable breakthroughs may in fact happen from one moment to the next, turning what until then was moonshine, or science fiction, into science.

However, it’s pretty clear that this “science fiction” effect is still strong, despite all the advances of this decade, despite the work of people like Nick Bostrom, and despite the fact that public figures as credible as Bill Gates or Stephen Hawking have warned that we should take the worst-case scenario seriously. Some years have passed since Gates and Hawking made their statements, and the truth is that the general debate they wanted to help start is not happening.

Leaving the academic world and going down to Main Street, there’s more of the same. Serious media talk about the problems we already have, but say close to nothing about what could happen in a few years. Unsurprisingly, DeepMind hit the headlines and got massive attention only when there was a sporting angle: man against machine in the ultimate challenge, Lee Sedol vs. AlphaGo. And when we say massive attention, it was largely limited to Japan, China and Korea, where go is more than just one of the national sports. Perhaps, to really have a massive debate on AI, the names of Stephen Hawking and Lee Sedol in the headlines should be replaced by those of Kim Kardashian and Floyd Mayweather.

Let’s move on to the next reason why that debate is practically non-existent today: human nature. It must be understood that, in evolutionary terms, we are more or less the same creature we were one hundred thousand years ago, and that our species is therefore designed to live in conditions very different from today’s. Hunter-gatherers, which is what Homo sapiens has been for 90% of its time on this earth, didn’t have to deal with future and abstract threats, but with very direct and concrete ones: I’d rather not climb that tree to look for eggs because it’s too high and I could fall; this mushroom doesn’t look too yummy, I’d better not eat it; the wind brings the smell of a tiger, let’s run! As for long-term strategies, hunter-gatherers at most evaluated moving to another area to improve their chances of survival: it’s starting to get cold; it’s becoming increasingly difficult to find food; there are too many predators around here.

When we stopped gathering fruit and moving from territory to territory in small bands and went on to sell financial derivatives and live sedentary lives in megacities within nation-states, the strategic management of threats became much more complex. Our evolutionary preparation ceased to be relevant, except perhaps for those who live in certain neighborhoods of these megacities. Social systems became stratified, and decisions about what constituted a threat to society as a whole and how to avoid it were entrusted to a few individuals from the upper strata. And these upper strata have become increasingly expert at managing populations by creating or magnifying some threats, and by downplaying or silencing others, depending on what best suits their interests.

Unsurprisingly, therefore, some of the most important decisions made in the face of very serious threats have not been the most beneficial ones for society as a whole. And this despite having plenty of information and prediction tools that have become increasingly reliable. Take, for example, the chain of miscalculations that led to the carnage of the First World War (in particular, the general enthusiasm for the prospect of that war that was transmitted to the population). Or a more pressing example: the conflicts of interest that affect decision-making on the challenge of climate change (it is more profitable, and therefore better for my shareholders, to invest part of my profits in climate-change-denial lobbies than to address the problem and have to strategically reorient my business).

In many cases, both individually and collectively, there’s another variable of human nature that also comes into play: the “wishful thinking” effect, or hoping that things will turn out well and planning accordingly. In the development of AI there seems to be a lot of that. The possible favorable consequences are so tempting that people really want to believe we will achieve them. And that’s fine; we all want those favorable consequences. However, one of the wisest pieces of advice in any good parenting book (and one copied by many business gurus) is: “hope for the best, prepare for the worst.” It’s great to think and work so that AI becomes something that helps us live much longer and much better; but at the same time it is necessary to prepare and plan so that we don’t draw that black ball from the bag of new technologies, the one that would mean our doom.

Speaking of new technologies, another of the reasons mentioned above why we don’t seem to be paying much attention to the AI revolution is precisely tech fatigue. This happens in part because of the exponential growth curve, which may already be getting a bit too steep for us. Technologies are developed faster and faster, and with ever greater scope. Overwhelmed by the sheer volume on offer, we can hardly pay attention to what’s new. Revolutionary AI news like AlphaGo is obsolete a few months later with AlphaZero. Who can keep up?

We don’t seem to be quite ready for this either. If you are old enough, you’ll probably remember that just as you were getting used to the internet and email, mobile phones arrived massively, with their apps and social networks. And when you finally embraced that technology too, it came with an unprecedented level of large-scale control and manipulation (as we pointed out in the previous longread, Bricks in your wall). The questions raised by the writer Yuval Noah Harari in this op-ed piece are tremendous: “How does liberal democracy function in an era when governments and corporations can hack humans? What’s left of the beliefs that ‘the voter knows best’ and ‘the customer is always right’? How do you live when you realise that you are a hackable animal, that your heart might be a government agent, that your amygdala might be working for Putin, and that the next thought that emerges in your mind might well be the result of some algorithm that knows you better than you know yourself?” Thinking about the challenges we already face is overwhelming. But that doesn’t mean we shouldn’t also think about the challenges we will have to face tomorrow, before they run us over (yes, there’s that truck metaphor again!).

And so we arrive at the last reason we pointed out: the “arms race” effect, a term that seems to come straight out of the Cold War. The actors leading the development of AI are the governments of the world’s main countries, led of course by the United States and China, and big tech companies such as Google, Facebook, Amazon, Microsoft, IBM, Baidu or Tencent. These are actors with a huge capacity to influence a debate and too much at stake to be shy about it. Because those who face more obstacles (criticism, regulation, that kind of thing) will probably fall behind in the race, giving an advantage to their competitors.

Think about it. What would be the prize for the winner of this race? If everything goes “well”, it will be a system with the capacity to turn those who control it, have access to it or can afford it into super-humans. A system that could give rise to another species, one that might regard the humans left behind the way humans regard chimpanzees, or worse, the way a farmer regards his livestock. A system with the potential to grant all its benefits to the first group and, if it is so decided, none to the second. That is what is at stake in this race.

We talked before about the tendency of the upper strata of societies to create or magnify the threats that best suit their interests and to ignore or silence those that hinder them. We can safely say that the participants in this race are the elite of those upper strata. As in any race, to arrive first you have to run faster than the others. And for that you’d better have the fastest car, because otherwise you will need to take on more risk to compete.

That is perhaps the scariest thing: for these racers, a competitor reaching the finish line before them seems to be a greater threat than taking higher risks to get there first, even if that means drawing the black ball for everyone. Oh well, bad luck, what are you going to do? It isn’t strange, then, that none of them like the idea of people shouting “extinction!” and that they would do anything in their power to prevent that scenario. Not the extinction, but the people shouting.

OK, so these are some of the reasons why it’s very likely that you won’t see the truck coming until it’s on top of you. Let’s now take a closer look at who is in the race and what they are doing.

The Chinese state, to begin with, has made a clear and decisive commitment to speed up and try to take the lead in the coming years. This of course includes big Chinese tech companies, such as Tencent or Baidu, and hundreds of new AI startups that are being handsomely financed. Other variables that align here are a huge population with almost no capacity for protest or reaction, due to the increasing effectiveness of its control, and a massive capacity for data collection to fuel AI systems. The result increasingly resembles George Orwell’s Nineteen Eighty-Four dystopia, with some “improvements” taken directly from Black Mirror, such as the social credit system. Well, at least they don’t bother much to hide what would happen if they win the race.

The United States is a much more complex case. There are two main actors driving the development of AI. The first is the Pentagon, with things like the now famous Project Maven, which aims to accelerate the analysis of images captured by military drones by automatically classifying objects and people (take a moment to think about its possible uses). The second is big tech: Google, Facebook, Amazon, Microsoft and IBM, above all. And we say the case is more complex because, while in China all the actors are more or less aligned, in the United States the Pentagon and big tech compete to attract professional talent while also collaborating on billion-dollar development contracts. And this causes friction. Last year, for example, faced with activism from its own employees, Google declined to renew its Project Maven contract. As this article explains, there is a conflict between the interests of three parties: on the one hand, the professionals, Google employees in this case, who don’t want their work to be used to kill people; on the other, the companies, which have a mandate from shareholders to win and keep multi-billion-dollar contracts regardless of the contractor (which may end up being China, as with Project Dragonfly); and last but not least, the Pentagon, which wants to use the technology to maintain its military advantage over US adversaries and, to that end, is trying to bridge its differences with the other two parties, working the “help us, we’re the good guys here” angle.

Of the big tech companies, Google is by far the most active in the field. In fact, it is estimated that 80% of the leading AI researchers and developers (Ray Kurzweil, for example) work for Google or for one of the leading companies it has been acquiring, like Boston Dynamics or DeepMind. The founder and CEO of the latter, Demis Hassabis, describes its mission as follows: “Step one, solve intelligence. Step two, use it to solve everything else.” There is a solid consensus across the field that DeepMind is one of the best-placed actors to achieve it. Its advances with deep reinforcement learning algorithms applied to games have been a spectacularly successful part of the first step, and Hassabis himself expects breakthrough advances in much more significant fields of science and medicine in the coming years. Along these lines, they have just introduced AlphaFold, a system that predicts the structure of proteins.
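For readers curious about what “reinforcement learning applied to games” actually looks like under the hood, here is a minimal, purely illustrative sketch in Python: tabular Q-learning on a toy “reach the goal” game. It is not DeepMind’s method (AlphaGo and AlphaZero replace the lookup table with deep neural networks and combine learning with self-play and tree search), but it shows the basic loop those systems are built on: act, observe the reward, update your estimate of how good each move is, repeat.

```python
# Toy illustration only: tabular Q-learning on a tiny "walk to the goal" game.
# State = position 0..5 on a line, goal = state 5, actions = step left or right.
import random

N_STATES = 6
ACTIONS = [-1, +1]                      # move left, move right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate

# Q[(state, action)] = current estimate of how good taking `action` in `state` is.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly pick the best-known move, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After training, the learned policy simply walks right, toward the goal.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])
```

The leap from this table of six states to a game like go, with more positions than atoms in the universe, is precisely what deep learning made possible.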

One of the reasons why DeepMind’s founders agreed to the acquisition by Google was that the agreement included the creation of an ethics board, to make sure that the technology they created in the future was not misused. About that board, Nick Bostrom said: “In my opinion, it’s very appropriate that an organisation that has as its ambition to ‘solve intelligence’ has a process for thinking about what it would mean to succeed, even though it’s a long-term goal (...) It is good to start studying these in advance rather than leave all the preparation for the night before the exam.”

There is certainly no shortage of boards, committees, institutes, forums and other initiatives on ethics and safety in the use of AI and its future. Big tech companies created one of those with the bombastic name of Partnership on Artificial Intelligence to Benefit People and Society. Elon Musk contributed to another, OpenAI. In the academic field, the Future of Humanity Institute at Oxford, the Machine Intelligence Research Institute and the Center for Human-Compatible AI at Berkeley, the Centre for the Study of Existential Risk at Cambridge, and the Future of Life Institute, linked to Harvard and MIT, have been responsible for publishing several key papers and surveys, which every now and then even get some headlines.

But the truth is that the general situation is quite worrisome, as Kelsey Piper explains in this piece. The analogy she uses is very clear: “On the whole, the state of the field is a little bit as if almost all climate change researchers were focused on managing the droughts, wildfires, and famines we’re already facing today, with only a tiny skeleton team dedicated to forecasting the future and 50 or so researchers who work full time on coming up with a plan to turn things around.”

In short: ethics and safety in the development of AI lack any public policy coordinated at the international level, there is no open, general debate on its most profound possible implications, the key papers published on the subject barely reach beyond academia, and the resources dedicated to long-term research and prevention are ridiculous. Meanwhile, the race to get an AI that would grant a definitive competitive advantage over corporate or geostrategic rivals is getting billions of dollars in funding.

And so, to finish with the metaphor we started all this with: there are these people racing 8-axle trailer trucks along a winding road they don’t have a map of. The rules of the race are simple: the winner takes it all. The speed keeps increasing and nobody plans to slow down. You are walking along that road, gawking at your cell phone. And your children are playing somewhere ahead, around a curve.

Have a great Sunday.


© Goonder 2019
