Science Non-Fiction

In this article we will talk about artificial intelligence (hereinafter referred to as AI), and about the risks and consequences that could come from its development. You may not be interested in the subject one tiny bit, or you may be just fed up with headlines about it. But the thing is, AI will affect your life sooner or later, and most likely in a profound way. We’ll try to give you a good take on the topic here, so that you at least know what to expect. As always, if you want to go deeper, follow the links throughout the text to access other articles and videos with much more info.

Let’s start with a quote from I.J. Good, an English mathematician who worked with the legendary Alan Turing in the mid-twentieth century on the design of the first modern computers: “The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously.”

Maybe so. The blogger Tim Urban points out the flip side: when we hear about AI we think of things like Star Wars or Terminator, and that makes us associate the term with fiction, so we don’t take it too seriously.

Tim is right. When you start reading in depth about AI, the first reaction is one of absolute surprise, because you realize that the general level of knowledge and concern about this issue is very far from where it should be. The rapid development of artificial intelligence could be described as an 8-axle trailer truck hurtling towards us at 300 mph. So how come we are looking the other way?

It seems a relevant question, don’t you think? It doesn’t seem normal that we spend all our time arguing over nonsense that always seems to be a matter of life and death, while issues that may be a matter not of life or death but of immortality or extinction are dismissed by most as stories for nerds. This may seem like an exaggeration, but it actually isn’t.

We'll go over that question in the second part of Science Non-Fiction, which will be posted next Sunday. This first part has the goal of describing the truck and its speed in enough detail to maybe erase that smile you have right now.

The topic of AI is actually a very broad one. Yes, AI is behind you getting all those advertisements for hotels in London after doing a search for flights over the internet; and maybe you’ve also read about AI beating every human at chess and at that Chinese board game (it’s called Go). Nowadays many AI tools and applications are being developed across many fields, with sweeping advances. Some of that stuff could actually deliver great outcomes for us, such as the ability to diagnose diseases much more accurately, or the breaking of communication barriers with natural language processing. In the case of the financial industry, which is our main concern at Goonder, AI can help algorithms not only to better predict market fluctuations, but also to learn from the individual evolution of each investor, allowing us to offer a much more personalized service. But some other stuff already in development can be really scary, such as the police use of facial recognition (especially when there are clear signs of systems biased by gender and skin type), lethal autonomous weapons, or China’s use of social credits to score citizen behavior.

Beyond all these developments and the debate over their possible implications, the general perception among ordinary mortals today seems to be: as long as one of these AI tools hasn’t taken my job yet, so far so good.

Looking into the future, things could get seriously unsettling if we get to the next step, which is not a step but a huge leap, one that would make Neil Armstrong look like a child playing hopscotch. It would be getting to AGI, or Artificial General Intelligence: a system that would be as capable as a human not just at specific tasks, but at every task. Put another way, a system with the same level of intelligence as ours. From today’s perspective it may seem that we are very far from reaching that point. But are we?

The classic way to measure progress in the field of AI is the famous Turing test. A machine will have passed the test when a human evaluator considers its answers indistinguishable from those of a human. Five or ten years ago this seemed a distant goal. Nowadays you talk to Siri or Alexa and yeah, if you ask them about the meaning of life they’ll probably answer some nonsense, but you may have noticed that they give fewer and fewer nonsense answers. According to 2016 measurements, Google’s AI was approaching the intelligence level of a 6-year-old child. And like children, these things grow fast. Catching up with us seems to be getting more likely and less far away.

In terms of technological evolution, reaching AGI basically depends on achieving significant increases in processing and learning capacities. Regarding the first, the processing capacity needed for an AGI system actually already exists; we just have to make it more efficient and cheaper. At the current pace, that would be a matter of a few years. The second increase is the more difficult one: how do you improve the learning capacity of a system? Contrary to what one might intuitively think, it seems that the fastest way is not to feed the system all available knowledge, but to program it so that it can learn on its own, by trial and error. In fact, one of the strategies making the most progress today is programming systems with neural networks and what is called deep reinforcement learning, so that they learn and improve alone.
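
To make the trial-and-error idea concrete, here is a minimal sketch of reinforcement learning in Python: a tabular Q-learning agent that discovers, from a reward signal alone, how to walk to a goal. All the names and numbers are ours, chosen purely for illustration; real deep reinforcement learning systems replace the lookup table with a neural network, which is where the “deep” comes from.

```python
import random

# Toy task: positions 0..5 on a line; reaching position 5 pays a reward.
# The agent is told nothing about the task except this reward signal.
N_STATES = 6
ACTIONS = [-1, +1]                 # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def choose(state):
    """Mostly exploit the best known action, sometimes explore (the 'trial')."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        action = choose(state)
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        # Learn from the outcome (the 'error' part of trial and error)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# After training, the learned policy is simply "always step right"
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```

Nobody tells the agent that stepping right is good; it bumps around, occasionally stumbles onto the reward, and the value of that discovery propagates backwards until the right behavior emerges on its own.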

Here’s an illustrative example. Do you remember Deep Blue, the system created by IBM that in 1997 beat the then world chess champion, Garry Kasparov? Deep Blue was a system fed with all the knowledge accumulated in chess, millions of games. With that and raw processing capacity, it was able to choose better strategies than the best human and thus defeat him. Since then, systems created under that approach have routinely beaten any chess grandmaster, and for a couple of years now they have also beaten us at Go, an even more complex and abstract game. However, the artificial intelligence company DeepMind (owned by Google) achieved a huge evolutionary breakthrough this last year with AlphaZero, a system fed with just the rules of three complex board games: chess, shogi, and Go. The idea, and hence the name, is that the system learned to play all three games from scratch using neural networks and deep reinforcement learning, without being contaminated by any prior human knowledge. Thus, it began playing ultra-fast games against itself, and it took the thing a few hours to reach the most advanced strategies of the grandmasters, and a little while more to discover strategies absolutely unthinkable for us pitiful humans. Of course, AlphaZero is now the unofficial world champion of all three disciplines. And even playing with a hand tied behind its back, it’s a walk in the park: over 100 games of Go, and with a tenth of the hardware and time to move, AlphaZero crushed its older brother AlphaGo 100-0 (AlphaGo being the system that in 2016 had defeated the last human champion, Lee Sedol, as you can see in this documentary).
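
If you’re curious what “learning a game from nothing but its rules, through self-play” looks like at toy scale, here is a sketch in the same spirit but vastly simplified: no neural networks, no tree search, and a game far simpler than Go (all names and numbers are our own invention, not DeepMind’s method).

```python
import random

# Toy self-play learner for Nim: 10 stones, players alternately take
# 1-3 stones, whoever takes the last stone wins. The program starts
# knowing only these rules, plays against itself, and learns from wins
# and losses -- a (very) distant cousin of AlphaZero's self-play loop.
STONES, MOVES = 10, (1, 2, 3)
ALPHA, EPSILON = 0.1, 0.3
Q = {(s, m): 0.0 for s in range(1, STONES + 1) for m in MOVES if m <= s}

def pick(stones):
    legal = [m for m in MOVES if m <= stones]
    if random.random() < EPSILON:         # explore a random legal move
        return random.choice(legal)
    return max(legal, key=lambda m: Q[(stones, m)])

for game in range(20000):
    stones, history = STONES, []
    while stones > 0:                     # both "players" share one brain
        move = pick(stones)
        history.append((stones, move))
        stones -= move
    value = 1.0                           # whoever moved last won this game
    for state, move in reversed(history):
        Q[(state, move)] += ALPHA * (value - Q[(state, move)])
        value = -value                    # the previous move belongs to the loser

# The learned policy tends to rediscover the classic trick:
# always leave your opponent a multiple of 4 stones.
print({s: max((m for m in MOVES if m <= s), key=lambda m: Q[(s, m)])
       for s in range(1, STONES + 1)})
```

Nothing here encodes the winning strategy; it emerges from thousands of games played against itself, which is the core of what AlphaZero does at an incomparably larger scale.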

So, summarizing a bit: for a system to reach human-level intelligence, the problem of processing capacity will be solved in the medium term, and the problem of learning ability is making huge qualitative leaps thanks to neural networks and deep reinforcement learning techniques.

Now, before we go on with the description of the truck, a word about the growing speed at which it’s coming. There is a concept that may not be that difficult to understand, but which is in fact really hard to visualize: exponential growth. In this case, the more a system learns and improves, the better it will be at learning and improving afterwards, since each improvement cycle will allow better and faster increases in capacity, eventually leading to exponential growth. This concept is hard to grasp because we humans are wired to think in linear, not exponential, terms. Tim Urban explains it very well, with the help of really funny and illustrative graphics, on his blog Wait But Why, where he has published two posts about the AI revolution: The road to superintelligence and Immortality or extinction (if you’re interested in AI, reading these two posts is highly recommended: they’re quite exhaustive and very entertaining). At the end of his first post, Tim presents the following conclusion: for a system that is learning exponentially, reaching the AGI stage will be just something that happened for a second and was left behind the next. This is exactly what AlphaZero did with the game strategies that humans had been able to come up with.
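
A few lines of Python make the linear-versus-exponential gap easy to feel (the numbers are arbitrary, purely for illustration):

```python
# Linear vs. exponential improvement, with made-up illustrative numbers:
# one process gains a fixed amount per cycle, the other doubles each cycle.
linear, exponential = 1.0, 1.0
for year in range(1, 31):
    linear += 1.0            # steady, human-intuition-friendly progress
    exponential *= 2.0       # each cycle builds on all the previous ones
    if year in (5, 15, 30):
        print(f"year {year:2d}: linear = {linear:4.0f}, exponential = {exponential:,.0f}")
```

After 30 cycles the linear process has grown by a factor of about 30; the doubling one, by a factor of about a billion. Our intuition handles the first number fine and simply refuses to picture the second.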

Having reached this point, let’s introduce two new concepts: the Singularity and ASI. Both are based on the ability of technology to generate that exponential growth, which will go faster and faster and over which we will have less and less control. The Singularity would be the moment at which a system’s capacity for recurrent self-improvement reaches such a degree that an intelligence explosion happens. ASI stands for Artificial Superintelligence, the system that would emerge from that intelligence explosion. An ASI is something that could leave humans so far behind that we would have about as much capacity to understand and control it as an ant colony has over the generation of atomic energy. (Actually, much less. See the stairs graph in Tim Urban’s second post.)
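
To see how recurrent self-improvement differs from plain exponential growth, here is one more toy model, entirely our own illustration and nobody’s forecast: this time the rate of improvement itself improves each cycle, so growth eventually outruns any fixed exponential.

```python
# Toy "intelligence explosion": capability feeds back into the rate
# at which capability grows. All numbers are invented for illustration.
capability, rate = 1.0, 0.01       # start: 1% improvement per cycle
for cycle in range(1, 201):
    capability *= 1.0 + rate       # apply this cycle's improvement...
    rate *= 1.05                   # ...and get slightly better at improving
    if cycle in (50, 100, 150, 200):
        print(f"cycle {cycle}: capability ~ {capability:.3g}, rate ~ {rate:.3g}")
```

For dozens of cycles almost nothing seems to happen; then the curve bends upward and, within a handful of further cycles, the numbers stop meaning anything on a human scale. That sudden regime change is what the word “explosion” is trying to capture.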

So that’s the truck.

Before going over the possible consequences, maybe it’s worth stopping here a moment to consider what you just read. You’re thinking you’ve seen this before, aren’t you? Skynet in Terminator, HAL 9000 in 2001, Agent Smith in The Matrix... Science fiction. Right?

Actually, it’s a logical path, one that many experts think is moving from possible to probable. It was even predicted by one of the pioneers of computing. Remember that quote we started with? Alright, so you had never heard of I.J. Good. Well, here are more quotes from people you do know. Stephen Hawking: “The development of full artificial intelligence could spell the end of the human race.” Bill Gates: “I don’t understand why some people are not concerned.” Elon Musk: “We are summoning the demon.”

Okay, maybe lately Elon Musk is not the most credible source of news in the world. So, what do the experts think? Surveys of those leading the AI field, from 2011-2013 and 2018, show some powerful data. For starters, the share of those who consider all this pure science fiction, and believe a system will never reach human intelligence, varies between 2% and 20% depending on the survey you look at. That is, the vast majority of experts disagree on the when, not on the if. Regarding the when, the median of the experts’ answers is that we would have an AGI system between 2040 and 2060, and that an ASI would arrive afterwards, either very quickly or at most 30 years later.

Put another way: if you are between 20 and 40 years old, there is a 50% chance that your children will be run over by the truck. And your grandchildren’s odds look very ugly.

Or very beautiful. Because maybe, when the time comes, the truck, instead of running over us, picks us up and takes us to an instantaneous future in which almost all our limitations will disappear, starting with death (and maybe even taxes).

It’s now time to introduce you to Ray Kurzweil, or as they call him in a documentary, “The Transcendent Man”. Scientist, inventor, writer, musician, entrepreneur, creator of the Singularity University, and lately a director of engineering at Google. A guy whom even those who think he’s nuts admit is a genius: 21 honoris causa doctorates, an off-the-charts IQ and a quite impressive track record of fulfilled predictions. He’s the equivalent of a rock superstar for technologists. You can see him in action in TED talks, read profiles of him in The New Yorker or Rolling Stone, or interviews with him in Playboy or The Guardian.

Ray is one of the optimists. He believes everything will turn out for the better, and that it will happen much faster than his colleagues believe: he claims that a machine will pass the Turing test in 2029 and that the Singularity will happen by 2045. He also believes that the exponential growth of knowledge across several fields will end up making us immortal. He predicts that the whole process will be controlled, and that very soon there will be breakthroughs that overcome the classical barriers of biology, effectively turning us into a new species, with a human base and artificial evolutions. Ray is taking great care of himself because he’s not that young anymore and, of course, he’d like to live forever. In fact, just in case he doesn’t make it, his plan B is to freeze his brain. He also intends to bring his father back to life using his DNA.

There’s that smile again. Who could blame you? But leave aside Ray’s ability to generate headlines and think about it a bit, and you’ll see that it’s not such a deranged scenario. We know that we are not biologically designed for today’s living conditions, but for a time when we were hunter-gatherers, just one more animal trying to survive and pass on its genes. Biological evolution, which uses roughly the same tools as AlphaZero (a trial-and-error system with rewards for the most successful strategies), works on timescales that have nothing to do with the artificial capacity for improvement we have achieved. Therefore, if we aspire to get on the truck and not be run over by it, any appreciable future evolution, let alone one this radical, must necessarily have a technological base.

Let’s try a more visual example: who runs faster, Usain Bolt or your grandmother? Ok. Yes. Usain Bolt. But what if we give your grandmother a Ferrari? She’ll probably have some trouble handling the gears, she might crash into a tree or run over some pedestrians... but it’s clear that, hypothetically, she’ll go much faster than Usain Bolt, right? Swap that Ferrari for exponential advances in AI and biotechnology, and suddenly it’s not that crazy to foresee all those things Ray imagines will be common for future generations. Direct access with the mind to all the data in the cloud. Daily backups of our brain: memories, sensations, thoughts, emotions. The possibility of using nanotechnology and augmented reality to inhabit other bodies, other realities. Nanobots circulating through our bloodstream and our brain, repairing and improving all biological systems, giving us physical and mental abilities that can only be described as post-human. Building genetic, enhanced copies of ourselves and of our parents, in case we don’t make it to the party. And immortality.

Of course, to Ray, who is more than 70 years old, all this seems like a great prospect. For Charlie Brooker, who is in his forties, it inspired Black Mirror.

Charlie is a storyteller. The dystopias he imagines in Black Mirror stem from a dark vision of the human condition mixed with technological developments that don’t seem so distant. And the most worrying thing is not that these projections seem so creepy; it’s that these are scenarios in which the truck doesn't run over us. Therefore, they are optimistic scenarios.

Because another possible scenario is that the ASI system that comes after the Singularity has the same interest in the survival of humans as any human has in the survival of mosquitoes, Greenpeace members maybe excepted. And yet another scenario is that the ASI system has strategic goals incompatible with human existence, as in Nick Bostrom’s famous example of the paperclip maximizer.

Nick Bostrom is not like Ray Kurzweil at all. Maybe they have similar IQs and areas of interest, but that’s about it. Nick is definitely no rock star. Philosopher, thinker, author of more than 200 papers, he’s an expert in ethics and technology, and the director of the Future of Humanity Institute at Oxford University. If you have the time and you like the topic, you should read his book Superintelligence. If you don’t have that much time, you might enjoy reading his profile in The New Yorker and watching the talk he gave at Google. And if you have no time at all, maybe you can watch this TED talk for a quick summary.

Nick sees the arrival of ASI as an existential risk for the human species: the most likely candidate to be the black ball that we draw from the bag of new technologies, the one that spells our doom. He wonders whether it is not a Darwinian mistake to create something smarter than ourselves. And he concludes that, whether or not we end up creating a superintelligent system, it would be a good idea to start laying some solid foundations now, so that the outcome is as benevolent as possible for humanity. Sounds pretty logical, right?

Because if it happens, regardless of when, the truth is that no one knows for sure what would come next. Whether it will be good or bad for the species, or very good for a few and very bad for the majority. Or maybe it won’t be better or worse, but very different, like Black Mirror. Those surveys we talked about earlier show much more diverse results on this point: in the 2013 survey, experts who thought the result would be good or very good for humanity added up to 52%, and those who thought it would be bad or very bad, 31%. So, when Nick was asked at the end of that talk he gave at Google whether he thought we were going to make it, he answered the question sideways. “Yeah, probably less than 50% risk of doom, I don’t know what the number is,” he said. And then he added: “But the more important question I guess is what is the best way to push it down.”

This may be the most important issue that we have to face in the coming years.

© Goonder 2019
