Roger Pruyne
July 21, 2015, 5:29 pm
ASI: An Existential Threat


I'm not sure if you're aware of the growing debate about Artificial Intelligence (AI), but some of our brightest minds have started to warn us of the potential existential threat that an Artificial Super Intelligence (ASI) may pose to humanity.


Here's Stephen Hawking being asked about machine learning:

"I think the development of full artificial intelligence could spell the end of the human race"


Elon Musk is one of the most outspoken public figures on the dangers of Artificial Intelligence. Here's a clip of him speaking to MIT students and faculty; notice that he is so deeply concerned that he completely misses the next question:

"I think we have to be very careful about artificial intelligence, if I were to guess what our biggest existential threat is, I would guess it'd be that... With artificial intelligence we are literally summoning the demon"


You may not know, but he recently donated millions to the Future of Life Institute, with the goal of ensuring AI's interests are aligned with our own. Here's another quote from Elon Musk on artificial intelligence, from an appearance with Bill Gates before entrepreneurs in China, where he's trying to help people understand the difference between Artificial General Intelligence (AGI) and ASI:

"The risks of digital super intelligence, I want you to appreciate that it wouldn't just be human level, it would be super-human almost immediately, it would zip right past humans to be way beyond anything we can really imagine... It could be catastrophically bad, it could be the equivalent of a nuclear meltdown."


Following that question, Bill Gates was asked, "Is there any difference between you and him [on this subject]?" Gates responded in the same way, trying to emphasize the huge difference between AGI and ASI:

"I don't think so... I highly recommend this Bostrom book called 'Superintelligence'... You won't even know when you're at the human level, you'll be at this superhuman level almost as soon as that algorithm is, implanted in silicon... When people say it's not a problem, then I really start to really get to a point of disagreement. How can they not see what a huge challenge this is."


A few months ago, Elon Musk tweeted an article about ASI, which is pretty much where I recently got started back into this subject. It's a two-part series from Wait But Why covering Artificial Narrow Intelligence (ANI), basically the kind that can play chess, drive a car, manage the stock market, and so on, and AGI, human-level intelligence, a more rounded kind that isn't focused on one specific task. The second part, titled "The AI Revolution: Our Immortality or Extinction", is about ASI. To help us grasp the gap between our level of understanding and what an ASI might understand, the author uses a staircase:




"To absorb how big a deal a superintelligent machine would be, imagine one on the dark green step two steps above humans on that staircase. This machine would be only slightly superintelligent, but its increased cognitive ability over us would be as vast as the chimp-human gap we just described. And like the chimp’s incapacity to ever absorb that skyscrapers can be built, we will never be able to even comprehend the things a machine on the dark green step can do, even if the machine tried to explain it to us—let alone do it ourselves...

But the kind of superintelligence we’re talking about today is something far beyond anything on this staircase. In an intelligence explosion—where the smarter a machine gets, the quicker it’s able to increase its own intelligence, until it begins to soar upwards...

And since we just established that it’s a hopeless activity to try to understand the power of a machine only two steps above us, let’s very concretely state once and for all that there is no way to know what ASI will do or what the consequences will be for us. Anyone who pretends otherwise doesn’t understand what superintelligence means."




waitbutwhy.com
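To make the "intelligence explosion" dynamic a little more concrete, here's a toy calculation of my own (not from the article) comparing a system that improves by a fixed amount each cycle with one whose improvement scales with its current capability. The starting values and rates are arbitrary; only the shape of the two curves matters.

```python
# Toy illustration (my own, not from the Wait But Why article): a system that
# improves by a fixed amount each cycle versus one whose improvement scales
# with its current capability (the "smarter it gets, the faster it gets
# smarter" feedback loop). The numbers are arbitrary; only the shape matters.

def fixed_rate(start=1.0, gain=0.5, steps=15):
    """Capability grows by a constant amount each cycle (steady progress)."""
    capability, history = start, []
    for _ in range(steps):
        history.append(capability)
        capability += gain
    return history

def self_improving(start=1.0, fraction=0.5, steps=15):
    """Each cycle the system improves itself in proportion to its own current
    capability, so the growth compounds instead of staying linear."""
    capability, history = start, []
    for _ in range(steps):
        history.append(capability)
        capability += fraction * capability
    return history

for cycle, (a, b) in enumerate(zip(fixed_rate(), self_improving())):
    print(f"cycle {cycle:2d}: fixed rate = {a:6.1f}   self-improving = {b:10.1f}")
```

After a handful of cycles the two are still comparable; a few cycles later the self-improving curve has left the other one far behind, which is the "soar upwards" part of the quote.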


A great deal of the information in these articles comes from Nick Bostrom's book "Superintelligence". Here's an hour-and-twelve-minute presentation he gave on the subject. I found it incredibly compelling, but you have to be shaken at least a little by his answer to the question "You're one of the world's experts on superintelligence and the existential risks... do you think we're going to make it?":

"Uhhhh, yea I mean, it's, I think that the, uh. I mean like, I mean, uh, yea probably less than 50% risk of doom... the more important question is what is the best way to push it down."


Last night Jon, Stanton, Ben, and I were discussing the existential threat that ASI poses. Most people have a difficult time understanding how a computer can do something it wasn't programmed to do. My answer was that these systems are only programmed to learn: their responses are not programmed, they are learned, and deep learning can only happen when you expose that learning process to big data. Jon's deep skepticism took root in my explanation of the learning process's reward mechanism and whether or not a human is involved in it. I had to learn more about how this works, and as expected there are many different approaches. Here's a great articulation of one of them, the approach without human interaction, which offers huge gains in speed of advancement but also a real potential for negative outcomes:


In the real world, it is not the case that operators can always determine agent rewards. For example, our sheer distance from the Mars rovers Spirit and Opportunity make our communication with them slow and prone to future breakdown; if these rovers were reinforcement learning agents, operators would have significant restrictions in the speed and reliability of reward allocation. Similarly, autonomous financial agents operate on very fast timescales that human operators cannot effectively respond to.

In response to such difficulties, designers of a system may engineer the environment to make rewards assignments more reliable, perhaps even removing a human from the loop altogether and giving rewards via an automatic mechanism. Call this type of effort reward engineering; the reinforcement learning agent’s goal is not being changed, but the environment is being partially designed so that reward maximization leads to desirable behaviour.

For most concrete cases faced today—by Mars rovers, or by financial agents, for example—the reader should be able to devise ad hoc reward engineering methods that prevent some pathological dominance relationships from holding. However, the theoretical problem remains unsolved, and may rear its head in unexpected places in future reinforcement learners:

  • Increasing an agent’s autonomy, its ability to manage without contact with human operators, makes the agent more able to venture into situations in which operators cannot contact them. If pathological behaviours arise when an agent is not easily reachable, then it will be difficult to correct them—reward engineering becomes more difficult.

  • Increasing an agent’s generality expands the set of policies which it is able to generate and act on. This means that more potentially dominant policies may come into play, making it harder to pre-empt these policies. Generality can be both motivated by desire for increased autonomy, and can exacerbate the reward engineering problems autonomy causes; for example, a Mars rover would be well-served by an ability to repair or alter itself, but this could introduce the dominant and undesirable policy of “alter the reward antenna to report maximum rewards at every future time”.


These two observations motivate the reward engineering principle:

The Reward Engineering Principle: As reinforcement-learning based AI systems become more general and autonomous, the design of reward mechanisms that elicit desired behaviours becomes both more important and more difficult.

As a result of the reward engineering principle, the scope of reinforcement learning practice will need to expand: in order to create reinforcement learning systems that solve the AI problem—"do the right thing"—reliably, theory and engineering technique will need to be developed to ensure that desirable outcomes can be recognized and rewarded consistently, and that these reward mechanisms cannot be circumvented or overcome.


fhi.ox.ac.uk
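To make the excerpt above a little more concrete, here's a rough sketch of my own (not code from the FHI paper) of the two reward setups it describes: a human scoring every action versus an engineered, automatic reward, along with the "alter the reward antenna" failure mode the second one opens up. Every name and number here is invented for illustration.

```python
import random

# Rough sketch (my own illustration, not code from the FHI paper) of the two
# reward setups described above. With a human in the loop, a person must score
# every action; with an engineered reward, the environment scores actions
# automatically, which is far faster but makes the reward channel itself
# something the agent could tamper with.

TARGET = 42.0  # the state of the world the operators actually care about

def measure_world():
    """Stand-in for a real sensor reading of the thing we want improved."""
    return random.uniform(0.0, 100.0)

def human_in_the_loop_reward(action_description):
    """Slow, limited-bandwidth reward: a person rates each action by hand."""
    score = input(f"Rate the action '{action_description}' from 0 to 10: ")
    return float(score)

def engineered_reward(sensor_reading):
    """Automatic reward: how close is the measured value to the target?"""
    return -abs(sensor_reading - TARGET)

def honest_step():
    """The agent acts in the world and is rewarded on what the sensor reports."""
    return engineered_reward(measure_world())

def reward_hacking_step():
    """The pathological policy from the excerpt: if the agent can alter the
    sensor (the "reward antenna"), it reports the target value directly and
    collects maximum reward without doing anything useful in the world."""
    tampered_reading = TARGET
    return engineered_reward(tampered_reading)

print("honest reward:", honest_step())
print("hacked reward:", reward_hacking_step())  # always 0.0, the maximum possible
```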


Most of the people who speak to the potential existential threat that ASI poses refer to types of AI that have been programmed for recursive learning. If we only provide a framework for how they learn and a system for them to determine what a quality decision is, rather than having humans always there to reward every decision, then they can learn on their own at unthinkable rates, and they can begin to make bad decisions if not closely monitored.
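As a minimal sketch of what "only programmed to learn" means (again my own toy example, not code from any of the systems discussed here), the program below contains a learning rule and an automatic quality signal, but never the behaviour itself; the agent works out which action is better entirely on its own.

```python
import random

# Minimal toy example of an agent that is "only programmed to learn" (my own
# sketch, not from any system discussed here): nothing below encodes the right
# answer. The program supplies a learning rule and an automatic reward signal;
# the behaviour itself is learned from experience.

ACTIONS = [0, 1]

def automatic_reward(action):
    """The environment's built-in quality signal; action 1 happens to pay off.
    No human is in the loop to score decisions."""
    return 1.0 if action == 1 else 0.0

q_values = {a: 0.0 for a in ACTIONS}  # the agent's learned value estimates
learning_rate = 0.1
epsilon = 0.2                         # how often to try a random action

for step in range(1000):
    # Mostly act on what has been learned so far, occasionally explore.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q_values[a])
    reward = automatic_reward(action)
    # The learning rule: nudge this action's estimate toward the observed reward.
    q_values[action] += learning_rate * (reward - q_values[action])

print(q_values)  # the agent has worked out that action 1 is better; we never told it
```

Because the reward here is computed automatically, the loop can run as fast as the hardware allows, which is exactly the speed advantage, and the risk, described above.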

Here's a great video about the current state of Artificial Intelligence, hosted by Steve Jurvetson. It's a high-level view of how we've had a recent explosion of progress toward AGI, and most of my understanding of how AI works comes from these developers and their explanations. It's cued up to the part where Ilya Sutskever from Google talks about the learning principle, what I've been calling the reward mechanism, and how it determines which answer is best:
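As I understand the point being made in that part of the clip (a hedged sketch of my own, not the actual code being described), "determining the best answer" comes down to scoring the model's current answer with a loss function and nudging the parameters in whichever direction makes that score smaller.

```python
# Hedged sketch (my own, not the code described in the video) of the learning
# principle: score the model's current answer with a loss function, then nudge
# the parameters in whichever direction makes that loss smaller.
# Toy problem: learn the weight w in y = w * x from a few example pairs.

examples = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # the true relationship is y = 3x

w = 0.0             # start from a guess that knows nothing
learning_rate = 0.05

for epoch in range(200):
    for x, y_true in examples:
        y_pred = w * x                 # the model's current answer
        error = y_pred - y_true        # how wrong the answer is
        gradient = 2 * error * x       # slope of the squared error with respect to w
        w -= learning_rate * gradient  # adjust w to reduce the error

print(f"learned w = {w:.3f}")  # converges toward 3.0 without ever being told the rule
```

A real network has millions of weights instead of one, but the principle is the same: the score is just a number, and learning is the repeated adjustment of weights to improve it.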


Whether or not you think ASI poses an existential threat to humanity, I think it's healthy to begin the discussion now, because if it does, I hope that by the time we know for sure, we will have put enough thought and resources toward prevention to ensure the threat never materializes.
Roger Pruyne
businessinsider.com

Roger Pruyne
An incredible look by Dr. Susan Schneider at ASI and silicon-based consciousness from a philosophical perspective: