“We're not necessarily doubting that God will do the best for us; we are wondering how painful the best will turn out to be.” C.S. Lewis

Monday, November 22, 2010

Will Our Computers Love Us In The Future?

Or will our computers become self-programming superintelligences that decide to kill us all? Dr. Ben Goertzel, a leader and expert in the development of Artificial General Intelligence (AGI) in machines, explained on his blog The Multiverse According to Ben last October (reposted at the Singularity Hub) why he concludes it is unlikely that AGI, which is expected to surpass human intelligence in the coming decades in an event some observers call the Singularity, will decide that humans are no longer needed and, in his words, "repurpose our molecules for its own ends."


In his blog post, Dr. Goertzel argues that the fear that unrestricted development of AGI will lead to an artificial intelligence that does not value humanity's right to exist, a concept he calls the Scary Idea, is possible but highly unlikely, even though he acknowledges that the odds are impossible to estimate.


The Scary Idea has been floating around science fiction for a long time, and Dr. Goertzel, while disagreeing with the premise, doesn't pretend there is no basis for it, at least in part:
Please note that, although I don't agree with the Scary Idea, I do agree that the development of advanced AGI has significant risks associated with it. There are also dramatic potential benefits associated with it, including the potential of protection against risks from other technologies (like nanotech, biotech, narrow AI, etc.). So the development of AGI has difficult cost-benefit balances associated with it -- just like the development of many other technologies.

I also agree with Nick Bostrom and a host of SF writers and many others that AGI is a potential "existential risk" -- i.e. that in the worst case, AGI could wipe out humanity entirely. I think nanotech and biotech and narrow AI could also do so, along with a bunch of other things.

I certainly don't want to see the human race wiped out! I personally would like to transcend the legacy human condition and become a transhuman superbeing … and I would like everyone else to have the chance to do so, if they want to. But even though I think this kind of transcendence will be possible, and will be desirable to many, I wouldn't like to see anyone forced to transcend in this way. I would like to see the good old fashioned human race continue, if there are humans who want to maintain their good old fashioned humanity, even if other options are available.
As Dr. Goertzel sees it, the argument for the Scary Idea rests on four main points:
As far as I can tell from discussions and the available online material, some main ingredients of peoples’ reasons for believing the Scary Idea are ideas like:


  1. If one pulled a random mind from the space of all possible minds, the odds of it being friendly to humans (as opposed to, e.g., utterly ignoring us, and being willing to repurpose our molecules for its own ends) are very low
  2. Human value is fragile as well as complex, so if you create an AGI with a roughly-human-like value system, then this may not be good enough, and it is likely to rapidly diverge into something with little or no respect for human values
  3. “Hard takeoffs” (in which AGIs recursively self-improve and massively increase their intelligence) are fairly likely once AGI reaches a certain level of intelligence; and humans will have little hope of stopping these events
  4. A hard takeoff, unless it starts from an AGI designed in a “provably Friendly” way, is highly likely to lead to an AGI system that doesn’t respect the rights of humans to exist
I emphasize that I am not quoting any particular thinker associated with SIAI here. I’m merely summarizing, in my own words, ideas that I’ve heard and read very often from various individuals associated with SIAI.


If you put the above points all together, you come up with a heuristic argument for the Scary Idea. Roughly, the argument goes something like: If someone builds an advanced AGI without a provably Friendly architecture, probably it will have a hard takeoff, and then probably this will lead to a superhuman AGI system with an architecture drawn from the vast majority of mind-architectures that are not sufficiently harmonious with the complex, fragile human value system to make humans happy and keep humans around.
The proponents of this argument are associated with the Singularity Institute for Artificial Intelligence (SIAI).  Their idea is a descendant of biochemist and science fiction writer Isaac Asimov's Three Laws of Robotics, in which artificial intelligence was embodied in the form of robots.


The Three Laws of Robotics are as follows:



1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
SIAI argues that widespread AGI development should be curtailed until "provably non-dangerous," or Friendly, AGI can be developed.  Goertzel's disagreement is not with the need to find ways of engineering AGI software that ensures friendliness toward humanity.  His argument is that we can do both at the same time.
I agree that AGI ethics is a Very Important Problem. But I doubt the problem is most effectively addressed by theory alone. I think the way to come to a useful real-world understanding of AGI ethics is going to be to
  • build some early-stage AGI systems, e.g. artificial toddlers, scientists’ helpers, video game characters, robot maids and butlers, etc.
  • study these early-stage AGI systems empirically, with a focus on their ethics as well as their cognition
  • in the usual manner of science, attempt to arrive at a solid theory of AGI intelligence and ethics based on a combination of conceptual and experimental-data considerations
  • humanity collectively plots the next steps from there, based on the theory we find: maybe we go ahead and create a superhuman AI capable of hard takeoff, maybe we pause AGI development because of the risks, maybe we build an “AGI Nanny” to watch over the human race and prevent AGI or other technologies from going awry. Whatever choice we make then, it will be made based on far better knowledge than we have right now.
So what’s wrong with this approach?  
Nothing, really — if you hold the views of most AI researchers or futurists. There are plenty of disagreements about the right path to AGI, but wide and implicit agreement that something like the above path is sensible.


But, if you adhere to SIAI’s Scary Idea, there’s a big problem with this approach — because, according to the Scary Idea, there’s too huge of a risk that these early-stage AGI systems are going to experience a hard takeoff and self-modify into something that will destroy us all.


But I just don’t buy the Scary Idea.


I do see a real risk that, if we proceed in the manner I’m advocating, some nasty people will take the early-stage AGIs and either use them for bad ends, or proceed to hastily create a superhuman AGI that then does bad things of its own volition. These are real risks that must be thought about hard, and protected against as necessary. But they are different from the Scary Idea. And they are not so different from the risks implicit in a host of other advanced technologies.
Dr. Goertzel concludes that the benefit of developing AGI lies in its role as an agent of change, one that contributes to biotechnology, nanotechnology, and other emerging technologies that can solve the myriad problems we face, while also protecting us from the risks and dangers those same technologies pose.
I think that to avoid actively developing AGI, out of speculative concerns like the Scary Idea, would be an extremely bad idea.


That is, rather than “if you go ahead with an AGI when you’re not 100% sure that it’s safe, you’re committing the Holocaust,” I suppose my view is closer to “if you avoid creating beneficial AGI because of speculative concerns, then you’re killing my grandma” !! (Because advanced AGI will surely be able to help us cure human diseases and vastly extend and improve human life.)


So perhaps I could adopt the slogan: “You don’t have to kill my grandma to avoid the Holocaust!” … but really, folks… Well, you get the point….


Humanity is on a risky course altogether, but no matter what I decide to do with my life and career (and no matter what Bill Joy or Jaron Lanier or Bill McKibben, etc., write), the race is not going to voluntarily halt technological progress. It’s just not happening.


We just need to accept the risk, embrace the thrill of the amazing time we were born into, and try our best to develop near-inevitable technologies like AGI in a responsible and ethical way.
The either/or argument, in which an AGI either helps us become superhuman or decides to destroy humanity, leaves a lot of room between those two poles for many other outcomes.  The 1980 novel Mockingbird, for instance, revolves around compassionate computer superintelligences acting as caretakers for an illiterate and drug-addled human race in its final decline toward extinction.  Given the already measurable negative impact of internet and computer technology on literacy rates and vocabularies, this outcome also bears contemplation.


The problem, in my own opinion, with speculating on the impact of any particular technology, whether it is artificial intelligence, biotechnology, nanotechnology, or any other emerging field, is that they will all be affecting each other in ways that are impossible to predict.  In the short term, I think Goertzel's fear of near-AGI being used by humans to leverage other technologies against other humans is probably the greatest danger we face in our future.


In fact, near-AGI isn't even a requirement.  Human beings are fully capable of causing great mischief with the software we have now.  A cyber worm like Stuxnet, which was released to attack elements of Iran's nuclear development infrastructure, could lead to real negative outcomes, including death, for vast numbers of people if it were unleashed on our power grids.  In the case of Stuxnet, the NY Times reports that it was aimed at two elements of the Iranian nuclear power/weapons program: the centrifuges used to enrich uranium and the steam turbine at the Bushehr Nuclear Power Plant.  If someone were to do our power grid a similar favor, the results would be very bad for us, even if we suffered only the collapse of a regional grid for an extended time.  That is just one example of how software might run amok in our near future, and it hardly requires the advent of artificial intelligence.


We live in interesting times.