A Correspondence of Speculation About Our Uncertain Future In A World Of Exponentially Accelerating Technology
"We're not necessarily doubting that God will do the best for us; we are wondering how painful the best will turn out to be." (C.S. Lewis)

Saturday, April 9, 2011
Computer security specialist Ralph Langner's thought-provoking TED talk on the Stuxnet virus, which targeted Iran's nuclear enrichment program. Langner is a German computer security specialist who has received worldwide attention for his work in deciphering the mystery surrounding Stuxnet.

Monday, November 22, 2010
Will Our Computers Love Us In The Future?
Or will our computers become self-programming superintelligences that decide to kill us all? Dr. Ben Goertzel, a leader and expert in the development of Artificial General Intelligence (AGI) in machines, explained on his blog The Multiverse According to Ben last October (reposted at the Singularity Hub) why he concludes it is unlikely that AGI, which some observers expect to surpass human intelligence in the coming decades in an event referred to as the Singularity, will decide that humans are no longer needed and, in Dr. Goertzel's words, "repurpose our molecules for its own ends".
In his blog post, Dr. Goertzel argues that the fear that unrestricted development of AGI will lead to an artificial intelligence that does not value humanity's right to exist, a concept he calls the Scary Idea, is possible but highly unlikely, even though he acknowledges that the odds are impossible to estimate.
The argument for the Scary Idea has been floating around science fiction for a long time. Dr. Goertzel, while disagreeing with the premise, doesn't pretend there is no basis for the Scary Idea, at least in part.
Please note that, although I don't agree with the Scary Idea, I do agree that the development of advanced AGI has significant risks associated with it. There are also dramatic potential benefits associated with it, including the potential of protection against risks from other technologies (like nanotech, biotech, narrow AI, etc.). So the development of AGI has difficult cost-benefit balances associated with it -- just like the development of many other technologies.

As Dr. Goertzel sees it, the Scary Idea has four main points.
I also agree with Nick Bostrom and a host of SF writers and many others that AGI is a potential "existential risk" -- i.e. that in the worst case, AGI could wipe out humanity entirely. I think nanotech and biotech and narrow AI could also do so, along with a bunch of other things.
I certainly don't want to see the human race wiped out! I personally would like to transcend the legacy human condition and become a transhuman superbeing … and I would like everyone else to have the chance to do so, if they want to. But even though I think this kind of transcendence will be possible, and will be desirable to many, I wouldn't like to see anyone forced to transcend in this way. I would like to see the good old fashioned human race continue, if there are humans who want to maintain their good old fashioned humanity, even if other options are available
The proponents of this argument are largely members of the Singularity Institute for Artificial Intelligence (SIAI). Their idea is a descendant of science fiction writer and biochemist Isaac Asimov's Three Laws of Robotics, in which artificial intelligence was embodied in the form of robots.

As far as I can tell from discussions and the available online material, some main ingredients of peoples’ reasons for believing the Scary Idea are ideas like:
- If one pulled a random mind from the space of all possible minds, the odds of it being friendly to humans (as opposed to, e.g., utterly ignoring us, and being willing to repurpose our molecules for its own ends) are very low
- Human value is fragile as well as complex, so if you create an AGI with a roughly-human-like value system, then this may not be good enough, and it is likely to rapidly diverge into something with little or no respect for human values
- “Hard takeoffs” (in which AGIs recursively self-improve and massively increase their intelligence) are fairly likely once AGI reaches a certain level of intelligence; and humans will have little hope of stopping these events
- A hard takeoff, unless it starts from an AGI designed in a “provably Friendly” way, is highly likely to lead to an AGI system that doesn’t respect the rights of humans to exist
I emphasize that I am not quoting any particular thinker associated with SIAI here. I’m merely summarizing, in my own words, ideas that I’ve heard and read very often from various individuals associated with SIAI.
If you put the above points all together, you come up with a heuristic argument for the Scary Idea. Roughly, the argument goes something like: If someone builds an advanced AGI without a provably Friendly architecture, probably it will have a hard takeoff, and then probably this will lead to a superhuman AGI system with an architecture drawn from the vast majority of mind-architectures that are not sufficiently harmonious with the complex, fragile human value system to make humans happy and keep humans around.
The Three Laws of Robotics are as follows:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

SIAI argues that widespread AGI development should be curtailed until "provably non-dangerous," or friendly, AGI can be developed. Where Goertzel seems to disagree is not on the need for ways of engineering AGI software to ensure friendliness towards humanity; his argument is that we can pursue both at the same time.
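As a toy illustration only, and not anything Asimov or SIAI actually proposed, the Three Laws can be read as a precedence-ordered rule check in which a lower-numbered law vetoes any higher-numbered one. A minimal sketch in Python, with every name hypothetical:

```python
# Toy sketch: the Three Laws as a precedence-ordered veto chain.
# Hypothetical names throughout; an illustration, not a real safety design.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool = False       # would carrying this out injure a human?
    ordered_by_human: bool = False  # did a human order it?
    self_destructive: bool = False  # would it endanger the robot itself?

def permitted(action: Action) -> bool:
    """Check the laws in priority order; an earlier law vetoes a later one."""
    if action.harms_human:               # First Law
        return False
    if action.ordered_by_human:          # Second Law (First Law checked above)
        return True
    return not action.self_destructive   # Third Law

# A human order cannot override the First Law:
assert permitted(Action("fetch coffee", ordered_by_human=True))
assert not permitted(Action("attack intruder", harms_human=True, ordered_by_human=True))
```

The point of the toy is only that precedence is trivial to encode for toy predicates; deciding what actually counts as "harm" is the hard, unsolved part, which is roughly Goertzel's argument for studying early-stage systems empirically.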
I agree that AGI ethics is a Very Important Problem. But I doubt the problem is most effectively addressed by theory alone. I think the way to come to a useful real-world understanding of AGI ethics is going to be to
- build some early-stage AGI systems, e.g. artificial toddlers, scientists’ helpers, video game characters, robot maids and butlers, etc.
- study these early-stage AGI systems empirically, with a focus on their ethics as well as their cognition
- in the usual manner of science, attempt to arrive at a solid theory of AGI intelligence and ethics based on a combination of conceptual and experimental-data considerations
- humanity collectively plots the next steps from there, based on the theory we find: maybe we go ahead and create a superhuman AI capable of hard takeoff, maybe we pause AGI development because of the risks, maybe we build an “AGI Nanny” to watch over the human race and prevent AGI or other technologies from going awry. Whatever choice we make then, it will be made based on far better knowledge than we have right now.
So what’s wrong with this approach?
Nothing, really — if you hold the views of most AI researchers or futurists. There are plenty of disagreements about the right path to AGI, but wide and implicit agreement that something like the above path is sensible.
But, if you adhere to SIAI’s Scary Idea, there’s a big problem with this approach — because, according to the Scary Idea, there’s too huge of a risk that these early-stage AGI systems are going to experience a hard takeoff and self-modify into something that will destroy us all.
But I just don’t buy the Scary Idea.
I do see a real risk that, if we proceed in the manner I’m advocating, some nasty people will take the early-stage AGIs and either use them for bad ends, or proceed to hastily create a superhuman AGI that then does bad things of its own volition. These are real risks that must be thought about hard, and protected against as necessary. But they are different from the Scary Idea. And they are not so different from the risks implicit in a host of other advanced technologies.
The either/or arguments about an AGI which either helps us become superhuman or decides to destroy humanity leave a lot of room between the two poles for other outcomes. The 1980 novel Mockingbird revolved around the idea of compassionate computer superintelligences acting as caretakers for an illiterate and drug-addled human race in a final decline towards extinction. Given the already measurable negative impact of internet and computer technology on literacy rates and vocabularies, this outcome also bears contemplation.

I think that to avoid actively developing AGI, out of speculative concerns like the Scary Idea, would be an extremely bad idea.
That is, rather than “if you go ahead with an AGI when you’re not 100% sure that it’s safe, you’re committing the Holocaust,” I suppose my view is closer to “if you avoid creating beneficial AGI because of speculative concerns, then you’re killing my grandma” !! (Because advanced AGI will surely be able to help us cure human diseases and vastly extend and improve human life.)
So perhaps I could adopt the slogan: “You don’t have to kill my grandma to avoid the Holocaust!” … but really, folks… Well, you get the point….
Humanity is on a risky course altogether, but no matter what I decide to do with my life and career (and no matter what Bill Joy or Jaron Lanier or Bill McKibben, etc., write), the race is not going to voluntarily halt technological progress. It’s just not happening.
We just need to accept the risk, embrace the thrill of the amazing time we were born into, and try our best to develop near-inevitable technologies like AGI in a responsible and ethical way.
Dr. Goertzel concludes that the benefit of developing AGI lies in its role as an agent of change, one that contributes to biotechnology, nanotechnology, and other emerging technologies that will provide solutions to the myriad problems we face, while also protecting us from the risks and dangers of those technologies.

The problem, in my own opinion, with speculating on the impact of any particular technology, whether it is artificial intelligence, biotechnology, nanotechnology, or any other emerging technology, is that they will all be impacting each other in ways impossible to predict. I think that in the short term, Goertzel's fear of near-AGI being used by humans to leverage other technologies to cause harm to other humans is probably the greatest danger we face in our future.
In fact, near-AGI isn't a requirement. Human beings themselves are fully capable of causing great mischief with the software we have now. A cyber worm like Stuxnet, which was released onto the internet to attack elements of Iran's nuclear development infrastructure, could, if unleashed on our power grids, lead to real negative outcomes, including death, for vast numbers of people. In the case of Stuxnet, the New York Times reports that it was aimed at two elements of the Iranian nuclear power/weapons program: the centrifuges used to enrich uranium and the steam turbine at the Bushehr nuclear power plant. If someone were to do our power grid a similar favor, it would be a very bad thing for us, even if we suffered only the collapse of a regional grid for an extended time. That is just one example of how software may run amok in our near future, and it hardly requires the advent of artificial intelligence.
We live in interesting times.
Wednesday, October 20, 2010
Technology and Know Nothing-ism Conspire To Make America Dumber. That Is Not A Good Thing.
For my entire lifetime, the "controversy" surrounding Darwin's theory that evolution is driven by the biological adaptation of species to constantly changing environments has bewildered me. I understand that certain subcultures, particularly conservative religious fundamentalists, reject evolution out of what I can only discern as a perceived threat to religious dogma, but I don't understand how those attitudes persist, since understanding the mechanism of an evolutionary process in no way disproves a guiding hand, if that is what they are worried about. I, for one, cannot definitively distinguish the hand of God from random chance.

What alarms me now is how this rejection of knowledge is metastasizing into the wider culture. Throughout my childhood and early adulthood, you simply didn't see major figures in credible news and opinion media endorsing the rejection of knowledge and science as Glenn Beck did the other day. The impact of Beck's endorsement of a Know Nothing rejection of evolution goes beyond undermining future generations' ability to contribute to the scientific advancements required to keep the United States competitive in information technology (DNA is simply molecularly encoded information); it undermines our future ability to compete on all scientific fronts. The goal of this growing power in our society is not educational excellence, but educational adherence to subcultural dogma. And what was the proclaimed basis for Beck's rejection of the science? The fact that he has never seen a half-monkey, half-person. That's right, folks:
"I think that's ridiculous. I haven't seen a half-monkey, half-person yet. Did evolution just stop? There's no other species that is developing into half-human?"
Blogger Steve Benen at the Washington Monthly looked back today at past comments from President Obama about how the rest of the world views the importance of education.
This got me thinking about a story President Obama told about a year ago, after he returned from a trip to Asia. He shared an anecdote about a luncheon he attended with the president of South Korea.
"I was interested in education policy -- they've grown enormously over the last 40 years," Obama said. "And I asked him, 'What are the biggest challenges in your education policy?' He said, 'The biggest challenge that I have is that my parents are too demanding.' He said, 'Even if somebody is dirt poor, they are insisting that their kids are getting the best education.' He said, 'I've had to import thousands of foreign teachers because they're all insisting that Korean children have to learn English in elementary school.' That was the biggest education challenge that he had, was an insistence, a demand from parents for excellence in the schools.
"And the same thing was true when I went to China. I was talking to the mayor of Shanghai, and I asked him about how he was doing recruiting teachers, given that they've got 25 million people in this one city. He said, 'We don't have problems recruiting teachers because teaching is so revered and the pay scales for teachers are actually comparable to doctors and other professions. '
"That gives you a sense of what's happening around the world. There is a hunger for knowledge, an insistence on excellence, a reverence for science and math and technology and learning. That used to be what we were about."Which brings me to speculate on whether or not there is some mathematical curve the collective intelligence of a society moves on which might correlate with that societies rise and fall on other measures. Would it be a leading indicator or a trailing one? Is it driven by a Poverty of Affluence
Speaking of the rapid pace of technology, a recent article from MIT's Technology Review, The Ultimate Persuasion Device, speculated on the negative impact that the evolution of smartphones into a super iPhone social networking and information device will have on our collective intelligence in the near future.
TR: In your book, America is a post-literate society and we lost the ability to interact directly with one another. Did technology lead us to this point?
GS: In the book there are many culprits. I think technology can be construed as one culprit but I think the main culprit is the fact that we are getting dumber. Whether technology enables this or not is an open question. But compared, relative to other countries in the world, we are constantly on the way down in terms of our scores in a wide variety of things.
This may not be so evident obviously at MIT because the whole world comes to the Institute to get educated, but in terms of primary education, things are really bad.
What's interesting is one study that the Times recently published about-- children's vocabularies are shrinking because their parents are constantly texting and typing away and they don't have enough time to just communicate to the child.
So that's something that really intrigued me and felt like it was already part of this world.

Maybe just as technology creates a problem, it may also create a solution. Perhaps the solution will be dolls running artificial intelligence programs, with faces capable of expressing emotion, which take over the role of teaching children to communicate and read. Whatever the solution, be it cultural or technological, we need it now, not later.
Labels: Artificial Intelligence, education, know nothing-ism
Sunday, October 3, 2010
Weekly Address: Solar Power and Clean Energy Economy
We have a stark choice: policies which incentivize the development of new energy technologies and the industries and jobs they create, or more drilling, further deregulation, and tax breaks for the oil industry. Each path leads to its own particular future. The election this fall matters.
Thursday, August 26, 2010
I Have A Nightmare
I just don't get it. When I wonder how the conservative group mind came to hold so much misinformation, I have to remember the source of that misinformation. As Steve Benen at the Washington Monthly noted of Beck's Restoring Honor rally held at the Lincoln Memorial:
The folks who gathered in D.C. today were awfully excited about something. The fact that it's not altogether obvious what that might be probably isn't a good sign.
Saturday, August 21, 2010
Hello Iris. Welcome To The Future.
A world where everyone knows your name may be just around the corner. Within the next decade, the time may come when employers, hospitals, stores, and banks, not to mention law enforcement, will know your identity as soon as you walk through their doors. They will read it in your eye. Or, to be more specific, in your iris.
Austin Carr of the Fast Company website reports that an American biometrics research firm, Global Rainmakers Inc., and the city of Leon, Mexico, plan to implement the first citywide biometric identification system, first for law enforcement and later for commercial applications.
"In the future, whether it's entering your home, opening your car, entering your workspace, getting a pharmacy prescription refilled, or having your medical records pulled up, everything will come off that unique key that is your iris," says Jeff Carter, CDO of Global Rainmakers. Before coming to GRI, Carter headed a think tank partnership between Bank of America, Harvard, and MIT. "Every person, place, and thing on this planet will be connected [to the iris system] within the next 10 years," he says.Like all new technology, adoption of this tech will start off slow. Leon is just one city after all. But notice the rate of exponential growth Mr. Carter suggests when he says that "every person, place and thing" will be connected to a central system within the next 10 years.
The eye-scan technology, from the description Mr. Carr provides of his experience at GRI's research facilities in New York, is going to revolutionize life as we know it. It will be able to identify the passengers in vehicles as they travel down the highway. It will be able to identify convicted shoplifters, thieves, robbers, even hot-check writers, as they pass through a store's doorway. Law enforcement officers will need only have a person they have stopped look at a small handheld scanner to quickly learn that person's identity. The technology may even serve as the lock on your door.
The rollout of the iris scan system for the city of Leon is already underway, according to Carr.
"GRI's scanning devices are currently shipping to the city, where integration will begin with law enforcement facilities, security check-points, police stations, and detention areas. This first phase will cost less than $5 million. Phase II, which will roll out in the next three years, will focus more on commercial enterprises. Scanners will be placed in mass transit, medical centers and banks, among other public and private locations.
The devices range from large-scale scanners like the Hbox (shown in the airport-security prototype above), which can snap up to 50 people per minute in motion, to smaller scanners like the EyeSwipe and EyeSwipe Mini, which can capture the irises of between 15 to 30 people per minute."
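Taking the throughput figures in that quote at face value, a quick back-of-the-envelope calculation shows how fast a citywide rollout could cover a population. A minimal sketch, assuming a hypothetical population of about one million (the capture rates are the ones quoted above; the population figure is my assumption, not from the article):

```python
# Back-of-the-envelope: how long would the quoted scanners take to capture
# a city's population? Capture rates come from the quoted article; the
# population figure is a hypothetical assumption for illustration.

POPULATION = 1_000_000  # assumed, for illustration only

def device_hours(captures_per_minute: int, population: int = POPULATION) -> float:
    """Total device-hours needed to capture everyone once at the given rate."""
    return population / captures_per_minute / 60

print(f"Hbox (50/min):          {device_hours(50):,.0f} device-hours")   # ~333
print(f"EyeSwipe Mini (15/min): {device_hours(15):,.0f} device-hours")   # ~1,111
# At 50 captures per minute, a hundred Hbox units could in principle scan a
# million people in under four hours of operation each.
```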
I think it's going to be hard to argue that we are not headed, as Carr's frequent references to Orwell's novel 1984 imply, towards a world where the government, and anyone else with access to the iris system database, has the capacity to track each person through everyday life using the billions of sensors and scanners predicted to become ubiquitous in our public and private lives. Theoretically, opting into the system will be voluntary, but the disincentives for choosing not to opt in are large. As Carr notes, refusing to opt in may well make you an object of suspicion or investigation.
"There's a lot of convenience to this--you'll have nothing to carry except your eyes," says Carter, claiming that consumers will no longer be carded at bars and liquor stores. And he has a warning for those thinking of opting out: "When you get masses of people opting-in, opting out does not help. Opting out actually puts more of a flag on you than just being part of the system. We believe everyone will opt-in."
I don't know about Iris, but I'm not all that comfortable with the thought of Big Brother moving in.
Thursday, August 19, 2010
Shift Happens.
The challenge facing today's educators, created by the present exponential acceleration in technology, is enormous. This video, titled Did you know? Shift Happens 2.0, is four years old now. It is important to remember that some technologies and trends have doubled in speed, size, or efficiency two or three times since the video was first made. Smartphones like the iPhone and the Android didn't exist; Facebook, which was hardly on the radar screen in 2006, has since grown to over 500,000,000 users. Created by Scott McLeod and Karl Fisch for a group of 150 high school educators, the video was intended to illustrate a future of technology moving into hyperdrive, and the difficulty of preparing students for that 21st-century world.
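That "doubled two or three times" claim is just compound growth. A minimal sketch of the arithmetic, assuming a hypothetical 18-month doubling period (the numbers are illustrative, not taken from the video):

```python
# Compound doubling: after t months with a doubling period of d months,
# a quantity grows by a factor of 2 ** (t / d). The 18-month doubling
# period here is a hypothetical assumption for illustration.

def growth_factor(months_elapsed: float, doubling_period_months: float = 18.0) -> float:
    return 2 ** (months_elapsed / doubling_period_months)

# Four years (48 months) at an 18-month doubling period:
print(growth_factor(48))  # ~6.35x, i.e. between two doublings (4x) and three (8x)
```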
Did you know? Shift Happens 2.0 is another of the videos highlighted in the Singularity Hub's list of 12 videos that will help you love the future.