“We're not necessarily doubting that God will do the best for us; we are wondering how painful the best will turn out to be.” - C.S. Lewis

Monday, March 17, 2014

NASA Study Finds That Collapse of Industrial Civilization May Be Imminent

A new study recently commissioned by NASA's Goddard Space Flight Center found that increased and extensive exploitation of world resources and a growing divide in the distribution of wealth may contribute to a collapse of industrial civilization within a few decades. The research project was undertaken by a multidisciplinary team from the National Socio-Environmental Synthesis Center, and a paper describing its conclusions will be published in the Elsevier journal Ecological Economics. The researchers modeled a variety of variables, including population, climate, water, agriculture, and energy, to look at how each creates and contributes to stresses that could cause systemic collapse. The project is based on the ‘Human And Nature DYnamical’ (HANDY) model and was led by applied mathematician Safa Motesharrei, supported by the US National Science Foundation. A .pdf copy of the paper submission can be found here: A Minimal Model For Human And Nature Interaction
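For readers curious what a model like HANDY actually looks like under the hood, it is a small system of coupled differential equations in the predator-prey tradition, tracking commoner and elite populations, a regenerating stock of nature, and accumulated wealth. The sketch below is my own minimal rendering of that structure as I understand it from the paper; the parameter values are illustrative assumptions, not the authors' calibration.

    # A minimal sketch of HANDY-style dynamics, after Motesharrei et al.
    # Parameter values below are illustrative assumptions, not the paper's.

    def simulate_handy(steps=20000, dt=0.1):
        xc, xe = 100.0, 1.0    # commoner and elite populations
        y, w = 100.0, 100.0    # nature stock and accumulated wealth
        bc, be = 0.03, 0.03    # birth rates
        am, aM = 0.01, 0.07    # healthy and famine death rates
        g, lam = 0.01, 100.0   # nature regeneration rate and carrying capacity
        d = 8.33e-5            # depletion (production) factor
        s, rho, kappa = 5e-4, 5e-3, 10.0  # subsistence salary, threshold, inequality
        history = []
        for _ in range(steps):
            wth = rho * xc + kappa * rho * xe              # wealth threshold
            scarcity = min(1.0, w / wth) if wth > 0 else 0.0
            cc = scarcity * s * xc                         # commoner consumption
            ce = scarcity * kappa * s * xe                 # elite consumption
            # Death rates climb from am toward aM as consumption falls below subsistence:
            ac = am + max(0.0, 1 - cc / (s * xc)) * (aM - am) if xc > 0 else am
            ae = am + max(0.0, 1 - ce / (s * xe)) * (aM - am) if xe > 0 else am
            dxc = (bc - ac) * xc                           # population dynamics
            dxe = (be - ae) * xe
            dy = g * y * (lam - y) - d * xc * y            # nature regenerates, is depleted
            dw = d * xc * y - cc - ce                      # wealth accumulates, is consumed
            xc += dxc * dt
            xe += dxe * dt
            y += dy * dt
            w = max(0.0, w + dw * dt)
            history.append((xc, xe, y, w))
        return history

Collapse in this framework shows up as trajectories where nature or wealth is drawn down faster than it regenerates, dragging both populations down with it.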

The researchers note that warnings of civilization-wide collapse are perceived as "fringe" and controversial, but point out that precipitous collapse, followed by disruption lasting centuries, is not uncommon in history, with the Roman and Mayan civilizations as examples. Running simulations across a variety of civilizations, the research showed that for the civilization most like our present-day system, collapse was almost unavoidable.

Wednesday, November 30, 2011

Post-Industrial Jobs, Employment, and Entrepreneurship in a 1%-Dominated World

In Gallup Chairman Jim Clifton's book The Coming Jobs War, he lays out the coming competition among the world's nations to create and attract wealth-creating jobs, along with the destructive dangers that long-term unemployment and substandard education pose to a nation's competitive stance in a rapidly changing world economy. A Gallup Business Journal page for the book notes:
Leaders of countries and cities, Clifton says, should focus on creating good jobs because as jobs go, so does the fate of nations. Jobs bring prosperity, peace, and human development -- but long-term unemployment ruins lives, cities, and countries. Creating good jobs is tough, and many leaders are doing many things wrong. They're undercutting entrepreneurs instead of cultivating them. They're running companies with depressed workforces. They're letting the next generation of job creators rot in bad schools. A global jobs war is coming, and there's no time to waste. Cities are crumbling for lack of good jobs. Nations are in revolt because their people can't get good jobs. The cities and countries that act first -- that focus everything they have on creating good jobs -- are the ones that will win.
Clifton identifies some of the exact issues that the Occupy Wall Street movement seems to be addressing. We have seen a decade in which job growth has been stymied and long-term unemployment has swallowed a growing percentage of American workers, and in which corporate and 1% interests have dominated the political and economic processes, doing exactly as Clifton describes: "They're undercutting entrepreneurs instead of cultivating them. They're running companies with depressed workforces. They're letting the next generation of job creators rot in bad schools."
Economists have noted that throughout history, as a nation's financial elites gain an ever larger percentage of the nation's total wealth, economic growth is hindered and middle-class entrepreneurial opportunity restricted. A point that I don't see registering in this debate is the implication of exponential productivity growth for job creation. Consider that in the local Nucor steel mill, 100 men produce the same amount of steel that 1,000 men produced 20 years ago, and in a decade, 10 men, leveraged by AI and robotics, will produce what 100 men produce today. That is not a recipe for job growth. AI chat programs are predicted to soon replace call center workers in a variety of industries. Again, this is not a recipe for job growth, and it is not a recipe for the equalizing of income distribution necessary to achieve the stability some research suggests our civilization needs to survive.
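A quick back-of-the-envelope check makes the trend concrete (the worker counts are the anecdote's own, not audited figures): a tenfold gain in output per worker over 20 years works out to roughly 12% annual productivity growth, and even just continuing that rate, without any acceleration, dramatically shrinks the workforce needed for constant output.

    # Back-of-the-envelope arithmetic for the steel mill example.
    # The worker counts are the anecdote's own, not audited data.

    def implied_annual_growth(output_ratio, years):
        """Annual growth rate if output per worker rises output_ratio-fold over `years`."""
        return output_ratio ** (1 / years) - 1

    rate = implied_annual_growth(10, 20)  # 1,000 workers down to 100 in 20 years
    print(f"Implied productivity growth: {rate:.1%} per year")   # about 12.2%

    # Workers needed for constant output if the same trend simply continues:
    workers_in_ten_years = 100 / (1 + rate) ** 10
    print(f"Workers needed a decade out: {workers_in_ten_years:.0f}")  # about 32

Getting all the way down to 10 workers in a single decade, as the anecdote projects, would require the growth rate to roughly double, which is precisely the acceleration being attributed to AI and robotics.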

Wednesday, June 1, 2011

Malintent Detection Comes To An Airport Near You

Since reading James Halperin's The Truth Machine in late 1998, I have been looking for technology like this to come on the scene. The argument laid out in Halperin's book is that mankind's future, in a world where technology increasingly leverages the power of the few or the one to cause damage or death to an ever larger number of people, will depend on the development of technologies which can, to all intents and purposes, read people's minds to determine whether a person intends to cause harm to others.

Nature reports that the Department of Homeland Security has been conducting tests of Future Attribute Screening Technology (FAST) over the past few months in northeastern states. The technology is designed to identify individuals who intend to commit a terrorist act using public transportation.

Like a lie detector, FAST measures a variety of physiological indicators, ranging from heart rate to the steadiness of a person's gaze, to judge a subject's state of mind. But there are major differences from the polygraph. FAST relies on non-contact sensors, so it can measure indicators as someone walks through a corridor at an airport, and it does not depend on active questioning of the subject.

The technology is hardly foolproof. DHS claims a 70% success rate in its tests, which of course means that 30% of the simulated perpetrators got through the system. The concept has received a great deal of criticism, with critics pointing out that there are a number of aspects of traveling on public transportation which can cause a person's heart rate to rise or trigger other physiological responses. Nature reports the response of one critic:

Steven Aftergood, a senior research analyst at the Federation of American Scientists, a think-tank based in Washington DC that promotes the use of science in policy-making, is pessimistic about the FAST tests. He thinks that they will produce a large proportion of false positives, frequently tagging innocent people as potential terrorists and making the system unworkable in a busy airport. "I believe that the premise of this approach — that there is an identifiable physiological signature uniquely associated with malicious intent — is mistaken. To my knowledge, it has not been demonstrated," he says. "Without it, the whole thing seems like a charade."
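Aftergood's false-positive worry is straightforward to quantify with Bayes' rule. In the sketch below, the detection rate is DHS's claimed 70%; the false-positive rate and the prevalence of actual attackers are illustrative assumptions of my own, chosen generously.

    # A Bayes'-rule sketch of the false-positive objection. The 70% detection
    # rate is DHS's claimed figure; the false-positive rate and attacker
    # prevalence are illustrative assumptions of my own.

    def p_attacker_given_flag(sensitivity, false_positive_rate, prevalence):
        """P(attacker | flagged), via Bayes' rule."""
        true_pos = sensitivity * prevalence
        false_pos = false_positive_rate * (1 - prevalence)
        return true_pos / (true_pos + false_pos)

    ppv = p_attacker_given_flag(
        sensitivity=0.70,          # DHS's claimed success rate
        false_positive_rate=0.05,  # assumed: 1 innocent traveler in 20 gets flagged
        prevalence=1e-7,           # assumed: 1 actual attacker per 10 million travelers
    )
    print(f"P(attacker | flagged) = {ppv:.2e}")  # about 1.4e-06

Under even these generous assumptions, on the order of 700,000 innocent travelers would be flagged for every actual attacker caught, which is exactly the unworkability Aftergood is describing.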

I think we will see technologies with this goal continue to be developed and to quickly improve over time. Technologies like this represent a double-edged sword to our traditional understanding of liberty, and they raise basic issues, like the question of individual privacy versus public safety, which will be difficult to resolve. What does privacy mean when, in the near future, the government has the right to read your mind whenever you get on an airplane?

Check out The Truth Machine. It is an interesting read.

Saturday, April 9, 2011

A Cybernetic Weapon Of Mass Destruction

Computer security specialist Ralph Langner's thought-provoking TED talk on the Stuxnet virus, which targeted Iran's nuclear enrichment program.


Ralph Langner is a German computer security specialist who has received worldwide attention for his work deciphering the mystery surrounding the Stuxnet virus.

Monday, November 22, 2010

Will Our Computers Love Us In The Future?

Or will our computers become self-programming superintelligences who decide to kill us all? Dr. Ben Goertzel, a leader and expert in the development of Artificial General Intelligence (AGI) in machines, wrote on his blog The Multiverse According to Ben last October (reposted at the Singularity Hub) why he concludes it is unlikely that AGI, which is expected to surpass human intelligence in the coming decades at a point some observers call the Singularity, will conclude that humans are no longer needed and, in Dr. Goertzel's words, decide to "repurpose our molecules for its own ends".


In his blog post, Dr. Goertzel argues that the fear that unrestricted development of AGI will lead to an artificial intelligence which does not value humanity's right to exist, a concept he calls the Scary Idea, is possible but highly unlikely, even though he acknowledges that the odds are impossible to estimate.


The argument for the Scary Idea has been floating around science fiction for a long time.  Dr. Goertzel, while disagreeing with the premise, doesn't pretend the Scary Idea is entirely without basis.
Please note that, although I don't agree with the Scary Idea, I do agree that the development of advanced AGI has significant risks associated with it. There are also dramatic potential benefits associated with it, including the potential of protection against risks from other technologies (like nanotech, biotech, narrow AI, etc.). So the development of AGI has difficult cost-benefit balances associated with it -- just like the development of many other technologies.

I also agree with Nick Bostrom and a host of SF writers and many others that AGI is a potential "existential risk" -- i.e. that in the worst case, AGI could wipe out humanity entirely. I think nanotech and biotech and narrow AI could also do so, along with a bunch of other things.

I certainly don't want to see the human race wiped out! I personally would like to transcend the legacy human condition and become a transhuman superbeing … and I would like everyone else to have the chance to do so, if they want to. But even though I think this kind of transcendence will be possible, and will be desirable to many, I wouldn't like to see anyone forced to transcend in this way. I would like to see the good old fashioned human race continue, if there are humans who want to maintain their good old fashioned humanity, even if other options are available
As Dr. Goertzel sees it, the Scary Idea has four main points.
As far as I can tell from discussions and the available online material, some main ingredients of peoples’ reasons for believing the Scary Idea are ideas like:


  1. If one pulled a random mind from the space of all possible minds, the odds of it being friendly to humans (as opposed to, e.g., utterly ignoring us, and being willing to repurpose our molecules for its own ends) are very low
  2. Human value is fragile as well as complex, so if you create an AGI with a roughly-human-like value system, then this may not be good enough, and it is likely to rapidly diverge into something with little or no respect for human values
  3. “Hard takeoffs” (in which AGIs recursively self-improve and massively increase their intelligence) are fairly likely once AGI reaches a certain level of intelligence; and humans will have little hope of stopping these events
  4. A hard takeoff, unless it starts from an AGI designed in a “provably Friendly” way, is highly likely to lead to an AGI system that doesn’t respect the rights of humans to exist
I emphasize that I am not quoting any particular thinker associated with SIAI here. I’m merely summarizing, in my own words, ideas that I’ve heard and read very often from various individuals associated with SIAI.


If you put the above points all together, you come up with a heuristic argument for the Scary Idea. Roughly, the argument goes something like: If someone builds an advanced AGI without a provably Friendly architecture, probably it will have a hard takeoff, and then probably this will lead to a superhuman AGI system with an architecture drawn from the vast majority of mind-architectures that are not sufficiently harmonious with the complex, fragile human value system to make humans happy and keep humans around.
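It is worth noticing the structure of that argument: it is a chain of conditional claims, and the conclusion inherits the uncertainty of every link. A toy calculation, with link probabilities invented purely for illustration, shows how quickly the conjunction erodes:

    # A toy rendering of the Scary Idea's chained argument. The link
    # probabilities are invented purely for illustration; nobody knows the
    # real values, which is part of the dispute.

    links = [
        ("a non-Friendly AGI architecture gets built", 0.9),
        ("hard takeoff, given that", 0.5),
        ("the resulting mind disregards human values, given that", 0.5),
        ("humans cannot intervene in time, given that", 0.5),
    ]

    p = 1.0
    for claim, prob in links:
        p *= prob
        print(f"P({claim}) = {prob:.2f}  ->  running product = {p:.3f}")
    # The conjunction lands near 11%, and dropping any single link from
    # 0.5 to 0.1 pulls the whole argument below 3%.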
The proponents of this argument are members of the Singularity Institute for Artificial Intelligence (SIAI).  Their idea is a descendant of biochemist and science fiction writer Isaac Asimov's Three Laws of Robotics, in which artificial intelligence was embodied in the form of robots.


The Three Laws of Robotics are as follows:



1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
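As a specification, the Three Laws amount to a strict priority ordering over constraints, something like the toy sketch below. It is purely illustrative and my own construction; the SIAI debate exists precisely because real value systems resist capture in a short rule list.

    # A toy sketch of the Three Laws as a strict priority ordering.
    # Entirely illustrative; names and structure here are my own.

    from dataclasses import dataclass

    @dataclass
    class Action:
        description: str
        harms_human: bool = False     # would injure a human, or allow harm by inaction
        ordered: bool = False         # a human has ordered this action
        endangers_self: bool = False  # risks the robot's own existence

    def permitted(action: Action) -> bool:
        if action.harms_human:
            return False              # First Law overrides everything
        if action.ordered:
            return True               # Second Law: obey, even at cost to self
        return not action.endangers_self  # Third Law: otherwise, self-preservation

    print(permitted(Action("strike a person", ordered=True, harms_human=True)))           # False
    print(permitted(Action("enter burning building", ordered=True, endangers_self=True))) # True
    print(permitted(Action("enter burning building", endangers_self=True)))               # False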
SIAI argues that widespread AGI development should be curtailed until "provably non-dangerous AGI," or Friendly AGI, can be designed.  Goertzel does not seem to disagree about the need to engineer AGI software to ensure friendliness toward humanity.  His argument is that we can pursue development and safety at the same time.
I agree that AGI ethics is a Very Important Problem. But I doubt the problem is most effectively addressed by theory alone. I think the way to come to a useful real-world understanding of AGI ethics is going to be to
  • build some early-stage AGI systems, e.g. artificial toddlers, scientists’ helpers, video game characters, robot maids and butlers, etc.
  • study these early-stage AGI systems empirically, with a focus on their ethics as well as their cognition
  • in the usual manner of science, attempt to arrive at a solid theory of AGI intelligence and ethics based on a combination of conceptual and experimental-data considerations
  • humanity collectively plots the next steps from there, based on the theory we find: maybe we go ahead and create a superhuman AI capable of hard takeoff, maybe we pause AGI development because of the risks, maybe we build an “AGI Nanny” to watch over the human race and prevent AGI or other technologies from going awry. Whatever choice we make then, it will be made based on far better knowledge than we have right now.
So what’s wrong with this approach?  
Nothing, really — if you hold the views of most AI researchers or futurists. There are plenty of disagreements about the right path to AGI, but wide and implicit agreement that something like the above path is sensible.


But, if you adhere to SIAI’s Scary Idea, there’s a big problem with this approach — because, according to the Scary Idea, there’s too huge of a risk that these early-stage AGI systems are going to experience a hard takeoff and self-modify into something that will destroy us all.


But I just don’t buy the Scary Idea.


I do see a real risk that, if we proceed in the manner I’m advocating, some nasty people will take the early-stage AGIs and either use them for bad ends, or proceed to hastily create a superhuman AGI that then does bad things of its own volition. These are real risks that must be thought about hard, and protected against as necessary. But they are different from the Scary Idea. And they are not so different from the risks implicit in a host of other advanced technologies.
Dr. Goertzel concludes that the benefit of developing AGI lies in its role as an agent of change, one that contributes to biotechnology, nanotechnology, and other emerging technologies that will provide solutions to the myriad problems we face, while also acting to protect us from the risks and dangers of those same technologies.
I think that to avoid actively developing AGI, out of speculative concerns like the Scary Idea, would be an extremely bad idea.


That is, rather than “if you go ahead with an AGI when you’re not 100% sure that it’s safe, you’re committing the Holocaust,” I suppose my view is closer to “if you avoid creating beneficial AGI because of speculative concerns, then you’re killing my grandma” !! (Because advanced AGI will surely be able to help us cure human diseases and vastly extend and improve human life.)


So perhaps I could adopt the slogan: “You don’t have to kill my grandma to avoid the Holocaust!” … but really, folks… Well, you get the point….


Humanity is on a risky course altogether, but no matter what I decide to do with my life and career (and no matter what Bill Joy or Jaron Lanier or Bill McKibben, etc., write), the race is not going to voluntarily halt technological progress. It’s just not happening.


We just need to accept the risk, embrace the thrill of the amazing time we were born into, and try our best to develop near-inevitable technologies like AGI in a responsible and ethical way.
The either/or argument about an AGI which either helps us become superhuman or decides to destroy humanity leaves a lot of room between the two poles for many other outcomes.  A concept floated in the 1980 novel Mockingbird revolves around compassionate computer superintelligences which act as caretakers for an illiterate and drug-addled human race in a final decline toward extinction.  With an already measurable negative impact of internet and computer technology on literacy rates and vocabularies, this outcome also bears contemplation.


The problem, in my own opinion, with speculating on the impact of any particular technology, whether it is artificial intelligence, biotechnology, nanotechnology, or any other emerging technology, is that they will all be impacting each other in ways impossible to predict.  I think that in the short term, Goertzel's fear of near-AGI being used by humans to leverage other technologies to cause harm to other humans is probably the greatest danger we face in our future.


In fact, near-AGI isn't a requirement.  Human beings themselves are fully capable of causing great mischief using the software we have now.  A cyber worm like Stuxnet, which was released to attack elements of the Iranian nuclear development infrastructure, could lead to real negative outcomes, including death, for vast numbers of people if unleashed on our power grids.  In the case of Stuxnet, the NY Times reports that it was aimed at two elements of the Iranian nuclear power/weapons program: the centrifuges used to enrich uranium and the steam turbine at the Bushehr Nuclear Power Plant.  If someone were to do our power grid a similar favor, it would be a very bad thing for us, even if we suffered only the collapse of a regional grid for an extended time.  That is just one example of how software may run amok in our near future, and one which hardly requires the advent of artificial intelligence.


We live in interesting times.  

Wednesday, October 20, 2010

Technology and Know-Nothingism Conspire To Make America Dumber. That Is Not A Good Thing.

For my entire lifetime, the "controversy" surrounding Darwin's theory that the evolution of a species is driven by biological adaptation to constantly changing environments has bewildered me.  While I understand that certain subcultures, particularly those of conservative religious fundamentalists, reject evolution out of what I can only discern as a perceived threat to religious dogma, I don't understand how those attitudes persist; understanding the mechanism of an evolutionary process in no way disproves a guiding hand, if that is what they are worried about.  I, for one, cannot definitively distinguish the hand of God from random chance.

What alarms me now is how this rejection of knowledge is metastasizing into the wider culture.  Throughout my childhood and early adulthood, you just didn't see major figures in credible news and opinion media endorsing the rejection of knowledge and science as Glenn Beck did the other day.  The impact of Beck's endorsement of the Know-Nothing rejection of evolution is not just in undermining future generations' ability to compete in the scientific advancements required to keep the United States competitive in information technology (DNA is simply molecularly encoded information); it undermines our future ability to compete on all scientific fronts.  The goal of this growing power in our society is not educational excellence, but rather educational adherence to subcultural dogma.  And what was the proclaimed basis for Beck's rejection of science?  The fact that he has never seen a half-man, half-ape.  That's right, folks.




"I think that's ridiculous. I haven't seen a half-monkey, half-person yet. Did evolution just stop? There's no other species that is developing into half-human?"

Blogger Steve Benen at the Washington Monthly looked back today at past comments of President Obama regarding how the rest of the world views the importance of education.


This got me thinking about a story President Obama told about a year ago, after he returned from a trip to Asia. He shared an anecdote about a luncheon he attended with the president of South Korea.
"I was interested in education policy -- they've grown enormously over the last 40 years," Obama said. "And I asked him, 'What are the biggest challenges in your education policy?' He said, 'The biggest challenge that I have is that my parents are too demanding.' He said, 'Even if somebody is dirt poor, they are insisting that their kids are getting the best education.' He said, 'I've had to import thousands of foreign teachers because they're all insisting that Korean children have to learn English in elementary school.' That was the biggest education challenge that he had, was an insistence, a demand from parents for excellence in the schools.
"And the same thing was true when I went to China. I was talking to the mayor of Shanghai, and I asked him about how he was doing recruiting teachers, given that they've got 25 million people in this one city. He said, 'We don't have problems recruiting teachers because teaching is so revered and the pay scales for teachers are actually comparable to doctors and other professions. '
"That gives you a sense of what's happening around the world. There is a hunger for knowledge, an insistence on excellence, a reverence for science and math and technology and learning. That used to be what we were about."
Which brings me to speculate on whether there is some mathematical curve along which the collective intelligence of a society moves, one which might correlate with that society's rise and fall on other measures.  Would it be a leading indicator or a trailing one?  Is it driven by a Poverty of Affluence, or is the growing energy of the Know-Nothing crowd driven by other factors, such as a growing unease with the rapid pace of technology?  The differences are stark when you consider that China has more English-speaking engineers and scientists graduating from its universities than we have graduating in the United States, especially when you factor in the large percentage of foreign-born students graduating at the post-graduate level from American universities.  Whatever the variables at work, it seems as if much of the world is on the upward slope of the curve while we are on the downward slope.

Speaking of the rapid pace of technology, a recent article from MIT's Technology Review, The Ultimate Persuasion Device, speculated on the negative impact that the evolution of the smartphone into a super social networking and social information device will have on our collective intelligence in the near future.


TR: In your book, America is a post-literate society and we lost the ability to interact directly with one another. Did technology lead us to this point? 
GS: In the book there are many culprits. I think technology can be construed as one culprit but I think the main culprit is the fact that we are getting dumber. Whether technology enables this or not is an open question. But compared, relative to other countries in the world, we are constantly on the way down in terms of our scores in a wide variety of things.
This may not be so evident obviously at MIT because the whole world comes to the Institute to get educated, but in terms of primary education, things are really bad.
What's interesting is one study that the Times recently published about-- children's vocabularies are shrinking because their parents are constantly texting and typing away and they don't have enough time to just communicate to the child.
So that's something that really intrigued me and felt like it was already part of this world.
Just as technology creates a problem, it may also create a solution.  Perhaps the solution will be dolls running artificial intelligence programs, with faces capable of expressing emotion, which take over the role of teaching children to communicate and read.  Whatever the solution, be it cultural or technological, we need it now, not later.

Sunday, October 3, 2010

Weekly Address: Solar Power and Clean Energy Economy



We have a stark choice: policies which incentivize the development of new energy technologies and the industries and jobs they create, or more drilling, further deregulation, and tax breaks for the oil industry.  Each path leads to its own particular future.  The election this fall matters.