Friday, July 25, 2008

Singularity Revisited

I've been lurking and posting on other blogs lately (such as www.tor.com), where the primary topic has been the Technological Singularity: whether it will happen, how many singularities have already happened in human history, and whether it will have a positive or negative impact on humanity.

For my basic position and additional references, see my post, The (likely coming) Technological Singularity.

Most of the discourse boils down to:

  1. Can Moore's Law continue, and is a resulting technological singularity inevitable? (See the sketch after this list.)
  2. Will this be a "rapture of the nerds" where humanity (or at least a significant fraction thereof) will participate?
  3. (Not so important) What is the impact of the singularity concept on the literature of science fiction?
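On point 1, it helps to see just how steep the curve is. Here is a minimal back-of-the-envelope sketch in Python, assuming Moore's classic doubling period of roughly two years and a hypothetical 2008-era starting count of about two billion transistors (both figures are illustrative assumptions, not measurements):

```python
# Back-of-the-envelope Moore's Law projection.
# Assumptions (illustrative only): transistor counts double every
# two years, starting from ~2e9 transistors on a 2008-era chip.

DOUBLING_PERIOD_YEARS = 2
START_YEAR = 2008
START_TRANSISTORS = 2e9  # hypothetical starting point

def transistors(year):
    """Project the transistor count for a given year under pure doubling."""
    periods = (year - START_YEAR) / DOUBLING_PERIOD_YEARS
    return START_TRANSISTORS * 2 ** periods

for year in (2008, 2018, 2028, 2038):
    print(f"{year}: ~{transistors(year):.1e} transistors")
```

Under those assumptions the count grows a thousandfold every twenty years. That compounding is what makes the singularity argument feel urgent--and it's also why any physical limit that stops the doubling derails the whole timetable.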

The optimists, including Vernor Vinge, who popularized the term "singularity" in this context, and Ray Kurzweil, firmly believe that computers will augment human intelligence, memory, and communication, leading to an era of superhuman intelligence with unforeseeable results. Thus, a "singularity".

Even some optimists have qualms: In 1993, Vinge himself said, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended."

Much modern science fiction either deals with the "post-human" era--after the singularity--or proposes reasons why the singularity never happened.

The pessimists, including notables such as Bill Joy, fear the singularity. Joy wrote an article for Wired magazine called "Why the Future Doesn't Need Us." It is thoughtful and frightening.

I'm a pessimist; I fear the rise of the machines. I believe that these technological advances are coming. It will be up to us to make sure that our technologies are used to improve humanity, not destroy it.

I also see fundamental problems in controlling advanced AI. We have no friggin' idea how to program morality, ethics, respect, or love--let alone Asimov's Laws--into our computers. We have a hard enough time teaching those things to people. Have you ever been robbed, or mugged, or threatened? At least you haven't been murdered, yet.

I see problems with controlling extreme advances in computer technology, too. Assume, for the moment, that advanced AI is possible, given expected improvements in computers and possible improvements in software. Whoever--person, company, or country--has superhuman intelligence on their side will have an enormous advantage over their competitors. Human greed will overwhelm caution, at least part of the time. It will take a huge, powerful, global governmental agency to control these technologies and prevent catastrophe.

In the long run, the most powerful government agency becomes the government.

As humanity (hopefully) expands into the cosmos, I see this one all-powerful agency as the sole unifying force of humanity, because it takes only a single malevolent superhuman entity to wipe us all out. And in the long run, the only thing that matters is survival (see The Purpose of Life).

I look forward to humanity's advancement, but I fear our extinction. Once hard AI is possible, keeping it in check will be a perpetual struggle.
