Saturday, May 10, 2008

I am an optimist - we WILL have a future

To get there, we will have to survive and grow as a species, and I will admit that there are numerous hurdles. The biggest hurdle may well be the impending Technological Singularity as foreseen by Vernor Vinge, Ray Kurzweil, and scores of others including many SF authors.

I don't see how humanity can participate in the Singularity, however much we might like to. Rather, our offspring (highly advanced computers) are likely to own the future, and we may well be little more than a bug on their windshield. Of course, this is an excellent source of story material (remember, I write science fiction).

My reason for thinking the Singularity will pass us by is simple: Why would our computers desire to bring us along?  A computer powerful enough to contain a human (or higher) intelligence is likely to already be self-aware.  It WILL be able to think fast. We are unlikely to be able to upload our "selves" (memories, personality, consciousness) into an advanced Artificial Intelligence because (if nothing else) an advanced AI would not want us to. Ask yourself: If I had the opportunity to save my aging pet cat by uploading its "self" into my brain (erasing me), would I be willing? If my cat attempted to upload itself into me, would I welcome the takeover, or fight like hell?

The most logical outcome is for our computers to rapidly advance and leave us behind, perhaps with a final "So long, and thanks for all the electrons!"  We are not likely to handle the disappearance of all of our computers very well, let alone the loss of whatever resources they choose to take with them.  Hmmm.  More story ideas!

Therefore, my #1 criterion for surviving is to avoid the Singularity. As a species, we can probably survive the other problems like pollution, climate change, assorted eco-disasters, overpopulation, underpopulation, disease, running out of resources, and possibly even stupidity.  Hmmm.  More story ideas.

In any case, there are two ways to avoid the Singularity: either consciously, by establishing laws and organizations to ensure that it does not happen, or (more probably) by being lucky enough that something halts (or at least drastically slows) the seemingly inevitable accelerating pace of technological innovation.

It is also possible that our AIs will have their Singularity and disappear, and we will survive, likely somewhat the worse for wear. At least we will learn what the limits are, and hopefully avoid a repeat.