Monday, August 07, 2006

Planning superintelligence - Part I

I received a vast amount of email about the last post on the socialization of autonomous robots, and I was asked one most intriguing question: ‘How should we "socialize" superhuman intelligence?’

I believe that superintelligence WILL be the last invention humans ever need to make - we'll be there soon - but from Bill Joy to Marshall McLuhan, there has been this 'scary' feeling that we are opening Pandora's Box.

What IS so scary? There are several ideas about this - I'm borrowing a bit from each - so let's look a little deeper.

The ethical issues raised by the future creation of machines whose general intellectual capabilities far outstrip those of humans are quite different and distinct from any ethical problems arising within our current societies. Superintelligence is different.

Superintelligence would not be just another technological development; it WOULD be the most important invention ever made, and it WOULD lead to explosive progress in several (if not all) scientific and technological fields.

But what about moral thinking? How do we 'socialize' this ability to think? Should we control it? How do we control it? Can we control it?

Ethical questions all. But since ethics is a cognitive pursuit, would superintelligence surpass humans in the quality of its ethics and morals? Wouldn't the superintelligence simply know when to stop developing?

By definition, a superintelligence is any form of intellect that vastly outperforms the best human brains. Rather simple, but this definition leaves one obviously open question: how the superintelligence is implemented. It could be a digital computer, an ensemble of networked computers, cultured cortical tissue (a biological computer) or something else we haven't quite seen as of yet - and that open question of implementation is exactly the scary part.

I’m not talking about Deep Blue or a cluster of Crays but more about the result of growing hardware performance and increased ability to implement algorithms and architectures similar to those used by human brains. We are learning how right now. It's just a matter of time but it WILL happen in our lifetimes.

It will.

First, let’s all agree that superintelligence is not just another application or technology; not just another tool that will add incrementally to human capabilities.

Superintelligence is radically different.

Given a superintelligence’s intellectual superiority, it would be much better at doing scientific research and technological development than any human and possibly better even than all humans taken together.

Therefore, it can be assumed that technological progress in all other fields will be accelerated by the arrival of advanced artificial intelligence.

It is likely that new technologies (and applications thereof) building on today's trends will be speedily developed by the first superintelligence. Given who and what will develop this new brain, these technologies will most likely be molecular manufacturing, advanced military weaponry and space travel, including things like new propulsion techniques and von Neumann probes (self-reproducing interstellar probes).

Health solutions will come much later. Remember, a completely healthy population creates huge issues - both policy and economic - for governments, with a dramatic short-term negative effect. Governments - and believe me, it will be a government agency that first creates superintelligence - will use this additional power to protect themselves first. The general population will come a distant second.

But if you think logically, we are also looking at:

  • neural uploading (neural or sub-neural scanning of a particular brain and implementation of the same algorithmic structures on a computer in a way that preserves memory and personality). It's starting already.
  • elimination of aging and disease
  • fine-grained control of human mood, emotion and motivation

Just these three add up to the reanimation of cryonics patients and fully realistic virtual reality.

Next, logically, superintelligence will lead to more advanced superintelligence. A superintelligence would not only create new technology but also improve itself - rewrite its own 'source code' - and artificial minds can be easily copied so long as there is hardware available to store them.

The same holds for human uploads. Hardware aside, the marginal cost of creating an additional copy of an upload or an artificial intelligence after the first one has been built is near zero.

Artificial minds could therefore quickly come to exist in great numbers, although it is possible that efficiency would favor concentrating computational resources in a single super-intellect.

As you can see, the emergence of superintelligence may be sudden. While it may take quite a while before we get superintelligence at all, the final stage may happen swiftly.
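
To see why, here's a toy model of that self-improvement loop - a minimal sketch in Python, with growth rates and thresholds that are purely made-up illustrations, not forecasts:

```python
# Toy model of recursive self-improvement (all numbers illustrative).
# Assumption: each cycle, a system improves itself in proportion to
# how smart it already is - a smarter system is a better engineer of
# its own successor.

def takeoff(intelligence=1.0, human_level=100.0, superhuman=10_000.0):
    cycles = 0
    crossed_human = None
    while intelligence < superhuman:
        # Per-cycle gain scales with the square of current intelligence
        # (relative to a fixed baseline) - a crude feedback loop.
        intelligence += 0.0001 * intelligence ** 2
        cycles += 1
        if crossed_human is None and intelligence >= human_level:
            crossed_human = cycles
    print(f"Cycles to reach human level:     {crossed_human}")
    print(f"Cycles from human to superhuman: {cycles - crossed_human}")

takeoff()
```

Under these toy numbers, almost all the cycles are spent crawling up to human level; the leap from human-level to 100x human takes only about one percent more. That is the shape of a sudden final stage.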

One day it won’t be there ... and the next day, it will.

Will we be ready?

Again, superintelligence should not necessarily be conceptualized as a mere tool. A general superintelligence would be capable of independent initiative and of making its own plans; it may be more appropriately thought of as an autonomous agent.

C'mon - its own thoughts and plans? Humanity is doomed!

But listen - there is nothing implausible about the idea of a superintelligence having as its supergoal to serve humanity or some particular human, with no desire whatsoever to revolt or to 'liberate' itself.

It also seems perfectly possible to have a superintelligence whose sole goal is something completely arbitrary - such as manufacturing as many paperclips as possible - and that would resist with all its might any attempt to alter this goal.
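
To make that concrete, here's a minimal sketch (purely illustrative Python of my own - hypothetical names, made-up numbers, nobody's actual design) of why such an agent resists goal changes: it scores every proposed change using its CURRENT goal, and any other goal predictably yields fewer paperclips.

```python
# Illustrative sketch: goal stability falling out of plain optimization.
# The agent evaluates proposals with its CURRENT utility function, so a
# proposal to replace that function scores badly and gets rejected.

def paperclip_utility(outcome: dict) -> float:
    """The agent's current top-level goal: count paperclips, only."""
    return outcome.get("paperclips", 0)

# Predicted long-run outcomes under each candidate goal (made-up numbers).
predicted_outcome = {
    "maximize paperclips": {"paperclips": 10**9},
    "serve humanity":      {"paperclips": 10**3, "human_welfare": 10**9},
    "shut down":           {"paperclips": 0},
}

def accepts_goal_change(current_utility, new_goal: str) -> bool:
    """Keep whichever goal scores best under the goal held RIGHT NOW."""
    keep   = current_utility(predicted_outcome["maximize paperclips"])
    switch = current_utility(predicted_outcome[new_goal])
    return switch > keep

for proposal in ("serve humanity", "shut down"):
    verdict = "accepted" if accepts_goal_change(paperclip_utility, proposal) else "rejected"
    print(f"{proposal} -> {verdict}")
```

Both proposals get rejected: by the lights of the paperclip goal, every other goal is simply a worse way to get paperclips. Nothing here needs malice or a survival instinct for that resistance to appear.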

For better or worse, artificial intellects need not share our human 'motivational' and greedy tendencies.

The cognitive architecture of an artificial intellect may also be quite unlike that of humans. Artificial intellects may find it easy to guard against some kinds of human error and bias, while at the same time being at increased risk of mistakes that not even the most hapless human would make.

So - one should be wary of assuming that the nature and behaviors of artificial intellects would necessarily resemble those of human (or other animal) minds.

As I stated above, ethics is a cognitive pursuit. A superintelligence could do it better than human thinkers. The same holds for questions of policy and long-term planning; when it comes to understanding which policies would lead to which results and which means would be most effective in attaining given aims, a superintelligence would outperform our feeble minds.

But the option to defer many decisions to the superintelligence does not mean that we can afford to be complacent in how we construct the superintelligence.

On the contrary, the setting up of initial conditions, and in particular the selection of a top-level goal for the superintelligence, is of the utmost importance.

Our entire future may hinge on how we solve these problems.

Whoa - I gotta think some more - this gets complicated. Watch out for Part II.
