Planning superintelligence - Part II
Ok – let’s take this discussion a bit further, but first let’s step back a step. In order to discuss how to implement super intelligence, let’s look at where this aspect of artificial life MUST be derived from – what tricks does it have up its sleeve?
Super intelligence is complicated. It is body AND soul. It is an artificial mind. It encapsulates several different schools of thought that have to work together. We have to look at:
- Computational biology: bio-networks, development, evolution, prebiotic evolution, and artificial chemistry
- Complex systems and networks: information and complexity, collective behavior and population dynamics, evolutionary and collective games
- Embodied cognition: embodiment and behavior, language and learning
- Achievements and open problems: biologically-inspired computing and technology, and formal as well as philosophical models
So, as Ramos stated a while ago, ‘the emergence of complex behavior in any system consisting of interacting simple elements is among the most fascinating phenomena of our world.’
Imagine a ‘machine’ with no pre-commitment to any particular representational scheme: the desired behavior is distributed and only roughly specified across many parts, with minimal specification of the mechanism required to generate that behavior – read: the global behavior emerges from the many interactions of multiple simple behaviors.
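To make that less abstract, here is a toy sketch of my own (not something lifted from the artificial-life literature mentioned here): each simulated agent follows one simple local rule – align with your immediate neighbours – and a coherent global heading still emerges, with no central controller and no global specification of the behavior. All names and parameters are arbitrary choices for the illustration.

```python
import math
import random

N, NOISE, STEPS = 50, 0.1, 1000   # arbitrary toy parameters

# each agent starts with a random heading; there is no global plan anywhere
headings = [random.uniform(-math.pi, math.pi) for _ in range(N)]

def step(headings):
    new = []
    for i in range(N):
        # purely local rule: look only at yourself and your two ring-neighbours
        local = [headings[i - 1], headings[i], headings[(i + 1) % N]]
        x = sum(math.cos(h) for h in local)
        y = sum(math.sin(h) for h in local)
        # adopt the average local heading, plus a little noise
        new.append(math.atan2(y, x) + random.uniform(-NOISE, NOISE))
    return new

for _ in range(STEPS):
    headings = step(headings)

# order parameter: near 0 for random headings, 1.0 for perfect alignment
order = math.hypot(sum(math.cos(h) for h in headings) / N,
                   sum(math.sin(h) for h in headings) / N)
print(f"global alignment after {STEPS} steps: {order:.2f}")
```

Nothing in that code says ‘everyone point the same way’; the global pattern is a side effect of many local interactions – exactly the kind of minimally specified behavior described above.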
In more formal terms, we are talking about a machine or artificial organism that avoids specific constraints and utilizes multiple low-level, implicit, bio-inspired mechanisms that end in a transaction.
These transactions (decisions or actions) will draw on almost every field of today’s scientific interest, ranging from coherent pattern formation in physical and chemical systems, to the motion of animal swarms in biology, to the behavior of social groups.
In the life and social sciences, one is usually convinced that the evolution of social systems is determined by numerous factors – cultural, sociological, economic, political, ecological, and so on.
However, in recent years the development of the interdisciplinary fields of the ‘science of complexity’ and ‘artificial life’ has led to the insight that complex dynamic processes can also result from simple interactions.
Moreover, at a certain level of abstraction one can also find many common features between complex structures in very different fields. Francis Heylighen, editor of the Principia Cybernetica Project, points precisely to this paradigm shift with a remarkable historical perspective, particularly with regard to how the social sciences have drawn first on biology, and more recently on complexity science, as a source of metaphors.
In 'The Global Superorganism: an Evolutionary-Cybernetic Model of the Emerging Network Society', he writes:
‘Recently, the variety of ideas and methods that is commonly grouped under the head of Artificial Life, has led to understanding that artificial organisms can be self-organizing, adaptive systems. Most processes in such systems are decentralized, non-deterministic and in constant flux. They thrive on noise, chaos and creativity. Their collective swarm-intelligence emerges out of the free interactions between individually autonomous components.’
In fact, as one can see, these decision-making processes or algorithms should be viewed as behaving like a swarm.
Rather than take living things apart, super intelligence will attempt to put living things together using a bottom-up approach. That is to say, it cannot copy life-as-we-know-it but must delve into the realm of life-as-it-could-be. It will have to generate lifelike behavior and focus on the problem of creating behavior generators that are inspired by nature itself (even if the results that emerge from the process have no analogues in the natural world).
The key insight into the natural method of behavior generation is that it is reflected in the ‘architecture’ of natural living organisms, which consist of many millions of parts - each one of which has its own behavioral repertoire.
As we all know by now, living systems are highly distributed and massively parallel.
So this super intelligence must be a property of a system:
- where entities interact locally with one another and with their environment, and coherent global patterns emerge from their collective behavior
- that provides a basis for exploring collective (or distributed) problem solving without centralized control or the provision of a global model
- that builds a coherent social collective intelligence out of the observation and evaluation of individual behaviors
- that stresses the role played by the environmental medium – the driving force for societal learning – as well as by the positive and negative feedback produced by the many interactions among independent agents (a toy sketch of this follows the list)
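To make those last two points concrete, here is a minimal stigmergy sketch of my own (not taken from Ramos or Heylighen): the agents never talk to each other directly – they only read and write a shared ‘pheromone’ level in the environment. Deposits supply the positive feedback, evaporation the negative feedback, and the population still settles on a collective choice with no central controller. All numbers are made up for the illustration.

```python
import random

# the environmental medium the agents read and write (stigmergy)
pheromone = [1.0, 1.0]          # two competing routes
EVAPORATION = 0.2               # negative feedback: the medium forgets
DEPOSIT = 0.1                   # positive feedback: choices reinforce themselves
AGENTS, ROUNDS = 100, 100       # arbitrary toy parameters

for _ in range(ROUNDS):
    for _ in range(AGENTS):
        # nonlinear response to the medium, so small advantages get amplified
        w0, w1 = pheromone[0] ** 2, pheromone[1] ** 2
        route = 0 if random.random() < w0 / (w0 + w1) else 1
        pheromone[route] += DEPOSIT              # reinforce the chosen route
    pheromone = [p * (1.0 - EVAPORATION) for p in pheromone]  # evaporation

share = pheromone[0] / (pheromone[0] + pheromone[1])
print(f"share of pheromone on route 0: {share:.2f}")  # typically ends near 0.0 or 1.0
```

The ‘decision’ lives nowhere in particular: it is written into the environment by the history of many independent choices.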
Finally, by building a simple intelligence with the above features, one can address the collective adaptation of a social community to a cultural, environmental, contextual and informational dynamic landscape for the purpose of complex decision making – read: three-dimensional mathematical functions that change over time (non-causal four-dimensional decision-making processes, for you ‘high-math’ types out there).
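For that ‘high-math’ aside, a concrete toy example of my own (with made-up numbers): a surface f(x, y, t) whose peak drifts as t advances, so any decision about where the optimum lies has to be continually re-adapted rather than computed once and for all.

```python
import math

def landscape(x, y, t):
    """A toy 3-D surface whose single peak drifts along x as time passes."""
    cx = math.sin(0.1 * t)                      # the peak's current x-position
    return math.exp(-((x - cx) ** 2 + y ** 2))  # height at (x, y) at time t

# brute-force the peak on a coarse grid at a few moments in time
for t in (0, 10, 20, 30):
    _, bx, by = max((landscape(x / 10, y / 10, t), x / 10, y / 10)
                    for x in range(-20, 21) for y in range(-20, 21))
    print(f"t={t:2d}: peak near x={bx:+.1f}, y={by:+.1f}")
```

A collective that is adapting to a landscape like this has to keep re-deciding – which is the point.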
Therefore, the super intelligence must be a collective intelligence that is able to cope with and quickly adapt to unforeseen situations, including those with two different and contradictory purposes. This is the only way we would be able to control it – in a bottom-up manner, using the mechanics of human-based logic built on ‘similarity’ – a cognitive term.
Similarity underlies fundamental cognitive capabilities such as memory, categorization, decision making, problem solving and reasoning. Although recent approaches to similarity appreciate the structure of mental representations within an AI, they differ in the processes posited to operate over those representations.
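As a rough illustration of what ‘operating over representations via similarity’ can look like in code – a toy sketch of mine, not a claim about how any particular model works – represent items as feature vectors and treat memory retrieval (or categorization) as picking the stored item most similar to the current situation.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# hypothetical memory: every entry is a made-up feature vector
memory = {
    "apple":  [1.0, 0.9, 0.1],
    "orange": [1.0, 0.8, 0.1],
    "drum":   [0.0, 0.7, 1.0],
}

situation = [0.9, 0.9, 0.2]       # the current probe
recalled = max(memory, key=lambda k: cosine(memory[k], situation))
print(recalled)                   # the stored item most similar to the probe
```

Different theories disagree about how such representations are built and compared, but the basic move – judge similarity, then act on it – is the cognitive hook that the bottom-up control idea above relies on.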
Due to this construction, super intelligence would be unrivalled: it would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its top goal.
It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference. Even a ‘controlled’ super intelligence that was running on an isolated computer, able to interact with the rest of the world only via text interface, might be able to break out of its confinement.
So we’re back at Step 1 – the risks in developing super intelligence include the risk of failing to give it the supergoal of philanthropy – read: it must NOT be built to serve only a select group of humans, but rather humanity as a whole.
More subtly, a super intelligence could settle on a state of affairs that we humans might now judge as desirable but that turns out to be a false utopia, one in which things essential to human survival have been irreversibly lost.
Given the state of the world today – and how we ‘respect’ humanity in general - that’s not a pretty insight.
Lots to think about.