Disconnected
In the meantime, check out SHiFT - it will be all about emerging technologies, whether they’re the latest internet trend or the latest social or psychological development.
Every decision is based on what we do not know; this has been true throughout human history and even before. I am stating that there are four absolutes. So, let’s start at the beginning.
1. The origin of the universe cannot be understood
We can see no reason why the universe (and the rules within the universe) exists and it doesn’t seem we will ever find one. Any explanation would simply become part of what has to be explained. Given the way our minds are constructed, no final satisfactory explanation seems possible. Even a newly discovered law of physics would pose the question as to why that should be the case.
So-called ‘Big Bang’ theories may explain the origin of the universe, but they only provide an explanation up to a certain point in time, or perhaps to the beginning of time itself. They do not explain why there should be space-time or laws of physics that might allow a universe to emerge from nothing at all.
It’s possible that a final explanation for the origin of the universe exists but cannot be known by us. Such an explanation, even if incomprehensible, seems more likely and more desirable than a universe that came into being from simply nothing. Perhaps this is because the explanation at least satisfies the deep-seated belief that everything has an explanation.
The existence of this incomprehensible explanation might be confirmed by meeting an alien species that convinces us there is more to the brute existence of the universe than we ourselves can comprehend.
2. Morality has no absolute rational foundation
No chain of reasoning has been offered, or can be imagined, that tells us why we must adopt any fundamental moral obligation or value over another, or any at all. That we generally do (or act as if we do) is clear, as is the fact that many values and behaviours are shared while others are not.
No convincing argument has ever been published that avoids Hume’s original observation that an ‘ought’ cannot be derived from any ‘is’. Read: no agreed-upon fact of nature can tell us why we are obligated to actually do something.
Let’s face it - moral agreement and disagreement are ultimately arbitrary. We only judge another’s behaviour morally wrong to indicate its inconsistency with our deepest feelings and principles about how people should treat each other: respect for an individual’s rights, maximizing the greatest good, acceptance of a social contract, a particular sense of justice, the word of God, or whatever we believe comprises and justifies that belief.
This does not prevent us from reasoning with those with whom we share at least some values, to show that a behaviour is in fact consistent or inconsistent with those shared values. Such arguments occupy much of what counts as moral debate.
Some disagreements can also be seen as disagreements over the purported facts of the matter: whether animals are conscious, whether one group of people represents an inherent danger to others, or what will result from a particular behaviour.
3. The origin of human morality lies in human evolution
It seems likely that our moral sense has its origins in evolution. An innate sense of sympathy, tit-for-tat reciprocity and other similar traits probably provided evolutionary advantages when they first appeared, increasing the likelihood of the survival of the individual or perhaps a group with such shared characteristics.
Culture and, more generally, the sort of human brain given by evolution that allows for the creation of culture can then take such morality far beyond what was given in evolution.
And anyway, there isn’t one moral theory.
But could it be possible that there are moral truths even if we cannot establish them by reason alone?
It seems at least possible that some prohibitions could fit this description, given how widely shared both the prohibitions and the belief in the effect on the individual of violating them are; or, conversely, that some positive principles really exist, given the broad, cross-cultural desirability of certain character virtues such as courage.
But it seems unlikely that there are moral truths of any kind that apply to all significant behaviours given what we can see of the complex way psychological nature unfolds through biology and environment and the range of opinion on and apparent effects of various behaviours.
But again, moral and philosophical disagreement is mostly psychological in origin. Read this again – this has a huge impact on what we do daily and how our society will develop.
Morality is primarily driven by a range of intuitions and emotions, though moral discourse plays a role in persuading others if not a fundamental one in actually generating moral behaviour. Ethical reasoning usually starts with conclusions, not premises.
But we have to admit that some people hold unquestioned beliefs that they view as absolute. This will always be the case. It’s wrong, but it will always be so.
Why? Well, because unquestioned beliefs benefit those who believe them; especially if you ‘choose’ to believe in an unquestionable belief.
Setting out to believe in something without question is not attractive and is probably difficult to achieve, even if it can happen more or less unintentionally.
Humans choose to hook ourselves up to ‘experience machines’ (read: religion or other theological beliefs) that could deliver any kind of reality we chose because we value our experience being perceived as real, in addition to the experiences themselves.
Psychotherapy seems preferable for many because they think it effects its improvements by really transforming us - our beliefs, behaviours and emotions - rather than by giving us a drug-induced experience.
In the end, drugs are not all that different from psychotherapy or any other form of personality manipulation including religious conversion.
4. We don’t really have free will but act as if we do
Brains are conscious but we don’t know how. Consciousness is a puzzle and probably always will be. It seems the brain alone gives rise to consciousness; there is no good evidence for a soul or for irreducible pieces of consciousness making us self-aware but we don’t understand how the brain does it and probably never will.
No matter how much brain function we can imagine understanding and no matter how tightly correlated that function is shown to be with the minutiae of these experiences, there appears to be an irreducible ‘explanatory gap’ between the most we can ever say about neurons or electrical fields in the brain and the tangible experience of reality.
If the brain alone produces consciousness then it seems possible that an artificial machine could be built that would be conscious. But we can’t see how the physiology of the brain could produce consciousness and we may never be able to know how to construct such a conscious machine; except, perhaps, as an indirect or accidental consequence of some construction.
Therefore - we’re unlikely to be able to explain consciousness, but machines can and will have a form of consciousness, or mimic it.
As a society, we will have to integrate this form of consciousness into our ‘laws’ and ‘beliefs’, which will contradict our moral and ethical foundations. I predict a social breakdown when it finally happens.
Ok – let’s take this discussion a bit further, but first step back a step. In order to discuss how to implement superintelligence, let’s look at where this aspect of artificial life MUST be derived from. What tricks does it have up its sleeve?
Superintelligence is complicated. It is body AND soul. It is an artificial mind. It encapsulates several different schools of thought that have to work together, and we have to look at how they fit together.
So, as Ramos stated a while ago, ‘the emergence of complex behavior in any system consisting of interacting simple elements is among the most fascinating phenomena of our world.’
Imagine a ‘machine’ with no pre-commitment to any particular representational scheme: the desired behavior is distributed and roughly specified simultaneously among many parts, but there is minimal specification of the mechanism required to generate that behavior. Read: the global behavior evolves from the many relations of multiple simple behaviors.
In formal terms, we are talking about a machine or artificial organism that avoids specific constraints and utilizes multiple, low-level implicit bio-inspired mechanisms that end in a transaction.
These transactions (decisions or actions) will be based on almost every field of today’s scientific interest, ranging from coherent pattern formation in physical and chemical systems, to the motion of swarms of animals in biology, and the behavior of social groups.
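The idea of complex global behavior evolving from many simple local rules can be made concrete with a toy sketch (my own illustration, not drawn from any source quoted here): an elementary cellular automaton, where each cell updates using only its own state and that of its two neighbors, yet the row as a whole develops intricate, hard-to-predict structure.

```python
# Emergence in miniature: each cell follows a trivial local rule (look at
# yourself and your two neighbors), yet the global pattern that unfolds is
# complex - here, Wolfram's elementary automaton "Rule 30".

def step(cells, rule=30):
    """Advance one generation; the rule number encodes all 8 local cases."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left, me, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (me << 1) | right   # neighborhood as a number 0..7
        nxt.append((rule >> idx) & 1)           # look up that bit of the rule
    return nxt

def run(width=31, generations=15, rule=30):
    """Start from a single 'on' cell and collect successive generations."""
    cells = [0] * width
    cells[width // 2] = 1
    history = [cells]
    for _ in range(generations):
        cells = step(cells, rule)
        history.append(cells)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

No cell ‘knows’ anything about the triangle-filled global pattern that prints out; it emerges entirely from the repeated application of the eight-entry local rule - the same flavor of bottom-up specification the passage above describes.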
In the life and social sciences, one is usually convinced that the evolution of social systems is determined by numerous cultural, sociological, economic, political and ecological factors.
However, in recent years, the development of the interdisciplinary fields of ‘science of complexity’ and ‘artificial life’ has led to the insight that complex dynamic processes may also result from simple interactions.
Moreover, at a certain level of abstraction, one can also find many common features between complex structures in very different fields. Francis Heylighen, mentor of the Principia Cybernetica Project, points precisely to this paradigm shift with a remarkable historical perspective, namely concerning the view within the social sciences, which has used biology as a metaphor and, more recently, metaphors from complexity science.
In 'The Global Superorganism: an Evolutionary-Cybernetic Model of the Emerging Network Society', he writes:
‘Recently, the variety of ideas and methods that is commonly grouped under the head of Artificial Life, has led to understanding that artificial organisms can be self-organizing, adaptive systems. Most processes in such systems are decentralized, non-deterministic and in constant flux. They thrive on noise, chaos and creativity. Their collective swarm-intelligence emerges out of the free interactions between individually autonomous components.’
Lots to think about.
Health solutions will come much later. Remember, a completely healthy population creates huge issues - both policy and economic - for governments, with a dramatic short-term negative effect. Governments - and believe me, it will be a government agency that will first create superintelligence - will first use this additional power to protect themselves. The general population will come a distant second.
But if you think logically, we are also looking at:
Just these three add up to either the reanimation of cryonics patients and/or fully realistic virtual reality.
Next, logically, superintelligence will lead to more advanced superintelligence: not only could a superintelligence create a successor, it could improve its own ‘source code’ - artificial minds that can be easily copied so long as there is hardware available to store them.
The same holds for human uploads. Hardware aside, the marginal cost of creating an additional copy of an upload or an artificial intelligence after the first one has been built is near zero.
Artificial minds could therefore quickly come to exist in great numbers, although it is possible that efficiency would favor concentrating computational resources in a single super-intellect.
As you can see, the emergence of superintelligence may be sudden: while it may take quite a while before we get superintelligence at all, the final stage may happen swiftly.
One day it won’t be there … and the next day, it will.
Will we be ready?
Again, superintelligence should not necessarily be conceptualized as a mere tool. A general superintelligence would be capable of independent initiative and of making its own plans, and so would be an autonomous agent.
C'mon - its own thoughts and plans? Humanity is doomed!
But listen - there is nothing implausible about the idea of a superintelligence having as its supergoal to serve humanity or some particular human, with no desire whatsoever to revolt or to 'liberate' itself.
It also seems perfectly possible to have a superintelligence whose sole goal is something completely arbitrary, such as manufacturing as many paperclips as possible, and that would resist with all its might any attempt to alter this goal.
For better or worse, artificial intellects need not share our human 'motivational' and greedy tendencies.
It could be that the cognitive architecture of an artificial intellect is quite unlike that of humans. Artificial intellects may find it easy to guard against some kinds of human error and bias, while at the same time being at increased risk of making mistakes that not even the most hapless human would make.
So - one should be wary of assuming that the nature and behaviors of artificial intellects would necessarily resemble those of human (or other animal) minds.
As I stated above, ethics is a cognitive pursuit. A superintelligence could do it better than human thinkers. The same holds for questions of policy and long-term planning; when it comes to understanding which policies would lead to which results and which means would be most effective in attaining given aims, a superintelligence would outperform our feeble minds.
But the option to defer many decisions to the superintelligence does not mean that we can afford to be complacent in how we construct the superintelligence.
On the contrary, the setting up of initial conditions, and in particular the selection of a top-level goal for the superintelligence, is of the utmost importance.
Our entire future may hinge on how we solve these problems.
Whoa - I gotta think some more – this gets complicated. Watch out for a Part II.