Saturday, August 19, 2006

Disconnected

... back at the end of September - off for some much needed rest and relaxation

In the meantime, check out SHiFT - it will be all about emerging technologies, whether they’re the latest internet trend or the latest social or psychological development.

Thursday, August 17, 2006

The technology of social disorder

Let’s face it: the general press doesn’t represent the public any more. I don't believe it still has a check-and-balance function. The national press corps has become little more than another special-interest lobbying group.

Indeed, the territory the traditional media once occupied is increasingly being deluged by political lobbying, celebrity publicity and product advertising - cleverly staged ‘photo ops’, carefully produced propaganda rallies, preplanned ‘events’, tidal waves of campaign ads and the like.

Afraid of losing further influence, access and the lucrative ad revenues that come from such image-making, major media outlets have found it in their financial interest to quietly yield to new media channels.

But we live in a very dangerous time in which the right to express dissent and to raise questions about the workings of power is seriously imperiled by fundamentalisms of many kinds. Now more than ever, we need to keep the lessons of history foremost in our minds and to defend the critical discourses and practices that enable differing experiences and perspectives to be heard and understood.

What does this downgrading of the media's role say about how our government views its citizens – or, in most cases, consumers?

It suggests that ‘we the people’ are seen not as political constituencies conferring legitimacy on our rulers, but as consumers to be sold policy the way advertisers sell product. In the storm of selling, spin and bullying that has been the big media ‘hic et ubique’, traditional news outlets are finding themselves increasingly drowned out, ghettoized and cowed by public opinion.

Good news. Finally.

Add in a further dynamic (which intellectuals from Marxist-Leninist societies would instantly recognize): Groups denied legitimacy and disdained by the state tend to internalize their exclusion as a form of culpability and often feel an abject, autonomic urge to seek reinstatement at almost any price.

Little wonder that the traditional channels have had a difficult time mustering anything like a convincing counter-narrative to the onslaught of Web 2.0 tools.
Not only did a mutant form of skepticism-free news succeed - at least for a time - in leaving large segments of the population uninformed, but it corrupted the ability of traditional media to function.

All too often they simply found themselves looking into a fun-house mirror of their own making and imagined that they were viewing reality.

Then social networking showed them what the real ‘reality’ was.

In this world of personalized news, information loops have become two-way highways again.
The drive of the higher-ups to ‘stay on message’ and the campaigns to dominate the media environment employ all the sophistication and technology developed by communications experts. They are based on the understanding and use of psychology to market ideas, and on public relations techniques as a fountainhead of artful propaganda so well packaged that most people can't tell it from the real thing.

At the moment, the general population is being pushed forward to the front lines of faith-based truth. And make no mistake; this experiment will continue if we allow it.
Complete conversion would mean not just that the press had surrendered its essential watchdog role, but - a far darker thought – that it might be shunted off to a place where it would not matter.

Although freedom of information has been endlessly extolled in principle, it has had little utility in practice. What possible role could free information play when ‘revelation’ trumps fact and conclusions are preordained?

An honest and truthful information source is logically viewed as a spoiler under such conditions, stepping between those who have and those who need … and those who are true believers.
Information feedback loops play a crucial role in any functioning democracy, but they are ceasing to operate. The media synapses which normally transmit warnings from citizen to government are frozen.

Television networks continue to broadcast and papers continue to publish but, dismissed and ignored, they are becoming irrelevant, except possibly for their entertainment value.
As the quality of information diminishes, normal Janes and Joes on the street – both west and east – are being deprived of the ability to learn of dangers to our societies.

Just as the free exchange of information plays only a small role in the relationship between a fundamentalist believer and his or her God, it is playing a distinctly diminished role in the world of money and politics.

After all, if you already know the answer to a question, what is the use of the media, except to broadcast that answer? The task at hand, then, is never to listen but to stand on the soapbox and sell the gospel to non-believers, transforming the once interactive process between citizens and their leaders.

New social networks conflate the way technological systems operate with modern human communication. We are supposed to believe that we live inside the world of William Gibson’s Neuromancer and that salvation is only attainable via very specific technological expertise unleashed against the system.

Consider the heroes of Hollywood sci-fi blockbusters such as The Matrix whose power lies in their knowledge of ‘the code.’ It is implied that we operate in networks because computers and the Internet have restructured our lives and because global economic systems have turned us into global citizens.

‘Social hacking’ then comes to stand for all forms of critical engagement with preexistent power structures.

I’m just a little too old to believe these new media mantras unquestioningly.

While I can understand that there might be a dearth of knowledge about tactical interventions of previous societies, I am perplexed by the apparent loss of short-term memory of today’s cultural technocentrics.

The shift from internationalism to a more globally inclusive worldview came long before the age of the Internet. It was launched outside Europe and America and emanated from the geopolitical margins.

The process took place across a range of fields of knowledge, culture and politics. This revision of the world picture was catalyzed by postwar decolonization; the Non-Aligned Movement launched in 1961; and civil rights struggles in the developed world, including the Black Power and Chicano movements—all of which invariably affirmed their alliances with Third World revolutions.

This political process was expanded upon by a postcolonial understanding that various diasporas shared transnational connections and that these diasporas were produced by the economics and politics of colonialism and imperialism.

The historical bases of these movements are consistently obscured by the technocentric rhetoric.

Instead of dealing with these histories, modern discussions of globalization and new technology tend to dismiss postcolonial discourse as ‘mere identity politics’.

I am a great admirer of the practice of electronic civil disobedience and those that have used ‘hacktivist’ software such as Floodnet to engage in online protest actions. But I find the willed historical amnesia of new media theory to be quite suspect and even dangerous.

The alienation that many feel in the face of multinational corporate domination is just the latest chapter in a long history of reactions against imperial projects.

Those who argue for increasing the use of social technology, rather than simply raising social consciousness, would do well to examine the history of globalization, networks, social dissent and collective action in order to understand that they are rooted in the geopolitical and cultural margins – not the new world order.

Tuesday, August 15, 2006

Planning superintelligence - Part III

In response to all the emails I have received within the last few days - there are a few non-absolutes that most of you aren’t thinking about. This is long but READ IT before any of you send me another email on this subject.

Subnote: This is to all my friends going through their many mid-life crises – to all the amateur philosophers and ‘new world order’ entrepreneurs. This has as much to do with developing artificial life as with our day-to-day strife.

Remember, all our major decisions (and those of future ‘conscious’ machines) are based on philosophy, psychology and our sense of self-worth. This is true. This just is. It is who we are … or at least who we THINK we are.

Our decisions – whether about our next job, our next sexual partner, our next vacation or our next stock pick – are all based on evolutionary philosophy (and other contemporary factors that acknowledge the limitations on our understanding), the psychological (if not absolute) reality of values, free will and other phenomena, and our desire to live as best we can.

Two points:
  • We live by relative values, biological dispositions, upbringing, habit and perceived choice.
  • We don’t know how much we can modify ourselves, what makes us happy or what we value.
That’s it.

Every decision is based on what we do not know; it has been so all through humanity and even before. Yet I am stating that there are four absolutes. So, let’s start at the beginning.


1. The origin of the universe cannot be understood


We can see no reason why the universe (and the rules within the universe) exists and it doesn’t seem we will ever find one. Any explanation would simply become part of what has to be explained. Given the way our minds are constructed, no final satisfactory explanation seems possible. Even a newly discovered law of physics would pose the question as to why that should be the case.

So-called ‘Big Bang’ theories may explain the origin of the universe, but they only provide an explanation up to a certain point in time, or perhaps to the beginning of time itself. They do not explain why there should be space-time or laws of physics that might allow a universe to emerge from nothing at all.

It’s possible that a final explanation for the origin of the universe exists but cannot be known by us. Such an explanation, even if incomprehensible, seems more likely and more desirable than a universe that came into being from simply nothing. Perhaps this is because the explanation at least satisfies the deep-seated belief that everything has an explanation.

The existence of this incomprehensible explanation might be confirmed by meeting an alien species that convinces us there is more to the brute existence of the universe than we ourselves can comprehend.


2. Morality has no absolute rational foundation


There is no chain of reasoning that has been offered or that we can imagine as to why we must adopt any fundamental moral obligation or value over another or any at all. That we generally do (or act as if we do) is clear, as it is that many values and behaviours are shared and others are not.

No convincing argument has ever been published to avoid Hume’s original observation that an ‘ought’ cannot be derived from any ‘is’. Read: that no agreed upon fact of nature can tell us why we are obligated to actually do something.

Let’s face it - moral agreement and disagreement are ultimately arbitrary. We only judge another’s behaviour morally wrong to indicate its inconsistency with our deepest feelings and principles about how people should treat each other such as respect for an individual’s rights, maximizing the greatest good, acceptance of a social contract, a particular sense of justice, the word of God or whatever we believe comprises and justifies that belief.

This does not prevent us from reasoning with those with whom we share at least some values to show that behaviour is in fact consistent or inconsistent with those shared values and such arguments occupy much of what counts as moral debate.

Some disagreements can also be seen as disagreements over the purported facts of the matter of whether animals are conscious, whether one group of people represents an inherent danger to others or over predictions of what will result from a particular behaviour.

3. The origin of human morality lies in human evolution

It seems likely that our moral sense has its origins in evolution. An innate sense of sympathy, tit-for-tat reciprocity and other similar traits probably provided evolutionary advantages when they first appeared, increasing the likelihood of the survival of the individual or perhaps a group with such shared characteristics.
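To make the evolutionary advantage of tit-for-tat reciprocity concrete, here is a quick toy sketch of the iterated prisoner's dilemma (my own illustration with the standard textbook payoff values, not something from the post): a reciprocator earns the full cooperative payoff against another reciprocator, while limiting its losses against a pure defector.

```python
def play(strat_a, strat_b, rounds=10):
    """Iterated prisoner's dilemma. 'C' = cooperate, 'D' = defect.
    Standard payoffs: mutual cooperation -> 3 each; mutual defection -> 1 each;
    lone defector -> 5; the exploited cooperator -> 0."""
    payoff = {("C", "C"): (3, 3), ("D", "D"): (1, 1),
              ("C", "D"): (0, 5), ("D", "C"): (5, 0)}
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        # Each strategy sees only the opponent's history of moves.
        ma, mb = strat_a(hist_b), strat_b(hist_a)
        pa, pb = payoff[(ma, mb)]
        score_a += pa
        score_b += pb
        hist_a.append(ma)
        hist_b.append(mb)
    return score_a, score_b

# Tit-for-tat: cooperate first, then mirror the opponent's last move.
tit_for_tat = lambda opp: opp[-1] if opp else "C"
always_defect = lambda opp: "D"
```

Two reciprocators lock into mutual cooperation; against a defector, tit-for-tat is exploited exactly once and then holds its ground - the kind of trait selection could plausibly favor.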

Culture and, more generally, the sort of human brain given by evolution that allows for the creation of culture can then take such morality far beyond what was given in evolution.
And anyway, there isn’t just one moral theory.

But could it be possible that there are moral truths even if we cannot establish them by reason alone?

It seems at least possible that some prohibitions could fit this description, given how widely shared both the prohibitions and the beliefs about the effects of violating them are; or, conversely, that some positive principles really exist, given the broad and cross-cultural desirability of certain character virtues such as courage.

But it seems unlikely that there are moral truths of any kind that apply to all significant behaviours given what we can see of the complex way psychological nature unfolds through biology and environment and the range of opinion on and apparent effects of various behaviours.

But again, moral and philosophical disagreement is mostly psychological in origin. Read this again – this has a huge impact on what we do daily and how our society will develop.

Morality is primarily driven by a range of intuitions and emotions, though moral discourse plays a role in persuading others if not a fundamental one in actually generating moral behaviour. Ethical reasoning usually starts with conclusions, not premises.

But we have to admit that some people have unquestioned beliefs they view as absolute. This will always be. It’s wrong but it will always be.

Why? Well, because unquestioned beliefs benefit those who believe them; especially if you ‘choose’ to believe in an unquestionable belief.

Setting out to believe in something without question is not attractive and probably difficult to achieve, even if it can happen more or less unintended.

The reason humans choose to hook ourselves up to ‘experience machines’ (read: religion or other theological beliefs) that can deliver any kind of reality we choose is that we value our experience being perceived as real, in addition to the experiences themselves.

Psychotherapy seems preferable for many because they think it effects its improvements by really transforming us -- our beliefs, behaviours and emotions - rather than by giving us a drug-induced experience.

In the end, drugs are not all that different from psychotherapy or any other form of personality manipulation including religious conversion.

4. We don’t really have free will but act as if we do

Brains are conscious but we don’t know how. Consciousness is a puzzle and probably always will be. It seems the brain alone gives rise to consciousness; there is no good evidence for a soul or for irreducible pieces of consciousness making us self-aware but we don’t understand how the brain does it and probably never will.

No matter how much brain function we can imagine understanding and no matter how tightly correlated that function is shown to be with the minutiae of these experiences, there appears to be an irreducible ‘explanatory gap’ between the most we can ever say about neurons or electrical fields in the brain and the tangible experience of reality.

If the brain alone produces consciousness then it seems possible that an artificial machine could be built that would be conscious. But we can’t see how the physiology of the brain could produce consciousness and we may never be able to know how to construct such a conscious machine; except, perhaps, as an indirect or accidental consequence of some construction.

Therefore - we’re unlikely to be able to explain consciousness, but machines can and will have a form of consciousness, or mimic it.

As a society, we will have to integrate this form of consciousness under ‘laws’ and ‘beliefs’ that will contradict our moral and ethical foundations. I predict a social breakdown when it finally happens.

Monday, August 14, 2006

Planning superintelligence - Part II

Ok – let’s take this discussion a bit further, but first let’s step back a step. In order to discuss how to implement superintelligence, let’s look at where this aspect of artificial life MUST be derived from. What tricks does it have up its sleeves?

Super intelligence is complicated. It is body AND soul. It is an artificial mind. It encapsulates several different schools of thought that have to work together. We have to look at:

  • Computational biology: bio-networks, development, evolution, and prebiotic evolution and artificial chemistry
  • Complex systems and networks: information and complexity, collective behavior and population dynamics, evolutionary and collective games
  • Embodied cognition: embodiment and behavior, language and learning
  • Achievements and open problems: biologically-inspired computing and technology, and formal as well as philosophical models

So, as Ramos stated a while ago, ‘the emergence of complex behavior in any system consisting of interacting simple elements is among the most fascinating phenomena of our world.’

Imagine a ‘machine’ where there is no pre-commitment to any particular representational scheme: the desired behavior is distributed and roughly specified simultaneously among many parts but there is minimal specification of the mechanism required to generate that behavior, read: the global behavior evolves from the many relations of multiple simple behaviors.

In formal terms, we are talking about a machine or artificial organism that avoids specific constraints and utilizes multiple, low-level implicit bio-inspired mechanisms that end in a transaction.

These transactions (decisions or actions) will be based on almost every field of today’s scientific interest, ranging from coherent pattern formation in physical and chemical systems, to the motion of swarms of animals in biology, and the behavior of social groups.
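To make ‘coherent pattern formation’ and ‘the motion of swarms’ concrete, here’s a minimal Vicsek-style flocking toy (my own sketch, not Ramos’s model; every parameter value is an arbitrary assumption): each agent merely averages the headings of its nearby neighbours plus a little noise, yet global alignment emerges with no leader and no central plan.

```python
import math
import random

def simulate_swarm(n=50, steps=100, speed=0.05, radius=0.3, noise=0.1, seed=1):
    """Vicsek-style swarm on a unit torus: agents align their headings with
    neighbours within `radius`. Returns (initial, final) global alignment."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    ang = [rng.uniform(-math.pi, math.pi) for _ in range(n)]

    def order(angles):
        # Order parameter in [0, 1]: 1 means every agent moves the same way.
        sx = sum(math.cos(a) for a in angles)
        sy = sum(math.sin(a) for a in angles)
        return math.hypot(sx, sy) / len(angles)

    initial = order(ang)
    for _ in range(steps):
        new_ang = []
        for xi, yi in pos:
            # Mean heading of all neighbours (wrap-around distances).
            sx = sy = 0.0
            for (xj, yj), aj in zip(pos, ang):
                dx = min(abs(xi - xj), 1 - abs(xi - xj))
                dy = min(abs(yi - yj), 1 - abs(yi - yj))
                if dx * dx + dy * dy <= radius * radius:
                    sx += math.cos(aj)
                    sy += math.sin(aj)
            new_ang.append(math.atan2(sy, sx) + rng.uniform(-noise, noise))
        ang = new_ang
        pos = [((x + speed * math.cos(a)) % 1.0, (y + speed * math.sin(a)) % 1.0)
               for (x, y), a in zip(pos, ang)]
    return initial, order(ang)

before, after = simulate_swarm()
```

No agent knows the global state; the ‘decision’ of the swarm to move one way is exactly the kind of globally rough, locally specified behavior described above.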

In the life and social sciences, one is usually convinced that the evolution of social systems is determined by numerous factors – cultural, sociological, economic, political, ecological and so on.
However, in recent years, the development of the interdisciplinary field of the ‘science of complexity’, along with ‘artificial life’, has led to the insight that complex dynamic processes may also result from simple interactions.

Moreover, at a certain level of abstraction, one can also find many common features between complex structures in very different fields. Francis Heylighen, editor of the Principia Cybernetica Project, points precisely to this paradigm shift with a remarkable historical perspective, namely the shift within the social sciences from using biology as a metaphor to, more recently, drawing on complexity science.

In 'The Global Superorganism: an Evolutionary-Cybernetic Model of the Emerging Network Society', he writes:

‘Recently, the variety of ideas and methods that is commonly grouped under the head of Artificial Life has led to understanding that artificial organisms can be self-organizing, adaptive systems. Most processes in such systems are decentralized, non-deterministic and in constant flux. They thrive on noise, chaos and creativity. Their collective swarm-intelligence emerges out of the free interactions between individually autonomous components.’

In fact, as one can see, those decision making processes or algorithms should be viewed as behaving like a swarm.

Rather than take living things apart, super intelligence will attempt to put living things together within a bottom-up approach. That is to say, it cannot copy life-as-we-know-it but must delve into the realm of life-as-it-could-be. It will have to generate lifelike behavior and focus on the problem of creating behavior generators that are inspired by nature itself (even if the results that emerge from the process have no analogues in the natural world).

The key insight into the natural method of behavior generation is that it is reflected in the ‘architecture’ of natural living organisms, which consist of many millions of parts - each one of which has its own behavioral repertoire.
As we all know by now, living systems are highly distributed and quite massively parallel.

So this super intelligence must be a property of a system:
  • where the collective behaviors of entities interact locally with their environment resulting in the emergence of coherent global patterns
  • that provides a basis with which it is possible to explore collective (or distributed) problem solving without centralized control or the provision of a global model
  • that applies the formation of a coherent social collective intelligence from the observation and evaluation of individual behaviors
  • that stresses the role played by the environmental media - driving force for societal learning - as well as by the positive and negative feedback produced by the many interactions among independent agents

Finally, presenting a simple intelligence based on the above features, one can address the collective adaptation of a social community to a cultural, environmental, contextual and informational dynamic landscape for the purpose of complex decision-making processes – read: three-dimensional mathematical functions that change over time (non-causal four-dimensional decision-making processes for you 'high-math' types out there).

Therefore, the super intelligence must be a collective intelligence that is able to cope with and quickly adapt to unforeseen situations, including those with two different and contradictory purposes. This is the only way we would be able to control it: in a bottom-up manner, using the mechanics of human-based logic based on ‘similarity’ – a cognitive term.

Similarity underlies fundamental cognitive capabilities such as memory, categorization, decision making, problem solving and reasoning. Although recent approaches to similarity appreciate the structure of mental representations within an AI, they differ in the processes posited to operate over these representations.
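One simple way to make ‘similarity’ operational is prototype-based categorization with cosine similarity: assign an item to whichever stored prototype it most resembles. This is only an illustrative sketch - the feature vectors and categories below are invented for the example, and real similarity models in cognitive science are far richer.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors (0 for a zero vector)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def categorize(item, prototypes):
    """Assign `item` to the category whose prototype it most resembles --
    a similarity-driven stand-in for memory retrieval and categorization."""
    return max(prototypes, key=lambda name: cosine(item, prototypes[name]))

# Hypothetical features: (has_wings, has_fur, lays_eggs, barks)
prototypes = {"bird": (1.0, 0.0, 1.0, 0.0), "dog": (0.0, 1.0, 0.0, 1.0)}
```

A noisy, never-before-seen item still lands in the right category because the decision rests on graded resemblance rather than exact matching - which is the cognitive point.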

Due to this construction, super intelligence would be unrivalled: it would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its top goal.

It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference. Even a ‘controlled’ super intelligence that was running on an isolated computer, able to interact with the rest of the world only via text interface, might be able to break out of its confinement.

So we’re back at Step 1 - the risks in developing super intelligence include the risk of failure to give it the supergoal of philanthropy - read: NOT to build it so that it serves only a select group of humans but rather the whole of humanity in general.

More subtly, a super intelligence could decide that the state of affairs that we humans might now judge as desirable turns out to be a false utopia in which things essential to human survival may be irreversibly lost.

Given the state of the world today – and how we ‘respect’ humanity in general - that’s not a pretty insight.

Lots to think about.

Monday, August 07, 2006

Planning superintelligence - Part I

I received a vast amount of email about the last post - the socialization of autonomous robots. I was asked a most intriguing question: ‘How should we 'socialize' superhuman intelligence?’

I believe that superintelligence WILL be the last invention humans ever need to make – we’ll be there soon - but from Bill Joy to Marshall McLuhan, there has been this 'scary' feeling that we are opening Pandora's Box.

What IS so scary? There are several ideas about this so I am borrowing a bit but let's look at this a bit deeper.

The ethical issues related to the future creation of machines with general intellectual capabilities far outstripping those of humans are quite different and distinct from any ethical problems arising within our current societies. Superintelligence is different.

Superintelligence would not be just another technological development; it WOULD be the most important invention ever made and WOULD lead to explosive progress in several (if not all) scientific and technological fields.

But what about moral thinking? How do we 'socialize' this ability to think? Should we control it? How do we control it? Can we control it?

Ethical questions all. But since ethics is a cognitive pursuit, would superintelligence surpass humans in the quality of its ethics and morals? Wouldn't the superintelligence simply know when to stop developing?

By definition, a superintelligence is any form of collective intellect that vastly outperforms the best human brains. Rather simple, but this definition leaves an obviously open question: how the superintelligence is implemented. It could be a digital computer, an ensemble of networked computers, cultured cortical tissue (a biological computer) or something else we haven’t quite seen as of yet - the scary part is how it will be implemented.

I’m not talking about Deep Blue or a cluster of Crays but more about the result of growing hardware performance and increased ability to implement algorithms and architectures similar to those used by human brains. We are learning how right now. It's just a matter of time but it WILL happen in our lifetimes.

It will.

First, let’s all agree that superintelligence is not just another application or technology; not just another tool that will add incrementally to human capabilities.

Superintelligence is radically different.

Given a superintelligence’s intellectual superiority, it would be much better at doing scientific research and technological development than any human and possibly better even than all humans taken together.

Therefore, it can be assumed that technological progress in all other fields will be accelerated by the arrival of advanced artificial intelligence.

It is likely that new technology (and applications thereof) will be speedily developed by the first superintelligence, building on the current trends of today. By nature of who and what will develop this new brain, these technologies will most likely be molecular manufacturing, advanced military weaponry and space travel, including things like new propulsion techniques and von Neumann probes (self-reproducing interstellar probes).

Health solutions will come much later. Remember, a completely healthy population creates huge issues - both policy and economic - for governments that will have a dramatic short term negative effect. Governments - and believe me, it will be a government agency that will first create superintelligence - will first use this additional power to protect themselves. The general population will come a distant second.

But if you think logically, we are also looking at:

  • neural uploading (neural or sub-neural scanning of a particular brain and implementation of the same algorithmic structures on a computer in a way that preserves memory and personality). It's starting already.
  • elimination of aging and disease
  • fine-grained control of human mood, emotion and motivation

Just these three add up to either the reanimation of cryonics patients and/or fully realistic virtual reality.

Next, logically, it will be natural that superintelligence will lead to more advanced superintelligence. Not only would superintelligence create this but also improve it and make its own ‘source code’ - artificial minds that can be easily copied so long as there is hardware available to store them.

The same holds for human uploads. Hardware aside, the marginal cost of creating an additional copy of an upload or an artificial intelligence after the first one has been built is near zero.

Artificial minds could therefore quickly come to exist in great numbers, although it is possible that efficiency would favor concentrating computational resources in a single super-intellect.

As you can see, from the beginning, the emergence of superintelligence will be sudden. While it may thus take quite a while before we get superintelligence, the final stage may happen swiftly.

One day it won’t be there … and the next day, it will.

Will we be ready?

Again, superintelligence should not necessarily be conceptualized as a mere tool. General superintelligence would be capable of independent initiative and of making its own plans and will be an autonomous agent.

C'mon - its own thoughts and plans? Humanity is doomed!

But listen - there is nothing implausible about the idea of a superintelligence having as its supergoal to serve humanity or some particular human, with no desire whatsoever to revolt or to 'liberate' itself.

It also seems perfectly possible to have a superintelligence whose sole goal is something completely arbitrary, such as to manufacture as many paperclips as possible, and who would resist with all its might any attempt to alter this goal.

For better or worse, artificial intellects need not share our human 'motivational' and greedy tendencies.

It could be that the cognitive architecture of an artificial intellect may also be quite unlike that of humans. Artificial intellects may find it easy to guard against some kinds of human error and bias, while at the same time being at increased risk of making mistakes that not even the most hapless human would make.

So - one should be wary of assuming that the nature and behaviors of artificial intellects would necessarily resemble those of human (or other animal) minds.

As I stated above, ethics is a cognitive pursuit. A superintelligence could do it better than human thinkers. The same holds for questions of policy and long-term planning; when it comes to understanding which policies would lead to which results and which means would be most effective in attaining given aims, a superintelligence would outperform our feeble minds.

But the option to defer many decisions to the superintelligence does not mean that we can afford to be complacent in how we construct the superintelligence.

On the contrary, the setting up of initial conditions, and in particular the selection of a top-level goal for the superintelligence, is of the utmost importance.

Our entire future may hinge on how we solve these problems.

Whoa - I gotta think some more – this gets complicated. Watch out for a Part II.