Wednesday, July 19, 2006

Abusive agents

In discussions with my 'darker' friends, we've noticed that research into developing socially intelligent agents has increased over the last decade with the main focus being on how they can enhance human computer interaction. Remember - enhance human computer interaction

Most research related to the use of embodied agents has tended to concentrate on the benefits that these agents bring to an standard interface and how they can arouse positive emotional states that enhance cognitive functions (read: learning and problem solving skills).

But the flip side of this research could concentrated on the potential that these agents have to manipulate our behavior for unethical purposes.

Hard to believe? It's happening right now.

Look at the potential negative impacts that embodied agents have on an interaction? As they are becoming more socially intelligent, there is the increased possibility that they will be able to 'abuse' us (the user) in a number of ways.

As i see it, we seem to treat computers as social entities. Its interesting to note how we can make use of social skills in human-human interaction and use them in HCI to enhance human computer relations. Ask Jeeves and Amazon are just a few examples - Lufthansa and Swiss use agents to 'suggest' different flying options 'more suited to your requirements'.

Foriegn or 3rd party agents are building and maintaining long-term relationships with your agents right now and they are useing many of the relational strategies that humans often use (read: small talk and talk of the relationship, trading of 'secrets' to enforce 'trust').

Alarmed? Come on.

Clearly, we seem to prefer interacting with embodied agents that have some form of social intelligence. This is despite the fact that the level of intelligence demonstrated by such systems is very limited in comparison to our own.

However, with computer processing speeds doubling every year, I believe this ability is likely to change drastically in the near future.

Kurzewil predicts that by 2010 we will have virtual humans that look and act much like real humans, although they will still be unable to pass the Turing Test.

By 2030, he believes that it will be difficult to distinguish between virtual and biological humans. Singualrity.

This potential increase in agent intelligence and representation raises a number of troubling issues.

Our tendency to treat computers as social actors suggests that socially skilled agents may be able to utilize many of the strategies and techniques that humans use to manipulate other peoples' behavior.

For example, in human-human interaction, we tend to act on the advice of people we like and trust rather than people we dislike and distrust.

It is possible that the same principle might apply in HCI. If, in fact, we like and trust socially skilled agents over ones which have no such skills, these agents may be able to manipulate human behavior more effectively than agents with no social skills built into them.

Wow - could this be a new type of social hacking, online or virtual memes that spill-over into the physical world?

Could a government or other 'subversive' parties create and maintain an 'independant' blog that is totally automated to spit out gramatically sincere propoganda? Could it be happening right now?

Socially intelligent agents also have a number of advantages over humans when attempting to manipulate our behavior, including their ability to persistently make use of a wide variety of persuasive techniques without ever becoming tired or deterred (read: asking somebody to register for a product every time they start up their computer).

They can also make requests at times when it is more likely that the request will be complied with (read: a computer game or product that asks children to provide personal details before being able to progress to the next stage). Remember - these agents can also analyze data and situations 1000 times faster than we can.

In some circumstances, users may also trust computers more than they do other humans. Whether deserved or not, some professions have a reputation for being manipulative and deceptive (read: Geneva landlords, used car salesmen) and people often tend to be cautious when interacting with such people.

However, if users were to interact with a computational sales agent, they may drop their guard and be more open to manipulation as computers generally do not have a strong reputation for deception and attempting to manipulate peoples' behavior.

Is it acceptable for agents to manipulate (perhaps deceive) people in this way? Simply in order to help companies sell more products? Governments catch more terrorist or tax evaders?

Perhaps so, as long as the user believes that they have received good value for their money and do not feel exploited. But would they even know IF they were being exploited?

This is a form of manipulation (and deception), and most people are aware that many salespeople are like this. While this may not please people, they are unlikely to mind if they feel they have received value for money and a good service.

On the other hand, if customers feel cheated they will be unlikely to return with their money again. As embodied agents' social skills improve over the coming years, the danger of them being used to manipulate our behavior will increase.

In fact, there are many embodied agents available today that attempt to manipulate peoples' behavior in questionable ways.

The success of agents such as these is yet to be fully tested, but the potential for them to manipulate user behavior certainly exists.

As we move more towards managing computer systems rather than directly manipulating them, we will work more closely with agents in everyday activities as they undertake tasks on our behalf.

This means that people are likely to develop long-term relationships with agent entities in their interactions, which (who) they will grow to know and trust.

It may be that these agents are then in a very strong position to alter their behavior and start becoming more and more manipulative over time (like a cult: nice to begin with, drawing a person in and then changing and starting to abuse the trust that has been created).

This may happen by initial malicious design, or more intriguingly, by external people cracking an agent and making it turn on its user!

Perhaps a new form of virus writer may emerge.

It is vital that we begin studying in more detail how socially intelligent agents can manipulate our behavior.

A deeper understanding of these areas will enable us to take steps toward avoiding agent abuse against users, both now and in the future.

Remember - Evolution never refactors its code. It is far easier for evolution to stumble over a thousand individual optimizations than for evolution to stumble over two simultaneous changes which are together beneficial and separately harmful.

Now this is the deep part - a bit heavy but it will explain WHY manipulative agents will happen.

Human intelligence, created by evolution, is characterized by evolution's design signature. The vast majority of our genetic history took place in the absence of deliberative intelligence; our older cognitive systems are poorly adapted to the possibilities inherent in deliberation.

Evolution has applied vast design pressures to us but has done so very unevenly; evolution's design pressures are filtered through an unusual methodology that works far better for hand-massaging code than for refactoring program architectures.

Now imagine a agent built in its own presence by intelligent designers, beginning from primitive and awkward subsystems that nonetheless form a complete supersystem.

Imagine a development process in which the elaboration and occasional refactoring of the subsystems can coopt any degree of intelligence, however small, exhibited by the supersystem.

The result would be a fundamentally different design and a new approach to Artificial Intelligence which Eliezer Yudkowsky termed 'seed AI'.

Seed AI is AI designed for self-understanding, self-modification, and recursive self-improvement.

This has implications both for the functional architectures needed to achieve primitive intelligence, and for the later development of the AI if and when its holonic self-understanding begins to improve - but improvement is in the eye of the beholder - improved cunning and self defense techniques will eneable an agent to 'defend' itself (read: HAL)

Seed AI is not a workaround that avoids the challenge of general intelligence by bootstrapping from an moral core; seed AI will begin to yield benefits once there is some degree of available intelligence to be utilized.

The later consequences of seed AI (such as true negative self-improvement) only show up after the agent has achieved significant holonic understanding and general intelligence.

The question is, 'What happens afterwards?' And this is a serious question.

0 Comments:

Post a Comment

<< Home