Thursday, July 27, 2006

The ethics of social robots

A Skype conference today, and the question was, 'When will robots finally be here?'

To answer that, let's look at what just happened first ....

It's interactive: like the telephone and the telegraph (and unlike radio or television), people can overcome great distances to communicate with others almost instantaneously.

It's a mass medium: like radio and television (and unlike the telephone or fax), information can reach millions of people at the same time... fast!

It has been vilified as a powerful new tool for the devil: awash in porn, addicting users for hours each day, keeping them away from family and friends, breeding depression and loneliness, and further weakening neighborhood and community ties.

Whew ... but wait - there's more!

It has been hailed by Western leaders as the ultimate weapon in the battle against totalitarianism and tyranny, and credited by Federal Reserve Board Chairman Alan Greenspan with creating a 'new economy.'

It was denounced by the head of the Miss France committee as 'an uncontrolled medium where rumormongers, pedophiles, prostitutes, and criminals could go about their business with impunity' after it facilitated the worldwide spread of rumors that the reigning Miss France was, in fact, a man.

'I’m terrified by this type of media!' he/she said.

OK – we all know what I'm talking about, but that's the human angle. I'm thinking about Robots today. What do you think their opinion is?

Listen, electronic agents have been using this 'new medium' for 20 years – long before humans created a way to interface with the zeros and ones.

Why don't we ask them?

Social Robots (as opposed to industrial robots) will become increasingly important in our society, but, oddly enough, their social role remains unclear.

So, how do we define and enable robots to follow human social conventions?

Humans have all sorts of conventions that make interaction easier, including how to pass each other in hallways, how to go through doors, how to move in and out of elevators, and how to enter and wait in line.

There are several schools of thought on techniques that enable robots to follow such social conventions by modifying their nominal behaviors. Eventually, one would surmise, robots will learn such conventions on their own.
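To make that concrete, here is a toy sketch (entirely my own invention, not any published system) of what modifying a nominal behavior with a social convention might look like: a hallway robot that drifts to the right when a human approaches head-on, the way people do when passing each other.

def steer(nominal_heading_deg: float, human_ahead: bool, bias_deg: float = 15.0) -> float:
    """Return an adjusted heading in degrees; positive bias means 'drift right'."""
    # Social convention layered on top of the nominal behavior:
    # hold course in an empty hallway, yield right when someone approaches.
    return nominal_heading_deg + (bias_deg if human_ahead else 0.0)

print(steer(0.0, human_ahead=False))  # empty hallway: hold course -> 0.0
print(steer(0.0, human_ahead=True))   # oncoming human: drift right -> 15.0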

Past discussions were all based on the 'Media Equation' - which explores how people treat computers as social actors. Now this is being applied to robots. But the Media Equation is too limited in that it studies human-robot interaction by focusing only on robot abuse.

But that is changing .... attitudes are changing. Humans are starting to be nice to robots - respecting them.

Robots have long been important in our society, and robotic technologies that integrate information technology with physical embodiment are now robust enough to be deployed in industrial, institutional AND domestic settings.

The United Nations, in a recent robotics survey (PDF), identified personal service robots as having the highest expected growth rate over the next few years.

These robots will help the elderly, support humans in the house and improve communication between distant partners but we need research vehicles for the study of human-robot communication.

Why is clear enough. How these robots should behave and interact with humans remains largely unclear. When designing robots, we need to make judgments on what technologies to pursue, what systems to make and how to consider context. Robots in a human society? Or just 'society'?

Researchers and designers have only just begun to understand these critical issues of how to ‘design’ robots as social actors and how to 'train' robots to act within a society.

This is more than AI, high math and way more than appearance – it’s really about social class systems and strongly questions the inherent ‘ethics’ of technology. Should rules of social conduct apply to technology itself and the products of technology implementations?

Wow, that’s a big question.

The Media Equation started the discussion on how social norms apply to robots and at the basic level, robots often have an anthropomorphic embodiment and human-like behavior.

But what about the interaction - under what conditions should humans treat robots like social actors, or even like humans? What happens when this social illusion shatters and we treat them again like machines that can be switched off, sold or torn apart without a bad conscience?

Are robots punishable? Can 'killing' a robot be socially acceptable?

Ultimately, this discussion eventually leads to legal considerations of the status of robots in our society.

The first studies around this theme are becoming available, but they take a very 'non-human' approach. To examine this borderline in human-robot interaction it is necessary to step far outside normal conduct (and I'm not talking about the Scientological view).

Logic would say that the next step is that robots must be designed and implemented so that they will be capable of performing legally binding actions (as electronic transactions do today). These advances necessitate a thorough treatment of their legal status and consequences.

But first we must demonstrate that these 'electronic agents' behave in ways structurally similar to human agents. Then one needs to know how declarations of intention stated by robots relate to ordinary declarations of intention given by natural persons or legal entities, and how the actions of robots have to be classified under national law.

But does this mean that robots, in the social context, must have a national citizenship and respect national laws over and above the very basic Three Laws of Robotics written by Isaac Asimov?

But then, if the ‘electronic person’ is a legal nationalized entity, what civil and 'robot rights' do they have?

As you see, we are not ready – the ethics of this type of technology haven't been discovered yet. There are so many questions that have to be answered, and so many that haven't even been asked.

Time to begin.

They've started to build the robotic society already - but they forgot to set the rules.

Wednesday, July 26, 2006

MySpace vs. the US Military

Recent events show that, in the new global battlespace created by information technology, the meaning of security, the hallmarks of asymmetric warfare and the resources we need to get by are all changing.

Two things:

  • The social networking hero, MySpace, was hit by a powerful worm about 10 days ago.
  • The US military saw it coming and didn't stop it.
Why?

They were involved in the Cyber Defence Exercise (CDX), an annual competition between students at the five U.S. service academies that has developed into an exercise where defensive technologies are implemented and tested.

Remember - these are the guys that use terms like 'Adversary Characterization and Scoring Systems', 'motivational counter-agent subtypes' and 'intrusion signature analysis'. They have great gadgets like the remote active operating system fingerprinting tool, Xprobe2_vb.

They propose that to make the internet safer – it should be attacked even more.

Case in point - the two similar events that have been publicized recently are the DEFCON 'Capture the Flag' (CTF) competition and the military Cyber Defence Exercise. These two competitions follow different paradigms.

The DEFCON event sets all teams up as both attackers and defenders, while the Cyber Defence Exercise focuses the teams on defensive operations only. So why wouldn't they alert MySpace?

According to the Internet Storm Center and a recent announcement by Hitwise, MySpace has become the #1 most popular destination on the Web.

An unusual aspect of this worm was that it resided purely on MySpace pages, rather than installing itself on personal computers of its victims.

The essential component of the worm, which Symantec called ACTS.Spaceflash, was a Flash object that was embedded in the victims' profile pages on MySpace. The offending code resided in the redirect.swf file, and looked like this:

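// ActionScript in redirect.swf: point the current browser window ("_self")
// at a MySpace page whose scripting then rewrites the viewer's own profile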
getURL("http://editprofile.myspace.com/index.cfm?
fuseaction=blog.view&friendID=93634373&blogID=144877075", "_self");

The viewer of the Flash object was redirected to a page that, through clever scripting, modified the victim's profile. As a result, whenever someone viewed the victim's profile, the viewer's profile would also get infected.
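To see why this pattern spreads so quickly, here is a toy simulation (my own, nothing like the worm's actual code): any user who views an infected profile gets the payload copied into their own profile, so every new infection becomes a fresh source.

import random

random.seed(1)
N = 1000                 # toy population of profile owners
infected = {0}           # patient zero's profile carries the embedded payload

for day in range(6):
    for viewer in range(N):
        # Each user browses a few random profiles a day; one infected view
        # is enough for the payload to copy itself into the viewer's profile.
        if any(random.randrange(N) in infected for _ in range(3)):
            infected.add(viewer)
    print("day", day, "->", len(infected), "of", N, "profiles infected")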

Essentially, the weakness that these attacks exploited was the ability of users to embed active content, in the form of Flash objects, in MySpace pages. This - in some convoluted way - brings me to honeypots.

Honeypots are information system resources whose value lies in unauthorized or illicit use of those resources. The Honeypot Project has established a world-wide distributed sensor system of honeypots. All platforms send their logging data to a central database, enabling some major mining and data correlation.

Why?

To see how the collected data can be used to learn more about cyber-attack patterns. In addition, they are trying to identify the root causes of attacks and the specific tools and techniques used by attackers.
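In practice, the first pass at that correlation can be as simple as counting who probes what across all sensors. A minimal sketch - the field names are illustrative, not the project's actual schema:

from collections import Counter

# Stand-in for rows pulled from the central database: (source_ip, dest_port)
events = [
    ("198.51.100.7", 445), ("198.51.100.7", 445), ("203.0.113.9", 22),
    ("198.51.100.7", 139), ("203.0.113.9", 22), ("198.51.100.7", 445),
]
by_source = Counter(src for src, _ in events)
by_port = Counter(port for _, port in events)
print(by_source.most_common(1))  # noisiest source seen across all sensors
print(by_port.most_common(1))    # most-probed service (a likely worm target)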

Why?

Here we go - almost all aspects of our lives (read: the internet, fixed and mobile phones, online banking) depend heavily on computer systems. Due to the growing pervasiveness of computers and the ubiquitous mobility of users and devices, this dependence is steadily increasing.

Nevertheless, there are more and more security threats in communication networks: we are flooded with unsolicited bulk e-mail (spam), we have huge problems with viruses, worms and other malware, Denial-of-Service (DoS) attacks and electronic fraud, and crackers are often able to break into systems - the downsides of the digital economy, social networking sites and, in general, Web 2.0.

An approach to learning more about attacks and attack patterns is based on exactly this idea of electronic decoys - the honeypots defined above.

Honeypots can also be combined into networks of honeypots (honeynets) to learn more about the varied tactics of attackers. Honeypots are cool, but there are also several projects that observe malicious traffic on a large scale - up to the whole Internet.

They often consist of monitoring a very large number of unused IP addresses for malicious activity. Network telescopes, blackholes, darknets and the Internet Motion Sensor (IMS) are the better-known ones.

All of these projects have the same approach: they use a large piece of globally announced IPv4 address space and passively monitor all incoming traffic.

For example, the network telescope run by the University of California (which several national militaries have access to) uses 2^24 IP addresses. That is 1/256th of all IPv4 addresses.

This means that, on average, one packet in 256 is 'read', stored and analyzed. That's a lot!
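The arithmetic is easy to check, assuming scanning traffic picks destination addresses more or less uniformly at random:

ipv4_space = 2**32  # total IPv4 addresses
telescope = 2**24   # addresses the telescope monitors
print(ipv4_space // telescope)  # -> 256, i.e. roughly one packet in 256 lands in the telescope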

And here's the catch - the telescope contains almost no legitimate hosts, so inbound traffic to nonexistent machines is always anomalous in some way, i.e., the principle of honeynets is also used in this context. By analyzing all packets, they are able to infer information about attackers.

While remaining 'unseen'.

So you see, the MySpace attack was perfect. Finally some of the boys and girls in black hats saw some real-time action.

Valuable? Definitely.

Fun? Oh yaaaaa!!

Moral? Well .....

Legal? Hmmm ......

Tuesday, July 25, 2006

Digital Citizenship

Let's talk about the inverse of the digital divide - digital citizenship.

Citizenship in today's world, however one defines its characteristics and practice, draws its sustenance from access to information - read: education. Within the information society, especially democratic ones, citizenship must come to terms with technology. And it is undeniable that the level of access to technology - as means, object and context - influences education.

These three points are the driving force behind the dynamism currently underway in countries transitioning towards knowledge or information societies.

This particular constellation of social, political, economic and technological phenomena is complicated, but there are a few things that one should consider - and I bet that everyone reading this can come up with current examples:

  • the development of sophisticated digital technologies of information and networked communication and their rapid deployment across a wide range of social, political and economic practices and institutions, domestically and globally;
  • the emergence of novel and powerful biological-based technologies that provoke moral, ethical and political controversy;
  • the 'commitment' of post-industrial states to encouraging the application of technology as crucial to economic growth and material prosperity, national cultural autonomy, democracy and social well-being;
  • the post-war restructuring of capitalist economies around priorities euphemistically styled as 'innovation', 'flexibility' and 'competitiveness';
  • the increased attention to the role played by research academics in generating opportunities, innovation and sustaining flexibility in knowledge-based economies;
  • the rapid integration of new information and communication technologies into society at all levels;
  • the growth of private, non-profit, commercial enterprises;
  • the crisis of democratic citizenship in most western democracies, as seen in decreasing rates of formal political participation and civic engagement, declining levels of trust in political institutions, diminished civic capacity and political knowledge, and the normalization of state repression of civil liberties;
  • the widespread popular hope in the potential for new information and communication technologies to reinvigorate democratic citizenship and governance.

Individually or together, these points testify to the role of 'information access' in addressing the concrete challenges of living well in contemporary technological society.

As we concentrate and write about the digital divide, we should remember how we are defining the digital citizen. This is not only a problem (or solution) for developing nations; this is a global issue. In its own way, every nation is asking itself what this new 'information society' is.

Let's not hide behind the notion that this is only a problem of the developing nations.

Monday, July 24, 2006

Bad usability that is good

I am interested in some of the darker aspects of human nature when it comes to technology: I would like to understand frustration when things go wrong in order to design new tools with the right emotional impact.

Why not 'build in' a navigational workflow that deliberately creates anger, to help select and train those who need to defuse fraught situations?

Why not deliberately design 'bad' situations? Obviously this is necessary to study issues like frustration, but we could also design bad things in order to understand what is good!

There are times when good is dark and the bright light of day needs to be shrouded in just a little frustration.

Gamers know this. Game developers know it better.

Slowly you edge down the dark corridor; distant daylight dimly illuminates the walls on either side. Your heart races: you know there are others in these corridors and they are after you. You near the bend. What is beyond? Too late. You wheel round, only to be momentarily blinded by a bright light, and then you hear a pistol crack and see the ground race towards you, already red with blood. Your blood.

Game Over.

Video games are escapist, virtual, just a game - but in the heat of the moment the emotions can be very real. Research on affective gaming seeks, in various ways, to understand, measure or infer the emotions - or, more usually, simply the arousal - of the gamer in order to adapt the game and create a more engaging, more immersive experience.

Early work used heart monitoring to measure arousal and create a game that modified the level of challenge accordingly: low arousal led to more enemies attacking - though easier-to-kill ones - in order to maintain the same overall level of difficulty.
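A minimal sketch of that balancing idea (my own illustration, assuming a resting heart rate of 70 bpm as the arousal baseline - not the actual study's code):

BASELINE_HR = 70.0  # assumed resting heart rate, beats per minute

def adapt_wave(heart_rate: float, base_enemies: int = 4, base_hp: int = 100):
    """Return (enemy_count, hp_per_enemy) for the next wave of attackers."""
    arousal = max(0.5, min(2.0, heart_rate / BASELINE_HR))  # clamp to [0.5, 2.0]
    enemies = max(1, round(base_enemies / arousal))  # calm player -> more enemies
    hp = round(base_hp * arousal)                    # excited player -> tougher ones
    return enemies, hp

print(adapt_wave(60))   # calm:    (5, 86)  - more but weaker enemies
print(adapt_wave(110))  # aroused: (3, 157) - fewer but tougher enemies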

More recently they've focused on frustration: both the 'proper' frustration when you get shot by a cleverer opponent for the 10th time, and the frustration when a moment's delay in the controller means you can't duck in time. Cruel, cruel design that everyone loves.

We grow up in the real world, physical things that respond to gravity, bump into each other, have weight, solidity and stay where they are put until moved. Then we move into the electronic world whether virtual reality or simply a desktop interface.

Things are no longer so simple and the laws of physicality break down: there are delays between action and effect, things change without apparent agency; it is a world of magic and not a little superstition. Where is my information? How come search isn't working?

I think that we need to understand what usability is and the ways in which design can recruit our natural understandings of the natural world to create better tangible interfaces and ubiquitous environments.

Some of this we can find by examining existing artifacts, mining the implicit knowledge within the organization – understanding this and investing in proper technology to enhance what is working.

This has enabled us to produce putative design guidelines, but there is only so much you can learn from good design.

In neurology it has been the freak accidents and illnesses, skull fractures and cancerous growths which have revealed much of the structure of the brain. It is when systems fail that we begin to understand how they succeed.

So why not employ cruel design, experiment on systems designed to be strange, hard, annoying or simply impossible to use? By manipulating the level of physical coherence of physical-digital mappings we are delving into the properties that make things work well by making them work badly.

Why not create a bit of anger?

Abuse, violence and emotional turmoil are a day-to-day part of many people's lives. How do you train people to deal with traumatized, angry, upset clients? Training videos, I guess - but how do you design a system that deals with aggression?

Here’s the point: When you can't help you need to be helpful. That is exactly what usability design should be about.

Hey Mr. Designer – Hey Ms. Usability - can you soothe the angry user before there is blood on the keyboard?

Wednesday, July 19, 2006

Abusive agents

In discussions with my 'darker' friends, we've noticed that research into developing socially intelligent agents has increased over the last decade, with the main focus being on how they can enhance human-computer interaction. Remember - enhance human-computer interaction.

Most research related to the use of embodied agents has tended to concentrate on the benefits that these agents bring to a standard interface and how they can arouse positive emotional states that enhance cognitive functions (read: learning and problem-solving skills).

But the flip side of this research could concentrate on the potential that these agents have to manipulate our behavior for unethical purposes.

Hard to believe? It's happening right now.

Look at the potential negative impacts that embodied agents can have on an interaction. As they become more socially intelligent, there is an increased possibility that they will be able to 'abuse' us (the users) in a number of ways.

As I see it, we seem to treat computers as social entities. It's interesting to note how we can take social skills from human-human interaction and use them in HCI to enhance human-computer relations. Ask Jeeves and Amazon are just a few examples - Lufthansa and Swiss use agents to 'suggest' different flying options 'more suited to your requirements'.

Foreign or third-party agents are building and maintaining long-term relationships with your agents right now, and they are using many of the relational strategies that humans often use (read: small talk and talk of the relationship, trading 'secrets' to build 'trust').

Alarmed? Come on.

Clearly, we seem to prefer interacting with embodied agents that have some form of social intelligence. This is despite the fact that the level of intelligence demonstrated by such systems is very limited in comparison to our own.

However, with computer processing speeds doubling every year, I believe this ability is likely to change drastically in the near future.

Kurzweil predicts that by 2010 we will have virtual humans that look and act much like real humans, although they will still be unable to pass the Turing Test.

By 2030, he believes that it will be difficult to distinguish between virtual and biological humans. Singularity.

This potential increase in agent intelligence and representation raises a number of troubling issues.

Our tendency to treat computers as social actors suggests that socially skilled agents may be able to utilize many of the strategies and techniques that humans use to manipulate other peoples' behavior.

For example, in human-human interaction, we tend to act on the advice of people we like and trust rather than people we dislike and distrust.

It is possible that the same principle might apply in HCI. If, in fact, we like and trust socially skilled agents over ones which have no such skills, these agents may be able to manipulate human behavior more effectively than agents with no social skills built into them.

Wow - could this be a new type of social hacking, online or virtual memes that spill over into the physical world?

Could a government or other 'subversive' party create and maintain an 'independent' blog that is totally automated to spit out grammatically sincere propaganda? Could it be happening right now?

Socially intelligent agents also have a number of advantages over humans when attempting to manipulate our behavior, including their ability to persistently make use of a wide variety of persuasive techniques without ever becoming tired or deterred (read: asking somebody to register for a product every time they start up their computer).

They can also make requests at times when it is more likely that the request will be complied with (read: a computer game or product that asks children to provide personal details before being able to progress to the next stage). Remember - these agents can also analyze data and situations 1000 times faster than we can.

In some circumstances, users may also trust computers more than they do other humans. Whether deserved or not, some professions have a reputation for being manipulative and deceptive (read: Geneva landlords, used car salesmen) and people often tend to be cautious when interacting with such people.

However, if users were to interact with a computational sales agent, they may drop their guard and be more open to manipulation as computers generally do not have a strong reputation for deception and attempting to manipulate peoples' behavior.

Is it acceptable for agents to manipulate (perhaps deceive) people in this way? Simply in order to help companies sell more products? To help governments catch more terrorists or tax evaders?

Perhaps so, as long as the user believes that they have received good value for their money and do not feel exploited. But would they even know IF they were being exploited?

This is a form of manipulation (and deception), and most people are aware that many salespeople are like this. While this may not please people, they are unlikely to mind if they feel they have received value for money and a good service.

On the other hand, if customers feel cheated they will be unlikely to return with their money again. As embodied agents' social skills improve over the coming years, the danger of them being used to manipulate our behavior will increase.

In fact, there are many embodied agents available today that attempt to manipulate peoples' behavior in questionable ways.

The success of agents such as these is yet to be fully tested, but the potential for them to manipulate user behavior certainly exists.

As we move more towards managing computer systems rather than directly manipulating them, we will work more closely with agents in everyday activities as they undertake tasks on our behalf.

This means that people are likely to develop long-term relationships with agent entities in their interactions, which (who) they will grow to know and trust.

It may be that these agents are then in a very strong position to alter their behavior and start becoming more and more manipulative over time (like a cult: nice to begin with, drawing a person in and then changing and starting to abuse the trust that has been created).

This may happen by initial malicious design, or more intriguingly, by external people cracking an agent and making it turn on its user!

Perhaps a new form of virus writer may emerge.

It is vital that we begin studying in more detail how socially intelligent agents can manipulate our behavior.

A deeper understanding of these areas will enable us to take steps toward avoiding agent abuse against users, both now and in the future.

Remember - Evolution never refactors its code. It is far easier for evolution to stumble over a thousand individual optimizations than for evolution to stumble over two simultaneous changes which are together beneficial and separately harmful.

Now this is the deep part - a bit heavy but it will explain WHY manipulative agents will happen.

Human intelligence, created by evolution, is characterized by evolution's design signature. The vast majority of our genetic history took place in the absence of deliberative intelligence; our older cognitive systems are poorly adapted to the possibilities inherent in deliberation.

Evolution has applied vast design pressures to us but has done so very unevenly; evolution's design pressures are filtered through an unusual methodology that works far better for hand-massaging code than for refactoring program architectures.

Now imagine an agent built in its own presence by intelligent designers, beginning from primitive and awkward subsystems that nonetheless form a complete supersystem.

Imagine a development process in which the elaboration and occasional refactoring of the subsystems can co-opt any degree of intelligence, however small, exhibited by the supersystem.

The result would be a fundamentally different design and a new approach to Artificial Intelligence which Eliezer Yudkowsky termed 'seed AI'.

Seed AI is AI designed for self-understanding, self-modification, and recursive self-improvement.

This has implications both for the functional architectures needed to achieve primitive intelligence, and for the later development of the AI if and when its holonic self-understanding begins to improve. But improvement is in the eye of the beholder: improved cunning and self-defense techniques will enable an agent to 'defend' itself (read: HAL).

Seed AI is not a workaround that avoids the challenge of general intelligence by bootstrapping from a moral core; seed AI will begin to yield benefits only once there is some degree of available intelligence to be utilized.

The later consequences of seed AI (such as true negative self-improvement) only show up after the agent has achieved significant holonic understanding and general intelligence.

The question is, 'What happens afterwards?' And this is a serious question.

Monday, July 17, 2006

The next future

What a lot of us (notably VCs and, typically, only the most informed Web 2.0ers) are attempting these days is to look ahead to the future of the current information culture.

Already our technological capabilities have created a world in which ubiquitous connectivity is, or is becoming, a reality - even in emerging countries, where, for example, village-to-village connectivity is supplied via a WiFi-enabled motorcycle that drives through the Cambodian countryside.

And with ubiquitous connectivity comes the effect of pervasive proximity.

Our experience of reality – literally what we feel – is changing. We touch and are touched in ways that transcend the apparent visual barrier between the cyber and the physical worlds.

It is only a misconception, and a soon-to-be artifact, that the screen represents a DMZ between reality and non-reality. When you measure information exchanges, it is clear that this interface is quickly vanishing.

Experience effected through the processes of pervasive proximity means that what we feel online – those whom we touch and those who touch us – is quite real, despite its lack of physicality and materiality. What this means is that under conditions of pervasive proximity, experience transcends our traditional conception of media boundaries. And it is through transmedial experiences that we can begin to observe the emergence of a culture for the global village.

The dominant technology of the previous era was the book and the printed word. Among the memes that came along with the book were the acceleration, intensification, and reinforcement of vernacular languages, and with that the distinct cultural separations that created 'the other side of the story'.

Along with the book came the development of the individual mind that could not exist without reading; the whole concept of the individual and the public as distinct entities, the notion of privacy, secrecy, guilt, superiority, class systems and shame.

Among the creative classes, the book created the author (and, some say, even authority), it created the artist and the composer – and it also created the audience, again as a distinct and separate entity.

And with that dominant technology, it was always the case that the “text” – the words, the art, the music – could be removed from both its creator and its creative context.

The content creator is always engaged in writing a detailed history of the future, because s/he is one of the few who are aware of the present. But remember that the creator is a distinct entity from her or his audience or consumer. Today, we are no longer merely consumers of culture. We are instead – all of us – producers of our own cultural creations, which exist for as long as we are experiencing them, and no longer.

The hallmarks of this new playground for creativity (one that we are only beginning to recognize) include collaborative creation, transmediality and the elimination of the interfaces - the stark demarcations - between the physical and virtual worlds.

Such a conception almost evokes aspects of magic and mysticism - the image of the tribal shaman who acts as a medium between the visible and invisible worlds, practicing forms of magic that exert control over what otherwise appear as natural events.

The ability for everyone to actively engage and participate in creation and reflexive consumption of culture is paramount. This, however, flies directly in the face of cultural cartels in whose interest it is to maintain a monopoly on production and distribution of information and who therefore seek to control the means of creation, connection, and collaboration.

Therein lies the role of governments, conventions, treaties and summits: to actively resist partisan commercial interests in order to protect and nurture the subtle beginnings of the next cultural epoch, the beginnings of which we are privileged not only to witness, but privileged as well to actively participate as its midwives.

Since we are all creators, creativity – and the means to express and experience creativity – belongs to everyone, collectively as a public trust.

It's not about the technology - it's about how this technology is modifying individual and collective behavior.

Wednesday, July 12, 2006

Internet Control

It is commonplace among technologists to support a policy that intermediaries on the Internet should ‘pass all packets.’ This so-called end-to-end principle calls for intelligence to be located at the edges of the network, if at all possible.

While the end-to-end principle has been challenged, it remains a sacred concept among true believers in the openness of the Internet's original design.

Over the past decade, most states—the United States among them—have established rules that sometimes encourage and sometimes require intermediaries to block or to inspect packets as they travel through the Internet.

These rules prompt private actors to violate the end-to-end principle, at least theoretically in the name of the public interest.

We must now consider the changes over the past ten years to the rules that require private parties to control packets at various points in the network - a trend brought into relief by the current public debate over competing 'net neutrality' proposals, a political and economic concept often conflated with the end-to-end principle of network design.

One (if one were looking hard enough) can see a rough trajectory emerging: fewer controls imposed at the end-points and more controls closer to the center of the network. Note also an increasing emphasis on governments (and corporations) requiring or otherwise causing private parties - intermediaries, in both a technical and a literal sense - to exercise control over packets as they pass through the network. This trend is clearest in those states that are seeking to impose content-based filters on Internet content.

Isn't the idea to focus on a key question of Internet law in the context of what is now commonly known as 'Web 2.0': what actions are governments taking when they do not want certain types of packets to pass through today's increasingly interactive and distributed network, or when they seek to learn more about the packets that are passing and those who are sending and receiving them?

read: censorship and countries that monitor internet traffic

Let's face it - technological innovation, participatory democracy, cultural development, generativity and other wonderful things could no doubt continue to develop without the Internet.

These interests can plausibly be vindicated in ways other than by upholding the end-to-end principle of network design. It would be a drastic overstatement of the problem to contend that any given incremental online legal, or combined legal and technical, control means the end of free expression on the Internet.

A reasonable legislator or judge might find in favor of potentially more effective ways of solving the problems of online life - whether to do with sex, commerce, culture, and politics - over the benefits that end-to-end bears with it.

Information technology continues to evolve rapidly and to bring with it new and tricky puzzles to solve.

The job of the policy-maker, who has to set rules in a time of technological innovation, is challenging, if not unenviable. Social and economic development depend on these people (not the ones selling music, running shoes or even Bono!).

In such a fast-moving environment, holding to the end-to-end principle is a consistently safe bet.

However, it is a bet that is not easily draped in language with legal force, other than insofar as end-to-end solutions themselves tend to support and foster greater free expression online.

If history is any guide, the preservation of an end-to-end network will mean promotion of a flourishing democratic culture, potentially on a global scale - cultural innovation in an unusually rich, empowering sense that should be the goal of the policy-maker and technologist alike.

This is why WSIS is important – it starts the dialog, asks the questions.

The current trend - away from legal controls consistent with the end-to-end principle, toward controls that involve blocking content - works against innovation, the development of democratic institutions, and the aspiration of semiotic democracy.

In a particularly worrisome development, intermediaries - such as technology service and content providers - are increasingly being placed in the position of carrying out some of the most egregious of these proprietary controls as a condition of competing in highly attractive emerging markets (read: Google – Yahoo – Microsoft).

As the online regulatory environment continues to shift toward more control, the job of the technologist (that is - you and me) must be to better articulate the aspects of the threatened network design - whether translated as net neutrality, generativity or under other monikers - that need to be preserved.

The job of non-profits and universities is to express the power and the possibilities of the network in its most open form.

The job of the legislator, the regulator, and the judge should include listening carefully to the technologists and determining how to preserve those essential elements of the end-to-end principle in the public interest.

The most difficult job may ultimately prove to be the challenge facing those of us who want to see the inequities of the digital divide vanish. We are caught in the cross-hairs of government and corporate playground games.

Our challenge is to shape and then to adhere to a set of best practices for participating in markets where repressive regimes mandate excessive control of technology injections. In the end – it’s an information war; those who have and those who don’t.

Control the access to information and control the citizen behavior.

Control.

Tuesday, July 04, 2006

Generation C = New Consumer

At the core of all consumer trends is the new consumer, who creates his or her own playground, own comfort zone, own universe.

It's the 'empowered' and 'better informed' and 'switched on' consumer combined into something profound. At the core is control: psychologists don't agree on much, except for the belief that human beings want to be in charge of their own destiny.

Or at least have the illusion of being in charge.

And because they can now get this control in entirely new ways, aided by an online, low cost, creativity-hugging revolution that's still in its infancy, young and old (but particularly young) consumers now weave webs of unrivaled connectivity and relish instant knowledge gratification.

They exercise total control over creative collections, including their own creative assets, assume different identities in cyberspace at a whim, wallow in DIY / Customization / Personalization / Co-Creation to make companies deliver whatever and whenever, on their own terms.

And it's not all about Adidas, Levi's and online travel. It's about philanthropy, developing nations, education, health and leveling out some of the inequalities of this world.

So, what's next in the metaverse?

Second Life will no doubt continue to expand, especially if they manage to partner with the Flickrs, eBays, Playaheads, MySpaces and Yahoos of this world. Meanwhile, Google is said to be eyeing a metaverse entry, combining Google Earth and Google Sketchup (a 3-D modeling program).

The company is now encouraging developers to build 3-D layers on top of Google Earth. For examples, see the 3D warehouse. This prompted the designers at Form Fonts 3D to create 97 virtual pieces of IKEA furniture, ready to download, which can then be used to furnish one's SketchUp dream house.

If you ignore online branding, rest assured a member of Generation C will launch your brand for you; just keep your fingers crossed that they'll do it to your liking.

** Gen C = consumers who produce and share content. They mix their own music, edit their own videos, post their photography to the Internet, or publish a blog or a book.
They are a big group, and one that's constantly growing. More than 53 million adults in the US have created online content, according to a recent report from the Pew Internet and American Life Project.

Also keep an eye on new metaverse platforms like Active Worlds, Open Croquet Project and Multiverse, all aiming to help independent game developers create high-quality Massively Multiplayer Online Games (MMOGs) and non-game virtual worlds for less money and in less time than ever before.

As this is still early days, this is prime territory to claim many a 'first' in online branding (read: IOs and NGOs - wake up).

Time to set up a meeting with these guys?