Wednesday, July 12, 2006

Internet Control

It is commonplace among technologists to support a policy that intermediaries on the Internet should ‘pass all packets.’ This so-called end-to-end principle calls for intelligence to be located at the edges of the network, if at all possible.

While the end-to-end principle has been challenged, it remains a sacred concept among true believers in the openness of the Internet's original design.

Over the past decade, most states—the United States among them—have established rules that sometimes encourage and sometimes require intermediaries to block or to inspect packets as they travel through the Internet.

These rules prompt private actors to violate the end-to-end principle, at least theoretically in the name of the public interest.

We must now consider the changes over the past ten years to the rules that require private parties to control packets at various points in the network, a trend brought into relief by the current public debate over competing 'net neutrality' proposals - a political and economic concept often conflated with the end-to-end principle of network design.

If one looks hard enough, a rough trajectory emerges: fewer controls imposed at the end-points, and more imposed closer to the center of the network. We note also an increasing emphasis on governments (and corporations) requiring or otherwise causing private parties - intermediaries, in both a technical and a literal sense - to exercise control over packets as they pass through the network. This trend is clearest in those states seeking to impose content-based filters on Internet content.

The idea is to focus on a key question of Internet law in the context of what is now commonly known as 'Web 2.0': What actions are governments taking when they do not want certain types of packets to pass through today's increasingly interactive and distributed network, or when they seek to learn more about the packets that are passing and those who are sending and receiving them?

read: censorship and countries that monitor internet traffic

Let's face it - technological innovation, participatory democracy, cultural development, generativity, and other wonderful things could no doubt continue to develop without the Internet.

These interests can plausibly be vindicated in ways other than by upholding the end-to-end principle of network design. It would be a drastic overstatement of the problem to contend that any given incremental online legal, or combined legal and technical, control means the end of free expression on the Internet.

A reasonable legislator or judge might favor potentially more effective ways of solving the problems of online life - whether to do with sex, commerce, culture, or politics - over the benefits that end-to-end carries with it.

Information technology continues to evolve rapidly and to bring with it new and tricky puzzles to solve.

The job of the policy-maker, who has to set rules in a time of technological innovation, is challenging, if not unenviable. Social and economic development depend on these people (not the ones selling music, running shoes or even Bono!)

In such a fast-moving environment, toeing the line of the end-to-end principle is a consistently safe bet.

However, it is a bet not easily draped in language that has legal force, except insofar as end-to-end solutions themselves tend to support and foster greater free expression online.

If history is any guide, the preservation of an end-to-end network will mean promotion of a flourishing democratic culture, potentially on a global scale - cultural innovation in an unusually rich, empowering sense that should be the goal of the policy-maker and technologist alike.

This is why WSIS is important – it starts the dialog, asks the questions.

The current trend, a move away from legal controls consistent with the end-to-end principle and toward controls that block content, works against innovation, the development of democratic institutions, and the aspiration of semiotic democracy.

In a particularly worrisome development, intermediaries - such as technology service and content providers - are increasingly being placed in the position of carrying out some of the most egregious of these proprietary controls as a condition of competing in highly attractive emerging markets (read: Google – Yahoo – Microsoft).

As the online regulatory environment continues to shift toward more control, the job of the technologist (that is - you and me) must be to articulate better the threatened aspects of network design - whether translated as net neutrality, generativity or under other monikers - that must be preserved.

The job of non-profits and universities is to express the power and the possibilities of the network in its most open form.

The job of the legislator, the regulator, and the judge should include listening carefully to the technologists and determining how to preserve those essential elements of the end-to-end principle in the public interest.

The most difficult job may ultimately prove to be the challenge facing those of us who want to see the inequities of the digital divide vanish. We are caught in the cross-hairs of government and corporate playground games.

Our challenge is to shape, and then adhere to, a set of best practices for participating in markets where repressive regimes mandate excessive control of technology. In the end, it's an information war: those who have and those who don't.

Control access to information, and you control the citizen's behavior.

Control.
