Sunday, July 31, 2005

The Machine

What the web is and what it will become, by Kevin Kelly, in Wired. The first 3 pages go over old ground, but he sizzles in pages 4 and 5, at the point where he starts talking about 2015. I quote a brilliant extract below, with a quick sanity check of his numbers at the end:

Today, the Machine acts like a very large computer with top-level functions that operate at approximately the clock speed of an early PC. It processes 1 million emails each second, which essentially means network email runs at 1 megahertz. Same with Web searches. Instant messaging runs at 100 kilohertz, SMS at 1 kilohertz. The Machine's total external RAM is about 200 terabytes. In any one second, 10 terabits can be coursing through its backbone, and each year it generates nearly 20 exabytes of data. Its distributed "chip" spans 1 billion active PCs, which is approximately the number of transistors in one PC.

This planet-sized computer is comparable in complexity to a human brain. Both the brain and the Web have hundreds of billions of neurons (or Web pages). Each biological neuron sprouts synaptic links to thousands of other neurons, while each Web page branches into dozens of hyperlinks. That adds up to a trillion "synapses" between the static pages on the Web. The human brain has about 100 times that number - but brains are not doubling in size every few years. The Machine is.

Since each of its "transistors" is itself a personal computer with a billion transistors running lower functions, the Machine is fractal. In total, it harnesses a quintillion transistors, expanding its complexity beyond that of a biological brain. It has already surpassed the 20-petahertz threshold for potential intelligence as calculated by Ray Kurzweil. For this reason some researchers pursuing artificial intelligence have switched their bets to the Net as the computer most likely to think first. Danny Hillis, a computer scientist who once claimed he wanted to make an AI "that would be proud of me," has invented massively parallel supercomputers in part to advance us in that direction. He now believes the first real AI will emerge not in a stand-alone supercomputer like IBM's proposed 23-teraflop Blue Brain, but in the vast digital tangle of the global Machine. In 10 years, the system will contain hundreds of millions of miles of fiber-optic neurons linking the billions of ant-smart chips embedded into manufactured products, buried in environmental sensors, staring out from satellite cameras, guiding cars, and saturating our world with enough complexity to begin to learn. We will live inside this thing.

Today the nascent Machine routes packets around disturbances in its lines; by 2015 it will anticipate disturbances and avoid them. It will have a robust immune system, weeding spam from its trunk lines, eliminating viruses and denial-of-service attacks the moment they are launched, and dissuading malefactors from injuring it again. The patterns of the Machine's internal workings will be so complex they won't be repeatable; you won't always get the same answer to a given question. It will take intuition to maximize what the global network has to offer. The most obvious development birthed by this platform will be the absorption of routine. The Machine will take on anything we do more than twice. It will be the Anticipation Machine.

One great advantage the Machine holds in this regard: It's always on. It is very hard to learn if you keep getting turned off, which is the fate of most computers. AI researchers rejoice when an adaptive learning program runs for days without crashing. The fetal Machine has been running continuously for at least 10 years (30 if you want to be picky). I am aware of no other machine - of any type - that has run that long with zero downtime. While portions may spin down due to power outages or cascading infections, the entire thing is unlikely to go quiet in the coming decade. It will be the most reliable gadget we have.

And the most universal. By 2015, desktop operating systems will be largely irrelevant. The Web will be the only OS worth coding for. It won't matter what device you use, as long as it runs on the Web OS. You will reach the same distributed computer whether you log on via phone, PDA, laptop, or HDTV.

In the 1990s, the big players called that convergence. They peddled the image of multiple kinds of signals entering our lives through one box - a box they hoped to control. By 2015 this image will be turned inside out. In reality, each device is a differently shaped window that peers into the global computer. Nothing converges. The Machine is an unbounded thing that will take a billion windows to glimpse even part of. It is what you'll see on the other side of any screen.

And who will write the software that makes this contraption useful and productive? We will. In fact, we're already doing it, each of us, every day. When we post and then tag pictures on the community photo album Flickr, we are teaching the Machine to give names to images. The thickening links between caption and picture form a neural net that can learn. Think of the 100 billion times per day humans click on a Web page as a way of teaching the Machine what we think is important. Each time we forge a link between words, we teach it an idea. Wikipedia encourages its citizen authors to link each fact in an article to a reference citation. Over time, a Wikipedia article becomes totally underlined in blue as ideas are cross-referenced. That massive cross-referencing is how brains think and remember. It is how neural nets answer questions. It is how our global skin of neurons will adapt autonomously and acquire a higher level of knowledge.

The human brain has no department full of programming cells that configure the mind. Rather, brain cells program themselves simply by being used. Likewise, our questions program the Machine to answer questions. We think we are merely wasting time when we surf mindlessly or blog an item, but each time we click a link we strengthen a node somewhere in the Web OS, thereby programming the Machine by using it.

What will most surprise us is how dependent we will be on what the Machine knows - about us and about what we want to know. We already find it easier to Google something a second or third time rather than remember it ourselves. The more we teach this megacomputer, the more it will assume responsibility for our knowing. It will become our memory. Then it will become our identity. In 2015 many people, when divorced from the Machine, won't feel like themselves - as if they'd had a lobotomy.
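Kelly's numbers are easy to sanity-check. Here is a rough back-of-envelope sketch in Python (mine, not his), using the round figures quoted above; where he gives a range like "hundreds of billions" of pages or "dozens" of links, I plug in the low end:

    # Back-of-envelope check of the figures quoted in the extract.
    # All inputs are Kelly's round numbers, not measurements of my own.

    emails_per_second = 1_000_000          # "1 million emails each second"
    print(f"email 'clock': {emails_per_second / 1e6:.0f} MHz")  # -> 1 MHz

    active_pcs = 1_000_000_000             # "1 billion active PCs"
    transistors_per_pc = 1_000_000_000     # "a billion transistors" per PC
    total_transistors = active_pcs * transistors_per_pc
    print(f"total transistors: {total_transistors:.0e}")        # -> 1e+18, a quintillion

    web_pages = 100_000_000_000            # low end of "hundreds of billions" of pages
    links_per_page = 12                    # low end of "dozens of hyperlinks"
    print(f"hyperlink 'synapses': {web_pages * links_per_page:.1e}")  # -> ~1.2e+12, about a trillion

    backbone_bits_per_second = 10e12       # "10 terabits can be coursing through its backbone" per second
    seconds_per_year = 365 * 24 * 3600
    bytes_per_year = backbone_bits_per_second / 8 * seconds_per_year
    print(f"backbone traffic per year: {bytes_per_year / 1e18:.0f} exabytes")
    # -> ~39 exabytes, the same order of magnitude as the "nearly 20 exabytes" generated each year

The arithmetic holds up as orders of magnitude, which is all a piece like this needs.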
