Lots of folks are talking about Ray Kurzweil's new book The Singularity Is Near. His argument, basically, is that true artificial intelligence is a function of computing power: we haven't created it yet because we don't have the computing power, but given current trends we'll have it in about 20 years. At that point our artificially intelligent robots will begin speeding up the process ever more, making human intelligence almost useless in a relatively short period of time. Kurzweil kindly goes in for the "this will help humans and make us all much happier" explanation rather than the "we're all gonna be robot-slaves" argument. To me, that one seems a coin toss. In any case, the singularity is the moment it happens, when our intelligence becomes increasingly non-biological and the world becomes Totally Awesome.
Reactions vary. Kevin thinks he's right, but that he cheated on a graph. Matt thinks he's wrong, and points to our dashed hopes for nuclear power as proof. Tyler wonders why IE still crashes if we're so damn smart. And so on.
Count me critical. Inventions don't tend to follow the tracks we think they will. If they did, we'd have long ago had our flying cars, phaser guns, and teleportation devices. So the graph Kevin references showing the increasing speed of technological innovation strikes me as a point against. We're getting better at making things, but we don't tend to improve them in the ways futurists expect. Every so often somebody gets a few predictions right and the reputation of futurists everywhere is saved, but given what we've seen in the past, Kurzweil's analysis strikes me as too easy an extrapolation to accurately describe where we're going to end up. In my read, the one thing we aren't is linear.