Today marks what would be the 100th birthday of Marshall McLuhan, the Canadian media scholar best known for coining the phrase "the medium is the message." His work had no empirical component (a theoretician, he began his career as an English professor), but the aphorisms that made him famous have proved remarkably persistent. Look around today, and the question posed by McLuhan's most notable idea is becoming more and more urgent: Is the medium really the message? And if so, is that good or bad?
What McLuhan meant was that the content of communication delivered by a particular medium is less important than the form in which it arrives. Reading words printed on a page has particular effects on the way we think, understand, and remember; assimilating pictures or sounds has fundamentally different effects. McLuhan speculated that various media could reshape our brains, and today, armed with new techniques, researchers are beginning to investigate whether that may, in fact, be true. For instance, brain-imaging data shows that when we read text online, we activate brain areas devoted to decision-making that are quiet when we read printed text (one needs to decide whether to stay on a page or follow the links). The long-term consequences of these new patterns of brain activity and the effects on the developing brains of young people are yet to be fully understood.
In today's world, multiple communication technologies could be altering our behavior, if not our biology. Unlike our ancestors, we are being forced to react to constant change in the ways we communicate. A few thousand years passed between the creation of the alphabet and the development of movable type. More recently, a quarter of a century passed between the invention of the telephone and the first radio transmission; it was another quarter-century after that before the invention of television. But in the Internet age, every few years sees the emergence of a potentially transformative technology. That isn't to say that Facebook or Twitter, in and of themselves, are developments of the same magnitude as the printing press or television. But the time between dramatic new technologies has shrunk. Facebook was founded in 2004 and now has 750 million users; Twitter began in 2006 and has a reported 200 million users. Chances are good that five years from now, a network or technology that we haven't yet heard of will be the hottest thing on Earth.
When those changes come, we seem to divide predictably into a few groups, roughly (though not entirely) demarcated by age: the enthusiastic, the befuddled, the frightened, and the curmudgeonly. Periodically, someone prominent writes an article pronouncing the new medium a force for dehumanization or, at the very least, the propagation of trivia that will distract us from what's truly important. "Our inventions are wont to be pretty toys, which distract our attention from serious things," complained Henry David Thoreau in 1854 in reaction to the development of the telegraph. "We are eager to tunnel under the Atlantic and bring the old world some weeks nearer to the new; but perchance the first news that will leak through into the broad, flapping American ear will be that Princess Adelaide has the whooping cough." These kinds of remarks are inevitably followed by others rising to the technology's defense, declaring the dissenter a fuddy-duddy longing for a past to which we should bid good riddance.
You could spend the next month reading recent books warning of the negative consequences of the Internet and social networking (see here, here, here, or here). These books are written by technology experts, which shows that it's not just Luddites and technophobes who see possible problems every time the Next Big Thing comes around.
Few technologies have nothing but positive effects, and the earlier we can have a thoughtful discussion about their drawbacks, the better we'll be able to mitigate them. One thing that never happens, though, is that we as a society decide the costs of a new technology outweigh its benefits and then put it into a box and tuck it away.
Consider the spread of facial recognition software. Last week The Wall Street Journal reported that police departments around the country will soon be taking delivery of small devices capable of immediate facial recognition, and more: "With the device, which attaches to an iPhone, an officer can snap a picture of a face from up to five feet away, or scan a person's irises from up to six inches away, and do an immediate search to see if there is a match with a database of people with criminal records. The gadget also collects fingerprints." You can bet that within a few years, the technology (or its succeeding iterations) will spread to every law-enforcement agency in the country, and before long, a conversation with a police officer is likely to feature her taking your photo. In Chicago, the police maintain an operations center that continuously monitors more than 10,000 surveillance cameras that keep an eye on the city's residents. (We have nothing on our friends across the pond on this score -- in Britain there are an incredible 4.2 million surveillance cameras, or one for every 14 people.)
When most of us hear stories about one of the many ways we're being tracked and monitored, we say, "Wow, that's pretty creepy. I don't like that at all." Then we forget about it. And it isn't just law enforcement, of course. Private companies are also moving us toward the kind of world without anonymity that is such a regular feature of dystopian science fiction. Facebook, where 100 million photos are uploaded every day, quietly added a facial-recognition tool to its services. Google has developed its own facial-recognition software and is considering adding the feature to its augmented reality app, Google Goggles (concerned about a privacy backlash, the company is holding off for now).
Now that the technology is there, it will spread. Whatever our misgivings, "soon, though, we'll all learn to live with it," writes Slate's Farhad Manjoo. "Etiquette and even regulations will develop around when it's OK to point your camera at someone and get her name. It's too late to turn back now: If your face and your name are online today, you've already made yourself searchable." So unless you're stuffing your head under your shirt whenever someone holds up their smartphone, you're being pulled into our democratized panopticon whether you like it or not.
These kinds of questions and problems are going to become more difficult with each passing year. Consider this: As electronics are miniaturized further, we will eventually be able to integrate certain kinds of technology into our bodies. Fifty years from now, the idea that you have to sit down in front of a screen and tap on a keyboard in order to access databases of knowledge will seem absurd. Instead, the Internet will be projected onto your contact lenses or into your brain itself. That doesn't mean everyone will get the implants that allow such inputs and outputs; there will no doubt be holdouts, much as today a full 2 percent of American households don't have televisions. But communication technology will eventually not only be all around us; it will be within us. At that point, you will be both the medium and the message.