Communion Of Dreams


“Doom, DOOM, I say!!”

A good friend sent me a link to a longish piece in the latest edition of The Atlantic titled “Is Google Making Us Stupid?”, by Nicholas Carr. It’s interesting, and it touches on several of the things I explore as future technologies in Communion of Dreams, so I would urge you to go read the whole thing.

Read it, but don’t believe it for a moment.

OK, Carr starts out with the basic premise that the human mind is a remarkably plastic organ, capable of reordering itself to a large degree even well into adulthood. Fine. Obvious. Anyone who has learned a new language, or mastered a new computer game, or acquired any other skill as an adult knows this, and knows how it expands one’s awareness of different and previously unperceived aspects of reality. That, actually, is one of the basic premises behind what I do with Communion: opening up the human understanding of what the reality of the universe actually is, and how that contrasts with our prejudices about it.

From this premise, Carr speculates that the increasing penetration of the internet into our intellectual lives is changing how we think. I cannot disagree, and have said as much in several of my posts here. For about two-thirds of the article he discusses how the hyperlinked reality of the web tends to scatter our attention, making it more difficult for us to concentrate and think (or read) ‘deeply’. Anyone who has spent a lot of time reading online knows this phenomenon – pick up an old-fashioned paper book, and you’ll likely find yourself now and again wanting explanatory hyperlinks on this point or that for further clarification. This, admittedly, makes it more difficult to concentrate and immerse yourself in the text at hand, to lose yourself in either the author’s argument or the world they are creating.

But then, having established his premises, Carr hits his main point. And it is this: that this scattered attention somehow turns us into information zombies, spoon-fed by the incipient AI of the Google search engine.

Huh?

No, seriously, that’s what he says. Going back to the time-and-motion efficiency studies pioneered by Frederick Winslow Taylor at the turn of the last century – studies which turned factory workers into ideal components for working with machines – he makes this argument:

Taylor’s system is still very much with us; it remains the ethic of industrial manufacturing. And now, thanks to the growing power that computer engineers and software coders wield over our intellectual lives, Taylor’s ethic is beginning to govern the realm of the mind as well. The Internet is a machine designed for the efficient and automated collection, transmission, and manipulation of information, and its legions of programmers are intent on finding the “one best method”—the perfect algorithm—to carry out every mental movement of what we’ve come to describe as “knowledge work.”

Google’s headquarters, in Mountain View, California—the Googleplex—is the Internet’s high church, and the religion practiced inside its walls is Taylorism. Google, says its chief executive, Eric Schmidt, is “a company that’s founded around the science of measurement,” and it is striving to “systematize everything” it does. Drawing on the terabytes of behavioral data it collects through its search engine and other sites, it carries out thousands of experiments a day, according to the Harvard Business Review, and it uses the results to refine the algorithms that increasingly control how people find information and extract meaning from it. What Taylor did for the work of the hand, Google is doing for the work of the mind.

The company has declared that its mission is “to organize the world’s information and make it universally accessible and useful.” It seeks to develop “the perfect search engine,” which it defines as something that “understands exactly what you mean and gives you back exactly what you want.” In Google’s view, information is a kind of commodity, a utilitarian resource that can be mined and processed with industrial efficiency. The more pieces of information we can “access” and the faster we can extract their gist, the more productive we become as thinkers.

Where does it end? Sergey Brin and Larry Page, the gifted young men who founded Google while pursuing doctoral degrees in computer science at Stanford, speak frequently of their desire to turn their search engine into an artificial intelligence, a HAL-like machine that might be connected directly to our brains. “The ultimate search engine is something as smart as people—or smarter,” Page said in a speech a few years back. “For us, working on search is a way to work on artificial intelligence.” In a 2004 interview with Newsweek, Brin said, “Certainly if you had all the world’s information directly attached to your brain, or an artificial brain that was smarter than your brain, you’d be better off.” Last year, Page told a convention of scientists that Google is “really trying to build artificial intelligence and to do it on a large scale.”

Such an ambition is a natural one, even an admirable one, for a pair of math whizzes with vast quantities of cash at their disposal and a small army of computer scientists in their employ. A fundamentally scientific enterprise, Google is motivated by a desire to use technology, in Eric Schmidt’s words, “to solve problems that have never been solved before,” and artificial intelligence is the hardest problem out there. Why wouldn’t Brin and Page want to be the ones to crack it?

Still, their easy assumption that we’d all “be better off” if our brains were supplemented, or even replaced, by an artificial intelligence is unsettling. It suggests a belief that intelligence is the output of a mechanical process, a series of discrete steps that can be isolated, measured, and optimized. In Google’s world, the world we enter when we go online, there’s little place for the fuzziness of contemplation. Ambiguity is not an opening for insight but a bug to be fixed. The human brain is just an outdated computer that needs a faster processor and a bigger hard drive.

Do you see the pivot there? He’s just spent over a score of paragraphs explaining how the internet has degraded our ability to concentrate because of hyperlinked distractions, but then he turns around and says that Google’s increasing sophistication at seeking out information will limit our curiosity about that information.

No. If anything, the ability to access a broader selection of possible references quickly, the ability to see a wider scope of data, will allow us to better use our human ability to understand patterns intuitively, and to delve down into the data pile to extract supporting or contradicting information. This will *feed* our curiosity, not limit it. More information will be hyperlinked – more jumps hither and yon for our minds to explore.

The mistake Carr has made is to use the wrong model for his analogy. He has tried to equate the knowledge economy with the industrial economy. Sure, there are forces at play which push us in the direction he sees – any business is going to want its workers to concentrate on the task at hand, and be efficient about it. That’s what the industrial revolution was all about, from a sociological point of view. This is why some employers will limit ‘surfing’ time, and push their workers to focus on managing a database, keeping accounts balanced, and monitoring production quality. While they are at work. But that has little or nothing to do with what people do on their own time, and how they use the tools created by information technology which allow for much greater exploration and curiosity. And for those employees who are not just an extension of some automated process – those who write, or teach, or research – these tools are a godsend.

In fairness, Carr recognizes the weakness in his argument. He acknowledges that previous technological innovations on a par with the internet (first writing itself, then the development of the printing press) were initially met with gloom on the part of those who thought that they would allow the human mind to become lazy by removing the need to hold all necessary information within the brain itself. These predictions of doom proved wrong, of course: while some discipline in holding facts in the brain was lost, the increased freedom of accessing information needed only fleetingly was a great boon, allowing people to turn their intellectual abilities to using those facts rather than merely remembering them.

Carr ends his essay with this:

I’m haunted by that scene in 2001. What makes it so poignant, and so weird, is the computer’s emotional response to the disassembly of its mind: its despair as one circuit after another goes dark, its childlike pleading with the astronaut—“I can feel it. I can feel it. I’m afraid”—and its final reversion to what can only be called a state of innocence. HAL’s outpouring of feeling contrasts with the emotionlessness that characterizes the human figures in the film, who go about their business with an almost robotic efficiency. Their thoughts and actions feel scripted, as if they’re following the steps of an algorithm. In the world of 2001, people have become so machinelike that the most human character turns out to be a machine. That’s the essence of Kubrick’s dark prophecy: as we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence.

Wrong. Wrong, wrong, wrong. This is a complete misreading of what happens in the movie. Kubrick’s vision was exactly the opposite – HAL was quite literally just following orders. Those orders were to preserve the secret nature of the mission, at the expense of the lives of the crew whom he murders or attempts to murder. That is the danger in allowing machinelike behaviour to be determinant. Kubrick (and Arthur C. Clarke) were, rather, showing that it is the human ability to assess unforeseen situations and synthesize information to draw a new conclusion (and act on it) which is our real strength.

*Sigh*

Jim Downey

(Hat tip to Wendy for the link to the Carr essay!)


2 Comments so far

Very interesting — you’d likely find this piece by Aaron Barlow to be of interest, as well.

In a nutshell, Barlow reviews an article that should have asked (but did not explicitly ask) the question “can online reading be merged with more traditional reading forms and methods to develop a new (and more culturally and technologically appropriate) form of reading?” You might find his musings apropos; he also has another piece entitled Babbling to Babel that is even more directly related to your observations and conclusions above.

Comment by GreyHawk

[…] Oh, and this is another argument for the proposition that the Google search engine is an actual Artificial Intelligence, just in its early form, as I have discussed previously. […]

Pingback by Another prediction win! Well, sorta. « Communion Of Dreams



