Rylan Schaeffer

Book: The Shallows: What the Internet Is Doing to Our Brains

Author: Nicholas Carr

Phil Mann, a friend of mine who’s a voracious reader, recommended reading Nicholas Carr’s The Shallows: What the Internet Is Doing to Our Brains. It’s an exposé extending his 2008 article in The Atlantic titled Is Google Making Us Stupid?, which advances the claim that frequent use of the internet makes us better at making simple, quick decisions (such as choosing which search result has the desired answer) while detrimentally affecting our ability to concentrate deeply on a small number of topics.

To the uninitiated, that one-line summary is probably sufficient. But I think that the irony would be overwhelming if I stopped there, so let’s explore a little further. First, if the premise sounds interesting to you, read the book! The beginning is a little disconcerting, as it sounds less like a scientific investigation into an issue and more like Dr. Oz about to introduce his newest nutritional supplement: “Over the last few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn’t going - so far as I can tell - but it’s changing. I’m not thinking the way I used to think. I feel it most strongly when I’m reading. I used to find it easy to immerse myself in a book or a lengthy article. My mind would get caught up in the twists of the narrative or the turns of the argument, and I’d spend hours strolling through long stretches of prose. That’s rarely the case anymore. Now my concentration starts to drift after a page or two. I get fidgety, lose the thread, begin looking for something else to do.” Carr explores why this might be.

His explanation starts with the plasticity of the brain, which is the notion that our brain adapts itself to the task(s) it needs to perform. Like weightlifting at the gym, using parts of your mind strengthens those parts, and like binge-watching Netflix on the couch, failing to use parts of your mind leads to their weakening. He draws examples from monkeys using tools, professional violinists switching hands, and taxi drivers creating mental maps of London’s road system, but in my opinion, the best evidence comes from the World Wide Web’s use of hypertext. Initially, hypertext was seen as a way of enhancing ordinary text, allowing readers to stop, consider a different subject, and then return to the initial article or book. However, as Carr writes, “Research was painting a fuller, and very different, picture of the cognitive effects of hypertext. Evaluating links and navigating a path through them, it turned out, involves mentally demanding problem-solving tasks that are extraneous to the act of reading itself. Deciphering hypertext substantially increases readers’ cognitive load and hence weakens their ability to comprehend and retain what they’re reading. A 1989 study showed that readers of hypertext often ended up clicking distractedly “through pages instead of reading them carefully.” A 1990 experiment revealed that hypertext readers often “could not remember what they had and had not read.” In another study that same year, researchers had two groups of people answer a series of questions by searching through a set of documents. One group searched through electronic hypertext documents, while the other searched through traditional paper documents. The group that used the paper documents outperformed the hypertext group in completing the assignment.
In reviewing the results of these and other experiments, the editors of a 1996 book on hypertext and cognition wrote that, since hypertext “imposes a higher cognitive load on the reader,” it’s no surprise “that empirical comparisons between paper presentation (a familiar situation) and hypertext (a new, cognitively demanding situation) do not always favor hypertext.” But they predicted that, as readers gained greater “hypertext literacy,” the cognition problems would likely diminish. That hasn’t happened. Even though the World Wide Web has made hypertext commonplace, indeed ubiquitous, research continues to show that people who read linear text comprehend more, remember more, and learn more than those who read text peppered with links.”

Carr also made a few other points that I think are worth touching upon. One addresses relaxation. Frequently, after working for a substantial block of time, I’ll want to take a break by checking my inbox or Facebook or Reddit. Carr points out that while these tasks may seem less mentally arduous, the flood of information that needs to be read and processed makes these breaks less relaxing than one would think. He references a study about how to reset one’s ability to focus: “A team of University of Michigan researchers, led by psychologist Marc Berman, recruited some three dozen people and subjected them to a rigorous, and mentally fatiguing, series of tests designed to measure the capacity of their working memory and their ability to exert top-down control over their attention. The subjects were then divided into two groups. Half of them spent about an hour walking through a secluded woodland park, and the other half spent an equal amount of time walking along busy downtown streets. Both groups then took the tests a second time. Spending time in the park, the researchers found, “significantly improved” people’s performance on the cognitive tests, indicating a substantial increase in attentiveness. Walking in the city, by contrast, led to no improvement in test results.” Although walking through a city isn’t scrolling through Reddit and Facebook, I think that the information density is comparable. I’m reminded of J.F. Harding’s recollection of Alan Turing (Harding was the secretary of a local running club that Turing ran with): “I asked him one day why he punished himself so much in training. He told me ‘I have such a stressful job that the only way I can get it out of my mind is by running hard; it’s the only way I can get some release.’”

Carr also explores how human memory works. He rightly criticizes the common perception that because both human memory and computer memory are called “memory,” the two are equivalent or can be treated as equivalent. He points out that in a computer, memory is separate from the CPU, but in humans, our memory is intimately connected with our higher-level thinking; this means forming and connecting memories is far more important to deep understanding than one might think. He concedes that much is unknown about how memory formation works, but emphasizes what we do know: “What determines what we remember and what we forget? The key to memory consolidation is attentiveness. Storing explicit memories and, equally important, forming connections between them requires strong mental concentration, amplified by repetition or by intense intellectual or emotional engagement. The sharper the attention, the sharper the memory. “For a memory to persist,” writes Kandel, “the incoming information must be thoroughly and deeply processed. This is accomplished by attending to the information and associating it meaningfully and systematically with knowledge already well established in memory.”” I’ve noticed that when I take the time to write about a day’s or a week’s events, I can remember them much more clearly. Finding time to reflect is difficult, especially with ample distractions, but I find that exercise works for this as well.

The last point of Carr’s that I want to touch on is, as he calls it, “control over the flow of our thoughts and memories.” He makes two points with respect to this. The first is that internet services are built around clicks: to the financial side, clicks signal customer engagement and advertising opportunities. This creates an incentive for companies to prolong the time you spend using their website or service, which is why things like extra features, hypertext and embedded ads come into play. The problem with this, he points out, is not just the penalty of constantly context-switching to evaluate each of these enticing new paths, but that remaining focused on your original objective becomes much more difficult. The second point Carr makes is that algorithms built to find “recommended” or “popular” articles, links, videos, etc. create group-think that hurts independence and narrows the scope of what you might otherwise find or look for. The impact, he warns, might be substantial: “James Evans, a sociologist at the University of Chicago, assembled an enormous database on 34 million scholarly articles published in academic journals from 1945 through 2005. He analyzed the citations included in the articles to see if patterns of citation, and hence of research, have changed as journals have shifted from being printed on paper to being published online. Considering how much easier it is to search digital text than printed text, the common assumption has been that making journals available on the Net would significantly broaden the scope of scholarly research, leading to a much more diverse set of citations. But that’s not at all what Evans discovered. As more journals moved online, scholars actually cited fewer articles than they had before. And as old issues of printed journals were digitized and uploaded to the Web, scholars cited more recent articles with increasing frequency.
A broadening of available information led, as Evans described it, to a “narrowing of science and scholarship.”” Eli Pariser gave a TED talk on a similar subject in 2011, but with respect to social networks. With respect to the citations, it’s not clear to me what the solution is, because while the risk of something new, significant and revolutionary remaining undiscovered is real, it’s almost impossible for academics to keep pace with the ever-increasing rate of research being done. With respect to everything else, a small antidote is within reach: don’t block people just because they post something you disagree with.