The Age of Spectacle, No. 13
Chapter 4. Underturtle III: From Deep Literacy to Cyber-Orality, part 1
Having tucked Chapter 3 under our belts, The Raspberry Patch continues with the first part of Chapter 4. Chapter 4 will probably require three or four parts to complete—not sure yet. Yesterday, July 12, turned out to be a day too complex to post before sundown. Sorry to have kept you waiting for another day or so.
This third underturtle is harder for the typical educated reader to intuit than the ones about fragilized affluence and the end of modernity (a.k.a. the pluralization of our mythopoetical core) simply because it is less often discussed and its impact is more recent. More than in the other two underturtles, too, we are treading here on the neuro-cognitive impact of the digital disruption that will occupy us a good deal in Part II. That is a bit off the beaten path for those most used to reading and studying social sciences and humanities material. But it is not so esoteric and jargon-laden that you will not be able to master it. So we’re getting ever closer to Part II, slowly but surely. (No, I’m not calling you Shirley; knock it off.)
Patience remains the watchword, as before; and, also as before, your willingness to read through this manuscript with me as I try to polish it is good for you. It talks back to the temptations of our time toward the sophistic, the superficial, the lazy, the decadent, and the emotionalizing cultural demons that will masticate your mind into a surreal mush if you let them. So don’t you let them.
Last by way of introduction, I did a calculation yesterday to estimate how long it would take to complete rolling out The Age of Spectacle manuscript as it currently exists. The rough estimate I came up with is 36 or 37 more posts, for a total of about 50 from start to finish. If I post once each week, as has been the norm so far, that would take us past the election all the way to the end of the calendar year, but the series would end before Inauguration Day. I’ve considered quickening the pace, but it’s hard for me to estimate readers’ tolerance for two long-form posts each week. I will ponder further.
And so we’re off to Chapter 4:
The erosion of deep literacy has come upon us simultaneously with and on top of the sharpest spikes in fragilized affluence these past thirty to fifty years, and with the worst of elite-deferred myth maintenance. That erosion has deprived many of the tools they need to think rigorously and seriously, even assuming the existence of sufficient energy and motivation to do so amid the constant swirl of wow-inducing entertainment and the widespread affluence to indulge in it.
More specifically, the attrition of cognitive focus and patience thanks to a series of uncontrolled and mostly unwitting technological innovations is having major dysfunctional effects throughout all cyber-wired societies. The main focus of our concern is the shrunken willingness and capacity of increasing percentages of would-be American adults to engage in inferential (i.e., inductive) reasoning. Deductive reasoning, going from the general and abstract to the specific, is a lot more popular, though not necessarily better practiced. The capacity to empathize with non-kindred others is also more or less decayed, thanks to immature theories of mind—a kind of acquired autism. The ability to plan coherently, using linearly coherent memory as a model, is attenuated because the coherence of memory is attenuated. The ability to slow down long enough to grapple with difficulty of any sort is plunging downward. All of this is a logical consequence of the erosion of deep literacy. That erosion isn’t the only cause of these developments, but it is probably by far the most important cause.
Some numbers may help illustrate the situation. At the end of 2023, 46 percent of American adults admitted to reading no books during the year, and 5 percent claimed to have read only one book. Possibly many counted in that 5 percent pool were referring to the Bible they looked at as much as read during a church service, and they did not mean that they had read the entire book, or any other book.[1] If so, then it is fair to conclude that an absolute majority of Americans of pre-retirement age reads no books, virtually ever, in their post-high school lives. They do not often or at all read remotely serious non-fiction in magazines or newspapers either.
Longitudinal data on Americans’ reading habits confirm the downward trend. Between 2004 and 2023 the percentage of Americans who read anything more than a grocery list or highway signs in a given day dropped from 28 percent—already pretty low and much lower than before the internet age, and much, much lower than before the age of television—to 17 percent. Most of the decline is owed to men, who read less than women. Otherwise, the longitudinal data contain few surprises: older age cohorts read more than younger ones; college graduates read more than non-college graduates; Caucasians and Asians read more than Hispanics and African-Americans. But pretty much all groups registered non-trivial declines in all areas, except one: readers in the 15-to-24 age bracket are reading more often than before, although the levels are still low and the time spent reading in given sessions is short—less than 20 minutes.
Longitudinal data from across the pond are worth a look, too, because the American situation isn’t unique. One UK study concluded that in the decade from 2012 to 2022 the share of under-17-year-olds who regularly read anything for pleasure, even comic books let alone actual books, dropped 15 points, from 38 percent to 23 percent.[5] That tracks with the comparable U.S. data noted above. During this same period the rise in screen time has been even more dramatic. The two are obviously connected, which makes old debates about teaching reading almost pointless—it’s like two people in a canoe debating which direction to head across the lake without reference to the fact that the canoe is rapidly taking on water. Even if phonics is better than “whole word” methods as a stand-alone approach, the fact is that neither will be very successful if screen-addicted children spend virtually no time outside the English classroom with written language.
Deep literacy erosion, especially since the advent of the internet and then our ubiquitous magic rectangles, cannot be understood fully except in connection with rampant cyberaddiction to screens. It is not just the content of screen-borne media that matters but, perhaps even more, the inherent effects of the technology itself on the brain and endocrine system—the core subject of chapter 9. It is clearly the combination of these “evil twins” that is doing the most, and the most rapid, damage.
The erosion of deep literacy is not the same, and does not inflict the same cognitive/neurophysiological damage, as the debilities created by mass screen addiction, but the two phenomena are often compounded because the latter typically displaces the former.[2] Here enters, center stage, the cyberlution, a shorthand for the cybernetic revolution.
Technovel screen addiction in the post-television era among those who never developed a deep-reading capacity in the first place sums to around two-thirds of American adults, who in turn encompass the aforementioned 54 percent who, according to 2022 Department of Education data, cannot comprehend standard newspaper copy. This large group, a statistical majority, forms the basis within the electorate for the large minority who compose the illiberal, Caesarist populism rising against American liberal democratic traditions.[3] Indeed, to repeat, right-wing populism, in other democracies as well as in the United States, in the past as well as in the present, may be defined simply as mass mobilization in an electoral democracy, beyond the rational apathetic norm, that falls below the mean of deep literacy.[4]
In normal times rational apathy, summing to the so-called consent of the governed, is enough to allow a creative minority to get on with the tasks of leadership, or to screw such tasks up, without too much muss and fuss. But when democratic mobilization spreads out from its extant demographic base, it has the potential to circumscribe the maneuverability and efficacy of that creative minority. That is what Ortega y Gasset described in Italy, Spain, Germany, and elsewhere in the interwar period as mass man insinuated himself into politics borne on the low-flying wings of a bitter anti-modernist romanticism. That is similar to what we have seen happening in the United States since 2007-08. By now, fully fifteen years on, the impact on the political culture of this malign mélange is unmistakable.
The erosion of deep literacy both vitiates the cognitive capability to comprehend and manage abstract concepts, necessary to grasp complex and changing social realities, and privileges the emotive oversimplifications of adolescent-like ideological thinking. Under such circumstances ideological entrepreneurs peddling spectacle-laced rhetoric (e.g., “carnage”) can easily hijack the lion’s share of the nation’s collective political agency.
This matters enormously for political life in an electoral democracy, because the process of reasoning together in a democratic agora to tackle complex, open-ended, and morally infused policy challenges requires certain nurtured and generationally transmitted character virtues. The erosion of deep literacy undermines all of them.[6] When that erosion comes to pervade not just ordinary citizens but members of the political class and supposedly elite journalists, the situation predictably becomes dire sooner or later. Sooner, it seems; propelled by the unexpected and protracted angst supplied by COVID-19, we’re just about there. The madness is manifest, and it has been clearly manifest in our politics for some years now.[7]
These days it is almost impossible to make the point about the impact of deep literacy erosion on politics too often: The era of print and long-form reading that followed the invention of movable type and the Protestant Reformation cultivated the habits of mind that profoundly structured liberal democratic discourse and ultimately institutions. The digital revolution has made the advent of the television age look like a mere surface scratch when it comes to the damage it has done to those habits of mind. The discursive norms of liberal politics have all but vanished from American life. Deep literacy has been replaced with a version of an “attention economy” so void of real substance that not even the coiner of that phrase, Herbert A. Simon, could have imagined it. The key: Images have replaced words as the currency of persuasion, with all that implies about the affective/rational balance images deploy compared to words.
Writing, too, is now transformed as a consequence, political writing in particular. Every paragraph, writes Mary Harrington in UnHerd,
. . . now has to compete with a trillion others, meaning the incentive is for engagement, thrilling stories, grotesquerie and clickbait. . . . Anyone clinging to any residual optimism about the norms of objectivity or civil discourse under those conditions hasn’t been paying attention. . . . For along with an attention economy comes attention politics: the tussle to control what people notice, or who gets a platform. . . . In this context, ‘speech’ itself fades in significance, relative to control over the algorithmic parameters of speech, particularly so in the artificial-intelligence suffused environment fast coming our way.[8]
Harrington notes the change in tone in the New York Times since it reoriented its business model away from print and to digital subscriptions. What she does not note is that shadow headlines suddenly began appearing over every story and column with an estimate of how many minutes it takes to read it. This innovation will push copy into ever-shorter forms, privileging columnists who can write succinctly—not necessarily a bad thing, of course—but also privileging reportage that is radically short on context—not a good thing if reporting is actually its purpose.
Add to this the relatively sudden sharp popularity of podcasts, and one need not do a deep dive into the data to see the next stage aborning in the eclipse of reading. Podcasts are arguably a positive development in communicating serious content to those who these days would not read that content were it to appear in print. That was the burden of much commentary half a dozen or so years ago, and some optimists still cling to that view.[9] Others of those commentators, however, have since changed their minds.
Writing in the January 20, 2023 Financial Times, Janan Ganesh admits he was an early booster of podcasts. He now argues that podcasts have replaced the television drama as “a way of not reading and feeling good about it,” just as television dramas, propelled and justified by the same basic lie, replaced the reading of novels many years earlier.[10] Podcasts are rarely as educational as most people think they are because much of the time they function as background noise for multitaskers, and the result is the same as it is in all multitasking: the division of attention so that no task is done particularly well.
This is not always true of podcasts: During a long car ride on a typical multi-lane highway a listener can focus on a podcast’s content without much distraction. But when multitasking is in play, the hard content of a podcast, if there is any, is secondary to the virtual company the listener enjoys from the practiced presenters. And what hard content there is tends to be much truncated from the days of truly long-form audio content with the likes of Carl Sagan, A.J.P. Taylor, and Kenneth Clark. It’s light entertainment or at best infotainment, or what Ganesh disparagingly but shrewdly calls “conversational muzak.”
What content there is tends not to stick because podcasts ask almost nothing of the audience as they prance along at banter speed. They are passive compared to deep reading, which requires readers to bring to bear their own resources dialectically onto the printed page. “I wonder about the ‘stickiness’ of knowledge,” Ganesh writes, “that you don’t have to fight for.” The same observation, for some of the same and some different technological reasons, also applies, by the way, to much so-called educational TV.[11]
Put differently and even less benignly, podcasts most often function as highbrow distractions in a world increasingly addicted to distractions; in that sense they are part of a major problem and a solution to none. Cybertech has accelerated a kind of acquired attention-deficit/hyperactivity disorder: AADHD. Podcasts are piling on the problem. Some evidence lies in a famous, or infamous depending on one’s interpretation, 2014 experiment in which, given a choice between sitting quietly with their own thoughts, uninterrupted, for a mere 15 minutes and administering themselves a low-level electric shock to break the boredom, a surprising number of subjects hit the shock button.[12]
What does it mean when so many people are afraid to be alone with their own thoughts? Why is that? Is it just a reflection of a lack of patience, or a failure to see the benefits of occasional lulls of boredom?
We’ll not be able to answer those questions on the basis of the experiment just described, for it left much to be desired. Putting people in a soundproof sterile room with no windows simulates no remotely normal environment, certainly not a room where meditation or silent prayer regularly takes place, let alone outside among the phytochemicals as in shinrin yoku (forest bathing).[13] It is hard indeed to imagine a forest bather enjoying the natural aerated oils of ancient trees getting up from repose after ten minutes and whacking himself with a switch. The “disengaged mind” Timothy Wilson and his colleagues produced in their experimental design does not exist in nature, any more than optical illusions—sharply drawn symmetrical images laid on sharply contrasting black-and-white homogeneous two-dimensional planes[14]—do. So one has to wonder what it could possibly prove about real-world circumstances, despite the fact that it is obviously true that patient introspection seems about as popular these days as herpes.
Worse, but not the experimenters’ fault, the study was a mere snapshot, not a video, of the phenomenon under study. One really wants to know if more or less the same results would have obtained before the advent of smartphones and the many other ubiquitous apparata of cyberaddiction. Probably not, if one credits the reality that brain wiring is continuously shaped and reshaped, even into adulthood, by the stimuli to which it is exposed—a proposition not subject to serious doubt.
Ganesh avers that podcast habits are actually worse than prods to AADHD. Podcasts, he argues, contribute to the massive hemorrhage of patience—which Ganesh nicely describes as “an atrophied muscle in the smartphone age”—while reading cultivates it. Indeed, podcasts have other questionable functions Ganesh does not mention. One is that they lead people to think they are doing something serious when they are not, thus debasing what the word serious means. If the podcast being aired is work-related, it may also dumb down the standards of what work as a vocation means. After all, podcasts are forms of orality, so the false sense that the way language is used in them dwells at the heights of intellection infects the culture with debased standards of what narrative language is really capable of when it is set down in writing.
More: Many use podcasts to relieve loneliness at a time of epidemic loneliness—a problem certainly not helped by the two-year COVID experience—and to salve insomnia at a time when no shortage of that malady exists, as well. Podcasts band-aid these afflictions; they don’t much assuage them and may even obstruct efforts at more satisfying and lasting solutions. They also prompt mendacity, to the self and others, because they ascribe to things that are not books the intellectual status of books, and they do it, so far at least, without the slightest social stigma. Lastly, they take time to listen to, time that in many cases amounts to an opportunity cost paid out of time that might otherwise be spent reading.
Clearly, as a substitute for reading, to the extent that is what they are for a given person, podcasts are another dumbing-down vehicle, for the brain does not process oral speech the same way it processes writing. As a throwback to what Walter Ong called orality, podcasts represent a pre-modern methodology joined to a postmodern ethos that is, in Ganesh’s apt term, “bibliophobic.”[15] Podcasts are a methodology of cyber-communication that, to some extent at least, places passivity and affect above an active striving to master content. “It should be obvious what is going on,” concludes Ganesh: “People are willing to do almost anything rather than read at length.”
The Reading-Writing Dialectic
If the benign habits of deep reading deteriorate, it is certain that habits of expository writing will do the same. Like literacy, orality has two sides—in this case, listening and speaking. The implications are several, however, and some matter a lot more than others. For one manifestation that does not matter so much, consider the still-recent hybrid form of orality that characterizes the phenomenon of social media “writing”—the reason for the scare quotes will soon become evident.
Given no more than a moment’s thought, one might suppose that the vast majority of the copy on Facebook, Nextdoor, and other niche forms of social media is composed in the mental mode of written language. It does seem, after all, to be writing: letters put together to form words, words put together to form sentences, sentences put together to enter the network of symbolic meanings that writing and reading are all about. You might be someone who supposes that because right now you are reading a book, and your habituated orientation to the copy is that of written as opposed to oral language.
If so, you might be someone who has been aghast at the abysmally poor technical as well as literary quality of the writing in social media, and you might well have wondered whether the semi-literacy you consistently see in such domains is better explained by the sad fact that most people don’t know how to write proper English sentences anymore, or because they don’t care to do so—or some entwined combination of the two. I, too, used to be perplexed by this mystery until it dawned on me that most of those writing on social media actually do not approach it as though it were a form of written language; they approach it as a form of oral language, since most use voice-activation software to “write” their messages and responses.
Some “writers” do review what gets laid down on the screen to make sure it makes sense, ferreting out glaring errors in the voice-to-text system and occasionally adding punctuation. Far fewer, it seems, will go further than that to check for proper usage conventions, let alone anything resembling style or elegance. Most do not bother to review anything, and one frequently sees whole long paragraphs with no punctuation whatsoever. This seems to mimic video streaming, which mesmerizes viewers precisely because no electronic punctuation is allowed to interrupt the electron flow. Shadow effects in the digital age are nearly everywhere, and can be detected if one looks for them.
What we see in social media writing is partly a reflection of the impatience pandemic from which we as a society are suffering, and partly a reflection of the fact that most participants do not hold themselves and others to the same standards as they would, or might, if they were genuinely oriented to written language tasks. In short, the apparently coherent choice between “they don’t know how to write” and “they don’t care” is a category error that completely misses the point.
We even have a hybrid form of mixed writing and orality: emoticons. Emoticons are a rough 21st-century equivalent of hieroglyphics. They seem to be very popular especially among those younger than about 40. The effect is, however, worse than if modern hieroglyphics were used the way they were five thousand years ago in Egypt, for example. Unlike the more advanced renditions of ancient hieroglyphics, emoticons require no syntax; they work either as a kind of singular exclamation mark or, when bunched together, like a multi-vehicular emotional collision. Even worse, the emoticon user picks from a list of emotions devised by some anonymous others instead of trying actually to create his or her own through the process of expressing it in writing.
Thanks in part to the ubiquity of social media in tandem with the decline of serious reading, many people now apply sharply degraded standards to written language as a shadow effect of the new orality dominance. On the Nextdoor feed I get locally here in the Maryland suburbs of the nation’s capital, the words “there,” “their” and “they’re” are often used interchangeably. If anyone points out such errors, inevitably someone will chime in “it’s just social media” or, worse, that it doesn’t matter anymore. Indeed, one fellow living in a retirement complex called Leisure World assured everyone that “those rules are not enforced anymore.” That would have to mean that this fellow does not read newspapers, magazines, or books in English, because yes indeed, thank heaven and professional editors, those rules are still enforced in places where written language endures.
But how many places does that come to? If indeed it is true, as noted above, that 54 percent of adult Americans cannot comprehend standard newspaper copy, then it’s no big surprise that we are witness to otherwise intelligent people debating which sources of news that come to us on a screen are less biased than others, as though this were a real question. All of it is infotainment—as cable news has been called Kabuki theater—and as such bears the sorry characteristics of the age: shallow and undemanding; image-saturated and thus emotionalized; ahistorical and focused relentlessly on the now—the “Breaking News” syndrome, the cocaine of the media industry. All of this has a history, and it is a history worth reviewing in brief.
Where Did the News Go?
Grabbing attention when it comes to political jousting has changed form in recent years, and it is worth reminding ourselves of what came before what is now. One way infotainment producers grabbed attention, and with it the market share that kept advertisers forking over large pots of money to pay for the show and the jobs associated with it, was to turn political discourse and supposed news broadcasts into a blood sport by matching extreme positions on any given issue and encouraging the debaters to be assertive and arrogant, and to wave their arms around and make faces a lot. This was the framework for the gold standard of “attention economy” television political commentary shows, CNN’s “Crossfire.”
This was only possible, however, because in the quarter century previous to this new tack the dialectic between politicians and media types had undergone a rolling series of cumulative and deliberate distortions, all for the sake of entertainment and gaining market share from its production, that simply astonishes us today when we calmly review its stages of development. When John Cameron Swayze held court on black-and-white television selling Timex watches, a pretense of imparting actual information to viewers still existed and even happened from time to time. But after Kennedy and Nixon and color television, a mutual self-consciousness among politicians and the media corps led to a competition over which group could pull one over on the other as supposed news broadcasts became, in essence, a serialized made-for-television movie.
As this competition escalated and became more professionalized, the bottom dropped out of anything resembling information provision in favor of pure entertainment, and weird recursive layers began to aggregate. Politicians learned how to perform for the media; the media then ranked how politicians were doing in trying to influence them and gain attention; the media then specialized in incentivizing politicians to perform for them; politicians then pretended that they were not captives of their own rigged pseudo-events; media then called them out as hypocrites; and so on until reportage on the backstage competition itself became the stage.[16] It was as though the viewer was challenged to determine who was really calling the shots between Charlie McCarthy and Edgar Bergen as the layers of challenge and pretense thickened and blurred.
While all this was happening ethical standards in both politics and journalism crashed down to the stage floor, bounced into the bathroom and landed face down in the toilet. It took only some imagination and a few designer hallucinogens to get from there to Hunter S. Thompson’s 1971 gonzo classic Fear and Loathing in Las Vegas: A Savage Journey to the Heart of the American Dream, which fully merged journalist and subject matter in a mélange in which entertainment was the point and reality, whatever that was, became just the prop. The merger took explicit political form in Thompson’s 1973 sequel, Fear and Loathing on the Campaign Trail ’72. Both books were based, as anyone old enough to remember can tell you, on Norman Mailer’s seminal 1968 semi-novel Armies of the Night, a triumphalist send-up of ego over experience based on the 1967 March on the Pentagon.
Gonzo journalism has fallen on hard times in the decades since because, obviously, to thrive it needs enough of a readership to get attention and pay its way for publishers and authors alike. In even more recent times its problem has become even more dire: It’s just not possible to parody what is already a farce.
Alas, in the face of the new orality we are reduced to the Jon Stewarts, Stephen Colberts, and Jimmy Kimmels of the screen world, meek imitations in our post-literary era in which tame comedy has replaced blood-on-the-saddle sarcasm of the Lenny Bruce type. At least Brucean wit displayed moral honesty if not always good judgment, while the comedic screen shadows of the 21st century succeed only through the artifice of ridiculing their targets for the purpose of entertaining the look-down-their-noses audiences who depend on them for their next dose of snobease. This is not progress.
Nor have feature news opinion shows been headed in a good direction. When “Crossfire” first aired in 1982 it was fairly buttoned down, but it soon evolved along the lines of the bloodsport model, looking day by day more like a seated, un-costumed version of a WWF bout. It became a high-energy circus act pretending to be an evenhanded presentation of views.
Viewers liked it, so said the ratings. Critics worried about its polarizing impact and its capacity to incentivize incivility generally. One observer aptly characterized the then-emerging trend as a “flood of sensationalistic infotainment bullshit . . . that panders to the public’s worst instincts and whips both sides into a mutually antagonistic frenzy, all to maximize media company revenues.”[17] Carl Sagan and Ann Druyan were more circumspect but just as accurate in their comment on the media back in 1995: “The dumbing down of America is most evident in the slow decay of substantive content in the enormously influential media, the 30-second sound bites (now down to 10 seconds or less), the lowest common denominator programming, credulous presentations on pseudoscience and superstition, but especially a kind of celebration of ignorance.”[18]
All of this was true, of course, but nothing could stop the vesuvial surging of entertainment unbound, and the billions of dollars it earned for its promoters. Even evangelical religion got turned into puerile entertainment thanks to television and in due course cable television in particular. This was, again, nothing wholly new in America. Tent-meeting revivals were great fun; many a European visitor to 19th-century America was shocked by how democratized religion in the United States contrasted with tradition in Europe. Billy Sunday and Dwight Moody were entertainers, too, and cultural artifacts like the 1960 film Elmer Gantry explored them. Billy Graham understood the deal then, just as Senator Tim Scott understands it now. But nothing prepared the TV screen-glued world for Jerry Falwell, Tammy Faye Bakker, John Hagee, and all the other devil floggers and pocket fleecers we have witnessed on the televised gospel circus all these years.
Nine years after Sagan’s prophecy came an iconic moment in American television history: Jon Stewart appeared on “Crossfire” on October 15, 2004, ostensibly to plug his new book, but instead used his appearance to excoriate the show’s very essence: “It’s hurting America,” he said to the surprise of his hosts. “Here is what I wanted to tell you guys: Stop. You have a responsibility to the public discourse, and you fail miserably.” The criticism resonated; the show was cancelled in September 2005. (It briefly revived in September 2013 and lasted until August 2014.)
Stewart’s criticism was not unfair, and his boldness was nothing if not well intentioned, even brave by professional standards. But look what has happened since: With the further proliferation of self-referential designer political commentary on screens, opportunities for viewers to hear two sides of an issue on the same show have all but vanished, except when staged for entertainment effect—as when, for example, in March 2024 NBC temporarily hired then-recently dumped RNC chair Ronna McDaniel not to analyze the news but to play a role. (The hire did not stick, but that was not the reason for it.)
The extremes are no longer allowed even to touch. . . .
The Age of Spectacle:
How a Confluence of Fragilized Affluence, the End of Modernity, Deep-Literacy Erosion, and Shock Entertainment Technovelty Has Wrecked American Politics
Foreword [TKL]
Introduction: A Hypothesis Unfurled
PART I: Puzzle Pieces
1. The Analytical Status Quo: Seven Theories of American Dysfunction
2. Underturtle I: Fragile Affluence and Postmodern Decadence
3. Underturtle II: Our Lost Origin Stories at the End of Modernity
4. Underturtle III: From Deep Literacy to Cyber-Orality
5. Underturtle I Revisited: The Net Effect
6. Underturtles Summed: The Cultural Contradictions of Liberal Democracy
PART II: Emerging Picture
7. We Do Believe in Magic
8. “Doing a Ripley”: Spectacle Defined and Illustrated
9. The Neuroscience of Spectacle: Shiny Electrons and the Novelty Bias
10. The Mad Dialectic of Nostalgia and Utopia in the Infotainment Era
11. Beyond Ripley: Spectacle and the American Future
12. What Our Politics Can Do, What We Must Do
[1]See Andrew Van Dam, “The unfortunate way Americans afford their Christmas spending sprees, and more!,” Washington Post, December 22, 2023, under the subhead “Does Anyone Still Read?”
[2] A primer on the neurobiology of screen addictions is Richard Cytowic, “Your Brain on Screens,” The American Interest (July/August 2015). See also the groundbreaking work of Maryanne Wolf, especially Reader, Come Home (Harper, 2018).
[3] Argued in my “The Erosion of Deep Literacy,” National Affairs (Summer 2020).
[4] Yes, apathy toward electoral politics is both normal and functional in a mass electoral democracy, hence we may speak of rational apathy on the level of the political system as a whole. Rational apathy’s impact on electoral outcomes is affected by electoral systems’ design. In Australia, mandatory voting negates the effects of rational apathy on behalf of a quasi-authoritarian impulse. Contrarily, Irish democracy uses a form of ranked-choice voting, combined with vote distribution techniques common to parliamentary systems when a party fails to meet the minimum level for entry into the legislature, called PR-STV (proportional representation by single transferable vote). PR-STV reduces turnout but, by so doing, tends to stabilize political life. The system is not currently under critical pressure; most Irish citizens are either in favor of it or resigned to it.
[5] National Literacy Trust, “Children and young people’s reading engagement in 2022,” September 7, 2022, updated January 23, 2023.
[6] Here see my “The Erosion of Deep Literacy” and the many citations therein, especially those of Maryanne Wolf.
[7] See my “The Present Madness,” The American Interest, June 15, 2020.
[8] Harrington, “Humans will defeat the chatbots,” UnHerd, December 15, 2022.
[9] For example, Justin Kempf, “The Revolution Will Be Podcasted,” American Purpose, January 19, 2024.
[10] Ganesh, “Podcasts aren’t as smart as you think,” Financial Times, January 20, 2023.
[11] See Jerry Mander, Four Arguments for the Elimination of Television (William Morrow, 1978).
[12] Timothy D. Wilson et al., “Just Think: The challenges of the disengaged mind,” Science 345:6192 (July 4, 2014).
[13] For those unaware, forest bathing is based on the fact that groups of trees exude phytochemicals known to help relax urban-stressed humans, who after all evolved in their presence. Not only does inhaling phytochemicals lower blood pressure and reduce cortisol levels; some trees also release phytochemicals that stimulate the release of anti-bacterial proteins in animals, including humans. For those still perhaps a bit unclear but who happen to be spatropic, think of shinrin yoku as “aroma therapy” at scale.
[14] We come back to optical illusions in Part II. They are much more revealing than many suppose if properly understood.
[15] See Walter J. Ong, Orality and Literacy: The Technologizing of the Word (Methuen & Co., 1982).
[16] Excellent accounts of this process are Gabler, Life: The Movie, chapter 3; and Postman, Amusing Ourselves to Death.
[17] Brink Lindsey, “Fighting in a Burning House,” The Permanent Problem (Substack), February 7, 2023.
[18] Carl Sagan and Ann Druyan, The Demon-Haunted World: Science as a Candle in the Dark (Random House, 1995), p. TBK.

