The Age of Spectacle, No. 17
(New) Chapter 4. Underturtle III Continued: The Rise of Cyber-Orality, part 2
The Raspberry Patch’s rollout of The Age of Spectacle is working, in its own somewhat twisted way. I awoke this morning with a major organizational adjustment in mind: Chapter 5 has been eliminated except for a few summary paragraphs placed at the end of what is now Chapter 1; Chapter 12 is mostly eliminated, turned back into the brief epilogue it once was; and former Chapter 1 is sharply summarized and put into the introduction, the rest tossed to the cutting room floor. Some of the material in the introduction is eliminated or moved. I may also consolidate the discussion of magic, leaving early cameo foreshadowings but putting most of what Chapter 2 has to say about it into what is now Chapter 6.
Chapter 4, the chapter we have been on lately, was too long and needed splitting into two. That means we are now amid new Chapter 4, part 2, with the beginning of it included in last Friday’s post. The newly redesigned outline at the end of this post, compared with the earlier version, will show all these changes.
Sorry about this, and for having already set before you several paragraphs that will not be included in any finished product, if there ever is one. Of course I wish I had hit upon this reorganization earlier. I wish a lot of things. I also wish that this were the end of reorganizing The Age of Spectacle project, but I cannot promise that.
My tardiness at seeing the necessary is what comes, I think, partly at least, from not being in regular face-to-face collegial contact with others who share some of my same interests. This often happens, alas, to retired people, but it also happens increasingly in the work-from-home routines we are now lunging into. That saves a lot of commuting, true; but it ain’t all it’s cracked up to be…. Email, texting, telephone conversations, and even FaceTime, WhatsApp, and Zoom do not, cannot, amount to comprehensive in-the-flesh human engagement. (No, I’m not talking just about sex.)
And so we continue:
. . . Television. . . What had been unnatural became so common and well assimilated that the process hardly drew notice. This upending of linear time was, it now seems obvious, a kind of structural portal to the unreal, and from the unreal, with a little splash of willful creativity, to the surreal—and so to the 24/7 spectacle opportunities now at our beck and call. . .
We rarely focus anymore on how television changed in a fairly short time and how that affected perceptual norms. Early black-and-white television, dominant until around 1963 or 1964, provided both less and more verisimilitude with real life than did later color television: less because it was black and white, but more because more of it was live. Then things flipped: Color TV was more like real life, but the canned nature of most programming was not.
We also typically ignore how producers tried to augment the verisimilitude of canned offerings by taping them before a live audience, or by editing canned laughter into the soundtrack for comedies. That was designed to make viewers at home feel as if they had lots of like-minded company, and to give the impression that the show was originally a live performance when it typically was not.
This was a form of fraud, but few seemed to care or, after a while, even to notice. Fraud and assorted monkeyshines characterized the entire business, given the dominant advertising function of the medium. TV was based on snake-oil-inflected fraud of several sorts, and the fraud got normalized in a Faustian bargain struck not for immortality but for mostly mindless entertainment. The whiz-bang oscilloscopic shiny-object distraction machine made the problem vanish for most viewers; alas, it still does.
That may qualify as a not-so-mere datum of our recent socially at-scale cognitive evolution for, without a doubt, television provided the on-ramp for the wigged-out para-reality so many Americans seem to live in today. One infrequently mentioned effect is how television diluted parental authority, changed tastes, and perforated the privacy of family life. In his famous—or infamous—1987 book The Closing of the American Mind, Allan Bloom got one thing wrong in his observation of 20th-century technology’s cultural impact, but two bigger things right:
First radio, then television, have assaulted and overturned the privacy of the home, the real American privacy, which permitted the development of a higher and more independent life within democratic society. Parents can no longer control the atmosphere of the home and have even lost the will to do so. With great subtlety and energy, television enters not only the room, but also the tastes of old and young alike, appealing to the immediately pleasant and subverting whatever does not conform to it.
Bloom was mistaken to conflate radio and television. Radio is word-heavy; television is image-heavy and so empowers Romanyshyn’s “despotic eye” even as it focuses overwhelmingly on entertainment at the expense of everything else that might be done with that technology. He was correct to argue that television is all about the immediately pleasant, and doubly right to focus on what TV did to home life. It not only privatized entertainment by dint of a technology that families around the hearth could never have developed for themselves, and thus enhanced their dependency on commercialized technology and magnified their exposure to industrial folklore; as Robert Putnam later concluded, it also further isolated the family from the wider community and contributed to the erosion of social trust. It advanced the switchout of our stories, too. And it brought the aforementioned time warp inherent to the medium right into our living rooms and bedrooms.
The same time manipulation was true, of course, of “finished” oral presentations, of sagas and other set, memorized stories in pre-literate times. The same is also true in a slightly different way for books: The story is done beforehand, secretly as far as the intended audience is concerned, then presented. Modern technology has taken the time parallax to new levels: We are more fully engrossed, are less inclined to interrupt the scripting, and are wowed—which is to say cognitively astounded, in more scientific terms—by the technical aspects of screen fiction in ways that oral and literary presentations of the past have only rarely, and by different means, achieved. We can wallow for multiple hours each day in temporal shape-shifting dreamworlds because we are so affluently leisure-rich and hence so well-practiced at it. Not surprisingly, available data on the leisure-time habits of Americans tell us that this techno-wallowing is exactly what many people do with the great bulk of their leisure time.
There is more. Vaulting ourselves toward or fully into a dreamlike state, however we do it, is related to forms of religious experience—indeed, to ecstatic religious experience and to related but distinct experiences of mysticism. As any cultural anthropology adept knows, early theater arose out of pre-literate religious culture. The acting out of literally unrealistic storylines was capable of putting the audience into a collective mystical mood; it was akin to a religious experience in that it took the dominant theology of the time and presented it in an exciting (because only the actor-priests knew what came next) and intersubjectively available manner. It essentially brought the theology alive in the moment in a way that no other available behavioral modality could achieve. Music and dance in crowds around a central evening fire constituted precursors of theater, but theater marked a major advance over clipped repetitive lyrics in early song because it differentiated roles and depended on a more elaborately predesigned storyline with fleshed-out dialogue.
Even in Shakespearean times fully two thousand years later, when the Bard as the first truly modern playwright sought to naturalize theater away from its original theological vocation—a vocation that persisted in his day in “passion plays”—the audience at the Globe Theatre sometimes had difficulty maintaining the bracket between the lebenswelt and the acting on stage. In a stellar example of art imitating reality imitating art, this historical fact is brilliantly displayed in a scene from the 1998 Tom Stoppard/John Madden film Shakespeare in Love, when members of the inner audience within the film frantically call out to the stage to stop Romeo from killing himself over the body of a drugged and sleeping, but not dead, Juliet. We of the 21st century are far more sophisticated: We know the difference between what is real and what is being acted out fictionally before us. Really, we do? Always? Still?
The religious dimension of early theater—and perhaps not only early theater—also shows in the fact that it created a form of immortality. That, too, is a way of screwing with time by, in a sense, defying it. Real people always die but roles need not; the actors who play those roles borrow some of that immortality whilst they are performing. As immortals of the moment while on stage, they therefore become as gods to the audience. In some ancient cultures, not to exclude ancient Greece and Israel each in its own way, only priests knew how to perform the sacred rituals, so only they could be actors. Priests in ancient Greece and Israel were at the least portals to the gods for their audiences, and in many cases they reported themselves “possessed” as they acted out their sacred theatrical duties.[1]
With television and movies, today’s actors, particularly the famous ones, are godlike to their fans in the context of celebrity culture. In several non-trivial ways celebrity culture is dime-store religious culture, a kind of fast-food form of it. The key similarity is that both feature mythic narrative storylines of combined aspirational and moral uplift that are in essence timeless—immortal, in other words. As the occasionally philosophical rocker Ray Davies of The Kinks put it in a 1972 song lyric: “Celluloid heroes never feel any pain, and celluloid heroes never really die.”[2]
In that sense, for example, June Allyson was the (supposedly secular) 20th-century goddess of happy endings. John Updike’s 1996 novel In the Beauty of the Lilies captures the interweaving of the two mythic streams perfectly: The book’s first main character in what is a multigenerational family saga is a Protestant clergyman who loses his faith, renounces his congregational position, and ends up substituting going to the movies, in a cathedral-like venue, for churchgoing as a source of quasi-spiritual solace.[3] One of his progeny ends up a faithless Hollywood celebrity goddess, and her son in turn becomes a returnee to faith, albeit as a member of a doomed Waco-like cult.
Indeed, film celebrities are the little Hollywood gods of modern times, with the female stars from Fay Wray and Jean Harlow to Marilyn Monroe and Raquel Welch being well named “sex goddesses.” But male and female leads alike are all “stars,” and what are stars? The ancient Greeks believed they were gods twinkling in the dome of heaven, so no doubt they would have understood and approved our contemporary use of the word. As we move further into whatever postmodern times will eventually be named by posterity, stars loom ever larger as avatars of our evident obsession with imaginative, childlike play in the form of fantasy entertainment.
This should come as no surprise: Unstuck times always unlock the romance cupboard. The Industrial Revolution in due course spawned the romanticization of childhood, and we got J.M. Barrie’s Peter Pan out of it. Now, it sometimes seems, we all want to be Peter Pan, soon perhaps to possess networked virtual-reality headsets.
In the meantime, our 21st-century variety of spectaclized bread-and-circuses makes do with, for example, a recent “Ghostbusters” sequel featuring a revenant evil goddess, Gozer, returned to earth, only to be foiled by the two grandchildren of one of the deceased original trio, Egon Spengler. Except that the genius who anticipated the return of Gozer out of a mountain near a small Oklahoma town isn’t entirely dead: Harold Ramis, the actually deceased actor and original “Ghostbusters” co-writer who died in February 2014, appears in silvery ghostly aura toward the end of the film to join his beloved but abandoned daughter and his now-aged former comrades (Bill Murray and Dan Aykroyd) in a save-the-world-scale victory. In the film, the ghostly Ramis, recreated digitally in yet another example of cinematic necromancy, embraces his daughter before surreally dematerializing and rising in a swirling shimmer toward the dome of heaven. The film’s subtitle is, naturally (or is that unnaturally?), Afterlife. If this is not a secularized quasi-religious fantasy, a mash-up of the Book of Revelation with the popular teen adventure flick genre, nothing is. You have to see it to not believe it.
We know at some level of consciousness that much contemporary fantasy entertainment is quasi-religious fare to one extent or another, whether we take the time to work out the particulars or not. We are doing what Updike’s fictional faithless Reverend Wilmot did at a time when going to the cathedral of the movie house was by far the cheapest form of quasi-religious visual fantasy entertainment available. Ortega y Gasset, sounding a bit like an early Iberian existentialist, saw it clearly in The Revolt of the Masses: “For the truth of it is that life on the face of it is a chaos in which one finds oneself lost. The individual suspects as much, but is terrified to encounter this frightening reality face to face, and so attempts to conceal it by drawing a curtain of fantasy over it, behind which he can make believe that everything is clear.”
It is the same, and different, now. The basic dynamic is unchanged, but now the wow of spectacle is much headier even though the screens are far smaller than those of pre-multiplex movie houses. (The images, however, are much sharper and the color fidelity far closer to reality.) Availability is for all practical purposes infinite, constant, and mobile, and the cost is both trivial and hidden in nominal subscription fees. It is a rare viewer who thinks much about any of this because we now seek out the fictive so often that, for many, the recognition of its actual reality status melts seamlessly into the normal flow of our experience. We conspire with the presenters so that we may become engrossed in what we are watching, and that world is real to us “whilst it is attended to,” to recall William James’s famous if quaint-sounding phenomenological language from more than a century past.
But here is the rub: “whilst attended to” seems to have lost its original cognitive framing brackets because the categories themselves have melted into one another for many people. Thus James, again, commenting on the human craving for abstract templates of meaning to give order to the puzzling flow of life: “We believe everything we can, and would believe everything if we could.” Having willingly become apprentice cyborgs of a sort, and having evidently reached the point where reality has receded far enough from once-universal default cognitive frames, some of us now do believe everything—at any rate everything we want to believe.
This is not a new idea. Film critic and scholar Neal Gabler suggested back in 1998, in Life: The Movie: How Entertainment Conquered Reality, that people in entertainment-saturated cultures increasingly default to the bracketing conventions, and presumptive moral attitudes, of fictive presentations when they contemplate reality. Here is how he expressed it: Movies not only allowed the viewer to identify with the hero on screen, but to experience
. . . a vicarious identification with ourselves. This suggested something terribly important. It suggested that the mind had begun processing life the way it processed the movies, and consequently that if the movies were a metaphor for the condition of modern existence, the moviegoer was a metaphor for how one could cope with that existence.[4]
That was then, when real life dominated a person’s sensory experience. No one literally lived in a movie theater. What about now? Technovelty enables people to dwell almost continuously in fantasy if they wish to do so. Ever fewer of us really must use lebenswelt-grounded sensory flows to adjust back to reality; some never do. Expose a non-reader’s limited theory of mind, in particular, to enough vivid fiction and the brain’s default disposition toward reality will flip like a Necker cube from actuality to reality-TV mode, with its simplistic plotlines and limited vocabulary.
That exactly is what has been happening at scale in the United States for more than four decades now, the past two decades particularly. As the capacity for technological glitzification has risen and the costs to engineer it have declined, technical events have become more numerous and thus the pace of cognitive transformation has likely accelerated. As Ray Davies wrote in the same song cited above, “Everybody’s a dreamer, everybody’s a star; everybody’s in movies, it doesn’t matter who you are. . . . I wish my life was a nonstop Hollywood movie show.”
Losing the Lebenswelt
Now for many, life sort of is a non-stop movie show, but also sort of isn’t; here an important point gleaned from the nexus of neuroscience and phenomenology begs greater clarity.
Phenomenologists from William James to Alfred Schütz to Erving Goffman have referred variously to provinces of meaning, to definitions of the situation, to frameworks of reference. They mean by these terms that the brain stores all sorts of sensory data in a kind of hierarchical, fractalized categorical arrangement, one resembling tree roots and lightning strikes but fairly flexible when it comes to recombinant recall. The same experience stored in storyline A1 can be exported in a myelin-sheathed axonal trice to storyline A4 or D9, say, where it will be synthesized into a new ideational gestalt rising into that person’s consciousness. The human capacity to associate and rearrange pieces of memory and then successfully communicate the rearrangement to others is quite remarkable; without it, there could be no metaphor, no art, no abstract intersubjectivity, no Popperian cloudlike social world at all. Culture itself, then, could not exist, for, as Peter Berger summed it up: “The cultural world is not only collectively produced, but it remains real by virtue of collective recognition. To be in culture means to share in a particular world of objectivities with others.”
Thus, whether a person is triggered by some perception to define a situation as fully lebenswelt-quality real or as a laminated keying of the lebenswelt—ranging from a distinct province of meaning like watching and understanding a stage play (not the lebenswelt, but not crazy either) all the way to full-frontal encompassing surrealist immersion—depends both on that person’s sum of learning from experience and on context. Not every person will tend to go off the surrealist diving board the same way or as a result of the same stimuli. Some people—Jacob Chansley, for example—seem to be pretty much always floating in a hallucinated pool of fragmented fantasies, a condition no doubt aided in Chansley’s case by a decades-long steady diet of recreational drug use.[5] (Indeed, drug use patterns are a significant factor in explaining which people seem to jump first, quickly, and most often off that diving board and which do not.)
Otherwise, it seems reasonable to conclude that those who spend the most time imbibing screen-delivered fictions, especially of the fantasy sort, are more easily primed to default to surrealist definitions of any given situation than those who imbibe least, and than those who balance their screen-based sensory diet with both reading and nature-abiding, fully three-dimensional time.
Choosing a framework orientation in response to a particular set of stimuli seems to follow Herbert Simon’s definition of satisficing: Whatever works well enough first is the framework that gets the nod. If fantasy frameworks are always on the tip of a person’s mind, so to speak—as if that person were still locked in the imaginative play world of a seven-year-old—then one of those frameworks is more likely to prevail. That is the structural form of a shadow effect, described above, at work. And that is especially the case if others in the ambit of one’s presence are thinking and doing more or less the same things.
Finally on this point, those definitions of the situation that engage ways of human knowing tethered to the senses (empirical, rational, introspective, and memory-based) are less likely to be taken for a surrealist ride than those a person knows only from testimonial sources.[6] Now consider ways of knowing about national politics: Very few people have direct knowledge of how politics actually happens, because very few are participants in the process. Everyone else knows about politics by testimonial means, and when those means do not include professionally vetted examples of the written word, they can be extremely fragile with regard to truth. So among all the definitions of situations, or provinces of meaning, out there, politics is one especially likely to be taken on a cognitively propelled surrealistic ride. This precisely is how elements of magical causality—the mythic mindset—colonize ground-level political understanding and discourse at scale in the current technovelized environment—of which more in Chapter 6 and beyond. It is why, to lebenswelt-grounded people, the magic-infused fantasists of both Right and Left seem, well, crazy.
What is the result of all this for cultural life writ large? No one has yet improved on Neil Postman’s 1985 summation, written before he could possibly have imagined the changed environment of today:
When a population becomes distracted by trivia, when cultural life is redefined as a perpetual round of entertainments, when serious public conversation becomes a form of baby-talk, when, in short, a people becomes an audience and their public business becomes a vaudeville act, then a nation finds itself at risk: culture-death is a real possibility.[7]
There can be little doubt that the quasi-magical status of the technology itself plays a part here, even as the technology changes. “Any sufficiently advanced technology,” wrote the British sci-fi writer Arthur C. Clarke of 2001: A Space Odyssey fame, “is indistinguishable from magic.” Allow me a personal anecdote.
I was born in June 1951, a time when television was just beginning to reach the consumer-saturation point in the United States. Some of the early live shows featured normal people, often as game show contestants, not actors or professional athletes, being on television where other normal people could see them. Just being on television conferred a heady reputational status on the lucky person. When I was five years old I was chosen from the studio audience to go up on stage to pick a card out of a spinning wire device—something like a large guinea pig exercise wheel—on a local Washington, D.C.-based kids’ show called “The Pick Temple Show” on WTTG, Channel 5. I remember to this day, nearly seventy years later, my five-stair ascent onto the stage, as if it were nothing short of Olympus or, in my case, Mount Sinai. I even remember the surreal feeling of the camera pointing at me as I picked some lucky viewer’s card out of that wire cage. A few years later, as a far more sophisticated television veteran at age eight, I was called out of a studio audience in New York City to be on Jan Murray’s “Treasure Hunt” show. For my trouble I received a “Treasure Hunt” board game, the top cover signed by Jan Murray himself. I wasn’t a star, but I was nonetheless elevated and, in a secular sense, enchanted by these experiences while they lasted. At age eight I had not yet heard of William James, nor had I yet encountered Mike Teavee in Willy Wonka and the Chocolate Factory. But when I beheld miniaturized and then over-stretched Mike Teavee and did read James, my “Treasure Hunt” experience leapt to mind as a personal connective to both.
Obviously, things have moved fast and far since those days of the second Eisenhower Administration. When David Riesman noted the transition of American society from a predominantly inner-directed personality mode to an other-directed one in 1950, he stood at the inceptive edge of a then-novel trend that has now reached near-galactic scale. What I experienced as special before reaching age nine, kids of all ages now experience daily as not-at-all-special on social media. Nearly everyone, adults included unless they know better and take precautions, is perpetually “on” to one extent or another, always acting, always performing, always seeking others’ attention. As was the case in the television age, there is something just-below-the-surface surreal not only about being on the screen so others can see us, but also about seeing ordinary others on screen instead of celebrities. That is the main allure of reality TV, and the reason is that it suggests at some inarticulate level of our residual mythical consciousness that just as celebrities can be immortal, so can we mere ordinaries.[8]
So life now imitates entertainment (not exactly art) not only by way of content, but also by way of cognitive structure, as repeated and thus habituated morphologies of time communicated in screen fictions inflect our cognition of normal, real, wide-awake reality. It qualifies as an embedded religious experience at least in the limited mythic-mode manner suggested here. And once it invades and colonizes a political culture, that political culture changes, and may even change dramatically.
In the American case the key point, again, is clear: A preliterate mentality cannot sustain political institutions based on a literate cultural origin and supported by the corresponding attitudes that emerge from a literate consciousness. A moral conscience based on individual understanding of scripture, as with Protestantism as well as Rabbinic Judaism, needs a political order that in some fashion must sum to the same correspondence between moral reasoning and its institutional embodiment. That institutional embodiment in turn reinforces the centrality of the individual consciences of those who populate and participate in it. Rip the floor out from under the practice of moral reasoning itself, as happens if everything going on is imagined to be already finished, scripted, and destined, and the institutions soon lose their intelligibility.
The Political Fallout of Digital Decadence
So of course the erosion of deep literacy, particularly among elites, is affecting American political culture. How could it not? The ongoing substitution of new hybrid forms of reality for the real thing, conjured from both Left and Right subcultural extremes, and particularly of orality for literacy, means in terms of political discourse that vocabulary derived from knowledge of etymology and historical usage disappears into farcical shells of itself. Seen through an anti-meliorist lens, for example, there is nothing particularly odd about opposing welfare fraud among immigrant communities or over-generous disability payments that responsible taxpayers get billed for, but such opposition need not entail the rape of the English language. Among the para-literate, however, it often does.
The virtual disappearance of genuine news also has obvious political implications. Older Americans still rely mainly on television and younger age cohorts on designer-targeted social media feeds, but in terms of quality content there is not much difference. This means, not to put too fine a point on it, that anyone who does not read has no chance of understanding any policy-relevant issue beyond bumper-sticker depth. That fact has crashed public discourse in the United States to the “I know you are but what am I?” level of schoolyard taunting. Some label the state of that discourse “polarized.” Yes, well, so are many kindergarten spats.
The newsless may, however, suppose they understand the issues, and that’s a problem: It can spoil a family Thanksgiving gathering, for sure. More than that and much more important, it feeds a pretense of earned egalitarianism in which all opinions are deemed valid whether or not the opinion-giver knows shit from Shinola about the topic to hand.[9] It is one thing to crowd-source matters of taste, another to subject reality to ignorant statistical whimsy. We will return later to the all-too-common error, pointed out long ago by both Plato and Aristotle, of presuming that because people are equal in some things they are therefore equal in all things.
The Age of Spectacle: How a Confluence of Fragilized Affluence, the End of Modernity, Deep-Literacy Erosion, and Shock Entertainment Technovelty Has Wrecked American Politics
Foreword [TKL]
Introduction: A Hypothesis Unfurled
The Analytical Status Quo
Technovelty
The Republic of Spectacle: A Pocket Chronology
A Spectocracy, If We Can Keep It
Why This Argument Is Different from All Other Arguments
Opening Acts and the Main Attraction
The Path Forward
Obdurate Notes on Style and Tone
PART I: Puzzle Pieces
1. Fragilized Affluence and Postmodern Decadence: Underturtle I
Government as Entertainment
The Accidental Aristocracy
The Agora’s Deafness to Classical Liberalism
The Culture of Dematerialization
Affluence and the Changing Image of Leadership
Neurosis, Loneliness, and Despair
Wealth and Individualism
Hard Times Ain’t What They Used to Be
Affluence Fragilized
Real and Unreal Inequality
The Net Effect
Dysfunctional Wealth
Searching for the Next Capitalism
2. Our Lost Origin Stories at the End of Modernity: Underturtle II
What Is a Mythopoetical Core?
Aristotle’s Picture Album
Faith, Fiction, Metaphor, and Politics
The American Story
How Secularism Was Birthed in a Religious Age
Regression to the Zero-Sum
Industrial Folklore
Bye, Bye Modernity
Mythic Consciousness and Revenant Magic
Progress as Dirty Word, History as Nightmare, Equality as Godhead
Attitudes and Institutions Misaligned
3. Deep Literacy Erosion: Underturtle III
The Reading-Writing Dialectic
The Birth of Interiority
A Rabbinic Interlude
Remember This
Dissent
The Catechized Literacy of the Woke Left
Reading Out Tyranny
Fakery Cubed: The Chat Claptrap
4. The Rise of Cyber-Orality: Underturtle III Continued
The Second Twin
Structural Mimicry and Fantasized Time
Losing the Lebenswelt
Podcast Mania
The Political Fallout of Digital Decadence
Zombified Vocabulary
Where Did the News Go?
Democracy as Drama
Optimists No More
5. The Cultural Contradictions of Liberal Democracy
A Big, Fat, Ancient Greek Idea
Footnotes to Plato
Opinion Gluttony
The New Children’s Crusade
Revering the Irreverent
The Wages of Fantasy
Pull It Up By the Roots
The Great Morphing
An Ohio Coda
PART II: Emerging Picture
6. We Do Believe in Magic
Fear and Delusion in the 21st Century Funhouse
Culture, Politics, and Regression to the Mythic Consciousness
Word Magic Redux
Our New/Old Stories
Myth, Magic, and Childishness
Addiction to AI-Spectacle as the Ultimate Danger?
Mythic Politics
7. “Doing a Ripley”: Spectacle Defined and Illustrated
The Graphic Revolution, Memory, and the Triumph of Appearances
Astounding Complexes from TV to Smartphones
Working Definition
Tricks
Illusions
Cons
Fakers and Frauds With Halos
Magnificos
Projectionist Fraud as a Way of Life
Old Ripleys, New Ripleys
Trump as Idiot-Savant Fraudster
Conspiracy Soup
Facticity Termites
Conditioning for Spectacle
8. The Neuroscience of Spectacle
Glancing
Seeing the Light
Eye-to-Eye
Surfing Your Brainwaves
McLuhan Was Wrong, and Right
Structural Shadows
Surfing a New Wave
Suffer the Children
Some Informed Speculations
9. The Mad Dialectic of Nostalgia and Utopia in the Infotainment Era
Ripleys on the Left
The Hylton-Brown Case
From Left to Right and Back Again
The Root Commonalities of Illiberalism
Spectacle Gluttony
Gratuitous Harm in Black and White
The Touching of the Extremes
The Wrongness of the Right
Now Sex
Beyond Feminism
The Irony of Leveling
Abortion: Serious Issues and Silly Arguments
The Imperfect Perfect
Vive la Difference?
Human Nature
10. Spectacle and the American Future
Bad Philosophy, Bad Consequences
Up from the Television Age
The Crux
Cognitive Illusions
Another Shadow Effect
Myth as Model
The AI Spectre
A Sobering Coda
Epilogue. What Our Politics Can Do, What We Must Do
Policy Forlorn
Who Will Create the Garden?
[1] A substantial literature exists on the psychology of acting—how actors feel when they are “in role,” how many types of acting psychology exist, how their profession affects their off-stage personalities, and so on. I regrettably know little of it.
[2] The Kinks, “Celluloid Heroes,” on Everybody’s in Show-Biz, Everybody’s a Star (RCA Victor, 1972).
[3] Note that Warren Susman’s essay “’Personality’ and the Making of Twentieth-Century Culture,” Media Studies Press, January 1, 1984, concludes with an argument that the movies do for post-religious U.S. society what the church did for an earlier religious age, with “character” replacing “virtue” as the core metric of judgment.
[4] Gabler, Life: The Movie, p. 240. Since Gabler several authors have plowed this fertile field. The broadest, most recent, and most insistent is Walt Hickey, You Are What You Watch: How Movies and TV Affect Everything (Workman, 2023). Hickey’s data-rich approach tends to trivialize the relationships he is describing by combining the frivolous with the serious. He also argues that violence in entertainment may decrease actual criminal violence. This argument might be plausible if Hickey relied on catharsis as an explanation—if, for example, he argued that fictive violence allows some people to get it out of their system so they do not indulge in the real thing. But his argument instead is that people who spend so much time in front of screens can’t at the same time be committing violent crimes. Alas, it doesn’t take much time to commit an impulse-control-deficient violent crime.
[5] See Frederick Kaufman, “Jacob’s Dream,” Harper’s, April 2024.
[6] More on these ways of knowing, and their implications, below in chapter 10.
[7] Postman, Amusing Ourselves to Death, pp. 155-56.
[8] This is not a new phenomenon, but the technology for mixing ordinary people and celebrities has vastly changed. In pre-photography 18th-century Great Britain, for example, families that could afford it often hired artists to paint them, and many chose mythological or historical contexts for the purpose. A perfect example captured in fiction is from Chapter XVI of Oliver Goldsmith’s The Vicar of Wakefield (1766): “My wife desired to be represented as Venus, and the painter was desired not to be too frugal of his diamonds in her stomacher and hair. Her two little ones were to be as Cupids by her side, while . . . Olivia would be drawn as an Amazon, sitting upon a bank of flowers, drest in a green joseph, richly laced with gold, and a whip in her hand. . . . Our taste so pleased the ‘Squire, that he insisted on being put in as one of the family in the character of Alexander the Great, at Olivia’s feet.”
[9] See Tom Nichols, The Death of Expertise: The Campaign Against Established Knowledge and Why It Matters (Oxford University Press, 2017).

