The Age of Spectacle, No. 31
Chapter 7: The Neuroscience of Spectacle: Research and Implications, part 3
There is no need for lengthy introductory remarks this week in The Raspberry Patch. True, the incubatory period of the onrushing second Trump Administration daily furnishes examples of any number of themes prominent in The Age of Spectacle argument: the de-rationalization of our rapidly oralizing, deep-literacy-eroded culture flowing over into politics in the disparagement of experts and expertise, and in the rise of conspiracists and assorted other reality-resistant magical thinkers to the highest levels of government. Over our shoulders there loom, too, the bubbling national security implications of a world looking on in wonderment and more than occasional worry at the mayhem and madness, now more vividly obvious after November 5, as revelations of the core characteristics of We the People at full socio-political scale in the brave still-new digital world.
It is, frankly, a challenge to keep up with it all, to get the gist into the manuscript while leaving off the froth, before I finally conclude that the book is complete. I may not be up to it. I can no longer run as fast as I once did.
The Age of Spectacle rollout continues today with part 3 of Chapter 7; the final part of the chapter, part 4, will follow sooner than next Friday because of the Thanksgiving holiday. Two weeks from today we will begin rolling out Chapter 8. As I do occasionally, I’ve placed the book’s extended outline at the end of the post; it has changed again since the last time it appeared here.
Chapter 7: The Neuroscience of Spectacle: Research and Implications, part 3
Easy Rider
Easy is the key term here. Most people prefer what is easy to what is hard, but what is easy usually isn’t worth much. A young person listening to a story told or read aloud, whether by a person or even over the radio, is obliged to actively generate mind’s-eye images of what is happening. The listener must think for himself to become engrossed in the plotline of the story. Reading a book without a lot of pictures requires the same effort. Watching a show on a screen, being dosed by industrial folklore in other words, does not require the receiving mind to do much of anything. It’s very easy, and all else equal, most would rather be in an alpha/theta mode than in beta unless an urgent decision point demands attention.
So if the complexity and seriousness of political life is framed technologically in ways that make it easy to assimilate, no matter the expense in banalization, most people will prefer easy over difficult—especially if affluence has persuaded them, below the line of focused thought, that none of this is really important anyway. Could that, at rock bottom, be why American politics has become so devoid of substance and so full of narcissistic performative personalities? Could the mediaization of politics and the sharp decline of serious print journalism have anything to do with the reframing of politics as “easy” because it has become so image- and visual-dominated?
Well, yes—and that is not all. Things feel easy for a reason: The technology built into various devices can induce characteristic modal brainwave frequencies in human beings. As a general rule, routine users of technology-infused devices are not aware that machine use can condition their brainwave modalities, can reset them, so to speak, from the naturally evolved frequency mix characteristic of a given day’s experience sans advanced machines. This is even more true today for smartphones than it was half a century ago for those 24,000-volt cathode ray tubes aimed at people’s heads, called television—and that was after manufacturing adjustments took care of the overly abundant x-ray emanations of the early models, which affected more than brainwave modalities.
We like to be in alpha leaning toward theta because it relaxes us; we need to find ways to relax because our man-made environment stresses us out; so just being the way we were before that man-made environment existed makes us, for lack of a better word, happy. There is a well-known joke about a guy living happily in a tropical paradise doing not much of anything who is persuaded by a visiting entrepreneur to go into the tour-guide business to make a lot of money, which he does. He works very hard and makes a lot of money which, twenty years of exhaustion and worry later, enables him to…live happily in a tropical paradise doing not much of anything. Never mind the “what abouts” the story may bring to mind: The point gets through.
When happy is easy, time poses no burden and even mortality itself is occluded for the duration; doing anything hard becomes a chore in part because it reintroduces awareness of time as a burden, and as a finite personal destination. So the more we experience this easy state of being the more abrupt and disconcerting it feels when we are forced back into having to do anything that drives us into or even toward beta. We are less happy there; we are not entertained; we are no longer imaginatively immortal either. The digital gadgets that make us alpha-toward-theta happy are in a sense a kind of prosthesis that we would not need were we not self-injured to start with.
What is doubtless true, at the least, is that when we are walking or forest bathing in the woods the author of our environment is nature itself in glorious three-dimensional natural color, the world coming to us directly via our senses, the world in natural circadian time unscripted save by what some believe to be nature’s own Author. Time thereby becomes open-ended and so real. When we are watching dramas or comedies or space fantasies or out-of-time magic or adventure stories (e.g., “Harry Potter” or “Game of Thrones”) on screens, on the other hand, the author is a less-than-disinterested mediator interposed between us and unvarnished reality. Some usually anonymous person or persons acting as producers, doyens of industrial folklore, have sponsors even less disinterested whose raison d’être is to sell us something. The script and its enactment have long since been put in the can, and in that sense normal time does not exist in the conjunction between our watching and the production of what is being watched.
Finally in this regard, we generally recognize what context means in spatial terms, but we less often think about what context means in temporal terms. If in spatial terms there is no foreground without background, no understanding of anything in particular without context, the same is true in temporal terms. Technology has long since broken the flow of natural time—even scrolls and books do that, since they may have been written years or even centuries before we read them; so does even live theater when the script was written long ago. The same is true for radio and sound recordings, and, as far as image-heavy communications go, for photographs as well. But moving image-heavy transmission devices like television, and now digital screen conveyances of images, are different because they create enveloping, fully engrossing multi-sensory worlds. They do not sit temporally aside from, and in perceivable parallel with, the real flow of time in the real world we inhabit; they have the capacity to override, even to occlude, our sense of natural time to the extent that we marinate ourselves in those technologies. Some people are more vulnerable to that temporal disjuncture than others, mostly people who for one reason or another have become habituated to sub-beta modal brainwave states.
When the orienting real-world context for sensory input is ripped away often enough and for long enough periods of time, useful understanding as prelude to intervening effectively in reality becomes at best accidental. Most of us still rely on real spatial and temporal backgrounds and contexts to make sense of things. It is not that these things have gone away; they cannot go away. But the way human brains perceive them is changing, and to understand how the brain relates to reality we need to understand those changes. That is what we are about to do: understand what it means for mediated images to replace unmediated multi-dimensional perceptions; understand what that means for how memory works to feed our dreams and our imaginations; understand how technology inflects these processes; and understand how those inflections reshape our politics.
The Graphic Revolution, Memory, and the Triumph of Appearances
Again, the differences between mediated images (still or moving) and real ones (relatively still or moving) are profound despite the ease of the former’s ingestion. Here is how Robert Hass put the despotic eye point in his 1984 book Twentieth-Century Pleasures:
Images are not quite ideas, they are stiller than that, with less implication outside themselves. And they are not myth, they do not have explanatory power; they are nearer to pure story. Nor are they always metaphors; they do not say this is that, they say this is.[1]
Nearer to pure story, even without pulling on the power of metaphor—this is the key. An image does not beg our interpretation. It is what it is, seemingly whole and self-explanatory. No cognitive tasking is necessary on our part to take it in. We don’t absorb it so much as it, or the makers and projectors behind it, absorbs us. Commercial mediated-image fare is to reality, and even to face-to-face fictive theater, what photographs of food are to real food.
That can be a problem. We pay a price for indulging too much in the near verisimilitude of images. We will soon detail that price, but it is easy to describe in simple terms how the problem arises. Human beings are mimetic creatures; we imitate what we perceive if it impresses us sufficiently, and we combine and refine it all over time into what we come to think of as our own persona, and what by extension defines our personality as it comes across to others. To the extent we are aware of our own functional mimesis, we may focus on how we project images of ourselves to others.[2] The mirror that technology holds up before us encourages both capabilities.
We do this as well as we do, whether self-aware or not, because young humans learn mimetically, a critical survival trait programmed into us through evolution. The arboreal forests and savannahs of prehistory were rich in unmediated reality, and sparse to the point of non-existence in opportunities for wide-awake fiction and fantasy. Today, however, our mimetic aptitude attaches to the highly far-fetched as easily as it does to the quotidian down-to-earth. Our personalities, therefore, may be mixtures of mimetically adopted traits and styles from worlds that do not exist as well as from the one that does. People who marinate themselves, especially during their formative years, in fantasy spectacle entertainment are therefore liable to build up personas and self-images that differ from, let’s say, the iconic models for Grant Wood’s famous 1930 “American Gothic” painting.
The supply of novel raw materials available for mimesis is obviously a function of technology. Daniel Boorstin’s pioneering 1961 book The Image: A Guide to Pseudo-Events in America spelled out one impact of television in that regard, and earlier Walter Benjamin essays—“The Work of Art in the Age of Mechanical Reproduction” (1935) and “A Short History of Photography” (1931)—did something similar for earlier technological entrants. Indeed, when technologies that we regard today as antique were new, artists as well as intellectuals made distinctions that we must strain to recall and understand. “Photography is a reaction,” said Henri Cartier-Bresson, for one example, while “drawing is a meditation.” Cartier-Bresson was well situated to understand the difference, being both artist and photographer.
It is easy enough to sum up what all this literature has for years been trying to tell us: Life imitates artifice and entertainment frameworks as well as art, to the point where we sometimes lose sight of the difference. We are so adept at symbolic reasoning, and at moving seamlessly back and forth among ontological levels of the refracted social and cultural worlds, that we may trip over our own mastery if we’re not careful. And we have lately not been careful.
So many examples exist of image having casually displaced reality in the mentality of the current Age of Spectacle that an entire book could be written just describing them. Let just one example illustrate the class. In the National Portrait Gallery here in Washington a sign adorning an exhibit about the period of the 1930s-1960s reads in part as follows: “The Great Depression of the 1930s, with its shantytowns and millions of unemployed people, tarnished the country’s image as the land of opportunity and plenty.” Aside from the factual errors in the rest of the sign—two obvious ones stand out—the museum worker who drafted and the supervisor who approved this language for public consumption were apparently unaware of the mental tic that led them to raise the country’s image to a more important plane than the actual suffering that took place during the Great Depression.[3]
An orthogonal example of the mimesis point is provided in the late Alison Winter’s brilliant 2012 book Memory: Fragments of Modern History. Winter showed that what we remember and how we remember it is significantly affected by the technology to hand. If, for example, an image makes its way from our working memory into long-term memory by dint of a photograph, the memory will be stored differently in the brain than if it came directly from our eyes. This suggests a seriously attention-arresting observation in the making.
Daniel Boorstin pointed out in 1961 the obvious but nonetheless overlooked fact that before the development of photography in the 19th century images of people, and of everything else, were confined to what artists could do, and to what a reflection and later a mirror could provide. The problem with art was that it was basically flat, two-dimensional, and so lacked verisimilitude with the real thing. The problem with reflections in water or metal, and with mirrors, was that the second the subject turned away the image disappeared. Photographs, first invented in 1826 by a Frenchman using an adapted camera obscura, struck people in the 1830s, when they were first available to Americans to view, as a kind of frozen mirror, creating images that persisted. They enabled a kind of out-of-body time flux one could hold in one’s own hands—say, looking at photographs of a small child become an adult become a parent become an elder become a figure in a parlor coffin—that had never before been possible except in memory. In the third decade of the 21st century we think little if at all about what an astonishing development this was to those born 150 years before us.
Photographic technology developed rapidly, became less expensive and less rare, and so became commercially enmeshed in then-advanced industrial revolution-era economies. Between photographs—still just black-and-white and sepia, not multicolored—and the first silent moving pictures at the end of the century came a relatively brief interlude when an intermediate graphic technology held pride of place. After the Civil War stereopticons became quite popular, and by the late 1870s some entrepreneurs had figured out how to project stereopticonic photographs on large screens, to the astonished delight of audiences. By far the most successful of these entrepreneurs was John Lawson Stoddard, a fascinating character who became one of the most famous and wealthiest men of his time, even once lecturing before a joint session of Congress. The advent of the movies, even before talkies, made Stoddard’s act almost instantaneously obsolete and he subsequently disappeared from common historical memory—but wraiths of memory exist frozen in American literature if one knows how to find them.[4]
Color photography and color movies took the availability of graphic images from the still-novel and relatively scarce at the beginning of the 20th century to first the routine and then the ubiquitous a mere thirty years later. With the introduction of sound tracks the verisimilitude of movies became so close to reality that, for the first time, it became possible for human beings to confuse what was real and what was not in their subconscious, associationally mimetic minds. The confusion was the result of how technical events engender astounding complexes not here and there, once in a while, but more or less continuously while attention is arrested inside the frame of the operating technology. This is what the seemingly innocent colloquial phrase being “glued to the screen” really means. It is not innocent.
A riveting description of how movies achieved this gluing function comes to us again courtesy of a novelist—John Updike, in this case:
At first, stage plays and music-hall routines were filmed as if through the eyes of a rigid front-row theatre-goer, but from year to year the camera had grown in cunning and flexibility, finding its vocabulary of cut, dissolve, close-up, tracking and dolly shot. Eyes had never before seen in this manner; impossibilities of connection and disjunction formed a magic, glittering sequence that left real time and its three rigid dimensions behind. Books rose up like radiant thunderheads out of the gray flatness of the printed page. . . .[5]
A few decades later what had been subconscious became conscious to the point that images took on a life and functional diversity of their own. By the end of the 20th century the interplay of images and conscious agency had resulted in an array of deliberate recursive uses of imagery for purposes of entertainment, art, and the commercial-entrepreneurial elements of both.[6]
We went from a perceptual world of natural, three-dimensional images only to an ornate, multi-sensory and multi-use palette of man-made imagery in less than two centuries, a blink of an eye in human evolutionary history. And this was before the digital revolution, before cybernetics and artificial intelligence were terms anyone had heard let alone understood. It behooves us, then, to ask: What have been the cultural, social, and political effects wrought by the Graphic Revolution?
Marshall McLuhan was one of several observers who looked into Boorstin’s insight for its sociological implications. Photography, he argued, created a new sense of the human self involving “a development of self-consciousness that alters facial expressions and cosmetic makeup as it does bodily stance, in public or private.” He speculated further that the “age of Jung and Freud is, above all, the age of the photograph, the age of the full gamut of self-critical attitudes.”[7]
Novel mechanisms that allowed for greater self-consciousness may also be the key to David Riesman’s seminal observation in the classic The Lonely Crowd as to why the modal American personality type shifted during this same skein of years from an inner-directed to an other-directed majority. We could see ourselves as others saw us (to paraphrase and possibly misuse Robert Burns) like never before, and so we became conscious of ways to project our images to others after our preferences as was never possible before the Graphic Revolution.[8] Riesman speculated that the shift had much to do with the revolution in advertising technique that accompanied the Graphic Revolution, but it may have had an even broader source than that. That broader source was the mere possibility of constructing what we call today a “selfie” in the age of obsessive self-image consciousness.
And that, argued Warren Susman not long after Boorstin published The Image, is what turned the Puritan concern with character into the new concern with personality, which set aside virtue and piety in favor of charm and likability. After all, you cannot see virtue and piety via visual images the way you can see evidence of charm and likability. “The social role demanded of all in the new culture of personality was that of a performer. Every American was to become a performing self.”[9]
It is remarkable, in a way, that it took so long for someone to give the selfie its name, and equally remarkable that we still usually fail to make the connection between obsession with appearances, aided and abetted by the technology of the ongoing Graphic Revolution, and the general shift from the substantive to the performative in all American cultural domains, very much including politics.[10] Once an elite affair, politics used to be cushioned by a creative political minority that took its vocation seriously and still read deeply, providing a buffer from the full onslaught of the galloping Graphic Revolution. That buffer is no more.[11]
The Graphic Revolution may account as well for the then-new consciousness that linked fashion to class, and pushed people to dress “up,” even at great expense, from the homespun to the new commercially marketed clothing so as to project a higher status. Europeans noticed this on visits to America, commenting that Americans, frontier westerners and New England Yankees alike, seemed like born actors in fancy outfits that betrayed “a sense of display,” in the words of cultural historian Constance Rourke.[12]
One could go on, linking the wide shadow of the Graphic Revolution to the emergence of expressive individualism, the state of mind in which the imperial-I is never without a mental mirror of what he or she looks like to others. The concept of a Potemkin Village is of course Russian, but Russians have nothing on Americans when it comes to full-frontal fakery—fakery that often enough takes in the obsessive faker—of the kind that discounts substance in favor of appearance.
A more granular breakdown of the data on cosmetic and beauty product purchases, mentioned in passing in Chapter 1, may be instructive here. GenZers (born between 1995 and 2010) spend on average $2,048 annually on cosmetics and beauty products. Millennials (born between 1980 and 1994) spend a bit more on average—$2,670—presumably because some GenZers are still teenagers and so have less cash to spend and fewer needs to spend what they have that way. GenXers, born between 1965 and 1979, spend on average much less: $1,517. The appearances-over-substance shift was still relatively weak when GenXers were in their formative years. Baby Boomers, born between 1946 and 1964, spend only $494 on average annually. Work out the reasons for that number on your own; being one of them, I haven’t the heart to just come right out and say it. I can, however, offer a hint: You can put as much rouge on an old hen as you like, but it will still be and look like an old hen…just weirder.
Another, related shift is worth noting. Today women in all generational cohorts spend on average $3,756 annually on cosmetic and beauty product purchases, men $2,928—a ratio of about 1.28 to 1. Some fifty years ago the female-to-male spending ratio was closer to 8 to 1. Apparently, men now feel they need to look spectacular, too, and corporate strategists in league with advertising professionals have figured out how to target that rising market with new products and new approaches to selling them. But why did that market rise so far so fast? A review of The Lonely Crowd’s seminal distinction between inner- and other-directed personality types might provide a hint.
Back to the larger point here, think what the switchout from substance to appearances may mean: If we remember images from reality directly, our brains will be suffused with reality-based images and their natural connections will populate our dreamwork, and move from there to populate our waking imaginations as we luxuriate in theta. Before the Graphic Revolution this was the default way, really the only way, our brains could collect and store images because natural, three-dimensional images were virtually the only images that existed outside of what was explicitly meant to be seen as art. If on the other hand we store predominantly ersatz images from photos and films then our reservoir of dreamwork material will include them, enabling us, even in our sleep, to “end up adopting a whole life” assembled from pieces viewed in the moviehouse.[13]
Now follow just one step further and our promised attention-arresting observation appears: If today, in the age of CGI- and AI-accented movie and gaming fare, some people store mostly fantasy images, then their sleeping, theta-dwelling brains will process that raw material as dominant. Could these differences possibly bear implications for how people, once awakened and functioning beyond their bedrooms, perceive, define, and cope with real-world circumstances? Could they possibly not? Could it even be that the American appetite for magic, and the growing willingness of many to credit its modal logic as more than fictive, is sourced in this gluttony of fantasy entertainment? You think?
The Age of Spectacle: How a Confluence of Fragilized Affluence, the End of Modernity, Deep-Literacy Erosion, and Shock Entertainment Technovelty Has Wrecked American Politics
Foreword [TKL]
Introduction: A Hypothesis Unfurled
The Cyberlution
The Republic of Spectacle: A Pocket Chronology
The Spectocracy Is Risen
Why This Argument Is Different from All Other Arguments
Opening Acts and the Main Attraction
Obdurate Notes on Style and Tone
PART I: Puzzle Pieces
1. Fragilized Affluence and Postmodern Decadence: Underturtle I
Government as Entertainment
The Accidental Aristocracy
The Deafness to Classical Liberalism
The Culture of Dematerialization
Affluence and Leadership
Neurosis, Loneliness, and Despair
Wealth and Individualism
Hard Times Ain’t What They Used to Be
Affluence Fragilized
Real and Unreal Inequality
The Net Effect
Dysfunctional Wealth
Searching for the Next Capitalism
2. Our Lost Origin Stories at the End of Modernity: Underturtle II
What Is a Mythopoetical Core?
Aristotle’s Picture Album
Faith, Fiction, Metaphor, and Politics
The American Story, a First Telling
How Secularism Was Birthed in a Religious Age
Regression to the Zero-Sum
Industrial Folklore
Bye, Bye Modernity, Hello the New Mythos
Mythic Consciousness and Revenant Magic
Sex Magic
Word Magic
Progress as Dirty Word, History as Nightmare
Attitudes and Institutions Misaligned
3. Deep Literacy Erosion: Underturtle III
Trending Toward Oblivion
The Reading-Writing Dialectic
The Birth of Interiority
A Rabbinic Interlude
You Must Remember This
Dissent
The Catechized Literacy of the Woke Left
Reading Out Tyranny
Chat Crap
4. Cyber-Orality Rising: Underturtle III, Continued
The Second Twin
Structural Mimicry and Fantasized Time
Losing the Lebenswelt
Podcast Mania
The Political Fallout of Digital Decadence
Zombified Vocabulary
Democracy as Drama
Where Did the News Go?
Optimists No More
Foreshadowing a Shadow Effect
5. The Cultural Contradictions of Liberal Democracy: An Under-Underturtle
A Big, Fat, Ancient Greek Idea
The American Story Again, This Time with Feeling
Footnotes to Plato
Some For Instances
Jefferson à la Carte
Revering the Irreverent
The Deep Source of the American Meliorist State
The Great Morphing
Immaturity, Myth, and Magic
The Wages of Fantasy
Pull It Up By the Roots
PART II: Emerging Picture
6. “Doing a Ripley”: Spectacle Defined and Illustrated
Astounding Complexes and Technical Events
Tricks, Illusions, and Cons
Fakers, Frauds With Halos, and Magnificos
Projectionist Fraud as a Way of Life
Old Ripleys, New Ripleys
On Fake News
Trump as Master of Contrafiction
Conspiracy Soup
Facticity Termites
Conditioning for Spectacle
To the Neuroscience
7. The Neuroscience of Spectacle: Research and Implications
Brain Power
Seeing the Light
Surfing Your Brainwaves
Suffer the Children
The Screen!
Easy Rider
The Graphic Revolution, Memory, and the Triumph of Appearances
McLuhan Was Wrong, and Right
Brain Shadows
No Need to Exaggerate
8. Spectacle Gluttony: Race and Gender
Cognitive Gluttony Racialized
Ripleys on the Left
And Now, More Sex
Abortion: Serious Issues, Specious Arguments, Sunken Roots
Beyond Feminism
I’m a Man, I Spell M-A-N
The Imperfect Perfect
9. Saints and Cynics: The Root Commonalities of Illiberalism
The Touching of the Extremes
From Left to Right and Back Again
Spectacle Gluttony
The Right’s Crazy SOB Competition
The Irony of Leveling
Vive la Difference?
Human Nature
Is Woke Broke?
10. Spectacle and the American Future
Bad Philosophy, Bad Consequences
Astounding Complexes from TV to Smartphones
Up from the Television Age
The Crux
Cognitive Illusions
Another Shadow Effect
Myth as Model
The AI Spectre
A Sobering Coda
Epilogue: What Our Politics Can Do, What We Must Do
A Few National Security Implications
Meanwhile…
Who Will Create the Garden?
Acknowledgments
[1] Hass quoted in Altfeld and Diggs.
[2] Here differences of opinion endure. Erving Goffman believed that nearly everyone consciously projected images of themselves as an ongoing management technique, an argument he made famous in his 1956 book The Presentation of Self in Everyday Life. I recall a visit to his Rittenhouse Square apartment, probably in 1974, where I argued that, in Lonely Crowd terms, that kind of behavior aligned better with other-directed personality types than with inner-directed ones. I did not persuade him to nuance his view; but neither did he persuade me that he did not need to.
[3] Others have noticed woke mischief in the NPG, too. Here is Thomas Frank from the November 9, 2024 New York Times: “Everyone has a moment when they first realized that Donald Trump might well return, and here is mine. It was back in March, during a visit to the Smithsonian’s National Portrait Gallery, when I happened to read the explanatory text beside an old painting. This note described the westward advance of the United States in the 19th century as ‘settler colonialism.’ I read it and I knew instantly where this nation was going.” The essay is entitled “The Elites Had It Coming,” rather ironically appearing in the NYT, seeing as how it is a bastion of elite wokeism.
[4] Boorstin does not mention Stoddard or stereopticons, nor does Gabler in his otherwise encyclopedic review of the Graphic Revolution. This is an oversight on their part.
[5] Updike, In the Beauty of the Lilies, p. 106.
[6] No one covers this developmental path better than Gabler.
[7] McLuhan, Understanding Media: The Extensions of Man (McGraw Hill, 1964), pp. 176-77.
[8] Boorstin must have read The Lonely Crowd, published in 1950, but he does not mention it in The Image.
[9] Susman quoted in Gabler, p. 197.
[10] Pardon a personal note. When I first visited Australia in 1997 at the invitation of the Australian government, I learned that Aboriginal peoples—some of them at least—did not like outsiders photographing them. I was told that the elders believed that photographic images “stole” pieces of their souls. This belief was put down to superstition that deserved polite respect but that otherwise was of course not to be taken seriously. For decades I gave the matter no further thought until one day I realized that, even if for not exactly the correct reasons, the elders were right! Every time we perseverate on images and appearances of ourselves we lose a little bit of our inner substance, if only from inanition and neglect. If that sort of behavior is pretty much all a person does, and if that person does not build up stocks of substance by reading and engaging in the real world, eventually not much will be left of that person within.
[11] Sometimes you just have to be there, or to have been there, to appreciate a point like this. I served briefly as a Senate aide in the late 1970s, when most politicians were largely still serious people and, with their staffs, read essays and books as a matter of course. When I returned to government life in the early 2000s, even on the 7th floor of the State Department, all had changed. Except during times of acute crisis, you never used to see a television switched on in a public space in any Senate office. By 2003 you never walked into such an office with the television switched off.
[12] Rourke quoted in Gabler, p. 194.
[13] Geoffrey O’Brien quoted in Gabler, p. 196.