The Age of Spectacle, No. 29
Chapter 7: The Neuroscience of Spectacle: Research and Implications
No, we’re not going to discuss the election here. Not even a little, except to note that anyone interested in my take can go (or go again, as the case may be) to the RSIS website and look for my fourth RSIS Commentary piece in the recent series, dated Nov. 7 and titled “The US Election: Expectations and Outcomes.” Some may note that in this article I presumed a Republican majority in the House at dawn on Wednesday, before I had a confident right to do so. But I had a strong feeling…..and that feeling is turning out to have been justified. Unfortunately.
Today’s Raspberry Patch continues The Age of Spectacle rollout with the first part of Chapter 7. I’ve included a partial outline of the project at the end of the post for those who need to situate themselves in relation to where we are. For those new to The Raspberry Patch, know that all 28 preceding posts are available in the archive, but none remain exactly as they were when posted. The entire manuscript remains a work in progress.
Note my admission in footnote 2 that I needed much study and help to master the material that begins this chapter. Dr. Richard Cytowic, Professor of Neurology at George Washington University, encouraged me, gave me a copy of his working manuscript on the subject more than a year before it was published, and helped me understand the more esoteric parts. I am deeply appreciative.
Dr. Cytowic’s manuscript was finally published, after many frustrating delays, on October 1 by MIT Press under the title Your Stone Age Brain in the Age of Screens: Coping with Digital Distraction and Sensory Overload. The published book does not differ much from the working manuscript I studied, though Dr. Cytowic’s copious research notes are updated as enabled, somewhat ironically, by the publishing delays. Everyone who cares about these issues, and most certainly anyone trying to raise healthy children today, should study this book. This is a medical doctor writing, folks, and writing in lucid English……not a journalist, or a sociologist, or an economist, or any other kind of amateur on this seminal subject. In other words, it’s the real deal.
And so to Chapter 7:
Chapter 7: The Neuroscience of Spectacle: Research and Implications
“We are the slaves of our technical improvement . . . . We have modified our environment so radically that we must now modify ourselves in order to exist in this new environment. We can no longer live in the old one. Progress imposes not only new possibilities for the future but also new restrictions.”
--Norbert Wiener[1]
And now for something that may seem completely different from all the foregoing, but that really isn’t. The fact of the matter is that “Doing a Ripley” has a deeper basis in human anatomical neurology than we have thus far noted. The astounding complexes at the core of all Ripleys rely ultimately on the way our central nervous system has evolved over roughly 300,000 years of experience playing back into the structure-function dialectic of all living things. The centerpiece of that central nervous system is, of course, the brain.
So let us first review some brain anatomy, then some particulars of our visual system, and then how brain modalities are measured according to wave frequency distributions.[2] This will provide a basis for returning to our main concerns much better equipped to really understand them. Lastly in this chapter we will direct our attention to some likely and possible political implications of all the foregoing.
Brain Power
The human brain is a wondrous contraption, in key ways unlike anything similar among earthly fauna by dint of its mix of relative size and sophistication. It nevertheless has its limits as a mortal part of mortal beings, and those limits make the human brain inadvertently vulnerable to our own handiwork and to spoofing by the designs of others. Let us now try to understand the anatomical basis of how that can be.
The human brain’s roughly 86 billion neurons need power to work together, and there are limits to how many calories we can take in and then deploy to the functioning of our brains on a daily basis. While only about 2 percent of the body’s mass, the brain consumes about 20 percent of the calories adults take in—about 60 percent for an infant, around 50 percent for an adolescent, and declining toward the adult 20 percent by around age 25. Even at just 2 percent of body mass the human brain is nine times larger than the brains of other proportionally sized mammals, and the cerebral cortex constitutes most of it—about 80 percent of the brain’s volume.
Three related functions use up most of our brain calories: acquiring a focus of attention; concentrating attention; and shifting attention. We move seamlessly between focusing on something, maintaining a concentration on that something, and then shifting attention to the next something. We do not realize we are doing this unless someone points it out, and then our attention itself becomes a focus of our attention, skewing, Heisenberg-like, our appreciation for what we had been doing before.
On any given day, and even within any given hour, only so much energy is available for the brain to function. In other words, the brain has but a limited surge capacity; just 3-5 watts of extra demand amount to 12-20 percent of daily capacity. The brain thus needs to modulate, to space out, how it consumes energy, or it eventually shuts down and needs to rest and restore. What it needs to do with the calories at its disposal is to acquire objects without or symbols within to focus attention on, and then, as already indicated, it needs to do two seemingly opposite things: maintain a focus of attention and shift attention to other objects or symbols when necessary or desirable.
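To see how those two numbers hang together, here is a minimal back-of-the-envelope sketch in Python, assuming the roughly 25-watt baseline power budget discussed a few paragraphs below; the percentages fall out directly from the ratio.

```python
# Sanity check: 3-5 watts of surge demand against the ~25-watt baseline
# cited later in this chapter works out to the 12-20 percent range above.
BASELINE_WATTS = 25.0                      # approximate brain power budget
for surge_watts in (3.0, 5.0):             # extra demand during a surge
    share = surge_watts / BASELINE_WATTS
    print(f"{surge_watts:.0f} W extra = {share:.0%} of capacity")
```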
We acquire a focus on objects through what we call glancing. Glancing is not a superficial activity, as many suppose. It is a deep penetration of the environment designed to seek out anomalies from the ordinary. It is a change detector. We do it because we have evolved what cognitive scientists call a novelty bias. Concentrating attention requires a shift in our brainwave modality, of which more in a moment. Shifting attention triggers memory retention regarding the focus we are about to give up. We do a lot, in other words, just being awake with our senses working in normal form. And just being awake, even if we are not trying to think particularly hard about anything, is energy intensive. Just for our brain to be homeostatically calm, sometimes called the isoelectric condition, we pump 3.4 x 10²¹ molecules of adenosine triphosphate (ATP) every minute. At 1,440 minutes in a day, that comes to…..well, a lot.
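For readers who want “a lot” put into a number, a minimal back-of-the-envelope sketch in Python, using only the two figures just cited:

```python
# Resting ("isoelectric") ATP turnover over a full day,
# using only the per-minute figure and minute count cited above.
ATP_MOLECULES_PER_MINUTE = 3.4e21
MINUTES_PER_DAY = 24 * 60                  # 1,440 minutes

atp_per_day = ATP_MOLECULES_PER_MINUTE * MINUTES_PER_DAY
print(f"{atp_per_day:.2e} ATP molecules per day")   # ~4.9 x 10^24 -- a lot
```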
We do all this on the cheap: The human brain has evolved to be a very efficient core of our central nervous system. Its cytoarchitectural design, involving the intricately folded arrangement of six layers of cells throughout the cortex, is the key to this efficiency. Thanks to this design, called sparse coding, only a fraction of neurons—between 1 and 16 percent, depending on the situation—“fire” at any given time; but because there are thousands of pathways those signals may take, they get the job done at a lower energy cost.
Some neurons apparently never fire—that is, never generate enough current to move down the axon and jump the synapse to the next neuron. But their presence contributes to metabolic stability such that firing neurons don’t wear themselves out doing housekeeping chores like generating heat. Moreover, most of what the brain does goes on below the level of consciousness—and since the conscious functions of acquiring, focusing on, and shifting objects of attention consume the most energy, an efficient housekeeping function in essence underwrites our consciousness energy budget.
One other efficiency trick is at work, as well. When a neuron fires in a way that runs over established networks it is setting the stage to store some new impression or data in memory. When it does this it needs to synthesize a protein to enable it to generate a new nerve cell, and that cell is sealed at both ends, so to speak, with a glial cell that is then covered over with white matter tracts—a process called myelination. Myelinated nerve fibers are superconductors, transmitting signals much faster than occurs along non-myelinated fibers—about 460 mph compared to 2.2 mph.
The brain’s cytoarchitectural design and myelination enable it to make its connections among functionally specialized neurons based on a flow of electrical power barely sufficient to get an incandescent light bulb to dimly shine—about 25 watts. (That is not a sideways comment on modal human intelligence, although some days it feels like that.) Richard Cytowic has put this in perspective: “Compared to the mere 25 watts needed to run the brain, the room-sized Watson consumed an enormous amount of electricity and required 240,000 BTUs of cooling capacity, equivalent to two ton-weight commercial air conditioners.” Even Watson’s dramatically more efficient successor True North still uses more power and does less with it than an ordinary human brain. The human brain’s circuitry is also very slow compared to that of modern machines—typically around 120 bits (circa 15 bytes) per second compared to 8,589,934,592 bits per second on a Verizon fiber optic connection. A neuron can spike an electrical charge only about a thousand times per second; that seems fast, but not compared to a new laptop computer, which is about ten million times faster.
About 25 watts is apparently all that humans needed to survive and even thrive with the aid of culture’s epigenetic contributions—controlling fire and cooking, for example, and, much later, literacy—for all but a tiny fraction of the past 250,000-300,000 years. For example, it takes about 60 bits per second of bandwidth to listen to one person speaking to you, about half the bandwidth available at any given time. You cannot listen effectively to two people speaking to you at the same time—that’s multitasking overload for humans, although some animals (honeybees and many birds, apparently) can attend well to simultaneous communications from multiple sources. But that was good enough, as we like to say, for both folk music and government work in the past.
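To make the comparison concrete, here is a minimal sketch in Python using only the bandwidth figures cited above (the 120 and 60 bits-per-second estimates and the Verizon fiber figure); the ratios, not the absolute numbers, are the point.

```python
# How the attention-bandwidth figures above compare with one another
# and with the fiber-optic throughput cited earlier. Numbers as given.
ATTENTION_BPS = 120              # ~bits/second of conscious attention
ONE_SPEAKER_BPS = 60             # ~bits/second to follow one speaker
FIBER_BPS = 8_589_934_592        # the Verizon fiber figure cited above

print(f"Speakers we can follow at once: {ATTENTION_BPS // ONE_SPEAKER_BPS}")
print(f"Fiber vs. attention throughput: ~{FIBER_BPS / ATTENTION_BPS:,.0f}x")
```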
Here, again, is the rub: We now live in that very tiny fraction of evolutionary history, and some changes are afoot that call into question our biological coping capacity. Two changes stand out as most critical: The increase in the frequency and attention-pulling power of distractions; and the advent of technological innovations that are extensions of the human brain rather than of human muscle.
The increase in the number and cognitive appeal of distractions began to skyrocket long before the advent of the cyberlution. Everything from brightly painted shop signs and billboards, all much thicker in urban areas than in rural ones, to television attests to their by now longstanding ubiquity. And of course these innovations gradually affected our brains epigenetically, adding some modest plastic circuitry to cope with them: Technological development has always changed us, especially macro- or generative innovations like steam power. It was one thing, however, for our man-made environment, especially in urban areas, to become routinely more perceptually prominent than the natural environment. It is quite another for the current man-made digitized information environment to have entered into our individual and collective cognitive orbs with the result that we are bombarded near-constantly with a dazzling cornucopia of distractions that we cannot not notice.
Distractions, digital and pre-digital alike, tend to exhaust us. They make the movement among acquiring, concentrating on, and shifting focus much more frequent and hence far more energy demanding. If not managed wisely, the exhaustion exacted by more frequent and fixating distractions can generate gratuitous stress, anxiety, depression, and even distrust. All these effects can discombobulate our sleeping patterns, rendering our exhaustion even worse. They can, in sum, even wreck our health, physiological as well as mental—as if there were really a sharp dividing line between the two. (There isn’t.)
Nor can most of us understand the nature of digital distractions, which are advancing much faster than our admirably adaptive biological assets can handle. Not understanding how the machines work risks confusion over the distinction between effects caused by the content of transmissions and effects caused by the structures of transmissions inherent to the design of the machines. Our brains cannot sort, prioritize, and categorize the rapid-fire sensory deluge most of us now expose ourselves to, and even trying to keep up can harm us: Some research suggests that youthful habitual multi-taskers develop smaller anterior cingulate cortices, the area of the brain involved in directing attention.[3]
Yet many suppose that we can use the same technology to do this sorting, prioritizing, and categorizing that caused the deluge in the first place. This is a peculiar supposition upon a moment’s reflection, one worth some scrutiny among those who take its veracity too much for granted.
Seeing the Light
So much for a very quick gloss on brain structure and function; now let’s look more specifically at our visual system, and how the system enables astounding complexes.
The detailed anatomy of the human eye reflects the three core functions of acquiring, concentrating on, and shifting attention. In that regard consider the fovea, a tiny depression in the center of your eye about the size of a low-dose aspirin. Your fovea contains only cones, no rods. The relative absence of blood vessels due to the absence of rods lets more light enter the fovea, and the density of the cones enables very acute vision. Your fovea covers only about 2 percent of your visual field, occupying only about 1 percent of your retinal surface. But that is where half of your optic nerve fibers are, and they track in to the visual cortex, which engages about half of your brain volume.
Our whole field of vision actually exceeds 180 degrees—we can see around corners, as it were—but acuity is limited to what we focus on directly. We see very well what becomes the object of our attention, and when we do, all the rest of our visual field is given over to working our novelty bias—our ability to detect change in our peripheral vision and so change the object of our focus. We do this with jerky eye movements called saccades. We can move our eyes smoothly to keep a moving object in focus if that is what we want to pay attention to, but without an object to fixate on, voluntary smooth eye movement is not possible.
In sum, the human visual system is an exquisite focal device and simultaneously an exquisitely sensitive change detector. To return to what this means today in our digitally suffused man-made environment, flickering brightness and flashing lights in any screen that happens to be in our visual field will yank our attention toward it no matter what we are doing, and whether we care about being distracted from it or not. We can’t help being distracted, even if absolutely nothing is at stake, thus turning our keen evolved survival tool of glancing into a sheer waste of energy. We cannot pre-filter the stimuli: Our visual equipment does what it has evolved to do, and we can do nothing about it. Our proneness to being distracted via our novelty bias is the underlying basis for an astounding complex: It enables deliberately placed distractions to seize our attention and focus it on something designed in such a way that we cannot determine quickly what it is.
The most peripheral part of our visual field is represented in the lingual gyrus of the temporal lobe, an area thick with connections to our limbic system. We need to be emotional, so to speak, about objects that enter sideways into our visual field because they could potentially kill us, which is why we startle at moving objects caught in “the corner” of our eye. Such perceptions, connected all the way down to our brainstem, ramp us up, throw us on alert, and drain caloric energy for the purpose. Indeed, we glom onto slightly salient objects we perceive more than we do obviously salient ones. The reason is that we don’t yet know how salient a just-perceived object may be and we need to find out. Obviously salient objects extracted from our memory banks we can come back to later. Objects of uncertain salience demand our attention right now, only after which we may remember them.
That evolved capacity used to save us from dangers of many sorts, and it still can and does. But it can also drive us crazy when it is weaponized against us, as for example when attackers screw with brightness.
We typically experience in nature a range of luminosity levels: starlight registers at about 0.001 of a candela; moonlight is a hundred times brighter at 0.1 of a candela; direct sunlight is very luminous at 10,000 candelas, which is why we do not look directly at the sun unless we are complete idiots. Indoor lighting of most kinds registers at about 100 candelas; computer monitors usually register at two and a half times that in standard settings. The luminous “volume” of an iPhone, especially when held close to the face in a darkened room, can be as much as 100 times that bright and is known to cause retinal damage over time.
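For a sense of scale, here is a minimal sketch in Python laying out those luminosity levels exactly as given above and expressing each relative to ordinary indoor lighting; the iPhone entry is simply the text’s claim of roughly 100 times a standard monitor.

```python
# The luminosity levels cited above (values as given, in candelas),
# expressed relative to ordinary indoor lighting.
LEVELS_CD = {
    "starlight": 0.001,
    "moonlight": 0.1,
    "indoor lighting": 100,
    "computer monitor": 250,                    # ~2.5x indoor, per the text
    "direct sunlight": 10_000,
    "iPhone held close (claimed)": 250 * 100,   # the text's ~100x-a-monitor claim
}
for source, cd in LEVELS_CD.items():
    ratio = cd / LEVELS_CD["indoor lighting"]
    print(f"{source:>28}: {cd:>9} cd  ({ratio:g}x indoor)")
```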
What this means is that it is not only or even mainly program content delivered by screens that may be problematic, especially for children, but the technology itself regardless of content, also especially for children. The human fovea is not fully mature until about age 4, just one datum of many reflective of the uniquely long neoteny of our species that extends the vulnerability of human young to maturation interference like no other animal on the planet.
It is not incidental that visual images induce spectacle in a way that our other senses typically do not. About 85 percent of the brain’s sensory input comes through the optic nerves in our stereo eyes, making the eye a key part of the central nervous system rather than a peripheral part. If “seeing is believing,” it follows that tampering with visual images can cause belief to wander in directions a manipulator prefers more effectively than tampering with our other senses can. The evidence for this and for what it implies is a bit esoteric, but essential nonetheless.
Most of us are not consciously finicky about our light environment. We see by dint of light emanating from an incandescent bulb, a fluorescent light fixture, a computer screen, an LED smartphone screen, and so on usually undistracted by any difference between these sources and illumination from unmediated natural sunlight. But there is a difference between the breadth of the electromagnetic spectrum we see unaided by any machine transmission and the truncated nanometer ranges that artificial lighting provides. Without going into the numbers, this accounts for concern about the “blue” light proclivities of smartphone screens and other back-lit digital devices and their effects on sleep patterns and health. LED screens also flicker at rates we cannot consciously detect, but the flicker has a sensory effect on our brains all the same.
Just as stereo audio sounds, no matter how close their mediated fidelity to an acoustic original, still lack the original’s full aural texture, so a mediated visual image delivered on a screen lacks the full visual texture of the original. Not only is it merely two-dimensional—one hopes this is obvious to sentient adults once it is pointed out—and made to seem three-dimensional through artifice and context, its coloration and embodiment in nature are not the same. Our brains fill in the difference, and over time we have refined a knack for letting mediated images flow seamlessly into our experience as if they were fully real.
But two-dimensional images of three-dimensional reality are not fully real, only representational substitutions for objects that are real. They are thus disembodied from the flow of natural experience, and the implications are not trivial even if they are not consciously sensed most of the time. In particular, natural three-dimensional images are scale-invariant, which means that they retain the same amount of detail whether you look at them close up or further away. Two-dimensional, unnatural images vary with scale, losing detail the further the image is from the eyes.
The reason for this is that light and shadow details vary as any three-dimensional object moves in space, but a flat two-dimensional object and the image it projects to our eyes behave very differently. The computational capacities of the brain, still a subject of much research, react to this difference, and because of it more calories are required to interpret unnatural images.[4] The brain processes scale-invariant images using a fairly small number of neurons; it must use many more to “see” unnatural two-dimensional images because it must expend energy filling in what is not there. The magnitude of variance determines how uncomfortable and energy-draining such images are to comprehend.[5]
Screens qualify as shiny objects that distract us: They are unnaturally bright and that unnaturalness captures our glancing instinct. We cannot not see them when they are on, and their very existence in our environment, whether we are paying attention to them or not—like the televisions always on in airport lounges and most medical office waiting rooms—constitutes an array of calorie-sucking distractions.
One might suppose that at this point we have all the screens that we could possibly need or want. Need, probably; want, no way: A new product called Aura, a tabletop screen device into which one can load family and other photographs, changes images once every so many seconds. A button on the top of the frame allows a viewer to cause a cascade of cute little hearts to rain down over the image on the screen. Other buttons allow one to reverse to the previous photo or hold the one on the screen. Someone bought one for our house, and the three granddaughters presently living with my wife and me are mesmerized by it. Our 6-year-old granddaughter, in particular, will stand in front of the Aura screen, her eyes less than eight inches from it, playing with the buttons for extended periods. When I speak to her to get her attention, I find her so zoned out, her eyes wide, that unless I touch her lightly to interrupt her self-hypnotic trance she does not hear me. When no one is manipulating the buttons, the Aura screen keeps on working, rotating photos through, even when no one is paying attention to it. But it is there, at a minimum acting as a micro-distraction whether those in the room or passing through it realize it or not. It is the rough equivalent of visual second-hand smoke.
Unnatural two-dimensional images also disengage us from the natural world, constituting a sort of perceptual opportunity cost, just as too many “safety” gadgets built into new cars are actually counterproductive because they encourage drivers to become disengaged from what they’re doing. The same goes for seeing the world through a smartphone camera lens.
The recording of two-dimensional images forfeits first-hand three-dimensional experience for the promise of another temporally distanced two-dimensional experience. Taking photos does no harm as long as it is not overdone, and having the photos can allow the summoning at will of an intersubjectively shared experience for later joys—fine. But the cameras are so good, so near-constantly in hand, and the photos so cheap to keep in great numbers that overdoing it has become so common that it amounts to preparing for a two-dimensional future at the expense of living in the three-dimensional present. Boris Pasternak did not live to see smartphones, of course, but he did not need to in order to warn us against their use to excess: “Man is born to live, not to prepare for life.”
Surfing Your Brainwaves
It is time now for a foray into the key metric of elementary neuroscience: brainwave frequencies, and what they mean. It won’t hurt, promise.
What is a low alpha brainwave, a term mentioned in passing above? A wave of about 8-9 hertz (Hz) is the answer.[6] Neuroscientists refer basically to five modal brainwave frequencies in humans, detectable with more or less high fidelity by electroencephalograph devices (EEGs).[7] Note that our brains are rarely functioning all in a single frequency; different lobes can operate simultaneously at different frequencies and may be stratified across different levels of conscious awareness. But one mode is generally dominant to consciousness.
The slowest waves with the highest amplitude are called delta, registering at about 4 Hz or less—usually marked as 0.1 to 3.5 Hz. Delta typically occurs in deep, dreamless sleep and is also the dominant rhythm in infants up to around their first birthday. When adult human brains are in delta, awareness of the world outside our heads is virtually nil, but we can and do access information stored in our subconscious. Many cognitive scientists believe that delta helps us “defrag” our brains from a day’s sensory input, and aids the transfer of short-term into long-term memory via the hippocampus.
When high focus is required, most of us can reduce delta frequencies to about zero. But delta waves actually increase in certain brains when focus is attempted; clinically validated Attention Deficit Disorder is one such condition, which makes it hard for affected individuals to focus without distractions that absorb or segregate delta brain energies. It is as though the brain is locked in a wakened but drowsy state, so focusing requires shunting the brain’s delta activity away from the task requiring focus. We will return in a moment to this datum, for it turns out to be important for understanding what astounding complexes look like to certain habituated, “locked” brains.
Next comes theta, in the 2.5-7.5 Hz range. Theta straddles our brain’s sleeping and relaxed waking time. It is “slow” brain activity compared to alpha, beta, and gamma, but “slow” does not mean boring. To the contrary: Theta-dominant brain frequency is connected to rapid-eye-movement (REM) dream-heavy sleep, and to creativity, imagination, intuitive reasoning, pattern recognition, deep memory, and emotional states of arousal. Theta dominance is rare in wide-awake adults; it is certainly no state of mind to be in if you are, say, driving a car in city traffic. But theta is normal in children up to around 12-13 years old when they’re not forced to act like adults. Of course: This is what the brain is doing when a child is engaged in typical imaginative play.
If it helps some readers, theta corresponds to Michael Polanyi’s notion of “slow” or “loose” thinking in adults as opposed to “fast” and “strict” thinking (a distinction adopted and adapted by Daniel Kahneman, who became better known for it than its original formulator). Theta frequencies are also associated with minds in meditation and prayer, and minds that are mesmerized or hypnotized by someone or something. It is, in other words, a range that can be “done to” in the sense of imposed upon subjects in face-to-face encounters, or that can be the result of engagement with frequency-emitting machines operating in the 2.5-7.5 Hz range. Yes, coupling an activity with a machine can cause a human brain to synch up with that machine—remember that, please. Adult brains in theta correspond usually to people who are seated or lying down, and whose eyes are not moving about actively scanning the scene. Hold that datum, too; it is important to where we are heading.
Third is alpha, the 8-12 Hz range. Alpha is the normal get-it-done frequency. It characterizes our brain state when we are being calmly resourceful, problem-solving on routine tasks and shifting among them as necessary. Unlike theta, which is characterized usually by a diffusion of activity across several brain lobes, alpha is generally dominant to consciousness when it is “on.” Alpha is also the default state of the waking adult brain, the “everything is cool” mode, so to speak. In it our mood is even, we’re reasonably relaxed, we’re taking in the world and responding to cues, we’re moving around and our eyes are also moving, scanning the scene, shifting visual focus as they do. We’re ready for, or looking for, whatever comes next, letting our novelty bias choose the tune to which we will next step. In short, we’re functional and flexible and neither drowsy, anxious, nor intense.
Fourth is beta, the 12-29 Hz range. Neuroscientists usually subdivide beta into low-beta (12-15 Hz), mid-beta (15-18 Hz), and high-beta (18 up to 29 Hz). In low-beta a person is not moving around, but is seated and focused on some task. The distribution of low-beta frequencies tends to be localized by hemisphere and by lobe—say, frontal lobe and occipital side. In mid-beta, the brain is actively thinking about something for a protracted period. The activity is often localized in the brain but can engage and link several areas together. In this state a person is alert, and the object of alertness can be either outside in the environment—say, carefully watching the behavior of a large predatory animal at some distance—or it can be internal—say, thinking about the distinction between a statistical average and a statistical mean, to take a random example. In high-beta, a brain is very alert and focused, fully “on.” A certain amount of agitation or stress can be associated with high-beta, as if one is problem-solving and the solution is required soon or even right now for a clear and present purpose. In this state a person may well be standing and even pacing, but the true focus still remains inside the alert mind.
Fifth and finally is the rarest brain frequency mode: gamma, which registers on an EEG at above 30 Hz all the way to 40 Hz on rare occasions. Here the brain is not just being directed to think and problem solve, but to integrate knowledge at peak performance levels. The brain is trying to discover or generate new understanding or awareness, to synthesize information with knowledge (they are not the same and neither is a “thing”; information may be defined, Ambrose Bierce-like, as vagrant facts in search of a purpose). When a person is in gamma every part of the brain registers at or above 30 Hz, making gamma the most non-stereo of all states of mind.
Gamma’s “pulling it all together” marks a state of accelerated mental concentration capable of producing high quality (and quantity of) thought. Apparently, only people with good memories can make highly effective use of gamma concentration periods, and those few with eidetic memories tend to be most efficient in a gamma frequency state and most inclined and able to summon gamma consciousness. Gamma states tend to be of intermediate length in duration: People rarely go into gamma for only a few minutes, and few have the energy to stay in gamma for more than a few hours at a time. Doing so is simultaneously exhausting and satisfying, if it works. Otherwise it is just exhausting.
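For readers who like their taxonomies compact, here is a minimal sketch in Python of the five bands just described, with the frequency ranges as given in the text (note that adjacent bands overlap at their edges); the small lookup function simply reports which band or bands a given frequency falls into.

```python
# The five brainwave bands described above, with Hz ranges as given in the text.
BANDS_HZ = {
    "delta": (0.1, 3.5),   # deep, dreamless sleep; dominant in infants
    "theta": (2.5, 7.5),   # REM sleep, imagination, meditation, trance
    "alpha": (8, 12),      # the calm, get-it-done default waking state
    "beta":  (12, 29),     # seated focus through alert, intense thinking
    "gamma": (30, 40),     # peak integration; rare and exhausting
}

def bands_containing(hz: float) -> list[str]:
    """Return every band whose stated range includes the given frequency."""
    return [name for name, (lo, hi) in BANDS_HZ.items() if lo <= hz <= hi]

print(bands_containing(8.5))   # ['alpha'] -- the "low alpha" wave mentioned earlier
```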
Suffer the Children…the Rest of Us, Too
An iPad is the worst babysitter ever conceived: It not only has no eye for a child to look into to stimulate myelination, it also monopolizes a child’s visual attention bandwidth, retarding the development of saccadic glancing facility. Eyes that do not move, that do not scan the visual field in saccadic fashion, shift brainwave patterns downward toward a soporific state of mind and harm the muscles that control the eyes. Now consider this datum: As of 2017, more than half of all infants in affluent nations had played with an iPad placed before them by a parent or caregiver before they had learned to speak.
There is no longer any question that long hours spent watching screens physiologically harm brain development in young children, and that is over and above—actually it is behind—all the behavioral damage that screens do not just in childhood but well beyond, to wit: How is the next generation supposed to avoid harmful behavioral patterns in regard to screen use if their brains are diminished when young by those very patterns because of excessive screen exposure? You see the recursivity of the problem.
The Age of Spectacle: How a Confluence of Fragilized Affluence, the End of Modernity, Deep-Literacy Erosion, and Shock Entertainment Technovelty Has Wrecked American Politics
Foreword [TKL]
Introduction: A Hypothesis Unfurled
PART I: Puzzle Pieces
1. Fragilized Affluence and Postmodern Decadence: Underturtle I
2. Our Lost Origin Stories at the End of Modernity: Underturtle II
3. Deep Literacy Erosion: Underturtle III
4. Cyber-Orality Rising: Underturtle III, Continued
5. The Cultural Contradictions of Liberal Democracy: An Under-Underturtle
PART II: Emerging Picture
6. “Doing a Ripley”: Spectacle Defined and Illustrated
7. The Neuroscience of Spectacle: Research and Implications
Brain Power
Seeing the Light
Surfing Your Brainwaves
Suffer the Children
The Screen!
Easy Rider
McLuhan Was Wrong, and Right
The Graphic Revolution, Memory, and the Triumph of Appearances
Structural Shadows
Surfing a New Wave
Some Informed Speculations
8. The Mad Dialectic of Nostalgia and Utopia in the Infotainment Era
Cognitive Gluttony Racialized
Is Woke Broke?
Ripleys on the Left
From Left to Right and Back Again
Saints and Cynics: The Root Commonalities of Illiberalism
The Touching of the Extremes
Spectacle Gluttony
The Right’s Crazy SOB Competition
The Root Beer Syndrome
And Now, More Sex
Abortion: Serious Issues, Specious Arguments, Sunken Roots
Beyond Feminism
The Irony of Leveling
The Imperfect Perfect
Vive la Difference?
Human Nature
9. Spectacle and the American Future
Bad Philosophy, Bad Consequences
Astounding Complexes from TV to Smartphones
Up from the Television Age
The Crux
Cognitive Illusions
Another Shadow Effect
Myth as Model
The AI Spectre
A Sobering Coda
10: Epilogue: What Our Politics Can Do, What We Must Do
A Few National Security Implications
Meanwhile…
Who Will Create the Garden?
Acknowledgments
[1] Cybernetics (1948)
[2] I lack formal training in human anatomy, biochemistry, endocrinology, optics, and medicine. But with experts’ help I have acquainted myself with the basics of these and other scientific disciplines for the purpose of tying my spectacle thesis to solid scientific anchors. I am particularly indebted to Dr. Richard Cytowic for helping me learn and apply the material presented in this chapter. I have long noticed that many humanities and social science experts are allergic to assimilating hard science material that may be relevant to their interests. I have never understood why this is.
[3] M.R. Uncapher and A.D. Wagner, “Minds and brains of media multitaskers: Current findings and future directions,” Proceedings of the National Academy of Sciences 115:40 (2018), pp. 9889-96, cited in Cytowic, Your Stone Age Brain in the Age of Screens, p. 45.
[4] Note Mike McRae, “A First-of-its-kind Signal Was Detected in The Human Brain,” ScienceAlert, 29 February 2024.
[5] Cytowic quoting the work of Arnold Wilkins of the University of Essex on the metabolic effects of screen exposure.
[6] Hz stands for hertz; 1 Hz means one wave or cycle per second. Brain waves are very, very slow compared to, say, radio waves.
[7] More or less because the devices work to best effect only under limiting conditions, and so cannot measure many details of cognitive functioning if the subject is moving around too much.