The Age of Spectacle, No. 30
Chapter 7: The Neuroscience of Spectacle: Research and Implications, part 2
As veteran Raspberry Patch readers are aware, most post-election autopsies have focused on the obvious and the superficial, not excluding perseverations on race and gender. As Claire Berlinski pointed out in her November 7 Cosmopolitan Globalist essay, “No one can explain Trump’s victory,” most have also been cloyingly and tautologically self-referential.
That said, some analyses have been both less egocentric and more cogent than others. Probably the most comprehensive analysis so far is that of Musa al-Gharbi [“A Graveyard of Bad Election Narratives,” November 13, on his Substack Symbolic Capital(ism)]. No, he shows, what happened wasn’t mainly about skin-color bigotry or sexism or Biden’s failure to step aside earlier or the election being bought by digital libertarian-minded billionaires, and it wasn’t even anything exclusive to the 2024 political season. As the better short pieces [David Brooks, “Voters to Elites: Do You See Me Now?” New York Times, November 6, 2024, and Brooks, “Why We Got It So Wrong,” New York Times, November 14, 2024; Peggy Noonan, “A Triumph for Trump’s Republicans,” Wall Street Journal, November 7, 2024; Tunku Varadarajan, “A Democrat Ponders a ‘Thumping Rebuke’,” Wall Street Journal, November 8, 2024; Maureen Dowd, “Democrats and the Case of Mistaken Identity Politics,” New York Times, November 9, 2024; and Thomas Frank, “The Elites Had It Coming,” New York Times, November 9, 2024] have also argued, it was mainly about a vigorous rejection of the Democratic Party as the party of smug, identitarian-minded, mirror-staring elites who have lost touch with both the American working-class majority’s economic circumstances and its deeply non-woke social attitudes.
Fine, that is essentially correct—and it is why I have not identified as a Democrat since 1991 (but have never identified as a Republican for some of the same and some other reasons). But not everyone agrees. Some cite rising support for authoritarianism and some link that support to rising nihilism. The argument is simple: Trump is awful, yes, but a lot of people like him anyway in part because he is awful. He is a badass, a disrupter, a jokey burn-it-all-down fulminator, just like they have become, at least in their cynical anti-hero imaginations. [Matt Johnson argues this case in “‘Identity Politics’ Isn’t Why Harris Lost,” The Bulwark, November 13, 2024.]
Superficiality is a relative concept. Cogent as they may be at a surface level, neither al-Gharbi nor Brooks nor Noonan nor Ruy Teixeira (Varadarajan’s source), nor Dowd, nor Frank arguing the identity politics thesis, nor Johnson and others arguing the “We the People have turned nasty” thesis, unearths the shell of a single relevant underturtle, as we have been doing, and will continue to do, here at The Raspberry Patch. That may be why so few seem to understand that both of these arguments hold some water, and that they are not in the least incompatible.
Yes, both the rejection of identity politics and the rise of a nihilism-inflected authoritarianism, taken together, do account in the main for what happened on November 5—not just in the presidential election but across the board and down-ballot as well. How so? Because both arguments are totally consistent with the ongoing cultural mentality regression to preliterate, mythic forms of cognition that we have described in some detail in the Age of Spectacle project. And they fit together: One (anti-identity politics) is a push-away, a negative motive; the other (badass authoritarian posturing) is a pull-toward, a positive motive. Many voters, it seems, were both pushed and pulled toward their decisions.
Here is the essence of it, stated succinctly particularly for the sake of readers who have only recently found their way to The Raspberry Patch: The surging new orality, delivered courtesy of our ubiquitous cyberlutionary screens, has driven the nation’s cultural mentality back toward preliterate mythical/magical modes of thought. The reason so many screen-addled and addicted non-readers cared more about inflation than the raging anti-liberal democratic authoritarian threat posed by the MAGAtized GOP is that in the schizoid mythic mentality there is only the concrete on the one side (e.g., the price of eggs) and the magical phantasm on the other (e.g., “deep state,” QAnon, Frazzledrip, “replacement theory” conspiracies). In the mythic/magical mentality there is not much, in some cases none at all (Jacob Chansley is a poster child for the none-at-all variant), of an in-between world where facts and logic and dot connections matter.
The price of eggs is concrete; democracy and authoritarianism are abstractions annoyingly occupying a foggy zone between the eggs and the conspiracy theories. One has to actually think about them, even maybe read something about them, to understand them. But if one is cyberaddicted to screens and one’s modal brainwaves are in low alpha leaning to theta (recall last week’s post) one is not going to read or think; one is going to go image- and affect-heavy, and one is going to do it easy, which is to say quickly and on the cheap.
Then there is the role of the media in all this as partly underturtle and partly transmission belt for what we have called industrial folklore (back in Chapter 2). Amid the seething cognitive soup before us, a great deal of money is to be made by major broadcast media pandering to give their audiences what they want, which is mainly to tell them what they already believe but assume they know. Politics has long since been framed by American media as a form of entertainment, and in an Age of Spectacle, an age of cognitive gluttony and funhouse mirrors, wild lying in angry zero-sum modes is vastly more entertaining to an affluent-yet-nervous-and-resentful audience than any form of truth telling. It’s not exactly panem et circenses, as Juvenal put it--“bread and circuses”--but it’s close: It’s more “fast food and arson-inflected spectacle.” Harris lost in part because she told the truth: Truths are humble but lies, especially in a culture saturated with advertising mendacity, must be grandiose. Grandiosity will always prevail in an Age of Spectacle.
Put another way, referring back to Chapter 6, American politics has become a perpetual two-headed carnival calf astounding complex, co-designed for Eloi with debit cards by media moguls and politicians. The media types are the producers of the show, and the politicians are the actors. The corporate sponsors, not least our digital oligarchs, love it all the way to the bank. At a deeper level, down in the entrails of the culture, that explains what happened—what really happened—on November 5. On November 5 a pot full of rancid stew finally boiled over, but it was a pot that had been simmering on the stove for decades. As several observers noted, it was not really about faces; it was about paths, but paths evaluated under the influence of a pervasive surrealist social brain fog.
Now finally, before getting to today’s post, I’m going to bring forward—actually yank backward to where we are in our Age of Spectacle rollout—one of the key “money quotes” in the Age of Spectacle manuscript. You’ll see it again when we get to Chapter 9, but at a time when so many are searching for the true nub of the matter, it just seems cruel to make my readers wait any longer. So:
The mythical organization of society seems to be superseded by a rational organization. In quiet and peaceful times, in periods of relative stability and security, this rational organization is easily maintained. It seems safe against all attacks. But in politics the equipoise is never completely established. What we find here is a labile rather than a static equilibrium. In politics we are always living on volcanic soil. We must be prepared for abrupt convulsions and eruptions.
In all critical moments of man’s social life, the rational forces that resist the rise of the old mythical conceptions are no longer sure of themselves. In these moments the time for myth has come again. For myth has not been really vanquished and subjugated. . . . The description of the role of magic and mythology in primitive society applies equally well to highly advanced stages of man’s political life. In desperate situations men will always have recourse to desperate means—and our present day political myths have been such desperate means. . . . If modern man no longer believes in natural magic, he has by no means given up the belief in a sort of “social magic.” If a collective wish is felt in its whole strength and intensity, people can easily be persuaded that it only needs the right man to satisfy it.
That is Ernst Cassirer in The Myth of the State, written in 1944 and published after his death in April 1945. Now do you understand perhaps a bit more clearly what has been going on?
Chapter 7: The Neuroscience of Spectacle: Research and Implications, part 2
There is no longer any question that long hours spent watching screens physiologically harm brain development in young children, and that fact is over and above—actually it is behind—all the behavioral damage that screens do not just in childhood but well beyond, to wit: How is the next generation supposed to avoid harmful behavioral patterns in regard to screen use if their brains are diminished when young by those very patterns? Hence the nasty recursivity of the problem. That alone is reason enough to demand that the manufacturers of kids’ car seats featuring a holder for a smartphone to entertain the little tot while on the road cease and desist from making and selling those products immediately. They constitute child abuse.[1]
Young kids, beyond their first birthday in any case, should be in theta much of the time, immersed in imaginary magic/mythic play. Screens tend to put kids in alpha through a relentless process of overstimulation, to shift them basically into neutral. Screens often do the same for adults but from the other direction, downshifting them into alpha from beta. The destination is the same but the point of departure is not. In the case of the kids, screens tend to lock them in alpha if only because their sense of personal agency, or efficacy, is undeveloped. They don’t know how to get out of alpha by themselves and they have fewer time-sensitive needs to do so than most adults.
As a result, cognitively sped up and brain-neutralized kids, who get yanked out of the theta mode that is their key means of mental development, may not acquire sufficient capacities for imagination and empathy—growing their theory of mind, in other words—via play. In the fullness of time such stunted minds, if the stunting becomes habituated in their lives, may also become easier prey for manipulative personalities, charlatans, and demagogues.
We have mounting empirical reason to be concerned. Most of the myelination that takes place in human lives takes place in formative young brains. Screen exposure obviates or substitutes for direct eye contact between babies and caregivers, which is the evolved spark for instigating myelination. Too much screen exposure among infants and toddlers retards the myelination process; clinical evidence from 2020 diffusion tensor imaging (DTI) showed white-matter tract thinning and damage in overexposed young brains. White-matter tract development supports language and emergent literacy skills.[2] All else equal, those brains will process slower than the brains of those whose perceptual diet is rich in three-dimensional eye-to-eye stimuli.
The study, published in JAMA Pediatrics, emphasized that screen exposure is mainly what is causing these changes during the early and rapidly changing stages of brain development. Moreover, high levels of stress in children caused by exposure to unassimilable images on screens release cortisol and adrenaline that, when sustained and repeated, cause dendrites to die off and can even kill off the parent neurons from which the dendrites sprout. That can cause permanent, irreversible brain damage.
More recently, a study released in late January 2023 and also published in JAMA concludes that “excessive screen time for young children is linked to impaired brain function and may have detrimental effects beyond early childhood and impair future learning.” Specifically, excessive screen time leads to a later abundance of low-frequency brain waves, a state directly correlated with low alertness. The study’s lead author, Dr. Evelyn Law of the National University of Singapore (NUS) Medical School and the Singapore Institute for Clinical Sciences’ (SICS) Translational Neuroscience Programme, told the Straits Times: “The study provides compelling evidence to existing studies that our children’s screen time needs to be closely monitored, particularly during early brain development.”[3] That, in turn, is why the government of Singapore has moved to regulate digital use among children and to warn adults of the ill effects of addiction to it.[4]
Singapore’s “measures” will go well beyond the mere warning labels proposed tardily by U.S. Surgeon General Vivek Murthy on June 17, 2024—not that they will necessarily work.[5] China’s already have. In the United States individual school districts have been left to fend for themselves, since the Federal government seems paralyzed on the matter.[6] The likely result is that school districts in better-educated areas will move to act quickly and resolutely, and others will not. The long-term consequences of regional and urban-rural differences in controlling cyberaddiction among the young could in time become very significant in terms of human-capital quality. The same goes for differences among nation-states: China’s ability and determination to act on the challenge of cyberaddiction, compared to the paralysis of the dysfunctional U.S. Congress, is a national security concern.
The Singapore study, like earlier ones, also correlated excessive screen time in children with retarded prefrontal maturation, and thus with impaired development of executive function. Behavioral symptoms in children adversely affected by screens also include difficulty regulating emotional stress, difficulty paying attention to and following directions, and impulse-control weakness. Why? Because a child’s brain needs to move among different brainwave modalities by way of getting used to them, knowing intuitively how they feel and what mind correspondences they enable. But the real payoff of the January 2023 study lies in its hypothesis as to why these effects occur.
Recall the critical role of context in aiding the brain’s interpretive function from our discussion of optical illusions. Optical illusions work because nothing in nature resembles their designed-in complete lack of contextual cues, which leaves the brain stranded in interpretive limbo. That is what produces the astounding complex. Now note that earlier neurological research concluded that when infants see images on a two-dimensional screen they have trouble processing information from those images, presumably because their brains expect to be provided cues and clues to meaning from the three-dimensional reality within which human beings evolved. When they don’t get those cues and clues in a form their young brains can handle, they flood out in incomprehension. The study states:
When watching a screen the infant is bombarded with a stream of fast-paced movements, ongoing blinking lights and scene changes, which require ample cognitive resources to make sense of and process. The brain becomes ‘overwhelmed’ and is unable to leave adequate resources for itself to mature in cognitive skills such as executive function.
In other words, lacking any accumulation of evidence that would tell the child what the images mean, the kid gives up and goes passive. This reflex resembles how a young child responds to insistent loud noises: They go passive, sometimes falling asleep so that the brain will filter out the noise, as it invariably does when we sleep. What is happening with the screen bombardment is that the child experiences an astounding complex, a “you don’t see that every day” sort of spectacle, that strands him or her between possibilities of meaning: A; not-A; not-that-A-but-some-other-A? The child needs for the astounding complex to end; the novelty bias may have attracted his or her attention to the “shiny object” screen, but after the “wow” fun part is over the astounding complex generates anxiety the longer it persists. If not yet screen-addicted, the child looks away to get his or her bearings again in the non-mediated-image real world.
We used to think that brain development and the maturation of related cognitive skill sets were malleable only in children, particularly infants, and that after a certain amount of time the impacts became negligible and ultimately disappeared. The studies cited above, looking for “clean” and therefore clear results, chose to study young children; as important as they are, they provide no empirical clue to the impact of the technology on older humans. Fine; scientists gotta do what scientists gotta do to attract grant money and get published.
But we now know that frontal lobe development takes much longer to reach maturity than we formerly thought, in males up to the mid-20s on average. This is where one’s risk centers are located, and it is why auto insurers demand higher premiums from 25-and-under male drivers. Lately we are slowly beginning to confront the truth that there is no point where the cyber-technovel impacts of screen-delivered mediated images become negligible and disappear. The more that is delivered, the larger the cumulative impact, regardless of viewer age or intelligence. In other words, there is some overload threshold that can turn any more or less normal person into a screen-addled imbecile.
Worse, perhaps, as cognitive metabolisms varyingly slow with the aging process, we may return to a higher level of vulnerability to input from screens (and of course not only screens). The specter of old folks sitting around listening to rightwing talk radio and Fox News thus should cause us to reconsider what is going on in and to their brains when they do that.
Even more portentous, if excessive screen use disrupts sleep patterns in younger people it can develop into a pattern of poor adult sleep. One victim is memory: Too little REM sleep diminishes memory and the transfer of short-term to long-term memory via the hippocampus. In that regard simply note that spending just one hour in front of a laptop or desktop display screen after dark but before bedtime suppresses melatonin production in 15- to 17-year-olds by 23 percent, and even brief exposure to the intense light coming from a tablet can suppress melatonin levels in pre-adolescents by 90 percent. Chronic sleep deprivation in children also correlates with shorter telomeres, over time amounting to telomeres about 1.5 percent shorter for each hour of sleep a child gets below the recommended amount.
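For readers who like to see the arithmetic spelled out, here is a minimal back-of-envelope sketch of that last correlation. It assumes, purely for illustration, a recommended ten hours of nightly sleep for a school-age child and that the roughly 1.5 percent figure scales linearly with each chronically lost hour; the cited research does not necessarily warrant either assumption.

```python
# Back-of-envelope illustration only: it treats the cited correlation as if it
# scaled linearly per chronically lost hour of sleep, which the underlying
# research does not necessarily claim.

RECOMMENDED_SLEEP_HOURS = 10.0    # assumed recommendation for a school-age child
SHORTENING_PER_LOST_HOUR = 0.015  # the ~1.5 percent figure cited above

def telomere_shortfall(actual_sleep_hours: float) -> float:
    """Estimated fractional telomere shortening for a chronic nightly sleep deficit."""
    deficit = max(0.0, RECOMMENDED_SLEEP_HOURS - actual_sleep_hours)
    return deficit * SHORTENING_PER_LOST_HOUR

if __name__ == "__main__":
    for hours in (10.0, 9.0, 8.0, 7.0):
        print(f"{hours:.0f} hours of sleep -> ~{telomere_shortfall(hours):.1%} shorter telomeres")
```

Under those assumptions, a child who chronically sleeps eight hours a night instead of ten would be carrying telomeres roughly 3 percent shorter; the point of the sketch is only that small nightly deficits compound, not that the biology is this tidy.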
That is not quite all the bad news, though shortened telomeres are pretty bad news. Some research indicates that prolonged absorption in screen-based sounds—never mind the images for once—may interfere with the laying down of brain pathways devoted to social cognition. Even toddlers expect human beings, not electronic boxes, to talk to them. Sights and sounds from a television screen left on for “white noise” purposes, even if a child is not focused on the machine, may still compete with and retard language acquisition simply by polluting the air with noise that interferes with the quiet a child needs to learn to recognize distinct human sounds. We know for a fact that audible background television noise correlates with delayed language acquisition compared to households where the television is usually off when adults are speaking to children. Kids need to clearly hear what adults are saying to them and clearly hear even their own practice babbling in order to get the sounds right. The idea that kids need clear auditory feedback to learn how to speak is obvious once it is pointed out. But apparently it does need to be pointed out to some parents.
The effect of screen exposure on adults is not so dire but neither is it trivial. Research has shown that at least 90 percent of American adults use some kind of visual electronic device within one hour of bedtime.[7] Many sleep poorly as well for any number of reasons. If chronically suboptimal sleep patterns go uncorrected for a long time, there is a definite empirical correlation with early-onset dementia. An insufficiently rested brain will start cannibalizing itself at some point by killing off non-firing, housekeeping-necessary neurons.
Luckily, it is easy to remedy and reset poor sleep patterns. The best way to do it is free: Spend more time outside, and do so especially in the morning when blue light is more prevalent. That will reset human circadian rhythms in just a few days. Doing that also correlates, believe it or not, with a better BMI, because blue light exposure levels influence body fat and the appetite-regulating hormones leptin (secreted mainly by fat cells) and ghrelin (secreted mainly by the stomach).[8] If you want to make sure you get to bed when you need to, avoid all screens after sunset and make a fire. Look into the orange-yellow light . . . and you will be fine.
The Screen!
A famous painting, one of the best known and most easily recognizable of all modern paintings, is Edvard Munch’s “The Scream,” painted in 1893. Why did Munch paint this painting? He said it had to do with a panic attack he had recently suffered. But now we know better: His was a prophetic vision of The Screen!
Now that we have at least a gloss on how our brains and visual systems work, and how they may be vulnerable to tampering, it is time to detail some of the wider implications of those vulnerabilities. Some of those implications follow closely on settled science; others are admittedly more speculative, but that does not render them ipso facto unscientific. After all, most scientific truths start out as intuitive inferences. Speculation, or hypothesis building if we want to dress it up a bit, is an important part of the scientific process.
Whatever terms different clots of neuroscientists prefer to use, the fact is that digital devices compete for attention with the natural and face-to-face unmediated social world, and as such they drain attention and memory away from what is real and unmediated to what is mediated and, for a great many, not infrequently fictive. This is deliberate: All the major competing high-tech digital corporations strive to reduce human face-to-face interactions in order to substitute a manufactured designer reality that leads us to buy the things advertisers hope to sell us. This substituted reality most easily penetrates young, still-forming brains. As Ramsey Brown, a neuroscientist interested in designing ways to counter big-tech methodologies, says, “Your kid is not weak-willed because he can’t get off his phone; your kid’s brain is being engineered to get him to stay on this phone.”[9]
The result of over-exposure to the hyperstimulative world of the internet is that sleep, mood, and acuity of thought are all degraded because the devices, whose circuitry runs much faster than the brain’s, tire our brains far more than the lesser distractions of the pre-digital age ever did. This is why those heavily attuned to screens increasingly complain of an inability to focus, “brain fogs,” unreliable mid- and long-term memory, and a near-omnipresent sense of anxiety that one is always running behind schedule. The result is often a kind of manic primitivism “when people seek a new identity by plunging into ceaseless action and hustling,” wrote Eric Hoffer in 1970 in First Things, Last Things. “People in a hurry can neither grow nor decay; they are preserved in a state of perpetual puerility,” remaining adolescent-like despite their biological maturity.
Thus, many people rush out to buy stuff not because they need or really even want it, but because the accumulating stuff becomes a substitute way of accounting for and defining a self that we are otherwise too hurried to reflect upon and tend to. Stuff accumulation, sometimes called “miswanting,” is the easiest on-ramp to the hedonic treadmill that has become the signature characteristic of Western materialist culture, and the digital tsunami, working in this regard as a form of cultural malignancy, has made it a much more powerful impulse. Many people do not have to rush out to a store to buy something anymore; they rush instead to their phones, pay through an online broker, and have whatever it is that they don’t really need delivered. What amazing convenience applied to the usually unnecessary.
So if we suspect that we are being exploited in this way, all we have to do to avoid harm is to shut off our phones and close our eyes to visual distractions out in the world beyond our magic rectangles. But if we close our eyes while outside we will likely bump into a lot of things, possibly things like trucks and buses. So can we go through life in city and suburb with our eyes perpetually closed?
Of course not, which is why Anthony Burgess’s 1962 book A Clockwork Orange—and the stunning 1971 film made from it starring a young Malcolm McDowell—was very nearly prophetic, regardless of what Burgess knew or did not know about the anatomy of human visual systems. In perhaps the most iconic scene from book and movie alike, behavior modification technologists prop open the eyes of their subjects so that they cannot not see shocking scenes of violence and brutality. So yes, we can in theory look away or close our eyes, assuming we are not already addicted to screens. What if we are addicted, however? A young Malcolm McDowell could not not look at horrific video images being poured into his optic nerves: He was restrained coercively, with clamps keeping his eyes open. What’s our excuse?
That feeling of always running behind has important knock-on effects, as well. For example, intimacy is a beautiful condition between two human beings, whether it be of a sexual nature or not. But genuine intimacy takes time to develop. In an age of wasting patience, however, a true experience of intimacy is becoming ever rarer. That is why hook-up culture has developed as a way to satisfy baser needs without ever having to contend with the slow investment demands of intimacy, assuming younger denizens of the digital age even have a clue as to what sort of non-transactional relationship that could refer to. Connecting sexual passion and love has always been challenging. World literature would not read as it does were that not so. But challenges can be met with patience, humility, discipline, and forbearance. Will being speeded up by our own machines make meeting the challenge of intimacy just harder in the future, or at some point will it make it impossible? What does that portend for the kind of loving home environment children need to become emotionally secure adults?
Happily, the solution for such irritations and misanthropies comes easily to hand, in theory at least: disconnect, stop the incessant flow of images, “refuse it,” as Sven Birkerts advised back in 1994 before smartphones even existed.[10] Like occasional boredom, silence is healthy as well as proverbially golden. But cyberaddicts don’t, or can’t, do that: They love the anticipation of reward even more than they rue the reward once they have it. Doing too much of anything that is supposed to be and starts out being pleasurable generally leads to anhedonia, defined as the absence of enjoyment in an experience that is supposed to be enjoyable. What about those not yet addicted, but merely lazy, ease-seeking, go-with-the-flow types? Are they more likely to become addicted or to “refuse it”? At scale, no one knows; one can find anecdotal evidence for either possibility.
An over-abundance of fictive screen fare, in particular, competes with and displaces the natural human drive to socialize. No one can look a machine in the eye when interacting with it because it has no eye, and making eye contact is what humans evolved socially to do. Machines interact at, not with, people, and no amount of fantasy anthropomorphization can change that. It is also well known thanks to the work of Harvard Medical School molecular biologist Lauren Orefice, who established that sensory over-reactivity in the extremities correlates with autism spectrum disorder in the brain, that screen addictions among the young can create “virtual autism” as well as worsen clinically diagnosed autism.[11] The rise in autism correlates strikingly with the proliferation of electronic devices in homes, notably televisions. Autism diagnoses are now thirty times more common than they were in 1960, perhaps because medical professionals are more sensitive to it but also, perhaps, because there are more symptomatic people to be sensitive to. There is a well-understood genetic predisposition to autism, but it may be that latent predispositions are activated by certain environmental stimuli, and would remain merely latent without them.
It is also worth noting that extreme cases of virtual autism correlate with the participation of younger people in online extremist groups. Three reasons for the correlation stand out. First, online modalities enable autistic people to socialize without the attendant social anxiety. Second, extremist forums have archives, so those with autistic disorders can dwell at length and at their own pace in mastering the relevant conspiracy theories. And third, the rigid and simple nature of dark conspiracy theories enables those afflicted with autism to easily match their impoverished theory of mind with the conspiracist narrative.[12]
Those afflicted with such maladies, genetic and/or acquired, typically cannot look others in the eye; they have been conditioned or have conditioned themselves via machines to be asocial beings—virtual hermits, in other words. Screens may not cause autism in the way they cause virtual autistic symptoms, but they can exacerbate it. Consider that faces on screens never return a child’s smiles or answer a child’s specific question or make eye contact. Children watching faces on a screen are completely invisible to those faces; no emotional feedback of any kind can occur. This is not what one does with children displaying autistic characteristics; it is the very last thing one would want to do.
Even more significant, in young developing brains super-saturation of fictive images and plotlines can easily out-compete anything going on in the real world. That is because fictive images tend to be beacons of spectacular “shiny objects” that stimulate relatively hyper-abundant amounts of dopamine compared to ordinary “just walking around” perceptual stimuli. Additionally, people who interact often with one another not only socialize each other (not merely “with” one another) but also to some extent synchronize with one another biologically. This applies to males as well as females and may help to explain, via pheromones and in other ways not yet well understood, the biochemical substrata of empathy. Whatever else they do, screen addictions that displace face-to-face interactions tend to harden our hearts.
Consider further that a mediated visual image delivered to us on a two-dimensional screen cannot substitute for the multisensory reality of the natural world. Our core senses—visual, aural, tactile, olfactory, and balance—work synergistically to produce multidimensional working models of the world around us; they are networked in the architecture of our bodies and brains just as they are in all animals. Only the dependence proportions differ: Most mammals depend more on their olfactory sense than do humans, most birds are highly vision-dependent, and bats depend on an ornately developed aural sense. Humans depend mostly on the visual sensory mode, followed by auditory and tactile.
Of course, screen-delivered video images are usually accompanied by sound tracks, but screen media tend to exaggerate the invasive access of the visual to our brains over other sensory inputs. The origin of the aforementioned “despotic eye” seems to be an evolutionarily driven preference for simultaneous object and motion input, as with vision, over sequential input, as with sound.[13] That is probably why eye-to-brain neural connections outnumber ear-to-brain connections by a factor of three. The reason may have to do with time and the need to act quickly in a moment of danger or opportunity. It takes only a tenth of a second for a visual image to travel from retina to the primary visual cortex; it takes many times longer for sequential auditory stimuli to accumulate into a coherent meaning.
But of course human sight and hearing, as well as our other senses, ultimately work in an integrated fashion, and so provide a sensory whole that is more than the sum of its parts. (The postmodernist attempt to deconstruct the unity of individual personalities may one day argue that a person’s visual self is separate from its auditory self, from its tactile self, and so on.) An infant’s developing hearing, with each sound laying down neuronal connections that add to patterns recognized and deposited in memory, corroborates and is corroborated by the other senses. This is why infants who are born profoundly deaf must wear their hearing aids and cochlear implants during all waking hours so that this process of sensory integrated learning can proceed at full power.
That anatomical reality concerning vision and hearing also aligns with the bicameral “day brain/night brain” hypothesis mooted in Chapter 4, since the novelty bias would have been less active at night before controllable artificial illumination was invented, when purposeful human action was reduced from typical daytime demands. It even follows, possibly, that dawn and dusk are the two most naturally creative times for humans as their neural energy is shifting from right-to-left or left-to-right brain predominance, so that at those times the brain attains a maximally integrated functionality. It may not be coincidental that most faith communities have traditionally timed prayer services for dawn and dusk.
One implication of this sensory distinction is that most commercial films and television offerings, by prioritizing images over sounds, demand little from the viewer in terms of interpretation or imagination. Viewers are swept along by the succession of realistic images that are so easy to take in—so long as there are not so many of them so fast as to create processing bottlenecks—that they can be consumed even in a low-alpha brainwave state of easy-going, generic distractibility. It also explains why phones with screens are far more desirable (and addiction-prone) than the now virtually obsolete audio-only phones.
Easy Rider
Easy is the key term here. . . .
[1] Long after I drafted this chapter a corroborative book appeared: Jonathan Haidt, The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness (Penguin, March 2024).
[2] See J.S. Hutton et al., “Associations between screen-based media use and brain white matter integrity in preschool-aged children,” JAMA Pediatrics, 2020, 174(1).
[3] Law quoted in Ng Wei Kai, “Screen time linked to impaired brain function, may affect learning beyond childhood: Study,” Straits Times, January 31, 2023. The study included participants from McGill University and Harvard Medical School.
[4] See Elisha Tushara, “S’pore to put in place measures to deal with screen time and device use in coming months,” Straits Times, June 22, 2024, and Tushara, “About one in two S’pore youth has problematic smartphone use: IHM study,” Straits Times, July 28, 2024.
[5] See Syarafaba Shafeeq, “Young and troubled: More teens in S’pore getting hooked on digital devices and seeking help,” The Straits Times, November 9, 2024.
[6] Note Karina Elswood, “Fairfax looks to control phones,” Washington Post, August 7, 2024, p. B1.
[7] A.M. Chang et al., “Evening use of light-emitting eReaders negatively affects sleep, circadian timing, and next-morning alertness,” Proceedings of the National Academy of Sciences (2015) 112(4), pp. 195-204.
[8] Explained in Kathryn J. Reid et al., “Timing and intensity of light correlate with body weight in adults,” PLOS ONE, April 2, 2014. 9(4), cited in Cytowic, Your Stone Age Brain in the Screen Age, pp. 206-07.
[9] Brown quoted in Cytowic, p. 87.
[10] Birkerts, The Gutenberg Elegies (Farrar, Straus and Giroux, 1994), p. 229 in particular.
[11] Orefice, “Outside-In: Rethinking the etiology of autism spectrum disorders,” Science, October 4, 2019.
[12] See Rothfeld, “The dark internet mind-set,” op. cit., p. B3.
[13] Romanyshyn, “The Despotic Eye.”