The Age of Spectacle, No. 15
Chapter 4. Underturtle III: From Deep Literacy to Cyber-Orality, part 3
First thing: sorry. I thought I had scrubbed and sleeked down Chapter 4 before starting to roll it out on The Raspberry Patch, but last week’s post showed otherwise. An entire section—most of the first part of the post, actually—should not have been included; it belonged elsewhere, and it has since been relocated to Chapter 10. I went back and removed the section from the post in the archive.
Another subsection, “Zombie Vocabulary,” remains in the chapter but comes later. When it shows up again I’ll remind you that you’ve seen it already and so can skip it if you like. Since the way the previous post ended is no longer part of the chapter’s order, I’ve backed up one paragraph so you can see how this week’s material links up with what now comes before.
On the whole, the drill of using Substack to force myself to put the Age of Spectacle manuscript in better shape is working as intended. Realizing that No. 14 was subpar by way of internal organization is an example; I only wish I’d discovered and acted upon the problem sooner.
To keep this sort of thing from happening again, I have created a detailed outline of the manuscript, listing all the subsections within the chapters as well as the chapter titles. That has already helped me re-order some later chapters, but more adjustments may be necessary, so even this master outline (look to the end of next Friday’s post for it) may change at least a little more. I should have produced this more detailed outline earlier and meant to… but life has been more than typically distracting lately—have you noticed? Plus it’s July in Washington… Still, as distracted folk go, I’m not a garden-variety victim: I’m not even addicted to my smartphone.
So, on we go now with part 3 of Chapter 4.
. . . . We therefore now see another dialectic at work: a downward spiral of analytical competency among, presumably, our college-educated best and brightest. What will happen when current and future ChatGPT veterans populate the billets out in Langley, or in Foggy Bottom, or at the Pentagon? What if their political overseers also count as ChatGPT veterans? What will happen then to what it is fair to call the creative and practical aspects of policy competence?[1] That’s what I mean by “very bad.”
The Reading-Writing Dialectic
Obviously, then, if the benign habits of deep reading deteriorate, it is certain that habits of expository writing will do the same. Like literacy, orality has two sides: in this case, listening and speaking. The implications are several, however, and some matter more than others. For one manifestation that does not matter so much, consider the still-recent hybrid form of orality that characterizes the phenomenon of social media “writing”—the reason for the scare quotes will soon become evident.
Given no more than a moment’s thought, one might suppose that the vast majority of the copy we find on Facebook, Nextdoor, and other niche forms of social media is composed in the mental mode of written language. It does seem, after all, to be writing: letters put together to form words, words put together to form sentences, sentences put together to enter the network of symbolic meanings that writing and reading are all about. You might be someone who supposes as much because right now you are reading a book manuscript, and your habituated orientation to the copy is that of written as opposed to oral language.
If so, you might be someone who has been aghast at the abysmally poor technical as well as literary quality of the writing in social media, and you might well have wondered whether the semi-literacy you consistently see in such domains is better explained by the sad fact that most people no longer know how to write proper English sentences, or by the fact that they do not care to do so—or by some entwined combination of the two. I, too, was perplexed by this mystery until it dawned on me that most of those writing on social media actually do not approach it as a form of written language; they approach it as a form of oral language, since most use voice-activation software to “write” their messages and responses.
Some “writers” do review what gets laid down on the screen to make sure it makes sense, ferreting out glaring errors in the voice-to-text software system and occasionally adding punctuation. Far fewer, it seems, will go further than that to check for proper usage conventions. Many do not review anything; one frequently sees whole long paragraphs with no punctuation whatsoever. This seems to mimic video streaming, which mesmerizes viewers precisely because no electronic punctuation is allowed to interrupt the electron flow. Shadow effects in the digital age are nearly everywhere, and can be readily detected if one looks for them.
What we see in social media writing is partly a reflection of the impatience pandemic from which we as a society are suffering, and partly a reflection of the fact that most participants do not hold themselves and others to the same standards as they would, or might, if they were genuinely oriented to written language tasks. In short, the apparent choice between “they don’t know how to write” and “they don’t care” is a category error that misses the real point.
We even have a hybrid form of mixed writing and orality: emoticons. Emoticons are a rough 21st-century equivalent of hieroglyphics. They are very popular, it seems, especially among those younger than about 40. The effect is, however, worse than if modern hieroglyphics were used the way they were five thousand years ago in Egypt. Emoticons require no syntax, as the more advanced renditions of ancient hieroglyphics did; they work either as a kind of singular exclamation mark or, when bunched together, like a multi-vehicular emotional collision. Even worse, the emoticon user picks from a list devised by anonymous others instead of actually trying to create his or her own symbol.
Thanks in part to the ubiquity of social media in tandem with the decline of serious reading, many people now apply sharply degraded standards to written language as a shadow effect of the new orality dominance. On the Nextdoor feed I get locally here in the Maryland suburbs of the nation’s capital, the words “there,” “their,” and “they’re” are often used interchangeably. If anyone points out such errors, inevitably someone else will chime in with “it’s just social media” or, worse, “it doesn’t matter… you know what they mean.” Indeed, one fellow living in a retirement complex called Leisure World assured everyone that “those rules are not enforced anymore.” That would have to mean that this fellow does not read newspapers, magazines, or books in English, because yes indeed, thank heaven and professional editors, those rules are still enforced in places where written language endures.
But how many places does that come to? If indeed it is true, as noted above, that 54 percent of adult Americans cannot comprehend standard newspaper copy, then it’s no big surprise that we are witness to otherwise intelligent people debating which screen-borne sources of news are less biased than others, as though this were a real question. If you don’t read but only depend on screens for your news, you have zero chance of actually understanding any public policy issue. All screen-borne fare is infotainment—cable news has aptly been called Kabuki theater—and as such it bears the debased characteristics of the age. It is shallow and undemanding, image-saturated and thus emotionalized, ahistorical and focused relentlessly on the now—the “Breaking News” syndrome, the cocaine of the contemporary American media industry.
The Birth of Interiority
Just as pre-literate people in the mythic-consciousness past, not yet in possession of developed concepts of causality or facticity, were capable of believing almost anything if a trustworthy authority claimed it to be so, post-literate people are far more gullible than deep-literate ones. We are moving back fast toward orality, now engined by vivid high-tech visual stimuli and away from print literacy; and, as Walter Ong showed, the emotional and logical syntax of orality differs markedly from that of literacy.[2] We need to dwell here a bit on one of the key implications of this difference.
Understanding deep literacy helps us get from Karl Jaspers’s Axial Age to the modern age. The rise of individual agency—a key hallmark of modernity—appears to depend on the development of a refined sense of interiority in a person: that sense of inner conscious being that defines one’s individual essence. The linguist and historian Owen Barfield called it “the shifting of the centre of gravity of consciousness from the cosmos around him to the personal human being himself.”[3]
This personal human being is often referred to lately as “the narrator,” that “voice” we silently “hear” only in our own heads that each of us as individuals knows to be the inner “me.” Long ago it was called, in English, the soul, which in its day was a novel use of the word. In turn, the development of that sense of interiority appears to depend in large part on the neural circuitry of the deep-literate brain. In short, the advent of deep literacy is the most likely proximate source of modernity via the rise of individual agency that it allowed.
At this point we would be remiss not to mention a well-known alternative hypothesis on the origin of introspective consciousness, namely, Julian Jaynes’s 1976 book The Origin of Consciousness in the Breakdown of the Bicameral Mind. Jaynes argued that societal instability and change caused the rise of introspection-capable consciousness roughly three millennia ago. He and many other scholars agree that the existence of symbolic language is a necessary precursor to subjective consciousness and abstract forms of thought generally. It is therefore all the more striking that Jaynes, like Jaspers before him, never even considered the impact of deep literacy’s advent on his observations and argument. Note that Jaynes’s roughly 3,000-year origin point corresponds well enough to Jaspers’s dating of the Axial Age in the 6th-5th centuries BCE.
Similarly, James L. Kugel’s fascinating book, The Great Shift: Encountering God in Biblical Times, shows how the Jewish conception of God and related matters such as the nature of prayer shifted twice from the earliest evidence we have to the present time.[4] But again: The evidence Kugel uses is perforce written, even when what is being written about may have (ostensibly) occurred much earlier; yet it doesn’t occur to Kugel to ask whether the advent and spread of deep literacy had anything to do with the shifts he describes. To scholars and intellectuals, it seems, literacy is the 600-pound gorilla in the library: She’s certainly there, but no one ever seems to notice her!
A neuro-cognitive fact supports the conclusion that literacy is the real source of the sense of interiority we recognize in ourselves. When we read we are using our visual sense, especially when, as is most common, we read silently. We see the words on the page or on a screen. But what we are seeing is a progression of arbitrary man-made symbols whose basic purpose is to capture oral language. Reading is thus a rare, perhaps unique, hybrid of our visual and auditory senses, and that hybridization shows up to cognitive scientists in how our brains activate in a more integrated way when we read than when we just look or just listen.
Reading—and reading includes not only reading of lexical content but also of mathematical symbols and musical notation—has thus aptly been called a “super stimulus” to consciousness, and a subsequent shaper of it in both individuals and, via mechanisms still not completely understood, in the species as a whole.[5] As Ezequiel Morsella put it, reading even a single word “is no trivial process. . . . [A] single written word can reliably and insuppressibly activate two different kinds of mental representations (one based on vision, one on audition), involving different brain areas and yielding two separate conscious contents, each of which is activated insuppressibly.”[6] Consider as evidence that we can read in a novel about sounds being made, say by galloping horses, that were not in fact ever actually made, and have those sounds affect our motor functions as if we had just heard them. Without the hybridization of our sensory apparatus accomplished by the epigenetic revolution of literacy, that cannot happen.
This insight has clear implications for the aforementioned transformation of polymorphous mythic systems of logic into monadic religious ones. Judaism provides the example I know best, so forgive a short detour to demonstrate the point.
A Rabbinic Interlude
Like other ancient scripture we know of, the Torah was written not to be read silently but to be recited, and originally to be heard by all but the small group of hereditary priests who could read and recite. A famous line from it, after all, begins “Hear, O Israel, the Lord our God, the Lord is one.” It does not say, “Recite, O Israel….” or “Read silently, O Israel….” This is more important to the subject at hand than might at first be supposed.
Orality privileges emotion compared to written forms of communication, and before photography and video the only way to use images to convey symbols was through the plastic arts: drawing, painting, and sculpture. The Judaic prohibition against idolatry must be understood in part as a reflection of concern with the literalness, or excessive concreteness, of what is seen visually. One sees a drawing or a painting or a sculpture, and one can look at it for as long as one wishes; but, before the advent of literacy, one could only hear language as it was spoken and then, sans tape recorders, it was gone.
This is also more important than it may first seem. As we will detail in Chapter 9, human anatomy privileges vision over all the other senses—about 85 percent of our sensory flow is visual—and vision works instantaneously. It is composed of a string of all-at-once presentational impressions. Of course these impressions move as our eyes and heads move, but each impression is a whole unto itself that flows without interruption into our brains, giving rise to what the psychologist Robert Romanyshyn has called “the despotic eye.”[7] Auditory sensations are by contrast linear. They are composed not of all-at-once symbols but of serial symbols that the listener must assemble to make meaning. Moreover, visual sensory impressions are not really symbols; they simply are what they are and into the brain they go. Auditory impressions are symbols whenever language is involved, and must be interpreted by the brain.
We have a saying, “seeing is believing,” but this, according to the Judaic sensibility both ancient and modern, is a dangerous and impetuous error. We also have a saying, “appearances can be deceiving,” and that is right, since appearances by their nature are superficial. We cannot know what another person is feeling, or believes, or fears, or hopes for just by looking at him or her. We must engage that person with language; we have to talk and listen to learn such things, and that, unlike seeing, is an iterative process that takes time, not an all-at-once flash of sensory perception.
So Judaism privileges sound over sight as the portal to the divine, as well as to one another. Note, too, that the Torah is revealed in the wilderness. Why? Because the wilderness is quiet, so speech is more easily and more accurately heard. In Hebrew the word for wilderness and the word for “he speaks” are written with the same four consonants (m-d-b-r).
Also note: The Torah never claims that Moses sees God, but only hears His voice. It followed further that since we could not see God, all visual representations purporting to be gods or to lead a person to gods were deviations to be avoided. A key reason is that it is possible, indeed typical, to see many representations simultaneously in one’s field of vision, but we only have neuronal bandwidth to intelligibly comprehend one voice at a time. Hence the Jewish definition of idolatry and the insistence on a unitary, monadic God really comes down to a distrust of privileging sight over hearing, of the visual over the auditory.[8]
Remember This
Back to the main narrative now: Of course the brain then combines conjoined simultaneous visual and auditory content into synthetic meanings. Now, if just one word triggers this kind of integrative cognitive dynamic, consider what an entire sentence, a paragraph, a chapter, a book, can do, and more or less insuppressibly does. And that suggestive impact applies to only one language: Imagine the mental dexterity that develops in the brains of those who can read two, or three, or more languages. This, in a simple nutshell, is the neuro-cognitive substructure of deep literacy, and when seen dialectically it illustrates how deep literacy grows our brains and enables consciousness of our own interiority at a level unavailable to those who cannot read. It also applies, albeit less definitively, to those who do not read—in other words, to those taught to read in youth but who typically choose not to read anything more elaborate than words on lists and signs for the rest of their lives.
The advent of critical-mass deep literacy, and then the invention of movable type, is also the proximate origin of Protestant theology in the European context of the late medieval period.[9] In short, the advent of literacy not only gave rise to the key element in the modern mindset (not the other way around)—agency via interiority— but also to its institutional embodiment in theology in what was still a religious age.
There is more. Literacy, by deepening our sense of interiority, also transformed memory, and thus identity. When we engage with a novel, especially naturalistic fiction as opposed to wild tall-tale folklore and miracle-suffused religious origin stories, we must absorb a fairly detailed timeline with it to really grasp the story. Pre- or non-literate cultures have timelines in their oral traditions, of course, but they are invariably less well defined. This is why in preliterate cultures two-year and twenty-year and two hundred-year periods tend to be spoken of as more or less the same. Recall how, as noted above, Miriam can be both Moses’ sister and Jesus’ wife in a folkloric timeline, but this cannot be in a post-orality literate context.
It is also why in preliterate oral narratives sequences of events sometimes get flipped, and that, in turn, is why in this cognitive dispensation causality tends to be a far looser-fitting logical garment: If we know that X happened after Y, then we know that X could not have caused Y. If we don’t know that, or if the order is fungible or vague, then anything can cause anything else. Metamorphosis reigns, in other words: magic as quotidian reality. This mental architecture is balm to the creative imagination, but it isn’t useful for navigating social and political realities.
Personal memory in turn defines identity. We know who we are, or better, we on-goingly define who we are, in part on account of knowing who we have been. The fineness of our capacity to remember events is affected significantly by learning how to read and then doing it regularly to shape our neuronal pathways. We become attuned to sequentiality and we then apply the knack to ourselves, to shaping the detailed memories that define our persona. No reading, no revolution in the brain, no circuits for reading as a shared epigenetic cultural trait, no highly refined self-memory, and hence no strong ability to project a timeline forward to imagine a concrete personal future.
In his own inimitable way, Bob Dylan put the point in a June 2020 interview:
There’s definitely a lot more anxiety and nervousness around now than there used to be. But that only applies to people of a certain age. . . . We have a tendency to live in the past. . . . Youngsters don’t have that tendency. They have no past. . . . Young people who are in their teens now have no memory lane to remember. . . .[10]
To lack a “memory lane,” in Dylan’s language, is to live as a chronological adult the way a 6-year-old does; there is only the constantly onrushing present. This is how the erosion of deep literacy, at the mercy of social media and the general orgy of screens, harms the ability to imagine and to plan or, put in colloquial language, to connect dots. Those are skills that require an adeptness with sequentiality and, from it, a sense of temporal fineness.
Almost needless to say, too, addiction to screen-borne distraction, leading to acquired-ADHD forms of impatience and an inability to concentrate on anything that isn’t entertaining, affects the raw material of short-term memory that turns into long-term memory via the hippocampus. Present shock produces sequential incoherence in short-term memory, which becomes either unassimilable to long-term memory or disruptive of it unless that sensory input is balanced by input from reading. In plainer English: Memory has evaporated with patience, since the hippocampus cannot form coherent memories with sensate trash alone. The trouble we are experiencing in the age of digital disruption is that, while reading can still be great fun for those who reach a certain level of competence, it is not as instant or as easy a portal to entertainment as that which flows out from subtly ersatz moving images of reality.
Present-shock-aligned fare, which more or less resembles a spinning hallucination, can be very entertaining for those acculturated to the experience—and for those who happen to be stoned (or still stoned) when watching. This kind of experiential appetite may explain how the award-winning 2022 film “Everything Everywhere All at Once” became so popular. Even those viewers who liked the movie were often hard put to summarize its storyline, but most seemed not to care. That alone says a lot about the current dominant mentality in American culture. So does the fact that when older people try to tell or read stories to young people reared on screens, if not fixated on them—their grandchildren, say—they often find that the kids lack the attentional capacity to follow the stories to the end, or sometimes even to the middle.[11]
All this suggests that a person’s sense of interiority is not a genetic constant, not an invariant and unchanging feature of human brain anatomy. Unless provoked to ponder it, we usually assume otherwise, and the assumption is mistaken. Just as language development generally has tracked with larger hominid brain sizes over eons, the growth of our inner voice to full articulate maturity can only take place as our individual capacities for language develop—from that of the child before he or she develops a theory of mind to the adult capable of seeing the self as an object, capable in other words of asking the first question of philosophy: Who, or what, am I?
Human nature as a social animal readily explains the power of articulate speech in an intersubjective context. We have a genetic basis for understanding speech and speaking. But what need has anyone for a particularly articulate inner voice if that voice never has anyone else to “talk” with—an activity done silently only in reading? Thus, to repeat—because it is so important, yet not intuitive—our adult sense of interiority seems to be linked, perhaps inextricably so, to our gaining literacy competence.
The mature “narrator” likely arises from the aforementioned complementary pairing of unnatural acts—reading and writing—as pairings repeat over an extended time at the initiative of living deep readers, who may of course then become writers later paired up with other readers, and so on goes the necessarily dialectical reading/writing process of deep literacy. The mature narrator in our heads is thus a cognitive artifact of culture, of the epigenetic revolution in the brain, not of neurobiology alone. As Ong put it, “oral communication unites people in groups” whereas writing and reading “throw the psyche back on itself” and, as such, cultivate individuality. Between the person interacting spontaneously with the environment and that environment there is now a mediated, back-reflected image of the person in that person’s own consciousness, such that the person becomes simultaneously subject and object. Reading and writing thus create a kind of mirror that enables each person to see himself or herself closely, carefully, and more or less at will at whatever speed may be desired. Self-reflection, and passage from there through the portal to a deeper philosophical awareness, is how we acquire greater texture and nuance in our theory of mind.
Here we have too, just incidentally, the first conceptual possibility of privacy, as opposed to the mere physical fact of being alone. Here we have, in other words, the first chink in the armor of the irrepressibly public nature of human existence. Isolation spites privacy; note how many people nowadays are eager to share their personal information with multiple others they have never met and likely never will. But the ubiquity of communal propinquity sires privacy. The distinction tells us all we need to know about the imbalance between individualism and community that we have allowed to arise in our increasingly man-made environment. It forms a paradox: If we are too much alone our individuality isn’t good for much, is it?
None of this says or is meant to imply that preliterate cultures were necessarily not profound, or not beautiful, or not capable of intricate expression, or not granularly vivid, or not intersubjectively sharable. Not at all. It is certainly not to say that today great art cannot be created for screens—although it is hard to imagine everything from a Shakespearean play four centuries ago to a great film today working as an oral interface with its audience without the script having first been committed to writing. It is to say that if a silent narrator abides in the minds of non-readers, it must be at least in some ways a narrator different from our own. The slow but inexorable movement from oral/communal to written/private uses of narration was indeed epochal. It is hard to disagree with Ong’s conclusion that, “without writing, human consciousness cannot achieve its fullest potential.”[12]
[1] See Philip Zelikow, “To Regain Policy Competence: The Software of American Public Problem-Solving,” Texas National Security Review (September 2019).
[2] Into this context fell an essay by Temple Grandin, “Society Is Failing Visual Thinkers, and That Hurts Us All,” New York Times, January 9, 2023. Everyone who knows or knows about Grandin admires her, but this essay is strangely if understandably off point. In an age of galloping orality displacing written culture, complaining about the written-language emphasis of schooling is twice mistaken. Granted: We probably start forcing reading too soon on young children; Finnish education takes a different and arguably wiser tack. But still: It is all that American educators can do to maintain some level of literacy these days, and visual thinkers would as likely be harmed as helped by a turn toward special attention for them in schools. Visual thinkers like Grandin can be very creative and focused, and it is often the struggle with written language in school that makes them stronger by forcing them to find alternative learning strategies that work for them. We would likely end up with the worst of both worlds if we took Grandin’s advice: an educational system unable to maintain necessary standards of written literacy, but that also cannot adjust effectively to help visual thinkers and others with learning differences.
[3] Owen Barfield, History in English Words (Faber & Faber, 1954), p. 166. The first edition of this classic appeared in 1926.
[4] (Houghton Mifflin Harcourt, 2017)
[5] This raises a perhaps awkward truth. Could it be that, by dint of some neo-Lamarckian process still not well understood, groups of humans with certain allele clusters have managed to rear multiple generations of literate people such that, over time, their brains became functionally different from groups of humans that remained non- or pre-literate? This of course raises the deepest epigenetic question of whether inculcated cultural traits among groups can effect biological changes that can be inherited and progressively developed over time. The most obvious way this can happen concerns premodern assortative marriage and the likelihood that high-status families will rear more living offspring--so the opposite of a genetic bottleneck, more like a genetic cornucopia. Not just Lamarck but Darwin himself initially thought something more narrowly bio-chemical was going on with his theory of pangenesis. Gregory Bateson later pursued this hypothesis through his theories about the soma (Steps to an Ecology of Mind [Ballantine, 1972], pp. 346-63). Stephen Jay Gould and others have questioned the “pure” Darwinian notion of a wholly random process of natural selection as the sole engine of evolution, and there is a neurological basis for doubting this hoary notion. Nicholas Wade has argued that allele cluster differences generate marginally different institutions in culture that over time can have non-trivial relative effects. So if neural networks are shaped by every sensory experience, and if literacy as a prominent epigenetic experience has been a cause of progressive brain development through an extended neoteny, as now seems beyond doubt, might the bottommost reason that a fairly small number of Europeans were able to conquer most of the world between the 15th and 19th centuries have been that their modal literate and numerate brains were different and more advanced in non-trivial ways than the modal brains of the non- or pre-literate populations they subdued? Very awkward if that is true, but it may be true.
[6] The different brain areas include, at a minimum: for visual representation of letters, the Visual Word Form Area (VWFA); for auditory representation of phonemes, the Planum Temporale--particularly the left Planum Temporale, also known as Wernicke’s Area. If we combine reading with speaking and handwriting in the reading-writing dialectic we must add Broca’s Area for vocalization and Exner’s Area for writing. See Ezequiel Morsella, “The Power of the Written Word,” Psychology Today, December 21, 2023, and the graphic brain illustration within. For a more technical presentation of the argument, see Morsella et al., “Homing in on consciousness in the nervous system: An action-based synthesis,” Behavioral and Brain Sciences (Cambridge University Press), Vol. 39 (June 22, 2015).
[7] Romanyshyn, “The Despotic Eye: An illustration of metabletic phenomenology and its implications,” Janus Head 10 (2), quoted in Altfeld and Diggs, “Sweetness and Strangeness,” Aeon, July 2019.
[8] This view also defines a basic contrast between the ancient Jewish and Greek definitions of the sacred. The Greek sensibility attributes philosophical and ideational essence to vision. According to the Stanford Encyclopedia of Philosophy, referencing Thales and Pythagoras, the term εἶδος (“visible form”) and the related terms μορφή (“shape”) and φαινόμενα (“appearances,” from φαίνω, “shine”) were originally visual metaphors. Hence, the Greeks believed that what was beautiful was holy; the Hebrews believed that what was holy was beautiful. The Greeks privileged seeing over hearing; the Hebrews hearing over seeing.
[9] Back for a moment to Rabbinics: The emergence of the concept of the soul as the true locus of human consciousness did not originate in Europe or with Protestantism. Martin Luther called his vision “Abrahamic” and based it on scripture for good reason, and for the same reason he expected the Jews of Europe to rally around his theological dispensation. When very few did, Luther did not take it well. Finally, reading this concept of the soul, and hence of individual agency and moral responsibility, back to Abraham exemplifies the Masoretic mind at work; in truth those concepts did not set deep roots in Judaism until scribal schools spread widely enough to produce a literate critical mass of people in about the 6th century BCE. This development can be shown by analyzing the evolving relevant Hebrew vocabulary in scripture, particularly uses in context of the word נפש.
[10] Dylan quoted in Douglas Brinkley, “Bob Dylan Has a Lot on His Mind,” New York Times, June 12, 2020. Emphasis added.
[11] Consider Charles Frazier: “We are mistaken to gouge such a deep rift in history that the things old men and old women know have become so useless as to be not worth passing on to grandchildren.” Thirteen Moons (Random House, 2007), p. 412.
[12] Ong, Orality and Literacy, p. 14.