How does the world fit together? How does causality work? What does any given answer to questions as large as these have to do with our small individual lives, how we think about them, and what sorts of moral obligations we suppose devolve on us as a consequence?
Welcome to the domain of philosophy. It’s not easy to write briefly on my own meta-epistemic for interpreting the social and political world, and the fact that my likely audience is splayed all over the place with respect to background knowledge doesn’t make it easier. When all is said and done, this promised essay may prove too much for some and too little for others. It can’t be helped. What I can promise is to put the matter in as clear and entertaining a manner as I can. That should help experts and novices alike through what follows.
In Philosophy 101, at least in the Western world, we learn that the subject comes in four main parts or domains—though some add a fifth: cosmology, logic, epistemology, and ethics. And maybe aesthetics.
The first part, cosmology, is about what exactly the world is, where it comes from, and how it seems to be organized. This is the best interpretation one can give to the old Firesign Theatre skit punch line “What is reality?” if you’re old enough, or otherwise strange enough, to know what that refers to. Yes, philosophy can be humorous. Know the one about René Descartes, who leaves his office one afternoon after spending the day “dualing” with his shadow, heads for a small Parisian café with a few colleagues, and orders a drink, probably a Kir? Well, he does. When he finishes his drink a waitress asks if he wishes to order another. Descartes answers (in French of course, or maybe Latin): “I think not” and promptly disappears into thin air.
Ahem… back to cosmology: What the world is, where it comes from, how it is put together, and how all that affects the way it functions raises several questions that wise men and women have been pondering for millennia. Does the cosmos have an extrinsic creator, and if so of what sort? So you see right off that philosophy and religion, for all the differences between them, real and claimed, have some basic orienting questions in common. Does the physical, material world we presume to exist exhaust the cosmos, or in parallel with the physical and manifest is there some spiritual or ideal world as well? If the latter, is that spiritual world a layer or residuum of the creator? How do we individual human beings find ourselves able to deploy symbolic language to pose and ponder such questions? In other words, does the fundament of cosmology suggest some implication for human perception of the world, since we are obviously part of it? When the human mind can include itself as an object of its own inquiries, we have the beginning of philosophy, which, whatever else it is, is self-reflection.
Logic barges into this little salon of ours without asking. The physical world behaves in an orderly way. Things happen lawfully, physics and the other hard sciences tell us. The universe is not off on a random lark. God does not play dice, said Einstein. So what, then, is the plumbing blueprint, so to speak, of this non-random quality? This is where logic comes in, and also where mathematics comes in.
If causality exists and can be observed, then logic is what helps us derive the meta-rules through which causality operates. Mathematical logic is logic expressed symbolically rather than lexically. This raises some wonderful questions: Is mathematics an abstraction of something that inheres in physical reality, or is it just a symbol system hatched by humans that happens to correspond to aspects of reality, but that is analytically separate from that reality and limited toward extremes in its correspondence to it? (Beats me; go ask Edmund Husserl.)
Logic also claws at us in another way. If everything that happens in the physical universe has a necessary and sufficient cause, then need it have a first cause? If not, how logical is that? So logic drags us ineluctably to the question of intelligent design. How can the world exude orderliness without there having been some “first cause” or “first mover”—what has been generically known in human cultural history as God, as we refer to it in English? Can one imagine a different sort of first-causer who, or that, might have taken a different approach to universe building? Could there have been a world like that imagined in the ancient Greek myths, a world of gods and demigods where anything and anyone can turn into anything else and then turn back again, and where timelines can reverse, twirl, and somersault at will?
This question gets a lot trickier fast if you let it. When philosophers, and some natural scientists, speak of necessary and sufficient cause for a given observed phenomenon, they brim with assumptions. One is that, for all practical and theoretical purposes, what they observe is what there is and is all of what there is. A Stanford neuroscientist named Robert Sapolsky, in a recent book entitled Determined: A Science of Life Without Free Will, has made a splash lately based essentially on this assumption. And he draws a logically consistent conclusion from it: No one is responsible for what they do, whether wonderful and kind or criminal and murderous; everything is foreordained in the sense of having necessary and sufficient prior causes. Happily, few other natural scientists agree with his argument, which is, as we used to say in the 7th-grade schoolyard, “old as the hills and twice as dusty.” It is also childishly flat, as we will soon see.
But for now consider only the simple sense of causality: a + b + c + d + … + n = X, in which a, b, c, and d are prior conditions or factors bearing on a given outcome of interest and X is the outcome of interest we are trying to account for, whether that outcome is a human behavior, the behavior of a seal, say, or the precipitate in a chemistry experiment. (Any outcome we define is something we are trying to isolate from the ceaseless flow of reality, and how we do that is itself a head-scratcher—but never mind.) The terms necessary and sufficient mean, unless otherwise qualified, that given an assemblage of all relevant factors at play, and specifying their weighted relational interconnectedness, the outcome has to be what it is, and cannot be any other way. It is predetermined, in other words, not in any way random. It is, as Voltaire’s Pangloss would say, “the best of all possible worlds” because it is the only possible world.
That formulation of causation can be expressed as a sort of billiard ball model, to wit: The cue ball hits the nine ball, which hits the eleven ball, and both the nine and the eleven, still moving as they must be given the physics of the thing, hit other balls and, before you know it, you’ve sunk the eight ball in the side pocket and lost the game. Damn! But a moment’s reflection tells you that this is not a sufficient model. Every natural scientist knows that hydrogen, nitrogen, and oxygen atoms in a gas will behave randomly under spatially contained conditions of temperature and pressure. The gas will still behave predictably on the macro-level, sure. But on the micro-level?
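If the point wants a concrete toy, here is a minimal simulation sketch in Python, with invented numbers rather than real physics: each molecule’s speed is drawn at random, yet the average over a large crowd of molecules is stubbornly stable.

```python
import random

# Toy gas: each molecule's speed is drawn at random (micro-level
# indeterminacy), yet the average over a large crowd of molecules is
# stable (macro-level predictability). Numbers are illustrative, not physics.
random.seed(42)

def mean_speed(n_molecules):
    # Give each "molecule" a random speed between 0 and 1,000 m/s.
    speeds = [random.uniform(0, 1000) for _ in range(n_molecules)]
    return sum(speeds) / n_molecules

print(mean_speed(10))         # a handful of molecules: noisy, run-to-run surprises
print(mean_speed(1_000_000))  # a crowd: lands within a whisker of 500 every run
```

Nothing about any single draw is predictable; everything about the aggregate is.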
If micro-randomness or indeterminacy is compatible with macro-determinism when it comes to the material universe—perhaps in the idea of natural selection as well as in the behavior of gas molecules—then we need a more sophisticated way to understand it. And if gas molecules’ motions are either non-determined or non-determinable, why not the consciousness of human beings? So, similarly, is it not logically possible that while the sum of humanity’s behavior may (or may not) fit into some pre-cooked divine plan or other-authored teleology, the behavior of given individuals need not? That would spell free will for all practical and most theoretical purposes.
Sapolsky may contend that even each and every gas atom’s motion is in fact deterministically set by its prior conditions, and only appears to be random because we’re too thick and lazy to see it. But if he does that then most theoretical physicists these days will smack him on the forehead with a two-by-four labeled quantum mechanics. Quantum mechanics may turn out to be just mysticism for physicists, but the math still works, most of the time at least. So we must grant the possibility that actual randomness can collectively sum to the predictable when it comes to accounting for physical phenomena. So much, then, for the simple billiard ball model of causality. If Douglas Adams were still alive he might be able to conjure a billiard table that took quantum mechanics into consideration. The Pool Table at the End of the Universe, perhaps? Alas.
There is more: I said “observed” phenomenon, which means that consideration of the human senses, imperfect and selective as they are, is required to get to the guts of the causality question. Immanuel Kant famously distinguished between noumena and phenomena, pointing out that we can only know the real, material world as filtered through our senses and analyzed by our brain. This is the font of the philosophical disposition we know as idealism, not idealism as opposed to realism but idealism as opposed to Sapolsky-like stark raving materialism.
This distinction matters to theology as well as to philosophy because, once again, even against the insistence of many philosophers and theologians, the two realms intersect. How to show this? Well, let me use some settled science to suggest why dogmatic atheists like Richard Dawkins and the late Christopher Hitchens are/were being silly.
Behold the electromagnetic radiation spectrum: If we measure it by wavelength it runs from 10⁸ meters down to 10⁻¹⁸ meters, from power cables to cosmic rays, with long wave, medium wave, television and FM, cell phones, microwaves, fiber optics, infrared, ultraviolet, x-rays, and gamma rays in between. If we measure by frequency it runs from 10⁰ to 10²⁶ Hz. In the center of this spectrum, between non-ionizing and ionizing radiation, we have the range of visible light that human beings can perceive: about 7.5 × 10⁻⁷ meters to 3.5 × 10⁻⁷ meters. The visible light humans can perceive comes to a bit less than a ten-trillionth slice of the universe’s energy spectrum. Our aural and tactile senses may expand that a smidgen, but only a smidgen.
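For those who want to check the arithmetic, here is a back-of-the-envelope sketch using the wavelength endpoints just given. Note the assumption baked in: the spectrum is being measured linearly in meters; a frequency or logarithmic measure would yield different ratios.

```python
# Back-of-the-envelope check of the "tiny slice" claim, using the
# wavelength endpoints cited above. Assumption: the spectrum is
# measured linearly in meters; other measures give other ratios.
spectrum_width = 1e8 - 1e-18           # meters: power cables down to cosmic rays
visible_width = 7.5e-7 - 3.5e-7        # meters: the visible band
print(visible_width / spectrum_width)  # ~4e-15, well under a ten-trillionth (1e-13)
```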
Less than a ten-trillionth slice, mark that. So it’s one thing to admit a problem knowing what may be true of any first mover of so capacious a creation; that is justifiable humility. It’s quite another to be certain that there is nothing to know about any first mover because no such first mover exists or ever existed. Given the tiny ambit of our capacity to see or sense creation, that, to me, smacks of unapologetic arrogance. (I called it silliness above just to be nice.) In other words, I don’t know what is, but I’m not therefore willing to conclude that there is nothing. Who would conclude that about something as routinely mystical as the future? We know not what the future will bring within its wide parameters of the possible, but that doesn’t strike most of us as a good reason to assert that there is no future, does it?
The foregoing reference to Kantian idealism introduces the third domain of philosophy: epistemology, or how we know what we know. What does it mean in more granular detail to say that human beings “observe”?
Philosophers distinguish five and only five types of knowledge sources that generate, all taken together, the full range of knowledge of discrete facts that any of us has:
empirical—from the senses (you see the words in front of you, but that doesn’t mean you know by empirical means alone what the words mean….more on that seminal observation below);
rational—from reason (you know from intuition that two plus three equals five);
introspective—from self-knowledge (you know uniquely how you feel, if for example your ear itches);
memorial—from memory (you know or suppose you know what your mother-in-law said or did or claimed she did last week because you remember it);
and testimonial—from what others tell you, orally or in writing. Everything you know about the Taiping Rebellion, for example, assuming you know anything at all about it, is from testimonial sources, since you did not see it with your own eyes in China in 1850-1864, were not born with rational knowledge of it, it is not part of you, and what you remember about it is only from testimonies you have read.[1]
We acquire knowledge from testimonies in basically two ways: We evaluate the reliability of single testimonies, or we trace the origins of multiple testimonies to see if the sources are objective and hence oblivious to our knowing of them. We learn to trust what some sources tell us and to distrust others; over time, too, most of us learn to distinguish what we want to hear from what is reliable as objective description.
At least some of us still learn how to do that, but the practice seems to be getting rarer. Epistemology has vastly more practical application than most people appreciate, and it’s time for a slightly extended example.
Consider that all that the vast majority of Americans know about national politics comes via testimonial knowledge, and intermediated testimonial knowledge at that. Who intermediates clearly matters. Mitt Romney remarked just after the recent Iowa Republican caucuses that “. . . a lot of people in this country are out of touch with reality and will accept anything Donald Trump tells them. . . . You had a jury that said Donald Trump raped a woman, and that doesn’t seem to be moving the needle. . . .”[2]
What Romney is alluding to is the diffuse but concerted attempt to derange a significant percentage of the electorate’s capacity to evaluate testimonial reliabilities. Historically, most of us trusted the Washington Post and distrusted, say, the National Enquirer. MAGA entrepreneurs, acting as willful facticity termites via Fox News, One America News, and broadcast shamans like the execrable Alex Jones, have for years now set out to reverse that order by repeatedly presenting reliable media as “fake” and fake stuff as reliable. By doing so frequently and consistently enough amid the insular designer infospheres of the digital age, magnified greatly since the advent of so-called social media, unreliable sources have become indistinguishable from reliable ones for many people, including some on the further Left as well as the further Right.
A migrating cultural meme has clearly accelerated the growth of this carnivorous wormhole. The postmodernist premise that facts don’t exist, only hegemonic narratives, and the associated narcissistic premise of expressive individualism since morphed into imperial-“I” subjectivism, started on the Left, of course. But it clearly has been taken up and weaponized to stunning effect by the illiberal Right--thanks so much, Messrs. Derrida and Foucault.[3] Care for an amusing example of imperial “I” subjectivism from across the pond? “‘There’s just as much truth in what I remember and how I remember it as there is in so-called objective facts.’ So says Prince Harry, who may be as credible a philosopher as I am a weightlifter,” wrote Nick Timothy, co-chief of staff to former British Prime Minister Theresa May, in the January 15, 2023 Telegraph. (N.B.: Nick Timothy would be shite as a weightlifter, just so you’re aware.)
With sufficient confusion tossed up between the reliable and unreliable, reliable sources can no longer readily suppress wishful thinking. Many people credit as reliable whatever others say so long as it reflects their prejudices. No one more reliable can counter the wishful thinking, because they’re not trusted, and those who try to counter it just persuade a wishful thinker to trust them even less—so around and down we go. That, exactly, is the Zeitgeist tailwind Romney seems to have caught hold of.
The same basic phenomenon is true of the woke Left’s latest episode of ideological St. Vitus Dance: the anti-binary trans fad. Since there are no facts, including importantly in this case facts about chromosomes, license is presumed to say the wildest and weirdest things imaginable about gender and sexuality. To cite an extreme expression that nonetheless clarifies the underlying impulse, “There are as many genders as there are individuals because we’re all unique individuals.” Thus Saorsa-Amatheia Tweedale, a Whitehall diversity ambassador (who claims to be a trans woman, but who knows?) attached to the UK Department for Work and Pensions.
The trans fad amounts to a virtual cult enabled by digital technology, and as a cult it performs the same social function as all cults and apprentice religions: Expressing a belief in the very improbable works as social glue to meld small anti-mainstream groups together, thus providing psychic shelter from the myriad alienating gigantisms of modern life. In this the cores of the MAGA and the anti-binary worlds are the same.
And you thought epistemology was just a big word with no application to anything practical or interesting…. We will return to this theme once we come to The Age of Spectacle material a few months down this substackian road. But first we must apply rubber to that road.
My own meta-epistemic is simple enough: Yes, there is a material world and it matters enormously as facticity and base; without it nothing else happens or can happen. This is why postmodernism is full of poop, let’s call it: It has wildly overcompensated for the enormity of 19th-century positivism, which took an early version of the billiard ball model error and applied it even to the human and cultural sciences. Luckily for you, we’ve not time to rummage through the aftermarket wares of Giambattista Vico, Auguste Comte, and similarly-minded others. We don’t even have space to sideswipe more modern expositors of the positivist delusion—that anything that could not be seen, touched, heard, and thus counted was not real and so not worth bothering about—like Bertrand Russell, who thought that an ideal language, with no metaphors or ambiguities to sully it and mislead us, was both possible and a good thing to construct. Russell’s encyclopedic misunderstanding of language brings to mind Orwell: “Some opinions are so stupid that only an intellectual could hold them.”
Positivism and determinism were and remain close cousins, with an historically deep family tree. In premodern times before Abraham came along only two kinds of narratives existed: the triumphalist narrative and the tragic narrative. The former, nearly ubiquitous in the ancient Near East, held that “we” were superior and destined to become and remain triumphal. Whatever the current state of affairs, this was written in the stars, said the priests and shamans. The latter, epitomized and raised to elaborate finery by the ancient Greeks, held that we as playthings of the gods in the dome of heaven were bound by fate. If our fate was a typically mortal and tragic one, we might strive and struggle against it, but all in vain. The triumphalist and tragic narratives shared a straitjacket form of determinism. All was predestined; human beings could do nothing to change things.
The determinism in these ancient narratives has been often replayed in more modern ones, both putatively religious and supposedly scientific. As to the religious, think Calvinism’s doctrine of double predestination, a throwback to ancient Greek thinking that rejected the forward-looking, Age of Reason-friendly proclivities of the Lutheran/Episcopal version of the Protestant Reformation. Or think the Arabo-Turkish notion of kismet, from the Arabic qisma meaning lot or fate, a slice of pre-Islamic folk belief that wormed its way into Islamic culture and then meandered into English parlance in the early 19th century (after which it became the name of a 1955 movie with music adapted from Alexander Borodin; talk about long, strange trips).
As to the supposedly scientific, examples abound. Marx postulated a positivist gloss on economics such that materialism alone (a person’s place relative to the means of production, or class, to be more specific) defined political reality. Later on, Freud’s key to determinism was the subconscious. We went from “the Devil made me do it” (apologies to Flip Wilson) to “my subconscious id made me do it.” Now some geneticists, and behavioral biologists like Dr. Sapolsky, are doing the same thing, to wit: “my genes made me do it.” Activist sociologists and kindred CRT ideologues love this stuff: There are no criminals, only victims of white male “structural” oppression, so we need no police or jails; what we need is revolution leading directly to underdog utopia—nice work if you can get it maybe, but you can’t get it.
Positivism really took it on the chin during the 20th century, thankfully. But bad ideas are notoriously hard to kill. In the social and cultural sciences, in the West at least, we have in recent centuries moved backwards as well as forwards. When such disciplines as political economy and political philosophy existed, say in David Hume’s day, classical philosophical concerns twinned with the developing disciplines of anthropology, sociology, and psychology fostered synoptic reflection on what people really cared about, and provided them with an integrative way to study it. But the destruction of political economy and its replacement by the would-be “positive” disciplines of economics and political science put paid to that. The sundering of political philosophy exported the political part to political science departments as a poor third cousin as those departments rushed toward the quantitative…because they could; it left philosophy proper a rump bereft of a subject matter that had been among its greatest inspirations since Plato. Together these two examples of reductionist academic engineering disorganized our stock of knowledge about the things that still matter most to us, or ought to.
It left us, for example, with an impoverished meta-epistemology for the social sciences premised on three materialist, reductionist assumptions. First, the proper focus of social science is the individual person, since abstractions like society and community are just airy metaphors and don’t actually exist (echo Margaret Thatcher) and so cannot be quantified. Second, all persons are value maximizers. Third, all actors are interchangeable since neither idiosyncrasy nor culture is strong enough to inflect the behavior of value maximizers. From this triad of assumptions we got the Skinnerian model of human behavior: rats in a maze, subject to operant conditioning. That is very close to the basic model the Johnson Administration used to erect the Great Society’s War on Poverty. Create incentive structures using these three assumptions and quantitative measures of sufficiency and people will behave the way you want them to, whether they understand why or not. Except that with a few benign, isolated exceptions, they didn’t.
Just one man was responsible for exploding much of this rubbish and warning of one of its most likely pernicious consequences. Herbert A. Simon’s notion of satisficing utterly destroyed assumption number two even as he warned of the coming “attention economy.” Meanwhile cultural anthropologists, primatologists, and forensic archeologists were establishing beyond doubt that humans are social animals, not a horde of Protean individuals. In reaction to the grey-suited automatons of the War on Poverty came the original neoconservatives—Irving Kristol, Nathan Glazer, Daniel Bell, Irving Howe, Seymour Martin Lipset, Daniel Patrick Moynihan, James Q. Wilson, Peter Drucker, and others—who insisted that culture did too matter, since humans were indeed social animals, and diversely autogenic ones at that. They warned that any ambitious public policy which ignored culture and subculture was bound not only to fail grandly but also to produce counterproductive outcomes that by its own lights it would be powerless to understand.
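For readers who like their concepts runnable, here is a minimal sketch of what Simon was pointing at, with hypothetical utility numbers: the maximizer inspects every option before choosing; the satisficer takes the first option that clears an aspiration threshold and stops looking.

```python
# A maximizer inspects every option and takes the best; Simon's
# satisficer takes the first option that clears an aspiration
# threshold and stops searching. Utility values are hypothetical.
options = [3, 7, 5, 9, 4, 8]       # utilities of options, encountered in this order

def maximize(opts):
    return max(opts)               # exhaustive search: always finds the 9

def satisfice(opts, aspiration):
    for utility in opts:
        if utility >= aspiration:
            return utility         # "good enough" found: stop searching
    return max(opts)               # nothing clears the bar: take the best seen

print(maximize(options))           # 9
print(satisfice(options, 6))       # 7: search costs saved, the optimum forgone
```

The satisficer forgoes the optimum but economizes on scarce attention, a trade-off very much in the spirit of Simon’s warning.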
All that turned out to be true, but positivism nevertheless persisted, and persists still. It persists in folk-like ways, as with the still-widespread belief in the predictive powers of astrology and in now-classic popular songs like “Que Sera, Sera” from 1955. Less folksily, it can be seen in the rational-choice model still popular within contemporary political science. The human rats-in-a-maze metaphor lives on, too, as the dreadful Scott Adams’s “moist robots” locution, moist robots readily manipulated by whorish Ph.D.s in cognitive psychology in thrall to big-data digital technologists in the marketing departments of huge plutocracy-siring corporations. Positivism’s intellectual standing has been razed nearly to the ground, but large concentrations of private power have not gotten the memo, and seem determined to show that they don’t care about social science epistemology or theory. Imagine that.
Sorrowfully, the dour detour into the positivist swamp was never strictly necessary. Already a long time ago—roughly 4,000 years back in the mists of pre-history—a third kind of grand historical narrative appeared on the scene, albeit one with only minority sway. Its text is the Hebrew Bible and its original master of ceremonies was Abraham. Unlike the triumphalist and the fatalist narratives, the Abrahamic narrative is not deterministic. It is a covenantal narrative that is open-ended toward a contingent, not a necessary, future. In this narrative’s vision humanity becomes a partner with God: If we follow His guidance we will thrive, and if we don’t we will gather a bitter harvest. The choice is ours, but we are instructed directly in Deuteronomy 30:19 to “therefore choose life” because, supposedly, this God loves us.
The covenantal narrative relies on sociological logic, not magic or quotidian miracles. It tells us that we are largely and recursively responsible for our own individual and collective social futures as they move through time. As a consequence, it makes our decisions consequential, and we become morally responsible for their effects. It gives us reason to get out of bed in the morning. Determinism, contrarily, is both monotonous and demobilizing. To me it’s the rough equivalent of having to listen to Christmas muzak all year long.
Not coincidentally, a covenantal narrative projects out a teleology—a purpose to it all, or at the least a line of progressive development.[4] The cosmos does not circle around itself in repeating cycles, mimicking the seasons, as in most ancient conceptions. It rather moves forward as it spirals toward places and times it has never been. It spins out like, well, like a helix—like a sprawling cylinder in a hurry.
Last on this point, the covenantal narrative helps explain the Judaic stipulations against idolatry, against bowing down to idols and imagining that carved hunks of stone or wood have magical powers over human life. To credit such beliefs is to credit the false determinism of either the triumphalist or tragic fatalist narratives, for those narratives depend on static visual images, not on aural ones that by their very nature must unfold forward in time. The commandment is “Hear, O Israel,” not “See, O Israel,” and Psalm 115 is the proof text: “Their idols are of silver and gold, the work of men’s hands. They have mouths but do not speak, they have eyes but do not see, they have ears but do not hear, noses but they do not smell. They have hands but handle not, feet but they do not walk; neither speak they with their throats.” In the Abrahamic dispensation, only an immaterial Creator who speaks but cannot be seen aligns with a human future defined as contingent, not necessary, free and self-completing, not predestined or predetermined.
The covenantal idea of a teleological journey for humanity did not remain within the bounds of religion. Without belaboring the intermediate steps of its emergence into a secular sphere, Georg Wilhelm Friedrich Hegel’s idea of History with a capital H was in the main a secularized (if barely) adaptation of the Hebrew prophets’ concept of a messianic age. One may be less than fond of Carl Schmitt, but his argument in Political Theology (1922), that “[a]ll significant concepts of the modern theory of the state are secularized theological concepts. . . ,” isn’t easy to refute.
Hegel inserted a non-rabbinic accent of determinism into the notion of an historical teleology, and of course Marx a bit later canonized a determinist teleological doxy going forward into the history of Western socialism. Alas, the journey from Isaiah to Marx illustrated an old, rather distressing pattern, perhaps best described by, of all people, Tom Robbins:
The problem starts at the secondary level, not with the originator or developer of the idea but with the people who are attracted to it, who adopt it, who cling to it until their last nail breaks, and who invariably lack the overview, flexibility, imagination, and, most importantly, sense of humor, to maintain it in the spirit in which it was hatched. Ideas are made by masters, dogma by disciples, and the Buddha is always killed on the road.[5]
Not coincidentally too, the covenantal narrative recognized the huge psychological weight of moral responsibility, notably when that responsibility fell short of its mark. It handled the challenge not by offloading it onto a magic-wielding priesthood but by problematizing it. It thus introduced the possibility of true forgiveness, of ourselves and others, as opposed to mere forbearance. In the Abrahamic dispensation, therefore, no deus ex machina need save us—in this case deus is meant quite literally—since we have free will and enough agency to save ourselves via true atonement and the consequent redirection of our lives. So here enters the fourth domain of philosophy: ethics.
Many politicians, and other semi-educated folks, use “ethics and morality” together in a sentence to allude to something they consider good and necessary in human society; but since they don’t actually know what the words mean, they toss both into the rhetorical hopper just to cover their bets. Morality concerns the distinction between good and bad, or right and wrong, behavior; moral reasoning is a form of effort designed to guide ourselves and others toward good, right, or proper behavior. As befits a domain of philosophy, ethics is the study of moral behavior and moral reasoning. The one is the doing, the other makes that doing an object of reflection. The next time you hear someone bloviating about “. . .ethics and morality. . .” you will doubtless crack a smile and quietly enjoy yourself….before frustration sets in.
Cosmology, logic, epistemology, and ethics all defined, and some examples and applications offered. And you're still here. Nice. But that’s not quite the end of the matter. When I noted that positivism and materialism took a well-deserved beating in the 20th century as philosophers, sociologists, and all manner of writers and poets reacted against it, I did not identify the name of positivism’s main opponent. A few precursors I did mention or allude to—Isaiah as well as Kant. But now it’s time to supply its name, recite a brief anthology of its heroes (my models of intellectual probity) and leave it at that.
As many of you have already sussed out, we speak of phenomenology. Phenomenology is a way of looking at the social world that dwells somewhere in the nexus between philosophy, anthropology, and sociology—with lateral applications to politics and diplomacy, as well.
The phenomenological premise is that humans live in an intersubjective world of symbols that in turn generate a wonder of laminated realities that are bound to, but not exhausted by, the material world. To grasp human social reality we need to look not just at the “picture” out there but also at the “camera” in here, so to speak, at how our collective cognitive apparatus produces the cognitive reality in which we truly live.
It’s easy to show the distinction between “picture” and “camera.” Two miniature examples from the phenomenology cupboard will suffice.
First, if you are walking one fine morning in a meadow and think you espy a little bunny in the shadow of a majestic oak, but find when you approach closer that it is only a crumpled up paper bag, the bag doesn’t care that you were mistaken, and had the bag in fact been a bunny, the bunny would not have cared that you saw her accurately. But if you enter a room roaring with boisterous noise being made by a clot of people and smelling a little like Jameson and Guinness and suppose it to be a well-lubricated party, when in fact it is a wake, your framing error could be quite consequential. It could get you tossed out of the room onto your mistaken arse. Takeaway: The definition of a social situation is not the same as the definition of a natural one. More important, causality isn’t the same in a symbolic world as it is in a material one, just as your dreams are rooted perforce in a wide-awake perceptual substructure but are not bound to it.
Second, when we speak we make sounds called phonemes. But when others hear those phonemes they are not interested in the noises but rather in the symbolic meanings that hitch a ride on them. The phonemes produced by a human voice box--a marvelous conjunction of muscles, nerves, teeth, tongue, gums, cartilage, various connective sinews, and jaw and skull bones—are real, physical phenomena that can be measured, recorded, and analyzed. The symbolic meanings attached to them are basically arbitrary, save for some radical onomatopoetical exceptions, different in every language and changeable over time by customary intersubjective agreement. The symbolic meanings carried by the phonemes—not just by their lexical content but also their rhythms and intonations—cannot be recorded, measured, or quantified, but those are the elements of speech we care about. If you need a metaphor here, think about all the horses and riders that populate 19th-century English fiction. If a hero or villain arrives in Thomas Hardy’s Casterbridge on horseback, for the sake of the plotline we care about the prospective deeds of the rider, not the horse.
Phenomenology remains indebted to Kant for its ur-grounding in philosophy, and arguably Kant’s greatest 20th-century philosophical expositor was Ernst Cassirer (1874-1945). Cassirer’s ideas, most easily grasped in summary in his 1944 book An Essay on Man, were further spread by one of his students, Susanne K. Langer. In sociology, William James (1842-1910) in America and then the Austrian-born Alfred Schütz (1899-1959) are key to the passage of phenomenology from philosophy to the social sciences. James famously referred to the multiple worlds in culture in which people live, “each real whilst attended to.” Schütz took Edmund Husserl’s term lebenswelt—life-world translated literally, but meaning “natural attitude”—and applied it to the social sciences. Peter Berger and Thomas Luckmann later wrote a pathbreaking book titled The Social Construction of Reality (1966) that introduced vast numbers of students and social scientists to phenomenology in the face of positivism’s high wall of misunderstanding. The wall cracked. You could almost hear it.
Many others may be mentioned, not to exclude Karl Popper and his famous distinction between clouds and clocks; the subtitle of his 1966 lecture and essay is instructive: “An approach to the problem of rationality and the freedom of man.” The subtitle alludes to a hoary argument that once upon a time was referred to as the opposition of rationalism and empiricism. As with the materialist/positivist/determinist position still today, empiricists used to claim that what was amenable to empirical treatment exhausted the rational. This still goes on, with the insistence, for example, that mind is reducible to brain, and that consciousness is either bound to become explicable in those terms or else is demonstrably some sort of illusion. Don’t bet on it. When empiricists manage to fully account for consciousness, please come and let me know. Until then, if then ever happens, I will not be holding my breath.
Rationalists answered, to simplify only a tad, that yes, all that was amenable to empirical investigation was rational, but not all that was rational was amenable to empirical investigation. Clocks were most likely amenable because they were close-ended, and could be made apodictable—meaning that any sound scientific experiment could be replicated with the same results. But clouds, while subject to rational analysis, were far less amenable because as open-ended systems, captained by autogenic creatures, they changed by dint of the interactive and recursive flow of human perceptions and intentions. No self-respecting oxygen atom would ever decide on a whim to acquire a ninth proton and refuse to nuzzle up to a pair of hydrogen atoms to molecule the night away; but people do that general sort of thing all the time.
Popper remains a beacon, but in my view the work of Erving Goffman stands out as most developed and useful for any would-be social scientist who needs to know what sort of planet he or she is standing on. His 1974 attempt, in Frame Analysis: An Essay on the Organization of Experience, at a master integration of phenomenological principles with sociological method is a book worth studying. Here is the key money quote from it: “Social life takes up and freezes into itself the conceptions we have of it.” That is how, as Cassirer had put it, social worlds are capable of progressive articulation. They are massive and moving intersubjective clouds of symbolic creation, expression, and interpretation. And they are open-ended and free: While anchored to material reality, as they must be, they make their way into the future along no narrowly predetermined path. Man, said Kenneth Burke in Language as Symbolic Action (also 1966!…1966 is starting to look like a concentrated micro-Axial Age), is “the self-completing animal.”
In this way the phenomenological description of the human symbolic means of producing contingency seems to mirror recent developments in molecular and computational biology. Evolution is not as random as Darwin and many others have long supposed—and so one certainly hopes, by the way, that the Sapolskys of the world do not affirm the randomness of natural selection yet still argue on behalf of determinism in everything else. The evolutionary trajectory of a genome is prefigured but not predetermined by its evolutionary history. It generates and stores a repertoire of responses to environmental stimuli based on past jousts with the environment, and responds to new conditions by adapting based on that repertoire.[6] How it will adapt is in principle at least partly predictable if the conditions it is adapting to are properly understood; supercomputers now allow us to actually do some predicting. So it turns out that Jean-Baptiste Lamarck (1744-1829) (and speculatively after him Gregory Bateson, Stephen Jay Gould, and others) was right after all, sort of, just not for the reasons and through the mechanisms he supposed.
The basic point is that the material and ideational worlds are complements, not opposites, and only by keeping both in play can we grasp the true human social condition as it skitters along. Marxism in its original and later forms insists that the material is all there is—all clocks in Popper’s metaphor—and postmodernism in its original and later forms insists that the ideational is all there is—all clouds. They commit mirror-image errors of one another. James Thurber summed up the problem thusly: “You might as well fall flat on your face as lean over too far backwards.”
Now, finally, in Frame Analysis Goffman created a vocabulary with which to lay out his system of phenomenological method in sociology. A short summary cannot do it full justice, but it might provide enough bait to get some of you to read and relish the book. Here goes, then.
We live in a basic reality of primary frameworks, collectively the lebenswelt. We subsist in the natural attitude, but we are capable of transforming any of those primary frameworks into a dizzying array of laminations, or keyings, that produce the worlds “real whilst attended to” that James spoke of, and we are capable of joining in those worlds not only as creators but as co-conspirators in their temporary reality. An obvious example is a stage play, movie, or some other presentation of fictive scriptings. Brackets separate the keyed activity from the lebenswelt, like curtains opening or the room darkening, so that we know we are now about to be engrossed in a world that is not real like the lebenswelt is real, but that is modeled on activity extant in the lebenswelt. Most adults are so effortlessly good at this that we have no trouble following multiple laminations, such as is illustrated by the play within a play in Shakespeare’s A Midsummer Night’s Dream.
A keying of a keying Goffman calls a layering. It is also possible to downkey, or remove a layer. Espionage and counterespionage provide examples of keyings and competitive counterkeyings. When, for example, a false-flag operation is revealed for the willful misdirection it is, we have a downkeying. Downkeyings can collapse a series of laminations suddenly down to the lebenswelt, or they can stop a collapse of laminations part-way. Obviously, such matters have a different impact on the lebenswelt than a production of A Midsummer Night’s Dream.
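If a picture of the mechanics helps, here is my own gloss, not Goffman’s notation: laminations behave like a stack, with a keying as a push and a downkeying as a pop. A minimal sketch in Python:

```python
# A toy model of Goffman's laminations as a stack: the lebenswelt sits at
# the bottom; each keying pushes a layer; each downkeying pops one off.
# The vocabulary is Goffman's; the stack framing is this sketch's gloss.
frames = ["lebenswelt"]            # the primary frameworks, base reality

def key(frame):
    frames.append(frame)           # a keying adds a lamination

def downkey():
    if len(frames) > 1:            # the lebenswelt itself cannot be popped
        return frames.pop()
    return None

key("staging a play")              # actors engrossed in fictive scriptings
key("play within the play")        # as in A Midsummer Night's Dream
print(frames)   # ['lebenswelt', 'staging a play', 'play within the play']
downkey()                          # the inner play ends; one layer removed
print(frames)   # ['lebenswelt', 'staging a play']
```

A downkeying that collapses the whole stack at once is the false-flag operation unmasked; one that stops part-way is the play within the play ending while the outer play goes on.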
Keyings can be limited and discrete or they can spread over entire areas of cultural activity. A faith community’s narrative symbol system can be described as a broad, long-lasting keying. What the National Football League is about, or what any sport is about, can also be described as a massive but intermittent keying that becomes operative when the whistle blows and it’s game on. In the case of football and lacrosse the keying is modeled on war….but war, as an organized form of violence, is itself a keying of raw fighting, which is an undeniable part of the lebenswelt. Again Goffman to summarize the gist: “. . .the real or actually happening seems to be very much a mixed class containing events perceived within a primary framework and also transformed events when they are identified in terms of their transformations.”
So now we come, at long last, to politics. Yes, the politics especially of nations can be construed as a keying, a massive, multi-layered and highly consequential keying of the lebenswelt. Discrete models of behavior in primary frameworks can be lifted into a lamination together and made coherent by certain common symbolic applications. It can only exist by dint of metaphors and other symbolic instruments, and its conventions, rituals, and special forms of language are obvious now that the phenomenon as a whole has been pointed out.
And yes, diplomacy in a world of sovereign states is a keying of a keying--a further lamination of the political frame abstracted another layer up to apply to interstate relations. Obvious now, no? The protocols, decorum, rituals, pretenses of law, and special uses of language in diplomacy: You cannot not see these devices and their brackets now that they have been pointed out to you, right?
To close the curtain let me hand the pen to Michael Walzer who, in a June 1967 Political Science Quarterly essay put it about as well as anyone has:
Politics is an act of unification; from many, it makes one. And symbolic activity is perhaps our most important means of bringing things together, both intellectually and emotionally. . . . Words alone may not do this, but words which become part of the special vocabulary of politics--king, subject, citizen, duty, rights, father of his country, checks and balances, and so on--obviously do. . . . In a sense, the union of men can only be symbolized; it has no palpable shape or substance. The state is invisible; it must be personified before it can be seen, symbolized before it can be loved, imagined before it can be conceived.
Thus my meta-epistemic, the mitt with which I try to snap line drives, fly balls, and pop ups from the bats of symbol sluggers. You can get a mitt like that, too, if you want one.
[1] This section draws from Aviezer Tucker and Adam Garfinkle, “The Etiology of a Fact,” The American Interest, January 25, 2018. Dr. Tucker took the lead in this section of our essay, and again I thank him for so doing.
[2] Romney quoted in Charlie Sykes’s Morning Shots, January 18, 2024.
[3] I would be remiss if I did not also “thank” A.J. Ayer, not even a Frenchman, who more than likely started all this with his “emotivism” argument back in the 1940s.
[4] Another way to express what Hegel was on about is as an end-point understood as a point of maximal development to an ideal state. Thus when Frank Fukuyama published his “The End of History?” essay in the Summer 1989 issue of The National Interest I knew for certain his Hegelian use of the phrase “end of history” would be widely and protractedly misunderstood. Frank meant “end” as in the fullest point of a teleological process, not “end” as in time has run out. So it was, and so it still is.
[5] From Still Life with Woodpecker (1980).
[6] See “Evolution is not as random as previously thought, finds new study,” PhysOrg, January 3, 2024, referencing and discussing a study published in the Proceedings of the National Academy of Sciences, namely: Alan J. Bevan, Maria-Rosa Domingo-Sananes, and James O. McInerney, “Contingency, repeatability, and predictability in the evolution of a prokaryotic pangenome,” PNAS, December 26, 2023.