The Abyss and the Garden: A Tale of Two American Post-Constitutional Futures
Post-January 20 AoS Chronicle, No. 16
We have come here at The Raspberry Patch, finally, to the last part of the extended Questions essay that began on Friday, May 2, continued uninterrupted on May 9 and May 16, then picked up on May 30 after a May 23 foreign policy interlude, and for the past two weeks jagged out to a lateral set of digressions not entirely alien to the Questions theme, but not spot on it either. Why the hesitations and delays? Don’t I know my own mind?
Why the hesitations and delays can be posed as a philosophical question. Now, one might reasonably suppose that it is easier to write about the past, which exists, than about the future, which is yet to exist. There’s just more material. That’s true, but hardly an unalloyed godsend: The more there is to write about, the harder it is to parse, filter, and organize what you wish to say. But, on the other hand, that past is the one and only past there is amid the many possibilities that might have been, while the possibilities for the future remain unparsed by time and so multiple. (Of course I’m assuming a non-quantum universe and a lack of other mystical hanky-panky at play, and I’m ignoring as well the foolishness of the aphorism that “hindsight is 20/20,” seeing as how archival history would be a breeze were that true, and a breeze it isn’t…..which anyone who has ever tried doing it would know from experience.) So one could write endlessly, almost—almost meaning were it not for the problem of mortality management—about what might be.
To boot, in such a tetherless exercise one would never have to worry about the pesky matter of getting anything factually wrong, since the future lays down facts only through a pattern of posthockery the very nature of which we still struggle to comprehend: the strict determinist causality that Einstein avowed?; Planckian quantum randomness?; both, macro-determinism with micro-randomness within, as in a gas cloud?; we’re living in a planetary-scale computer program working to generate the question that yields the answer 42 to life, the universe, and everything, run by mice and dolphins?; we’re caught in an alien projection simulation without an Archimedean third-point perspective enabler?; witchcraft?; metaversal perception dominance? (we’ll come back to that anon); something else?
Don’t I know my own mind? Well, have you ever tried to hit a squirrel making a mess of your lovely developing Santa Rosa plums, hopping from branch to branch taking one bite out of each fruit just to be rodentially malicious, with an air rifle pellet, an arrow, or a sling-shot stone? Or tried to draw a true bead on a non-Beatrix Potter garden rabbit—in other words, reasonably cute but definitely not clothed in a little blue jacket—ravaging your tiny bright green endive seedlings? I have, and it’s damned difficult because the little miscreants just won’t stay still long enough for you to nail them with whatever homespun ordnance you happen to be using. It’s the same for any moving target, and the berserkly developing Trump 2.0 Administration is very much a moving target, lately racing madly toward constitutional hellfire. No one knows what will happen next, least of all the Berserker-in-Chief. If you shoot at a squirrel or a rabbit and miss you at least scare the varmint away for a while. That’s a consolation of sorts, and perhaps all that can be achieved in any given moment. If I could scare Donald Trump out of the city I was born in with an off-target air rifle shot, I probably would. Obviously, big “if.”
But he and that are not the targets of this essay. I am not aiming here at anyone, especially one who is at least as much a symptom of a deep cultural miasma as he is a subsequent cause and accelerant of it. I am instead aiming at the final question of the Questions series, framed by Benjamin Franklin’s legendary reply to Elizabeth Willing Powel in Philadelphia outside Independence Hall on September 17, 1787: “a republic, if you can keep it,” to wit: If the Trump 2.0 Administration represents an extinction event for liberal democracy and constitutional order as Americans have known them for about 250 years, what do the entangled futures of country, nation, and state look like a few years out, ten years out, fifty years out? And why are so few people, as far as I can tell, asking and trying to answer these kinds of questions?
Post-Extinction Event Future Number One: The Abyss
Two books published in February accidentally but ineluctably dovetail with one another to put the matter in high relief: Bruno Maçães, World Builders: Technology and the New Geopolitics (Cambridge University Press, 2025); and Alexander C. Karp and Nicholas W. Zamiska, The Technological Republic: Hard Power, Soft Belief, and the Future of the West (Crown Currency, 2025).
Maçães argues that the world is, for all practical purposes, just a perception, so that material reality—including reality’s manifestation as geopolitics—is being replaced by metaversal and artificially intelligent framings of material reality. The world that matters will be built, not merely inhabited. The Karp-Zamiska book accepts this basic epistemology and argues that great power competition in the future will exist beyond conventional material reality, and that U.S. AI and related engineers must therefore be more aggressive and engage more intimately with government on behalf of U.S. national interests and values. Coming from the CEO of Palantir and his deputy, this analysis may seem a bit self-serving, because it is; but that does not necessarily make it wrong.
Both books, especially the second, are alleged by the publishers to be taking the intellectual and policy-science world by storm. Here, unabridged, is the marketing slug designed to sell the Maçães book, which sees
geopolitics as a battle of visions whose outcome will be determined by technological dominance rather than physical territorial control. . . .
World politics has changed, claims Bruno Maçães. Geopolitics is no longer simply a contest to control territory: in this age of advanced technology, it has become a contest to create the territory. Great powers seek to build a world for other states to inhabit, while keeping the ability to change the rules or the state of the world when necessary. At a moment when the old concepts no longer work, this book aims to introduce a radically new theory of world politics and technology. Understood as “world building”, the most important events of our troubled times suddenly appear connected and their inner logic is revealed: technology wars between China and the United States, the pandemic, the wars in Ukraine and the Middle East, and the energy transition. To conclude, Maçães considers the more distant future, when the metaverse and artificial intelligence become the world, a world the great powers must struggle to build and control.
Dust jacket praise for World Builders is fulsome and diverse; here is some of it:
“. . . a truly original thinker. He pulls the world before your eyes before reassembling it for you later. World Builders is not simply an important book, it is a great book.” Ivan Krastev
“. . . there is a great need for philosophical thinkers like Maçães who are able to throw light into the growing darkness surrounding us.” George Yeo, former Foreign Minister of Singapore
“Bold. Authentic. Sharp.” Marietje Schaake, former Member of the European Parliament, author of The Tech Coup: How to Save Democracy from Silicon Valley. Please remember the intonation of Schaake’s subtitle, which we will come back to below.
And another—and here please remember the tone of his judgment: “An altogether fascinating and deeply troubling book.” William Dalrymple, author of The Anarchy: The Relentless Rise of the East India Company.
And here is a specially relevant one:
“Maçães knows the Internet can deliver us from the tyranny of place—but Eurasia, East Asia, and America won’t go down without a fight.” Peter Thiel, co-founder of PayPal and author of Zero to One: Notes on Startups, or How to Build the Future.
As to The Technological Republic, here’s the publisher’s marketing blast:
Silicon Valley has lost its way.
Our most brilliant engineering minds once collaborated with government to advance world-changing technologies. Their efforts secured the West’s dominant place in the geopolitical order. But that relationship has now eroded, with perilous repercussions.
Today, the market rewards shallow engagement with the potential of technology. Engineers and founders build photo-sharing apps and marketing algorithms, unwittingly becoming vessels for the ambitions of others. This complacency has spread into academia, politics, and the boardroom. The result? An entire generation for whom the narrow-minded pursuit of the demands of a late capitalist economy has become their calling.
In this groundbreaking treatise, Palantir co-founder and CEO Alexander C. Karp and Nicholas W. Zamiska offer a searing critique of our collective abandonment of ambition, arguing that in order for the U.S. and its allies to retain their global edge—and preserve the freedoms we take for granted—the software industry must renew its commitment to addressing our most urgent challenges, including the new arms race of artificial intelligence. The government, in turn, must embrace the most effective features of the engineering mindset that has propelled Silicon Valley’s success.
Above all, our leaders must reject intellectual fragility and preserve space for ideological confrontation. A willingness to risk the disapproval of the crowd, Karp and Zamiska contend, has everything to do with technological and economic outperformance.
At once iconoclastic and rigorous, this book will also lift the veil on Palantir and its broader political project from the inside, offering a passionate call for the West to wake up to our new reality.
And here is selected dust-jacket praise from some of the usual sources:
“A cri de coeur that takes aim at the tech industry for abandoning its history of helping America and its allies.” —The Wall Street Journal.
“Not since Allan Bloom’s astonishingly successful 1987 book The Closing of the American Mind—more than one million copies sold—has there been a cultural critique as sweeping as Karp’s.”—George F. Will in The Washington Post.
“The Technological Republic provides a fascinating, if at times disturbing, insight into the reassertion of US hard power.”—Financial Times, “Best Books of the Week”
“Equal parts company lore, jeremiad, and homily . . . The primary target of The Technological Republic is not a nation that has failed Silicon Valley. It is more cogent and original as a story about how Silicon Valley has failed the nation.”—The New Yorker
“Fascinating and important. This book is a rallying cry, as we enter the age of artificial intelligence, for a return to the World War II era of cooperation between the technology industry and government in order to pursue innovation that will advance our national welfare and democratic goals.”—Walter Isaacson
“A bold and ambitious work, The Technological Republic reminds us of a time when technological progress answered a national calling. It is essential reading in the age of AI, as the direction of Silicon Valley will help define the future of American leadership in the world.”—Eric Schmidt, former CEO of Google and chair of the Special Competitive Studies Project
“[The Technological Republic] is by turns provocative and insightful, and Alex Karp’s resilience, patriotism, and depth of experience in our rapidly changing world provide instructive lessons and intellectual arguments for all of us to consider.”—Jamie Dimon, chairman and CEO of JPMorgan Chase
“The Technological Republic should be read by everyone who cares about how technology should contribute to the protection of American values and our security. Karp is a true patriot—a loving critic of his industry and his country who wants them both to be better.”—General James N. Mattis (USMC Retired)[1]
OK, you can breathe now. Must reads, both books, right?
Perhaps; I’ll read Maçães rather than take Cambridge University Press’s marketeers at face value. I don’t expect surprises, for World Builders has a backstory with which I am already familiar (but if I am mistaken I will owe and deliver my regrets).
Maçães argued, a bit too expansively it seemed to me at the time, in a September 9, 2020 Intelligencer essay entitled “How Fantasy Triumphed Over Reality in American Politics” that:
American life continuously emphasizes its own artificiality in a way that leads participants to believe that they are living a fantasy. Americans are learning to live in a realm of hyper freedom, possessing the power to create imaginary worlds and the freedom to unleash a kind of selfish and extravagant fantasy life. Americans no longer ask whether a book or television series would work in real life, they ask whether real life would work in a movie or television series.
Expansive or not, I cooed when I read this, for it harmonized perfectly with my developing Age of Spectacle thesis. “He gets it,” I said to myself.
Maçães’s essay was extracted from a book then about to be loosed into the wild: History Has Begun: The Birth of a New America (Oxford University Press, 2020).[2] History Has Begun did put fantasy—not quite the same as spectacle—at the heart of its analysis, but it offered no robust theory as to how fantasy arose to such prominence, did not delve into much history, and asserted what struck me at the time as a peculiarly optimistic interpretation of a fantasy/spectacle mentality’s implications. When I say peculiarly optimistic I mean that what seemed to me a glib, downside-unexamined approach to the subject made me nervous, as if Maçães were having too much fun at the expense of his own credulity. His newest book, I suspect, will make me more than nervous, for he may well “get” too much of it; as James Thurber once wrote, “You might as well fall flat on your face as lean over too far backwards.”
World Builders will likely alarm me in part because of what others, not least Karp and Zamiska, are doing with the same basic analysis. The Technological Republic argues, as already noted, that Silicon Valley has lost its way, and the adjuration that it find its way exudes a hawkish thymotic patriotism. Karp and Zamiska claim to care deeply about democracy and American political values, and evidently some take them at their word.
What is one to say about these two books, assuming that between backstories and advertising copy I’ve got the gist right? A first observation is that, as far as I am aware, the authors of these two books don’t know each other and have not commented on each other’s work, even though the former works as an epistemological backdrop—a postmodernist, radically constructivist view of political behavior—and the latter scopes a politico-military application of that epistemology. If the connection between the two books were not already obvious, note that Peter Thiel likes the Maçães book and is obviously associated with the Karp-Zamiska book, since he is on the board of Palantir, which he pretty much founded.
A second observation is that some of the jacket-praise comments conflict. Schaake’s The Tech Coup, subtitled How to Save Democracy from Silicon Valley, sees Silicon Valley as a threat to democracy, but Karp and Zamiska claim to see it as a great potential protection for it. Are General Mattis and Walter Isaacson right to call Karp patriotic if Schaake’s view is correct?
Karp is, after all, very enthusiastic about Palantir’s Panopticon project for DOD, whose implications for government surveillance of U.S. citizens (as well as of immigrants without legal residency rights) in the hands of a no-longer-far-fetched authoritarian state are anything but trivial. Karp’s enthusiasm for Big Data AI applications to civil society domains will certainly be freer to express and develop itself in a post-Constitutional United States, whatever he mutters about American values. And note in this regard that Thiel’s old PayPal partner Elon Musk and his DOGE raiders have already stolen much of the data from the Treasury Department and elsewhere needed to feed the Panopticon program and others like it. The wildly overblown nonsense about NSA spouted by ignorant journalists in the wake of the Edward Snowden affair back in 2013—“NSA is reading your email!”—looks like Silly Putty compared to the now burgeoning real thing.
Perhaps, then, we should assign more weight to the Financial Times’ use of the phrase “at times disturbing” to gauge the Karp/Zamiska book and to William Dalrymple’s use of the phrase “deeply troubling book” to size up Maçães? We should indeed, and I am about to tell you why.
Let me divulge the punch line at the get-go by way of, yes, a question: In their forward galloping multi-refracted world of perception-dominance, reality-construction, and dematerialized ethereal geopolitical competition, what will Messrs. Maçães, Karp, and Zamiska be having for lunch?
Huh? No, really, I’m serious; bide with me a moment and see what I mean. Listen to this:
S: Come then, let us invent a state, but its maker in fact will be our needs.
A: Clearly.
S: The first of our needs is food, the thing most necessary for life.
A: Yes.
S: And the second need is housing and the third clothing and that sort of thing?
A: That is so.
S: How large will our society have to be to give us all these things? Won’t we have to have a farmer and a builder and a maker of clothes? And how about a shoemaker, and some other workers to take care of the needs of the body?
A: Right.
S: But it is almost impossible to put our society in a place where it wouldn’t need things that have to come from other countries.
A: It is.
S: So we will have to have traders. . . .and if the trade goes overseas we will have to have sailors. . . . But how about the division of all these common goods which was the purpose of the society from the start?
A: We’ll have to have a market.
S: Yes, and money as a sign for the purpose of exchange. . . .
And so the invention of a society goes on as Socrates and Adeimantus do their classical thing in Book II of The Republic—you knew that, right?—commenting widely and wisely about autarky (and tariffs), and about what wonders of prosperity trade, markets, and legal tender can produce:
Socrates: . . . .and let us see what sort of existence the men and women in this society will have. They will make bread and wine and clothes and shoes. They will be builders of houses. . . .stretching themselves out on simple beds covered with flowers. They will be happy with their young sons and daughters, drinking their wine with more flowers in their hair and making up songs to the gods. A happy company, not producing more than enough offspring for fear of getting into need or war.
Enter Glaucon, Plato’s brother, who is a troublemaker…..
Glaucon: But with nothing to give a taste to their food!
Socrates: True, I was overlooking that. Well, let them have salt, olives, and cheese, and onions and greens. . . long may they go on living in peace.
Glaucon: Yes, Socrates, and if you were forming a society for pigs, what other food would you supply?
And so we come upon the famous City of Pigs exchange that has been willfully pushed, pulled, and also innocently misunderstood for centuries:
Socrates: . . . The true society is the one we have been watching, the healthy society as it were. But if it is your pleasure that we take a look at the fevered society? . . . For there are some for whom this sort of existence with this sort of food is not enough. They have to have all the apparatus of the present day, players and dancers and song girls, . . . if we go beyond the necessary things I was talking about . . . then a bit of some other group’s land will be needed. . . Then we will go to war. . . . Here we see the starting point of war. . . . So we’ll have to take great care to get men with the right natural qualities for this, the most important sort of work: guarding the state. . . . strong fighters, free from fear. They must be spirited. . . . But then how will we keep them from being violent with one another and the rest of society?. . .
OK, enough: Some of you at least can see where this 5th/4th-century BCE conversation is going. It starts by establishing that socio-political orders are built from the basic organic material needs of the people, but then launches toward Nietzsche’s “last man” and his “will to power” intoxicants, and from there to nations indulging the thymotic urge in a collective mystical tense that encompasses historically ample cases of impassioned Protean irresponsibility leading to war, with glory and riches for a few, ignominy and shame for a few others, heartbreak and mourning for everyone else.
The point is that notably increased happiness and contentment, even when accomplished with a reasonable hope of stability for both, are clearly not enough for some people; and it only takes a relative few such fevered folk among the rich and powerful to screw things up for everyone….especially if they are not constrained by some formulation of vox populi. It was not just Plato, writing in Greek, who understood what avarice, arrogance, and a taste for heroic adventure in the face of the shuddering fear of consumerist boredom could do to a healthy, happy society. Here is how Ecclesiastes, by legend authored by King Solomon, expressed a similar essence in Hebrew:
Go thy way, eat thy bread with joy, and drink thy wine with a merry heart; for God hath already accepted thy works. Let thy garments be white always, and let thy head lack no oil. Enjoy life with the wife whom thou lovest . . . The words of the wise are heard in quiet more than the shouting of a ruler among fools; wisdom is better than the implements of war, but one sinner will destroy much good. (Ecclesiastes IX:7-9, 17-18)
What does this have to do with our two books? What Maçães and Karp/Zamiska are about in the present context seems both more and less than the doings of garden-variety fevered folk. It is less, perhaps, in that only a high-flying symbol manipulator like Maçães could apparently manage to persuade himself that geography, demography, physical resources, and literal mortality can be marginalized away in global politics by video-game-like postmodernist reveries about metaversal AI-infused perception dominance.
Sure, as Peter Berger and Thomas Luckmann showed us years ago, cultural reality, its political aspects with it, is socially constructed, and no card-carrying phenomenologist (like me) denies it. But that’s not the same as conflating, as Maçães seems to do, Popper’s close-ended clock-worlds with his open-ended, autogenic cloud-worlds, ratifying the latter and dismissing all the former as quaintly obsolescent. If social constructions were wholly free of material substrata constraints, if they floated about at the whim of human imaginations, if they were fictive in Harold’s purple-crayon style, then pigs could whistle and I could be Mary Queen of Scots. “Some opinions are so stupid,” wrote Orwell, “that only an intellectual could hold them.” Yes, and as Philip K. Dick famously said, “Reality is that which, when you stop believing in it, doesn’t go away.” He might have added, “and it eventually comes ‘round to bite you in the arse if you disbelieve it or marginalize it too much for too long.”
That’s what I mean by lunch. One could make the same point by reference to horseback riding. If you want to go horseback riding you’ll need a saddle, reins, and some specialized skills—all human doings. But you’ll also need a horse!
Others make the same point by reference to a different reminder of the material basis of all surrealist fantasies: nukes.
Nuclear weapons exist as a factor in geopolitical life; if they didn’t Israel and Iran would not be presently engaged as they are. If we ponder the shape of future politico-military showdowns among major powers—and it is really only in crisis scenarios that all the wild animals come out to the waterhole (recall Barth’s remark in The Sot-Weed Factor quoted in a recent post)—we know that just one relatively small nuclear weapon set to air-burst, an explosion that might not even kill anyone directly, would create an EMP (electromagnetic pulse, for those requiring translation assistance) sufficient to toast any and every electricity grid for many, many miles around. And no electricity, no AI-infused systems for military intelligence or weapons platform management and control (unless someone remembered well before to build and operate huge and shock-hardened generators for such contingencies).
Now, thinking about the races going on around the world today to infuse AI into military competitions is a capacious and fraught subject. Let us be content with just a few basic observations.
First, there is a difference between using AI for purposes of gathering and analyzing intelligence and using it to automate major weapons firing protocols. As frail as AI might be in the former arena, it could still help U.S. intelligence capabilities if only because we are so bad at some aspects of the way things are done now. In the latter case, however, it can be downright dangerous. Fear that other powers might act and react faster than we can pushes us to seek decision speeds made possible by machines that altogether obviate human judgment “in the moment.” But no pre-programmed protocols can possibly anticipate every nuance of battle-space dynamics, so if several powers locked in a crisis defer to their machines, we could well end up with an offense-tilted, highly crisis-unstable condition. It might be a little like the multiple-mousetrap onset of World War I, only much, much faster and much, much more devastating. Am I saying, then, that the Abyss Extinction Event future scenario could entail a nuclear holocaust, a prelude to straggling survivors landing on Cormac McCarthy’s “road”? I am, yes.
It doesn’t have to be this way, so smile: Some very intelligent people working on this matter are fairly certain that AI can also be used to shore up deterrence and stability, in short to make things less rather than more crisis-unstable, so that no EMP ever happens, and so that one could be withstood even if it did. The U.S. military must experiment with AI also for defensive purposes: We cannot know what adversaries might wish and be able to do to us unless we get there first. AI can even be used, some aver, to devise non-kinetic offensive vectors that can essentially disarm adversaries by infusing uncertainty and malfunctions into their own AI-infused communications and control systems. What if, for example, the U.S. military could sterilize a crisis between AI-infused nuclear power G and AI-infused nuclear power H—oh, say India and Pakistan if you insist on a concrete example—so that neither could launch their weapons of mass destruction? Would that be such a bad thing?
Alas, those who understand the potential benefits of selective AI-infusing in the U.S. military also know that the way the contracting process currently engages U.S. RDT&E (research, development, test, and evaluation) functions enables the corporations to push applications of AI that do not bode well for stability and benign application. That’s what they know how to do best so far, so that’s what they do, and that mainly bodes well for corporations like Palantir, RTX Corp, and others making lots of money. Despite former SecDef Ashton Carter’s best efforts to narrow the gap, few DOD types really understand what AI coder pioneers do, and few AI coder pioneers really understand what the military arts and sciences are about—and most of the latter couldn’t care less. For them it’s a great and fun job, better than a supercool video game, and anyway if things should happen to go “wrong”—wrong……moral reasoning, what’s that?—IBG-YBG (I’ll be gone, you’ll be gone). So what has the Trump 2.0 Administration done? It has made four employees from Palantir, Meta, and OpenAI, people with no military background or experience, lieutenant colonels. Yikes!
So in moving from Maçães to Karp/Zamiska we go from theory to application, from teeny money to big money, from one person to many people, and overall in practice from less to more fevered. From Socrates’ time to our own, the lure of the newest thing, the next frontier, the lust for entertainment, status, the best cuts of meat, gobs of money, and sheer power, among other enticements, often drive and shape human energies—sometimes for the better but sometimes not, when new forces jump the track and tumble out of control with the help of new technologies whose uses at scale are yet to be understood. The global competitions of 19th-century European kings and emperors, with their legions of soldiers and statesmen in service, which crashed into oblivion in August 1914, provide a good and still fairly recent example. That is what some fevered men (and women, probably) are doing again now, this time with military and other applications of AI instead of then-new marvels like airplanes, tanks, and weaponized poison gas. We have once again become addicted to our own curiosity, so that what can be imagined must be invented.
That is what AI is really about as humanity’s latest in a long list of sirens, but what does the true nature of language say about the fidelity of AI to material reality? All large language programs so far use written language, not sound bites of orality, and phonetic-alphabet written language is composed of secondary symbols (oral phonemes are the primary symbols) that build concepts of reality. They are not neutral tools of passive reception of inherently accurate reality-mapping sensory data. Reality is Kantian noumenon and always will be; language symbols are Kantian phenomenon and always will be. So AI is no gnostic probe enabling us to see deeper into reality; we are instead using it to fulminate statistical hallucinations, but we suppose otherwise. Could this perhaps be dangerous in an age of empowered ignorance and ideological anti-science fever….in some capitals? Pete Hegseth abolished DOD’s Office of Net Assessment; is hubris perhaps his middle name? But he is hardly unique; Goethe’s warning has applied to many, and may still very much do so: “Die ich rief, die Geister, Werd ich nun nicht los.”[3]
Two basic problems define an AI-inflected Abyss scenario vision of a post-Constitutional American future. The lesser problem concerns our corporate, political, and military elites’ utter failure, so far, to understand the nature of the man-machine interface at a workaday sociological level. Ponder this:
The problem starts at the secondary level, not with the originator or developer of the idea but with the people who are attracted to it, who adopt it, who cling to it until their last nail breaks, and who invariably lack the overview, flexibility, imagination, and, most importantly, sense of humor, to maintain it in the spirit in which it was hatched. Ideas are made by masters, dogma by disciples, and the Buddha is always killed on the road.
That’s Tom Robbins from his 1980 novel Still Life with Woodpecker. Nothing is wrong with engaging with the fictive imagination for serious purposes when it is properly identified, for writers of fiction can teach us about human nature in ways that others cannot. That is the case here: The most brilliant, creative, and compassionate people do not always, ahem, rise to the top of huge bureaucracies, whether public or private. No one familiar with how the Cold War-era Department of Defense operated can miss Robbins’s point, and things have not changed as much as they might have since early 1992.
The second, more daunting concern revolves around the tendency of complex advanced technology, once placed in operation, to master the human beings associated with it rather than the other way around; for the Maçães and Karp/Zamiska future is a post-deep-literate future. It is, at least in its inner sancta, a future of machines and data and statistical experimentation, but of little deep reading and so of broad conceptual poverty. It is thus a future rich with how questions but poor in why ones, in which empowered humans can navigate fixed environments with executive-function competence but lack much ability to think strategically, let alone philosophically. So another question: Can humans be expected to design, as recognized and needed, new guardian AI protocols to control, modify, and manage AI dynamos already set into motion? Maybe, but I’d sooner bet on bumblebees learning to play billiards.
A generative technology as powerful and capacious as AI is also a likely epigenetic tipping point. Even as we approach a no-returns policy curve in the road—the New York Times reported in March that Google DeepMind CEO Demis Hassabis foresees AGI (artificial general intelligence) emerging within five years—again no one, almost, is asking the key question: Will AI change us in ways we cannot now anticipate and would not choose if we understood them? What is meant by “change us”? Please read the following remark carefully, slowly, and mindfully to understand the answer:
We have modified our environment so radically that we must now modify ourselves in order to exist in this new environment. We can no longer live in the old one. [Technological] progress imposes not only new possibilities for the future but new restrictions.
That is Norbert Wiener writing in his 1948 book Cybernetics, the book that birthed the digital age. He understood in 1948 that tools are not just extensions of man, as Marshall McLuhan later told us, but also modifiers of man. Our corporate and military elites today seem to have no clue of this; the latter see only professional advantage to be reaped while the former see only dollar signs followed by integers followed in turn by many zeros with no interrupting decimal points. Macroeconomists see AI as a productivity index booster, with no hint of awareness of the human capital and social trust externalities at issue despite already manifest evidence of the downsides of the internet/social media age.
Put slightly differently, those who have studied the reading brain, pioneer scientists like Maryanne Wolf, Sherry Turkle, Naomi Baron, and many others, worry that digital-world innovation is outpacing our understanding of what digi-technological innovations set loose at social scale are doing—literally, not metaphorically, doing—to our brains. Our plastic, epigenetically earned frontal cortex circuitry is changing as we expose ourselves to more and faster digital and less and slower print-page sensory input. They see that what may be gained is not yet balanced by an understanding of what may be lost, and what may be lost could be—I would say already is—ramifying widely in the culture, and from there into our politics, in ways we have no real grasp of yet. Dr. Baron summed up the dilemma: “The real question is whether the affordances of reading on screen lead us to a new normal . . . in which length and conceptual complexity, . . . memory, concentration, immersion, and reflection are potentially lost.”[4] And what about not reading on screen but directly imbibing images, some still but most of them moving, on screens? What then may be lost in a post-literate hyper-digital era, even assuming we don’t turn much of the planet into a radioactive ruin?
Now consider a more generally familiar matter: educating kids. Here is Brink Lindsey from his superb June 9 The Permanent Problem post, “America’s Internal Brain Drain” (which should be read in its entirety):
. . . .the advent of AI . . . in its present incarnations at least, looks poised to accelerate our national brain drain by helping to ensure that young people’s cognitive potential never gets realized. Students in high schools and colleges across the country are now relying more and more on ChatGPT and other large language models to “automate” the learning process (i.e., read books and write papers about them), but the joke’s on them. Learning cannot be automated; it comes only to those who are willing to do the necessary work. And consequently, we risk raising even our best and brightest to be post-literate.
Don’t get me wrong—I’m about as far from a Luddite as you can get. . . . I continue to hold out hope that the best solution to the bad effects of AI will be better AI. There is immense potential . . . in AI tutors that offer individualized, always available, and infinitely patient instruction. Such tutors would be designed to guide young people—and adult learners as well—through the effortful process of gaining knowledge and developing skills. We know that one-on-one tutoring is dramatically more effective than classroom instruction—the problem has always been that it’s impossible to scale. With AI tutors, that constraint disappears. That’s not the AI we have now. But if we are going to arrest and reverse our [culture’s] cognitive decline rather than speed it up, that’s the AI we need.
Well happy days, then, for surely if we must, we can, right? (That’s a paraphrase of Laura Testvalley from Edith Wharton’s The Buccaneers, for those keeping literary score…..). What we need is surely what we’ll get. Right?
I used to think that the culture of an open, diverse, tolerant, and humble liberal democracy was better suited to manage very difficult socio-technological challenges like this, but I’m no longer confident about that as a techno-surrealist, post-literate culture degrades American civic life year by year by year. But if a democratic polis already struggles with such challenges, can an elitist/oligarchical one do any better as the challenges wax ever more formidable? Gosh: Yet another question….few seem to be asking.
Post-Extinction Event Future Number Two: The Garden
Oh phooey, not again!!! We’re out of room for a single post on the Substack template. No matter: We’ll pick up our parable next Friday…..
[1] This sort of praise raised a question: Does Gen. Mattis serve on Palantir’s board of directors, in which case the statement would pose a sunken conflict-of-interest problem? He does not, but when he served as Secretary of Defense in Trump’s first term three close aides—Anthony DeMartino, Sally Donnelly, and Justin Mikolay—were Palantir alumni and seem to have influenced Mattis in Palantir’s favor. Corporate “inside” lobbying of this sort is increasingly common in the AI age—of which more below. See Jacqueline Klimas and Bryan Bender, “Palantir goes from Pentagon outsider to Mattis’ inner circle,” Politico, June 11, 2017.
[2] When I discovered Megan Garber’s “We’ve Lost the Plot,” The Atlantic, January 30, 2023, the cooing returned: She got it, too, much of it.
[3] “The spirit I have summoned up I can no longer rid myself of” is a good translation of this classic line from Johann Wolfgang von Goethe’s 1797 poem Der Zauberlehrling (“The Sorcerer’s Apprentice”).
[4] Baron, quoted by Maryanne Wolf, Planet Word presentation, September 18, 2024. See here also the foundational empirical study, which sought to quantify aspects of the cognitive effects of reading on conceptual capacity in children, by Anne E. Cunningham & Keith E. Stanovich, “What Reading Does for the Mind,” American Educator (AFT), Spring/Summer 1998.