
Speculative (but in some respects confident) Thoughts about the Epistemological Crisis and AI


Manic Mental Meandering on a Saturday Morning before working in the yard

More than 10 years ago I started talking about the “epistemological crisis” of our era in response to the sudden proliferation of conspiracy theories and the cults that grew up around them. It just seemed to me … it still seems to me … that there's too much bullshit superstitious claptrap to take in, and I found myself disoriented by whole populations of secular theory-fanatics latching on to really absurd visions of reality. I simply couldn't, and still can't, grasp WHY people decide to accept such bizarre fictions about how the world works.
 
If you're not familiar with that word “epistemology,” it's the philosophical consideration of “how we know.” Months after I started talking and writing about it, I started seeing other people write about it, using those words, so it affirmed my observation.
 
Now, at least 10 years on, the epistemological crisis continues, and conspiracy theories and wacko pseudo-science rule the info-sphere for broad swaths of the general population … and, unfortunately, also frightening percentages of the electorates in what used to be relatively sensible countries. There's discussion and argument among philosophical types about whether the crisis signals the end of the Enlightenment era, as 'objective fact' has seemed to lose ground as the basis for knowing stuff. And of course liberalism itself is the oldest child of the Enlightenment … and liberalism has been proclaimed dead by articulate and smart conservative writers of the day, particularly Patrick Deneen and Rod Dreher and their cohort of retrograde neo-orthodox authoritarian-minded Christians.
 
I don't think it's too tangential to jump from that thought to my quite recent mental wanderings about AI. After years of somewhat self-conscious avoidance, I've finally gotten around to reading more about AI, as it has become so super-relevant that to keep avoiding it would be sheer obstinacy, and intellectually irresponsible.
 
I hadn't decided until very recently whether to accept that AI was going to be as big a deal as folks say. But largely due to an interview of Pete Buttigieg by NPR's Steve Inskeep a few days ago, it's very clear to me that the right answer is that it's going to be a bigger deal than we can imagine. It will change everything. And very, very soon. Buttigieg said within 3 or 4 years. And certainly within the lifetime I like to assume I have left, so I can no longer say "well, that's for future generations."
 
I've always known that AI will supplant many, many people in many occupations … many or most of them not at the grunt-work level, but rather at the middle and even upper echelons of human services and management. Because the simple fact is that AI will be able to do all, or almost all, jobs better than humans can do them. And that will dramatically alter, in very short time, social as well as economic stratification.
 
One thing I'm absolutely convinced of is that AI will largely serve the very rich, and there's no doubt in my mind that it will further polarize the economic classes, as the very rich become the entrenched lords of the world-manor, and the rest of us become either serfs or disorganized populations of desperate scroungers … no doubt with a few Simon Legrees to force some essential minimum number of people to actually work. What's not clear to me is how the rich will derive further wealth without lower social classes to support economies by spending their nominal incomes. But perhaps it's true that the rich will no longer need lower classes, and humanity will simply fade away from lack of care and failure to maintain viable reproduction levels. Or perhaps there will be a great die-off, caused by environmental degradation, that only the rich have the resources to survive?
 
I have not read William Gibson's "Sprawl" (or 'Neuromancer') trilogy, but suddenly it's high on my reading list, since I discovered its existence a few days ago from a Henry Farrell blogpost about how "the rich are not like you and me." The reference material I've read about Gibson's work makes this series of "Cyberpunk" books (there seem to actually be 4 books in this "trilogy") sound in some respects a lot like the 'Neo Seoul' chapter of "Cloud Atlas" … in terms of its being a dystopian futuristic society under tight and inescapable corporate control. In Cloud Atlas, there are also strains of a Huxleyan Brave New World, as cognitively defective humans are cultivated to serve in mindless service functions. I did not see reference to that kind of eugenic adaptation in Gibson's world, but what does seem to be the same is the reinforcement of very strict caste lines, in which the hyper-rich are fully in control and the vast majority of humans willingly subject themselves to control by unseen plutarchs who have in some sense lost their basic humanity. But again, perhaps organic humanity will be unnecessary?
 
Of course these are speculative views of the future. But I no longer think it's wild speculation. It is, rather, a reasonable statement to say that it could happen. Or even to assert that something like this, with varied details, will very likely happen. And soon. David Mitchell placed his futuristic chapter in "Cloud Atlas" in the year 2044. And unfortunately for the reality I have embraced my whole life, I can pretty easily see some version of that social and technical evolution coming to pass … in the next 19 years.
 
My refrain, lately, is "It's a good time to be old." I stand by that thought, because it's just too much change for me to countenance. At least I won't have to put up with it for too long.