
Originally published by Kriss Berg (@KrissBergTweets) on X.
2026-02-18

If Twitter/X frames a big part of your worldview, it's been a rough couple of weeks. Chicken Little AI technocrats have been predicting doom for white collar jobs – and just about everything else. The future looks simultaneously promising and bleak af.
It’s being compared to the pandemic. Millions will be out of work and the have/have not divide will be Grand Canyonesque. It all feels… inevitable.
Until, that is, you realize a few fundamental truths about technology and its disciples, human nature, and – maybe most importantly – the market for your attention.
We’ve been here before
It’s important to parse the messenger as much as the message.
These stories, the most alarming among them anyway, are universally offered by AI technocrats. These people believe in technology the way a southern Baptist preacher believes in Jesus.
This is the “software will eat the world” crowd. Technology is EVERYTHING.
We must remember their worldview leads them to believe that not only will technology rule the world, but that its adoption and advancement will happen linearly and then exponentially.
This is rarely the case. Adoption-advancement often mimics an EKG more than a nice, straight line. Just as often it swings back and forth like a pendulum.
Bill Gurley, among many others, predicted in 2014 that Uber would cause steep declines in car ownership. Uber is now an afterthought and car ownership is higher than it has ever been. The only disruption came for some taxi drivers, and a lot of them survived.
Marc Andreessen (the Pope of The Holy Technoreligion) said in 2013 “Software is eating retail… Physical retail stores will go away.” For a few years it seemed he was right. The traditional American mall is all but dead. But 80% of purchases are still made in a physical store and the actual dollar amount grows every year. Toys R Us, Bed Bath and Beyond, and dozens of other retailers are making a comeback.
The list of bad predictions is long.
Remote work was supposed to kill the office.
The internet was supposed to kill paper.
Electric cars were going to kill gas stations.
In some cases they were off by 180 degrees; in almost every other case, the demise happened far slower than they predicted.
So why do they all seem to agree that this time is different?
AI is disrupting the disruptor first
Technoreligionists are first and foremost coders. They’re computer nerds whose formative years were spent writing software.
So it shouldn’t be a surprise that they are sounding the alarm – they’re the ones getting disrupted first.
Machine learning and AI-driven coding have been around for a decade-plus. Coding was one of AI's very first use cases – and, to their credit, it has been accelerating at dizzying speed lately.
But the next time a technologist/coder proclaims, with jaw agape, “AI just did my job for me” your answer should be “Yeah, no shit. It’s a coding tool.”
That’s why I had to chuckle at the irony of the viral doomer AI article everyone shared last week. We were supposed to be awed at the fact that the latest Claude model wrote itself – GASP.
Now step back and ask yourself how impossibly stupid it would have been had the Claude team NOT used what they had publicly proclaimed to be the most advanced code writing tool in history – to write the code the tool ran on?
OF COURSE Claude and every other AI system is helping write itself – it's kinda the point of the damn thing.
So it's perfectly reasonable that the coders are sounding the alarm – it's eating their jobs first. Or is it?
Even the doomer article – and the Claude team itself – admitted that it was all done with human guidance. Because it has to be.
The headcount at Anthropic (Claude's parent company) has quadrupled in the last year. They need people too, apparently. If AI was truly eating jobs, wouldn't it eat theirs first?
It’s not eating those jobs because, like it or not, most AI models are still basic bitches.
Is AI really getting better?
Any daily user of AI knows the answer to this question is still (3 full years after AI exploded into popular culture) a resounding “well…” or at the very least “YES, buuuut..”
ChatGPT, the OG AI platform, was rendered practically useless for most of us starting somewhere in November. Gemini still seems to pride itself on being woke and refuses certain tasks it finds distasteful. Grok is like the crazy redneck cousin – willing to do ANYTHING for a buck – but none of it particularly well.
Every daily user has had the visceral (and very recent) experience of scream-typing at their agent "JUST DO WHAT I TELL YOU DAMMIT!!!!"
Yes, it's getting better, and will be eye-popping in the future. But it's not the linear-then-exponential hockey stick they'd have you believe.
Image generation and animation feel magical until you actually try to build something commercially viable. Something that would be consumed by a paying public. "AI slop" is still more prevalent than "holy shit, this is truly useful and wonderful."
I produced (what I felt was) a professional-grade one minute commercial with “live” humans for one of our health products last week. It had a refined script, realistic “people”, plot development, emotional touch points, character continuity, the works. I marveled that what would have taken months and $200k+++ before took me about 4 focused hours and $50 with a couple of AI agents.
The result? It was trounced – and I mean WRECKED – by a 17 second video one of our customers did in her car after work one day.
Humans have an innate sense of what constitutes humanness.
Talk to someone who actually bought the Mac Mini and deployed a Claudbot (or whatever the hell it calls itself now) and they'll tell you it definitely automated some repetitive tasks, but that it was A) much harder to set up than they thought and B) requires far more monitoring and upkeep than it should.
There are very few things we can simply set and forget with AI. Hallucinations are still a massive problem.
AI still has a big trust problem. We don't fully trust it, and for good reason. It still requires a ton of human guidance, judgment and fact checking.
Don't get me wrong, I use AI 3-4 hours a day and it's made my job WAY easier. Most days.
But the notion that it will simply do my job and render me or anyone on my team useless misunderstands what most humans do the majority of the day: we interact with other humans.
And replacing that interaction with bots and agents is still very, very far away. The majority of humans will reject the notion outright, possibly forever.
So it begs the question…
What will AI actually do?
The primary use case for the non-technologist public is still writing, fact retrieval, and data analysis. It's like a souped-up Google motorcycle with an Excel/Word sidecar for most of us.
That’s not to say it isn’t capable of pure magic and “holy shit” moments.
But technologists tend to forget that the industries they claim will be most disrupted – law, corporate America, healthcare, education, government, etc. – are the slowest moving and most risk averse organisms on the planet.
And for good reason. Billions of dollars, human lives, and potential prison time are table stakes.
Most of them can’t replace vast swaths of white collar workers with AI agents any more than we can replace janitors with Roombas. The tasks are far too complex when you really think about it.
AI, like all technology, is A tool. Not THE tool.
For large, mission critical tasks, all it takes is ONE mistake, ONE hallucination by an AI-agent and the whole thing crumbles under its own weight.
This doesn’t mean an AI takeover in some form won’t happen – it certainly will. It means it will happen far slower than anyone believes or hopes.
It's very hard for a small/nimble AI coding team, which achieves breakthrough technology ~daily, to understand how behemoth organizations actually use technology. They don't so much adapt to new tech as force the technology to transform around them.
It will be maddeningly slow for those of us who love to “move fast and break things.” You’ll find yourself asking “Why don’t they just use AI??” more often than not.
Adaptation will unquestionably be a good thing. Removing busy work, data entry, and rudimentary analysis and making things generally more efficient will benefit us all.
I personally hope we totally eradicate those do-nothing Big Tech jobs where days are filled with HR meetings, boondoggle ‘offsites’ and heady decisions about what kind of smoothie to order at the company cafe.
But what about all those jobs!
There's a lot of hand-wringing over entry level white collar jobs. This notion forgets that entry level white collar job seekers will have grown up with AI more than any other generation. Their AI abilities will likely far exceed those of their prospective employers.
And they’ll be sorely needed for the gargantuan task few people actually talk about: implementation.
How do you deploy these revolutionary tools on something other than “Hey Claude, build me a spreadsheet”? Large institutions especially will need actual humans to deploy and monitor their AI solutions.
People, young and old, who are knowledgeable about applying AI will be indispensable.
I mean, if someone walked into your business or place of employment today and said they were fully equipped with AI tools to attack your hardest problems – wouldn’t you hire them on the spot?
Yes, some of those jobs will go away. But we will not weep for those jobs any more than we did for the coal shovelers, switchboard operators, or typing pool secretaries of yesteryear.
Human nature dictates we over-anticipate pain and underestimate our ability to adapt.
We've conveniently memory-holed the fact that, in the decades after 1980, America shed manufacturing jobs by the millions – a huge share of the entire workforce. Poof. Gone.
And yet, this was likely the most prosperous time for the most prosperous country in human history.
"But it's different this time."
Maybe! Maybe the growth of the AI revolution will lead to growth in employment, efficiency, and leisure we don’t have the capacity to imagine yet. The 3-day workweek is within our grasp.
In the last few years we’ve solved obesity, bullshit busywork, and our cars drive themselves. So…how bad can the future be?
I believe in human prosperity. That's because my 50 years on the planet have taught me there is little practical use for pessimism. Caution, yes. Forethought, absolutely.
But outright pessimism about something as vague as “The Future” is worrying about an outcome that no one can either adequately plan for or meaningfully change.
Remember, the pessimists sound smart – but the optimists are usually right.
Finally, the next time…
You see a doomer post touting “AI is about to destroy ______” before reading another word – look at the author.
“Show me the incentives, I’ll show you the outcome.”
If this person is selling AI tools, and 99/100 are, we know the incentive. “But but it looks like an insider sounding the alarm! He knows what’s coming and it’s terrifying!!!”
Remember: scaring people into believing your accounting tool will render CPAs useless is a foolproof method to get CPAs AND their most naive clients to use your tool.
Remember the old ads "The Government Wants To Ban This Medicinal Herb!"? They make us say "OOOH, well, if it's that BAD it must REALLY WORK."
We all want access to the scary tools, the ones that are just this side of legal and ethical. If you’ve fallen for this, as I have and will continue to, you know the result is almost always “ehhhhhhh….ees okay”.
The technophiles will ALWAYS say “But you need the NEW model. That old thing was buggy af. UPGRADE bro.”
So fear not, friends: the AI revolution will not be televised.
There is no imminent collapse. The economy, business, government, and all its trappings will move apace.
Next time the doom seeps into your mind, ask yourself, what has a higher likelihood: that vast swaths of people will be laid off in the name of efficiency overnight, or that the labyrinth of systems in place built to prevent change will slow roll this tech like they do literally everything else?
We are just as likely to decry the too-slow adoption of AI – esp at the gov’t and Corporate America level – as to be terrified of its advancement. When, not if, the government decides to regulate AI there will be much handwringing over that too.
This, of course, presents an opportunity for you. You’re smaller, more agile, more adaptable. YOU can benefit from AI today. So, keep going. Use the tools, find their limits.
You can become virtually indispensable to your business, your employer, and your prospective clients by learning what AI is truly capable of. The opportunity is enormous for those willing to figure it out and apply it to real world problems.
The friction to build what you and your clients actually want has never been lower. And lower friction will lead to expansion in ways we haven’t begun to imagine.
The only way to get left behind is to refuse to adapt.
History does not repeat, but it rhymes. In the end, this tech revolution will probably be similar to the other ones: fruitful, noticeable, mostly beneficial and eventually native to ~everything we do.
Life, as we know it, will go on the same – but different. Better.
Meanwhile, the doomer articles will continue until morale improves. But, alas, the breadlines and tent cities will exist only in the imaginations of doomers and cynics.
AI will not eat us, it will augment us. Humans find a way.
Now where’s my flying car, dammit.


