Mimicry of intelligence isn’t intelligence, and so while AI mimicry is a powerful tool, it isn’t intelligent.
The mythology of Technology has a special altar for artificial intelligence (AI), which is reverently worshiped as the source of astonishing cost reductions (as human labor is replaced by AI) and the limitless expansion of consumption and profits. AI is the blissful perfection of technology's natural advance to ever-greater powers.
The consensus holds that the advance of AI will lead to a utopia of essentially limitless control of Nature and a cornucopia of leisure and abundance.
If we pull aside the mythology’s curtain, we find that AI mimics human intelligence, and this mimicry is so enthralling that we take it as evidence of actual intelligence. But mimicry of intelligence isn’t intelligence, and so while AI mimicry is a powerful tool, it isn’t intelligent.
The current iterations of generative AI–large language models (LLMs) built via machine learning–mimic our natural-language ability by processing vast quantities of human writing and speech and extracting what their algorithms rank as the best answers to queries.
These AI programs have no understanding of the context or the meaning of the subject; they mine human knowledge to distill an answer. This is potentially useful but not intelligence.
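The distinction between statistical mimicry and understanding can be made concrete with a toy model. The sketch below is illustrative only (real LLMs use neural networks trained on vastly larger corpora): it builds next-word frequencies from a tiny corpus and "answers" by emitting the most probable continuation, with no model of meaning at all.

```python
from collections import Counter, defaultdict

# A toy "language model": count which word follows which in a tiny corpus.
# Real LLMs learn neural next-token probabilities over billions of examples,
# but the principle is the same: predict likely continuations, not truth.
corpus = "the sky is blue . the sky is vast . the sea is blue .".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_from(word, steps=3):
    """Emit the statistically most common continuation of `word`."""
    out = [word]
    for _ in range(steps):
        nxt = follows[out[-1]].most_common(1)
        if not nxt:
            break
        out.append(nxt[0][0])
    return " ".join(out)

print(continue_from("sky"))  # prints "sky is blue ."
```

The output sounds plausible because the corpus made it statistically common, not because the program knows anything about skies or the color blue.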
The AI programs have limited capacity to discern truth from falsehood, hence their propensity to hallucinate fictions and present them as facts. They cannot distinguish statistical variation from fatal error, and layering on precautionary measures adds complexity that itself becomes another point of failure.
As for machine learning, AI can project plausible solutions to computationally demanding problems such as how proteins fold, but this brute-force computation is a black box, and its opacity limits its value: the program doesn't actually understand protein folding in the way humans understand it, and we don't understand how the program arrived at its solution.
Since AI doesn’t actually understand the context, it is limited to the options embedded in its programming and algorithms. We discern these limits in AI-based apps and bots, which have no awareness of the actual problem. For example, our Internet connection is down due to a corrupted system update, but because this possibility wasn’t included in the app’s universe of problems to solve, the AI app/bot dutifully reports the system is functioning perfectly even though it is broken. (This is an example from real life.)
In essence, every layer of this mining / mimicry creates additional points of failure: the inability to distinguish fact from fiction or allowable error from fatal error, the added complexity of precautionary measures, and the opacity of the black box all generate risks of normal accidents cascading into systemic failure.
There is also the systemic risk generated by relying on black-box AI to operate systems to the point that humans lose the capacity to modify or rebuild them. This over-reliance on AI programs creates the risk of cascading failure not just of digital systems but of the real-world infrastructure that now depends on them.
There is an even more pernicious result of depending on AI for solutions. Just as the addictive nature of mobile phones, social media and Internet content has disrupted our ability to concentrate, focus and learn difficult material–a devastating decline in learning for children and teens–AI offers up a cornucopia of snackable factoids, snippets of code, computer-generated TV commercials, articles and entire books that no longer require us to have any deep knowledge of subjects and processes. Lacking this understanding, we're no longer equipped to pursue skeptical inquiry or create content or code from scratch.
Indeed, the arduous process of acquiring this knowledge now seems needless: the AI bot can do it all, quickly, cheaply and accurately. This creates two problems: 1) when black-box AI programs fail, we no longer know enough to diagnose and fix the failure, or do the work ourselves, and 2) we have lost the ability to understand that in many cases there is no answer or solution that is the last word: the "answer" demands interpretation of facts, events, processes and knowledge bases that are inherently ambiguous.
We no longer recognize that the AI answer to a query is not a fact per se, it’s an interpretation of reality that’s presented as a fact, and the AI solution is only one of many pathways, each of which has intrinsic tradeoffs that generate unforeseeable costs and consequences down the road.
To discern the difference between an interpretation and a supposed fact requires a sea of knowledge that is both wide and deep, and in losing the drive and capacity to learn difficult material, we’ve lost the capacity to even recognize what we’ve lost: those with little real knowledge lack the foundation needed to understand AI’s answer in the proper context.
The net result is that we become less capable and less knowledgeable, blind to the risks created by our loss of competency, while AI programs introduce systemic risks we cannot foresee or forestall. AI degrades the quality of every product and system, for mimicry does not generate definitive answers, solutions and insights; it only generates an illusion of definitive answers, solutions and insights, which we foolishly confuse with actual intelligence.
While the neofeudal corporate-state cheers the profits to be reaped by culling human labor on a mass scale, the mining / mimicry of human knowledge has limits. Relying on the AI programs to eliminate all fatal errors is itself a fatal error, and so humans must remain in the decision loop (the OODA loop of observe, orient, decide, act).
Once AI programs engage in life-safety or healthcare processes, every entity connected to the AI program is exposed to open-ended (joint and several) liability should injurious or fatal errors occur.
If we boil off the mythology and hyperbole, we’re left with another neofeudal structure: the wealthy will be served by humans, and the rest of us will be stuck with low-quality, error-prone AI service with no recourse.
The expectation of AI promoters is that generative AI will reap trillions of dollars in profits from cost savings and new products and services. This story doesn't map onto the real world, in which every AI software tool is easily copied and distributed, so it will be impossible to protect the scarcity value that is the essential dynamic in maintaining the pricing power needed to reap outsized profits.
There is little value in software tools that everyone possesses unless a monopoly restricts distribution, and little value in the content auto-generated by these tools: the millions of AI-generated songs, films, press releases, essays, research papers, etc. will overwhelm any potential audience, reducing the value of all AI-generated content to zero.
The promoters claim the mass culling of jobs will magically be offset by entire new industries created by AI, echoing the transition from farm labor to factory jobs. But the AI dragon will eat its own tail, for it creates few jobs or profits that can be taxed to pay people for not working (Universal Basic Income).
Perhaps the most consequential limit of AI is that it will do nothing to reverse humanity's most pressing problems. It can't clean up the Great Pacific Trash Gyre, or limit the 450 million tons of mostly unrecycled plastic spewed every year, or reverse climate change, or clear low-Earth orbit of the thousands of high-velocity bits of dangerous detritus, or remake the highly profitable "waste is growth" Landfill Economy into a sustainable global system, or eliminate all the sources of what I term Anti-Progress. It will simply add new sources of systemic risk, waste and neofeudal exploitation.
My recent books:

Disclosure: As an Amazon Associate I earn from qualifying purchases originated via links to Amazon products on this site.

Self-Reliance in the 21st Century (print $18, Kindle $8.95, audiobook $13.08; 96 pages, 2022). Read the first chapter for free (PDF).

The Asian Heroine Who Seduced Me (novel; print $10.95, Kindle $6.95). Read an excerpt for free (PDF).

When You Can't Go On: Burnout, Reckoning and Renewal (print $18, Kindle $8.95, audiobook). Read the first section for free (PDF).

Global Crisis, National Renewal: A (Revolutionary) Grand Strategy for the United States (Kindle $9.95, print $24, audiobook). Read Chapter One for free (PDF).

A Hacker's Teleology: Sharing the Wealth of Our Shrinking Planet (Kindle $8.95, print $20, audiobook $17.46). Read the first section for free (PDF).

Will You Be Richer or Poorer?: Profit, Power, and AI in a Traumatized World (Kindle $5, print $10, audiobook). Read the first section for free (PDF).

The Adventures of the Consulting Philosopher: The Disappearance of Drake (novel; Kindle $4.95, print $10.95). Read the first chapters for free (PDF).

Money and Work Unchained (Kindle $6.95, print $15). Read the first section for free (PDF).
Become a $3/month patron of my work via patreon.com.

Subscribe to my Substack for free.
NOTE: Contributions/subscriptions are acknowledged in the order received. Your name and email remain confidential and will not be given to any other individual, company or agency.
Thank you, John M. ($75), for your wondrously generous subscription to this site — I am greatly honored by your support and readership.

Thank you, Bruce W.M. ($3/month), for your admirably generous patronage to this site — I am greatly honored by your support and readership.

Thank you, Carl S. ($3/month), for your most generous patronage to this site — I am greatly honored by your support and readership.

Thank you, Carl ($20/month), for your outrageously generous patronage to this site — I am greatly honored by your support and readership.