Does AI have to be "Intelligent" to succeed?
Everything is relative.
After proclaiming that Crypto is in a bubble, the bubble blowing crowd seems to have turned its attention to AI.
Artificial Intelligence.
Even renowned bear Michael Burry, of Scion Capital fame, is now chiming in on how both AI and Crypto are just giant bubbles reminiscent of the Tulip era.
Beyond the multi-billion dollar investments that the large players have already committed to date, and that critics say will never be returned in the form of profits, many critics believe the technology itself is nothing but a HOAX. That there is nothing intelligent about it, and that it hallucinates and errs with such frequency as to render the technology equivalent to a slightly more useful version of Google Search.
Ouch.
Let’s dig in.
What is intelligence?
To me, this is where the large disconnect is taking place.
I have no problem with the word “Artificial”, since it simply points to an artifice, something created, which it absolutely is. But when it comes to the word “Intelligence”, things get much more obscure.
We have always considered the game of Chess to be one of mind and process. There has never been a Chess World Champion we considered to be unintelligent.
Yet, a computer managed to beat the world’s best human at a game of great skill, one that requires superior intelligence.
Here is the timeline:
1. The first real computer to beat a human at chess (1956)
In 1956, the Los Alamos Chess Program running on the Los Alamos Scientific Laboratory computer (the MANIAC I) defeated a human player for the first time in history.
The opponent was a novice-level human (name not historically recorded).
The program only understood a simplified version of chess (no castling, no en passant, reduced rules), but it was the first verified machine victory over a human.
This is generally considered the first true instance of a machine beating a person in chess.
2. The first time a machine beat tournament-level human players (1967)
In 1967, the program Mac Hack VI, developed at the MIT Artificial Intelligence Laboratory, became the first chess program to defeat a human in a real tournament.
It beat multiple USCF-rated players.
It even achieved an established tournament rating (~1400s), which was groundbreaking for the era.
3. The famous milestone: a machine beating a world champion (1997)
The well-known event was in 1997, when IBM Deep Blue defeated world champion Garry Kasparov in a regulation match — the first time a reigning world champion lost a match to a computer under standard tournament rules.
Today, humans can still defeat weaker engines, older engines, or “lightweight” bots, but the strongest chess engines (like Stockfish, and other modern chess-engine/AI programs) are far stronger than any human.
Most recent public analyses assert that modern engines are “superhuman,” meaning that if a top human played under comparable conditions, they would almost certainly lose. There’s no recent credible record showing a human defeating a state-of-the-art engine under fair, high-standard conditions.
Now before you huff and puff, I am already going to agree with you.
Isn’t playing Chess at a master level simply a game of combinations and probabilities?
Wouldn’t a computer that is able to memorize every possible combination, at every level, with every possible move ever made in recorded history stored in its database, naturally have an advantage? Not because it is more intelligent, but simply because it has a larger capacity for near-term storage, and rapid access to that storage? Is the computer really being “Intelligent”, or is it simply playing the best probabilities stored in its database at every move, in a purely mechanical manner?
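That “purely mechanical” style of play can be sketched in a few lines. The toy below runs a minimax search over a hand-built, abstract game tree; it is an illustrative sketch of the mechanical core of classic engines, not a real chess program:

```python
# Toy minimax search: the "mechanical" core of classic game-playing engines.
# The game tree here is hand-built and abstract, purely for illustration.

def minimax(node, maximizing):
    """Return the best achievable score from `node`, assuming both sides
    play perfectly. A leaf is a number (a position's evaluation); an
    internal node is a list of child positions. No understanding involved,
    just exhaustive scoring of every branch."""
    if isinstance(node, (int, float)):   # leaf: a position's evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A tiny two-ply game: our move leads to one of three positions, each of
# which the opponent answers so as to minimize our score.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, True))  # picks the branch whose worst case is best
```

The machine never “understands” the game; it scores every branch and picks the numerically best one, which is exactly the mechanical process the question describes (real engines add pruning, opening books and, lately, neural evaluations, but the skeleton is the same).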
The answer is… it doesn’t matter.
A game of relativity.
If you are amongst a group of campers and a bear charges at your group, you do not have to be faster than the bear to get away; you just need to be faster than the slowest member of your group.
To me, the subject of “base intelligence” is never discussed in public forums.
Why? Because it is politically toxic to say that some people are entirely unintelligent. Since we are all human, and we are all created equal, we should all have equal access to a base level of intelligence. But if we set manners and politics aside for a minute, we all recognize that there are vast differences between humans.
I cannot run a 100m Dash in 9.58 Seconds no matter how many decades of preparation you give me.
I cannot dunk a Basketball unless I can make use of a trampoline.
I cannot speak 10 languages.
I cannot perform long division of a 10-digit number by a 5-digit number in under 2 seconds in my head.
Yet, other humans can.
We tend to speak of “Intelligence” as if it has already been normalized to a base. That of being human. So anything that is “Super Intelligent” must be smarter than a human.
Do you see how this argument relies on an improper axiom?
Let’s turn to some literacy statistics.
Adult literacy is measured by the Programme for the International Assessment of Adult Competencies (PIAAC), conducted by the Department of Education’s National Center for Education Statistics (NCES).
The most recent comprehensive data from the 2023 PIAAC study indicates that 44% of U.S. adults scored at Level 3 or above in literacy, which is considered the standard for strong reading and comprehension skills.
Understanding the Literacy Levels (2023 Data)
Literacy is measured on a scale of proficiency levels, where high proficiency involves successfully understanding, interpreting, and synthesizing complex information from multiple texts.
High Literacy (Level 3 or above): 44% of U.S. adults.
Task ability: Indicating strong reading and comprehension skills; they can complete complex tasks that require comparing and contrasting information, paraphrasing, and making high-level inferences.
Basic/Low Literacy (Level 2): 29% of U.S. adults.
Task ability: Can perform basic reading tasks, like locating explicitly cued information or making straightforward inferences, but struggle with more complex or lengthy texts.
Very Low Literacy (Level 1 or below): 28% of U.S. adults.
Task ability: These individuals have difficulty locating information in short texts and may only be able to determine the meaning of simple sentences, often classified as functionally illiterate.
This percentage of high-performing adults (44%) actually represents a slight decrease from 2017, when 48% of adults scored at Level 3 or above, indicating a widening skills gap, according to the NCES.
Now, I understand that some folks might say that literacy does not directly equate to intelligence, and that some people considered illiterate may still display other forms of “Intelligence” (street smarts), but the point still has to be made.
The majority of Americans are operating at or below very basic levels of literacy. Their ability to complete complex tasks is limited; their ability to make subtle distinctions or to process complex information is challenged.
I want to be clear that I am not looking at the WHY, or who is to blame. Or if there is any blame at all. There are many socio-economic reasons that gave us these results. There are inequalities in access, in money, in race, that are too vast to enumerate in this missive, but that are also beside the point of this essay.
My point is a much simpler one.
The bar to “Artificial Intelligence” is actually quite low if, instead of comparing AI’s prowess to that of Garry Kasparov, we simply compare AI’s abilities against those of the average American.
Basic implications
For AI to be revolutionary, and to change the construct and fabric of our society in a meaningful way, it does not need to be “Super Intelligent”. It simply needs to be functionally smarter than most average Americans.
This is proving itself in the real world.
Driving was long thought of as something intrinsically human. It requires almost every human sense gifted to mankind. You need to see, to hear, to touch, you need hand-eye co-ordination, you need both your hands and your feet.
It is truly uncanny just how many sensory inputs go into driving a vehicle. Move to a stick shift and you are at an even higher level!
And yet… along came WAYMO.
Here are some stats:
• Lower crash / injury rates per mile driven
A 2025 peer-reviewed study of Waymo “rider-only” (no human behind the wheel) operation covering 56.7 million miles found that Waymo’s crash rates were significantly lower than human-driving benchmarks across crash types.
In that dataset, Waymo vehicles had a 96% reduction in intersection-related crashes (Any-Injury-Reported), the largest decrease among crash types.
Other reductions reported across types: fewer crashes involving pedestrians (pedestrian injury crashes), cyclists/motorcyclists, secondary crashes, etc.
For a smaller-mile baseline (7.1 million miles in Phoenix, San Francisco, Los Angeles), the crash-vehicle rate for any-injury incidents was 0.6 incidents per million miles (IPMM) for Waymo’s automated driving system (ADS) vs. 2.80 IPMM for human benchmarks. That’s roughly an ~80% reduction.
Police-reported crash-vehicle rates over those same miles were 2.1 IPMM for Waymo vs. 4.68 IPMM for humans, a ~55% reduction.
• Fewer liability / insurance claims
A 2024 report comparing Waymo to human-driven vehicles (including those with modern safety tech) found an 88% reduction in property-damage claims and a 92% reduction in bodily-injury claims for Waymo’s fully autonomous rides.
Even compared to newer human-driven vehicles (2018–2021 models with ADAS features), Waymo still showed an ~86% reduction in property-damage claims and a ~90% reduction in bodily-injury claims.
• Overall fewer serious-injury crashes and dangerous situations
According to a 2025 report, Waymo self-driving cars have “85% fewer suspected serious injuries” compared to human drivers.
The same report claims 92% fewer crashes injuring pedestrians, 82% fewer injuring cyclists/motorcyclists, and 96% fewer injury-causing crashes at intersections (a high-risk scenario).
WAYMO is not a better driver than Ayrton Senna, or even than the last-place finisher on the Formula 1 tour.
Heck, it is not a better driver than the last place finisher of Formula 3, or the Amateur Go-Kart Championships.
It is simply a much better driver than most average Americans.
For this reason, and this reason alone, all “commute driving” is bound to go driverless.
This means cars, but also cargo and trucks.
Driving for leisure and fun is something entirely different, but driving out of necessity is something “AI” will take over entirely.
Making some extrapolations
When you look at “AI” as a “Bottom-Up” replacement, rather than a “Top-Down” benchmark, you quickly realize its disruptive nature.
In its current iteration today, “AI” can already perform most mental tasks better than most Americans.
Writing, mathematics, research, summarization, and basic comprehension.
An “AI Support Chatbot” for a commercial product is far more helpful, accurate and responsive than an offshore human customer service rep who started two days earlier, is working the night shift, and is flipping through a binder to find answers to very basic product questions.
The dire reality is that “AI” can already today, replace about 1/4 of the entire human workforce. Without getting “smarter”.
The main impediment to further “AI” propagation hasn’t been a lack of intelligence, but rather the fact that “AI” is currently tethered to the wall via a 110V umbilical cord.
Cut that cord, and EVERYTHING changes overnight.
Robotics, and not ChatGPT 6, 7, 8 or 10, is what is going to make the difference. That is the next hurdle toward human replacement.
Once “AI” can move freely, in an articulated manner, the amount of things it can do better than the average American will quintuple.
Some implications
“AI” is not an investment category whose outcome we have yet to see.
“AI” is already meaningfully responsible for many layoffs across many industries. At Google, to justify new human headcount, you need to explain to management how the position can absolutely not be covered by an AI Agent first.
And that is at today’s intelligence level.
With every round of artificial cognitive improvement, “AI” simply climbs one more rung of the ladder, and another set of professions, as performed by the average human, falls below the capabilities of the average “AI”.
Trump fired the BLS Director because he did not like the unemployment number; he then proceeded to simply not release the following month’s numbers at all, under the guise of a government shutdown (which had no effect on his international travel schedule, or the daily terrorizing of brown people). My guess is that the numbers moving forward will all receive a “special treatment” and some fuzzy math to make them more palatable.
After all, can you imagine the effect on the general public of a move to double digit unemployment? All the while new Billionaires and a possible Trillionaire come to lavish parties at the gold clad Epstein Ballroom?
No bueno, as they say here in Los Angeles.
But it is not all doom and gloom.
Needed adaptations
Some believe that we should put limits on “AI”, to essentially stem the flow of the inevitable, but history has shown that it is not really possible.
Once one country got the Atomic Bomb, others soon did as well; the same has been true of every nuclear weapon since.
Suppressing “AI” research or development here in America, will not hamper its progress and development in China, India or Russia.
Instead, what is needed is much reflection and a complete re-tooling of the structure of our current society, ahead of what Raoul Pal has coined: “The Economic Singularity”.
That moment in time where your role as a human in the labor force is entirely nebulous and unclear.
Your participation will not be needed, and yet you will still be called upon to provide for yourself and your family’s well-being.
If hiring a human adds 2 units of productivity to a company, hiring a robotic “AI” will undoubtedly add 25 units of productivity.
“AI” does not sleep, does not eat, does not take vacations, does not take breaks, does not need to rush home to a sick child.
The math is entirely skewed to one side of the equation.
I am often reminded of the story of Caterpillar, and how, before the invention of the excavator, it took 10 men to dig a hole that one machine could later dig. After the invention, there were still jobs for humans operating the excavators, just fewer than before. The flaw in this argument is that the excavator was made to be operated by a human. That singular job was built into the product itself.
“AI + Robotics” are made to obviate the need for a human. As such, we have no historical reference for the third order effects it will bring.
Yet, this is not necessarily the end of the story.
A change in mindset
The letters U.B.I. send shivers down the spine of most working folks.
Universal Basic Income, the idea of giving all humans a basic sum of money, simply to live, has historically always been a terrible idea.
Humans are animals of instinct and desire. At our core, we need a motivation to get up every morning. To get out of our PJs, and to do something productive with our lives. If you remove any form of monetary recompense, the system of motivation breaks down. It is that simple.
It is for this reason, that Capitalism, with all of its obvious flaws, has been the only functional form of large government and industry. Any attempt at Marxism, even if initially well intended, inevitably seems to turn into a form of dictatorial communism. Where one strong man keeps most of the riches for himself, and distributes the remaining pittances to the public.
But up till now, the assumption has always been that there were jobs to be had.
What happens when willing and able bodies, desperate for work, are simply not able to meet the productivity benchmarks of “AI + Robotics”?
How can you force a business, whose charter it is to maximize profits, to spend more and achieve less, by hiring a human?
One way is to change the basis of taxation. Instead of taxing profits, we could move to a system of taxing marginal productivity of labor. If you want to switch to a more productive unit of labor, that is fine, but every incremental increase in marginal productivity could be met with a higher rate of taxation. The number would be such that it might still make sense to go the “AI + Robotics” route, but it would provide a monetary stipend to a fund dedicated for human compensation.
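As a thought experiment, the scheme could be sketched as follows; every bracket size and rate below is a hypothetical assumption chosen purely for illustration, as the essay proposes no specific numbers:

```python
# Hypothetical sketch of the proposed tax on marginal productivity of labor.
# The bracket size, base rate, and rate step are invented for illustration.

def productivity_tax(human_units: float, replacement_units: float,
                     bracket: float = 5.0,
                     base_rate: float = 0.10,
                     rate_step: float = 0.05) -> float:
    """Tax the incremental marginal productivity gained by swapping a human
    worker for a more productive "AI + Robotics" unit. Each successive
    bracket of gain is taxed at a progressively higher rate."""
    gain = max(0.0, replacement_units - human_units)
    tax, rate = 0.0, base_rate
    while gain > 0:
        slab = min(gain, bracket)
        tax += slab * rate
        gain -= slab
        rate += rate_step  # every extra bracket of gain is taxed more heavily
    return tax

# Using the essay's example numbers: a human adds 2 units of productivity,
# a robotic "AI" adds 25.
print(productivity_tax(2, 25))  # roughly 4.4 units flow to the human fund
```

Under these assumed rates, the firm still comes out well ahead by automating (23 extra units of output against roughly 4.4 units of tax), which matches the intent: going the “AI + Robotics” route remains rational, but each such decision funds a pool dedicated to human compensation.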
How that tax would flow back to humans is up for discussion. Community service. Some level of human participation and involvement in return for remuneration. There are many ways to devise compensatory routes. Unfortunately, no such discussion can take place for several more years.
This administration is so heavily focused on grift, insider dealings, and the gathering of personal wealth, that the idea of thinking this far ahead, thinking about the general welfare of average Americans, debating an uncomfortable, difficult and controversial topic, is entirely out of the scope of reality and possibilities at the moment.
We are bound to run into this iceberg head on.
Conclusions
The main point of this missive was not to push the reader into a deep state of depression regarding “AI” but simply to demonstrate that “AI” does not need to evolve further or achieve a state of “AGI” (Artificial General Intelligence) to succeed or be majorly consequential.
It is already of consequence.
Pets.com was a bubble stock. The internet was not.
The internet has more traffic and use today than when Pets.com went out of business.
“AI” is not a bubble. It cannot be. It has already changed everything, and will continue to do so.
Have some companies made unprofitable investments in “AI”? Probably. More than likely actually, but it doesn’t make “AI” a HOAX or a bubble.
META spent $70B on the “Metaverse”. That investment went nowhere. META is not a bubble stock because of it. The same will be true for their multi-billion dollar spend on “AI”.
“AI” is as much a FAD as the internet was.
Discount it at your own peril.