You Can’t Build a Thinking Machine with a Combustion Engine
Why AGI Isn’t Just Around the Corner, and Why That Matters
Every few weeks, another AI breakthrough hits the headlines.
A chatbot sounds wise. A self-driving car avoids a crash. An image generator fools the eye.
The future feels close.
But here’s the truth: you can’t get to a new destination if you’re stuck using the wrong engine.
Trying to reach artificial general intelligence (AGI) with today’s computing infrastructure is like trying to build a Tesla using combustion engine parts. You might add sensors, digital dashboards, and smart cruise control. But the foundation is still spark plugs and pistons.
It looks modern and performs well, but it will never be electric.
And intelligence, like electric mobility, requires a new kind of platform.
⚙️ The Wrong Engine for the Job
Most of what we call “intelligent systems” today—including large language models like ChatGPT and self-driving platforms like Waymo—run on a computing architecture designed for fast, serial task execution. This architecture, based on interrupt-driven silicon, has been with us since the mid-20th century.
It is incredibly efficient at following instructions and scaling operations.
But it was never designed to think.
It reacts. It does not reflect.
The compute beneath modern AI breaks cognition into fragments. It treats memory as something to cache, not something to live inside. It slices attention into cycles and predicts based on statistical probability. It is the engine of fast response, not deep understanding.
It is so fast that we can no longer see the individual actions, the moments, the interrupts. But beneath the magic of software and the astonishing pace of hardware improvement, the gremlin is still there, tricking us into never looking behind the curtain.
🧠 Intelligence Isn’t Prediction
Today’s AI systems are powerful predictors.
They can guess the next word, lane shift, or market move.
But when something truly unexpected happens—when a question requires real-world context, or a decision involves moral weight—they cannot respond with comprehension. They simulate awareness. They do not possess it.
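To make that concrete, here is a toy sketch, with invented probabilities rather than any real model’s internals, of what “guessing the next word” amounts to: choosing the most likely continuation from a learned distribution.

```python
# A toy sketch, not any real model's API: the probabilities below are invented.
# "Guessing the next word" boils down to picking the most likely continuation
# from a learned distribution over tokens.
prompt = "The cat sat on the"

# Hypothetical next-token distribution a model might assign after the prompt.
next_token_probs = {
    "mat": 0.62,
    "sofa": 0.21,
    "roof": 0.09,
    "moon": 0.08,
}

# Greedy decoding: take the single highest-probability token.
prediction = max(next_token_probs, key=next_token_probs.get)
print(prompt, prediction)  # -> "The cat sat on the mat"
# A statistically likely guess, not an understanding of cats, mats, or sitting.
```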
As cognitive scientist Abeba Birhane explains:
“These systems mistake engineering for agency. They operate without embodiment, context, or precariousness—all fundamental to how humans know the world.”
(Birhane & McGann, 2024)
🛑 Waymo Isn’t Thinking. It’s Memorizing the Road.
Waymo’s autonomous vehicles are often seen as evidence of AI maturity.
But behind the scenes, they rely on billions of simulated miles, millions of real-world ones, rigid scenario modeling, and remote human oversight.
When something unpredictable occurs—say, a fallen traffic light or an erratic cyclist—Waymo doesn’t reason its way out. It pauses. Or calls for help.
Not because it is careful. But because it doesn’t know what it’s seeing.
There is no driver inside. Just an incredibly well-tuned reaction engine.
🧬 Real Intelligence Begins With a Different Substrate
If we truly want machines that understand, decide, and grow—we need to stop upgrading combustion platforms.
AGI will not emerge from more layers, bigger datasets, or cleaner outputs.
It will require a complete shift in substrate.
Not just faster processors, but entirely new architectures.
Think neuromorphic circuits that resemble brain pathways.
Think quantum systems capable of probabilistic reasoning.
Think embodied systems that integrate physical context with thought.
Because intelligence isn't just a feature.
It's an emergent property of how a system is built.
📚 Scientific Signals Point to the Shift
Neuromorphic Computing Roadmap (Christensen et al., 2022)
Silicon systems burn too much power for cognition and lack real-time learning. Brain-inspired chips offer orders-of-magnitude efficiency gains.
Large Models of What? (Birhane & McGann, 2024)
Language models are fundamentally disembodied and cannot replicate human understanding.
LLMs Without Grounding (Xu et al., 2025)
Language models capture abstract knowledge but fail at physical and motor-grounded reasoning due to a lack of sensory experience.
🔜 What Comes Next
This is the first in a series of essays exploring the structural limits of today’s AI—and what it will actually take to move forward. I’m also launching a companion series explaining why so many of us are being fooled into thinking we’re on a clear path to AGI.
It’s like trusting Google Maps. It looks precise. It sounds confident. But sometimes it leads us down strange detours, and we only realize too late that we’re not actually getting closer to where we want to go.
Coming topics include:
Why large language models are dashboards, not drivers
Why scaling won't deliver comprehension
Why self-driving systems still need humans
What real AGI would require at the physical level
If we want more than machines that mimic intelligence,
we have to stop pretending the road we are on will take us there.
It won't.
We need a new engine.
A call-out to my wonderful Child, who inspires me to write these thoughts down, and to AI for being a good editor. ~ Thank you.