Maybe AI Isn't the Next Big Thing, and That's Good News
by Michael Doornbos
Last week, I sat through a vendor pitch. The product was a security tool I’d seen a version of three years ago. The rep opened the deck by telling the room the product was now AI-powered. Somebody asked what that meant. The rep said it uses AI to correlate alerts. Somebody asked how, specifically. The rep pulled up a slide labeled “AI-Powered Correlation Engine” with a picture of a brain. Somebody asked whether the underlying detection logic had changed. The rep said the engine now uses AI. This went on for ten minutes. Nobody bought anything.
I walked out of that meeting with a thought I couldn’t shake. That vendor wasn’t selling a new technology. He was selling the old one with a new label because it was what the industry had been told to want. And the more I thought about it, the more I started to wonder whether that moment was telling me something much bigger than one bad pitch.
Then yesterday I read a piece on The Next Wave called “AI could be the end of the digital wave, not the next big thing,” and it gave me the frame I was missing. It’s a commentary on two essays by Nicolas Colin at Drift Signal, both of which lean on Carlota Perez’s model of long technology surges. I’ve been sitting with the argument for a day. I think it’s the most useful read on the AI moment I’ve seen in years.
I want to walk through it, push on it a little, and then explain why I find it a hopeful reading rather than a depressing one.
The thought experiment
The piece opens with a question I’ve been turning over since I read it:
Just by way of a thought experiment: what if the current surge in the bunch of technologies that goes under the label of ‘AI’ isn’t the beginning of a whole new technology surge, but is actually the final stage of the digital surge that started in the 1970s and accelerated at the turn of the century?
That question cuts against almost everything you’ve heard for three years. Every keynote, every VC deck, every magazine cover says AI is the new electricity, the next industrial revolution, the thing you’d better run to catch. The Next Wave, reading Colin, is asking whether we’ve got the polarity reversed. Not the start of a wave. The end of one.
Perez in thirty seconds
To make that case, you need Carlota Perez’s model, and the author gives the short version. Perez, building on Christopher Freeman, argues that major technological revolutions occur in 50- to 60-year surges rather than as a smooth curve. Canals and cotton. Steam and railways. Steel and heavy engineering. Cars and oil. And most recently, Information and Communications Technology, which dates to 1971 and the invention of the microprocessor.
Her language is worth learning, because it does the explanatory work that the AI discourse is missing. She splits each surge into an installation period and a deployment period, separated by what she calls a turning point. Installation is the slow, chaotic first half where infrastructure gets built, mostly by investors who don’t yet know what they’re funding and users who haven’t shown up. The turning point is usually a financial crash that wipes out the speculative investors and transfers the infrastructure to whoever knows how to run it. Deployment is the long back half where the technology gets cheap, ubiquitous, and boring, and ordinary businesses make ordinary returns by applying it to real problems. Perez calls the mature phase a “golden age” when the conditions allow it, and a “deployment without a golden age” when they don’t. The ICT surge she dates to 1971 crashed in 2000, entered its deployment phase from roughly 2003 on, and, by her clock, is now well into maturity.
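If it helps to see that vocabulary as a structure rather than a narrative, here's a minimal sketch of the ICT timeline in code, using only the dates in the paragraph above. The encoding is mine, not Perez's, and it's purely illustrative.

```python
# A toy encoding of Perez's phase structure, pinned to the ICT dates above.
# The boundaries are hers as the piece reports them; the rest is scaffolding.

from dataclasses import dataclass

@dataclass
class Phase:
    name: str
    start: int
    end: int | None  # None = still ongoing

ICT_SURGE = [
    Phase("installation", 1971, 2000),   # microprocessor -> dot-com bubble
    Phase("turning point", 2000, 2003),  # crash transfers the infrastructure
    Phase("deployment", 2003, None),     # cheap, ubiquitous, boring
]

def phase_of(year: int) -> str:
    """Return the ICT-surge phase a given year falls in."""
    for phase in ICT_SURGE:
        if phase.start <= year and (phase.end is None or year < phase.end):
            return phase.name
    return "pre-surge"

print(phase_of(1999))  # -> installation
print(phase_of(2025))  # -> deployment (well into maturity, on Perez's clock)
```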
The question Colin is asking is where AI sits on that curve. His answer, quoted in the piece, is blunt:
Seen through a late-cycle lens, today’s markets show signs that we’ve entered the maturity phase of the computing and networks revolution. The theory, therefore, leads to specific, testable predictions about where capital should go and which strategies will outperform.
Three things that don’t fit the “new surge” story
The Next Wave pulls three indicators from Colin that I found convincing.
The first is the shape of recent funding. As Colin puts it:
The startup funding collapse of 2022 wasn’t just a correction, it may be structural. As investor Jerry Neumann argued in his landmark Productive Uncertainty, startups rely on uncertainty as a competitive edge. When good ideas become obvious to everyone, including well-funded incumbents, the startup model faces real strain.
The second is who actually launched the current AI era. Colin again:
Then came AI, revealing new dynamics. ChatGPT’s breakthrough didn’t come from a garage startup but from OpenAI, backed by Microsoft’s vast computing power. Google, Meta, and Amazon responded with billions. This pattern, big tech deploying huge capital against well-understood problems, fits the late-cycle theory exactly.
At the start of a real new surge, investment tends to be patchy and confused. The sector exists, but it isn’t legible yet. Nobody knows which companies matter. With AI, the money is so vast and so concentrated that it looks nothing like an early-stage bet. It looks like incumbents are defending turf.
The third is the one I find most persuasive:
Most tellingly, platform saturation now looks almost complete. Digital transformation has reached most sectors where computing and networks can plausibly work. What remains, healthcare delivery, education, construction, government services, may reflect the paradigm’s natural limits, not untapped markets.
That’s a hell of a list, and it matches my experience. The sectors still untouched are untouched for real reasons unrelated to processor speed or model size. They’re hard in ways software has historically been bad at.
Big-box retail for computing
This is the part of the argument I keep coming back to. The Next Wave makes the point that at the end of each Perez surge, there’s a late-deployment push in which the old paradigm aggressively drives into the corners of the economy it hadn’t reached. The cars and oil surge peaked in the 1970s, but that’s also when the UK built out its motorway network and the US filled in its suburbs with edge-of-town business parks and big-box retail. All of that was cars and oil extending their reach, not a new wave. It just felt new because it changed where people lived and shopped.
The author draws the line directly:
Colin’s arguing that AI is the equivalent of bigger roads and big box retail, different, but more about embedding the technology more deeply than the kind of transformational change that eventually causes a new and distinctive form of abundance.
Colin himself, quoted in the piece, puts it this way:
Like lean production, which extended mass production’s dominance for decades through efficiency gains, AI doesn’t mark computing’s end but its maturation. The technology spreads to previously untouchable sectors, creating the illusion of radical novelty whilst actually representing computing and networks’ final conquest of the physical economy.
Big-box retail for computing. Once you see it that way, you can’t unsee it. The vendor in my meeting wasn’t selling a new product; he was bolting the paradigm onto an existing workflow and slapping a new label on it. Microsoft isn’t inventing a new kind of economy; it’s squeezing more per-user revenue out of the enterprise software it already sold you. Notion isn’t entering a new category; it’s defending the old one by bundling features it can’t sell on its own. That’s what maturity looks like.
The crash this predicts
There’s a part of Perez’s model that the piece doesn’t dwell on, and I think it’s the most interesting one. Each surge, in her formulation, gets a financial crash as its turning point. The infrastructure gets overbuilt during installation, speculative money gets ahead of real economics, and the correction transfers the infrastructure to people who can actually run it. Canals had a crash. Railways had a crash. Electricity had a crash. The ICT surge crashed in 2000, when the dot-com bust wiped out the people who’d paid for the dark fiber and the server farms, and deployment-era companies like Google and Facebook got to use the cleaned-up infrastructure on the cheap.
If Colin is right that AI is a late deployment of ICT and not a new surge, you’d still expect a crash, because the capex buildout for AI infrastructure is running at installation scale while the returns, if the theory holds, will arrive at deployment scale. Hundreds of billions of dollars of GPUs, data centers, and power contracts, all on the theory that the returns will be extraordinary. If they turn out to be ordinary, which is what late-deployment returns always are, the math doesn’t work, and somebody eats a lot of writedowns.
That’s the sharpest testable prediction in the whole theory. Either the capex pays back at historical software margins, or it doesn’t. If it doesn’t, there’s a crash coming for the hyperscalers, and it’s probably closer than anyone wants to admit. I’m not predicting the timing. I’m saying the model predicts the shape, and if you believe the model, you should be planning for it.
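To make “the math doesn’t work” concrete, here’s the back-of-envelope shape of that bet. Every number below is an assumption I’ve made up for illustration, not a reported figure; swap in your own and the structure of the problem stays the same.

```python
# Back-of-envelope payback math for the AI infrastructure buildout.
# Every figure here is an illustrative assumption, not reported data.

capex = 300e9            # hypothetical industry-wide spend: GPUs, data centers, power
useful_life_years = 5    # assumed depreciation window for AI hardware
operating_margin = 0.30  # assumed margin on AI revenue after power and people

# Revenue needed each year just to cover depreciation at that margin,
# before anyone earns a return on the capital itself:
annual_depreciation = capex / useful_life_years
required_revenue = annual_depreciation / operating_margin

print(f"Annual depreciation: ${annual_depreciation / 1e9:.0f}B")
print(f"Break-even revenue per year: ${required_revenue / 1e9:.0f}B")
# -> $60B of depreciation demands $200B of annual AI revenue, every year,
#    just to tread water. Extraordinary returns make that easy; ordinary
#    late-deployment returns are where the writedowns come from.
```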
Where this could be wrong
I’ve been presenting this like it’s settled, and it isn’t. Let me give the argument the strongest objections I can.
The strongest case against Colin is that the scaling laws still hold. If you can keep buying more intelligence by buying more compute, and if that intelligence keeps unlocking capabilities that weren’t possible at the previous scale, then we aren’t in late deployment; we’re still in frenzy. The investment looks incumbent-dominated because only incumbents can afford the next scale, not because the game is over. In that reading, the capex makes sense, the returns will be extraordinary, and Colin is wrong in a way that will be obvious in five years.
I don’t dismiss this. It’s the real argument, and it deserves a real answer. My answer is that it’s exactly the argument you’d expect at the top of a frenzy, when the scaling story is still intact, but the marginal returns are already starting to flatten. GPT-4 to GPT-5 wasn’t GPT-3 to GPT-4. The jumps are getting smaller, and the costs are getting bigger. If the curve is still exponential, Colin loses. If the curve is starting to round over, Colin wins. The honest answer is we don’t know yet, and anyone who tells you they know is either selling something or pretending harder than they should.
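There’s a simple way to see why that’s the honest answer. An exponential and an S-curve tuned to the same early growth are nearly indistinguishable until the S-curve is already bending, which is exactly the regime we haven’t observed yet. This is a toy model with arbitrary parameters, not a claim about any particular benchmark.

```python
import math

# Toy model: exponential growth vs. a logistic (S-curve) with matched early
# growth. Parameters are arbitrary; only the shape comparison matters.

def exponential(t: float, r: float = 0.5) -> float:
    return math.exp(r * t)

def s_curve(t: float, r: float = 0.5, ceiling: float = 100.0) -> float:
    # Same initial value and growth rate, but saturates at `ceiling`.
    return ceiling / (1 + (ceiling - 1) * math.exp(-r * t))

for t in range(0, 16, 3):
    print(f"t={t:2d}  exponential={exponential(t):8.1f}  s_curve={s_curve(t):6.1f}")
# t= 0: 1.0 vs 1.0; t= 3: 4.5 vs 4.3; t= 6: 20.1 vs 16.9 -- close enough to argue about.
# t= 9: 90.0 vs 47.6; t=12: 403.4 vs 80.3 -- by the time they diverge, the bets are placed.
```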
The other objection, which I take more seriously, is that the real new surge might already be starting somewhere we aren’t looking. Perez’s model actually requires this. The late deployment of one surge overlaps with the quiet installation of the next. If that’s happening, it’s probably in biology, energy storage and generation, or materials science. Not in anything with a brain icon on the slide. The fact that nobody has a confident story about what the next surge is doesn’t mean it isn’t there. It means we’re in exactly the part of the cycle where you can’t see it yet.
This framing could be too neat. But it’s telling that even the objections land inside Colin’s model rather than outside it.
The pushback is real
The piece also assembles some evidence that normal people are not as excited about any of this as the keynotes suggest. Ted Gioia’s music blog gets quoted:
Most people won’t pay for AI voluntarily, just 8% according to a recent survey. So tech companies need to bundle it with some other essential product.
And Ed Zitron on Notion:
Notion bumped its Business Plan from $15 to $20 a month per user thanks to its new “AI features,” which I imagine sucked for previous business subscribers who didn’t want “AI agents” or any of that crap but did want things like Single Sign On and Premium Integrations. The result? Profit margins dropped by 10%. Great job everybody!
That’s not what the early phase of a beloved new technology looks like. It’s what the late phase of an overplayed hand looks like. The force-feeding of AI features into products people already paid for is a tell. So is the local pushback against data centers, which the piece notes covers the entire US map, red states and blue ones alike.
And there’s a separate contrast worth pulling out. The Next Wave quotes RAND, via the Exponential View newsletter, on how China is deploying AI:
In Washington, the AI policy discourse is sometimes framed as a ‘race to AGI.’ In contrast, in Beijing, the AI discourse is less abstract and focuses on economic and industrial applications that can support Beijing’s overall economic objectives.
Azeem Azhar adds that Chinese teams “publish leaner open source architectures and partner with specialists in areas such as healthcare analytics and adaptive learning.” The Next Wave reads this as a choice driven by constraints that happens to match the moment. China has less compute than the US and has to build lean, and lean is exactly what a late-cycle deployment phase rewards. The American framing, where AGI arrives any minute and reorders civilization, starts to look like an artifact of believing you’re at the start of a new surge when you’re actually at the end of an old one.
Why is this the good news?
I said at the top that this is the hopeful reading. Here’s why.
If AI is a new 50-year surge, then you’re either a hyperscaler with ten billion dollars of capex or you’re roadkill. There’s no path for a normal person to matter. The only meaningful question is which giant you work for.
If AI is the late phase of the digital surge, the calculus flips. The Next Wave points out the consequence:
late stage post-deployment technologies do produce returns on investment, but they’re normal returns, not increasing returns.
Normal returns mean normal businesses. The sectors that remain are hard, which means room for specialists who actually understand them. The advantage goes to whoever can walk into a hospital, a classroom, or a power plant and make the technology land, not to whoever raised the biggest round. Competence beats capital for the kinds of problems that are left, because capital has already tried the easy problems, and AI is the tool it’s using to squeeze the last efficiency out of them.
It also means the next real surge hasn’t started yet, or, if it has, it’s small, confusing, and unfashionable, and probably being built by people who don’t know they’re at the beginning of something. That’s exciting, not depressing. It means the game isn’t decided. It means there’s a real future to play for that isn’t a rerun of the one that started in 1971.
What Colin’s theory actually predicts
Before I close, let me pull the testable predictions out of the argument, because Colin makes a point of calling them testable, and the piece never enumerates them. If the late-cycle framing is right, you should expect to see:
- Ordinary returns on AI investment, not extraordinary ones, visible in the margins of companies that bet heavily on AI features.
- A crash or heavy writedowns on AI infrastructure capex within the next few years, as the installation-scale spend fails to find deployment-scale revenue.
- The Chinese lean-deployment approach outperforming the American AGI-race approach on real-world economic impact, measured by which specific industries actually get more efficient.
- Continued failure of AI products to get voluntary paid adoption, driving continued bundling and continued margin compression at the bundlers.
- The startup model remaining structurally weak in any sector where the problem is well understood and the incumbents have the data.
- Real action in the previously untouched sectors (healthcare, education, construction, infrastructure, government) happening through boring efficiency work, not dramatic platform plays.
Watch those over the next three to five years. They’re falsifiable. If most of them turn out wrong, Colin is wrong, and you should go find an AGI job. If most of them turn out right, the frame is holding, and you should plan accordingly.
What I take from it
A few things, if you find the framing useful.
The first is that AI isn’t a special category. It’s computing. The questions are the same ones you’d ask about any other software. Does it solve a real problem? Does the math work? Are the users asking for it, or are you shoving it at them? The AI label doesn’t change any of that; it just lets the vendor charge more.
The second is that the interesting work is in sectors that computing hasn’t yet reached. Healthcare, education, construction, government services, the industrial and infrastructure sectors. Those aren’t untapped markets waiting for a push; they’re hard for real reasons, and the work of making technology actually fit them is slow and unglamorous and pays in regular installments for a long time. I’ll take that over chasing the next keynote.
The third is that the late phase of a surge is always full of bundling and force-feeding, because the incumbents have run out of better ideas. If you show up with a product that does one thing well, charges for it honestly, and doesn’t try to upsell you on an assistant you didn’t ask for, you look unusual. That’s a competitive advantage nobody is pricing in yet.
And the last is a posture toward whatever comes next. Don’t bet on it, don’t predict it, don’t write thinkpieces declaring you’ve found it. Stay curious, stay solvent, and read the stuff nobody is paying attention to. Most of it won’t pan out, and that’s fine. You read widely so you’ll recognize the real thing when it shows up, not so you can be first in a race nobody has called yet.
Back to the vendor
Which brings me back to that meeting. The rep had a product that probably did something useful. Somewhere underneath the brain icon and the correlation engine slide, there was an improvement worth paying for. He just couldn’t describe it because he’d been trained to sell a revolution rather than a 15% improvement to an existing workflow. And 15% on an existing workflow is exactly what late-deployment technology is supposed to deliver. Boring, real, profitable, and invisible to anyone with a podcast.
If he’d walked in and said, “This tool catches 15% more credential-stuffing attempts than the version you have now, here’s how, here’s the data,” I think half the room would have signed. That’s the pitch the moment actually rewards. That’s big-box retail for computing, and there’s nothing wrong with big-box retail. It just isn’t the next industrial revolution, and the people who keep insisting it is are about to find out what happens when you build installation-scale infrastructure for a deployment-scale market.
Go read the original piece. It’s the best thing I’ve read on any of this in a long time.