Behavior is Blocking $1 Trillion of Services

For most of the last decade, the gap between what technology could do and what people did with it was a technology gap. Today, the gap is a behavior gap. And human cognition does not scale with Moore’s law.

Max Kane
May 10, 2026
6 min read
Industry Trends

If you spend any time on X, you'd be forgiven for assuming AI is already replacing accountants, insurance brokers, and investment banking analysts, creating a perfect dystopia for the American worker.

The reality on the ground couldn't be more different.

Walk into a 50-person accounting firm in Cleveland, a regional hospital system in Tampa, or a 300-broker insurance agency in Charlotte. They've all signed up for ChatGPT. But press them on what part of their job AI now does for them, and the answer in most cases is, "None."

For most of the last decade, the gap between what technology could do and what people did with it was a technology gap. Today, the gap is a behavior gap. And human cognition does not scale with Moore’s law.

Behavior is the new blocker. It's about to make every confident forecast for AI's economic impact look early.

The Solved Problem

The first use case where AI capability has cleared the bar of "useful enough to fundamentally change a job" is software engineering.

Claude Code, Cursor, and Codex have completely changed how engineers work in a matter of months. Engineers went from writing code to prompting models faster than anyone expected, one of the fastest technology adoptions in history.

The people watching that happen draw a straight line from engineers to the rest of the economy. If software engineers rebuilt their workflow in months, surely accountants and brokers and analysts will too. It's why so many people who live inside this technology have a religious certainty that the economy is about to follow.

But I think they're wrong.

Engineers Are Not the Median Worker

The reason coding fell first isn't only that the humans doing the work are uniquely positioned to adopt new technology. It's also that the work itself has a structural advantage.

Software engineering is a deterministic environment with an instant feedback loop. The code either runs or it doesn't. The compiler is the AI's auditor in real time, and a wrong answer costs the engineer a few minutes of debugging.

The rest of the economy — the long, unglamorous middle of services — is probabilistic and high-stakes. An accountant's wrong answer surfaces in an IRS audit two years later. An underwriter's wrong answer surfaces when a claim hits in five years. In these fields, not only is the cost of a hallucination higher, but the time-to-discovery is longer.

This is the reason evaluations are so important to Applied AI. Companies can’t wait for actual feedback — filed claims, failed audits — so they rely on effective eval frameworks.
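To make "eval framework" concrete: in its simplest form it is just a scored loop over cases with known-good answers, standing in for feedback the real world won't deliver for years. Here is a minimal, hypothetical Python sketch — the case data, `toy_model`, and exact-match scoring are illustrative assumptions, not any particular vendor's API:

```python
# Minimal eval harness: score a model's outputs against known-good answers
# instead of waiting years for real-world feedback (audits, filed claims).

def run_eval(model_fn, cases):
    """model_fn: callable taking a prompt string and returning an answer string.
    cases: list of (prompt, expected_answer) pairs with known ground truth.
    Returns (pass_rate, list_of_failures)."""
    passed = 0
    failures = []
    for prompt, expected in cases:
        answer = model_fn(prompt)
        # Exact-match scoring; real evals often use rubric or model-graded scoring.
        if answer.strip().lower() == expected.strip().lower():
            passed += 1
        else:
            failures.append((prompt, expected, answer))
    return passed / len(cases), failures

# Illustrative stand-in for a real model call.
def toy_model(prompt):
    canned = {"Is a flooded basement covered under a standard HO-3 policy?": "no"}
    return canned.get(prompt, "unsure")

score, failures = run_eval(toy_model, [
    ("Is a flooded basement covered under a standard HO-3 policy?", "no"),
    ("Does an HO-3 policy cover wind damage to the roof?", "yes"),
])
```

The point of the loop isn't sophistication; it's that the score arrives in seconds rather than at the next audit cycle.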

There's also the human side. Engineers are young. They love to optimize. They view new tools as a status symbol, not a threat. Their managers are themselves engineers. The cost of trying a new IDE plugin is zero.

So engineers sit at an extreme of the distribution: the most adaptable workers, in the most forgiving feedback environment, in the workplaces most architected for change. Watching them adopt AI in months and concluding the rest of the economy will follow on the same curve is a category error. It's like watching teenagers adopt TikTok and forecasting the same uptake at a nursing home.

As Marc Andreessen recently put it, "They believe that because the technology makes something possible, 8 billion people are all of a sudden going to change how they behave. It's like — no."

It Always Takes Twenty Years

The ATM was technically operational in 1967. It took until the mid-1980s, close to twenty years, for it to become the default way Americans interacted with their bank. The hardware wasn't the bottleneck; behavior was.

Cloud was commercially available in 2006. Twenty years later, somewhere between 50% and 60% of enterprise workloads run in public cloud. Cloud didn't ask the average worker to change a thing, and it still took two decades to reach a bare majority.

Electronic health records (EHRs) were supposed to transform American medicine. The federal government passed the HITECH Act in 2009 with roughly $30B in incentives. Hospital adoption went from under 10% in 2008 to over 95% within a decade, one of the fastest enterprise rollouts in history.

But ask any doctor how it actually changed their work. The answer isn't "I'm more productive." It's "I now spend two hours on documentation for every hour I spend with a patient." EHRs are a study in the difference between installation and adoption — or, put differently, technology and behavior.

In the case of EHRs, the technology was designed for the back-office billers, administrators and regulators, not for the doctors doing the work. It got installed everywhere, but the behavior change it was supposed to enable never arrived.

AI sits at the intersection of all three lessons. The trust gap is wide (ATM). The required change is an enormous workflow change, not an infrastructural one; cloud was the easier kind, and it still took twenty years. And most enterprise AI today is being bought by CEOs to monitor productivity rather than by the workers who would use it to do their jobs (EHRs again).

What is consistent across these examples, however, is that new technology gets used in the shape of the old one for years before it becomes itself. The ATM was treated like a cautious teller. The cloud was treated like a remote data center. The EHR was treated like a digital filing cabinet. Each became something different only after a generation of users learned to think about the work itself differently — and that took decades.

Ivan Zhao made a related point recently, quoting Marshall McLuhan: "We are always driving into the future via the rearview mirror." New technology arrives, and we use it the way we used the old one. The behavioral migration to use AI natively will take a decade or more.

Pilots Don’t Drive GDP

The most under-discussed chart in the AI economy is the one comparing pilot revenue to production revenue.

MIT's NANDA initiative published a study last summer surveying 350 employees, interviewing 150 leaders, and analyzing 300 public AI deployments. The headline number was that 95% of enterprise generative AI pilots delivered no measurable P&L impact.

The exact percentage is debatable. The directional truth isn't: enterprises are buying AI at a furious pace but only managing to deploy it at a tepid one. CEOs have a board mandate to "do something on AI." Budgets exist. Pilots are easy to start, but the trickle-down to revenue, margin, and headcount has not shown up in the financials of any traditional business.

Paul Kedrosky, on a recent episode of Odd Lots, walked through the math: AI infrastructure spend in 2025 was on the order of $400B against AI revenue closer to $60B. A 6-to-7x gap. The dot-com fiber buildout peaked at 4x. Railroads in the 1870s peaked at 2x. That gap is, downstream of everything else, a bet on human behavior changing.

The academic literature agrees. MIT's Daron Acemoglu modeled AI's contribution to total factor productivity — economists' shorthand for how much more output the economy gets from the same labor and capital — and projected a 0.7% gain over the next decade. Goldman pushed back, but their bull case still tops out at single-digit percentage points spread over ten years.

Something has to give. My bet is the timeline does.

Insurance Brokers Are Not Engineers

Novella is an AI-native wholesale insurance brokerage. Every day our team works with brokers across the country placing hard-to-insure risks. These brokers are experienced professionals who run real businesses and have real opinions about technology.

They also do their job today, in 2026, almost exactly the way their parents did it in 1996. Email. PDF. Phone. Spreadsheet.

It's tempting to call that backwardness and insist AI will fix it. But it isn't backwardness. The PDF in insurance isn't a habit; it's a legal artifact. It's the document of record for the broker's E&O insurance, the regulatory filing, the audit trail, and the chain of liability between broker, wholesaler, carrier, and customer. In 1996, the PDF was the rational choice. Three decades later, it has become a standard that is almost impossible to uproot.

This is what economists call path dependency: the trajectory of an industry is shaped less by what's optimal today than by what was optimal at the moment of its formation. QWERTY keyboards beat better-designed alternatives because typists already knew them. VHS beat Betamax for the same reason. The broker's PDF-and-email workflow is the same kind of artifact, and every service has its own path dependency.

This is what makes AI adoption so hard. It isn't blocked by one thing. It's blocked by two, simultaneously. The individual broker has to change how they work. And the system they operate in — the carriers, the regulators, the lawyers, the comp plans, the accreditation regimes — has to change with them.

Services Over the Next Decade

If behavior is the blocker, the path AI takes through the service economy will look very different from the one The Timeline assumes. It will play out in three phases — each respecting how slowly users actually change, each building on the last.

Phase One is the world of today. AI agents quietly take over the work humans used to do: reading and responding to emails, filling out PDFs, drafting proposals, reviewing claims. The interface doesn’t change. The customer still gets a PDF; the broker still works in their inbox; the underwriter still uses the same forms. From the outside, nothing has changed. From the inside, an agent is doing 80% of the work.

The unlock here is incentives. Service businesses need to realign their incentives around outcomes delivered instead of work performed.

Phase Two begins when users start to notice. The work is faster, cheaper, and more reliable than they’re used to, and they start asking for something different. The interface begins to mutate. As Ivan Zhao has pointed out, a decade from now we won’t query AI the way we Google today; the same will be true for every service interaction. Brokers won’t ask their wholesaler for a marketing list; the marketing list will arrive before they ask. You won’t send your accountant your 1099s; an agent will sit on top of your accounts and automate your tax return for you to review.

The companies that win this phase won’t force users into new paradigms. They’ll watch users discover them, then build the company around what users naturally pull for. Behavior leads, technology adapts, then interfaces develop.

Phase Three is the deepest change, and the furthest out. The system around the work — the protocols, the documents, the regulatory artifacts — gets rebuilt for AI. Agents speak to agents. PDFs no longer need to be filled out because agents communicate directly. CRM interfaces fade because humans are no longer their primary users; agents are. Every workflow goes from built for human capability to built for agent capability.

This is the phase that produces the trillion-dollar TAMs. It’s also the slowest, because it requires every component to change at once. And it will only start once behaviors have changed.

None of this happens overnight. It will likely take a decade for major parts of the service economy to make it through Phase Three. But the first $1 trillion service company will not only build agents to handle human work; it will completely rebuild the system around the work. This is our thesis at Novella, and the approach we are taking to reinvent how insurance brokers work.

The technology is solved. The behavior is the blocker. The companies that fundamentally reinvent services are those that respect the sequence — let users lead, build the incentives, and rebuild the system around them, on the long arc that human behavior requires.
