A reading of what is actually happening in the market, for those trying to make sense of AI adoption at the organisational level.

Three numbers worth keeping in mind

Base44, a startup launched in January 2025 by an Israeli founder (ex-Unit 8200, Forbes 30 Under 30), reached $1M ARR within three weeks of launch. Six months later, Wix acquired it for $80M in cash. Number of employees at the time of acquisition: 8. No VC money, no marketing spend.

Lovable, an AI app builder from Sweden, went from $0 to $200M ARR in 12 months, one of the fastest software growth curves in history.

Anysphere (Cursor) passed $1B in annualised revenue with fewer than 300 people. At the $500M ARR mark, that worked out to $5M per employee, the highest revenue-per-employee ratio in the history of software.

These numbers are not anomalies. They are a signal. If you are tempted to dismiss them as Silicon Valley curiosities, don't: the next threat to your market position will not come from your obvious competitor. It will come from a three-person team that did not exist a year ago.

The question is why. And, more importantly, what this means for your operating model.

AI is a multiplier, not an equaliser

The dominant narrative of the past two years has been that AI "democratises" productivity. That suddenly everyone is able to do things they could not do before.

That is not quite what is happening. What is happening is more interesting.

Take a person with no programming knowledge. With AI, they produce a value we can call 1. Without AI, they would have produced zero. In percentage terms the improvement is infinite; in absolute terms, the output is small.

An average programmer with AI: value 5.

A very good programmer with AI: value 10.

The same pattern applies to every function. The average product manager goes from 1 to 5. The good one goes from 5 to 10, and gets there much faster. The average designer reaches 5. The experienced one reaches 10.

AI does not deliver the same value to everyone. It is a multiplier of existing experience. The more experienced the user is in their domain, the bigger the multiplier.

On its own, this is not dramatic. The dramatic part happens when we add one more variable.

The 10×10 Rule

Think of someone who is both a programmer and understands design. They can translate design elements into code automatically, see trade-offs in real time, and make decisions that would otherwise require two people and three meetings.

With AI, this person does not become 10x better. They become 10 × 10 = 100x.

Why? Because cross-skills, when they live in the same mind, do not add up. They multiply. AI simply amplifies the multiplier across every dimension at once.

Add product or customer discovery skills to the same person, and the multiplier reaches 1,000x.

This is the 10×10 Rule. And it explains everything.
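The arithmetic behind the rule can be sketched in a few lines, using the article's own stylised 10-per-skill figures (these are rhetorical numbers, not measurements):

```python
import math

# Each number is one AI-amplified domain skill, on the article's scale
# where 10 = an expert in that domain working with AI.

def specialists_output(skills):
    """Skills split across separate people: handoffs make them add."""
    return sum(skills)

def cross_fluent_output(skills):
    """Skills living in the same mind: AI amplifies each one, and they multiply."""
    return math.prod(skills)

coder_and_designer = [10, 10]
print(specialists_output(coder_and_designer))   # 20: two experts in a meeting
print(cross_fluent_output(coder_and_designer))  # 100: one cross-fluent person

with_discovery = [10, 10, 10]  # add product/customer-discovery fluency
print(cross_fluent_output(with_discovery))      # 1000
```

The asymmetry is the whole argument: adding a second expert to the team gains 10 points, while adding a second expertise to the same person gains 90.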

It explains why an 8-person team delivered $80M in value in six months. They were not working harder than a 500-person team. They had people with multiple skills in the same head, and AI was multiplying each of those skills at the same time. The founder of Base44 is the clearest example. He has publicly said that he has not written front-end code for months. AI writes the code for him. He then uses AI to make product decisions, uses AI to make design decisions, and uses AI to do customer discovery, all in the same session. Every one of his cross-skills is being multiplied by AI in parallel. That is how the output compounds.

It explains why Cursor reached $1B ARR with 300 people instead of 3000. They were not simply "more innovative". They hired a specific type of person: generalists with cross-functional fluency, for whom AI multiplies not one skill but three or four at once.

The Multiplier Gap

If the 10×10 Rule explains what is possible, the Multiplier Gap explains why most companies never get there.

You receive the mandate from the board to "adopt AI". You hire a Head of AI. You run pilots in support, in engineering, maybe in marketing. Each function sees a 2x or 3x improvement in some metric. You celebrate. You write a case study.

And then you wonder why the ROI never looked like the presentations you saw at Davos.

The answer is simple: you took AI as a sum instead of a product.

When you apply AI inside each silo separately, each function improves linearly. Engineering goes 10x, but product cannot spec fast enough, sales cannot sell fast enough, legal cannot approve fast enough. The bottleneck moves, it does not disappear. One 10x is cancelled out by the next 1x.

This is the Multiplier Gap: the distance between the multiplier you could have captured and the one you actually capture in practice. In theory 100x, in practice 2x.

That 2x is 2% of the value. And your 3-person competitor takes the other 98%.
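A toy model makes the gap concrete. The bottleneck framing and the stylised figures (10x for the AI-adopting function, 2x for the rest) come from the article; the model itself is a simplification, not a benchmark:

```python
import math

def silo_capture(speedups):
    """AI applied silo by silo: a sequential value chain moves at the
    pace of its slowest link, so the bottleneck migrates rather than
    disappearing."""
    return min(speedups)

def compounded_capture(speedups):
    """AI applied across cross-fluent functions with shared context:
    the per-function multipliers compound."""
    return math.prod(speedups)

theory = compounded_capture([10, 10])  # the 10x10 ceiling: 100x
practice = silo_capture([10, 2])       # engineering 10x, downstream barely moves: 2x
print(f"captured {practice / theory:.0%} of the available multiplier")  # captured 2% ...
```

The point of the model is that silo adoption is governed by a minimum while cross-fluent adoption is governed by a product, and no amount of optimisation inside one silo changes which of the two functions applies.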

What it looks like when it goes wrong: a familiar story

Let me describe a pattern I see repeatedly in companies trying to adopt AI "seriously".

The CEO announces an AI-first strategy. An AI Center of Excellence is set up. A budget is created for 10 to 15 AI use cases. Each function proposes one: a support chatbot, document generation in legal, lead scoring in sales, resume parsing in HR.

Twelve months later, six of the ten pilots have been frozen. The remaining four are running in production but with limited adoption. ROI is positive but mediocre. The CEO hears from consultants that "AI is overhyped" and starts losing interest.

Meanwhile, a competitor that started 18 months ago with three people now has twenty-five, and is taking your customers with products you are still discussing in committee.

The diagnosis is not that AI failed. It is that you applied it as if it were a software upgrade, when it is organisational redesign. You put faster tools into a slower structure, and were surprised that the speed did not change.

The question that remains is: what does a structure look like that is not slower than the tools running on top of it?

The organisational equivalent of AI-native teams

If an 8-person team can be worth $80M in six months, the goal for a larger company is not to shrink to 8 people. It is to artificially reproduce the conditions that enable this kind of speed.

These conditions have three components, and all three must be in place at the same time.

Cross-fluent teams, not just cross-functional. In an individual, skills live in the same mind and multiply. The organisational equivalent is not "I have a designer and an engineer in the same meeting". It is having people who actually speak the language of two or three functions. An engineer who understands customer discovery. A PM who reads code. A designer who understands unit economics. Without this, the team is just a co-location of specialists: the skills add instead of multiplying.

A shared context layer that works as a "shared brain". In an individual, whatever one skill knows, the other one knows automatically. In a company, this does not happen naturally. Context is scattered across dozens of tools, across Confluence, Slack, emails, ERP, CRM. To capture the multiplier, customer data, decisions, lessons learned from failures, product metrics, and market signals must be accessible to every function, and to every function's AI, at the same time. This is not a data warehouse project. It is a new architectural layer.

Short feedback loops. For an individual with cross-skills, feedback is instant. In a company, this translates into very short cycles between discovery, build, ship, and learn. If an idea travels from product to engineering to QA to marketing to sales over several months, the multiplication dies in the handoff delays. AI makes each function fast individually, but if the handoffs stay slow, you gain very little.

Taken together, the organisational equivalent is a synthetic entity that thinks like a person with multiple skills, with the same speed and the same coherence, but with the output of an entire company.

Why it rarely happens: the two levels of obstacles

In our conversations with a wide range of organisations, we see the same obstacles repeat. They fall into two categories.

Architectural obstacles relate to how information and work flow. Fragmented context: each function improves in isolation because its AI sees only one piece of the value chain. Bottleneck migration: when you apply AI only to one function, the bottleneck moves to the next one. Engineering produces 10x, product cannot spec fast enough, sales cannot sell fast enough. The success of one function becomes frustration for the others. Speed mismatch: engineering and design adopt AI quickly, legal and compliance do not, and friction grows. These three are technical-organisational and can be solved with new architecture.

Human obstacles are harder. Middle management as a friction point: middle layers of management existed to coordinate and approve. When AI-enabled pods can self-coordinate, middle management becomes the point that cuts speed instead of adding it. This is the most politically painful part of the transition, and usually the one that gets ignored. Taste as the new scarce asset: AI amplifies judgement, it does not replace it. For decades, companies have been trained to hire for experience and specialisation, not for taste and judgement, and existing HR mechanisms do not know how to change direction. Wrong metrics: per-function KPIs show the wrong picture when value is produced in a multiplicative way through the pod.

The difficult part is that companies usually deal only with the first category. They buy platforms, unify data, run integrations. And then they wonder why the multiplier never kicked in, when the real bottlenecks were human.

The reframe that is needed

This is not a new problem. It is an old pattern in a new form.

When the industrial revolution brought mass steel production in the 19th century, civil engineers suddenly had a material with completely different properties from stone: lighter, more flexible, capable of much longer spans. And what did they do? For decades, they kept designing bridges the way they had designed them in stone. Heavy, massive, with short spans and huge piers. They were using the new material with the logic of the old one.

It took a generation for the Eiffels and Roeblings to appear and show that steel is not just stone that weighs less. It is a material that allows a completely different architecture: suspension bridges, cable-stayed designs, light structures with spans that would have been unthinkable in stone.

AI today is at the same stage. Most of the AI we see in companies is stone bridges made of steel. Same logic, same structure, just faster. The real multiplier only shows up when the architecture changes, not the material.

That is why the question is not "how do I add AI to my existing organisation". It is "what kind of organisation would I build if I started from scratch today with AI as a given".

The first frame gives you pilots that do not scale. The second requires a commitment to change four things at once: structure, metrics, hiring, and decision rights. Whoever pulls only one of the four levers gets linear improvement, and ends up believing that AI was overhyped.

AI adoption is not a project with a budget, a PM, and a deadline. It is an operating model change. And that is the central difference between companies that capture the multiplier and those that only see a linear improvement.

Four questions worth bringing to your next leadership meeting

If the change is structural, then the right questions are not about AI. They are about your structure. Here are four that are worth bringing to the next leadership meeting.

  1. Take a typical decision you used to make five years ago, say a new product launch or a strategy shift. If you made it today with AI, would it have the same structure, just faster? Or would it actually be different as a process? If the answer is "the same but faster", you are building stone bridges out of steel.
  2. If we were starting the company from scratch today with AI as a given, how would it be structured? Into how many pods, with what skills in each, and with how many approval layers between an idea and its execution? How far is that picture from our current structure?
  3. Where we have applied AI, does each function improve on its own, or does the product between them kick in? Specifically: do our AIs share a common context, or does each one see only its own silo?
  4. How many of our top 20 people actually have cross-functional fluency across two or three domains? If the answer is fewer than three, what is our strategy for the next 18 months?

The answers to these questions say more about your position in the next decade than any AI strategy deck.

Because in the end, the real question is not whether you will adopt AI. It is whether you will adopt it as a tool or as a structural element. Companies that see it as a tool will capture 2%. Those that see it as a structure will capture 100%. The difference between the two will not show up in quarterly results right away, but it will be visible in five years, when it will be too late to reverse.


The Multiplier Gap is one of the core patterns we observe in our conversations with organisations trying to do serious work with AI. If you recognised your own organisation in any of the above, the conversation continues here.