AI ROI Is Forcing a Commercial Reckoning
The AI conversation in boardrooms has changed. Not long ago, the debate was about adoption. Should we experiment? Where can we apply it? What tools should we back?
That phase is over.
Today, the dominant question is simpler and more dangerous: where is the return, and who is accountable for it?
IBM has just released a whitepaper, “The Race for ROI”, on how AI is impacting productivity in the EMEA region. Based on interviews with 3,500 executives, it finds that approximately one in five organisations have already realised their ROI goals from AI-driven productivity initiatives, with a further 42% on average expecting to achieve ROI within 12 months across cost reduction (41%), time savings (45%), increased revenue (37%), employee satisfaction (42%) and increased Net Promoter Score (43%).
But more importantly, it exposes a contradiction that most AI strategies are quietly running into. The paper insists AI is already delivering ROI or will do so imminently. At the same time, it acknowledges that real value only shows up when organisations redesign operating models, rework value streams, introduce new governance structures, and realign teams around AI.
Those two ideas do not sit comfortably together.
If AI value were inherent and automatic, none of that redesign would be necessary. You would deploy the tools, bank the gains, and move on. The fact that so much structural change is required tells us something else is happening.
AI is forcing businesses back to first principles of value exchange. And many commercial models are not built for that conversation.
When Productivity Becomes Visible, Value Gets Questioned
The whitepaper leans heavily on productivity as the primary proof point. That is not accidental.
Productivity is one of the few outcomes that can be pointed to with confidence across functions and sectors. Time saved. Faster processing. Higher throughput. Fewer manual steps.
Take the City of Helsinki case cited in the paper. By deploying a network of AI-powered digital assistants across healthcare and social services, the city was able to handle up to 300 customer contacts per day across multiple languages. The outcome was clear: employees had more time to spend on higher-value work.
Or consider Lidingö stad Municipality in Sweden, which used generative AI to streamline data masking for public document requests, cutting processing time by 50%.
These are real, tangible gains. But they also illustrate the deeper shift underway.
Once productivity improvements are visible and measurable, they stop being abstract benefits. They become economic signals. And economic signals invite scrutiny.
If time is saved, where does it go? If costs are reduced, who captures the benefit? If processes are faster, why does pricing stay the same?
AI does not create these questions. It simply makes them unavoidable.
The Hidden Shift: Risk Is No Longer One-Sided
Traditional SaaS and services models were built on a quiet assumption. The customer carried most of the risk.
You paid for access. You adopted the tool. If value did not materialise, that was framed as an adoption issue, a change management problem, or a mismatch of expectations.
AI disrupts that arrangement.
When vendors claim that AI will materially improve productivity, decision-making, or efficiency, and those outcomes are measurable, customers inevitably ask who stands behind them.
The Al Rajhi Capital case in the IBM whitepaper is instructive. By unifying platforms and embedding AI into a “super app” for investment services, the firm reportedly saw a 40% increase in brokerage business volume and a 1,000% increase in asset management onboarding.
Those are not marginal improvements. They are commercially meaningful outcomes.
But they also raise a harder question. If AI is now positioned as a driver of growth and productivity at that level, does the vendor still get paid the same way? Does the customer still absorb all the uncertainty? Or does accountability start to shift?
This is the part most AI strategies are not ready to confront.
Why Traditional Commercial Models Start to Fracture
Most SaaS pricing models assume predictable margins and low marginal costs. Once the software is built, selling more of it does not meaningfully increase cost.
AI breaks that assumption.
Inference, orchestration, data access, and continuous agentic workflows introduce cost structures that scale with usage, not seats. The more value a customer derives, the more it can cost to deliver.
If pricing remains fixed while costs become variable, margins erode. If pricing becomes variable, customers demand transparency and proof.
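To make those margin mechanics concrete, here is a minimal sketch with purely illustrative, assumed figures (the per-seat price and per-interaction delivery cost are not from the report). It shows how gross margin per seat erodes as usage grows while the price stays fixed.

```python
# Illustrative only: all figures are assumptions, not vendor benchmarks.

def gross_margin(price_per_seat: float,
                 interactions_per_seat: int,
                 cost_per_interaction: float) -> float:
    """Gross margin per seat when price is fixed but delivery cost scales with usage."""
    variable_cost = interactions_per_seat * cost_per_interaction
    return (price_per_seat - variable_cost) / price_per_seat

PRICE = 50.0       # fixed monthly price per seat (assumed)
UNIT_COST = 0.02   # inference + orchestration cost per interaction (assumed)

for usage in (200, 1000, 2500):
    print(f"{usage:>5} interactions/seat -> margin {gross_margin(PRICE, usage, UNIT_COST):.0%}")

# 200 interactions/seat  -> 92% margin
# 1000 interactions/seat -> 60% margin
# 2500 interactions/seat -> 0% margin, the seat is now sold at cost
```

The same usage growth that signals customer value is exactly what compresses the margin.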
The subscription model also assumes that value is durable even when it is not precisely measured. AI undermines that by making value easier to observe and easier to compare. This is why buyer scepticism is rising alongside AI adoption. It is not resistance to AI. It is resistance to paying for potential without evidence.
AI as the Catalyst for the Value Conversation
What the IBM paper inadvertently shows is that we are moving into a value economy, whether we like it or not. In a value economy:
Value must be identifiable, not implied
Value must be measurable, not assumed
Value must be visible to the customer, not just the vendor
Value must be monetised with accountability
AI accelerates all four pressures.
The Nedgia, grupo Naturgy case in the energy and utilities sector makes this clear. By embedding generative AI into contact centre operations, the company reduced waiting times and automated routine tasks, freeing human agents for higher-value support.
Again, the gain is real. But once value is embedded that deeply into operations, it becomes harder to justify generic pricing and soft commitments.
This is why so many AI initiatives stall after pilot success. The technology works. The demos impress. But when the conversation turns to scaling, pricing, and accountability, the commercial foundations are missing.
The Real Strategic Choice Leaders Face
The most important decision leaders face is not which AI tools to adopt. It is whether they are willing to operate in a world where value exchange is explicit. That requires uncomfortable changes.
Measurement stops being a reporting exercise and becomes a commercial capability. Finance moves from cost control to unit economics and value recovery. Sales shifts from persuasion to proof. Customer success becomes accountable for outcomes, not activity.
Most importantly, leadership must decide who owns value delivery end to end.
This is not about buying more tools or running more pilots. The practical moves are structural and commercial.
1. Force a single, explicit definition of “value” for AI initiatives
If productivity, cost savings, revenue uplift, and experience are all being claimed at once, you do not have a value definition. You have a narrative.
Pick one primary value outcome per initiative and document:
What changes
For whom
By how much
Over what time period
If you cannot do this cleanly, ROI conversations will stay vague.
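If it helps to make that discipline concrete, the four fields can be captured as a simple record per initiative. The field names and example values below are illustrative assumptions, not a standard.

```python
# Illustrative sketch: field names and example values are assumptions, not a standard.

from dataclasses import dataclass

@dataclass
class ValueDefinition:
    initiative: str
    what_changes: str        # the single primary outcome being claimed
    for_whom: str            # who experiences or captures the change
    by_how_much: str         # the measurable target, with its unit
    over_what_period: str    # the window in which it must show up

claims_triage = ValueDefinition(
    initiative="AI-assisted claims triage",
    what_changes="average handling time per claim",
    for_whom="claims operations team",
    by_how_much="-20% against the locked baseline",
    over_what_period="two quarters after rollout",
)
```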
2. Establish baselines before scaling anything
Many organisations are reporting gains without clear baselines. That works once. It does not survive CFO or procurement scrutiny.
Before scaling AI use cases:
Lock baseline metrics
Agree attribution rules
Decide what counts as success versus noise
This is not analytics hygiene. It is commercial credibility.
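As an illustration of what “locking a baseline” and “success versus noise” can mean in practice, here is a minimal sketch. The metric, baseline value, noise threshold and attribution share are all placeholder assumptions to be agreed up front, not prescriptions.

```python
# Illustrative sketch: baseline, noise threshold and attribution rule are assumptions.

BASELINE_HANDLE_TIME_MIN = 12.5   # locked before the AI rollout (assumed)
NOISE_THRESHOLD = 0.05            # changes under 5% are treated as noise (agreed up front)
ATTRIBUTION_SHARE = 0.7           # share of any improvement credited to AI vs other changes

def attributed_gain(observed_handle_time_min: float) -> float:
    """Return the AI-attributed improvement, or 0.0 if the change is within noise."""
    raw_improvement = (BASELINE_HANDLE_TIME_MIN - observed_handle_time_min) / BASELINE_HANDLE_TIME_MIN
    if abs(raw_improvement) < NOISE_THRESHOLD:
        return 0.0                # not distinguishable from normal variation
    return raw_improvement * ATTRIBUTION_SHARE

print(f"{attributed_gain(12.1):.0%}")   # 0%  -- a 3% shift stays inside the noise band
print(f"{attributed_gain(9.0):.0%}")    # 20% -- 28% raw improvement, 70% attributed to AI
```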
3. Map AI usage to unit economics, not just outcomes
The report talks about ROI but avoids cost mechanics. You cannot.
You need to understand:
Which AI-driven activities drive variable cost
How those costs scale with usage
Where margin risk sits as adoption grows
If finance cannot explain this, pricing will lag reality.
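One way to make this mapping tangible is to list each AI-driven activity, its cost driver and how it scales, then look at where variable cost concentrates relative to revenue. The activities, unit costs and volumes below are assumptions for illustration only.

```python
# Illustrative mapping: activities, unit costs and volumes are assumptions.

activities = [
    # (activity, cost driver, unit cost, monthly volume per customer)
    ("document summarisation", "tokens processed", 0.004, 20_000),
    ("agentic case handling",  "workflow runs",    0.15,  1_500),
    ("semantic search",        "queries",          0.001, 50_000),
]

monthly_revenue_per_customer = 3_000.0   # assumed flat subscription

total_variable = 0.0
for name, driver, unit_cost, volume in activities:
    cost = unit_cost * volume
    total_variable += cost
    print(f"{name:<24} scales with {driver:<18} -> {cost:>8.2f}/month")

print(f"variable cost share of revenue: {total_variable / monthly_revenue_per_customer:.0%}")
# If adoption doubles these volumes while the subscription stays flat, the share doubles too.
```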
4. Decide explicitly where accountability sits
Someone must own value delivery end to end.
Not adoption. Not enablement. Not experimentation.
Actual economic impact.
Without this, AI remains a distributed hobby and ROI remains a slide.
5. Stress-test your pricing model against measurable outcomes
If customers can now see value more clearly, they will question pricing more aggressively.
Ask:
What happens if customers demand proof before renewal?
What happens if productivity gains are benchmarked across vendors?
What happens if AI usage grows faster than revenue?
If your model only works while value stays fuzzy, it is already fragile.
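A quick way to run the third question is to project margin when usage grows faster than revenue. The starting figures and growth rates below are assumptions chosen only to show the shape of the risk.

```python
# Illustrative stress test: all starting figures and growth rates are assumptions.

revenue = 1_000_000.0         # annual revenue today
ai_delivery_cost = 200_000.0  # usage-driven cost today (inference, orchestration, data)

REVENUE_GROWTH = 0.15         # 15% a year (assumed)
USAGE_GROWTH = 0.60           # 60% a year (assumed): usage outpaces revenue

for year in range(1, 4):
    revenue *= (1 + REVENUE_GROWTH)
    ai_delivery_cost *= (1 + USAGE_GROWTH)
    margin = (revenue - ai_delivery_cost) / revenue
    print(f"year {year}: gross margin {margin:.0%}")

# year 1: 72%, year 2: 61%, year 3: 46% under these assumptions
```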
The IBM report is right about one thing: the race is on.
But it is not a race to deploy AI.
It is a race to credible value creation.
AI makes value visible. Visibility forces accountability. Accountability reshapes pricing, contracts, operating models, and leadership behaviour.