
Why 90% of Companies Using AI Are Getting 0% Value

Here's the uncomfortable truth about enterprise AI in 2026: almost every company is using AI. Almost none of them are getting real value from it.

According to Pertama Partners' 2026 AI Project Failure Statistics report, 80% of AI projects fail to deliver expected value. MIT's GenAI Divide report puts it even higher — 95% of generative AI pilots never make it past the experimental phase. And PwC's 2026 Global CEO Survey found that 56% of CEOs report getting nothing measurable from their AI investments.

Meanwhile, budgets keep climbing. EY's research reports that 88% of mid-to-large organizations now spend over 5% of their IT budget on AI. S&P Global found that 42% of companies scrapped the majority of their AI initiatives in 2025 — up from 17% the year before. That's not a learning curve. That's a pattern.

Companies are spending more on AI than ever before and getting less from it than they expected. Not because the technology doesn't work. Because how they're implementing it doesn't work.

This post breaks down why the gap exists, what's driving it, and what the organizations that actually extract value from AI are doing differently.

1. The Adoption Gap: Buying AI Is Not the Same as Using AI

There's a critical difference between deploying AI and adopting AI. Deploying means you've purchased tools. Licenses are active. Maybe you ran a pilot. Maybe leadership gave a presentation about the company's "AI-first" future. Adopting means your teams have changed how they work. AI is built into their day-to-day workflows — not sitting in a tab they opened once during onboarding.

Writer's 2026 Enterprise AI Adoption survey — conducted in partnership with Workplace Intelligence across 2,400 employees and C-suite leaders globally — found that 79% of executives acknowledge struggling with lagging ROI, strategy gaps, and internal power struggles around AI. Nearly half (48%) said AI adoption at their company has been a massive disappointment. And 75% of executives admitted their company's AI strategy is "more for show" than actual internal guidance.

The EY 2025 Work Reimagined Survey, which surveyed 15,000 employees and 1,500 employers across 29 countries, tells a similar story. 88% of employees use AI at work — but primarily for basic tasks like search and summarization. Only 5% are using AI in ways that actually transform how they work. And 64% of employees report increased workloads over the past year, yet almost none are using AI to address that.

That's the gap. Not access. Not awareness. Depth of use. Most organizations have deployed AI everywhere and adopted it nowhere.

2. Why Most AI Initiatives Fail: Three Root Causes

The failure pattern isn't random. Across every major enterprise AI study published in the past year, three root causes appear again and again.

The first is treating AI as a tool problem instead of a people problem. BCG's research on AI transformation identified what they call the 10-20-70 rule: only 10% of AI success comes from algorithms, 20% from technology and data, and 70% from people and processes. Seventy percent of the value comes from the part most companies spend the least time on. Organizations that fail at AI almost always over-invest in the technology and under-invest in the change management, workflow redesign, and team enablement required to make it stick.

Harvard Business Review's February 2026 research confirms this. AI initiatives stall not because employees don't understand the technology — but because their anxiety about relevance, identity, and job security drives surface-level use without real commitment. Leaders who treat AI adoption as a psychological and contextual challenge, not just a technical rollout, are far more likely to convert experimentation into sustained impact.

The second root cause is experimenting without strategy. Writer's 2025 enterprise AI adoption report found that enterprises without a formal AI strategy report only 37% success in adoption. Those with a strategy report 80%. The difference isn't the tools. It's the direction. Most companies are running AI pilots with no defined success metrics, no workflow integration plan, and no accountability structure. They're testing AI in isolation — a marketing team trying ChatGPT here, a sales team testing a copilot there — with no coordination across departments. Pertama Partners' data found that projects with pre-defined success metrics achieve 4.5x the success rate of those without them. Yet the majority of pilots launch without any measurable criteria for what success looks like.

The third root cause is automating broken processes. The most common AI mistake isn't picking the wrong tool. It's applying AI to a process that was already broken. BCG's research draws a clear line here: organizations that succeed with AI fundamentally redesign their workflows. Those that fail try to automate old, broken processes. AI doesn't fix bad processes — it scales them. If your content workflow has no structure, AI will produce more unstructured content faster. If your sales process has no qualifying framework, AI will qualify leads using the same flawed criteria at higher volume. Speed without direction is just faster chaos.

3. The Numbers That Should Concern Every Executive

Let's put the adoption gap into perspective with the data that actually matters:

According to Pertama Partners' 2026 analysis, 80% of AI projects fail to deliver expected value. MIT's GenAI Divide report found that 95% of generative AI pilots never move beyond the experimental phase. PwC's 2026 Global CEO Survey reports that 56% of CEOs say they've gotten nothing measurable from AI adoption efforts.

S&P Global data shows that 42% of companies scrapped the majority of their AI initiatives in 2025 — more than double the 17% abandonment rate from the year prior. Writer's 2026 enterprise survey found that 75% of executives describe their company's AI strategy as performative rather than operational. And EY's Work Reimagined Survey shows that only 5% of employees are maximizing AI to transform their work, despite 88% having access.

On the financial side, Pertama Partners reports that large enterprises lose an average of $7.2 million per failed AI initiative. Completed-but-failed projects cost an average of $6.8 million while delivering only $1.9 million in value — a negative 72% ROI. Abandoned projects still average $4.2 million in sunk costs. These aren't edge cases. This is the mainstream experience with enterprise AI right now.
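As a quick sanity check, the negative 72% ROI figure follows directly from the cost and value numbers above. The short sketch below just restates that arithmetic (the function name is ours, not from the report):

```python
def roi(value_delivered, total_cost):
    """Return ROI as a fraction: net gain (or loss) divided by cost."""
    return (value_delivered - total_cost) / total_cost

# Completed-but-failed projects: $1.9M value delivered on $6.8M spent
print(round(roi(1.9, 6.8), 2))  # -0.72, i.e. a negative 72% ROI
```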

4. What the Organizations Actually Getting Value Are Doing Differently

The companies that extract real ROI from AI share a consistent set of behaviors. None of them are about picking better tools.

They define success before they start. Pertama Partners' data shows a 4.5x improvement in success rates when metrics are defined before project approval. Successful organizations refuse to approve AI projects without quantified success criteria and minimum viable outcomes defined upfront. They establish accountability for business results and track adoption metrics alongside technical ones.

They invest in people, not just platforms. BCG reports that 98% of employees who receive proper AI upskilling go on to generate new AI use cases on their own. 85% increase their usage over time. The investment in training compounds. The investment in licenses alone doesn't. Deloitte's 2026 State of AI report found that the AI skills gap is the single biggest barrier to integration — and the number one way companies are addressing it is through education, not role redesign or workflow automation. Training first, tooling second.

They redesign workflows — not just individual tasks. There's a meaningful difference between "use ChatGPT to write emails faster" and "redesign the entire client communication workflow with AI as a core layer." The first is a productivity tweak. The second is adoption. HBR's March 2026 analysis of the "last mile" problem concluded that the primary obstacle is rarely model quality or data availability. It's the work of embedding AI into how teams actually operate day to day. The organizations pulling ahead aren't layering AI onto existing processes. They're rethinking the processes entirely.

They make adoption visible and measurable. Successful companies track adoption alongside technical metrics. They know which teams are using AI, how deeply, and what outcomes it's producing. They have dashboards, not assumptions. Without adoption data, CIOs have no credible answer when the board asks: is this working?

5. The BCG 10-20-70 Rule: Where Your AI Investment Should Actually Go

BCG's 10-20-70 framework provides the clearest model for why most AI investments underperform. According to their research across hundreds of enterprise AI implementations, the value distribution breaks down like this: 10% of results come from algorithms and AI models, 20% come from technology infrastructure and data, and 70% come from people, processes, and organizational change.

The implication is direct. Most companies are pouring budget into the 10% — evaluating models, licensing platforms, building tech stacks — while spending almost nothing on the 70% that actually determines whether AI delivers value. The algorithm doesn't matter if your team doesn't use it. The platform doesn't matter if your workflows aren't designed for it. The data doesn't matter if your people don't trust the outputs.

Leading organizations flip this ratio. They invest the majority of their AI budget into training, workflow redesign, change management, and adoption tracking. The technology is the smallest line item — not because it doesn't matter, but because without the people and process work, even the best technology produces zero return. As BCG's 2025 report on agents and AI value creation reinforces: the guiding principle remains to follow the 10-20-70 rule to emphasize people and processes, regardless of how the underlying technology evolves.
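To make the ratio concrete, here is a minimal sketch of what a 10-20-70 allocation looks like in practice. The $1M total is a hypothetical figure for illustration, not from BCG's research:

```python
def allocate_ai_budget(total):
    """Split an AI budget per BCG's 10-20-70 rule:
    10% algorithms/models, 20% technology/data, 70% people/processes."""
    return {
        "algorithms_and_models": 0.10 * total,
        "technology_and_data": 0.20 * total,
        # training, workflow redesign, change management, adoption tracking
        "people_and_processes": 0.70 * total,
    }

budget = allocate_ai_budget(1_000_000)
print(budget["people_and_processes"])  # 700000.0
```

Most failing programs invert this: the largest line item is platform licensing, and the people-and-process work is whatever is left over.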

6. A Framework for Moving from Experimentation to Operational AI

Based on the research and what we see working with organizations at Elevated Strategy AI, the path from experimentation to real value follows four phases.

Phase 1 is Experimentation. This is where most companies are stuck. The focus is on tool access and initial pilots — evaluating AI platforms, running limited tests, letting individual teams explore. There's value here, but only if it's structured. Without defined criteria for what you're testing and how you'll measure it, experimentation becomes permanent. ISG's 2025 State of Enterprise AI Adoption report found that only 31% of AI use cases studied had reached full production — double the prior year, but still leaving the majority stuck in pilot.

Phase 2 is Strategy. This is the most-skipped phase. The focus shifts to direction and success criteria — defining what AI adoption actually means for your organization, mapping which workflows benefit most from AI integration, establishing measurable outcomes, and building a phased roadmap. Writer's data shows this single step changes adoption success rates from 37% to 80%.

Phase 3 is Implementation. The focus here is workflow redesign and team enablement. This is where you embed AI into actual processes, train teams on how to use it within their specific roles, build prompt systems and internal knowledge bases, and create the support structures that sustain daily use. This is the 70% that BCG's research says determines outcomes.

Phase 4 is Optimization. The focus becomes measurement and scaling. Track adoption across teams and departments. Measure ROI against the criteria defined in Phase 2. Identify what's working and expand it. Retire what isn't. Deloitte's 2026 report notes that the number of companies with 40% or more of their AI projects in production is expected to double in six months — but only among those with structured approaches to scaling. This phase never ends — it's the continuous loop that separates organizations building long-term AI capability from those running one-time projects.

Most companies jump from Phase 1 directly to Phase 3, skipping the strategy work entirely. They implement without direction. That's why the failure rate is so high.

7. What You Can Do This Week

If you're reading this and recognizing your own organization in the data, here are three actions you can take immediately.

First, audit your current AI usage. Not licenses — actual usage. Which teams are using AI? For what tasks? How often? If you don't have this data, that's your first problem. You can't improve what you can't see.

Second, define one workflow to redesign with AI. Not a tool to test. A workflow. Pick the highest-value, most repeatable process your team runs and map what it looks like with AI embedded — not bolted on. Start with how work actually gets done today, then design how it should work with AI as a core layer.

Third, measure adoption, not just deployment. Create a simple dashboard that tracks three things: who is using AI, how deeply they're using it, and what outcomes it's producing. If you can't answer those three questions, your AI investment is a guess — not a strategy.
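A first-pass version of that dashboard can be as simple as aggregating usage events. The sketch below assumes a hypothetical event format of (user, team, task type) — real data would come from your AI platform's usage export or SSO logs — and reports coverage (who is using AI) and distinct task types per active user (a rough proxy for depth):

```python
from collections import defaultdict

# Hypothetical usage events: (user, team, task_type).
events = [
    ("ana", "marketing", "summarization"),
    ("ana", "marketing", "workflow_automation"),
    ("ben", "marketing", "summarization"),
    ("cam", "sales", "summarization"),
]

def adoption_snapshot(events, headcount):
    """Per team: share of people using AI at all, and average number
    of distinct task types per active user (a rough depth proxy)."""
    tasks_by_user = defaultdict(set)
    team_of = {}
    for user, team, task in events:
        tasks_by_user[user].add(task)
        team_of[user] = team
    snapshot = {}
    for team, size in headcount.items():
        active = [u for u, t in team_of.items() if t == team]
        coverage = len(active) / size
        depth = (
            sum(len(tasks_by_user[u]) for u in active) / len(active)
            if active else 0.0
        )
        snapshot[team] = {"coverage": coverage, "avg_task_types": depth}
    return snapshot

print(adoption_snapshot(events, {"marketing": 4, "sales": 5}))
# marketing: 0.5 coverage, 1.5 avg task types; sales: 0.2 coverage, 1.0
```

The third question — what outcomes the usage is producing — can't come from logs alone; it requires joining usage data to the success criteria defined before implementation.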

The gap between companies that buy AI and companies that use AI is widening every quarter. The organizations that close it in the next 12 months will have a structural advantage that compounds. The ones that don't will have spent millions on tools no one uses.

The technology is ready. The question is whether your organization is.

FAQ

Why do most AI initiatives fail to deliver business value?

The primary reasons are organizational, not technical. BCG's research shows that 70% of AI success depends on people and processes — areas most companies under-invest in. Common failure patterns include experimenting without strategy, automating broken workflows, and treating AI as a tool deployment rather than a change management initiative. Writer's 2026 survey found that 75% of executives admit their AI strategy provides no real operational guidance.

What is the BCG 10-20-70 rule for AI?

BCG's 10-20-70 framework shows that only 10% of AI value comes from algorithms, 20% from technology and data infrastructure, and 70% from people, processes, and organizational change. Leading organizations invest the majority of their AI budget into training, workflow redesign, and adoption tracking — not just technology licensing.

How can organizations measure AI adoption effectively?

Effective AI adoption measurement goes beyond license counts and login data. Organizations should track which teams are actively using AI, how deeply it's integrated into their workflows, and what measurable business outcomes it produces. This requires dedicated dashboards that connect usage data to ROI metrics defined before implementation begins. Deloitte's 2026 State of AI report identifies the AI skills gap as the top integration barrier, reinforcing that measuring adoption requires tracking both capability and behavior change.

What percentage of AI projects fail in 2026?

Multiple sources report high failure rates. Pertama Partners' 2026 analysis shows an 80% project failure rate across enterprises. MIT's GenAI Divide report found that 95% of generative AI pilots fail to move beyond experimentation. PwC's 2026 Global CEO Survey reports that 56% of CEOs have seen no measurable value from their AI investments. S&P Global data shows that 42% of companies abandoned the majority of their AI initiatives in 2025.

What separates companies that succeed with AI from those that fail?

Successful organizations consistently do four things: they define success metrics before starting AI projects (which produces 4.5x better outcomes according to Pertama Partners), they invest heavily in team training and upskilling (BCG reports 98% of upskilled employees generate new use cases), they redesign workflows around AI rather than layering it onto existing processes, and they track adoption data continuously to measure real usage and outcomes — not just deployment.

How much do failed AI initiatives cost enterprises?

According to Pertama Partners' 2026 data, large enterprises lose an average of $7.2 million per failed AI initiative. Completed-but-failed projects average $6.8 million in cost while delivering only $1.9 million in value — a negative 72% ROI. Abandoned projects still average $4.2 million in sunk costs.

WRITTEN BY
Nardeep Singh

AI Strategist

Nardeep Singh is a marketing technology executive with 12+ years leading AI implementation and digital strategy in healthcare. She is the founder of Elevated Strategy and creator of AI Nuggetz, a growing community of marketing and technology professionals learning to apply AI. She holds an M.S. in Information Technology Management.
