Strategy
February 2, 2026

Your company doesn't need an AI strategy — it needs a trust strategy

The bottleneck to AI value isn't technology. It's trust. Employees, buyers, infrastructure leaders, and markets are all saying the same thing from different directions. Until you solve for trust, your AI strategy is theater.


The gap nobody wants to name

Every company has an AI strategy now. Or at least a slide deck that says they do. Budgets are approved. Tools are purchased. Mandates are issued. And yet the results keep disappointing.

Fifty-six percent of CEOs told PwC that AI has failed to boost revenue or lower costs. Ninety-five percent of companies in an MIT study saw no meaningful revenue growth from generative AI. Fifty percent of GenAI projects get abandoned after proof of concept, according to Gartner.

The usual explanation is execution. Better tools, better training, better data pipelines, better governance. But when you look at where AI initiatives actually stall, a different pattern emerges. The bottleneck isn't technology. It's trust.

Not trust as an abstraction. Trust as a measurable, systemic constraint operating across four dimensions simultaneously: your employees, your buyers, your infrastructure, and your market. A trust deficit in each dimension is quietly strangling AI value, and almost nobody is treating them as a connected problem.

Your buyers trust AI for research. They trust humans for decisions

Forrester surveyed nearly 18,000 global business buyers for its 2026 Buyers' Journey report. Ninety-four percent use AI during their buying process. That number looks like validation. It isn't.

Buyers lean on AI for speed and breadth of insight. They use it to scan markets, compare options, surface information they'd otherwise miss. But when it's time to make the actual decision, they validate everything against trusted human sources. Peers. Industry analysts. Product experts. People within their networks whose judgment they've tested.

The buying groups are getting larger, not smaller. On average, 13 internal stakeholders and nine external participants now influence a purchase. For purchases that include AI features, the buying group doubles in size compared to those without. More people means more validation. More validation means more trust gates.

The implication is uncomfortable for anyone selling AI-powered anything. Your buyers will use AI to find you. They will not use AI to trust you. That last mile is still human, and it's getting longer, not shorter.

Your employees don't trust that AI has their back

Carolyn Dewar, a McKinsey senior partner, named the dynamic plainly in Fortune: the AI adoption story is haunted by fear. Leaders ask their people to run bold experiments with AI while simultaneously launching efficiency programs that look like tomorrow's job cuts. When people feel exposed, they play small.

The data from Section's survey of 5,000 knowledge workers confirms what that looks like in practice. Two-thirds of individual contributors feel anxious or overwhelmed about AI. Eighty-one percent of C-suite executives think their company has a clear, actionable AI policy. Twenty-eight percent of individual contributors agree. That's not a communication gap. That's a trust gap.

The consequences are predictable. People nod in meetings. They comply without contributing judgment. They experiment privately and never share what they learn because the culture doesn't reward that kind of visibility. Researchers call it shadow AI. The organization loses the compounding value of shared learning, and leadership doesn't even know what they're missing.

The relationship between trust and adoption is direct. When people trust their manager, they use AI more. When they don't, they treat the mandate as one more thing to survive rather than something worth engaging with. No amount of tool selection or training budgets fixes that. You can't optimize your way past fear.

Your infrastructure isn't trusted to hold

Cockroach Labs surveyed 1,125 senior cloud architects, engineers, and technology executives across North America, EMEA, and Asia-Pacific. Eighty-three percent said that, without major upgrades, AI-driven demand will push their data infrastructure to failure within two years. Thirty-four percent expect failure within eleven months.

That's not a long-term planning problem. That's a ticking clock.

Nearly two-thirds of respondents said their leadership teams underestimate how fast AI demands will outpace existing systems. The AI strategy assumes the foundation will hold. The people responsible for the foundation are saying it won't.

This is trust at the infrastructure level. Every AI initiative runs on the assumption that the underlying systems can handle the load. When the engineers who maintain those systems are saying otherwise, the strategy is built on a foundation that the foundation team doesn't believe in. That's not a technical problem. That's a credibility problem that manifests as technical failure.

Your market doesn't trust what you're claiming

AI washing is now a litigation category. The SEC fined two investment advisers a combined $400,000 for exaggerating their AI capabilities. Innodata's stock dropped more than 30% after a report called its AI technology "smoke and mirrors." Shareholders filed suit. The FTC has opened its own line of enforcement.

The pattern goes beyond legal exposure. Companies that overstate AI capabilities are eroding market trust in AI broadly. Every inflated claim makes the next legitimate claim harder to believe. Buyers are already wary. Regulators are watching.

Prosper Insights & Analytics found that 26% of consumers explicitly say they don't trust that AI has their best interests in mind. That's not a question about capability. It's a question about intent. And when more than a quarter of your potential customers are questioning whether the technology is even on their side, the problem isn't the technology's performance. It's the market's confidence that anyone is using it honestly.

The data that connects it

Burson released a study in January quantifying the financial value of corporate reputation. The headline finding: companies with strong reputations earn a measurable "reputation return" of up to 4.78% in additional shareholder value, creating what it calls a $7.07 trillion global reputation economy.

But the finding that matters for the AI conversation is buried in the workplace dimension. Though ranked lowest in perceived importance by executives, workplace reputation showed an 11.8% performance gap between the best and worst companies in the study.

Burson's global corporate affairs lead, Matt Reid, put it directly: "Businesses must go beyond having an 'AI strategy' and create an 'AI people strategy,' because how they manage this transition will be a powerful statement about how they value their employees." Companies that invest in reskilling and co-create the future with their people earn a reputation dividend. Companies that treat AI as a tool for headcount reduction pay a reputation tax, with efficiency gains offset by reputational losses.

That's the connecting thread. The trust gap isn't just an employee problem or a buyer problem or an infrastructure problem or a market problem. It's the same problem expressing itself in four different dimensions. And the companies that solve it in one dimension tend to solve it in the others, because the underlying behavior is the same: treating people as the point, not the obstacle.

What you're actually looking at

Strip away the language of AI strategy and what you're left with is a trust problem with a technology label.

Your employees don't trust that AI adoption won't cost them their jobs. So they comply without engaging. Your buyers don't trust AI outputs enough to make decisions on them alone. So they add more humans to the process. Your infrastructure leaders don't trust the foundation to hold. So they're planning for failure rather than planning for scale. Your market doesn't trust that companies are being honest about AI capabilities. So shareholders file suit and buyers walk away.

Each of these trust failures has a measurable cost. Combined, they explain most of the gap between what AI promises and what it delivers.

The companies that figure this out won't be the ones with the best models or the largest training budgets. They'll be the ones that recognized the actual constraint. Not "how do we deploy AI faster?" but "how do we build the trust that lets AI deployment work?"

That question changes everything downstream. How you communicate about AI internally. What you promise the market. Whether your employees become allies or remain bystanders.

Call it a trust strategy if you want. Call it whatever you want. But if you're still calling it an AI strategy and wondering why the results aren't showing up, the name might be the first thing worth changing.
