Culture
January 16, 2026

The real innovation bottleneck isn't technology — it's fear

Companies are spending billions on AI while half their workforce doesn't feel safe enough to say it isn't working. The bottleneck isn't the technology. It's the culture.

The money is moving. The culture isn't.

McKinsey saved 1.5 million hours last year through AI. They deployed 25,000 AI agents across the firm. In six months, those agents generated 2.5 million charts. The productivity gains are real, and nobody serious is questioning whether AI can deliver value in the mechanical parts of knowledge work.

But McKinsey's own global managing partner, Bob Sternfels, has publicly named the three things those AI agents can't do: aspiration, judgment, and true creativity. Not philosophical ideals. The actual skills that separate teams that execute from teams that comply. The machines handle synthesis and search. The humans still need to set direction, make hard calls, and think originally.

All three of those skills require one condition most organizations haven't built: an environment where people feel safe enough to exercise them.

Half your people don't feel safe speaking up

The APA's 2024 Work in America survey found that 49% of employees experience low psychological safety at work. Nearly half the workforce doesn't feel safe enough to say "this isn't working" or "I think we're solving the wrong problem."

Google figured this out a decade ago. Project Aristotle studied 180 teams over two years, examining 250 different attributes to find what made some teams dramatically outperform others. The answer wasn't talent, resources, or strategy. It was psychological safety: whether people felt safe enough to take interpersonal risks, voice concerns, and admit mistakes without it becoming a career event.

The teams that performed best weren't the smartest. They were the ones where someone could say "I don't understand this" and get help instead of judgment.

A recent Infosys and MIT Technology Review study made the connection to AI explicit: 83% of business leaders acknowledge that psychological safety has a measurable impact on whether AI initiatives succeed. Only 39% describe their organization's current safety levels as "high." The gap between knowing it matters and actually building it is where most companies are stuck right now.

AI makes the stakes higher, not lower

AI isn't just another technology rollout. It's one that directly threatens people's sense of professional identity and job security. The IMF estimates it will negatively impact roughly 30% of jobs in advanced economies. People know this.

The mid-level analyst running reports that an AI can now generate in seconds knows it. The marketing lead writing copy that a language model can draft in minutes knows it. The project manager whose status updates and summaries are exactly the kind of "synthesis work" that McKinsey's agents just automated knows it.

Being cautious in that environment isn't paranoid. It's sensible. If your organization has a history of reorganizing teams every quarter or punishing visible mistakes, staying quiet isn't cowardice. It's self-preservation.

The same Infosys and MIT Technology Review study found that 22% of leaders have personally hesitated to lead AI projects because they were afraid of failing. If the people at the top feel this way, imagine what it's like three levels down.

This creates a specific pattern. The organization announces an AI initiative. Leadership is publicly enthusiastic. Privately, people are terrified. Not of the technology, but of looking incompetent. Of asking the obvious question. Of being the team that tries something and fails visibly.

So they do the rational thing in a low-safety environment: they go quiet. They nod in meetings. They comply without contributing judgment. Or they do something even more damaging: they experiment with AI tools privately, learn useful things, and never share those learnings because the culture doesn't reward that kind of visibility.

Researchers call this "shadow AI." People learning behind closed doors because the open floor doesn't feel safe enough for honest experimentation. The organization loses the compounding value of shared learning, and nobody even knows what they're missing.

The silence is expensive

Work Institute's 2024 retention data shows that 63% of employee exits are preventable. The top drivers are career stagnation and poor management. People don't leave because the work is hard. They leave because they stopped growing, and they stopped growing because the environment stopped being safe enough to take the risks that growth requires.

Now layer an AI mandate onto that environment. You're asking people who already feel stuck to embrace the most disruptive technology shift in a generation, without addressing why they feel stuck in the first place.

U.S. companies spent nearly $900 billion replacing employees who quit in 2023. Nine hundred billion. Most of those departures were preventable, and most of the reasons track back to the same root: cultures where people don't feel heard or safe enough to do their best work.

You can have the best AI stack on the market. If your people are too afraid to tell you the pilot isn't working or the strategy is wrong, you'll optimize the wrong things faster. That's not progress. That's expensive silence.

The real bottleneck

The companies that will navigate the AI era well won't be the ones that adopted tools first. They'll be the ones that built cultures where people could actually use them honestly.

Not cultures where adoption was mandated from the top and measured by login frequency. Cultures where a mid-level analyst could say "this output is wrong and here's why" and be heard. Where a team lead could shut down a pilot that wasn't working without it becoming a career liability. Where someone could admit they don't understand the tool yet and get help instead of a performance note.

Sternfels named aspiration, judgment, and creativity as the three skills AI can't replace. Every one requires the willingness to be wrong in front of other people. Aspiration means pushing for something that might not work. Judgment means calling out what others don't want to hear. Creativity means proposing ideas that haven't been validated yet.

Those skills don't survive in fearful cultures. Not because people don't have them. Because people don't feel safe using them.

Google studied 180 teams and found the differentiator wasn't talent. The APA surveyed thousands of workers and found half don't feel safe at work. MIT and Infosys asked leaders and found four out of five know psychological safety matters for AI success. Fewer than two in five have built it.

The innovation bottleneck isn't what you're buying. It's what you're not fixing.
