The case for small AI
The AI narrative is all moonshots. The results are coming from chatbots, coding agents, and service assistants. Meanwhile, companies that cut headcount to fund big AI bets are quietly hiring people back.

The moonshot problem
Every vendor pitch, every boardroom presentation, every AI strategy deck opens the same way. Massive transformation. Enterprise-wide deployment. Billions in infrastructure. The implicit promise is that AI's value scales with the size of the commitment.
Unisys looked at what's actually working and found the opposite. In their 2026 enterprise technology forecast, CEO Mike Thomson described the shift: "In 2026, we are going to see more functional deployments of AI, a focus on quality rather than cost-cutting, and the emergence of AI applications that will deliver repeatable, ROI-driven results."
The majority of successful deployments are not the large-scale initiatives getting the press coverage. They're smaller, task-based integrations folded into existing processes. Smaller datasets that are easier to clean. Lower investment. Smoother change management. Quicker results. The enterprise transformation playbook still gets funded. It just doesn't deliver.
Three things that work
Unisys identified three application types emerging as the repeatable pattern: chatbots for employees and clients, AI coding agents, and AI-driven service assistants.
What they share isn't sophistication. It's constraint. Each one addresses a specific, well-bounded task within an existing workflow. Nobody has to reorganize a department. Nobody has to retrain an entire division. The AI fits into what people already do, handles the repetitive parts, and gets out of the way.
Unisys describes them as "packaged, measurable, and quick to deploy." The key phrase in their analysis: "The wins will come from projects that assist rather than replace."
IWG CEO Mark Dixon made the same observation from an operator's perspective. "You can't use a bot for every job," Dixon said in a January interview with TheStreet. "But if you pick the right ones, you get a significantly better outcome." He described IWG's chatbots handling invoice questions and fielding multi-part customer emails with precision, and AI scheduling tools scanning weather data, transit times, and staffing databases to figure out exactly how many people to put where, and when. Simple applications. Measurable results. No organizational upheaval required.
The people you need are the people you cut
Here's the data point that should change how companies think about AI and headcount.
Forrester's Predictions 2026 report on the future of work found that 55% of employers now regret laying off workers because of AI. Half of all AI-attributed layoffs are projected to be reversed. Companies made those cuts based on what Forrester called "the future promise of AI" rather than proven capabilities. The AI wasn't ready. The work still needed doing. And the people who understood the work were gone.
The Unisys analysis arrived at the same conclusion from a different direction: "Companies that initially planned AI-related headcount cuts are reversing course, finding those reductions slow implementation and limit returns."
That sentence is worth reading twice. The companies that cut people to fund AI found that the cuts made the AI harder to implement.
This isn't a soft argument about morale or culture. It's structural. AI doesn't replace workflows. It changes them. The people who understand the existing workflows are the ones who can identify where AI fits, test whether it's working, and course-correct when it isn't. Cut them and you lose the institutional knowledge the AI was supposed to augment.
Klarna is the case study. Between 2022 and 2024, the company eliminated roughly 700 customer service positions and replaced them with an AI assistant built with OpenAI. At its peak, Klarna's AI handled two-thirds of all customer interactions. CEO Sebastian Siemiatkowski celebrated the efficiency gains publicly.
Then quality dropped. Customers got generic, repetitive responses to complex questions. Issues went unresolved. The brand took visible damage. By spring 2025, Siemiatkowski reversed course. "We went too far," he admitted. Klarna started hiring humans again, this time in a hybrid model where AI handles the simple interactions and people handle the rest.
Forrester sees the same pattern across the market. Companies cut based on projected capability. They discovered the capability wasn't there. They rehired, often quietly, sometimes offshore, sometimes at lower wages. The net effect was disruption without the promised return.
The two-speed problem
Dixon offered a framework that explains why small wins need protected space to emerge. "You need to run your business at two speeds," he said. "Separate your AI and new business methods team from your business-as-usual team. Business-as-usual teams are busy with day-to-day work and tend to resist change."
This clarifies why the small AI pattern succeeds and the moonshot pattern doesn't. Operators managing daily workloads can adopt a chatbot that handles invoice questions. They cannot simultaneously run their department and architect an enterprise-wide AI overhaul. Small tools meet people where they are. Big initiatives demand they be somewhere else entirely.
Faisal Hoque, writing in Fast Company, made the adjacent point: "Technology is rarely the constraint. Most companies can access impressive AI tools. What they lack are the management systems needed to deploy those tools strategically." His framework for building an AI innovation pipeline starts not with technology selection but with baseline assessment. What are you actually doing now? Where does AI genuinely help?
The companies getting value aren't asking "what can AI do?" They're asking "what are we already doing that AI can make better?" The first question leads to moonshots. The second leads to chatbots, coding agents, and service assistants.
Start where the work is
The pattern across the research runs in one direction. Big AI bets produce big disappointment. Small, targeted deployments produce repeatable value. Companies that cut people to make room for AI lose the people who make AI useful.
The path to real AI value isn't glamorous. It won't make for a great board deck. It starts with a question every vendor hopes you won't ask: what specific thing could AI do this week to make someone's actual work better?
The companies that keep asking that question will compound small wins into something the moonshot crowd never reaches.