Briefing
February 9, 2026

The commercial AI landscape in 2026

The AI market moves fast, and the coverage is mostly noise. A clear-eyed look at who the major players are, what is actually changing, and what it means if you are running a company.


The problem with most AI coverage

Most writing about AI falls into one of two camps: breathless hype aimed at investors, or dense technical analysis aimed at machine learning engineers. If you're a mid-market executive who needs to understand this well enough to make real decisions, neither helps.

This is the piece we'd want to read if we were in your position. No hype. No jargon tourism. No "top 10 AI tools that will TRANSFORM your business." Just a clear survey of who the major players are, what's actually shifting, and what it means for companies that need to act on this stuff rather than just tweet about it.

Enterprise spending on large language models more than doubled in the first half of 2025, reaching $8.4 billion. Inference spend alone is projected to hit $15 billion by the end of this year. That money is going somewhere, and the decisions companies make now about where to place their bets will compound for years. Understanding the market isn't optional anymore.

The Big Three

Three companies dominate commercial AI right now: OpenAI, Anthropic, and Google. Together they account for roughly 75-80% of enterprise AI spending. Everyone else is fighting for scraps or serving a niche.

OpenAI

OpenAI is the name everyone knows. They created the category with ChatGPT, and they've parlayed that head start into the broadest product portfolio in the space. GPT-5 and its variants (mini, nano, 5.2, the new 5.3-Codex) cover everything from cheap lightweight tasks to the most demanding enterprise workloads.

Their strengths: distribution, brand recognition, and a product suite that spans chat, API, enterprise deployment, and agentic coding tools. If you've heard of one AI company, it's this one. That awareness matters when you're trying to get organizational buy-in.

The interesting story is the trajectory. OpenAI held 50% of enterprise market share in 2023. By mid-2025 that had dropped to roughly 27%. They're still the revenue leader, targeting $30 billion for 2026, but the market is moving underneath them. Their response has been to push aggressively into enterprise sales and to ship models at a relentless pace. Whether velocity can substitute for the focus their competitors have found is the open question.

Anthropic

Anthropic is the company that most mid-market executives haven't heard of but probably should have. Founded by former OpenAI researchers, they've taken a different approach: fewer products, deeper enterprise relationships, and a bet on reliability and safety that turns out to matter when you're building production systems.

Their flagship model family is Claude. It's become the default choice for coding and enterprise applications. Claude commands 42% market share among developers and has grown from 12% to roughly 32-40% of overall enterprise share in two years. That growth didn't come from marketing. It came from the models being consistently better for the kinds of work enterprises actually do.

Anthropic is targeting $15 billion in revenue for 2026, roughly half of OpenAI's goal but growing faster. Their bet is that winning in the enterprise isn't about being the biggest name in the room. It's about being the most trusted one.

Google

Google has the distribution advantage that should make everyone else nervous. Gemini models are increasingly competitive on benchmarks, and Google can embed AI into products that billions of people already use: Search, Gmail, Workspace, Android, Cloud.

Where Google struggles is focus. They're a conglomerate running an AI lab, not an AI company. Their enterprise market share sits around 20%, which feels like underperformance given their resources. Competitive pricing helps (Gemini's input pricing runs at roughly a fifth of GPT-4's), and Gemini 3.0 could change the calculus. But historically, Google has had trouble turning research breakthroughs into coherent product experiences.

The bull case for Google is that distribution wins eventually. The bear case is that "eventually" isn't a strategy when the market is being carved up right now.

The open source question

If those three are the major commercial players, the next question every executive asks is: "Do we even need to pay for this? Can't we just use open source?"

The honest answer: it depends. And the "it depends" is doing real work in that sentence.

Meta's Llama family is the most widely deployed open model. It's genuinely capable, free for commercial use, and the foundation for thousands of fine-tuned variants. Llama 4 is competitive with commercial models on many tasks. Mistral, out of France, publishes open-weight models with no licensing restrictions. DeepSeek, out of China, made waves with R1, a reasoning model built on a fraction of the budget that Western labs spend.

The appeal is real: you control your data, you avoid vendor lock-in, you can fine-tune for your specific use case, and you don't pay per token. For companies with strict data residency requirements or highly specialized workflows, open source can be the right call.

But the trade-offs are equally real. Running your own models means managing infrastructure, hiring ML engineers who can keep things running, handling security and updates, and accepting that your model won't automatically improve the way a commercial API does. The talent alone can cost more than the API spend you're trying to avoid.
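
To put rough numbers on that trade-off, here is a back-of-envelope comparison of annual API spend against self-hosting. Every figure is an illustrative assumption, not a quote from any vendor; swap in your own workload, infrastructure, and salary numbers.

```python
# Illustrative total-cost-of-ownership comparison: commercial API vs. self-hosting.
# Every number below is an assumption for the sake of the sketch, not a quoted price.

api_cost_per_million_tokens = 3.00       # assumed blended input/output API rate ($)
monthly_tokens = 200_000_000             # assumed workload: 200M tokens per month
annual_api_spend = api_cost_per_million_tokens * (monthly_tokens / 1_000_000) * 12

gpu_hosting_per_year = 60_000            # assumed GPU or cloud-GPU cost per year ($)
ml_engineer_salary = 180_000             # assumed fully loaded cost of one ML engineer ($)
annual_self_host_spend = gpu_hosting_per_year + ml_engineer_salary

print(f"Commercial API: ${annual_api_spend:,.0f} / year")    # ~$7,200 at these assumptions
print(f"Self-hosting:   ${annual_self_host_spend:,.0f} / year")  # ~$240,000 at these assumptions
```

At these assumed volumes the API wins by a wide margin; the workload has to be very large, or the data constraints very strict, before self-hosting pays for itself.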

Here's the number that surprised us: despite all the excitement, open source models account for about 13% of enterprise AI workloads. That's actually down from 19% six months ago. The gap in frontier capabilities has narrowed, but for most companies, total cost of ownership still tips toward commercial APIs.

The practical guidance: use open source for specific, well-defined tasks where you have the engineering talent to support them. Use commercial APIs for everything else. Most mid-market companies will end up using both.

We cover the open source side in depth in our companion piece: the major models, the licensing reality, the geopolitics, and the economics of self-hosting.

The rest of the field

A few other players worth knowing about.

xAI (Grok). Elon Musk's AI company. Grok 3 is technically capable, but the product is tied to the X platform, which limits its enterprise utility. Watch from a distance.

Chinese AI labs. Alibaba's Qwen, Moonshot's Kimi, and others are producing models that compete on global benchmarks. For most Western mid-market companies, data sovereignty concerns and integration friction limit the practical relevance. But they're worth tracking because they put real competitive pressure on pricing across the entire market.

Microsoft. Not a model builder, but the most important AI distributor. Copilot products embed AI into the Office and Azure tools that many mid-market companies already live in. If you're a Microsoft shop, Copilot may be the least-friction entry point into AI adoption. The models underneath are from OpenAI, but the packaging and integration are Microsoft's.

Amazon (Bedrock). AWS's play is access, not models. Bedrock lets you use Claude, Llama, and others through a single API with AWS security and billing. If you're deep in AWS, it reduces integration overhead significantly.
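
As a small illustration of what "one API, many models" means in practice, here is a sketch using the boto3 SDK's Converse call. The model identifiers are placeholders; which models you can actually invoke depends on your AWS region and account settings.

```python
# Minimal sketch: calling two different model families through one Bedrock API.
# Assumes AWS credentials are configured and the models are enabled in your account;
# the model IDs below are illustrative placeholders.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

def ask(model_id: str, question: str) -> str:
    response = client.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": question}]}],
    )
    return response["output"]["message"]["content"][0]["text"]

# Swapping providers is just a different model ID; the request shape stays the same.
print(ask("anthropic.claude-3-5-sonnet-20240620-v1:0", "Summarize our Q3 delivery risks."))
print(ask("meta.llama3-70b-instruct-v1:0", "Summarize our Q3 delivery risks."))
```

The design point is the single request shape: security review, billing, and logging happen once, and the model behind it can change without rewiring your integration.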

What's actually changing

The players matter, but the structural shifts matter more. Four trends should be on your radar.

The pricing race

AI is getting dramatically cheaper, fast. Two years ago, the cost of running a capable model was measured in dollars per conversation. Now it's fractions of a cent. GPT-5 nano processes a million input tokens for five cents. Gemini Flash is similarly cheap.

This changes the math on which use cases are viable. Tasks that were too expensive to automate last year are now trivially cheap. If you evaluated AI for a specific workflow twelve months ago and rejected it on cost, run the numbers again.
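
To make "run the numbers again" concrete, here is a sketch of that math for a support-ticket triage workflow. The per-million-token price is the figure cited above; the ticket volume and tokens per ticket are assumptions you should replace with your own.

```python
# Back-of-envelope cost of automating support-ticket triage with a lightweight model.
# The per-million-token price comes from the figure cited above; the ticket volume
# and tokens-per-ticket are illustrative assumptions.

price_per_million_input_tokens = 0.05   # $0.05 per 1M input tokens (cheap-tier model)
tokens_per_ticket = 1_500               # assumed: ticket text plus instructions
tickets_per_month = 50_000              # assumed monthly volume

monthly_cost = (tickets_per_month * tokens_per_ticket / 1_000_000
                * price_per_million_input_tokens)
print(f"~${monthly_cost:.2f} per month in input costs")   # ~$3.75 at these assumptions
# Output tokens and a more capable model would raise this, but the order of
# magnitude holds: the model is no longer the expensive part of the workflow.
```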

The flip side: the cheapest models aren't the most capable. There's a real quality-cost trade-off, and the right answer depends on the task. A customer service chatbot and a financial analysis tool have very different requirements. The pricing race benefits companies that know what they need. It traps companies that default to "the cheapest option."

Reasoning models

The biggest technical shift in the past year: models that actually think before they answer. Reasoning models (OpenAI's o-series, DeepSeek R1, Claude's extended thinking) break complex problems into steps rather than generating a response in one pass.

For enterprise use, this is significant. Reasoning models are measurably better at tasks that require multi-step logic: financial analysis, code generation, legal document review, strategic planning. They're slower and more expensive per query, but the accuracy improvement often justifies the cost.

If your AI use cases involve anything more complex than search and summarization, you should be evaluating reasoning models specifically. The difference in output quality is not subtle.
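
For teams that want to see what that evaluation looks like at the API level, here is a minimal sketch using Anthropic's extended-thinking parameter. The model name and token budget are placeholders, and other providers expose a similar control (OpenAI's reasoning models take a reasoning-effort setting, for example).

```python
# Minimal sketch: requesting extended thinking from the Anthropic Messages API.
# The model name and budgets are placeholders; check current docs for exact values.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-5",                              # placeholder model name
    max_tokens=4_000,
    thinking={"type": "enabled", "budget_tokens": 2_000},   # let the model reason first
    messages=[{
        "role": "user",
        "content": "Walk through the renewal terms in the three attached contracts "
                   "and flag the one with the most renewal risk, with reasons.",
    }],
)

# The response interleaves "thinking" blocks with the final "text" answer;
# print only the answer the user should see.
for block in response.content:
    if block.type == "text":
        print(block.text)
```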

Agentic AI

The industry's current obsession, and for once the hype has substance behind it. Agentic AI refers to systems that don't just generate text when prompted. They pursue goals: breaking tasks into steps, using tools, making decisions, adapting based on results.

By the end of this year, an estimated 40% of enterprise applications will include some form of AI agent, up from single digits two years ago. The shift is from "AI answers questions" to "AI does work."

What this looks like in practice: an AI agent that doesn't just draft an email but monitors a project management tool, identifies blockers, drafts status updates, and sends them to the right people. Or one that reviews incoming contracts, flags unusual terms, and routes them to the appropriate reviewer with a summary.

The technology is real. The challenge is organizational. Agentic systems need clear boundaries, oversight mechanisms, and integration with existing workflows. Most failures aren't technical. They're "nobody defined what the agent should and shouldn't be allowed to do" failures.
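
To make the boundary point concrete, here is a stripped-down sketch of an agent loop where the allowed actions are defined explicitly in code. The call_model stub and the tools are hypothetical stand-ins; the structure, not the specifics, is the point.

```python
# Conceptual sketch of an agent loop with explicit, auditable boundaries.
# call_model is a hypothetical stand-in for any LLM call that proposes the next action.
from dataclasses import dataclass

@dataclass
class Action:
    tool: str
    argument: str

def call_model(goal: str, history: list[str]) -> Action | None:
    """Hypothetical stub: a real version would call an LLM and parse its reply."""
    if not history:
        return Action(tool="read_project_board", argument=goal)
    if len(history) == 1:
        return Action(tool="draft_status_update", argument=history[-1])
    return None  # the model decides it is done

# The boundary: only these tools exist, and only these can ever run.
ALLOWED_TOOLS = {
    "read_project_board": lambda arg: f"open blockers for: {arg}",
    "draft_status_update": lambda arg: f"draft update based on: {arg}",
    # Note what is *not* here: nothing that sends email or edits records without review.
}

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):                    # hard cap: the agent cannot loop forever
        action = call_model(goal, history)
        if action is None:                        # the agent reports done
            break
        if action.tool not in ALLOWED_TOOLS:      # refuse anything outside the allowlist
            history.append(f"REFUSED: {action.tool}")
            break
        result = ALLOWED_TOOLS[action.tool](action.argument)
        history.append(f"{action.tool} -> {result}")
    return history

print(run_agent("weekly status for the billing migration"))
```

The allowlist, the step cap, and the logged history are the oversight mechanisms: decisions about what the agent may do are made once, explicitly, and are visible to whoever has to answer for them.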

Vendor consolidation

This is the trend that will hit mid-market budgets hardest. 2026 is the year enterprises stop experimenting with seven different AI tools and start consolidating around one or two providers.

The logic is sound: experimentation was necessary when nobody knew which tools worked. Now the proof points exist, and the cost of maintaining multiple vendors (different APIs, different security reviews, different billing, different training for your team) is real. Companies are expected to spend more on AI this year, but through fewer vendors.

If you haven't picked your primary AI provider yet, this is the year. If you have, this is the year to negotiate from a position of understanding rather than urgency.

What this means for your company

Here's where the survey becomes practical. If you're trying to figure out what to do with all of this, the answer isn't "adopt AI." That's like saying "use electricity." The answer is specific and sequential.

Start with the problem, not the provider. The most expensive mistake in AI adoption is choosing a tool before understanding the workflow. Which processes in your business are bottlenecked by human attention? Where do your people spend time on repetitive, structured tasks that don't require judgment? Those are your first AI use cases. The provider choice comes after.

Pick a primary platform and commit. The consolidation trend exists because fragmentation is expensive. Choose one major provider based on which tools you already use and build your initial AI capabilities there. You can always add specialized tools later, but trying to evaluate everything simultaneously is how companies end up eighteen months into an "AI strategy" with nothing in production.

Budget for people, not just software. The data is consistent: 39% of mid-market companies cite lack of in-house expertise as their top AI challenge. Only 25% have fully integrated AI into their workflows. The gap isn't the technology. It's having someone who knows how to apply it. Whether that's hiring, upskilling your team, or bringing in an external partner, the human investment is where most companies under-spend.

Don't wait for the "right" model. A new model ships every few weeks. Waiting for the perfect one is a form of paralysis dressed up as diligence. The models available today are capable enough for the vast majority of enterprise use cases. The companies pulling ahead aren't the ones with the best AI. They're the ones that started building, learned from what worked, and compounded those learnings into the next iteration.

The bottom line

The commercial AI market is consolidating around a handful of serious players. Prices are falling fast. The technology is shifting from "answers questions" to "does work." None of this is speculative. It's happening right now, in measurable ways, with real dollars behind it.

The companies that get this right won't be the ones who picked the "best" AI provider or waited for the latest model release. They'll be the ones who started with a real problem, chose a capable tool, and learned faster than their competitors.

That's not a technology challenge. It's a clarity challenge. And clarity is where the real advantage lives.
