How to identify which parts of your business AI will disrupt first
You're looking at the wrong part of the stack
Every industry has a version of the same conversation. Construction: "Software can't swing a sledgehammer." Manufacturing: "Robots can't do custom fabrication." Healthcare: "You can't automate bedside manner." The argument changes shape but the conclusion is always the same. We're safe.
The conclusion is wrong. Not because AI will swing sledgehammers anytime soon. It won't. But because the sledgehammer isn't where the disruption starts.
AI enters an industry through its knowledge work. Not the jobs that happen at a desk all day. The knowledge tasks embedded inside every role, including roles nobody thinks of as knowledge work. The cost estimator who pulls historical project data and synthesizes it into a bid. The insurance adjuster who reviews photos, cross-references damage patterns, and generates a repair estimate. The scheduling coordinator who searches across crew availability, materials lead times, and site constraints to assemble a project timeline.
Search. Synthesis. Estimation. These three categories of task are where AI capabilities are sharpest right now. They exist in every industry. The question isn't whether your industry will be affected. It's which roles in your company are primarily doing these things today.
How AI actually enters an industry
Construction: the $2 trillion blind spot
The US construction industry generates over $2 trillion in annual revenue. It needs roughly 500,000 new workers every year just to keep pace with demand. Physical labor is in genuine shortage. And the industry's default assumption is that AI is irrelevant because the work is physical.
Meanwhile, robotic painting machines are already operating on job sites. AI cost estimation tools can analyze blueprints, identify structural elements, and generate material takeoffs in minutes rather than the days it takes experienced human estimators. Companies that adopted digital construction tools are reporting productivity improvements of up to 50% and cost reductions of 10-20% on large projects.
But the robots are a sideshow. The real story is what's happening to the knowledge work inside construction.
An experienced estimator spends most of their time on two activities: reading plans and quantifying materials. Both are search and synthesis tasks. AI can already identify walls, floors, ceilings, and roof planes from drawings. It can recognize electrical symbols, calculate square footage, and cross-reference material databases. The mechanical work of estimation, the part that used to take decades to master, is compressing into minutes.
What remains? Judgment. Understanding construction sequencing, interpreting ambiguous architect notes, assessing risk, knowing which drawings conflict and why it matters. Those tasks require experience that AI can't replicate. But they represent a fraction of the estimator's total workload.
The pattern: AI doesn't replace the role. It compresses it. The role that used to require 40 hours a week of search, synthesis, and estimation with 5 hours of judgment becomes 5 hours of judgment with AI handling the rest. That's a fundamentally different job. And most of the people in that role aren't being prepared for the transition.
Auto insurance: the completed pattern
Construction is watching the disruption arrive. Auto insurance already lived through it.
A decade ago, insurance claims adjustment was a skilled profession built on field experience. Adjusters inspected vehicles in person, assessed damage based on years of pattern recognition, and wrote repair estimates drawing on deep knowledge of parts, labor rates, and structural integrity. It was expertise-heavy work that seemed immune to automation.
Then AI learned to read photos.
The progression was systematic. First, computer vision tools started processing damage photos, identifying dented panels, broken glass, and structural deformation. The accuracy was rough initially but it improved fast. Adjusters who once performed full assessments started spending their time reviewing AI-generated estimates rather than creating their own. The field work that defined the profession narrowed to edge cases the AI couldn't confidently classify.
Then the review step compressed too. AI-generated repair estimates now arrive more than 80% complete within two minutes of a photo submission. Computer vision achieves over 95% accuracy in estimating damage severity, matching or exceeding human adjusters. Lemonade, a digital-first insurer, resolved 30% of claims without any human involvement by 2023. Geico reduced entry-level adjuster positions by 25% the following year.
The Bureau of Labor Statistics projects an 11% decline in insurance adjuster positions over the next decade. That number sounds manageable until you map the progression: experienced professionals who built careers on search and synthesis (finding the right repair protocol, synthesizing visual damage data with cost databases) watched those tasks get automated layer by layer. Each layer was small enough to feel manageable. The cumulative effect was a fundamental restructuring of the profession.
The people in these roles didn't fail. The task composition of their roles shifted underneath them. The search and synthesis that constituted the bulk of the job was exactly the capability zone where AI was strongest.
The diagnostic filter
The construction and insurance examples share a pattern. Both industries assumed they were safe because the visible work was physical or experiential. Both got disrupted through the invisible knowledge work embedded in every role.
The diagnostic question is simple: which roles in your company are primarily doing search, synthesis, or estimation?
Search
Any task where a person looks through information to find what's relevant. Pulling historical project data. Reviewing compliance documentation. Finding the right vendor spec. Searching customer records for precedent. Scanning market data for patterns.
AI is already strong here. McKinsey deployed 25,000 AI agents across its own operations. Those agents saved 1.5 million hours last year on search and synthesis tasks alone. That's not a projection. That's a measured result inside one of the world's most information-intensive organizations.
Search tasks are high-exposure because they're mostly mechanical. The person doing the searching applies judgment in deciding what's relevant, but the act of searching is the bottleneck. Remove the bottleneck and the role changes shape.
Synthesis
Any task where a person combines information from multiple sources into a coherent output. Writing reports. Summarizing meetings. Compiling competitive analyses. Drafting project status updates. Creating presentations from raw data.
McKinsey's AI agents generated 2.5 million charts in six months. BCG built over 36,000 custom GPTs, including tools trained on hundreds of presentation templates to produce client-ready slides. These are synthesis tasks at industrial scale.
Synthesis is where AI output quality varies most. Simple synthesis (combining structured data into a formatted report) is already automated. Complex synthesis (interpreting what data means in a specific business context) still needs human judgment. But the line between simple and complex moves every few months.
Estimation
Any task where a person uses data and experience to produce a prediction. Cost estimates. Time projections. Demand forecasts. Risk assessments. Pricing models.
This is where the construction story is sharpest. AI cost estimation tools produce outputs that experienced estimators used to take days to create. In auto insurance, AI damage estimates reach roughly 95% accuracy, matching or exceeding human adjusters, while completing in minutes rather than hours.
Estimation tasks are deceptive because they feel like pure judgment. Experienced estimators describe their work as "feel" and "intuition." They're right that judgment plays a role. But most of that intuition is pattern recognition built from thousands of prior data points. Pattern recognition from large datasets is exactly what AI does best.
The judgment that remains is real: understanding context, assessing risk, making calls about ambiguous situations. That's genuinely human work. But it represents 10-20% of the estimation task. The remaining 80-90% is searchable, synthesizable data processing that happens to live inside someone's head rather than a database.
Where AI capabilities stand right now
Rather than predicting where AI will be in two years (predictions age badly), here's what the technology can do in each category today.
Search: already automated at scale. Enterprise AI search tools process unstructured documents, surface relevant information across databases, and return results with citations. The jump from keyword search to semantic search (understanding what you mean, not just what you typed) is behind us. Research shows that roughly 80% of US workers have at least 10% of their daily tasks exposed to large language models. For roles that are search-heavy, that exposure is much higher.
Synthesis: reliable for structured, improving for complex. Combining data into formatted reports, summarizing documents, generating charts, drafting standardized communications. These are solved problems at enterprise scale. Complex synthesis that requires understanding organizational context and producing novel insight is not solved. The gap between "AI produced a first draft" and "AI produced the final answer" is still wide for anything that requires analytical judgment. The practical line: if the task has a template (even an implicit one), AI can handle it.
Estimation: strongest where data exists. AI estimation performs remarkably well when historical data is rich and structured. Cost estimation, damage assessment, demand forecasting, pricing optimization. In these domains, AI matches or exceeds human accuracy while operating orders of magnitude faster. It struggles when data is sparse, context is novel, or variables interact in ways historical patterns don't capture. McKinsey's CEO identified three skills that remain irreplaceable: aspiration (deciding what to aim for), judgment (deciding what matters), and creativity (seeing what the data doesn't show). These define the boundary of what stays human.
The response framework
Once you've mapped your roles against these three categories, the distribution tells you what to do.
High exposure: restructure around the judgment that remains
Roles where the majority of work is search, synthesis, or estimation. The auto insurance pattern applies: the mechanical work will compress, and the role will shrink to the judgment layer.
Don't panic or eliminate positions preemptively. Restructure. Identify the judgment tasks that remain. Invest in developing those capabilities in your people. Begin integrating AI tools for the mechanical work now so the transition is gradual rather than sudden.
The worst outcome is doing nothing until the economics force a sudden cut. Companies in insurance that started the transition early retained their best people in restructured roles. Companies that waited had to lay off experienced professionals who could have been retrained.
Medium exposure: augment now to build muscle
Roles where a significant portion of work (but not the majority) is search, synthesis, or estimation. These roles won't be restructured out of existence, but AI augmentation will change what "good" looks like. The person who augments well will produce twice the output. The person who doesn't will fall behind.
Deploy AI tools for the search and synthesis components now. In production, not as a pilot. The goal isn't cost reduction yet. The goal is building institutional muscle. Your people need to learn how to work with AI tools before the tools are good enough to replace the work entirely. That learning curve takes months. Start early.
BCG surveyed 1,250 executives and found that only 5% of companies globally are capturing real value from AI. The gap isn't technology. It's readiness. Readiness comes from practice, not planning.
Low exposure: monitor, don't panic
Roles dominated by judgment, relationship management, physical skill, or creative work. These are genuinely safer for now.
Monitor the capability frontier. What AI can't do today isn't necessarily what it can't do in a year. But don't waste resources trying to automate work that doesn't have the right characteristics. That's how companies end up in the pilot purgatory that's burning billions across the industry.
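To make the three tiers concrete, here is a minimal sketch of the classification logic in Python. The thresholds (60% of weekly time on search, synthesis, and estimation for high exposure, 30% for medium) and the example role's task shares are illustrative assumptions, not figures from the cases above; substitute your own estimates.

```python
from dataclasses import dataclass

# Assumed cutoffs, not from the playbook: >=60% of time on
# search/synthesis/estimation = high exposure, 30-60% = medium, <30% = low.
HIGH_THRESHOLD = 0.60
MEDIUM_THRESHOLD = 0.30

RESPONSES = {
    "high": "restructure around the judgment that remains",
    "medium": "augment now to build muscle",
    "low": "monitor, don't panic",
}

@dataclass
class Role:
    name: str
    search: float      # share of weekly time spent finding information
    synthesis: float   # share spent combining sources into outputs
    estimation: float  # share spent producing predictions

    @property
    def exposure_share(self) -> float:
        return self.search + self.synthesis + self.estimation

def classify(role: Role) -> tuple[str, str]:
    """Map a role's task distribution to an exposure tier and response."""
    share = role.exposure_share
    if share >= HIGH_THRESHOLD:
        tier = "high"
    elif share >= MEDIUM_THRESHOLD:
        tier = "medium"
    else:
        tier = "low"
    return tier, RESPONSES[tier]

if __name__ == "__main__":
    # Hypothetical task shares for illustration only.
    estimator = Role("cost estimator", search=0.35, synthesis=0.25, estimation=0.25)
    print(estimator.name, classify(estimator))
    # -> cost estimator ('high', 'restructure around the judgment that remains')
```

The point isn't the specific numbers. It's forcing every role's task distribution into an explicit, comparable shape instead of a gut feel.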
The org-level assessment
Individual role assessments are useful but incomplete. The real value comes from aggregating them into a company-wide map.
Inventory by function. List every role in the organization. For each one, estimate the percentage of time spent on search, synthesis, and estimation versus judgment, relationship, physical, and creative tasks. You don't need precision. Directional accuracy is enough. Most managers can estimate their team's task distribution in 15 minutes per role.
Cluster by exposure. Group roles into high, medium, and low exposure based on the distribution. Look for patterns by function, department, and level. In most organizations, mid-level operational roles cluster toward high exposure because they're the synthesis layer between raw data and executive decisions.
Map the dependencies. Which high-exposure roles feed into other roles? If your estimation team is high-exposure and your project management team depends on their estimates, the restructuring of one affects the other. Don't evaluate in isolation. Map the information flows.
Prioritize by impact. Not every high-exposure role needs immediate attention. Prioritize based on two factors: business impact (what happens if this work gets 3x faster or 50% cheaper?) and talent risk (are the people in these roles likely to leave if they see the change coming?). Start with the intersection of both.
Build the timeline. For each priority role, define three horizons. Now: deploy AI tools for the mechanical work. Next quarter: begin restructuring the role around judgment tasks. This year: measure whether the restructured role produces better outcomes at lower cost.
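For teams that want to operationalize the prioritization step, here is one way to sketch it in Python, building on the exposure tiers above. The 1-5 scoring scale, the multiplication of business impact by talent risk, and the example roles are assumptions for illustration; the framework itself only specifies the two factors and their intersection.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    role: str
    exposure: str         # "high", "medium", or "low" from the classifier above
    business_impact: int  # 1-5: value if this work gets 3x faster or 50% cheaper
    talent_risk: int      # 1-5: likelihood key people leave if the change surprises them

def priority(a: Assessment) -> int:
    # Only high-exposure roles compete for immediate attention; within that
    # group, start where business impact and talent risk intersect.
    if a.exposure != "high":
        return 0
    return a.business_impact * a.talent_risk

# Hypothetical inventory for illustration only.
inventory = [
    Assessment("cost estimator", "high", business_impact=5, talent_risk=4),
    Assessment("scheduling coordinator", "high", business_impact=3, talent_risk=2),
    Assessment("site supervisor", "low", business_impact=4, talent_risk=3),
]

for a in sorted(inventory, key=priority, reverse=True):
    print(f"{a.role:25s} exposure={a.exposure:6s} priority={priority(a)}")
```

However you score it, the output should be a ranked list short enough to attach the three horizons to: tools now, restructuring next quarter, measurement this year.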
The question you should be asking
Most companies facing the AI shift ask "where can we use AI?" That question starts with the technology and works backward toward the business.
The better question starts with the work. Which roles in my company are primarily doing search, synthesis, or estimation? Those roles are where AI capabilities are sharpest, regardless of whether your industry looks "safe" from the outside.
Construction thought it was safe because you can't automate a sledgehammer. Insurance thought it was safe because claims adjustment required decades of expertise. Both were right about the physical and experiential work. Both were wrong about the knowledge work hiding inside those roles.
The same knowledge work is hiding inside yours.