You can't put AI into work designed in 2006
AI doesn't map to job descriptions. It maps to decisions, outputs and tasks. But most organizations only have job-level data—roles, responsibilities, reporting lines. They don't know how work processes actually break down. Which tasks take time? Which require judgment? Which are routine and could be partially automated? Without task-level data, you can't design human-machine workflows. You just bolt AI onto the edges and hope.
The organizations getting value from AI map the flow of work.
A job description says "analyze data, prepare reports, coordinate stakeholders." That framing worked when humans did everything, but AI doesn't care about job titles; it works at the level of decisions, outputs and tasks. Within "analyze data" there might be twenty different activities: data cleaning, pattern recognition, interpretation, recommendation. Some are routine outputs AI handles well. Some are judgment calls requiring human expertise. Some are decisions that could be partially automated with human oversight.
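To make the decomposition concrete, here is a minimal sketch of what task-level data for that "analyze data" responsibility might look like. The schema is a hypothetical illustration; field names like time_share and requires_judgment, and the numbers themselves, are assumptions for the example, not a standard.

```python
from dataclasses import dataclass

@dataclass
class Task:
    """One task inside a job responsibility, at the level AI actually maps to."""
    name: str
    time_share: float         # rough fraction of the responsibility's time (estimated)
    requires_judgment: bool   # needs human expertise or context to do well
    routine_output: bool      # produces a predictable, repeatable output

# Hypothetical breakdown of the "analyze data" line in the job description.
analyze_data = [
    Task("data cleaning",       time_share=0.30, requires_judgment=False, routine_output=True),
    Task("pattern recognition", time_share=0.25, requires_judgment=False, routine_output=True),
    Task("interpretation",      time_share=0.25, requires_judgment=True,  routine_output=False),
    Task("recommendation",      time_share=0.20, requires_judgment=True,  routine_output=False),
]
```

Even this crude record answers questions the job description can't: where the time goes, and which activities are candidates for AI support.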
You can't just "add AI" to a job. You have to know what the job actually contains. Most organizations don't. They have job descriptions written years ago, org charts showing reporting lines, role levels and salary bands. What they don't have is data on how work processes actually break down. Where does time go? Which outputs are routine? Which decisions require expertise? Which tasks involve judgment that's hard to codify? None of this can be answered from the records organizations keep. They'd have to ask people—and even then, people often can't articulate how their work actually flows.
So when AI arrives, there's nothing to design against. Someone says "let's use AI for research"—okay, which parts of the research process? Literature review? Data gathering? Synthesis? Recommending conclusions? These are different. Some are routine outputs, some are judgment-heavy decisions. AI handles them differently. Without understanding the process, you're guessing.
The result is AI at the margins. A bit of efficiency here, some time savings there. The fundamental work processes stay the same, just with AI bolted on. This isn't transformation—it's automation at the edges.
The real opportunity is redesigning work around human-machine collaboration. Humans focus on judgment, relationships, complex decisions. AI handles routine outputs, data processing, pattern recognition. Partially automate what can be, keep humans where they matter. But you can't design this without knowing how work processes break down into decisions, outputs and tasks. The constraint isn't AI capability—it's data about work itself.
Organizations getting real value from AI did something unsexy first: they mapped their work processes. Not job descriptions, not process diagrams showing the ideal state, but actual decisions, outputs and tasks: where time goes, which steps are routine and which require judgment. Then they could make explicit choices. This decision stays human. That output gets AI support. This task automates fully, this one partially. Designed, not guessed.
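As a rough illustration of what "designed, not guessed" can look like in practice, the sketch below assigns each mapped task an explicit mode. The three modes and the rule that picks them are assumptions made for the example, not a recommended policy.

```python
# Turn mapped tasks into explicit design choices instead of "add AI and hope".
# The modes and the selection rule below are illustrative assumptions only.
def assign_mode(requires_judgment: bool, routine_output: bool) -> str:
    if requires_judgment and not routine_output:
        return "stays human"                     # judgment-heavy decision
    if routine_output and not requires_judgment:
        return "automate, human spot-checks"     # routine output
    return "AI-assisted"                         # mixed: AI drafts, a human decides

mapped_tasks = {
    "data cleaning":       (False, True),   # (requires_judgment, routine_output)
    "pattern recognition": (False, True),
    "interpretation":      (True,  False),
    "recommendation":      (True,  False),
}

for name, (judgment, routine) in mapped_tasks.items():
    print(f"{name:20} -> {assign_mode(judgment, routine)}")
```

The point isn't the rule itself; it's that every task gets a deliberate decision that someone can review and revise, rather than AI landing wherever a vendor demo happened to point.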
Most organizations skip this step. They buy AI tools and hope for transformation. They get incremental efficiency instead. Work processes were designed decades ago for humans doing everything, and AI layered on top of that design will always underperform.
The question isn't whether your AI is good enough. It's whether you understand your work processes well enough to know where AI fits. That's data infrastructure—process-level understanding of how work actually happens. Most companies don't have it. Until they do, they're layering AI onto work designed in 2006 and wondering why it doesn't transform anything.