From annual workforce plans to scenario simulation.
Annual workforce planning assumes history predicts the future. Last year's project took five people, so next year's takes five. AI breaks this—same work might now need three people, or two, or completely different skills.
When historic assumptions fail, you need scenario simulation. But simulation requires rules. How do people get matched to work? What skills for what projects? In most organizations, this logic lives in people's heads, not systems. You can't simulate what you haven't codified.
Most annual workforce planning uses historic assumptions. Last year's audit took four people for three months. Budget for four again. Last year's implementation needed two seniors and three juniors. Plan the same mix. History predicts the future.
AI breaks that logic. The same client work might now need three people with AI tools instead of five without. Or two people with completely different skills. Or one senior doing what a team used to do. The resource equation changed. But planning still assumes last year predicts next year.
This is the real problem. Not that "things move fast." But that historic assumptions no longer hold. Similar work no longer needs similar resources. Planning more frequently doesn't help. Quarterly planning uses the same logic—history extrapolated forward. If the patterns are breaking, faster cycles just produce wrong answers more often.
The alternative is scenario simulation. What if AI cuts research time by 60%? What resources do we need then? What if adoption is slower—30%? What's the model? What if tools work for some engagements but not others?
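To make that concrete: once effort is codified at the task level, a scenario is just a different set of inputs. Here is a minimal sketch, assuming illustrative task names, hours, and adoption rates; none of these numbers come from real engagements.

```python
# Sketch: how required headcount shifts under different AI-adoption scenarios.
# Task names, hours, and reduction rates are illustrative assumptions.

BASELINE_HOURS = {            # hours per engagement, by task
    "research": 400,
    "analysis": 300,
    "drafting": 200,
    "client_review": 100,
}

SCENARIOS = {                 # fraction of each task's effort AI removes
    "no_adoption":   {},
    "slow_adoption": {"research": 0.30, "drafting": 0.20},
    "fast_adoption": {"research": 0.60, "analysis": 0.30, "drafting": 0.50},
}

HOURS_PER_FTE = 480           # one person over a three-month engagement

for name, cuts in SCENARIOS.items():
    remaining = sum(h * (1 - cuts.get(task, 0.0))
                    for task, h in BASELINE_HOURS.items())
    print(f"{name}: {remaining:.0f} hours -> {remaining / HOURS_PER_FTE:.1f} FTEs")
```

The arithmetic is trivial. The hard part is having task-level hours and evidence-based reduction rates to feed into it.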
Scenario simulation lets you model variable futures instead of assuming one predictable path. But most organizations can't do it. Not because they lack software. Because they lack the foundations simulation requires. Simulation relies on codified rules for how people get associated with work.
Think about how staffing actually happens in most organizations. A project needs resources. Someone—a manager, a resource director, a partner—decides who fits. They consider skills, availability, development needs, client relationships, team dynamics. They weigh trade-offs. They make a call.
That logic lives in their head. It's not written down. It's not in a system. You can't simulate what you haven't made explicit. If you want to model "what if AI changes the skills we need"—you first need to codify what skills get matched to what work today. If you want to simulate "what if we lose 20% of senior people"—you need explicit rules for how seniority factors into staffing decisions.
Most organizations can't answer basic questions: How do people get assigned to projects? What criteria matter? How are trade-offs made? The answers are "it depends" and "ask Sarah." That's fine for human decision-making. Humans navigate ambiguity. But simulation needs rules. Explicit logic. If X, then Y. This skill for that work. This experience level for that complexity.
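Here is what "explicit logic" can look like, as a minimal sketch. Every field, threshold, and name is a hypothetical stand-in for rules an organization would have to define for itself.

```python
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    skills: set[str]
    seniority: int            # e.g. 1 = junior ... 5 = partner
    available_hours: int

@dataclass
class Project:
    required_skills: set[str]
    complexity: int           # minimum seniority the work demands
    hours: int

def eligible(person: Person, project: Project) -> bool:
    """The 'if X, then Y' rules, written down instead of held in someone's head."""
    return (project.required_skills <= person.skills      # this skill for that work
            and person.seniority >= project.complexity    # this level for that complexity
            and person.available_hours >= project.hours)  # capacity to take it on

team = [
    Person("A", {"tax", "modelling"}, seniority=4, available_hours=300),
    Person("B", {"tax"}, seniority=2, available_hours=500),
]
audit = Project(required_skills={"tax"}, complexity=3, hours=250)
print([p.name for p in team if eligible(p, audit)])       # -> ['A']
```

Once rules like these exist, "what if we lose 20% of senior people" becomes a query you can run, not a debate.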
Beyond codified rules, simulation also requires:
Task-level data on how work actually breaks down
Evidence on where AI changes effort and resources
Current skills inventory—what people can actually do now
Systems that connect people, work, and capabilities
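A rough sketch of how those four pieces connect, with hypothetical placeholder data. The point is that each foundation is a concrete dataset a simulation can read, not knowledge in someone's head.

```python
# Task-level breakdown, AI-impact evidence, and a skills inventory,
# joined by a system that connects people to work.
# All names and values are hypothetical placeholders.

tasks = [
    {"task": "research", "hours": 400, "skill": "analysis"},
    {"task": "drafting", "hours": 200, "skill": "writing"},
]

ai_evidence = {               # measured effort reduction per task
    "research": 0.40,
    "drafting": 0.20,
}

skills_inventory = {          # what people can actually do now
    "A": {"analysis", "writing"},
    "B": {"writing"},
}

for t in tasks:
    remaining = t["hours"] * (1 - ai_evidence.get(t["task"], 0.0))
    capable = [p for p, s in skills_inventory.items() if t["skill"] in s]
    print(f"{t['task']}: {remaining:.0f}h remaining, candidates: {capable}")
```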
Most companies have none of this. Historic assumptions filled the gap. You didn't need explicit logic because you just repeated what worked before. When historic patterns break, the missing infrastructure shows up fast. The shift from annual plans to scenario simulation isn't about planning cadence or better software. It's about making explicit what's always been implicit—how people connect to work, what rules drive decisions, what logic you'd want a system to follow. Until that's codified, simulation is theatre. You're modelling a system you haven't actually defined.