The Task Intelligence Maturity Model: Five levels from ad-hoc to autonomous.
Every company moving to AI-native work passes through the same five levels. Most sit between Level 1 and Level 2. The operating advantage begins at Level 3. Here is the full frame, with a self-diagnostic a CXO can run in ten minutes.
Why a maturity model, why now
Adoption metrics stopped being useful two years ago.
Most AI progress in enterprises gets tracked the wrong way. Seat utilization. Query volume. Number of Copilot licenses deployed. Those are inputs. They say nothing about whether the work has moved. A company can be at 100% adoption of every AI tool on the market and still sit at Level 1 of operational change.
What the board actually wants to know is different. Did the work change? Is a task that used to take a person now done by an agent? Is the agentic ratio moving in any function? Did the P&L reshape, or did the company just add a software line item?
A task intelligence maturity model answers those questions. It measures the work itself, not the tooling around it. Five levels. Each one defined by what the company can name, classify, and deploy against. Most enterprises we have mapped sit between Level 1 and Level 2. That is the current industry state in spring 2026.
Ad-hoc. Measured. Classified. Orchestrated. Autonomous.
Level 1: Ad-hoc

The company has bought seats. Copilot, Claude, Gemini, maybe one enterprise contract. Adoption is a personal-productivity story, not an operational one. Every team reports different metrics, mostly anecdotal. There is no classification of the work and no owner for the outcome. The CEO cannot answer a board question about what changed in the P&L. The CHRO cannot answer a question about which roles will move. Progress comes from individuals, not systems.
- AI success stories are shared in all-hands; none of them show up in the operating plan.
- Nobody can name the three workflows where AI moved a metric.
- The CFO treats AI spend as IT cost, not line-of-business investment.
- HR and IT are both claiming ownership of AI enablement, and nothing is being enabled.
The next move: name a single owner of task intelligence. A VP-level executive with budget and a direct line to the COO or CTO. That one appointment unlocks every subsequent level.
Level 2: Measured

An AI program office exists. Dashboards track seat utilization, query volume, model spend. This is progress over Level 1 because now there is a number. But the number is an input, not an outcome. Nobody can tell you, task by task, what the AI is actually doing. The work itself is a black box labeled 'AI adoption.' Rank-and-file employees are using AI, but the operating model is unchanged. Same roles, same workflows, same accountability.
- Weekly AI adoption reports show usage curves going up and to the right.
- No dashboard shows which specific tasks moved from human to agent.
- Procurement has a view on AI spend; operations does not.
- AI training is running, but nobody can tell you what the training is for.
The next move: stop measuring usage and start classifying tasks. Pick one workflow, decompose it into its actual tasks, and label each one automate, augment, or human.
Level 3: Classified

This is the level where the map appears. Every task in the target workflows has been classified against a three-bucket framework: automate, augment, human. Every role has a task-level ratio that is derivable, repeatable, and defensible. HR, operations, and IT are looking at the same classification. The conversation shifts from 'should we use AI here' to 'this task moves, this one stays human, here is the evidence.' Capability planning starts to align with the classification, not the org chart.
- The COO can name the automate/augment/human ratio, say 30/40/30, for any priority workflow.
- Job descriptions for new hires reference task-level classifications, not just skills.
- Capability planning uses the classification as the input, not a hiring spreadsheet.
- When a workflow changes, the classification is the first artifact updated.
The next move: go from classification to orchestration. Make the classification executable. Wire agents into the automate bucket. Build human-plus-agent interfaces for the augment bucket.
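To make the three-bucket framework concrete, here is a minimal sketch of what a task-level classification and its derived ratio might look like. The workflow, task names, and labels are invented for illustration; a real inventory would come out of the decomposition exercise a Level 2 company runs to get here.

```python
from collections import Counter

# Hypothetical classification of one priority workflow.
# Task names and bucket labels are illustrative, not from a real inventory.
invoice_processing = {
    "extract line items":        "automate",
    "match against PO":          "automate",
    "flag pricing anomalies":    "augment",
    "draft vendor email":        "augment",
    "approve exception payment": "human",
}

def bucket_ratio(classification):
    """Return each bucket's share of the workflow's tasks as a percentage."""
    counts = Counter(classification.values())
    total = len(classification)
    return {bucket: round(100 * n / total) for bucket, n in counts.items()}

print(bucket_ratio(invoice_processing))
# {'automate': 40, 'augment': 40, 'human': 20}
```

The point of the artifact is that the ratio is derived, not asserted: change one task's label and the workflow's ratio updates with it, which is what makes the classification repeatable and defensible.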
Level 4: Orchestrated

Classification is live in the operating model. Agents run a meaningful share of the automate-bucket tasks. Humans have new tools for the augment bucket. The human-only bucket is smaller and sharper, with the people in it having more agent-generated context than they used to. The org chart is starting to look different. Fewer process-executing middle layers. More orchestrators, more specialists at the top, more agent operations at the bottom. The CFO has an agentic-ratio metric, agents per human per function. The P&L shape is shifting.
- A function has moved to a materially different agentic ratio, measured and reported.
- The org has created new roles that did not exist three years ago, such as agent operators or AI workflow owners.
- The middle of the pyramid has compressed without layoffs, through redeployment.
- Monthly business reviews include agent metrics next to human metrics.
The next move: go from single-function orchestration to enterprise-wide operation. Close the feedback loops. Measure agent quality, not just adoption. Build the capability to redesign any workflow in weeks, not quarters.
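The agentic-ratio metric itself is deliberately simple, which is why it can sit next to revenue per employee in a monthly review. A sketch with invented headcounts:

```python
# Illustrative per-function counts; all numbers are invented for the sketch.
functions = {
    "finance":          {"agents": 12, "humans": 40},
    "customer support": {"agents": 45, "humans": 30},
    "legal":            {"agents": 2,  "humans": 18},
}

def agentic_ratio(fn):
    """Agents per human in one function: the number the CFO reports."""
    return round(fn["agents"] / fn["humans"], 2)

for name, fn in functions.items():
    print(f"{name}: {agentic_ratio(fn)} agents per human")
```

What counts as "an agent" (a deployed workflow, a model instance, a licensed seat) has to be pinned down once, centrally; the metric is only comparable across functions if the denominator and numerator are defined the same way everywhere.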
Level 5: Autonomous

The company is operating in the new shape. A diamond, not a pyramid. Agents run a majority of the automate layer without incident. Human work is concentrated on judgment, customer intimacy, and the orchestration of complex agent systems. The CEO runs board conversations on agentic ratio by function, on P&L reshape, and on capital reallocation, not on tool adoption. New workflows are born agentic. Task classification is continuous, not a one-time exercise. The operating model is self-updating.
- New workflows are designed agentic-first. No legacy 'before AI' version exists.
- Capital allocation at the board level explicitly tracks agent investment.
- The CFO reports agentic ratio as a standard line in quarterly reviews.
- Onboarding for new employees assumes they will work alongside agents from day one.
The next move: hold the position. You are the reference case other companies are studying. Maintain the classification discipline. Keep measuring. The ratio will keep moving as frontier models advance.
A ten-minute self-diagnostic
Five questions. Answer honestly. Your level is the number of consecutive yeses, counted from the first question down.
1. Can your COO name, in one sentence, which three workflows AI moved a metric on last quarter?
2. Is there a single VP-level owner of task classification in your company?
3. Can you show, for any priority workflow, a task-by-task classification of automate, augment, and human?
4. Have agents been deployed against a specific automate-bucket task in production, with a measured before-and-after?
5. Does your CFO report agentic ratio (agents per human per function) alongside revenue per employee?
Your level is the last question you answered yes to before your first no. Most enterprises answer yes to Q1 and Q2 and no to Q3. That is Level 2. The honest read is usually one level lower than the leadership narrative suggests.
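The scoring rule fits in a few lines. One assumption made explicit here: the model floors at Level 1, since a company that answers no to every question still sits at Ad-hoc.

```python
def maturity_level(answers):
    """Count consecutive yeses from Q1, stopping at the first no.

    Floors at Level 1 (an assumption: every company with any AI
    tooling is at least Ad-hoc).
    """
    streak = 0
    for yes in answers:
        if not yes:
            break
        streak += 1
    return max(1, streak)

# Yes to Q1 and Q2, no to Q3: the common enterprise pattern.
print(maturity_level([True, True, False, False, False]))  # 2
```

Note that a yes after your first no does not count: a CFO reporting agentic ratio (Q5) without a task-by-task classification (Q3) is a sign of a dashboard, not a level.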
What to do after you locate yourself
The next move depends on where you are, not where you want to be.
Do not plan a jump. Plan the next level up. Companies that try to move from Level 1 straight to Level 4 spend a lot of money and end up at Level 2 with a large dashboard. The sequence is not optional. Ownership comes before measurement. Measurement comes before classification. Classification comes before orchestration. Orchestration comes before autonomy.
At every level, the single most leveraged action is the next move listed in its card above. Appoint the owner at Level 1. Replace the usage dashboard with a task classification at Level 2. Move from classification to orchestration at Level 3. The moves are specific. That is on purpose.
The companies that reach Level 3 inside one calendar year do three things in common. They name one VP-level owner. They pick one workflow. They classify every task in it to completion, not partially. None of those three is glamorous. All three are necessary.
Common questions about the maturity model
Straight answers from field conversations.