Task Intelligence · TI #5

The Task Intelligence Maturity Model: Five levels from ad-hoc to autonomous.

Every company moving to AI-native work passes through the same five levels. Most sit between Level 1 and Level 2. The operating advantage begins at Level 3. Here is the full frame, with a self-diagnostic a CXO can run in ten minutes.

By Giridhar LV · Founder & CEO, Nuvepro · Author of The Agentic Enterprise · 9 min read

Why a maturity model, why now

Adoption metrics stopped being useful two years ago.

Most AI progress in enterprises gets tracked the wrong way. Seat utilization. Query volume. Number of Copilot licenses deployed. Those are inputs. They say nothing about whether the work has moved. A company can be at 100% adoption of every AI tool on the market and still sit at Level 1 of operational change.

What the board actually wants to know is a different question. Did the work change? Is a task that used to take a person now done by an agent? Is the agentic ratio moving in any function? Did the P&L reshape, or did the company just add a software line item?

A task intelligence maturity model answers those questions. It measures the work itself, not the tooling around it. Five levels. Each one defined by what the company can name, classify, and deploy against. Most enterprises we have mapped sit between Level 1 and Level 2. That is the current industry state in spring 2026.

The Five Levels

Ad-hoc. Measured. Classified. Orchestrated. Autonomous.

Level 1
Ad-hoc
AI is a tool list. Nobody owns the work.

The company has bought seats. Copilot, Claude, Gemini, maybe one enterprise contract. Adoption is a personal-productivity story, not an operational one. Every team reports different metrics, mostly anecdotal. There is no classification of the work and no owner for the outcome. The CEO cannot answer a board question about what changed in the P&L. The CHRO cannot answer a question about which roles will move. Progress comes from individuals, not systems.

Signals you are here
  • AI success stories are shared in all-hands; none of them show up in the operating plan.
  • Nobody can name the three workflows where AI moved a metric.
  • The CFO treats AI spend as IT cost, not line-of-business investment.
  • HR and IT are both claiming ownership of AI enablement, and nothing is being enabled.
The next move

Name a single owner of task intelligence. A VP-level executive with budget and a direct line to the COO or CTO. That one appointment unlocks every subsequent level.

Level 2
Measured
We count usage. We do not classify the work.

An AI program office exists. Dashboards track seat utilization, query volume, model spend. This is progress over Level 1 because now there is a number. But the number is an input, not an outcome. Nobody can tell you, task by task, what the AI is actually doing. The work remains a black box labeled 'AI adoption.' Rank-and-file employees are using AI, but the operating model is unchanged. Same roles, same workflows, same accountability.

Signals you are here
  • Weekly AI adoption reports show usage curves going up and to the right.
  • No dashboard shows which specific tasks moved from human to agent.
  • Procurement has a view on AI spend; operations does not.
  • AI training is running, but nobody can tell you what the training is for.
The next move

Stop measuring usage. Start classifying tasks. Pick one workflow, decompose it into its actual tasks, and label each one automate, augment, or human.
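The classification exercise reduces to a small, concrete artifact. A minimal sketch, assuming a support workflow with illustrative task names and labels (none of these are from a real classification):

```python
# One workflow, decomposed into tasks, each labeled
# automate / augment / human. Task names are hypothetical.
workflow = {
    "triage incoming ticket": "automate",
    "draft customer response": "augment",
    "approve refund over threshold": "human",
    "update CRM record": "automate",
    "escalation judgment call": "human",
}

def bucket_ratio(tasks: dict) -> dict:
    """Share of tasks in each bucket, as whole percentages."""
    total = len(tasks)
    counts = {"automate": 0, "augment": 0, "human": 0}
    for label in tasks.values():
        counts[label] += 1
    return {bucket: round(100 * n / total) for bucket, n in counts.items()}

print(bucket_ratio(workflow))  # {'automate': 40, 'augment': 20, 'human': 40}
```

The output is the role-level ratio the later levels depend on; the point is that it is derived from labeled tasks, not asserted from an org chart.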

Level 3
Classified
Every task has a label. Every role has a ratio.

This is the level where the map appears. Every task in the target workflows has been classified against a three-bucket framework: automate, augment, human. Every role has a task-level ratio that is derivable, repeatable, and defensible. HR, operations, and IT are looking at the same classification. The conversation shifts from 'should we use AI here' to 'this task moves, this one stays human, here is the evidence.' Capability planning starts to align with the classification, not the org chart.

Signals you are here
  • The COO can name the 30/40/30 ratio for any priority workflow.
  • Job descriptions for new hires reference task-level classifications, not just skills.
  • Capability planning uses the classification as the input, not a hiring spreadsheet.
  • When a workflow changes, the classification is the first artifact updated.
The next move

Move from classification to orchestration. Make the classification executable. Wire agents into the automate bucket. Build human-plus-agent interfaces for the augment bucket.
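Making the classification executable can start as simple routing: each task goes to the queue its label implies. A hedged sketch; the queue names and dispatch table are illustrative, not a prescribed architecture:

```python
from collections import defaultdict

# Hypothetical routing table from bucket label to execution queue.
ROUTING = {
    "automate": "agent_queue",       # agents execute end to end
    "augment": "human_plus_agent",   # human works with agent-generated context
    "human": "human_only",           # judgment stays with a person
}

def dispatch(tasks: dict) -> dict:
    """Route each classified task to the queue its label implies."""
    queues = defaultdict(list)
    for task, label in tasks.items():
        queues[ROUTING[label]].append(task)
    return dict(queues)

queues = dispatch({
    "triage incoming ticket": "automate",
    "draft customer response": "augment",
    "approve refund over threshold": "human",
})
```

The design choice worth noting: the classification, not the org chart, is the single source of truth for where work lands.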

Level 4
Orchestrated
Agents do the automate layer. Humans run the orchestration.

Classification is live in the operating model. Agents run a meaningful share of the automate-bucket tasks. Humans have new tools for the augment bucket. The human-only bucket is smaller and sharper, with the people in it having more agent-generated context than they used to. The org chart is starting to look different. Fewer process-executing middle layers. More orchestrators, more specialists at the top, more agent operations at the bottom. The CFO has an agentic-ratio metric, agents per human per function. The P&L shape is shifting.
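The agentic-ratio metric the CFO tracks is a straightforward division, reported per function. A minimal sketch with made-up headcounts:

```python
def agentic_ratio(agents_by_function: dict, humans_by_function: dict) -> dict:
    """Agents per human, reported per function."""
    return {
        fn: round(agents_by_function.get(fn, 0) / humans, 2)
        for fn, humans in humans_by_function.items()
    }

# Illustrative numbers, not benchmarks.
ratio = agentic_ratio(
    {"support": 120, "finance": 30},
    {"support": 40, "finance": 60},
)
# support: 3.0 agents per human; finance: 0.5
```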

Signals you are here
  • A function has moved to a materially different agentic ratio, measured and reported.
  • The org has created new roles that did not exist three years ago, such as agent operators or AI workflow owners.
  • The middle of the pyramid has compressed without layoffs, through redeployment.
  • Monthly business reviews include agent metrics next to human metrics.
The next move

Move from single-function orchestration to enterprise-wide operation. Close the feedback loops. Measure agent quality, not just adoption. Build the capability to redesign any workflow in weeks, not quarters.

Level 5
Autonomous
The enterprise is agentic. The shape of the company has changed.

The company is operating in the new shape. A diamond, not a pyramid. Agents run a majority of the automate layer without incident. Human work is concentrated on judgment, customer intimacy, and the orchestration of complex agent systems. The CEO runs board conversations on agentic ratio by function, on P&L reshape, and on capital reallocation, not on tool adoption. New workflows are born agentic. Task classification is continuous, not a one-time exercise. The operating model is self-updating.

Signals you are here
  • New workflows are designed agentic-first. No legacy 'before AI' version exists.
  • Capital allocation at the board level explicitly tracks agent investment.
  • The CFO reports agentic ratio as a standard line in quarterly reviews.
  • Onboarding for new employees assumes they will work alongside agents from day one.
The next move

You are the reference case other companies are studying. Maintain the classification discipline. Keep measuring. The ratio will keep moving as frontier models advance.

A ten-minute self-diagnostic

Five questions. Answer honestly. Your first no sets your level.

Q1

Can your COO name, in one sentence, which three workflows AI moved a metric on last quarter?

No → Level 1 · Yes, with examples → Level 2+
Q2

Is there a single VP-level owner of task classification in your company?

No → Level 1 · Yes → Level 2+
Q3

Can you show, for any priority workflow, a task-by-task classification of automate, augment, and human?

No → Level 2 · Yes → Level 3+
Q4

Have agents been deployed against a specific automate-bucket task in production, with a measured before-and-after?

No → Level 3 · Yes → Level 4+
Q5

Does your CFO report agentic ratio (agents per human per function) alongside revenue per employee?

No → Level 4 · Yes → Level 5

Your level is the one attached to your first no; answer yes to all five and you are at Level 5. Most enterprises answer yes to Q1 and Q2 and no to Q3. That is Level 2. The honest read is usually one level lower than the leadership narrative suggests.
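The diagnostic reduces to a first-no rule. A minimal sketch, assuming answers are recorded as yes/no booleans in question order:

```python
# Level implied by a "no" at each question, per the diagnostic above.
NO_LEVEL = {1: 1, 2: 1, 3: 2, 4: 3, 5: 4}

def maturity_level(answers: list) -> int:
    """answers[i] is the yes/no answer to question i+1.
    The first no sets the level; five yeses means Level 5."""
    for q, yes in enumerate(answers, start=1):
        if not yes:
            return NO_LEVEL[q]
    return 5

# Yes to Q1 and Q2, no to Q3: the modal enterprise, Level 2.
print(maturity_level([True, True, False, False, False]))  # 2
```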

What to do after you locate yourself

The next move depends on where you are, not where you want to be.

Do not plan a jump. Plan the next level up. Companies that try to move from Level 1 straight to Level 4 spend a lot of money and end up at Level 2 with a large dashboard. The sequence is not optional. Ownership comes before measurement. Measurement comes before classification. Classification comes before orchestration. Orchestration comes before autonomy.

At every level, the single most leveraged action is the next move listed in its card above. Appoint the owner at Level 1. Replace the usage dashboard with a task classification at Level 2. Move from classification to orchestration at Level 3. The moves are specific. That is on purpose.

The companies that reach Level 3 inside one calendar year do three things in common. They name one VP-level owner. They pick one workflow. They classify every task in it to completion, not partially. None of those three is glamorous. All three are necessary.

Common questions about the maturity model

Straight answers from field conversations.

Is this a certification or a benchmark?

Neither. It is a diagnostic framework a CXO can use to locate their company and choose the next step. Most companies do not need formal certification to move from Level 2 to Level 3. They need a classification exercise, a named owner, and a committed workflow. The maturity model is there to keep the conversation grounded in the same language across HR, operations, and IT.
How long does it take to move up a level?

Level 1 to Level 2, six to twelve weeks if leadership commits. Level 2 to Level 3, one to two quarters of classification work in the target workflows. Level 3 to Level 4, two to four quarters, because orchestration requires wiring agents and retraining teams. Level 4 to Level 5 is measured in years, not quarters, because it is a shape change in the company. Skipping levels does not work. You can compress them, but you cannot skip.
Where do most enterprises sit today?

From Nuvepro's fieldwork across 2,400+ companies, the modal enterprise sits between Level 1 and Level 2. AI is bought, adoption is measured, but the work is not classified. Fewer than 10% of the enterprises we have mapped have a defensible task-level classification in a single priority workflow. That is the wedge: Level 3 is where the operating advantage begins, and almost nobody is there yet.
Is this different from other AI maturity models?

Yes. Most AI maturity models measure inputs: strategy, data readiness, talent, technology, governance. They are useful for diagnostics, but they do not tell you whether the work has moved. Task intelligence maturity measures the work itself. Has this task moved from human to agent? Yes or no. The difference is what is being counted.
What keeps companies stuck at Level 2?

No single owner of task classification. AI enablement is split between HR, IT, and a program office, which means the classification never gets done. Every company that has reached Level 3 in our fieldwork has a VP-level executive whose job title includes the words 'AI,' 'workforce transformation,' or 'operating model,' and who has real budget and a real mandate. Without that appointment, the company sits at Level 2 forever.
How does this relate to the 30/40/30 framework and The Agentic Enterprise?

The 30/40/30 pattern is what you see when you reach Level 3. Three buckets, with weights that depend on the function. The Agentic Enterprise is the destination at Level 5, where the company has reorganized around the classification. The maturity model is the path between them. Task Intelligence is the map; the maturity model tells you where on the map you are standing.

Locate your company. Plan the next step.

Nuvepro's platform does the classification, at scale, across the roles and workflows that matter to you. The maturity model tells you where to deploy it first.