Three workflows. 20 tasks. Six departments. The one that broke our customer was the task no one owned.
I want to tell you a story. We make labs for enterprise customers. A while back I sat down to map what actually happens between a customer's first call and a working lab on their users' screens. Three workflows. 20 separate tasks. Six teams inside Nuvepro, plus the customer's IT team and the cloud providers on the other side. Half of those tasks were not in anyone's job description. One of them broke a customer last quarter. Here is what happened.
You have probably heard this pitch
A services firm shows up with a deck. They want to automate one of your workflows. Twelve weeks. A bot. A dashboard.
The middle slide says "automate your hire-to-retire workflow" or "automate your AR workflow" or "automate your support workflow." The before-and-after picture looks clean. There is a price. There is a timeline.
A year later, the bot is doing the easy part. The senior person who used to absorb the messy cases is still absorbing them. The handoffs that were not in the diagram are still happening over Slack and email. The pilot is technically a success and nobody noticed.
When I look at why this keeps happening, the answer is always the same. The pitch is for the workflow. The work is in the tasks underneath.
Let me show you what I mean, using our own operation.
The three workflows that move every customer through Nuvepro
We sell task intelligence. I figured we should run it on ourselves first.
Nuvepro builds AI-ready labs for enterprise customers. Sandbox labs for hands-on practice. Learning Solutions labs for guided projects and assessments. The work moves through HubSpot tickets and Freshdesk tickets, six teams inside the company, and the customer's IT team on the other side.
There are three workflows that every customer passes through.
- Lab Feasibility Review. Before any deal closes. Six tasks across Sales, Sandbox, and Learning Solutions.
- Lab Delivery. After the deal closes, until the lab is live. Eight tasks across Sales, Sandbox, Learning Solutions, Support & Delivery, and the cloud providers.
- Lab Support. Every day after the lab is live. Six tasks across Support, Sandbox, Learning Solutions, Engineering, and the customer's IT team.
Twenty tasks. Six teams inside Nuvepro plus the customer's IT team and the cloud providers. Nine of the 20 tasks are hidden, meaning they do not appear in any job description we wrote. Work stops if they do not happen.
Here is the actual list, in order, with a label for each one.
Workflow 1: Lab Feasibility Review
Before any deal closes. Six tasks. Three teams. Two of the tasks are hidden.
A customer asks Sales for a lab. From the outside it looks like one conversation. From the inside it is six tasks across three teams. One of those tasks is the most important decision in the whole flow, and it is not in any job description.
Lab Feasibility Review
Before any deal closes.

The decision at task 2 is the one nobody talks about. The salesperson decides whether this is a Sandbox lab or a Learning Solutions lab. That decision picks the team that will scope the next two weeks. We only discovered this step existed by tracing tickets backward.
Task 4 is the one I find funny. When Learning Solutions takes a lab that needs a sandbox underneath, Learning Solutions opens a fresh ticket back to Sandbox. The org chart says these are two parallel teams. The ticket trail says one of them is sometimes the customer of the other. There is no role name for that. Nobody is hired to be the one team that buys from the other team. The work happens anyway.
Workflow 2: Lab Delivery
After the deal closes, until the lab is live. Eight tasks. Five teams, including the cloud providers.
The HubSpot ticket flips to "order closed" and now the work moves through five teams. Six of the eight tasks are hidden. One of them is the task that broke the customer in the next section.
Lab Delivery
After the deal closes, until the lab is live.

Task 9, the templating step, is interesting. After the lab is built, somebody saves it as a reusable template. That work is not for this customer. It saves time on the next twenty. Nobody is hired to do it. Either the engineer who built the lab decides to do it, or it does not happen, or it happens three weeks later when somebody else has to rebuild the same thing from scratch.
Task 11 is the hinge for the whole delivery. Support & Delivery has to find out how many users will be on the lab, when it starts, when it ends, and what the budget per user is. Some of this came in through the sales conversation. Some of it did not. They check the sales notes. They ask sales. They sometimes ask the customer. There is no job description for this step. If it is missed, the lab gets built for the wrong shape, and the wrong shape is what the customer feels later.
Task 13 leaves the company. If the lab needs more cloud capacity than the default account allows, somebody on the Support side opens a ticket with AWS or Azure. If the cloud provider does not lift the quota in time, the lab cannot launch. That is an external dependency, but the workflow diagram has it as a sub-step of "configure."
Workflow 3: Lab Support
Every day the lab is live. Six tasks. Four channels. Five teams.
Customers do not file issues through one queue. They file through Freshdesk. They WhatsApp the program manager. They email the partner manager. They get on a call with their account person. Whatever channel the issue lands in, support has to spot it, sort it into one of four buckets, and pull in the right team.
Lab Support
Every day after the lab is live.

Notice the pattern. Workflow 1 had a hidden routing step at task 2. Workflow 2 had it again at task 7. Workflow 3 has it at task 16. Three workflows. Three hidden routing steps. None of them is in any job description. All three of them decide where the work goes next.
And the four buckets are not four versions of the same job. Bucket 1 is firewall coordination with a customer's IT team. Bucket 2 is internal routing back to engineering. Bucket 3 is teaching a person live on a video call. Bucket 4 is bug triage with our own engineering team. Four very different things hiding under one label called "issue resolution." A services firm that promises to "automate support" cannot do all four with one product.
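The sorting step can be sketched in a few lines. Everything below is illustrative: the keyword rules are invented, and a real triage runs on human judgment (or an ML classifier with human review), not string matching. The point the sketch makes is that the four buckets dispatch to four very different kinds of work, only some of which a bot can finish.

```python
# Hypothetical sketch of the four-bucket sort under "issue resolution".
# Bucket names and keyword rules are invented for illustration.
BUCKETS = {
    1: "lab not accessible -> coordinate firewall checks with customer IT",
    2: "lab misbehaving -> route back to the lab's engineering team",
    3: "user stuck on an exercise -> live teaching call (human only)",
    4: "platform bug -> triage with Nuvepro engineering",
}

def triage(issue_text: str) -> int:
    """Crude keyword routing; a real system would use a classifier
    plus a human review on low-confidence calls."""
    text = issue_text.lower()
    if "cannot access" in text or "not loading" in text or "blocked" in text:
        return 1  # access problem: firewall coordination, not a code fix
    if "stuck" in text or "how do i" in text:
        return 3  # a teaching moment, not a ticket to automate
    if "error" in text and "platform" in text:
        return 4  # our own bug: engineering triage
    return 2      # default: send back to the lab's engineering team

print(triage("Lab not loading from the Singapore office"))  # bucket 1
```

Even in this toy form, buckets 1 and 3 end in a human conversation; only the routing itself is bot-shaped.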
The phone call
Last quarter, this is what happened.
A customer rolled out a lab to a global cohort. They told us how many users. They told us when it had to be live. They told us the budget per user. They did not tell us that a third of the cohort was based in India and a third in Singapore.
We provisioned everything in the US default region.
A week in, my phone rings. Customer's program lead.
"What's happening? My users are saying the experience is suffering. Pages are slow. The labs feel frozen. We have a session tomorrow."
I open Freshdesk. India users are stacking tickets. "Lab not loading." "Pages take five seconds." "Cannot run the exercise." Support has triaged them as Bucket 1, lab not accessible. They are looping in the customer's IT team. Firewalls get checked. None of it is the problem.
The problem was not a firewall. The problem was that somewhere upstream in our flow, nobody had a task that read "ask the customer where their users actually live, and provision regions accordingly."
Sales did not ask. User geography is not a Sales qualification field. Support & Delivery did not check. The rollout-details task at workflow 2, task 11, does not include geography. Build did not branch into multi-region. Nobody had flagged regional spread upstream. The customer only found out we got it wrong when their users started complaining about lag.
The fix was small. We added two new tasks. A sales qualification question on user geographic distribution. A build-step branch that auto-suggests multi-region provisioning when users span more than one region. Both are deployable to AI today. A simple agent reads the qualification answer and suggests the region split.
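That build-step branch is small enough to sketch. The function name, the region mapping, and the 20% threshold below are all assumptions for illustration, not our production rule; the only idea being shown is that the qualification answer, not a default, drives the region plan.

```python
# Illustrative sketch of the build-step branch: given where the users sit,
# suggest which cloud regions to provision. Region codes and the 20%
# threshold are placeholder assumptions.
def suggest_regions(users_by_geo: dict[str, int], threshold: float = 0.2) -> list[str]:
    """Suggest one region per geography holding at least `threshold` of
    the cohort; fall back to the largest geography's region."""
    total = sum(users_by_geo.values())
    regions = {
        "us": "us-east-1",
        "india": "ap-south-1",
        "singapore": "ap-southeast-1",
    }
    picks = [regions[g]
             for g, n in sorted(users_by_geo.items(), key=lambda kv: -kv[1])
             if g in regions and n / total >= threshold]
    if not picks:  # nothing cleared the bar: provision where most users are
        top = max(users_by_geo, key=users_by_geo.get)
        picks = [regions.get(top, "us-east-1")]
    return picks

# The cohort from the story: a third in the US, India, and Singapore each.
print(suggest_regions({"us": 100, "india": 100, "singapore": 100}))
```

With the story's cohort, the suggestion is three regions instead of one US default, which is exactly the branch nobody owned.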
The fix was not visible from the workflow diagram. The diagram says "configure lab." That is one box. The tasks underneath it are several. One of them did not exist until a customer call forced it into existence.
The shape of the work, summed up
20 tasks. Six teams. Half of the tasks are not in any job description.
The number that surprised me when I drew this up was 9. Half of the tasks that move a customer through Nuvepro are not in anyone's job description. They run on the senior people who absorb whatever the system cannot label. A services proposal built around "automate the workflow" would not have priced for any of them.
So what does this mean for an AI deployment?
The decision is different for every task. That is the whole point.
Look at what a quick scan of those 20 tasks tells me.
Task 1, raise a HubSpot ticket from a customer ask, can be an agent. Task 2, decide what kind of lab this is, can be a suggestion from an AI plus a final human read. Task 6, close the deal with the customer, stays human, because that is a relationship. Task 19, teach a stuck user through a lab exercise on a video call, stays human, because that is teaching.
None of these decisions are visible at the workflow level. If somebody walks in and asks "can you automate the lab feasibility workflow?" the only honest answer is "parts of it, depending on which part." If they ask "can you automate task 2, the lab-type decision in Sales?" the answer is much sharper. Yes, with a human review on the close calls, here is what the agent looks like, here is when it ships.
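The task-level view fits in something as plain as a list of records, one decision per task. The task names come from the scan above; the record structure itself is just an illustrative sketch, not a product schema.

```python
# Illustrative sketch: the unit of decision is the task, not the workflow.
# Labels come from the scan above; "reason" is what makes the answer sharp.
TASK_DECISIONS = [
    {"task": 1,  "name": "raise HubSpot ticket from customer ask",
     "label": "automate", "reason": "structured input, structured output"},
    {"task": 2,  "name": "decide Sandbox vs Learning Solutions lab",
     "label": "augment",  "reason": "AI suggests, human makes the close calls"},
    {"task": 6,  "name": "close the deal with the customer",
     "label": "human",    "reason": "relationship"},
    {"task": 19, "name": "teach a stuck user on a video call",
     "label": "human",    "reason": "teaching"},
]

def next_to_ship(decisions: list[dict]) -> dict:
    """The smaller move: pick the first 'automate' task and ship that."""
    return next(d for d in decisions if d["label"] == "automate")

print(next_to_ship(TASK_DECISIONS)["task"])  # the agent to build first
```

Nothing at the workflow level can produce this table; it only falls out once each task is looked at on its own.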
That is also why the workflow-level pitch keeps stalling. The services firm has to deliver an answer for all six support tasks at once. Watching four channels. Sorting into four buckets. Coordinating with the customer's IT team. Sending issues back to engineering. Teaching a stuck user live. Triaging a platform bug. Six different things, and only some of them are even bot-shaped. One pilot does not ship against all of them.
The way out is the smaller move. Pick one task. Decide automate, augment, or human. Ship the agent for the automate one. Train the team for the augment one. Protect the human one. Move to the next task.
What we do for customers
The same thing I just did with our own workflows. Three steps.
Map the work
Pull tasks from every source that knows them. Job descriptions, SOPs, ticket trails, real job postings, workflow frameworks, and structured conversations with the people doing the work. The documented and the lived, side by side.
Label every task
Each task gets one of three labels. Automate. Augment. Human-only. Each with a reason. Each with hours saved per week and an annual impact at the role and workflow level.
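The impact number behind each label is simple arithmetic. The six hours per week and the $75 loaded hourly rate below are placeholder assumptions, not figures from any engagement; the sketch only shows how hours saved per week roll up to an annual number.

```python
# Illustrative arithmetic behind the per-task impact line. The inputs in
# the example call are placeholders, not real customer figures.
def annual_impact(hours_saved_per_week: float, loaded_rate_per_hour: float,
                  weeks_per_year: int = 48) -> float:
    """Annual dollar impact of one task label, assuming ~48 working weeks."""
    return hours_saved_per_week * weeks_per_year * loaded_rate_per_hour

print(annual_impact(6, 75))  # 6 h/week * 48 weeks * $75/h = 21600
```

Summing this over the tasks in a role gives the role-level number; summing over a workflow's tasks gives the workflow-level number.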
Redesign and ship
Build the AI agents for the automate tasks. Train the team for the augment ones. Keep the human ones. Ship the redesigned role or workflow live. Measure.
What CXOs usually ask me at this point
Questions from COOs, CHROs, and CFOs who have been pitched workflow automation more times than they can count.