
10 Steps to AI Maturity

Pete Norcross

Co-Founder, Epistemion LLC

A large sailing ship crossing open water beneath a luminous night sky, echoing the article's journey metaphor.

"If a man knows not to which port he sails, no wind is favorable."

-- Seneca, Epistulae Morales ad Lucilium, Letter 71

If you've been alive and awake at any point in the three and a half years since ChatGPT came out, you've heard about AI and how it's going to transform every single aspect of your business. You hear about it every day in the news, in the paper, and on social media. But what you've probably never seen is an organization that's actually made these promises come to life. For all the talk of improving efficiency with artificial intelligence, most organizations have continued to operate business as usual.

We've seen this bear out in dozens of organizations that we've worked with. Somebody in the organization gets excited about AI. They start prototyping. The prototyping is exciting, but ultimately the initiatives fall flat. Or the efficiency gains that were promised never materialize. Yet we keep hearing about the pot of gold at the end of the rainbow, so everybody keeps pursuing this AI technology, certain that if they just implement it right, the dollars and the productivity will follow.

So everyone sets sail on this journey to adopt a mature model for integrating artificial intelligence into their business. But since this journey is one that so few organizations have completed, most people set sail like Christopher Columbus looking for the New World. They're moving through uncharted territory and don't know where they need to go. At Epistemion, we've developed a 10-step maturity model for understanding how businesses need to adopt artificial intelligence in a way that is safe and produces the promised efficiency gains. Whether you contract with Epistemion or not, we hope the model below can serve as your map as you venture into the unknown. If you follow this map step by step, you'll find yourself on the cutting edge of artificial intelligence, with the efficiency gains to match, while not falling victim to the traps that lie in uncertain waters. It's a big journey ahead, but like all journeys, it starts with a single step.

Step 1 - Reading the Paper

Step one is reading the paper. Listen, four years ago, none of us had heard of GPT or any of those AI technologies except for those of us on the absolute bleeding edge. As of November 2022, the whole world became familiar with ChatGPT, and it set us off on the course that we're on today.

At this point, there's no AI adoption. There's no familiarity. You've just heard about AI, you've heard the promises, and you don't know where to go from here. You could argue that this isn't really a step at all and wonder why it's in an article like this. But the truth is, everyone starts here. If this is where you're at, don't worry, and don't be ashamed. More people are here than you think. But let's continue with the next step and start progressing toward maturity with generative AI.

Step 2 - Familiarization

The second step is familiarization. Even if your company has adopted no AI at the organizational level, unless you work at a company of one, I can almost guarantee that you are in this step. This is the step where employees are using AI on their own initiative. One employee is having ChatGPT summarize their notes. Maybe your programmer is asking Copilot to assist them with a particularly nasty bug they're facing. Your marketing guy is using Nano Banana to generate pitch deck ideas.

This step is really essential. It can be done in parallel with later steps, but what's important is enabling your employees to discover the use cases that are most meaningful to them. This sort of discovery will allow them to become informed participants as you develop the agentic systems that will drive your business forward. Again, if this is the only place you've reached, you're in good company. A lot of organizations are relying on their employees right now to become educated on their own initiative. In the next step, we'll see how to move forward, support those employees, and begin the process of reaching AI maturity.

Step 3 - Centralization and Resourcing

By step three, you've realized that AI is critical. You've probably also started to realize that AI introduces new burdens of management, policy, and security. And so you begin to centralize access to AI and provide resources to your employees so they can use it effectively.

At this point, AI stops being a private productivity hack and starts becoming an organizational capability. You may be setting up premium subscriptions to artificial intelligence services like OpenAI's ChatGPT or providing something like Anthropic's Claude Code to your programmers. The point is that you're now controlling and providing the access, which sets up a foundation for you to start setting policies and providing control over the AI systems. This can also serve as a good way to collect data for later training of your agentic AI systems.

An Epistemion-branded AI workspace window showing centralized team access and managed prompts.
Step 3: AI stops being a private side tool and becomes a managed organizational capability.

Another overlooked part of this step is starting to invest in educating your employees on the effective use of AI. The fact of the matter is that working with artificial intelligence is a radically new paradigm, with traps in more places than you might expect. As your AI adoption becomes more mature, we'll rely on a principle known as the pit of success, where we try to design systems that push users toward the most successful outcomes by the design of the system rather than by user expertise. Before you can develop those systems, however, you typically have to train some of your top performers on the effective use of AI so they can bring that information back to their teams. This is also where Epistemion typically steps in for an organization. We can provide that training for your employees, bringing them as quickly as possible to effective, knowledgeable, and safe use of AI.

Once your access to AI is becoming centralized and you are starting to train your people on how to use it effectively, we start transitioning to the next step. In the next step, we'll start building our own proprietary AI systems that can provide more tailored, specific, and proactive function for our organization.

Step 4 - Basic AI Agents

Once you've reached step four, you're actually building new things specifically for your team, and this is where the leverage starts to kick in. In step four, we're not building autonomous agents that can reason in complex ways and tackle big challenges. Instead, we're building agents that can handle issues that might have been close to being automated before, but still required just a little bit of human influence. Maybe we're doing things like categorizing emails, evaluating customer review sentiment, or simple document cleanup.

Here we try to make the piece of the puzzle handled by the artificial intelligence as small as possible and cover everything else with deterministic software. These are extremely tightly bounded systems that require minimal safety controls because their scope of operation is limited. These types of agents will live forever in our systems. It's not as if, when we move to more mature agent types, these simple bounded agents lose their function. If only for safety and consistency, these bounded agents are ideal. But the truth is that these bounded-scope agents also allow us to run various steps in our AI workflows at a much lower cost and with higher performance.

It doesn't matter if you're hosting these locally or if you have a service provider providing these to you. What's important is whether you have AI agents making small decisions about tightly controlled operations. If so, you're in this step. But it's not always the case that we can provide an agent all the information it needs ahead of time to be able to operate correctly. Categorizing emails may just involve simple rules that can be defined up front. But when you need information from other systems, that's where the next step comes in.
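To make the "tightly bounded" shape concrete, here's a minimal sketch in Python. The `call_model` function is a hypothetical stand-in for whatever LLM API you use, and the category names and prompt are illustrative assumptions; the point is that the AI's entire decision surface is a single word, and deterministic code validates everything else.

```python
# Sketch of a bounded classification agent: the model answers one narrow
# question, and everything around it is checked, deterministic software.

ALLOWED_CATEGORIES = {"billing", "technical", "sales", "other"}

def call_model(prompt: str) -> str:
    """Placeholder for a real LLM call via your provider's SDK."""
    raise NotImplementedError

def categorize_email(body: str, model=call_model) -> str:
    prompt = (
        "Classify this email into exactly one category: "
        f"{', '.join(sorted(ALLOWED_CATEGORIES))}.\n"
        "Reply with the category name only.\n\n" + body
    )
    raw = model(prompt).strip().lower()
    if raw in ALLOWED_CATEGORIES:
        return raw
    # Out-of-bounds output falls back to a safe default; in practice this
    # would often land in a human-review queue instead.
    return "other"
```

Any output outside the allow-list falls back to a safe default, which is exactly what keeps the agent's scope of operation limited.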

Step 5 - Augmentation

If you need information from another system, or if the information you need to process is more complicated than a single simple AI agent can handle, then you will need to start doing some sort of augmentation. Augmentation involves tuning the model you are using or providing additional context from external systems to aid in its reasoning.

A support-routing agent pulling context from documentation, org charts, and contact data before taking action.
Step 5: Augmentation gives the agent access to the surrounding context it needs to reason correctly.

Imagine you have a support inbox where support queries from your customers come in. You'd like an AI agent to route those to the appropriate department based on the support issue at hand. For example, perhaps you have hardware defects, shipping defects, or software defects. The agent may be told by the prompt you set it up with which sorts of emails go to which department, but over time the managers and stakeholders in those departments will change. What we want is for the agent to be able to access a contact list for the organization, along with a departmental hierarchy, so that it can forward those emails appropriately.

This is where the technique known as RAG (retrieval-augmented generation) comes in. With this technique, you can provide the agent with a tool that looks up your organizational hierarchy and contacts from your contact list. After categorizing an email, the agent performs the relevant searches on your organizational hierarchy and contact list to identify who needs the information and what their email address is, so it can forward the email appropriately. By using augmentation, we can extend simple models to do more than they could with prompting alone. This is where systems start becoming knowledgeable about your organization in particular, through augmented context and fine-tuning tailored to your organization's needs.
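The routing example above could be sketched roughly like this. The department names, addresses, and the `classify` hook are all illustrative assumptions, and the retrieval step is reduced to a dictionary lookup for clarity; in a real system it would query your directory or a vector index.

```python
# Sketch of augmented routing: the agent retrieves the current contact
# for a department instead of having addresses baked into its prompt.

DEPARTMENTS = {
    "hardware defect": "Hardware Support",
    "shipping defect": "Logistics",
    "software defect": "Software Support",
}

CONTACTS = {
    "Hardware Support": "hw-support@example.com",
    "Logistics": "logistics@example.com",
    "Software Support": "sw-support@example.com",
}

def lookup_contact(department: str) -> str:
    """Retrieval tool: queried at run time, so it stays current as people change."""
    return CONTACTS[department]

def route_support_email(body: str, classify) -> tuple[str, str]:
    """classify(body) -> one of DEPARTMENTS' keys; an LLM call in practice."""
    issue = classify(body)
    department = DEPARTMENTS[issue]
    return department, lookup_contact(department)
```

Because the contact list is retrieved rather than memorized, updating who owns a department is a data change, not a prompt rewrite.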

I hope this idea of augmentation is landing, and I hope you're understanding how powerful it is. One of the most popular examples today of this augmentation is providing a chatbot on your website that has access to your documentation for your products and your procedures. You've probably interacted with these agents on websites for organizations that you do business with. More likely than not, you've probably been frustrated by some of those interactions, and sometimes those interactions may have even been dangerous. That's where our next step comes into play.

Step 6 - Governance and Grounding (Don't skip this!)

As I alluded to before, you've probably interacted with AI systems, whether you knew it or not. Let's be real, though: you knew it. You knew it because these systems didn't do a good job of convincing you they were human. They made mistakes that humans wouldn't make. In some cases, those mistakes are relatively benign. A chatbot agent may refer to a referral code that doesn't actually exist. That's bad for your customer, a bad look for you, and may even end up in a refund, but ultimately these are not the biggest risks.

A governance dashboard showing citations, policy checks, audit logs, and a human pause control for an AI agent.
Step 6: Before an agent earns real autonomy, a human needs visibility, auditability, and an emergency brake.

The popular AI systems available today have been trained to produce responses that humans like, and humans like systems that agree with them. But not every person is in a position where systems agreeing with them is safe. Consider a person who is suicidal. There have been cases, time and time again in recent years, of AIs encouraging people with suicidal ideation, in some cases leading to their deaths. And here's the thing: you may think, I'm just building a chatbot for my site. It will only have information about my products. But inside the training of those models is capacity far beyond the use case to which you are applying them. There are many, many, many cases of AIs being tricked into providing information that is harmful, defamatory, or malicious. In the case of agents, they can even be tricked into performing malicious actions.

With that in mind, before agentic systems can be considered ready for production, some form of governance must be established, along with grounding in source material. Governance refers to the process by which a human can audit the decisions and actions made by an AI and understand what needs to change to alter those behaviors. It includes warnings for big deviations from the agent's intended function and may even include emergency shutdown procedures for agents operating in a rogue fashion. Even if an AI is acting autonomously, at the end of the day a human being is responsible for the behaviors and decisions of that agent. That person needs to be able to monitor the agent's behavior, audit individual processes in the case of malicious outputs, and ultimately even pull the plug or pause the agent until remediations can be made.
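The minimal governance plumbing described here can be sketched as a wrapper: every decision lands in an audit log, and a pause flag that only a human flips acts as the emergency brake. The interface below is an illustrative assumption, not a prescribed design.

```python
# Sketch of a governed agent wrapper: audit trail plus human kill switch.
import datetime

class GovernedAgent:
    def __init__(self, act):
        self.act = act            # the underlying agent function
        self.paused = False       # flipped by a human, never by the agent
        self.audit_log = []       # what was asked, what was done, and when

    def pause(self):
        self.paused = True

    def run(self, task):
        if self.paused:
            raise RuntimeError("agent paused pending human review")
        result = self.act(task)
        self.audit_log.append({
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "task": task,
            "result": result,
        })
        return result
```

In production this would be backed by durable logs and alerting, but the principle is the same: no decision happens outside the audited path, and a human can stop the agent at any time.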

This governance is the step I most commonly see missed. It's also the step that differentiates internal AI systems with minimal capabilities from production agentic AI systems that are autonomous and empowered. If you want more information on that subject, next week's article discusses governance for artificial intelligence in greater detail (you can read it here now). Skip governance at your own peril. For now, assuming you've set up that governance, we can move on to one of the most exciting steps, where the system starts to come alive.


Production Boundary: Steps one through six are where you prepare, augment, and govern the system. Step seven is where production agents actually begin.


Step 7 - Autonomous Individual Dynamic Agents

Step seven is where you have what most people actually mean when they say the word agent. Up until this point, we may have had AI workflows and AI helpers and even AI systems that make small, bounded decisions. But now we have a system that can take a goal, reason across multiple steps, use tools, inspect intermediate results, and decide what to do next without a human explicitly mapping every branch of the workflow out ahead of time.

For example, imagine an internal operations agent. Somebody reports that an order failed to sync between your storefront and your fulfillment system. Rather than just classify the ticket, this agent can inspect logs, compare order records, check the shipping status, identify the likely point of failure, draft a response, and in routine cases maybe even resolve the issue. The important part is not just that it has multiple tools. The important part is that it can decide which tools to use and in what order based on what it discovers along the way.
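The loop that makes this dynamic can be sketched in a few lines. Here `decide` stands in for the model's reasoning and the tool names are invented for illustration; the essential property is that the next action depends on what the previous actions revealed, under a hard step budget as a safety bound.

```python
# Sketch of a dynamic-agent loop: the model picks each next tool based on
# accumulated observations, instead of following a pre-mapped workflow.

def run_agent(goal, decide, tools, max_steps=10):
    """decide(goal, observations) -> (tool_name, args) or ("done", answer)."""
    observations = []
    for _ in range(max_steps):          # hard step budget: a safety bound
        tool, arg = decide(goal, observations)
        if tool == "done":
            return arg
        result = tools[tool](arg)       # execute the chosen tool
        observations.append((tool, result))
    raise RuntimeError("step budget exhausted without finishing")
```

Note that the governance from step six wraps around exactly this loop: the observations list is the audit trail, and the step budget is one of several constraints a human sets, not the agent.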

This is the first step where the system really starts to feel alive. It's also the first step where bad governance becomes obviously dangerous. A dynamic agent can create much more value than the systems in the earlier steps, but it can also make much bigger mistakes much more quickly. That's why I keep insisting that step six comes before step seven. If you cannot observe, audit, and constrain a single dynamic agent, you have absolutely no business turning several of them loose. But if you can do those things, then you are ready for the next step, where one agent becomes many.

Step 8 - Orchestrated Hierarchical Dynamic Agents

Step eight is where one agent is no longer enough because the work itself contains multiple different kinds of labor. You may have one agent receiving the task and breaking it down. Another might do research. Another might draft. Another might validate outputs against policy or source material. Another might execute actions in your systems. At this point, you are no longer building a single worker. You are building a team.

A coordinator agent delegating work to specialized research, policy, drafting, and execution agents.
Step 8: Multi-agent systems become useful when specialized roles can be coordinated without losing control of the whole.

When this works, it's powerful. You can split responsibilities cleanly. You can give each agent a narrower context. You can improve safety by limiting what each agent is allowed to do. You can even improve quality by having one agent critique or review the work of another. But let's not romanticize this. A multi-agent system is not just a single-agent system times three. The complexity grows much faster than the number of agents does. Now you have to worry about delegation, communication, state sharing, contradictory conclusions, stalled branches, runaway token costs, and all the other fun surprises that appear when you add coordination to the problem.
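The division of labor described above can be sketched as a coordinator delegating to specialists. Each specialist is just a function here, and the role names are illustrative; in practice each would be its own governed agent with its own narrow context and permissions.

```python
# Sketch of hierarchical orchestration: plan, research, draft, review,
# with one revision pass when the reviewer rejects the draft.

def orchestrate(task, specialists):
    plan = specialists["planner"](task)            # break the task down
    research = specialists["researcher"](plan)     # gather context
    draft = specialists["drafter"](plan, research)
    ok, notes = specialists["reviewer"](draft)     # independent critique
    if not ok:
        draft = specialists["drafter"](plan, notes)  # revise once, then stop
    return draft
```

Even this toy version shows where the new complexity lives: the coordinator has to decide what each specialist sees, what happens on disagreement, and when to stop, none of which existed in the single-agent case.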

This is one of those places where organizations can get distracted by what looks impressive instead of what creates value. Many organizations that think they need a multi-agent system really need a better step seven first. But that does not make step eight a vanity milestone. For the problems that genuinely benefit from specialization and coordination, this step can be transformational, and for many organizations it becomes the natural next stage of maturity. Once you've gotten there, though, another question starts to arise. What happens when even the hierarchy starts becoming a bottleneck? That's where step nine comes in.

Step 9 - Decentralized Resilient Agent Swarms

Step nine is where the architecture stops looking like a manager with direct reports and starts looking more like a network. Instead of every decision and every delegation moving through a central orchestrator, agents can coordinate laterally with one another. Work can be rerouted. Responsibilities can shift. Parts of the system can fail without the entire process grinding to a halt. At this point, the system starts resembling a swarm more than a workflow.

One of the biggest advantages of this step is extensibility. In a decentralized architecture, you can add a new specialized agent, retire an old one, or change how responsibilities are distributed without having to redesign the entire system around a single central brain. As your business changes, the agent network can change with it. That makes the system more adaptable, more resilient, and better suited to environments where the work itself is constantly evolving.
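A toy sketch of that lateral pattern: agents pull tasks from a shared queue rather than receiving them from a central orchestrator, so a failed worker just means the task gets picked up by another. The retry policy and worker selection below are illustrative assumptions.

```python
# Sketch of decentralized rerouting: work survives individual agent
# failures because the queue, not a central brain, owns the tasks.
from collections import deque

def run_swarm(tasks, workers, max_attempts=3):
    queue = deque((task, 0) for task in tasks)
    results = []
    while queue:
        task, attempts = queue.popleft()
        worker = workers[attempts % len(workers)]   # reroute on each retry
        try:
            results.append(worker(task))
        except Exception:
            if attempts + 1 < max_attempts:
                queue.append((task, attempts + 1))  # another agent will retry
            else:
                results.append(("failed", task))    # surfaced, not silently lost
    return results
```

Extensibility falls out of the same structure: adding a specialized agent means appending to `workers`, not redesigning the coordinator.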

This step belongs on the maturity model because eventually resilience and extensibility become architectural requirements, not nice-to-haves. You are designing for a world where workloads spike, capabilities expand, components fail, and the system still needs to keep functioning. That also means governance, observability, and recovery design need to be second nature to your team. Otherwise, what you're calling a swarm is really just chaos with a nicer name. For organizations whose agentic systems keep growing in scope and criticality, this is not an indulgence. It is the architecture that lets the next phase of maturity hold together. And that brings us to the final step, where the system doesn't just operate, but starts improving how it operates.

Step 10 - Self-Improving Agents / Agentic Context Evolution

Step ten is the frontier. At this point, the system is not just doing work. It is improving the way that work gets done. It may refine prompts. It may improve retrieval strategies. It may change how context is assembled. It may learn which tools are most effective in which scenarios. It may even propose changes to its own memory structures or evaluation logic. In other words, the system is starting to evolve the environment that shapes its behavior, not just the behavior itself.

This is where a lot of people get loose with their language. They talk about self-improving agents as if it simply means the AI updates itself and gets smarter every day. That's not maturity. That's drift unless it is governed with extraordinary care. A mature self-improving system does not just change. It changes in ways that are measurable, reviewable, reversible, and aligned with real-world outcomes that matter to the organization.
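The difference between governed improvement and drift can be sketched in a single gate: a proposed change is adopted only if it measurably beats the current version on a fixed evaluation suite, and the old version is retained so the change stays reviewable and reversible. The function names and scoring are illustrative assumptions.

```python
# Sketch of gated self-improvement: measurable (eval scores), reviewable
# and reversible (history of prior versions), with regressions rejected.

def consider_update(current_prompt, proposed_prompt, evaluate, history):
    """evaluate(prompt) -> score on a fixed eval suite (higher is better)."""
    old_score = evaluate(current_prompt)
    new_score = evaluate(proposed_prompt)
    if new_score > old_score:
        history.append(current_prompt)   # keep the old version for rollback
        return proposed_prompt
    return current_prompt                # reject regressions outright
```

The gate itself is deterministic software; only the proposal generation is left to the AI. That inversion is what keeps evolution aligned with outcomes the organization actually measures.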

If step seven is where the system starts to feel alive, step ten is where it starts to feel almost like an organism. And that is exactly why organizations should treat it as an earned objective rather than a slogan. The goal is not to claim step ten before the foundations exist. The goal is to keep building toward the highest level of maturity your organization can safely and usefully sustain. For a lot of businesses, steps four through seven are where most of the practical value will live in the near term. But that should be understood as a sequencing reality, not a ceiling. As governance, confidence, and operational demands grow, steps eight through ten become the path to systems that compound in usefulness over time rather than simply execute the same sort of work faster.

How to use the model

Use this roadmap in order:

  1. Be honest about where you are. Not where your board deck says you are. Not where your AI vendor says you are. Where you actually are. If your employees are experimenting with ChatGPT on their own and maybe you've bought a couple licenses, you're not at step seven, and that is completely fine.
  2. Keep the final direction in view, but build the next step. Maturity models become dangerous when people use them as aspirational branding instead of operational sequencing. You do not need a swarm because you saw a viral thread on X about swarms. You do need to keep climbing toward the capabilities your business will eventually need, one prerequisite at a time.
  3. Build the prerequisites for that next step before you try to claim it. The organizations getting durable value from AI are almost never the ones with the flashiest demos. They are the ones who matched their ambition to their maturity and built the boring foundations first.
  4. Climb the staircase in order. That may not be the sexiest answer in the age of AI hype, but it is the answer that works.

Need help charting the course?

If you're trying to figure out where your organization actually sits on this maturity model, what your next step should be, or how to move forward without creating unnecessary risk, contact us. These are exactly the kinds of conversations we have with teams every week.

We can help you assess your current level of maturity, identify the most valuable next capability to build, and develop a practical path forward for your agentic AI needs. Whether that means governance, augmentation, orchestration, or simply getting the foundations right, we would be glad to help you move up the model in a way that is safe, useful, and grounded in real business outcomes.

Have an automation or agentic AI project in mind?

Tell us about your operation. We start every engagement with a free 60-minute consultation.

Schedule a Consultation