Before AI runs the work, notice what the work really is.

You know the meeting. Someone shows up with the “responsible person” kit: the slide deck, the tracker, the timeline, the risks, the tidy bullet list of next steps. It looks sharp. It sounds confident. For a minute, everyone relaxes because the work feels organized. Then one question lands and the whole room tilts. “What are we actually trying to accomplish?” Or, “If we hit this date, what are we giving up?” Or the quieter version, “Who’s really deciding this?” Suddenly the conversation isn’t about the plan anymore. It’s about priorities that don’t match, constraints nobody named, and stakeholders who were nodding but weren’t aligned. The documents didn’t fail. They just hit their limit.

That’s because most work has two layers, and we mix them up all the time. Layer one is the repeatable layer, the paperwork and coordination that helps people see the work and follow it: plans, status updates, meeting notes, task lists, forecasts, dashboards, risk lists. This layer is real work. It reduces confusion and keeps teams moving. Layer two is the uncertainty layer, the part that decides whether the paperwork means anything: choosing what matters when goals conflict, making tradeoffs when you can’t have everything, aligning people who don’t fully agree, and getting commitment when the answer isn’t obvious. Layer one makes work visible. Layer two makes work true.

AI is getting very strong at layer one because layer one is pattern-based. Drafting. Summarizing. Formatting. Turning messy notes into clean language. Building a first-pass plan from a few inputs. That means the repeatable layer becomes faster and cheaper to produce, and it stops being the clearest proof of value on its own. The proof moves up a layer. The people who stand out will be the ones who can walk into that moment when the room tilts, name what’s really happening, and turn uncertainty into a decision the team can actually execute. AI will increasingly generate the paperwork that describes the work. Humans will still be needed to exercise the judgment that makes the paperwork real.

Before AI gives you the answers, learn how you decide.

Once you notice the two layers, the repeatable layer starts to feel… exposed. It’s the layer we’ve relied on to make work legible. The deck. The tracker. The status narrative. The risk list. The neat “here are the next steps” summary that helps everyone breathe again. It’s also the layer AI is best suited to absorb, because it’s built on patterns. Give it inputs, it gives you clean outputs. Fast. Confident. Polished. The kind of polished that makes a room stop asking questions.

That’s the uncomfortable aha. If the repeatable layer becomes cheap to generate, then the thing that used to prove competence starts losing its power. Not because the work disappears, but because it stops being rare. And when what’s rare changes, the professional ground shifts under your feet. People don’t say it out loud, but you can feel it in your body when you see an AI produce something that looks like what you spent years learning to produce. It creates a quiet fear that’s hard to name. Not just “Will I be replaced?” More like… “If this is no longer how value gets recognized, what is?”

This is where credentials and degrees come in, and why they feel the same pressure. Most of the formal systems we use to validate expertise are built to test the repeatable layer. They certify that you know the language, the methods, the steps. PMP. PRINCE2. SAFe. Scrum. ITIL. Six Sigma. Business degrees. Technical degrees. Industry training. Different fields, different labels, same basic function. They teach you how to structure work, document it, communicate it, and run it through a set of repeatable practices. They produce professionals who can reliably generate the artifacts an organization depends on to coordinate itself.

But here’s the twist that raises the pressure. In most companies, a huge portion of roles exist to help leaders make decisions by producing those artifacts. Analysts gather the data. Managers synthesize the story. PMs coordinate the dependencies. Ops leaders translate chaos into a plan. Chiefs of staff turn scattered inputs into a crisp set of options. For years the bottleneck has been, “Do we have enough information and structure to make a decision?” The repeatable layer was how organizations built decision confidence.

When AI commoditizes that layer, the bottleneck moves. The question shifts from “Do I have the information to decide?” to “Am I making the right decision with the information I have?” That shift is heavier. It doesn’t just threaten tasks. It threatens identity. Because now the value isn’t in producing the plan. It’s in choosing the direction, naming the tradeoff, and carrying the consequences when the plan runs into reality.

And that’s where job loss and job change start to show up. Not as a dramatic overnight event, but as a quiet rebalancing. Fewer people are needed to generate and maintain the repeatable layer, because one person with AI support can do what used to take a small team. Meanwhile, the expectations for the remaining people rise. They’re pulled upward into the uncertainty layer, where the work is less about producing outputs and more about making sure the outputs lead to the right decisions. That’s why this moment feels sharp. The paperwork used to be the proof. Now the proof is whether you can help people choose well when the stakes are real and the room is not aligned.

Before AI gives you the plan, learn how you choose.

If AI can hand us the answers, then the hard part isn’t getting information anymore. The hard part is choosing well. That’s the shift we just walked into. And it can feel like pressure because it changes what gets rewarded. Here’s the release though… you don’t need to become someone else. You need a simple way to decide with intention when the room is uncertain.

Think back to the meeting. You walk in with a clean plan, a crisp update, a tracker that makes everything feel under control. For a moment, everyone relaxes. Then one question changes the air. “What are we actually trying to accomplish?” Or, “If we hit this deadline, what are we giving up?” Or the quiet one nobody wants to say out loud, “Who’s really deciding this?” Suddenly the plan isn’t the center of the conversation anymore. The decision is.

Now imagine a version of that same meeting that’s a few months closer to the future. The plan, the update, the risk list, the options, the summary… it’s already been generated. It looks polished enough that nobody argues with the wording. Then your leader says, “Pick one and tell me why by 3:00.” That’s the moment the ground shifts. You can’t hide behind more paperwork. You can’t win by producing a better deck. You have to make a call, explain it, and live with the consequences.

So here’s the answer, in a form you can use immediately. In any high-uncertainty moment, do three things: name the decision, name the tradeoff, name the owner and time. “What are we deciding right now?” “What are we trading off to get it?” “Who owns the next move, and by when?” These questions cut through polish and pull the real work to the surface. They turn information into motion, which is what most meetings are secretly failing to do.

This is where understanding yourself becomes the advantage. Because when you’re forced to decide, your default pattern shows up. With the same information, two smart people can make two very different choices. One pushes for momentum. Another protects against risk. One creates clarity by being direct. Another creates clarity by building alignment first. None of that is wrong. But if you don’t know your default, you operate on autopilot. And autopilot is where good intentions turn into predictable mistakes… rushing when you should slow down, over-protecting when you should commit, mistaking silence for agreement, mistaking a polished plan for a real decision.

That’s what the Signature Intelligence Model is for. SIM isn’t about fixing you. It’s about making your operating pattern visible, so you can use it deliberately instead of on autopilot. It gives you language for how you tend to move under uncertainty, how you build alignment, and how you make tradeoffs when stakes are real. When AI makes the paperwork cheap, your value comes from the part that can’t be outsourced as easily… your judgment, your consistency, and your ability to turn unclear situations into decisions people can execute.

You don’t need a deep assessment to start using it on day one. Before your next meeting, take a quiet moment and ask yourself, “Under pressure, do I naturally push progress or protect against risk? Do I create alignment through direction or through connection?” Just naming that changes how you show up. Then walk into the room and run the three decision moves: decision, tradeoff, owner and time. AI can generate the package. You bring the part that makes the package true.

Brandon Matthews

Brandon Matthews is a researcher at Mintelan focused on how decisions get made when work is complex, uncertain, and resistant to clean playbooks. His research sits at the intersection of AI, organizational design, and modern delivery, with a particular emphasis on non-deterministic work where judgment matters more than process. As the creator of the Signature Intelligence Model (SIM) and PMIQ, he explores how decision signatures, modes, and cognitive tilts shape real outcomes, and how AI can be designed to support better thinking rather than replace it. He brings a practitioner’s lens from years of leading enterprise transformation and portfolio governance, working closely with leaders who are navigating ambiguity, pressure, and change.