AI Systems Design · Decision Readiness · Workflow Orchestration · Reusable Prompt Architecture

SAP Product AI Development System

Enterprise AI deployments require more than a general reasoning chain — they require domain-specific governance skills that encode the platform's constraints, design language, and authority model. I extended the 9-skill product development system into an 11-skill enterprise variant for SAP, adding a Fiori-aligned UX specification skill and a Joule AI experience design skill. The Joule skill is not a UX exercise. It is a governance design exercise — defining what the AI can decide, what it must escalate, and how suppliers experience authority handoffs in a live operational workflow.

My Role: System Designer · Prompt Architect · AI Workflow Engineer
Context: SAP enterprise adaptation · Supplier workflow automation · Joule AI integration
Output Type: Executable AI system — skill modules, orchestration logic, output contracts, Claude skill package
Connects To: AI Decision Architecture · Governance Layer · Decision Readiness Engine
What This Is
  • A structured AI system, not a prompt collection. Nine discrete skill modules — Define, Diagnose, Goal, Generate, Challenge, Refine, Self-Study, Self-Test, Execution Readiness — each with defined inputs, outputs, and handoff contracts. They compose into an end-to-end reasoning chain that moves from raw problem to execution-ready output.
  • Decision quality is an architecture problem. Teams don't produce weak plans because they lack intelligence — they produce them because the process lacks structure. No one defines the real problem before jumping to solutions. No one challenges assumptions before committing to scope. This system enforces that structure through AI, so teams can't skip the hard steps.
  • The SAP variant adds two domain-specific skills to the base chain. Skill 10 (UX Spec / Fiori) translates decisions into SAP Fiori-aligned design specifications; Skill 11 (Joule Experience Design) defines AI autonomy thresholds, escalation paths, and authority boundaries for Joule-assisted supplier workflows. Skill 11 is governance infrastructure, not interaction design.
01 / 05

The Problem This Solves

Product teams, regardless of experience level, repeat the same structural failures across every planning cycle: they skip problem definition and go straight to solutions, they generate ideas without challenging assumptions, they write specs that are too vague to build, and they commit to execution before key decisions are made. AI assistants make this worse — they generate confident-sounding outputs at every stage, regardless of whether the underlying reasoning is sound.

The core failure is not effort or intelligence. It is the absence of a structured reasoning chain that forces the right questions in the right sequence. Without it, AI assistance amplifies speed at the cost of quality — producing faster outputs with the same structural gaps.

Gap 01
Skipped Problem Definition
Teams start with solutions before the problem is clearly defined. Root cause, affected users, and consequences are assumed rather than stated. AI accelerates this — it will generate solutions for any prompt, regardless of whether the problem is real.
Gap 02
Unchallenged Assumptions
Solutions get refined without ever being challenged. Weak points, edge cases, and missing decisions accumulate invisibly until they surface as rework or production failures. AI assistants will polish a flawed solution indefinitely without flagging that it is flawed.
Gap 03
Vague Execution Outputs
Plans are written at the level of intentions, not tasks. Ownership is unnamed. Acceptance criteria are absent. The gap between "we decided to do X" and "here is a task you can start without a meeting" is never closed.
Gap 04
No Decision Governance
Decisions made during planning are not logged, rationale is not captured, and open questions have no owners. When questions resurface later — and they always do — there is no record of what was decided and why.
Gap 05
Generic AI Outputs
Without a structured system, AI responses are generic — plausible-sounding but not grounded in the specific problem, constraints, or organizational context. Teams spend more time editing AI output than they would have spent thinking themselves.

The insight: These are not individual skill gaps — they are structural absences. The solution is not better prompting. It is a system that enforces the right structure at every stage, making it impossible to skip the hard steps.

02 / 05

The System Architecture

The AI-Native Product Development System is a nine-skill reasoning chain, each skill responsible for a discrete stage of product thinking. Skills are not suggestions — they are contracts. Each has defined inputs, structured outputs, and explicit handoff conditions. The chain cannot short-circuit; skipping a skill means the next skill's inputs are undefined.
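The "skills are contracts" idea can be sketched in code. This is a minimal illustration, not the system's actual implementation: the skill names and state keys are taken from the chain above, but the dict-based state, the key names, and the placeholder work are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Skill:
    name: str
    requires: set   # keys that must already exist in the shared state
    produces: set   # keys this skill promises to add to the shared state

    def run(self, state: dict) -> dict:
        missing = self.requires - state.keys()
        if missing:
            # The chain cannot short-circuit: undefined inputs are an error,
            # not a silent gap the next skill papers over.
            raise ValueError(f"{self.name}: missing inputs {sorted(missing)}")
        # Placeholder "work" — a real skill would invoke the model here.
        return {key: f"<{key} produced by {self.name}>" for key in self.produces}

def run_chain(skills: list, state: dict) -> dict:
    for skill in skills:
        state.update(skill.run(state))
    return state

# First three skills of the chain, with illustrative handoff keys.
chain = [
    Skill("01 Define Issue", {"raw_request"}, {"problem_statement"}),
    Skill("02 Diagnose", {"problem_statement"}, {"root_causes"}),
    Skill("03 Define Goal", {"root_causes"}, {"success_metrics"}),
]
```

Skipping a skill fails loudly: running the chain without Skill 01's output leaves Skill 02's inputs undefined, which raises rather than producing a plausible but ungrounded result.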

Skill 01
Define Issue
Extract the real problem behind the request. Separate symptoms from root causes. Produce a structured problem statement with named users, consequences, and unknowns.
Problem statement · Target users · Root cause · Assumptions flagged
Skill 02
Diagnose
Understand why the problem exists at a system level. Identify structural causes, workflow gaps, constraints, and dependencies. Ask "why?" at least once per root cause.
Root cause table · Constraints · System gaps · Risk table
Skill 03
Define Goal
Turn the problem into measurable outcomes. Every goal must have a metric. Separate user outcomes from business outcomes. Name what this effort will not address.
User value · Business value · Success metrics · Non-goals
Skill 04
Generate Solution
Create a realistic V1 solution grounded in the defined problem. Name specific behaviors, not generic UX language. Flag every open decision with an owner.
Solution summary · User flow · V1 scope · Open decisions
Skill 05 — Critical Governance Layer
Challenge Solution
Assume the solution has problems. Challenge every assumption. Find edge cases, undefined states, and failure paths. Minimum: 3 weak points, 3 missing decisions, 4 critical questions. Do not soften findings.
Weak points · Hidden risks · UX concerns · Technical concerns · Critical questions
Decision Quality Gate — solution must survive challenge before refinement begins
Skill 06
Refine Solution
Resolve every weak point from the challenge. Log every decision made. If an item cannot be resolved, document why and state the fallback. Only change what the challenge flagged.
Refined solution · Updated scope · Decision log · Remaining open questions
Skills 07–08
Self-Study + Self-Test
Self-Study reviews the full body of work for recurring weaknesses and drift from defined goals. Self-Test simulates review from User, PM, Designer, and Engineer perspectives — each held separately, each producing a Pass / Fail / Conditional verdict.
Pattern analysis · 4-perspective test · Conditional verdicts · Consolidated findings
Skill 09
Execution Readiness
Break the refined solution into epics and tasks specific enough to start without a meeting. Every task has one owner and one outcome. Define observable acceptance criteria. Produce a readiness score 0–100. If below 50, return to Skill 06.
Epics + tasks · Acceptance criteria · Readiness score · Launch checklist
Fig. 01 Nine-skill AI reasoning chain. Skill 05 (Challenge) is the governance-critical layer — it is the point where the system is most likely to be skipped under pressure, and the point where the most structural failures originate. The decision quality gate enforces that challenge findings are resolved before refinement begins.
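The readiness gate at Skill 09 can be sketched as a small scoring function. Only the 0–100 scale and the below-50 cutoff come from the system description; the per-task checks and equal weighting here are illustrative assumptions.

```python
# Illustrative sketch of the Skill 09 readiness gate. Each task must have
# one owner, one outcome, and observable acceptance criteria; the scoring
# weights are assumptions, not the system's actual rubric.
def readiness_score(tasks: list) -> int:
    if not tasks:
        return 0
    checks = ["owner", "outcome", "acceptance_criteria"]
    passed = sum(bool(task.get(check)) for task in tasks for check in checks)
    return round(100 * passed / (len(tasks) * len(checks)))

def gate(tasks: list) -> str:
    score = readiness_score(tasks)
    # Below 50 the chain loops back: execution authority has not been earned.
    return "execution-ready" if score >= 50 else "return to Skill 06 (Refine)"
```

The point of encoding the gate is that it is a threshold, not a recommendation: a low score routes the work back to refinement rather than letting the team interpret the number.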

Why Skill 05 is the system's load-bearing element

Skills 01–04 produce a solution. Skills 06–09 refine and ship it. Skill 05 is the governance layer in between: it assumes the solution is wrong and looks for evidence. This is the step most product processes skip under deadline pressure, and it is the step where the most costly failures originate — weak assumptions that compound into rework, undefined states that become production bugs, missing decisions that cause scope to expand at sprint start.

The system enforces Skill 05 structurally — the refine step cannot begin until challenge findings exist. The AI is instructed to assume the solution has problems, not to evaluate whether it might. That asymmetry is intentional.
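That structural enforcement can be made concrete. The minimums (3 weak points, 3 missing decisions, 4 critical questions) come from the Challenge skill's contract; the dict shape of the findings is an assumption for illustration.

```python
# Sketch of the decision quality gate between Skill 05 and Skill 06:
# refinement is blocked until the challenge output meets its own contract.
# The minimum counts are from the Challenge skill; field names are assumed.
MINIMUMS = {"weak_points": 3, "missing_decisions": 3, "critical_questions": 4}

def may_refine(challenge_findings: dict) -> bool:
    return all(
        len(challenge_findings.get(section, [])) >= minimum
        for section, minimum in MINIMUMS.items()
    )
```

An empty or softened challenge simply fails the gate, which is how the system prevents the "polish a flawed solution indefinitely" failure mode.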

The goal is not faster output. It is better decisions at the stage when they are cheapest to change — before build begins.

03 / 05

Applied in Practice

SAP supplier workflows operate inside strict authority constraints — who can confirm an order, who can flag an exception, what the AI can act on without human review. The general 9-skill chain doesn't encode those constraints. Skills 10 and 11 do. Skill 10 translates decisions into SAP Fiori-aligned specifications that engineering can build without a handoff meeting. Skill 11 defines the governance model for Joule AI assistance — what it says, what it defers, and how it signals uncertainty to the supplier.

SAP — Enterprise Adaptation
Extended the SyncoPro system into a structured AI framework for SAP designers, PMs, and engineers — designed for reuse across products and domains within the enterprise context.
What Was Built
Enterprise variant with two additional skills: Skill 10 (UX Spec / Fiori) for translating decisions into SAP Fiori-aligned design specifications, and Skill 11 (Joule Experience Design) for structuring Joule AI conversation flows, confidence signaling, and human-in-the-loop handoffs. Applied to the supplier order confirmation workflow as the first execution target.
Decision Architecture Connection
Skill 11 directly implements Layer 3 of the AI Decision Architecture framework — defining autonomy thresholds, escalation paths, and authority boundaries for Joule-assisted supplier workflows. The Joule experience design skill is a governance design exercise, not a UX exercise.
Impact
Enables enterprise teams to apply structured AI reasoning to product problems without needing to design the reasoning chain themselves. Scales the governance model from a founder tool to an organizational operating model — reusable across SAP Business Network, Supply Chain, and any product domain that relies on AI-assisted decision-making.

What makes this an enterprise governance tool, not just a design aid: Skill 11 (Joule Experience Design) is not a UX exercise — it is a governance design exercise. It defines autonomy thresholds, escalation paths, and authority boundaries for AI-assisted supplier workflows. The output is a governance contract, not a wireframe.
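As a sketch of what "governance contract, not a wireframe" means in practice: the output of Skill 11 can be read as data that an orchestrator checks before Joule acts. Every threshold, action name, and field name below is an illustrative assumption, not SAP's or Joule's actual schema.

```python
# Hypothetical Skill 11 output for the supplier order confirmation workflow.
# Thresholds, action names, and escalation roles are assumptions.
GOVERNANCE_CONTRACT = {
    "workflow": "supplier_order_confirmation",
    "autonomy": {
        # Joule may act alone only on low-consequence, high-confidence steps.
        "confirm_unchanged_order": {"autonomous": True, "min_confidence": 0.9},
        "flag_date_exception":     {"autonomous": False, "min_confidence": 0.0},
    },
    "escalation_path": ["supplier_user", "buyer", "category_manager"],
}

def authority(action: str, confidence: float) -> str:
    rule = GOVERNANCE_CONTRACT["autonomy"].get(action)
    if rule is None:
        return "escalate"  # undefined actions never run without a human
    if rule["autonomous"] and confidence >= rule["min_confidence"]:
        return "autonomous"
    return "human_review"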

04 / 05

Concepts and Design Thinking

The system is built on a set of principles that are not obvious from the feature list. Each one represents a deliberate architectural choice — a decision about what the system should enforce, what it should leave to human judgment, and what failure mode it is designed to prevent.

Concept 01
Skills are contracts, not prompts
In practice: Each skill has defined inputs, structured output sections, and explicit handoff conditions to the next skill.
Failure it prevents: Freeform AI outputs that are plausible but structurally incomplete — missing ownership, missing metrics, missing edge cases.
Architectural response: Output contracts specify every required section; missing sections are errors, not omissions.
Concept 02
Challenge before refinement
In practice: Skill 05 assumes the solution is wrong. The system is instructed to find problems, not assess whether problems might exist.
Failure it prevents: Solutions refined in the direction of their flaws — made more polished but not more sound.
Architectural response: Enforce challenge as a mandatory gate; refine only acts on challenge findings, not on the original solution.
Concept 03
Decision log as governance infrastructure
In practice: Every decision made during the reasoning chain is logged with options considered, rationale, and owner.
Failure it prevents: Decisions made verbally in planning, forgotten when questions resurface at sprint start.
Architectural response: Decision log is a permanent artifact — not a summary, not a note; a traceable record that travels with the output.
Concept 04
Readiness score as authority gate
In practice: Skill 09 produces a 0–100 readiness score with defined bands. Below 50: return to refinement. No exceptions.
Failure it prevents: Teams committing to build on plans that have unresolved blockers — treating planning completion as build authorization.
Architectural response: Readiness score is not a recommendation; it is an authority threshold. A low score means execution authority has not been earned.
Concept 05
AI as reasoning partner, not generator
In practice: The system does not produce outputs for the human to edit. It produces structured reasoning that the human advances, challenges, or escalates.
Failure it prevents: AI outputs that accelerate production of structurally weak content — faster arrival at the same bad place.
Architectural response: Every skill output ends with Confidence level, Unresolved Gaps, and Next Recommendation — the system models its own uncertainty.
Fig. 02 Core design principles. Each is a structural choice about what the system enforces versus what it leaves to judgment. The system's value is not in the quality of any single AI output — it is in the structure that prevents the human-AI collaboration from producing confident but structurally incomplete work.
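The decision log principle can be sketched as a small append-only record plus the feedback loop that feeds it back into later skill invocations. The source specifies only that options considered, rationale, and owner are captured; the field names and context format here are assumptions.

```python
# Sketch of a decision log entry as a permanent, traceable artifact.
# Field names are illustrative; the system specifies options, rationale, owner.
from dataclasses import dataclass

@dataclass(frozen=True)
class Decision:
    decision_id: str
    question: str
    options_considered: tuple
    chosen: str
    rationale: str
    owner: str

decision_log: list = []

def log_decision(entry: Decision) -> None:
    decision_log.append(entry)  # append-only: the log travels with the output

def as_context(log: list) -> str:
    """Feedback loop: prior decisions re-enter future skill invocations as context."""
    return "\n".join(
        f"[{d.decision_id}] {d.question} -> {d.chosen} ({d.owner})" for d in log
    )
```

Because entries are frozen and the log is append-only, "what was decided and why" survives past the planning meeting instead of being reconstructed from memory at sprint start.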

The connection to AI Decision Architecture

The AI-Native Product Development System is itself an instance of the AI Decision Architecture framework. It defines exactly what AI can decide autonomously (generate a draft, structure an output, flag a risk), what requires human confirmation (accept a challenge finding, log a decision, advance past the readiness gate), and what feedback loop closes the system (the decision log feeds back into future skill invocations as context).

The parallel is deliberate: the same governance principles that prevent silent failure in enterprise AI deployments — authority boundaries, escalation paths, feedback loops — apply equally to AI-assisted product planning. Both are systems where AI acts with partial information in high-consequence contexts. Both require the same structural answer.

AI does not improve product decisions by being smarter. It improves them by enforcing the structure that prevents the shortcuts that cause most planning failures.

05 / 05

What This Demonstrates

Building this system required holding two capabilities simultaneously: understanding what AI can and cannot reliably do in a reasoning chain, and understanding what organizational processes fail when structure is absent. The system design is an answer to both — it extends AI's reasoning capability while constraining the contexts in which it acts without human governance.

Systems-level AI thinking applied to process design

Most AI product work is about what AI should produce. This work is about what AI should be authorized to do — and what structure must surround it for the outputs to be trustworthy. That reframe — from AI as output producer to AI as governed reasoning agent — is the same reframe the AI Decision Architecture framework applies to enterprise operational systems. The consistency is the point.

From design tool to organizational operating model

A general reasoning chain becomes an enterprise operating model when it encodes the specific constraints of a platform — the design tokens, the authority model, the escalation paths. That encoding is what Skills 10 and 11 represent. They are not add-ons. They are the difference between a framework that any product team could use and a system that a SAP designer can invoke on the first day of a new workflow.

Planning Friction
Reduced
Structured skill chain replaces unstructured ideation — teams move faster because the process is defined, not because steps are skipped.
Decision Quality
Improved
Challenge and refinement enforce that solutions are stress-tested before commitment. Weak assumptions surface during planning, not during sprint.
Execution Outputs
Actionable
Every cycle ends with a readiness score, a task list with named owners, and acceptance criteria — not a document, but a build-ready artifact.
Reusability
Scalable
Packaged as an executable Claude skill — any team member can invoke the full reasoning chain with one command, on any product problem.

The organizations that use AI most effectively are not those with the most capable models — they are those with the most rigorous structures surrounding how AI is authorized to reason on their behalf.