Governance Architecture · Organizational Systems Design · Cross-Functional Planning Infrastructure

Re-Architecting UX Governance Across Product Planning

UX was structurally excluded from the systems that govern product investment decisions. This is the operating model I designed to change that — separating planning intent from execution reality, and giving leadership a reliable view of design capacity for the first time.

My Role: UX Governance Architect · Cross-Functional Strategist
Intervention Type: Systemic organizational redesign — not a tool improvement
Systems Redesigned: Aha (Planning Intent) · Jira (Execution Tracking)
Organizational Span: UX · PM · Engineering · Leadership
Executive Thesis
  • What was structurally broken: Both planning systems — Aha and Jira — were built entirely around PM and Engineering. UX had no structured presence in either: no demand fields, no capacity model, no ownership layer. Design work existed as an untracked assumption until it became a sprint-level crisis.
  • What risk that created: Leadership could not assess UX readiness against roadmap commitments. PM could not plan around design capacity. Scope conflicts surfaced under sprint pressure — when options were already exhausted. The organization was making roadmap decisions without modeling roughly 30–40% of the work required to execute them.
  • The leadership gap I stepped into: No one had claimed authority over this structural problem. I partnered with PM to redesign the planning architecture, defined governance boundaries, established required fields and ownership rules, and built the reporting infrastructure leadership needed — without a mandate, through cross-functional influence alone.
01 / 06

Systemic Failure Diagnosis

The failure was not a process gap. It was a structural one. Product planning systems had been configured to model PM and Engineering work with precision — roadmap features, engineering stories, sprint velocity. UX effort was simply not in the model. It didn't appear as a field. It didn't have an owner. It had no planning weight. The result was an organization running product planning with a significant category of work invisible at the decision layer.

When a function has no structural representation in planning systems, it cannot exert planning authority — regardless of the talent, judgment, or effort of the people in that function. UX was operating reactively not by choice, but by infrastructure design. The planning tools made it structurally impossible to operate any other way.

Failure 01
UX Invisible at Roadmap Level
Aha contained no UX demand fields. PM was committing features to roadmaps — and implicitly to design investment — without any structured signal of what those commitments required from the design org. UX workload was invisible to the people responsible for planning it.
Failure 02
Capacity Blind Spots at Leadership Level
Leadership had no mechanism to assess aggregate UX demand against roadmap commitments. Reporting was informal and reactive — surfaced when a delivery was at risk, not before capacity was overcommitted. There was no forecast, only a post-hoc accounting of what went wrong.
Failure 03
PM–Engineering Alignment Excluding UX
Planning conversations between PM and Engineering happened inside Jira — at the sprint level. UX entered those conversations as a late, reactive input. By the time design scope was negotiated, engineering commitments had already been made. UX was consistently operating downstream of decisions it should have shaped.

If nothing changed: The pattern would compound. As the product roadmap scaled, UX demand would grow without becoming more visible. Sprint escalations would increase in frequency. PM would continue committing to features without design investment data. Leadership would continue flying blind on design capacity. And the org would continue treating a structural problem as a personnel or process one — blaming coordination instead of fixing the infrastructure.

02 / 06

Governance Architecture

Governance separates intent from execution to protect planning integrity. The architecture I designed does not describe how UX work gets done — it defines the organizational conditions under which good planning decisions can be made in the first place. It operates across five distinct layers, each answering a different question at a different altitude.

Layer 1 — Product Intent
PM · Aha · Strategic Roadmap
What are we building, and what organizational investment does it require across all functions?
Feature prioritization · Cross-functional investment signals · Roadmap sequencing · Quarterly planning horizon
Layer 2 — UX Demand Definition
UX Lead · Structured Planning Fields
What design investment does each feature require, at what phase, owned by whom?
UX involvement flag · Effort sizing by phase (T-shirt) · Named design owner · Phase classification
Layer 3 — Capacity Review & Ownership
UX Lead + PM · Quarterly Planning Gate
Does the roadmap's implied UX demand fit within current design capacity? Where are the conflicts?
Aggregate demand review · Conflict surfacing · Scope sequencing · Ownership confirmation
Governance Boundary — Planning Authority does not transfer to Execution without complete UX fields
Layer 4 — Execution Tracking
Engineering + Designer · Jira · Sprint Level
What is being built this sprint? How is it progressing? Where are the blockers?
Story decomposition · Sprint assignment · Daily progress tracking · Velocity measurement
Layer 5 — Forecast Correction
UX Lead · Post-Delivery Calibration
How did actual effort compare to planning estimates? What do those variances tell us about our sizing model?
Actual vs. estimated comparison · Pattern identification · Benchmark calibration · Leadership forecast update
Fig. 01 Five-layer governance architecture. Each layer answers a distinct organizational question at the appropriate planning altitude. The governance boundary between Layers 3 and 4 is the critical structural separation — it ensures that planning authority is exercised before execution begins.
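The Layer 5 calibration loop is, at bottom, simple arithmetic. A minimal sketch of how it could work (the size-to-weeks benchmark, the threshold, and the delivery records below are hypothetical illustrations, not the actual model's values): compare each delivered feature's actual effort against the estimate its T-shirt size implied, and flag sizes whose average variance suggests the benchmark needs updating.

```python
# Hypothetical benchmark: designer-weeks implied by each T-shirt size.
SIZE_WEEKS = {"S": 1, "M": 2, "L": 4, "XL": 8}

# Illustrative post-delivery records: (T-shirt size, actual designer-weeks).
delivered = [
    ("M", 2.5), ("M", 3.0), ("L", 4.5), ("S", 1.0), ("L", 6.0), ("M", 2.5),
]

def calibration_report(records, threshold=0.25):
    """Average actual/estimated ratio per size; flag sizes whose mean
    drifts more than `threshold` from the benchmark (ratio 1.0)."""
    by_size = {}
    for size, actual in records:
        by_size.setdefault(size, []).append(actual / SIZE_WEEKS[size])
    report = {}
    for size, ratios in by_size.items():
        mean = sum(ratios) / len(ratios)
        report[size] = {
            "mean_ratio": round(mean, 2),
            "recalibrate": abs(mean - 1.0) > threshold,
        }
    return report

print(calibration_report(delivered))
```

With these illustrative numbers, "S" tracks its benchmark while "M" and "L" run consistently over, which is exactly the signal that feeds the leadership forecast update at the end of each cycle.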

Why the Aha–Jira separation is the critical design decision

Most product organizations let planning and execution blur into each other. When that happens, the sprint becomes the de facto planning environment — which means planning happens under time pressure, with limited options, and inside a system (Jira) that was never designed to hold intent. The result is reactive capacity management: discovering what was missing after it was already too late to change it.

The separation between Aha and Jira is not a process preference. It is a structural assertion that planning questions require planning infrastructure, and execution questions require execution infrastructure — and that answering planning questions inside execution tools guarantees that planning will consistently lose. The governance boundary between these two systems is the mechanism that keeps that separation real under organizational pressure.

Governance separates intent from execution to protect planning integrity. Without that separation, execution urgency always displaces planning discipline.

03 / 06

Governance Guardrails

Guardrails are not tool settings. They are organizational disciplines — codified responses to specific failure modes that the organization has already experienced. Each guardrail below names the dysfunction it prevents, the cultural habit it interrupts, and the structural discipline it introduces in its place.

Guardrail | Dysfunction It Prevents | Cultural Habit It Interrupts | Discipline It Introduces
UX fields required before Aha feature advances | Features entering execution without design investment declared | Treating UX scope as discoverable at sprint, not plannable at roadmap | UX demand is a planning prerequisite, not an execution variable
Jira stories require upstream Aha record | Execution outrunning planning; scope defined inside the sprint tool | Starting engineering work before design intent is declared | Planning authority precedes execution authority — always
T-shirt sizing reviewed at planning cadence, not sprint grooming | Effort estimates revised under sprint pressure, losing planning signal | Re-opening sizing decisions in grooming as a negotiation mechanism | Sizing is a planning decision; decomposition is an execution decision — these are different conversations
Scope changes re-enter at Aha before Jira is updated | Silent scope inflation accumulating sprint over sprint without leadership visibility | Patching scope changes in Jira without updating the planning record | Every scope change is a planning event; it must be recorded at the planning layer to preserve forecast integrity
Designer ownership assigned at planning, inherited by execution | Ownership ambiguity at sprint start; last-minute assignment under delivery pressure | Treating ownership as a grooming output rather than a planning input | Accountability is established before commitment — not negotiated after it
Fig. 02 Governance guardrails framed as organizational disciplines. Each rule targets a specific failure mode and cultural habit — not a tool configuration. The guardrails are the mechanism by which architectural decisions persist through organizational entropy.
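To make the first two guardrails concrete, here is a minimal sketch of how the "Jira stories require an upstream Aha record with complete UX fields" gate could be checked mechanically. The record shapes, field names, and feature IDs are hypothetical; a real implementation would sit behind the Aha and Jira integrations rather than in-process dictionaries.

```python
# Hypothetical planning records, as they might be exported from Aha.
aha_features = {
    "FEAT-101": {"ux_involvement": True, "ux_size": "M",
                 "ux_phase": "Design", "design_owner": "A. Rivera"},
    "FEAT-102": {"ux_involvement": True, "ux_size": None,   # incomplete
                 "ux_phase": None, "design_owner": None},
}

# The minimum viable field set: each field's absence would change a
# planning decision, so each is a prerequisite for execution.
REQUIRED_UX_FIELDS = ("ux_involvement", "ux_size", "ux_phase", "design_owner")

def may_create_story(aha_id):
    """Guardrail: a Jira story may be created only if an upstream Aha
    record exists and every required UX field is populated."""
    feature = aha_features.get(aha_id)
    if feature is None:
        return False, "no upstream Aha record"
    missing = [f for f in REQUIRED_UX_FIELDS if feature.get(f) in (None, "")]
    if missing:
        return False, "incomplete UX fields: " + ", ".join(missing)
    return True, "ok"

print(may_create_story("FEAT-101"))  # (True, 'ok')
print(may_create_story("FEAT-102"))  # blocked: fields missing
print(may_create_story("FEAT-999"))  # blocked: no planning record
```

The point of the sketch is the shape of the rule, not the code: the gate returns a reason, so a blocked story surfaces a planning gap at the planning layer instead of being negotiated away inside sprint grooming.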

Decision-layer authority map

Governance confusion is ownership confusion. The authority map below defines which function holds decision authority at each layer — not as a RACI, but as a record of what each function is responsible for protecting.

Decision Domain | Layer | Authority | Cadence | What This Protects
UX demand field completeness | Planning | UX Lead | Pre-planning review | Roadmap cannot advance without design investment declared
Effort sizing by UX phase | Planning | UX Lead + PM | Quarterly planning | Sizing reflects planning judgment, not sprint negotiation
Designer ownership assignment | Planning | UX Lead | Feature intake | Accountability established before engineering commitment
Sprint story creation and sequencing | Execution | Engineering + PM | Sprint planning | Execution decisions are downstream of, not independent from, planning
Design task decomposition | Execution | Designer | Sprint grooming | Decomposition does not re-open effort sizing
Scope change processing | Shared | PM + UX Lead | As triggered | Scope changes surface at planning layer before execution updates
Estimation accuracy review | Feedback | UX Lead | Post-delivery | Planning model improves with each cycle — it doesn't calcify
UX capacity forecasting | Planning | UX Lead + EM | Monthly / Quarterly | Leadership forecast derived from planning data, not sprint velocity
Fig. 03 Decision-layer authority map. Each row names a governance decision, the function that owns it, and what organizational integrity that ownership protects. This is not a coordination matrix — it is a record of structural accountability.

When ownership is ambiguous, urgency decides. Governance makes ownership explicit before urgency arrives.

04 / 06

Leading Adoption & Alignment

Designing the governance architecture was the analytical challenge. Earning adoption across PM, Engineering, and leadership — without positional authority, on the strength of cross-functional argument alone — was the leadership one. The two challenges require different skills, and the second is harder.

Starting with PM — reframing the value proposition

I started with PM because they held the most leverage and felt the problem most acutely. My approach was not to present a UX need. It was to name a PM problem: PMs were making roadmap commitments they couldn't fully evaluate, because design investment wasn't in the planning data. The governance architecture gave PM something they didn't have — the ability to see UX capacity requirements before locking sprint scope.

That reframe was the difference between adoption and resistance. PM became co-owners of the model because it addressed their planning confidence problem — not because they were asked to accommodate UX. We negotiated field definitions together, stress-tested the sizing model against the existing roadmap, and jointly defined the governance boundary between Aha and Jira. By the time the model was finalized, PM was already advocating for it internally.

Aligning Engineering — connecting governance to sprint reliability

Engineering's concern was predictability, not process. The guardrail requiring complete Aha UX fields before Jira story creation initially read as friction — an additional gate before work could start. I reframed it as the opposite: design scope ambiguity at sprint start is what creates friction. When UX ownership and effort are unclear entering a sprint, engineering delivery absorbs the cost. The governance model didn't add overhead — it moved a conversation that was already happening from sprint pressure to planning cadence, where it could be resolved with options intact.

Defining the minimum viable field set

One of the highest-stakes design decisions in governance work is knowing how much structure to impose. Too little, and the model has no teeth. Too much, and adoption collapses under administrative weight. My criterion for required fields was precise: a field earns mandatory status only if its absence would change a planning decision. That test eliminated everything decorative and retained only what carried planning weight — UX involvement flag, effort sizing by phase, phase classification, and named designer. Four fields. Enough to make UX demand plannable. Not enough to create compliance fatigue.

Building reporting leadership could actually use

Leadership visibility was the final adoption challenge — and the most consequential one. The structured Aha fields enabled reporting that derived UX capacity forecasts directly from roadmap data, not from sprint velocity or informal updates. For the first time, leadership could ask structural questions and receive data-backed answers: Which product lines carry the highest design investment this quarter? Where does capacity break if we accelerate two additional features? Does current headcount sustain the roadmap as planned?

Framing those reporting outputs in leadership terms — as resource and risk questions, not UX status updates — was what made them relevant to the conversations leadership was already having. The governance model didn't create new reporting overhead. It made the planning data that now existed visible at the right organizational altitude.
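The forecasting those structured fields enable reduces to a straightforward aggregation. A minimal sketch (the size-to-weeks mapping, the roadmap rows, and the capacity figure are illustrative assumptions, not real data): convert each feature's T-shirt size into designer-weeks, sum by quarter, and compare against available design capacity to surface where the roadmap breaks.

```python
# Hypothetical size-to-effort benchmark (designer-weeks per T-shirt size).
SIZE_WEEKS = {"S": 1, "M": 2, "L": 4, "XL": 8}

# Illustrative roadmap rows as exported from the planning tool:
# (quarter, product line, T-shirt size).
roadmap = [
    ("Q1", "Orders",   "L"),
    ("Q1", "Orders",   "M"),
    ("Q1", "Shipping", "XL"),
    ("Q2", "Orders",   "S"),
    ("Q2", "Shipping", "M"),
]

def demand_by_quarter(rows):
    """Aggregate the UX demand (designer-weeks) the roadmap implies, per quarter."""
    totals = {}
    for quarter, _product_line, size in rows:
        totals[quarter] = totals.get(quarter, 0) + SIZE_WEEKS[size]
    return totals

def capacity_gaps(demand, capacity_weeks):
    """Quarters where implied demand exceeds available designer-weeks,
    and by how much."""
    return {q: d - capacity_weeks for q, d in demand.items()
            if d > capacity_weeks}

demand = demand_by_quarter(roadmap)
print(demand)                     # implied designer-weeks per quarter
print(capacity_gaps(demand, 10))  # quarters over a 10-week capacity
```

The same aggregation, grouped by product line or phase type instead of quarter, answers the other leadership questions above; the hard part was never the arithmetic but getting the fields populated at the planning layer in the first place.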

05 / 06

Organizational Impact

Impact in governance work is measured in structural shifts, not project outcomes. The question is not whether a deliverable shipped — it is whether the organization now makes different decisions, with better information, at the right time. These are the structural shifts this architecture produced.

Roadmap Predictability
Improved
PM could commit features with full visibility into UX investment. Undiscovered design scope stopped entering sprints.
Reactive Escalation
Replaced
Mid-sprint renegotiation replaced by early scope alignment. Capacity conflicts moved from execution to planning cadence.
UX Capacity Forecast
Established
Aggregate design demand visible in Aha for the first time — forecastable by quarter, product line, and phase type.
Leadership Reporting
Structured
Moved from anecdotal updates to structured forecasting derived directly from roadmap planning data.

Team level

Designers entered sprints with declared scope and confirmed ownership. UX leads had a shared vocabulary with PM for communicating effort — T-shirt sizing gave design work the same planning legibility that engineering story points had always provided. The team stopped spending sprint time relitigating decisions that should have been made at planning.

Program level

PM gained a reliable signal of design investment at the roadmap stage — before engineering commitments were made. The governance boundary between Aha and Jira meant that scope, sizing, and ownership were established before execution began, not discovered during it. Cross-functional planning conversations shifted from reactive scope negotiation to proactive capacity management. The org gained the ability to see UX demand coming, rather than absorbing it as it arrived.

Leadership visibility level

Leadership moved from informal, risk-triggered updates to structured capacity forecasting against the roadmap. The questions they could now ask — and receive data-backed answers to — were fundamentally different in character: not "what happened to the design schedule" but "what does this roadmap require from UX, and do we have the organizational capacity to deliver it." That is a strategic resource question. It requires infrastructure to ask consistently, and that infrastructure now exists.

06 / 06

What This Demonstrates

A Principal-level designer is not defined by what they design. They are defined by the organizational problems they can see, name, and resolve — including problems that have no obvious owner and no clear brief. This case demonstrates three capabilities that operate above the craft layer.

Operating model thinking

I diagnosed the problem as a structural one — a planning infrastructure that excluded UX — and designed a solution at the structural level. Not a better process. Not a new meeting. An operating model: a defined set of layers, boundaries, ownership rules, and cadences that changes how the organization makes planning decisions as a system. That kind of intervention requires the ability to see organizational dysfunction in architectural terms, not behavioral ones.

System-level intervention

The governance architecture did not solve a UX problem. It solved a planning problem that happened to have UX invisibility as a primary symptom. The system I designed gives PM better data, gives Engineering clearer scope, gives leadership a reliable forecast, and gives UX structural planning authority. A system-level intervention produces value for every function it touches — not just the function that initiated it.

Cross-functional governance without authority

Nothing here was mandated. Every governance boundary, every required field, every reporting structure was adopted through cross-functional alignment — by demonstrating to each function what the model offered them specifically. That required understanding PM's planning confidence problem, Engineering's sprint predictability concern, and leadership's forecasting gap well enough to solve all three simultaneously. Governance built through alignment is durable. Governance imposed through authority is fragile. The architecture I designed was built to outlast the conversation that created it.

The measure of a governance architecture is not whether it looks right on a diagram. It is whether, six months later, the organization is making better decisions — at the right altitude — without requiring the original architect in the room.

UX invisibility in planning systems is not a communication problem. It is an infrastructure problem. You cannot solve it by asking more loudly — only by building the structures that make it architecturally impossible to ignore.

Addendum

Vibe Coding: The Delivery-Layer Complement

The governance architecture described in this case study solves a planning-layer problem: UX capacity is invisible in the systems that govern investment decisions. There is a parallel problem at the delivery layer — design intent loses fidelity as it moves from designer to engineer, and PM, designer, and engineer have no shared artifact type that all three can evaluate simultaneously.

Vibe coding — AI-assisted local development using Claude Code and Figma MCP — is the structural answer to that delivery-layer problem. Rather than passing a static Figma frame to engineering and absorbing interpretation loss at the handoff, a designer working in the vibe coding environment can generate running SAPUI5 code in the same session in which the Figma frame is open. The design becomes a testable artifact before engineering begins.

What I Built

Live AI demos: I built two fully functional AI assistant demos — OC Joule (Order Collaboration) and ASN Joule (Advanced Shipping Notification) — as running SAPUI5 applications. Stakeholders experienced the actual conversation flow, interaction latency, and visual treatment in a browser rather than evaluating a static Figma mockup. Each demo was built within a single working session using a custom skill library matched to SAP Fiori Horizon design tokens.

Custom skill library for the UX team: I engineered a set of Claude Code skills encoding SAP Fiori design conventions, Joule panel architecture, demo scaffolding patterns, and a Figma-to-code bridge. These skills are shared with the UX team — every designer can reproduce the same workflow without rebuilding the company design system context from scratch in each session. The skills improve through use: each feedback correction I make during a session gets encoded back into the skill file, compounding quality across the team over time.

UX team local environment setup: I documented and tested the full setup workflow for designers with no prior terminal experience — Node.js, Claude Code CLI, Figma MCP configuration, and skill installation — so the team could begin vibe coding sessions independently.

Connection to Governance Architecture

The governance infrastructure in this case study makes UX work structurally visible at the planning layer. Vibe coding makes UX output structurally legible to all functions at the delivery layer. The two interventions are complementary: governance ensures UX capacity is modeled before commitments are made; vibe coding ensures the artifact that crosses the design–engineering boundary carries enough fidelity to eliminate interpretive ambiguity at handoff.

Together, they close the gap at both ends of the product development chain — from roadmap intent to running implementation — without requiring new organizational process or additional overhead. The infrastructure changes the default behavior of the system.