Designing a Decision Intelligence System
SyncoPro is not a feature set — it is a unified system. Three engines work together: a Decision Engine that turns ambiguous ideas into execution-ready briefs, an Evolution Engine that learns from every cycle and improves automatically, and a Language System that lets users think in their strongest language while collaborating in everyone else's. This case documents how those three components were designed, why they belong together, and what the system can do that none of its parts could do alone.
- A unified decision intelligence system, not a collection of features. SyncoPro turns a rough idea into a structured, execution-ready brief — scored for decision quality, refined by AI, available in the user's thinking language and in business-ready English simultaneously.
- It learns with every cycle. The Evolution Engine scans the product continuously, classifies issues automatically (P0/P1/P2), auto-fixes high-confidence problems, and persists that knowledge across sessions. The system becomes more accurate over time without being reconfigured.
- It removes language as a barrier to decision quality. Non-English users generate their brief in their native language — the language where their thinking is most precise. A business-ready English version is produced simultaneously for global collaboration. No translation step. No re-generation. No friction for English users.
The Problem
AI tools that help with product development share a common failure mode: they generate outputs but do not improve the system producing them. They assume English as the thinking language. They treat feedback as passive data. They restart from zero with every session.
The result is a predictable set of gaps: briefs that arrive as prose instead of structure, feedback that never changes the system, non-English thinking flattened at input, and no memory between sessions.
These are not separate problems. They are symptoms of the same missing architecture: a system that structures output, learns from feedback, improves automatically, and serves users regardless of what language they think in.
The System
Three components. One unified loop.
The user inputs an idea. The Decision Engine structures it into an execution-ready brief. The Language System generates that brief in the user's thinking language and in English simultaneously. The Evolution Engine scans the result, learns from it, and improves the next cycle. The loop closes.
Decision Engine
Most AI tools generate text. The Decision Engine generates structure. There is a difference: structure is actionable.
When a user submits an idea, the engine produces a complete project brief: a specific problem statement, defined users, scoped V1 features, explicit exclusions, key decisions with rationale, blockers, execution phases with effort estimates, and immediate next actions. Each field has a purpose — together they form a document a team can act on without translation into another format.
Decision Quality Score
Every brief is scored 0–100 across five dimensions: problem clarity, solution specificity, scope discipline, decision coverage, and execution readiness. The score is a live signal — it tells the user whether the brief is ready for a team or needs another refinement cycle.
- 80+ — Decision-Ready. Brief can be handed to a team.
- 60–79 — Strong. One refinement cycle will close remaining gaps.
- 40–59 — Developing. Key decisions or scope are still undefined.
- Below 40 — Needs Work. Idea needs more specificity before structure holds.
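The banding above is a pure mapping from score to readiness label. A minimal sketch of that mapping (the function and type names are illustrative, not taken from the actual codebase):

```typescript
// Map a 0-100 decision quality score to a readiness band.
// Thresholds mirror the bands listed above; names are hypothetical.
type Readiness = "Decision-Ready" | "Strong" | "Developing" | "Needs Work";

function scoreToReadiness(score: number): Readiness {
  if (score >= 80) return "Decision-Ready"; // brief can be handed to a team
  if (score >= 60) return "Strong";         // one refinement cycle closes gaps
  if (score >= 40) return "Developing";     // key decisions still undefined
  return "Needs Work";                      // idea needs more specificity
}
```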
Refine Loop
When a brief scores below the readiness threshold, the Refine agent challenges it — identifying the specific fields that are weakest (vague problem statement, over-broad scope, missing blockers) and rewriting them. Each refinement cycle updates the score. The loop stops when the brief is genuinely ready, not when the user clicks accept.
The goal is not a completed document. It is a brief that a team can act on without asking clarifying questions.
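The refine loop described above reduces to a score-gated iteration: find the weakest fields, rewrite them, rescore, repeat until the threshold is crossed. A sketch under hypothetical interfaces (the real agent's shape is not documented here):

```typescript
// Score-gated refine loop. The agent's scoring, weakness detection, and
// rewriting are injected as functions; all names are illustrative.
interface Brief { fields: Record<string, string>; }

const READY_THRESHOLD = 80;

function refineUntilReady(
  brief: Brief,
  score: (b: Brief) => number,
  weakestFields: (b: Brief) => string[],
  rewrite: (b: Brief, fields: string[]) => Brief,
  maxCycles = 5,
): Brief {
  let current = brief;
  // Stop when the brief is genuinely ready, not after a fixed count
  // (maxCycles only guards against a score that never improves).
  for (let i = 0; i < maxCycles && score(current) < READY_THRESHOLD; i++) {
    current = rewrite(current, weakestFields(current));
  }
  return current;
}
```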
Language System — Thinking vs Collaboration
Forcing non-English users to input in English doesn't change the language — it reduces the thinking. When a founder writes in their second language, they compress ideas to fit available words. The AI responds to the compressed version. In a decision support tool, that gap has consequences.
The insight
- Users have two language needs: think freely in their strongest language, collaborate clearly in a shared one.
- Language affects decision quality, not just UI — the precision of a problem statement depends on how fluently the user can articulate it.
- Treating language as a display setting misses the point. It needs to be a generation parameter.
The three-language model
Three roles — always kept separate. Collapsing any two of them breaks the system.
| Role | What it does | Controlled by |
|---|---|---|
| Thinking (`thinkingLanguage`) | Drives AI generation — the language the user reasons in. A Chinese user gets a Chinese brief. If you write in German, the AI structures the problem in German. | Input detection (automatic — no configuration required) |
| Display (`displayLanguage`) | The app UI language. Independent of what language the user inputs their ideas in. | Device preference / user setting |
| Collaboration (`collaborationLanguage`) | Always English. Optimized for global sharing — not a literal translation but a rewrite for business clarity, active voice, and global readability. | System constant — always present |
How it works
Detection happens before the API call — the system identifies the user's language from the input text using Unicode range matching (for script-based languages) and keyword frequency scoring (for Latin-script languages like German, French, Spanish). The correct generation mode is sent on the first request.
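The two strategies (Unicode range matching for script-based languages, keyword frequency for Latin-script ones) could be sketched as follows. The ranges and keyword lists here are illustrative samples, not the production set:

```typescript
// Detect a probable thinking language from raw input text, client-side,
// before any API call. Ranges and keyword lists are illustrative only.
const SCRIPT_RANGES: Array<[RegExp, string]> = [
  [/[\u4e00-\u9fff]/, "zh"], // CJK Unified Ideographs
  [/[\u3040-\u30ff]/, "ja"], // Hiragana + Katakana
  [/[\u0600-\u06ff]/, "ar"], // Arabic
];

const LATIN_KEYWORDS: Record<string, string[]> = {
  de: ["und", "nicht", "für", "eine"],
  es: ["que", "para", "una", "los"],
  fr: ["pour", "une", "les", "avec"],
};

function detectThinkingLanguage(input: string): string {
  // Script-based languages: a single in-range character is decisive.
  for (const [range, lang] of SCRIPT_RANGES) {
    if (range.test(input)) return lang;
  }
  // Latin-script languages: one keyword hit is enough. A missed detection
  // costs more (no native version) than a false positive does.
  const words = input.toLowerCase().split(/\s+/);
  for (const [lang, keywords] of Object.entries(LATIN_KEYWORDS)) {
    if (words.some((w) => keywords.includes(w))) return lang;
  }
  return "en";
}
```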
Both versions are generated in parallel and stored immediately in a `versions` map:

```
versions: {
  english: { ...fullBrief },  // business-ready
  chinese: { ...fullBrief },  // thinking-language
}
```
English is always present — the global collaboration layer. The toggle, export, and share functions all read from this map. No re-generation. No latency on interaction.
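A minimal shape for that map, assuming two generated versions keyed by role; the types and field names are illustrative:

```typescript
// Both versions are written once at generation time. Toggle, export,
// and share all read from this map with no further API calls.
interface BriefVersion { title: string; problem: string /* ...other fields */ }

interface Versions {
  english: BriefVersion;  // always present: the collaboration layer
  native?: BriefVersion;  // present only for non-English thinkers
}

function toggled(versions: Versions, showNative: boolean): BriefVersion {
  // Falls back to English when no native version exists, which is
  // exactly the English-user experience: no visible toggle at all.
  return showNative && versions.native ? versions.native : versions.english;
}
```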
Challenges solved
Three specific bugs surfaced and were resolved during development:
- Latin language detection was fragile. Short, jargon-heavy inputs from German or Spanish users scored zero keyword matches — falling through to English-only generation. Fixed by lowering the match threshold to one. A false positive (one foreign word in an English sentence) gives the user an unexpected dual brief — minor. A missed detection gives a non-English user no native version — real cost.
- Stale native version after refine. The refine agent rewrites English fields but not the native version. A Chinese user who refined would see an updated English brief alongside a Chinese panel describing the old scope. Fixed by resetting `versions` to English-only after any refine — the native panel disappears rather than showing contradictory content.
- Toggle discoverability. Non-English users landing in the result view saw the English brief with no clear signal that a native version existed. Fixed with a single inline hint that dismisses on first toggle interaction.
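The stale-version fix reduces to dropping every key except English after a refine pass. A one-function sketch (the map shape is a hypothetical simplification):

```typescript
// After a refine pass only the English fields are current, so every other
// version is dropped rather than shown stale. Names are illustrative.
type VersionsMap = Record<string, object>;

function resetAfterRefine(versions: VersionsMap): VersionsMap {
  // English is always present, so this never empties the map.
  return { english: versions.english };
}
```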
For English users, the entire language system is invisible. No toggle, no export options they didn't ask for, no UI change. That's the design.
Evolution Engine
A product that cannot detect its own problems relies entirely on human bandwidth to improve. The Evolution Engine removes that dependency for the class of problems that are structurally detectable — and builds memory so they never need to be detected again.
Five-layer architecture

Without the engine, improvement depends entirely on human bandwidth:

- Issues discovered reactively — user complaints, support tickets
- Improvements depend on human prioritization
- Feedback collected but not systematically learned from
- Same problems repeat across features
- No persistent memory of past issues or resolutions

The Evolution Engine replaces that with five layers:

1. Continuously scans across structured domains
2. Detects issues before users report them
3. Automatically prioritizes — P0 / P1 / P2
4. Auto-fixes high-confidence issues without manual intervention
5. SCAN-MEMORY.json — every resolution persists across cycles
Layers 1–3 operate within each cycle. Layers 4–5 operate across cycles. That boundary — the persistence layer — is what makes the system self-evolving rather than just automated.
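One way to picture that boundary: the within-cycle layers as a pure scan pass, and the persistence layer as memory carried between passes. Everything below is a hypothetical sketch, including the shape standing in for SCAN-MEMORY.json:

```typescript
// A single scan cycle: skip already-learned issues, auto-fix
// high-confidence P0s and persist the resolution, surface the rest.
type Priority = "P0" | "P1" | "P2";

interface Issue { id: string; priority: Priority; confidence: number }
interface Memory { resolved: Set<string> } // stands in for SCAN-MEMORY.json

function runScanCycle(found: Issue[], memory: Memory): Issue[] {
  const open: Issue[] = [];
  for (const issue of found) {
    if (memory.resolved.has(issue.id)) continue; // persistence: already learned
    if (issue.priority === "P0" && issue.confidence > 0.9) {
      memory.resolved.add(issue.id);             // auto-fix, then persist
    } else {
      open.push(issue);                          // surfaced for the next cycle
    }
  }
  return open;
}
```

Because `memory` outlives the call, a second cycle over the same findings no longer re-discovers the auto-fixed issue — the compounding behavior the section describes.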
System Architecture — End to End
The three components are not modular features that could exist independently. Each one depends on the others.
| Stage | What happens | Component responsible |
|---|---|---|
| Input | User submits an idea in any language | — |
| Language detection | Client-side detection fires before the API call — identifies thinking language, sets generation mode | Language System |
| Brief generation | AI generates a structured brief in the thinking language, then translates and optimizes to English — both stored in versions map | Decision Engine + Language System |
| Quality scoring | Brief scored 0–100 across five dimensions — readiness signal computed | Decision Engine |
| Refinement | If score is below threshold, Refine agent identifies weak fields and rewrites them — versions map reset to English-only after refine | Decision Engine + Language System |
| Export | User exports English, native, or bilingual markdown — reads from versions map, no API call | Language System |
| Scan | Evolution Engine scans product, classifies issues, auto-fixes P0s, writes resolutions to SCAN-MEMORY | Evolution Engine |
| Learning | Patterns extracted from recurring issues — scan logic refined for next cycle | Evolution Engine |
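The staged table above is essentially a pipeline. A compressed sketch of the generation-side flow, with the stage functions injected so the loop's shape is visible; every name is hypothetical:

```typescript
// End-to-end flow mirroring the table rows from input through refinement.
// Dependencies are injected; all names are illustrative.
interface Deps {
  detect: (idea: string) => string;                             // Language System
  generate: (idea: string, lang: string) => Record<string, string>; // dual output
  score: (englishBrief: string) => number;                      // Decision Engine
  refine: (englishBrief: string) => string;                     // Refine agent
}

function runPipeline(idea: string, deps: Deps) {
  const lang = deps.detect(idea);            // fires before any API call
  let versions = deps.generate(idea, lang);  // both versions stored at once
  let quality = deps.score(versions.english); // scored against English
  if (quality < 80) {
    // Refine rewrites English only, so the map resets to English-only.
    versions = { english: deps.refine(versions.english) };
    quality = deps.score(versions.english);
  }
  return { versions, quality };
}
```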
Key Design Decisions
The decisions that shaped the architecture — and why each one was made the way it was.
| Decision | Why this approach | What was rejected |
|---|---|---|
| English is canonical, not the thinking default | English as the collaboration layer makes sharing, scoring, and search consistent across all users. It does not mean English is better — it means English is the agreed handshake between the user's thinking and the world. | Native-language canonical: would make cross-user quality scoring and search inconsistent |
| Dual output (generate in native + translate) instead of translate-after | Generating in the thinking language first preserves the specificity of the original idea. Translating a brief generates a genuinely different document from translating an English brief into the native language. | Generate in English, translate to native: produces a native-language brief that is structurally thinner — the nuance was stripped at input time |
| versions map — both outputs stored on generation | Toggle, export, and share must be instant. Any architecture that requires re-generation on toggle adds 10–30 seconds of latency to a cognitive task. | On-demand translation: technically simpler but introduces per-interaction latency |
| Reset versions to English-only after refine | The refine agent updates English fields only. A stale native panel that contradicts the current English brief is worse than no native panel — it shows wrong information confidently. | Keep native version after refine: safe-looking but silently incorrect |
| P0/P1/P2 automatic classification in Evolution Engine | Prioritization should be a system output, not a planning meeting. The classification model makes high-impact issues automatically visible without requiring human triage at every cycle. | Manual prioritization: makes iteration speed a function of individual bandwidth, not system capability |
| Persistent memory (SCAN-MEMORY.json) across cycles | Without memory, the system re-discovers the same issues every cycle. Memory is the structural prerequisite for self-evolution — not an optimization. | Stateless scanning: faster to implement but produces no compounding improvement |
Impact & What This Demonstrates
Three concrete improvements — one for each component of the system.
What this demonstrates
This project explores what it means to design AI systems that don't just generate outputs — but improve the quality of the decisions those outputs are based on. The architecture required solving three interlocking problems simultaneously:
- Structure without rigidity — the Decision Engine must be specific enough to produce actionable output but flexible enough to handle genuinely novel ideas
- Language without assumption — the Language System must serve users who think in Chinese, German, or Arabic as naturally as it serves English speakers — without requiring configuration
- Learning without configuration — the Evolution Engine must improve by operating, not by being trained or reconfigured — the system learns from its own behavior
The same design principles that govern this system — authority boundaries, feedback loops, persistent memory, and structured output — connect directly to the AI Decision Architecture framework documented elsewhere in this portfolio. Both require the same architecture to function reliably over time.
Products should not rely on humans alone to improve. They should detect their own problems, learn from their behavior, and serve users regardless of what language they think in. This is what it means to design systems that don't just work — but evolve.
What's next
Three directions for V2, grounded in architectural gaps rather than feature requests:
- Org-level language preference — teams could set a default collaboration language; the versions map already supports this; it's a question of which key is canonical
- Per-language quality scoring — the current score runs against the English brief; a brief that scores 85 in English may have lost nuance in translation; scoring native versions independently would surface that gap
- Event-driven scan triggers — currently manual; V2 fires scans on deploys, feedback spikes, or pattern anomalies — moving from scheduled detection to responsive detection