---
title: Solution Overview
description: "How Portfolio Command (Project Compass) implements the strategy: data ingestion, AI synthesis, scoring, snapshots, UI surfaces, and observability."
status: evolving
lastUpdated: "2026-02-13 07:23 ET (America/New_York)"
owner: Engineering
---

# Solution Overview

## How The Project Solves The Problem
The solution is a pipeline that turns fragmented portfolio evidence into a single, queryable "answer object":
- Collect portfolio evidence (GitHub + docs + user notes)
- Synthesize intent (AI-generated repo goals, aligned to KPIs)
- Score and classify (focus score + health)
- Publish a snapshot (one JSON blob powering the dashboard)
- Make the AI auditable (logs, costs, prompt visibility)
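The "answer object" these steps converge on can be sketched as a TypeScript shape. The field names below are illustrative assumptions, not the actual schema:

```typescript
// Hypothetical sketch of the snapshot "answer object" the pipeline produces.
// Field names are illustrative, not the real schema.
interface RepoSummary {
  name: string;
  focusScore: number; // 0-100, computed at snapshot time
  health: "healthy" | "at_risk" | "critical" | "inactive";
  goals: { title: string; completed: boolean }[];
}

interface PortfolioSnapshot {
  generatedAt: string; // ISO timestamp of the snapshot run
  kpis: { repoCount: number; avgFocusScore: number };
  repos: RepoSummary[];
  aiAnalysis: { summary: string; risks: string[]; recommendations: string[] };
}

// One snapshot object answers most dashboard questions in a single read.
const snapshot: PortfolioSnapshot = {
  generatedAt: new Date().toISOString(),
  kpis: { repoCount: 1, avgFocusScore: 72 },
  repos: [
    { name: "project-compass", focusScore: 72, health: "healthy", goals: [] },
  ],
  aiAnalysis: { summary: "Portfolio on track.", risks: [], recommendations: [] },
};

console.log(snapshot.kpis.avgFocusScore); // 72
```

The point of the shape is that everything the UI needs hangs off one object, which is what makes the snapshot a clean debugging boundary.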
## The Core Loop: Collect -> Analyze -> Display

```mermaid
flowchart LR
    U["User"] -->|"Defines KPI tree"| KPI["user_kpis (Vision -> Objectives -> KRs -> Tasks)"]
    GH["GitHub (issues / PRs / commits / releases)"] --> ING["Ingest"]
    DOC["Repo docs (README, CHANGELOG, ROADMAP, docs/)"] --> ING
    NOTE["Manual logs (notes / tasks / conversations)"] --> ING
    ING --> DB["Postgres tables (activities, repo_docs, manual_logs, goals, ai_logs, snapshots, sync_log)"]
    KPI --> ANA["Analyze (AI goal synthesis)"]
    DB --> ANA
    ANA --> DB
    DB --> SNAP["Snapshot (scoring + portfolio AI)"]
    SNAP --> DB
    DB --> UI["Dashboard + KPI UI + Workflows + AI Ops"]
```
### Step 1: Evidence Collection
Inputs:
- GitHub activity: issues, PRs, commits, releases
- Repository documentation: README, CHANGELOG, ROADMAP, and docs/ content
- Manual logs: user-entered context the system cannot infer from GitHub
Why this matters: the system's downstream outputs (goals, scores, recommendations) are only as good as the evidence it sees.
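One way to picture the collection step is that every source is normalized into a common activity row before it reaches the database. This is a hedged sketch; the type and helper names are assumptions, not the actual ingest code:

```typescript
// Hypothetical normalization step: each evidence source is reduced to a
// common "activity" row before insertion into the activities table.
// Field names are illustrative, not the real schema.
type EvidenceSource = "github" | "repo_doc" | "manual_log";

interface Activity {
  repo: string;
  source: EvidenceSource;
  kind: string; // e.g. "pr_merged", "readme_updated", "note"
  occurredAt: string; // ISO timestamp
  summary: string;
}

function fromGithubPr(repo: string, title: string, mergedAt: string): Activity {
  return { repo, source: "github", kind: "pr_merged", occurredAt: mergedAt, summary: title };
}

function fromManualLog(repo: string, note: string): Activity {
  return { repo, source: "manual_log", kind: "note", occurredAt: new Date().toISOString(), summary: note };
}

const rows: Activity[] = [
  fromGithubPr("project-compass", "Add snapshot endpoint", "2026-02-10T12:00:00Z"),
  fromManualLog("project-compass", "Paused feature work to fix CI"),
];
console.log(rows.length); // 2
```

Normalizing early is what lets the later analysis steps treat GitHub events, doc changes, and manual notes as one evidence stream.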
### Step 2: AI Goal Synthesis (Repo-Level Intent)
For each included repo, the system uses AI to produce 1-5 goals that represent what the work indicates the repo is trying to achieve.
Key properties:
- KPI-aligned: the user's KPI hierarchy is injected into the analysis context so the model can judge alignment and drift.
- Evidence referenced: outputs include references back to issues/PRs/commits/docs.
- Stability guardrails: completed goals are preserved (the system avoids "re-inventing" finished work).
- Structured output: tool calling is used to reliably extract goal JSON.
Deeper reference: docs/development/AI_COMPONENTS.md (Pipeline 1: analyze).
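The structured-output guardrail above can be sketched as a validation layer over the model's tool-call arguments. The tool name, field names, and limits shown here are illustrative assumptions:

```typescript
// Hypothetical sketch of the tool-calling guardrail: the model is asked to
// emit goals via a tool call, and the JSON arguments are validated before
// anything touches the database. Names and limits are illustrative.
interface Goal {
  title: string;
  status: "active" | "completed";
  evidence: string[]; // references back to issues/PRs/commits/docs
}

function parseGoals(toolArgsJson: string): Goal[] {
  const parsed = JSON.parse(toolArgsJson) as { goals?: unknown };
  if (!Array.isArray(parsed.goals)) {
    throw new Error("model did not return a goals array");
  }
  const goals = parsed.goals as Goal[];
  if (goals.length < 1 || goals.length > 5) {
    throw new Error("expected 1-5 goals per repo");
  }
  return goals;
}

// Simulated tool-call arguments from the model:
const goals = parseGoals(
  JSON.stringify({
    goals: [
      { title: "Ship snapshot pipeline", status: "active", evidence: ["issue:123"] },
    ],
  }),
);
console.log(goals[0].title); // "Ship snapshot pipeline"
```

Validating at the boundary means a malformed model response fails loudly instead of writing bad goal rows.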
### Step 3: Focus Score + Health Classification (Repo Triage)
The snapshot step computes a focus score (0-100) that measures discipline and progress quality rather than raw output.
The algorithm combines signals like:
- Goal completion
- PR throughput
- Documentation coverage
- Shipping cadence
- Recency of activity
- Penalties for scope creep and stale work
Repos are also bucketed into healthy / at_risk / critical / inactive to support fast triage.
Deeper reference: docs/development/AI_COMPONENTS.md (Focus score + health classification).
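A minimal sketch of how signals like these could combine into a 0-100 score, assuming a weighted blend minus penalties. The weights, thresholds, and signal names are assumptions, not the actual algorithm:

```typescript
// Hypothetical focus-score sketch: a weighted blend of normalized signals,
// minus penalties, clamped to 0-100. Weights and thresholds are assumptions.
interface FocusSignals {
  goalCompletion: number;    // 0-1
  prThroughput: number;      // 0-1
  docCoverage: number;       // 0-1
  shippingCadence: number;   // 0-1
  recency: number;           // 0-1
  scopeCreepPenalty: number; // points subtracted
  stalenessPenalty: number;  // points subtracted
}

function focusScore(s: FocusSignals): number {
  const base =
    30 * s.goalCompletion +
    20 * s.prThroughput +
    15 * s.docCoverage +
    20 * s.shippingCadence +
    15 * s.recency;
  const penalized = base - s.scopeCreepPenalty - s.stalenessPenalty;
  return Math.max(0, Math.min(100, Math.round(penalized)));
}

// Illustrative health bucketing for triage (cutoffs are assumptions):
function health(score: number, daysSinceActivity: number): string {
  if (daysSinceActivity > 30) return "inactive";
  if (score >= 70) return "healthy";
  if (score >= 40) return "at_risk";
  return "critical";
}

const score = focusScore({
  goalCompletion: 0.8, prThroughput: 0.6, docCoverage: 0.5,
  shippingCadence: 0.7, recency: 0.9, scopeCreepPenalty: 5, stalenessPenalty: 0,
});
console.log(score, health(score, 3)); // 66 at_risk
```

The deterministic shape is the point: the same evidence always yields the same score, which is what makes the score a stable triage signal next to the interpretive AI layer.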
### Step 4: Portfolio Snapshot (One Read, One Answer)
Instead of rendering the dashboard from many live queries, the system publishes a single snapshot JSON blob (stored in the database) that includes:
- Portfolio KPIs (counts, averages)
- Per-repo summaries (goals, scores, breakdowns)
- Recent activity list
- 7-day activity timeline
- Portfolio AI analysis (risks, recommendations, summary)
This "snapshot-driven UI" is a deliberate solution choice:
- Predictable rendering (the UI is only as fresh as the latest snapshot)
- Fast reads (one object powers most screens)
- Clear debugging boundary (if the snapshot is wrong, the pipeline is wrong)
Deeper reference: docs/development/APP_STRUCTURE.md and docs/development/ARCHITECTURE.md.
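The snapshot-driven read path can be sketched as "fetch the latest snapshot row, render everything from it." The table shape and query below are assumptions standing in for the real storage layer:

```typescript
// Hypothetical sketch of the snapshot read path: the UI asks for the newest
// snapshot row and renders all screens from that one payload. The row shape
// stands in for something like:
//   SELECT * FROM snapshots ORDER BY created_at DESC LIMIT 1
interface SnapshotRow {
  id: number;
  createdAt: string; // ISO timestamp
  payload: unknown;  // the full snapshot JSON blob
}

function latestSnapshot(rows: SnapshotRow[]): SnapshotRow | undefined {
  return [...rows].sort((a, b) => b.createdAt.localeCompare(a.createdAt))[0];
}

const stored: SnapshotRow[] = [
  { id: 1, createdAt: "2026-02-12T09:00:00Z", payload: { kpis: { repoCount: 3 } } },
  { id: 2, createdAt: "2026-02-13T09:00:00Z", payload: { kpis: { repoCount: 4 } } },
];
console.log(latestSnapshot(stored)?.id); // 2
```

Because the dashboard never queries live tables directly, staleness is bounded by snapshot cadence and debugging reduces to inspecting one object.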
### Step 5: Portfolio AI Analysis (Cross-Repo Strategy View)
In addition to deterministic scoring, the snapshot step calls AI to summarize the portfolio:
- Risks (severity-tagged, repo-scoped when relevant)
- Recommendations (priority-tagged with rationale)
- A narrative summary tying the portfolio together
This is intentionally separate from the focus score: the score gives stable heuristics, while the AI provides interpretation.
Deeper reference: docs/development/AI_COMPONENTS.md (Pipeline 4: snapshot).
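The severity- and priority-tagged outputs described above suggest a typed contract like the following. The tag values and field names are illustrative assumptions, not the actual schema:

```typescript
// Hypothetical shape of the portfolio-level AI output. Tags and fields are
// illustrative, not the actual contract.
interface Risk {
  severity: "low" | "medium" | "high";
  repo?: string; // repo-scoped when relevant, omitted for portfolio-wide risks
  text: string;
}

interface Recommendation {
  priority: "p1" | "p2" | "p3";
  rationale: string;
  text: string;
}

interface PortfolioAnalysis {
  summary: string;
  risks: Risk[];
  recommendations: Recommendation[];
}

// Example consumer: surface only high-severity risks in an insight panel.
function highSeverity(analysis: PortfolioAnalysis): Risk[] {
  return analysis.risks.filter((r) => r.severity === "high");
}

const analysis: PortfolioAnalysis = {
  summary: "Two repos are drifting from stated KPIs.",
  risks: [
    { severity: "high", repo: "project-compass", text: "Scope creep in active goals" },
    { severity: "low", text: "Docs lag behind releases" },
  ],
  recommendations: [
    { priority: "p1", rationale: "Unblocks triage", text: "Close stale goals" },
  ],
};
console.log(highSeverity(analysis).length); // 1
```

Keeping this output typed and separate from the score preserves the division of labor: heuristics stay deterministic, interpretation stays clearly labeled as AI.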
## Product Surfaces (Where The "Answer" Appears)
- Dashboard: portfolio KPIs, repo health cards, focus breakdown, activity chart/feed, goals board, portfolio insights
- KPIs: strategy definition (tree) plus AI evaluation of progress
- Workflows: pipeline run history and step status (sync log)
- AI Ops: prompt/response visibility, token usage, estimated cost, latency, per-task model config, playground
Deeper reference: docs/features/FEATURES.md.
## Observability and Trust (BYOK-Friendly)
Because users pay for AI calls (BYOK), the system makes AI behavior inspectable:
- Logs every AI call in the ops pipeline, capturing system/user prompts, raw responses, parsed results, tokens, cost, and latency
- Provides a UI to review and rate outputs
- Allows per-task tuning via configuration (provider/model/temperature)
Deeper reference: docs/development/AI_COMPONENTS.md (Telemetry & cost tracking).
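A sketch of what one logged AI call and its cost estimate might look like. The column names and per-token prices are assumptions (BYOK pricing varies by provider and model):

```typescript
// Hypothetical sketch of an ai_logs row plus a cost estimate. Column names
// and per-million-token prices are assumptions, not the real schema.
interface AiLogRow {
  task: string; // e.g. "goal_synthesis", "portfolio_analysis"
  model: string;
  systemPrompt: string;
  userPrompt: string;
  rawResponse: string;
  inputTokens: number;
  outputTokens: number;
  latencyMs: number;
}

// Estimated cost in USD given per-million-token input/output prices.
function estimateCost(row: AiLogRow, inPerMillion: number, outPerMillion: number): number {
  return (row.inputTokens * inPerMillion + row.outputTokens * outPerMillion) / 1_000_000;
}

const row: AiLogRow = {
  task: "goal_synthesis",
  model: "example-model",
  systemPrompt: "(stored verbatim)",
  userPrompt: "(stored verbatim)",
  rawResponse: "(stored verbatim)",
  inputTokens: 4000,
  outputTokens: 1000,
  latencyMs: 2300,
};
console.log(estimateCost(row, 3, 15)); // 0.027
```

Storing prompts and responses verbatim alongside the cost figure is what makes a BYOK bill auditable call by call.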
## Deployment Posture (Current and Target)
- Current baseline: Supabase (DB + Auth + Edge Functions) with a Vite frontend
- Migration target (Project 1001): standalone Fastify API on Render + Neon Postgres, with temporary bearer-token gating and optional Supabase bridge fallback
Deeper reference: docs/deployment/SELF_HOSTING.md, docs/api/standalone-api.md, and docs/api/contracts.md.
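The temporary bearer-token gating mentioned above could look like the following framework-agnostic check (in the real API this would run as a Fastify request hook; the function name and header handling are assumptions):

```typescript
// Hypothetical sketch of a temporary bearer-token gate. In the standalone
// API this logic would sit in a Fastify onRequest hook; here it is reduced
// to a pure function for illustration.
function checkBearer(authHeader: string | undefined, expectedToken: string): boolean {
  if (!authHeader || !authHeader.startsWith("Bearer ")) return false;
  return authHeader.slice("Bearer ".length) === expectedToken;
}

console.log(checkBearer("Bearer s3cret", "s3cret")); // true
console.log(checkBearer("Bearer wrong", "s3cret"));  // false
console.log(checkBearer(undefined, "s3cret"));       // false
```

A production version should use a timing-safe comparison rather than `===`; the gate is explicitly an interim measure until real auth lands.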
## Provenance
- Source file: project-compass/docs/SOLUTION.md
- Source URL: https://github.com/maggielerman/project-compass/blob/main/docs/SOLUTION.md