Overview

Roadmap Radar

An AI-powered signal intelligence layer that ingests customer feedback from Canny, deduplicates and scores grouped requests, and generates sprint-ready product artifacts — PRDs, story-pointed tasks, and Cursor implementation plans.

The system has two independent layers: an analysis pipeline that runs on-demand to process Canny boards, and a web application that serves the results, manages pipeline jobs, and triggers the AI agent chain on any grouped request.

🔍
Signal Intelligence

Collapses 1,194 raw posts into 548 grouped requests. Detects 134 already-shipped features automatically.

🤖
Agent Pipeline

4-agent chain generates PRD → tasks → spec → Cursor build plan in under 2 minutes from any group.

🔒
Developer Control

AI writes the plan, not the code. The plans repo is isolated behind a fine-grained PAT — zero product codebase access.


How It Works

The Analysis Pipeline

Every analysis run executes nine sequential stages. Runs are enqueued in Redis โ€” one job processes at a time. Each stage checks a cancellation flag, so jobs can be stopped cleanly at any stage boundary.
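The stage loop described above can be sketched as follows. Stage names match the pipeline; the flag store and function signatures are illustrative, not the production code:

```python
# Illustrative sketch: the runner checks a cancellation flag between
# stages, so a cancel request takes effect at the next stage boundary.
STAGES = ["ingest", "summarize", "embed", "deduplicate", "extract_insights",
          "summarize_changelogs", "detect_completed", "classify_uiux", "merge"]

def run_pipeline(job_id, stage_fns, is_cancelled):
    """Run stages in order; stop cleanly if is_cancelled(job_id) turns true."""
    for name in STAGES:
        if is_cancelled(job_id):      # checked at every stage boundary
            return "cancelled"
        stage_fns[name]()             # execute the stage to completion
    return "completed"
```

Because the flag is only consulted between stages, cancellation never interrupts a stage mid-write, which is what makes the stop "clean".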

Pipeline Stages

01
Ingest

Fetch posts, comments, and changelogs from the Canny API. Normalize vote scores to a 0–10 range. Output is stored per-board in an isolated subdirectory.
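The vote normalization might look like this minimal min-max sketch; the exact scaling formula is an assumption, since the source only specifies a 0–10 output range:

```python
def normalize_votes(vote_counts):
    """Min-max scale raw Canny vote counts onto 0-10, one board at a time.
    (Illustrative: the production scaling formula may differ.)"""
    lo, hi = min(vote_counts), max(vote_counts)
    if hi == lo:                  # all posts tied: give everyone the midpoint
        return [5.0 for _ in vote_counts]
    return [round(10 * (v - lo) / (hi - lo), 2) for v in vote_counts]
```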

02
Summarize

GPT-4o generates a structured summary per post that incorporates the title, description, and all community comments. Captures the true request intent even when the original description is sparse.
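A hypothetical message builder for this stage shows how title, description, and all comments feed a single summary call. The prompt wording below is invented for illustration, not the production prompt:

```python
def build_summary_messages(post):
    """Assemble chat messages for the per-post summary call.
    `post` is a dict with title, optional description, and comments."""
    comments = "\n".join(f"- {c}" for c in post.get("comments", [])) or "(none)"
    user = (f"Title: {post['title']}\n"
            f"Description: {post.get('description') or '(sparse)'}\n"
            f"Comments:\n{comments}\n\n"
            "Summarize the true underlying request in 2-3 sentences.")
    return [{"role": "system", "content": "You summarize product feedback posts."},
            {"role": "user", "content": user}]
```

Including the comment thread is what lets the summary recover intent when the original description is sparse.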

03
Embed

OpenAI text-embedding-3-large produces a 3072-dimension vector for each summary. Embeddings are persisted to disk to avoid re-computation on re-runs.
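The persist-to-disk behavior can be sketched as a content-addressed cache. Here `embed_fn` stands in for the OpenAI embedding call, and the one-file-per-vector layout is an assumption:

```python
import hashlib
import json
from pathlib import Path

def embed_with_cache(texts, embed_fn, cache_dir):
    """Return one vector per text, computing only what isn't already on disk.
    embed_fn stands in for the text-embedding-3-large API call."""
    cache_dir = Path(cache_dir)
    cache_dir.mkdir(parents=True, exist_ok=True)
    vectors = []
    for text in texts:
        key = hashlib.sha256(text.encode()).hexdigest()
        path = cache_dir / f"{key}.json"
        if path.exists():                     # re-run: reuse persisted vector
            vectors.append(json.loads(path.read_text()))
        else:
            vec = embed_fn(text)              # only pay for new summaries
            path.write_text(json.dumps(vec))
            vectors.append(vec)
    return vectors
```

Keying on a hash of the summary text means an unchanged post costs nothing on a re-run, while an edited summary is re-embedded automatically.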

04
Deduplicate

Hybrid search (FAISS cosine similarity + BM25 term matching) retrieves candidate pairs. Reciprocal Rank Fusion merges both result sets. GPT-4o makes the final deduplication decision on each candidate pair.

05
Extract Insights

GPT-4o generates per-post structured intelligence: customer pain points, customer needs, an impact score (0–10), impact reasoning, and a business impact assessment.

06
Summarize Changelogs

GPT-4o summarizes each published changelog entry filtered to the relevant product label. Changelog summaries are used as the reference corpus for completion detection.

07
Detect Completed Features

Hybrid search matches changelog summaries against all open requests. GPT-4o confirms which requests have already shipped, flagging them as completed.

08
Classify UI/UX Wins

GPT-4o evaluates each request to identify quick UI/UX improvements — small changes with high user impact that can be shipped in days, not sprints.

09
Merge

Combines groups, insights, completion flags, and UI/UX classifications into the final scored output (groups_with_insights.json), ranked by aggregate impact score.

Deduplication Engine

The hybrid approach is the core technical differentiator. Neither semantic search nor keyword matching is sufficient on its own — the combination with LLM verification is meaningfully more accurate than either method alone.

Step 1 — Retrieve
FAISS + BM25

Vector cosine similarity finds semantically similar posts. BM25 term matching finds lexically similar posts. Both candidate lists are combined with Reciprocal Rank Fusion.
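Reciprocal Rank Fusion itself is only a few lines. This sketch assumes both inputs are best-first lists of post IDs and uses the conventional constant k=60:

```python
def rrf_fuse(faiss_ranked, bm25_ranked, k=60, top_n=10):
    """Reciprocal Rank Fusion: score(id) = sum over lists of 1 / (k + rank).
    Both inputs are best-first lists of post IDs; k=60 is the usual constant."""
    scores = {}
    for ranked in (faiss_ranked, bm25_ranked):
        for rank, post_id in enumerate(ranked, start=1):
            scores[post_id] = scores.get(post_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

A post that appears in both result lists accumulates two reciprocal-rank terms, so agreement between semantic and lexical retrieval pushes it to the top of the candidate list.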

Step 2 — Verify
GPT-4o

Each candidate pair is passed to GPT-4o for final confirmation. A pair is only marked as duplicate if the model agrees the two posts describe the same underlying need.

Step 3 — Merge
Group + Score

Confirmed duplicates are merged into a single group. Vote counts, pain points, and needs are aggregated across all posts in the group. Impact score is recalculated on the merged set.
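The merge step can be sketched as below; the vote-weighted mean used to recalculate impact is an illustrative choice, not necessarily the production formula:

```python
def merge_group(posts):
    """Merge confirmed-duplicate posts into one scored group.
    Votes are summed, pain points and needs deduplicated in first-seen
    order, and impact recalculated as a vote-weighted mean (assumption)."""
    votes = sum(p["votes"] for p in posts)
    pains, needs = [], []
    for p in posts:
        pains += [x for x in p["pain_points"] if x not in pains]
        needs += [x for x in p["needs"] if x not in needs]
    impact = sum(p["impact"] * p["votes"] for p in posts) / max(votes, 1)
    return {"votes": votes, "pain_points": pains, "needs": needs,
            "impact": round(impact, 2)}
```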


Agent Pipeline

From Group to Sprint-Ready Task

Triggered on-demand from any group detail page. Runs a four-agent chain that produces a full PRD, story-pointed sub-tasks, a feature spec, and a Cursor-executable implementation plan — all in under two minutes. Results are persisted and auto-loaded on every subsequent visit.

4-Agent Chain

| Agent | Model | Role | Timing |
|---|---|---|---|
| #1 | gpt-4o | Generates a full PRD from the group's aggregated pain points, customer needs, and impact data. Includes problem statement, proposed solution, success metrics, and open questions. | ~15s |
| #2 | gpt-4o-mini | Breaks the PRD into dev / QA / design sub-tasks with Fibonacci story point estimates (1–13). Uses the smaller model intentionally — this is structured extraction, not reasoning. | < 3s |
| #3 | gpt-4o | Generates a feature spec file per dev sub-task. Agents 3 and 4 run in parallel via asyncio — wall-clock time is ~15s regardless of task count. | ~15s ‖ |
| #4 | gpt-5.4 | Generates a Cursor-executable implementation plan per dev task. Includes TDD steps, exact file paths, test commands, and Cursor skill invocations. Pushed to the isolated plans repo on GitHub. | ~15s ‖ |

‖ = runs in parallel with the other agent
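The parallelism of Agents 3 and 4 comes down to a single asyncio.gather over per-task coroutines. The function names here are stand-ins for the real agent calls:

```python
import asyncio

async def run_spec_and_plan(dev_tasks, gen_spec, gen_plan):
    """Fan out Agents 3 and 4 across every dev sub-task concurrently,
    so wall-clock time tracks one LLM call, not the task count.
    gen_spec / gen_plan are stand-ins for the per-task agent coroutines."""
    jobs = [gen_spec(t) for t in dev_tasks] + [gen_plan(t) for t in dev_tasks]
    results = await asyncio.gather(*jobs)   # preserves submission order
    n = len(dev_tasks)
    return {"specs": results[:n], "plans": results[n:]}
```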

Outputs

ClickUp Epic
  • ✓ Full PRD as Epic description
  • ✓ Dev / QA / Design sub-tasks with [labels]
  • ✓ Fibonacci story points embedded per sub-task
  • ✓ QA card auto-moved to "ready for qa"
  • ✓ Scoped to per-board list in "Roadmap Radar" space
GitHub Build Plans
  • ✓ One .md plan per dev sub-task
  • ✓ TDD steps with exact file paths and test commands
  • ✓ Cursor skill invocations included
  • ✓ Pushed in parallel via asyncio
  • ✓ Plans repo has zero access to the product codebase
Developer Workflow
  • ✓ ClickUp sub-task gets a comment with the GitHub plan link
  • ✓ Developer opens the plan in Cursor on their local checkout
  • ✓ Developer reviews, adapts, and executes
  • ✓ Developer owns the commit
  • ✓ AI never touches the product codebase

Architecture

System Design

System Overview

The system is composed of a Next.js frontend, a FastAPI backend, Redis for job state and queue management, and filesystem-backed data storage. External integrations (ClickUp, GitHub) are called only from the agent pipeline, never from the analysis pipeline.

┌──────────────────────────────────────────────────────────┐
│  Next.js Frontend (localhost:3000 / Vercel)              │
│  Dashboard · Groups · Group Detail · Jobs · Login        │
└───────────────────────┬──────────────────────────────────┘
                        │ HTTP
┌───────────────────────▼──────────────────────────────────┐
│  FastAPI Backend (localhost:8000 / Cloud Run)            │
│                                                          │
│  /api/jobs       Job CRUD, cancel, cleanup               │
│  /api/groups     Search, filter, paginate                │
│  /api/boards     Live Canny board fetch                  │
│  /api/agents     PRD stream, result, config              │
│  /api/auth       Google OAuth / demo mode                │
└──────────┬───────────────────────┬───────────────────────┘
           │                       │
┌──────────▼──────┐   ┌────────────▼────────────────────┐
│  Redis          │   │  Filesystem                     │
│  job hashes     │   │  data/<board>/groups_with_...   │
│  job queue      │   │  data/prd_results/<group_id>/   │
│  pipeline lock  │   └─────────────────────────────────┘
└─────────────────┘
                        │ Agent pipeline only
          ┌─────────────┼──────────────┐
          ▼             ▼              ▼
     ClickUp API    GitHub Plans    OpenAI API
     Epic + tasks   Cursor plans    GPT-4o / mini

Job State Machine

Every pipeline run is a job in Redis. The state machine ensures exactly one job runs at a time, cancellation is cooperative, and stale locks are cleared on server restart.

pending

Created, waiting for the queue lock

queued

Lock held by another job; will process next

processing

Lock acquired; pipeline stages executing

completed

All 9 stages succeeded; lock released

cancelling

Cancel requested; pipeline checks flag at stage boundary

cancelled

Stopped cleanly; lock released

failed

Unhandled error; lock released; job marked failed

stale

Detected on startup; auto-marked failed; lock cleared
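The legal moves between these states can be encoded as a small transition table. The exact edge set below is inferred from the state descriptions above, not taken from the source code:

```python
# Allowed transitions for the job lifecycle (inferred, illustrative).
# 'stale' is not a stored state: a job found stuck in processing on
# startup is moved straight to failed by the recovery sweep.
TRANSITIONS = {
    "pending":    {"queued", "processing"},
    "queued":     {"processing", "cancelling"},
    "processing": {"completed", "cancelling", "failed"},
    "cancelling": {"cancelled", "failed"},
    "completed":  set(),   # terminal
    "cancelled":  set(),   # terminal
    "failed":     set(),   # terminal
}

def transition(state, target):
    """Move a job to the target state, rejecting illegal jumps."""
    if target not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {target}")
    return target
```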

ℹ Automatic recovery: On startup, the backend scans Redis for any job stuck in the processing state, marks it failed, and releases the queue lock. A manual endpoint, POST /api/jobs/cleanup, does the same without a restart.
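Once the Redis job hashes are read into memory, the recovery sweep reduces to a pure function. This sketch uses a plain dict as a stand-in for the Redis store:

```python
def recover_stale_jobs(jobs, release_lock):
    """Startup sweep: any job still marked 'processing' from before the
    restart is stale -- mark it failed and release the queue lock.
    `jobs` is an in-memory stand-in for the Redis job hashes."""
    recovered = []
    for job_id, job in jobs.items():
        if job["status"] == "processing":
            job["status"] = "failed"
            job["error"] = "stale: server restarted mid-run"
            recovered.append(job_id)
    if recovered:
        release_lock()     # lock holder is gone; let the queue move on
    return recovered
```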

Tech Stack

Technologies

| Layer | Technologies |
|---|---|
| Analysis Pipeline | Python 3.11, OpenAI GPT-4o, text-embedding-3-large, FAISS, rank_bm25 |
| Agent Pipeline | GPT-4o (PRD + spec), gpt-4o-mini (task breakdown), gpt-5.4 (Cursor plans), httpx async, asyncio for parallelism |
| Backend API | FastAPI, Redis (async via aioredis), Pydantic v2, Google OAuth2 (google-auth) |
| Frontend | Next.js 16, TypeScript, Tailwind CSS v4, NextAuth.js, Inter typeface |
| External Integrations | ClickUp REST API v2, GitHub REST API v3 (plans repo, fine-grained PAT) |
| Infrastructure (local) | Docker Compose — backend, frontend, Redis all containerized |
| Infrastructure (prod) | GCP Cloud Run (backend), Cloud Memorystore (Redis), GCS (artifacts), Artifact Registry, Vercel (frontend) |

Configuration

Environment Variables

All configuration lives in backend/.env. The system starts in demo and mock mode with no required env vars — explore the UI and pre-computed data without any API keys.

Core Variables

| Variable | Default | Description |
|---|---|---|
| DEMO_MODE | true | Accepts any request as demo@gohighlevel.com. Set to false and configure Google OAuth for production. |
| MOCK_PIPELINE | true | Simulates all 9 pipeline stages with realistic delays and no API calls. Safe for demos. |
| OPENAI_API_KEY | — | Required for live pipeline runs and the agent pipeline. Not needed in mock mode. |
| CANNY_API_KEY | — | Required for live pipeline runs to fetch posts, comments, and changelogs. |
| CANNY_BOARD_ID | — | Optional fallback board ID used when no board is specified in the job trigger. |
| REDIS_URL | redis://redis:6379 | Redis connection URL. Uses the Docker Compose service name by default. |
| GOOGLE_CLIENT_ID | — | Required when DEMO_MODE=false. Google OAuth client ID restricted to @gohighlevel.com. |
| GOOGLE_CLIENT_SECRET | — | Required when DEMO_MODE=false. |

Agent Model Overrides

Each agent in the chain has an independently configurable model. Defaults are shown below.

# ClickUp — required for Live mode
CLICKUP_API_TOKEN=pk_...

# GitHub — optional, independent of ClickUp
# Fine-grained PAT: Contents Read+Write on plans repo only
GITHUB_TOKEN=github_pat_...
GITHUB_PLANS_REPO=your-org/roadmap-radar-plans

# Model overrides (optional — defaults shown)
AGENT_PRD_MODEL=gpt-4o
AGENT_TASKS_MODEL=gpt-4o-mini
AGENT_SPEC_MODEL=gpt-4o
AGENT_CODE_MODEL=gpt-5.4

Quick Start

Running Locally

The entire stack — backend, frontend, and Redis — runs via Docker Compose. No env vars are required to start. Pre-computed analysis results from the Automations, Voice AI, and Call Tracking boards are included.

Start everything

docker compose up --build
| Service | URL |
|---|---|
| Frontend | http://localhost:3000 |
| Backend API | http://localhost:8000 |
| API Docs (Swagger) | http://localhost:8000/docs |

Enable live pipeline

To run the real pipeline against your Canny board, add to backend/.env:

MOCK_PIPELINE=false
CANNY_API_KEY=your-canny-api-key
OPENAI_API_KEY=your-openai-api-key
CANNY_BOARD_ID=your-default-board-id   # optional fallback

Then restart: docker compose restart backend

✓ Use Mock mode in the Jobs page to demo the full UI — all 9 stages simulate with realistic timing and no API calls. Select Mock when creating a new job.

Deployment

Deploying to Production

Production targets: GCP Cloud Run (backend), GCS (pipeline artifacts), Cloud Memorystore (Redis), Vercel (frontend). GCP infrastructure is already provisioned on staging.

Scripts Reference

All scripts live in scripts/. Each sources scripts/provision.env for configuration. Run config.sh once to generate it.

| Script | Purpose | When |
|---|---|---|
| config.sh | Interactive setup — writes scripts/provision.env with all config values | Once |
| provision.sh | Creates all GCP resources: Artifact Registry, GCS bucket, Memorystore instance, VPC connector | Once per environment |
| build.sh | Builds and pushes the backend Docker image to Artifact Registry | Every release |
| deploy.sh | Deploys backend to Cloud Run and frontend to Vercel | Every release |
| cleanup.sh | Tears down all GCP resources. Supports --keep-data and --keep-images flags | Decommission |

Deployment workflow

# First time only
bash scripts/config.sh
source scripts/provision.env
bash scripts/provision.sh

# Every release
source scripts/provision.env
bash scripts/build.sh
bash scripts/deploy.sh

# Tear down
source scripts/provision.env
bash scripts/cleanup.sh [--keep-data] [--keep-images]

Production Checklist

GCP infrastructure is provisioned. Remaining work is approximately 4–6 dev days plus one admin IAM action.

GCP infrastructure — Done

Artifact Registry, GCS, Memorystore, VPC connector provisioned

Deployment scripts — Done

config.sh → provision.sh → build.sh → deploy.sh workflow

Production server config — Done

Dockerfile runs uvicorn without --reload

GCS artifact storage — 1–2 days

storage.py reads the local filesystem. Needs GCS read/write so output persists across Cloud Run instances.

HTTPS + custom domain — 0.5 days

Cloud Run provides TLS on *.run.app. Custom domain needs DNS config.

Production auth — 1 day

Set DEMO_MODE=false, configure Google OAuth client ID + secret.

Secrets management — 0.5 days

Move API keys from plain env vars to GCP Secret Manager.

IAM permissions — 0 dev days (admin action)

github-actions-sa needs 5 project-level IAM roles. Blocked on org access.

Health checks + observability — 1.5 days

/health endpoint exists. Add readiness probe, Cloud Logging, Sentry.

Rate limiting — 0.5 days

No per-IP or per-user rate limits on the jobs or groups APIs.

CI/CD — 0 dev days

GitHub Actions workflow exists. Blocked on the same IAM action above.


Integrations

External Services

ClickUp

The ClickUp integration creates a full Epic with sub-tasks when the agent pipeline runs in Live mode. The integration auto-discovers the team ID from the API token, creates a "Roadmap Radar" space if one doesn't exist, and scopes all work to a per-board list inside it.

# Required for agent pipeline Live mode
CLICKUP_API_TOKEN=pk_...
| Created | Details |
|---|---|
| Epic | Full PRD as description. Group title as Epic name. |
| Dev sub-tasks | [dev] label, Fibonacci story points, GitHub plan link in comment. |
| QA card | Auto-created and moved to "ready for qa" status. |
| Design tasks | [design] label, story points, created in the same Epic. |

GitHub Plans

Each dev sub-task gets a Cursor-executable plan pushed to an isolated GitHub repository. The token is a fine-grained PAT scoped exclusively to the plans repo — zero access to the product codebase. This is a deliberate architectural choice, not a limitation.

# Fine-grained PAT: Contents Read+Write on plans repo only
GITHUB_TOKEN=github_pat_...
GITHUB_PLANS_REPO=your-org/roadmap-radar-plans
🔒 Security model: The fine-grained PAT has no access to the product codebase. Even if the AI generates a flawed plan, nothing can be committed outside the plans repo. Developers are always the gate between the plan and the code.
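Pushing one plan file uses the GitHub Contents API (`PUT /repos/{repo}/contents/{path}`, with the body content base64-encoded). A sketch that builds the request, leaving authentication with the fine-grained PAT to the caller; the commit message format is an assumption:

```python
import base64

def plan_upload_request(repo, path, markdown, branch="main"):
    """Build the GitHub Contents API request that pushes one plan file.
    Returns method/url/json suitable for an httpx or requests call;
    the Authorization header (the fine-grained PAT) is added by the caller."""
    return {
        "method": "PUT",
        "url": f"https://api.github.com/repos/{repo}/contents/{path}",
        "json": {
            "message": f"Add build plan: {path}",   # commit message (assumed format)
            "content": base64.b64encode(markdown.encode()).decode(),
            "branch": branch,
        },
    }
```

Because the request can only ever name the plans repo, a compromised or confused agent has no route to any other repository.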

Plan structure

Each plan file includes:

# Feature: [Sub-task name]

## Context
Problem statement, user issues, and proposed solution
derived from the PRD and group insights.

## Implementation Plan

### Step 1 — [File path]
TDD approach: write the test first.
```
// test/path/to/spec.test.ts
describe('feature', () => { ... })
```
Command: `npm test path/to/spec`

### Step 2 — [File path]
Implementation guidance with exact file paths.

## Cursor Skills
@skill brainstorming
@skill test-driven-development

Roadmap

Future Enhancements

The architecture is designed to accommodate these additions without structural changes.

Pipeline worker isolation

Move the real pipeline out of the FastAPI thread pool into Celery, ARQ, or a dedicated worker process.

Cross-board deduplication

Detect and merge duplicate requests that appear across multiple boards.

Automatic Canny post merging

Once a group is validated, merge the duplicate posts directly in Canny so demand is consolidated at the source, not just in the internal tool.

Automatic user replies and updates

Draft and send consistent updates back to customers on grouped requests — acknowledgements, status changes, and completion notifications.

Scheduled pipeline runs

Automatically re-run the pipeline on a schedule (e.g., weekly) without manual triggering.

Execution tracking

Track which Cursor plans have been completed by developers, linked back to ClickUp sub-task status.

Sprint allocation

Auto-assign generated sub-tasks to an active sprint in ClickUp based on available capacity.

Additional signal sources

Reddit discussions, YouTube comments, support tickets, Pendo analytics, GitHub issues — each new source needs only an adapter, roughly hours of work to add.

Roadmap Radar · GoHighLevel Internal Tool · Dev Nitro 2026