Add pf_spec.md — application specification

Covers architecture, data model, API routes, SQL patterns, and UI design. Includes baseline workbench design with multi-segment additive loads, filter role for col_meta, and date offset for projecting actuals into the forecast period.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

# Pivot Forecast — Application Spec

## Overview

A web application for building named forecast scenarios against any PostgreSQL table. The core workflow is: load known historical actuals as a baseline, shift those dates forward by a specified interval into the forecast period to establish a no-change starting point, then apply incremental adjustments (scale, recode, clone) to build the plan. An admin configures a source table, generates a baseline, and opens it for users to make adjustments. Users interact with a pivot table to select slices of data and apply forecast operations. All changes are incremental (append-only), fully audited, and reversible.

---

## Tech Stack

- **Backend:** Node.js / Express
- **Database:** PostgreSQL — isolated `pf` schema, installs into any existing DB
- **Frontend:** Vanilla JS + AG Grid (pivot mode)
- **Pattern:** Follows fc_webapp (shell) + pivot_forecast (operations)

---

## Database Schema: `pf`

Everything lives in the `pf` schema. Install via sequential SQL scripts.

### `pf.source`

Registered source tables available for forecasting.

```sql
CREATE TABLE pf.source (
  id serial PRIMARY KEY,
  schema text NOT NULL,
  tname text NOT NULL,
  label text,                    -- friendly display name
  status text DEFAULT 'active',  -- active | archived
  created_at timestamptz DEFAULT now(),
  created_by text,
  UNIQUE (schema, tname)
);
```

### `pf.col_meta`

Column configuration for each registered source table. Determines how the app treats each column.

```sql
CREATE TABLE pf.col_meta (
  id serial PRIMARY KEY,
  source_id integer REFERENCES pf.source(id),
  cname text NOT NULL,           -- column name in source table
  label text,                    -- friendly display name
  role text NOT NULL,            -- 'dimension' | 'value' | 'units' | 'date' | 'filter' | 'ignore'
  is_key boolean DEFAULT false,  -- true = part of natural key (used in WHERE slice)
  opos integer,                  -- ordinal position (for ordering)
  UNIQUE (source_id, cname)
);
```

**Roles:**

- `dimension` — categorical field (customer, part, channel, rep, geography, etc.) — appears as pivot rows/cols, used in WHERE filters for operations
- `value` — the money/revenue field to scale
- `units` — the quantity field to scale
- `date` — the primary date field; used for baseline/reference date range and stored in the forecast table
- `filter` — columns available as filter conditions in the Baseline Workbench (e.g. order status, ship date, open flag); used in baseline WHERE clauses but **not stored** in the forecast table
- `ignore` — exclude from forecast table entirely

### `pf.version`

Named forecast scenarios. One forecast table (`pf.fc_{tname}_{version_id}`) is created per version.

```sql
CREATE TABLE pf.version (
  id serial PRIMARY KEY,
  source_id integer REFERENCES pf.source(id),
  name text NOT NULL,
  description text,
  status text DEFAULT 'open',                  -- open | closed
  exclude_iters jsonb DEFAULT '["reference"]', -- iter values excluded from all operations
  created_at timestamptz DEFAULT now(),
  created_by text,
  closed_at timestamptz,
  closed_by text,
  UNIQUE (source_id, name)
);
```

**`exclude_iters`:** jsonb array of `iter` values that are excluded from operation WHERE clauses. Defaults to `["reference"]`. Reference rows are still returned by `get_data` (visible in pivot) but are never touched by scale/recode/clone. Additional iters can be added to lock them from further adjustment.

**Forecast table naming:** `pf.fc_{tname}_{version_id}` — e.g., `pf.fc_sales_3`. One table per version, physically isolated. Contains both operational rows and reference rows.

Creating a version → `CREATE TABLE pf.fc_{tname}_{version_id} (...)`
Deleting a version → `DROP TABLE pf.fc_{tname}_{version_id}` + delete from `pf.version` + delete from `pf.log`

### `pf.log`

Audit log. Every write operation gets one entry here.

```sql
CREATE TABLE pf.log (
  id bigserial PRIMARY KEY,
  version_id integer REFERENCES pf.version(id),
  pf_user text NOT NULL,
  stamp timestamptz DEFAULT now(),
  operation text NOT NULL, -- 'baseline' | 'reference' | 'scale' | 'recode' | 'clone'
  slice jsonb,             -- the WHERE conditions that defined the selection
  params jsonb,            -- operation parameters (increments, new values, scale factor, etc.)
  note text                -- user-provided comment
);
```

### `pf.fc_{tname}_{version_id}` (dynamic, one per version)

Created when a version is created. Mirrors source table dimension/value/units/date columns plus forecast metadata. Contains both operational rows (`iter = 'baseline' | 'scale' | 'recode' | 'clone'`) and reference rows (`iter = 'reference'`).

```sql
-- Example: source table "sales", version id 3 → pf.fc_sales_3
CREATE TABLE pf.fc_sales_3 (
  id bigserial PRIMARY KEY,

  -- mirrored from source (role = dimension | value | units | date only):
  customer text,
  channel text,
  part text,
  geography text,
  order_date date,
  units numeric,
  value numeric,

  -- forecast metadata:
  iter text,  -- 'baseline' | 'reference' | 'scale' | 'recode' | 'clone'
  logid bigint REFERENCES pf.log(id),
  pf_user text,
  created_at timestamptz DEFAULT now()
);
```

Note: no `version_id` column on the forecast table — it's implied by the table itself.
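Version creation derives this DDL from col_meta. A minimal sketch of the generation step (the `fcTableDdl` helper and its `colTypes` argument are illustrative, not part of the spec):

```javascript
// Build the CREATE TABLE statement for pf.fc_{tname}_{version_id} from
// col_meta rows. Columns with role 'filter' or 'ignore' are not mirrored.
// colTypes maps source column name -> Postgres type (from the catalog).
function fcTableDdl(tname, versionId, colMeta, colTypes) {
  const mirrored = colMeta
    .filter(c => ['dimension', 'value', 'units', 'date'].includes(c.role))
    .sort((a, b) => (a.opos ?? 0) - (b.opos ?? 0))
    .map(c => `  ${c.cname} ${colTypes[c.cname]},`);
  return [
    `CREATE TABLE pf.fc_${tname}_${versionId} (`,
    '  id bigserial PRIMARY KEY,',
    ...mirrored,
    '  iter text,',
    '  logid bigint REFERENCES pf.log(id),',
    '  pf_user text,',
    '  created_at timestamptz DEFAULT now()',
    ');',
  ].join('\n');
}
```

Note how a `filter` column drops out of the DDL entirely, matching the "not stored" rule above.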

### `pf.sql`

Generated SQL stored per source and operation. Built once when col_meta is finalized, fetched at request time.

```sql
CREATE TABLE pf.sql (
  id serial PRIMARY KEY,
  source_id integer REFERENCES pf.source(id),
  operation text NOT NULL, -- 'baseline' | 'reference' | 'scale' | 'recode' | 'clone' | 'get_data' | 'undo'
  sql text NOT NULL,
  generated_at timestamptz DEFAULT now(),
  UNIQUE (source_id, operation)
);
```

**Column names are baked in at generation time.** Runtime substitution tokens:

| Token | Resolved from |
|-------|--------------|
| `{{fc_table}}` | `pf.fc_{tname}_{version_id}` — derived at request time |
| `{{where_clause}}` | built from `slice` JSON by `build_where()` in JS |
| `{{exclude_clause}}` | built from `version.exclude_iters` — e.g. `AND iter NOT IN ('reference')` |
| `{{logid}}` | newly inserted `pf.log` id |
| `{{pf_user}}` | from request body |
| `{{date_from}}` / `{{date_to}}` | baseline/reference date range (source period) |
| `{{date_offset}}` | PostgreSQL interval string to shift dates into the forecast period — e.g. `1 year`, `6 months`, `2 years 3 months` (baseline only; defaults to `0 days`, i.e. no shift) |
| `{{value_incr}}` / `{{units_incr}}` | scale operation increments |
| `{{pct}}` | scale mode: absolute or percentage |
| `{{set_clause}}` | recode/clone dimension overrides |
| `{{scale_factor}}` | clone multiplier |

**Request-time flow:**

1. Fetch SQL from `pf.sql` for `source_id` + `operation`
2. Fetch `version.exclude_iters`, build `{{exclude_clause}}`
3. Build `{{where_clause}}` from `slice` JSON via `build_where()`
4. Substitute all tokens
5. Execute — single round trip
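Steps 1–5 reduce to a single template substitution before execution. A minimal sketch (the `substituteTokens` name is illustrative):

```javascript
// Replace every {{token}} in a stored SQL template with its request-time
// value. Unknown tokens are left intact so a missing value fails loudly in
// Postgres rather than silently producing plausible-looking SQL.
function substituteTokens(sqlTemplate, tokens) {
  return sqlTemplate.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in tokens ? String(tokens[name]) : match);
}
```

For example, substituting `{ fc_table: 'pf.fc_sales_3', logid: 42 }` into the undo template yields `DELETE FROM pf.fc_sales_3 WHERE logid = 42`.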

**WHERE clause safety:** `build_where()` validates every key in the slice against col_meta (only `role = 'dimension'` columns are permitted). Values are sanitized by doubling single quotes before embedding. No parameterization — consistent with existing projects, and the final SQL stays debuggable in Postgres logs.
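A minimal sketch of that validation (assuming col_meta rows shaped as `{ cname, role }`; the function name is illustrative):

```javascript
// Sketch of build_where(): only role = 'dimension' columns from col_meta may
// appear as slice keys; values have single quotes doubled before embedding.
function buildWhere(slice, colMeta) {
  const dims = new Set(
    colMeta.filter(c => c.role === 'dimension').map(c => c.cname));
  const conds = Object.entries(slice).map(([col, val]) => {
    if (!dims.has(col)) throw new Error(`not a dimension column: ${col}`);
    return `${col} = '${String(val).replace(/'/g, "''")}'`;
  });
  return conds.length ? conds.join(' AND ') : 'TRUE';
}
```

Rejecting unknown keys outright (rather than skipping them) means a typo in a slice can never silently widen the selection.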

---

## Setup / Install Scripts

```
setup_sql/
  01_schema.sql   -- CREATE SCHEMA pf; create all metadata tables (source, col_meta, version, log, sql)
```

Source registration, col_meta configuration, SQL generation, version creation, and forecast table DDL all happen via API.

---

## API Routes

### DB Browser

| Method | Route | Description |
|--------|-------|-------------|
| GET | `/api/tables` | List all tables in the DB with row counts |
| GET | `/api/tables/:schema/:tname/preview` | Preview columns + sample rows |

### Source Management

| Method | Route | Description |
|--------|-------|-------------|
| GET | `/api/sources` | List registered sources |
| POST | `/api/sources` | Register a source table |
| GET | `/api/sources/:id/cols` | Get col_meta for a source |
| PUT | `/api/sources/:id/cols` | Save col_meta configuration |
| POST | `/api/sources/:id/generate-sql` | Generate/regenerate all operation SQL into `pf.sql` |
| GET | `/api/sources/:id/sql` | View generated SQL for a source (inspection/debug) |
| DELETE | `/api/sources/:id` | Deregister a source (does not affect existing forecast tables) |

### Forecast Versions

| Method | Route | Description |
|--------|-------|-------------|
| GET | `/api/sources/:id/versions` | List versions for a source |
| POST | `/api/sources/:id/versions` | Create a new version (CREATE TABLE for forecast table) |
| PUT | `/api/versions/:id` | Update version (name, description, exclude_iters) |
| POST | `/api/versions/:id/close` | Close a version (blocks further edits) |
| POST | `/api/versions/:id/reopen` | Reopen a closed version |
| DELETE | `/api/versions/:id` | Delete a version (DROP TABLE + delete log entries) |

### Baseline & Reference Data

| Method | Route | Description |
|--------|-------|-------------|
| POST | `/api/versions/:id/baseline` | Load one baseline segment (additive — does not clear existing baseline rows) |
| DELETE | `/api/versions/:id/baseline` | Clear all baseline rows and baseline log entries for this version |
| POST | `/api/versions/:id/reference` | Load reference rows from source table for a date range (additive) |

**Baseline load request body:**

```json
{
  "date_from": "2024-01-01",
  "date_to": "2024-12-31",
  "date_col": "order_date",
  "date_offset": "1 year",
  "filters": [
    { "col": "order_status", "op": "IN", "values": ["OPEN", "PENDING"] },
    { "col": "ship_date", "op": "BETWEEN", "values": ["2025-04-01", "2025-05-31"] }
  ],
  "pf_user": "admin",
  "note": "Open orders regardless of order date",
  "replay": false
}
```

- `date_from` / `date_to` — optional when `filters` constrain the result sufficiently; if omitted the date range clause is skipped
- `date_col` — which date column to apply the range to; defaults to the source's primary `role = 'date'` column; can be any `role = 'date'` or `role = 'filter'` column with a date type
- `date_offset` — PostgreSQL interval string applied to the primary `role = 'date'` column when inserting (not to filter columns). Examples: `"1 year"`, `"6 months"`, `"2 years 3 months"`. Defaults to `"0 days"`.
- `filters` — zero or more additional filter conditions. Each has:
  - `col` — must be `role = 'filter'` or `role = 'date'` in col_meta
  - `op` — one of `=`, `!=`, `IN`, `NOT IN`, `BETWEEN`, `IS NULL`, `IS NOT NULL`
  - `values` — array of strings; two elements for `BETWEEN`, multiple for `IN`/`NOT IN`, omitted for `IS NULL`/`IS NOT NULL`
- Baseline loads are **additive** — existing `iter = 'baseline'` rows are not touched. Each load is its own log entry and is independently undoable.
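The `filters` array translates mechanically into the segment's extra WHERE conditions. A hedged sketch of that translation (the `buildFilterClause` name is illustrative):

```javascript
// Build the AND-conditions for a baseline segment's `filters` array.
// Each column must carry role 'filter' or 'date' in col_meta; values are
// single-quote-escaped before embedding, and only whitelisted operators pass.
const OPS = ['=', '!=', 'IN', 'NOT IN', 'BETWEEN', 'IS NULL', 'IS NOT NULL'];

function buildFilterClause(filters, colMeta) {
  const allowed = new Set(colMeta
    .filter(c => c.role === 'filter' || c.role === 'date')
    .map(c => c.cname));
  const q = v => `'${String(v).replace(/'/g, "''")}'`;
  return filters.map(({ col, op, values }) => {
    if (!allowed.has(col)) throw new Error(`not a filter/date column: ${col}`);
    if (!OPS.includes(op)) throw new Error(`bad operator: ${op}`);
    if (op === 'IS NULL' || op === 'IS NOT NULL') return `AND ${col} ${op}`;
    if (op === 'BETWEEN')
      return `AND ${col} BETWEEN ${q(values[0])} AND ${q(values[1])}`;
    if (op === 'IN' || op === 'NOT IN')
      return `AND ${col} ${op} (${values.map(q).join(', ')})`;
    return `AND ${col} ${op} ${q(values[0])}`;
  }).join('\n');
}
```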

`replay` controls behavior when incremental rows exist (applies to Clear + reload, not individual segments):

- `replay: false` (default) — after clearing, re-load baseline segments, leave incremental rows untouched
- `replay: true` — after clearing, re-load baseline, then re-execute each incremental log entry in chronological order

**v1 note:** `replay: true` returns `501 Not Implemented` until the replay engine is built.

**Clear baseline (`DELETE /api/versions/:id/baseline`)** — deletes all rows where `iter = 'baseline'` and all `operation = 'baseline'` log entries. Irreversible (no undo). Returns `{ rows_deleted, log_entries_deleted }`.

**Reference request body:** same shape as the baseline load body without `replay`. Reference dates land verbatim (no offset). Additive — multiple reference loads stack independently, each undoable by logid.

### Forecast Data

| Method | Route | Description |
|--------|-------|-------------|
| GET | `/api/versions/:id/data` | Return all rows for this version (all iters including reference) |

Returns a flat array. AG Grid pivot runs client-side on this data.

### Forecast Operations

All operations share a common request envelope:

```json
{
  "pf_user": "paul.trowbridge",
  "note": "optional comment",
  "slice": {
    "channel": "WHS",
    "geography": "WEST"
  }
}
```

`slice` keys must be `role = 'dimension'` columns per col_meta. Stored in `pf.log` as the implicit link to affected rows.

#### Scale

`POST /api/versions/:id/scale`

```json
{
  "pf_user": "paul.trowbridge",
  "note": "10% volume lift Q3 West",
  "slice": { "channel": "WHS", "geography": "WEST" },
  "value_incr": null,
  "units_incr": 5000,
  "pct": false
}
```

- `value_incr` / `units_incr` — absolute amounts to add (positive or negative). Either can be null.
- `pct: true` — treat as percentage of current slice total instead of absolute
- Excludes `exclude_iters` rows from the source selection
- Distributes the increment proportionally across rows in the slice
- Inserts rows tagged `iter = 'scale'`
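The proportional distribution can be checked with plain numbers — a sketch of the arithmetic only, not the actual route code (the real work happens in the scale SQL pattern):

```javascript
// Distribute an absolute increment across slice rows in proportion to each
// row's share of the slice total (here for units; value works the same way).
function distribute(rows, unitsIncr) {
  const total = rows.reduce((s, r) => s + r.units, 0);
  return rows.map(r => ({
    ...r,
    units: total === 0 ? 0 :
      Math.round((r.units / total) * unitsIncr * 1e5) / 1e5, // 5 decimal places
  }));
}
```

A slice with rows of 3,000 and 1,000 units receiving a +5,000 increment produces scale rows of 3,750 and 1,250 — the new rows sum to the increment, and the existing rows are never modified.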

#### Recode

`POST /api/versions/:id/recode`

```json
{
  "pf_user": "paul.trowbridge",
  "note": "Part discontinued, replaced by new SKU",
  "slice": { "part": "OLD-SKU-001" },
  "set": { "part": "NEW-SKU-002" }
}
```

- `set` — one or more dimension fields to replace (can swap multiple at once)
- Inserts negative rows to zero out the original slice
- Inserts positive rows with replaced dimension values
- Both sets of rows share the same `logid` — undone together
- Inserts rows tagged `iter = 'recode'`
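The negative/positive pairing can be sketched in plain JS (illustrative only — the real work happens in the recode SQL pattern):

```javascript
// Recode expands each slice row into a negating row (zeroing the original
// coding) and a re-coded positive row; both carry the same logid so a single
// undo removes the whole operation.
function recodeRows(srcRows, set, logid) {
  const out = [];
  for (const r of srcRows) {
    out.push({ ...r, value: -r.value, units: -r.units, iter: 'recode', logid });
    out.push({ ...r, ...set, iter: 'recode', logid });
  }
  return out;
}
```

Net effect per dimension combination: the old coding sums to zero, the new coding carries the original amounts.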

#### Clone

`POST /api/versions/:id/clone`

```json
{
  "pf_user": "paul.trowbridge",
  "note": "New customer win, similar profile to existing",
  "slice": { "customer": "EXISTING CO", "channel": "DIR" },
  "set": { "customer": "NEW CO" },
  "scale": 0.75
}
```

- `set` — dimension values to override on cloned rows
- `scale` — optional multiplier on value/units (default 1.0)
- Does not offset the original slice — cloned rows are added alongside it
- Inserts rows tagged `iter = 'clone'`

### Audit & Undo

| Method | Route | Description |
|--------|-------|-------------|
| GET | `/api/versions/:id/log` | List all log entries for a version, newest first |
| DELETE | `/api/log/:logid` | Undo: delete all forecast rows with this logid, then delete the log entry |

---

## Frontend (Web UI)

### Navigation (sidebar)

1. **Sources** — browse DB tables, register sources, configure col_meta, generate SQL
2. **Versions** — list forecast versions per source, create/close/reopen/delete
3. **Baseline** — baseline workbench for the selected version
4. **Forecast** — main working view (pivot + operation panel)
5. **Log** — change history with undo

### Sources View

- Left: DB table browser (like fc_webapp) — all tables with row counts, preview on click
- Right: registered sources list — click to open the col_meta editor
- Col_meta editor: AG Grid editable table — set role per column, toggle is_key, set label
- "Generate SQL" button — triggers the generate-sql route, shows confirmation
- SQL must be generated before versions can be created against the source

### Versions View

- List of versions for the selected source — name, status (open/closed), created date, row count
- Create version form — name, description, exclude_iters (defaults to `["reference"]`)
- Per-version actions: open forecast, load baseline, load reference, close, reopen, delete

**Load Baseline modal:**

- Source date range (date_from / date_to) — the actuals period to pull from
- Date offset (years + months spinners) — how far forward to project the dates
- Before/after preview: left side shows source months, right side shows where they land after the offset
- Note field
- On submit: shows row count; grid reloads
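The before/after month preview is a pure date computation. A sketch under the assumption that the preview works at month granularity (the `monthsPreview` name is illustrative):

```javascript
// List each source month in [dateFrom, dateTo] alongside the month it lands
// on after applying the offset (years + months). Dates are ISO strings;
// output pairs are [sourceMonth, projectedMonth] in YYYY-MM form.
function monthsPreview(dateFrom, dateTo, offsetYears, offsetMonths) {
  const pairs = [];
  const d = new Date(dateFrom + 'T00:00:00Z');
  const end = new Date(dateTo + 'T00:00:00Z');
  d.setUTCDate(1); // walk month by month; day-of-month is irrelevant here
  const fmt = x => x.toISOString().slice(0, 7);
  while (d <= end) {
    const t = new Date(d);
    t.setUTCMonth(t.getUTCMonth() + offsetYears * 12 + offsetMonths);
    pairs.push([fmt(d), fmt(t)]);
    d.setUTCMonth(d.getUTCMonth() + 1);
  }
  return pairs;
}
```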

**Load Reference modal:**

- Source date range only — no offset
- Month chip preview of the period being loaded
- Note field

### Baseline Workbench

A dedicated view for constructing the baseline for the selected version. The baseline is built from one or more **segments** — each segment is an independent query against the source table that appends rows to `iter = 'baseline'`. Segments are additive; clearing is explicit.

**Layout:**

```
┌─────────────────────────────────────────────────────────────┐
│ Baseline — [Version name]                  [Clear Baseline] │
├─────────────────────────────────────────────────────────────┤
│ Segments loaded (from log):                                 │
│  ┌──────┬────────────────┬──────────┬───────┬──────────┐    │
│  │ ID   │ Description    │ Rows     │ By    │ [Undo]   │    │
│  └──────┴────────────────┴──────────┴───────┴──────────┘    │
├─────────────────────────────────────────────────────────────┤
│ Add Segment                                                 │
│                                                             │
│ Description  [_______________________________________]      │
│                                                             │
│ Date range   [date_from] to [date_to] on [date col ▾]       │
│ Date offset  [0] years  [0] months                          │
│                                                             │
│ Additional filters:                                         │
│  [ + Add filter ]                                           │
│  ┌──────────────────┬──────────┬──────────────┬───────┐     │
│  │ Column           │ Op       │ Value(s)     │ [ x ] │     │
│  └──────────────────┴──────────┴──────────────┴───────┘     │
│                                                             │
│ Preview: [projected month chips]                            │
│                                                             │
│ Note [___________]                         [Load Segment]   │
└─────────────────────────────────────────────────────────────┘
```

**Segments list** — shows all `operation = 'baseline'` log entries for this version, newest first. Each has an Undo button. Undo removes only that segment's rows (by logid), leaving other segments intact.

**Clear Baseline** — deletes ALL `iter = 'baseline'` rows and all `operation = 'baseline'` log entries for this version. Prompts for confirmation. Used when starting over from scratch.

**Add Segment form:**

- **Description** — free text label stored as the log `note`, shown in the segments list
- **Date range** — date_from / date_to, applied to the selected date column (defaults to the primary `role = 'date'` column, but can be any `role = 'filter'` column with a date type)
- **Date offset** — years + months spinners; shifts loaded dates into the forecast period
- **Additional filters** — zero or more filter conditions, each specifying:
  - Column — any `role = 'filter'` column
  - Operator — `=`, `!=`, `IN`, `NOT IN`, `BETWEEN`, `IS NULL`, `IS NOT NULL`
  - Value(s) — text input; for `IN`/`NOT IN` a comma-separated list; for `BETWEEN` two inputs
- **Preview** — once dates and offset are set, shows source months → projected months (same as the baseline modal)
- **Load Segment** — submits the segment; appends rows, does not clear existing baseline rows

**Example — three-segment baseline:**

| # | Description | Date col | Range | Filters | Offset |
|---|-------------|----------|-------|---------|--------|
| 1 | All orders taken 6/1/25–3/31/26 | order_date | 6/1/25–3/31/26 | — | 0 |
| 2 | All open/unshipped orders | order_date | (none — omit date filter) | status IN (OPEN, PENDING) | 0 |
| 3 | Prior year book-and-ship 4/1/25–5/31/25 | order_date | 4/1/25–5/31/25 | ship_date BETWEEN 4/1/25 AND 5/31/25 | 0 |

Note: segment 2 omits the date range entirely — date_from/date_to are optional when additional filters are present. The SQL omits the date BETWEEN clause if no dates are provided.

### Forecast View

**Layout:**

```
┌──────────────────────────────────────────────────────────┐
│ [Source: sales] [Version: FY2024 v1 — open]  [Refresh]   │
├────────────────────────┬─────────────────────────────────┤
│                        │                                 │
│  Pivot Grid            │  Operation Panel                │
│  (AG Grid pivot mode)  │  (active when slice selected)   │
│                        │                                 │
│                        │  Slice:                         │
│                        │   channel = WHS                 │
│                        │   geography = WEST              │
│                        │                                 │
│                        │  [ Scale ] [ Recode ] [ Clone ] │
│                        │                                 │
│                        │  ... operation form ...         │
│                        │                                 │
│                        │  [ Submit ]                     │
│                        │                                 │
└────────────────────────┴─────────────────────────────────┘
```

**Interaction flow:**

1. Select cells in the pivot — selected dimension values populate the Operation Panel as the slice
2. Pick an operation tab, fill in parameters
3. Submit → POST to API → response shows rows affected
4. Grid refreshes (re-fetch `get_data`)

**Reference rows** are shown in the pivot (for context) but visually distinguished (e.g., muted color). Operations never affect them.

### Log View

AG Grid list of log entries — user, timestamp, operation, slice, note, rows affected.
"Undo" button per row → `DELETE /api/log/:logid` → grid and pivot refresh.

---

## Forecast SQL Patterns

Column names are baked in at generation time. Tokens are substituted at request time.

### Baseline Load (one segment)

```sql
WITH ilog AS (
  INSERT INTO pf.log (version_id, pf_user, operation, slice, params, note)
  VALUES ({{version_id}}, '{{pf_user}}', 'baseline', NULL, '{{params}}'::jsonb, '{{note}}')
  RETURNING id
)
INSERT INTO {{fc_table}} (
  {dimension_cols}, {value_col}, {units_col}, {date_col},
  iter, logid, pf_user, created_at
)
SELECT
  {dimension_cols}, {value_col}, {units_col},
  ({date_col} + '{{date_offset}}'::interval)::date,
  'baseline', (SELECT id FROM ilog), '{{pf_user}}', now()
FROM
  {schema}.{tname}
WHERE
  {{date_range_clause}}
  {{filter_clause}}
```

Baseline loads are **additive** — no DELETE before INSERT. Each segment appends independently.

Token details:

- `{{date_offset}}` — PostgreSQL interval string (e.g. `1 year`); defaults to `0 days`; applied only to the primary `role = 'date'` column on insert
- `{{date_range_clause}}` — built from `date_from`/`date_to`/`date_col` by the route; omitted entirely (replaced with `TRUE`) if no dates are provided
- `{{filter_clause}}` — zero or more `AND` conditions built from the `filters` array; each validated against col_meta (column must be `role = 'filter'` or `role = 'date'`); operators: `=`, `!=`, `IN`, `NOT IN`, `BETWEEN`, `IS NULL`, `IS NOT NULL`

Both clauses are built at request time (not baked into stored SQL) since they vary per segment load.

### Clear Baseline

Two queries, run in a transaction:

```sql
DELETE FROM {{fc_table}} WHERE iter = 'baseline';
DELETE FROM pf.log WHERE version_id = {{version_id}} AND operation = 'baseline';
```

### Reference Load

```sql
WITH ilog AS (
  INSERT INTO pf.log (version_id, pf_user, operation, slice, params, note)
  VALUES ({{version_id}}, '{{pf_user}}', 'reference', NULL, '{{params}}'::jsonb, '{{note}}')
  RETURNING id
)
INSERT INTO {{fc_table}} (
  {dimension_cols}, {value_col}, {units_col}, {date_col},
  iter, logid, pf_user, created_at
)
SELECT
  {dimension_cols}, {value_col}, {units_col}, {date_col},
  'reference', (SELECT id FROM ilog), '{{pf_user}}', now()
FROM
  {schema}.{tname}
WHERE
  {date_col} BETWEEN '{{date_from}}' AND '{{date_to}}'
```

No date offset — reference rows land at their original dates for prior-period comparison.

### Scale

```sql
WITH ilog AS (
  INSERT INTO pf.log (version_id, pf_user, operation, slice, params, note)
  VALUES ({{version_id}}, '{{pf_user}}', 'scale', '{{slice}}'::jsonb, '{{params}}'::jsonb, '{{note}}')
  RETURNING id
)
,base AS (
  SELECT
    {dimension_cols}, {date_col},
    {value_col}, {units_col},
    sum({value_col}) OVER () AS total_value,
    sum({units_col}) OVER () AS total_units
  FROM {{fc_table}}
  WHERE {{where_clause}}
  {{exclude_clause}}
)
INSERT INTO {{fc_table}} (
  {dimension_cols}, {date_col}, {value_col}, {units_col},
  iter, logid, pf_user, created_at
)
SELECT
  {dimension_cols}, {date_col},
  round(({value_col} / NULLIF(total_value, 0)) * COALESCE({{value_incr}}, 0), 2),
  round(({units_col} / NULLIF(total_units, 0)) * COALESCE({{units_incr}}, 0), 5),
  'scale', (SELECT id FROM ilog), '{{pf_user}}', now()
FROM base
```

The `COALESCE` guards the nullable increments — a null `value_incr` or `units_incr` (meaning "no change to that measure") inserts `0` for that column rather than NULL.

`{{value_incr}}` / `{{units_incr}}` are pre-computed in JS when `pct: true` (multiply slice total by pct).
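A sketch of that pre-computation (assuming `pct` increments are given in percentage points, e.g. `10` = 10% — the spec does not pin this down, so treat the unit as an assumption):

```javascript
// Convert the request's increments into absolute token values. With pct: true
// each non-null increment is taken as a percentage of the slice total; with
// pct: false it passes through unchanged. Null means "no change" and is left
// null for the substitution layer to handle.
function resolveIncrements({ value_incr, units_incr, pct }, totals) {
  const abs = (incr, total) =>
    incr == null ? null : pct ? total * (incr / 100) : incr;
  return {
    value_incr: abs(value_incr, totals.value),
    units_incr: abs(units_incr, totals.units),
  };
}
```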

### Recode

```sql
WITH ilog AS (
  INSERT INTO pf.log (version_id, pf_user, operation, slice, params, note)
  VALUES ({{version_id}}, '{{pf_user}}', 'recode', '{{slice}}'::jsonb, '{{params}}'::jsonb, '{{note}}')
  RETURNING id
)
,src AS (
  SELECT {dimension_cols}, {date_col}, {value_col}, {units_col}
  FROM {{fc_table}}
  WHERE {{where_clause}}
  {{exclude_clause}}
)
,negatives AS (
  INSERT INTO {{fc_table}} ({dimension_cols}, {date_col}, {value_col}, {units_col}, iter, logid, pf_user, created_at)
  SELECT {dimension_cols}, {date_col}, -{value_col}, -{units_col}, 'recode', (SELECT id FROM ilog), '{{pf_user}}', now()
  FROM src
)
INSERT INTO {{fc_table}} ({dimension_cols}, {date_col}, {value_col}, {units_col}, iter, logid, pf_user, created_at)
SELECT {{set_clause}}, {date_col}, {value_col}, {units_col}, 'recode', (SELECT id FROM ilog), '{{pf_user}}', now()
FROM src
```

`{{set_clause}}` replaces the listed dimension columns with new values and passes the others through unchanged.

### Clone

```sql
WITH ilog AS (
  INSERT INTO pf.log (version_id, pf_user, operation, slice, params, note)
  VALUES ({{version_id}}, '{{pf_user}}', 'clone', '{{slice}}'::jsonb, '{{params}}'::jsonb, '{{note}}')
  RETURNING id
)
INSERT INTO {{fc_table}} ({dimension_cols}, {date_col}, {value_col}, {units_col}, iter, logid, pf_user, created_at)
SELECT
  {{set_clause}}, {date_col},
  round({value_col} * {{scale_factor}}, 2),
  round({units_col} * {{scale_factor}}, 5),
  'clone', (SELECT id FROM ilog), '{{pf_user}}', now()
FROM {{fc_table}}
WHERE {{where_clause}}
{{exclude_clause}}
```

### Undo

```sql
DELETE FROM {{fc_table}} WHERE logid = {{logid}};
DELETE FROM pf.log WHERE id = {{logid}};
```

---

## Admin Setup Flow (end-to-end)

1. Open **Sources** view → browse DB tables → register a source table
2. Open the col_meta editor → assign roles to columns (`dimension`, `value`, `units`, `date`, `filter`, `ignore`), mark is_key dimensions, set labels
3. Click **Generate SQL** → app writes operation SQL to `pf.sql`
4. Open **Versions** view → create a named version (sets `exclude_iters`, creates the forecast table)
5. Open the **Baseline Workbench** → build the baseline from one or more segments:
   - Each segment specifies a date range (on any date/filter column), a date offset, and optional additional filter conditions
   - Add segments until the baseline is complete; each is independently undoable
   - Use "Clear Baseline" to start over if needed
6. Optionally load **Reference** → pick a prior-period date range → inserts `iter = 'reference'` rows at their original dates (for comparison in the pivot)
7. Open the **Forecast** view → share with users

## User Forecast Flow (end-to-end)

1. Open the **Forecast** view → select a version
2. Pivot loads — explore the data, identify a slice to adjust
3. Select cells → Operation Panel populates with the slice
4. Choose an operation → fill in parameters → Submit
5. Grid refreshes — the adjustment is visible immediately
6. Repeat as needed
7. Admin closes the version when forecasting is complete

---

## Open Questions / Future Scope

- **Baseline replay** — re-execute the change log against a restated baseline (`replay: true`); v1 returns 501
- **Approval workflow** — user submits, admin approves before changes are visible to others (deferred)
- **Territory filtering** — restrict what a user can see/edit by dimension value (deferred)
- **Export** — download the forecast as CSV or push results to a reporting table
- **Version comparison** — side-by-side view of two versions (facilitated by isolated tables via UNION)
- **Multi-DB sources** — currently assumes the same DB; cross-DB would need connection config per source