Annotates each item with the design choice or open question. Notes where existing spec coverage already addresses items 3 and 4 (structured filter groups and raw_where escape hatch) and where the remaining work is wiring vs. greenfield.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
-
should be able to edit and revise forecast segments that constitute baseline or reference. if you edit, maybe a warning that your forecast values won't mean a lot, and have an option to delete them.
Notes: A baseline/reference segment is a `pf.log` row plus the forecast rows it produced (joined by `pf_logid`). Editing has the shape of a delete-then-replay: drop the rows by `pf_logid`, drop the log entry, re-run the segment with the new params (offset, filter, iter type), insert the new log entry. New endpoint: `PUT /versions/:id/baseline/:logid` (and the same for reference). UI: an Edit button on each segment in Baseline view, populating the form with the original params. Cascade warning: if any scale/recode/clone log entries exist after this segment was added, those operations were calibrated against the old totals and will no longer reconcile cleanly. Show a banner like "3 forecast operations applied after this segment may be invalidated. View / Delete / Continue." Probably want a CASCADE option that deletes downstream forecast entries too, plus a plain "edit only" option for the user who knows what they're doing.
Implementation order: API + cascade detection first (compare pf.log.stamp ordering); UI second.
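The cascade detection could be sketched roughly as below. The entry shapes (`pf_logid`, `stamp`, `action`) and the action names are assumptions based on the notes above, not the actual schema; the real check lives against pf.log.

```javascript
// Sketch of cascade detection for the edit-segment endpoint (hypothetical
// shapes: `entries` is pf.log rows for one version, each
// { pf_logid, stamp, action }; stamp must sort in insertion order).
function findInvalidatedDownstream(entries, editedLogId) {
  const edited = entries.find((e) => e.pf_logid === editedLogId);
  if (!edited) throw new Error(`log entry ${editedLogId} not found`);
  // Only forecast operations calibrated against prior totals can be
  // invalidated; later baseline/reference inserts are independent.
  const FORECAST_OPS = new Set(["scale", "recode", "clone"]);
  return entries.filter(
    (e) => e.stamp > edited.stamp && FORECAST_OPS.has(e.action)
  );
}
```

The returned list drives the warning banner count, and with the CASCADE option it doubles as the delete set.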
-
be able to copy an existing forecast and its segments to adjust some parameters without having to start from scratch.
Notes: A version is the unit of copy. Need a `POST /versions/:id/copy` endpoint that creates a new pf.version row with the same source/col_meta, creates the new fc__ table via the same DDL path, and replays each pf.log entry's INSERT against the new table (preserving stamp ordering). Each log entry gets re-inserted pointing at the new version_id; the new pf_logid feeds the row inserts. Notes/users come along. UI: "Copy" button next to each version in Baseline. Copy modal asks for a new name and optional description, then runs the API call (likely 5–30s for a 350k-row version since every segment is re-evaluated). Show progress.
Two design questions worth deciding up front:
- Copy as-of-now (re-fetch source data, so freshly-arrived rows show up in baseline)? Or freeze (replay from existing forecast rows, i.e. clone the forecast table directly)? Different semantics, different SQL — pick one before building.
- Should the copy track its origin? A `parent_version_id` column on pf.version makes "show me variants of FY2026 Plan" easy.
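The replay loop for the copy endpoint might look like the sketch below. The `db.query` client, `insertLogEntry`, and `replaySegment` helpers are stand-ins for whatever the real route code uses; only the stamp-ordered loop structure comes from the notes above.

```javascript
// Sketch of the replay loop behind a hypothetical POST /versions/:id/copy.
// `db.query` is assumed to be a promise-returning client (node-postgres
// style); `insertLogEntry` and `replaySegment` are injected stand-ins.
async function copyVersion(db, sourceVersionId, newVersionId, { insertLogEntry, replaySegment }) {
  // Replay in stamp order so downstream scale/recode/clone operations see
  // the same totals they were originally calibrated against.
  const { rows: logEntries } = await db.query(
    "SELECT * FROM pf.log WHERE version_id = $1 ORDER BY stamp ASC",
    [sourceVersionId]
  );
  for (const entry of logEntries) {
    // Re-insert the log entry against the new version; the fresh pf_logid
    // then feeds the forecast-row inserts for this segment.
    const newLogId = await insertLogEntry(db, { ...entry, version_id: newVersionId });
    await replaySegment(db, newVersionId, newLogId, entry);
  }
  return logEntries.length;
}
```

Whether `replaySegment` re-evaluates against fresh source data or clones existing forecast rows is exactly the as-of-now vs. freeze question above; the loop shape is the same either way.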
-
need the list of filters to have an and/or specification
Notes: Spec already covers this in pf_spec.md:245: `filters` is an array of groups; conditions within a group are AND-ed, groups are OR-ed. Backend has `buildFilterClause` in lib/sql_generator.js:247 but it's not wired into the routes (baseline currently takes a raw `where_clause`). Wiring + UI is the remaining work. UI: each group is a card with a header ("Group 1", "Group 2 — OR"), rows of column / operator / values, a "+ Add condition" link, and a "+ Add OR group" button at the bottom. The Baseline view already has a single-group filter builder; extend it to wrap the current rows in a group container and allow adding more groups.
-
the filters should have the option to just write the WHERE clause SQL
Notes: Spec covers this too (pf_spec.md:251, :454) as the `raw_where` admin-only escape hatch. The current baseline endpoint already takes `where_clause` as a raw string, so the API is effectively in "raw only" mode today; it's the structured side that's missing. Two things to add:
- Once structured `filters` is wired in, gate `raw_where` behind an admin check (`pf_user` in an admin list, which needs admin-list config) and reject with 400 if both are sent.
- UI toggle: a "Switch to manual SQL" link in the Baseline filter builder swaps the structured rows for a `<textarea>`; warning banner: "Raw SQL is not validated. You are responsible for correctness and security."
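The group semantics (AND within a group, OR across groups) reduce to a small parameterized builder, sketched below. This is not the actual `buildFilterClause` in lib/sql_generator.js; the condition shape (`{ column, op, value }`) and `$n` placeholder style are assumptions for illustration.

```javascript
// Rough sketch of group-based WHERE building (assumed shapes, not the real
// buildFilterClause signature): each group is an array of
// { column, op, value }; conditions AND within a group, groups OR together.
// Values become $n placeholders so user input never lands in the SQL string.
// `op` and `column` should still be validated against an allow-list before
// this point, since they are interpolated directly.
function buildWhere(groups) {
  const params = [];
  const groupSql = groups.map((conds) => {
    const parts = conds.map(({ column, op, value }) => {
      params.push(value);
      return `"${column}" ${op} $${params.length}`;
    });
    return `(${parts.join(" AND ")})`;
  });
  return { where: groupSql.join(" OR "), params };
}
```

This also makes the mutual-exclusion check cheap: if `raw_where` is present alongside a non-empty `groups` array, reject with 400 before building anything.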
-
load status bar is super jittery and the numbers wildly change
Notes: `setLoadProgress` fires per chunk in the body-stream reader (Forecast.jsx:155–161). On localhost or fast connections the reader yields chunks in tight bursts and React re-renders the overlay text on each; the visible bytes value flickers because paints land out of order with respect to setState batching. Fix: throttle to ~10 updates/sec. Either check `if (now - lastUpdate > 100)` before calling `setLoadProgress`, or accumulate received bytes into a ref and flush on `requestAnimationFrame`. Five-line change.
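The timestamp variant could look like the sketch below. The wrapper function and its parameters are illustrative, not the actual Forecast.jsx code; `now` is injectable only to make the throttle testable.

```javascript
// Sketch of the timestamp throttle around the existing setLoadProgress
// setter (assumed to be a React state setter). At most one state update
// per minIntervalMs; the first call always passes.
function makeThrottledProgress(setLoadProgress, now = Date.now, minIntervalMs = 100) {
  let lastUpdate = -Infinity;
  return (receivedBytes) => {
    const t = now();
    if (t - lastUpdate >= minIntervalMs) {
      lastUpdate = t;
      setLoadProgress(receivedBytes); // ~10 renders/sec at the default 100ms
    }
    // Caller should invoke setLoadProgress once more when the stream
    // finishes, so the final byte count isn't swallowed by the throttle.
  };
}
```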
-
default layout for the pivot should be sales_usd group by pending_rep, split by pf_iter
Notes: Default-layout logic lives in `initViewer` (Forecast.jsx ~line 240) and currently picks `group_by` = first 2 dimensions, `split_by` = date column. `sales_usd`/`pending_rep` are source-specific, so hardcoding them in the view would break for any other source. Two paths:
- Quick: hardcode for the current source. Cheap, but rots the moment a second source comes along.
- Right: store a default-layout config on pf.source (e.g., a JSON `default_view` column with `{ group_by, split_by, columns }`) and let initViewer read it. Setup view gets a small "Default pivot" editor: pick a value column, group_by columns, split_by column.

Suggest the right version since the spec already implies per-source customization in col_meta, and you're going to want this for any future source you register. The col_meta path is even tighter: extend col_meta with role flags like `default_value`, `default_group`, `default_split` that initViewer reads.
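The col_meta path could resolve to something like the sketch below. The role flags and the `type` values (`"dim"`, `"date"`) are the proposed extension, not existing columns, and the fallback branch mirrors the current first-2-dimensions heuristic described above.

```javascript
// Sketch of default-layout resolution from col_meta (assumed shape: array of
// { name, type, default_value, default_group, default_split }; the role
// flags are the proposed extension). Falls back to the current heuristic
// (first two dimensions grouped, date column as split) when no flags are set.
function resolveDefaultLayout(colMeta) {
  const value = colMeta.find((c) => c.default_value)?.name;
  const groupBy = colMeta.filter((c) => c.default_group).map((c) => c.name);
  const splitBy = colMeta.find((c) => c.default_split)?.name;
  if (value && groupBy.length) {
    return { value, group_by: groupBy, split_by: splitBy ?? null };
  }
  // Fallback: today's initViewer behavior.
  const dims = colMeta.filter((c) => c.type === "dim").map((c) => c.name);
  const date = colMeta.find((c) => c.type === "date")?.name ?? null;
  return { value: null, group_by: dims.slice(0, 2), split_by: date };
}
```

Keeping the fallback in one place means registering a new source works immediately and only gets nicer once someone sets the flags.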