Track todo with implementation notes

Annotates each item with the design choice or open question. Notes
where existing spec coverage already addresses items 3 and 4
(structured filter groups and raw_where escape hatch) and where the
remaining work is wiring vs. greenfield.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Paul Trowbridge 2026-04-29 01:31:52 -04:00
parent 3a6062d723
commit 74ff97400b

todo.md (new file, 110 lines)

@@ -0,0 +1,110 @@
- [ ] should be able to edit and revise forecast segments that constitute baseline or reference. if you edit, maybe a warning that your forecast values won't mean a lot, and have an option to delete them.
Notes: A baseline/reference segment is a `pf.log` row plus the
forecast rows it produced (joined by pf_logid). Editing has the
shape of a delete-then-replay: drop the rows by pf_logid, drop the
log entry, re-run the segment with the new params (offset, filter,
iter type), insert the new log entry. New endpoint:
`PUT /versions/:id/baseline/:logid` (and the same for reference).
UI: an Edit button on each segment in Baseline view, populating the
form with the original `params`.
Cascade warning: if any scale/recode/clone log entries exist *after*
this segment was added, those operations were calibrated against
the old totals and will no longer reconcile cleanly. Show a banner
like "3 forecast operations applied after this segment may be
invalidated. View / Delete / Continue." Probably want a CASCADE
option that deletes downstream forecast entries too, plus a plain
"edit only" option for the user who knows what they're doing.
Implementation order: API + cascade detection first (compare
pf.log.stamp ordering); UI second.
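The cascade detection above can be sketched as a pure function over pf.log rows. This is a hypothetical helper, not existing code: the field names (`pf_logid`, `ltype`, `stamp`) follow the notes above, and the real version would run against rows fetched for the version being edited.

```javascript
// Hypothetical cascade check: given all pf.log entries for a version and the
// pf_logid of the segment being edited, return the scale/recode/clone entries
// stamped *after* it -- these were calibrated against the old totals and feed
// the "N forecast operations may be invalidated" banner.
function findInvalidatedEntries(logEntries, editedLogId) {
  const edited = logEntries.find((e) => e.pf_logid === editedLogId);
  if (!edited) throw new Error(`log entry ${editedLogId} not found`);
  return logEntries.filter(
    (e) =>
      e.pf_logid !== editedLogId &&
      ["scale", "recode", "clone"].includes(e.ltype) &&
      new Date(e.stamp) > new Date(edited.stamp)
  );
}
```

The banner count is just `findInvalidatedEntries(...).length`; the CASCADE delete option would drop exactly the returned entries (and their rows by pf_logid) before the replay.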
- [ ] be able to copy an existing forecast and its segments to adjust some parameters without having to start from scratch.
Notes: A version is the unit of copy. Need a `POST /versions/:id/copy`
endpoint that creates a new pf.version row with the same source/
col_meta, creates the new fc_<tname>_<id> table via the same DDL
path, and replays each pf.log entry's INSERT against the new table
(preserving stamp ordering). Each log entry gets re-inserted
pointing at the new version_id; the new pf_logid feeds the row
inserts. Notes/users come along.
UI: "Copy" button next to each version in Baseline. Copy modal
asks for a new name and optional description, then runs the API
call (likely 5-30s for a 350k-row version since every segment is
re-evaluated). Show progress.
Two design questions worth deciding up front:
- Copy as-of-now (re-fetch source data, so freshly-arrived rows
show up in baseline)? Or freeze (replay from existing forecast
rows, i.e. clone the forecast table directly)? Different
semantics, different SQL — pick one before building.
- Should the copy track its origin? A `parent_version_id` column
on pf.version makes "show me variants of FY2026 Plan" easy.
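The replay step can be sketched as a plan builder: order the source version's log entries by stamp and re-point them at the new version. This is an illustrative sketch, not the real endpoint; the actual `POST /versions/:id/copy` would also execute each entry's row INSERT against the new fc table, with the DB-assigned pf_logid feeding those inserts.

```javascript
// Hypothetical replay-plan builder for POST /versions/:id/copy: takes the
// source version's pf.log rows and the freshly created version id, and
// returns the ordered list of log entries to re-insert (stamp order
// preserved, per the notes above).
function buildReplayPlan(sourceLogEntries, newVersionId) {
  return [...sourceLogEntries]
    .sort((a, b) => new Date(a.stamp) - new Date(b.stamp))
    .map((entry) => ({
      ...entry,
      pf_logid: undefined, // assigned by the DB on insert; feeds the row inserts
      version_id: newVersionId,
    }));
}
```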
- [ ] need the list of filters to have an and/or specification
Notes: Spec already covers this in `pf_spec.md:245`: `filters` is
an array of groups; conditions within a group are AND-ed, groups
OR-ed. Backend has `buildFilterClause` in
`lib/sql_generator.js:247` but it's not wired into the routes
(baseline currently takes raw `where_clause`). Wiring + UI is the
remaining work.
UI: each group is a card with a header ("Group 1", "Group 2 — OR"),
rows of `column / operator / values`, a `+ Add condition` link,
and a `+ Add OR group` button at the bottom. The Baseline view
already has a single-group filter builder; extend it to wrap the
current rows in a group container and allow adding more groups.
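The group semantics can be sketched as a small clause builder. This is a simplified stand-in for the existing `buildFilterClause` in `lib/sql_generator.js` (whose actual signature may differ); it assumes `groups` is an array of arrays of `{ column, operator, value }` and emits `$n` placeholders so values stay parameterized.

```javascript
// Sketch of the spec's filter semantics: conditions within a group are
// AND-ed, groups are OR-ed. Returns a WHERE fragment plus the ordered
// parameter list (Postgres-style $1, $2, ... placeholders).
function buildFilterClauseSketch(groups) {
  if (!groups || groups.length === 0) return { sql: "", params: [] };
  const params = [];
  const groupSql = groups.map((conditions) => {
    const parts = conditions.map(({ column, operator, value }) => {
      params.push(value);
      return `${column} ${operator} $${params.length}`;
    });
    return `(${parts.join(" AND ")})`;
  });
  return { sql: groupSql.join(" OR "), params };
}
```

Note the real version must also validate `column` and `operator` against a whitelist (e.g., col_meta), since identifiers can't be parameterized.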
- [ ] the filters should have the option to just write the WHERE clause SQL
Notes: Spec covers this too (`pf_spec.md:251`, `:454`) as the
`raw_where` admin-only escape hatch. The current baseline endpoint
*already* takes `where_clause` as a raw string — so the API is
effectively in "raw only" mode today; it's the structured side
that's missing. Two things to add:
- Once structured `filters` is wired in, gate `raw_where` behind
an admin check (`pf_user` in admin list — needs admin list
config) and reject 400 if both are sent.
- UI toggle: a "Switch to manual SQL" link in the Baseline filter
builder swaps the structured rows for a `<textarea>`; warning
banner: "Raw SQL is not validated. You are responsible for
correctness and security."
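The two validation rules above can be sketched as one request check. This is a hypothetical helper (the admin-list config home is still an open question per the notes); it only shows the reject-both and admin-gate logic, not the full route.

```javascript
// Hypothetical validation for the baseline endpoint once structured filters
// are wired in: 400 if both filter styles are sent, 403 if raw_where is used
// by a non-admin pf_user.
function validateFilterInput(body, pfUser, adminUsers) {
  if (body.filters && body.raw_where) {
    return { ok: false, status: 400, error: "send filters or raw_where, not both" };
  }
  if (body.raw_where && !adminUsers.includes(pfUser)) {
    return { ok: false, status: 403, error: "raw_where requires admin" };
  }
  return { ok: true };
}
```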
- [ ] load status bar is super jittery and the numbers wildly change
Notes: `setLoadProgress` fires per chunk in the body-stream reader
(Forecast.jsx:155-161). On localhost or fast connections the
reader yields chunks in tight bursts and React re-renders the
overlay text on each — the visible bytes value flickers because
paints land out of order with respect to setState batching.
Fix: throttle to ~10 updates/sec. Either `if (now - lastUpdate >
100)` before `setLoadProgress`, or accumulate received bytes into
a ref and flush on `requestAnimationFrame`. Five-line change.
- [ ] default layout for the pivot should be sales_usd group by pending_rep, split by pf_iter
Notes: Default-layout logic lives in `initViewer` (Forecast.jsx
~line 240) and currently picks `group_by = first 2 dimensions`,
`split_by = date column`. `sales_usd` / `pending_rep` are
source-specific, so hardcoding them in the view would break for
any other source.
Two paths:
- **Quick**: hardcode for the current source. Cheap, but rots
the moment a second source comes along.
- **Right**: store a default-layout config on `pf.source` (e.g., a
JSON `default_view` column with `{ group_by, split_by, columns }`)
and let initViewer read it. Setup view gets a small "Default
pivot" editor — pick a value column, group_by columns, split_by
column.
Suggest the right version since the spec already implies
per-source customization in col_meta, and you're going to want
this for any future source you register. The col_meta path is
even tighter: extend col_meta with role flags like `default_value`,
`default_group`, `default_split` that initViewer reads.