diff --git a/todo.md b/todo.md
new file mode 100644
index 0000000..db3085b
--- /dev/null
+++ b/todo.md
@@ -0,0 +1,110 @@
+- [ ] should be able to edit and revise forecast segments that constitute baseline or reference. if you edit, maybe a warning that your forecast values won't mean a lot, and have an option to delete them.
+
+  Notes: A baseline/reference segment is a `pf.log` row plus the
+  forecast rows it produced (joined by pf_logid). Editing has the
+  shape of a delete-then-replay: drop the rows by pf_logid, drop the
+  log entry, re-run the segment with the new params (offset, filter,
+  iter type), insert the new log entry. New endpoint:
+  `PUT /versions/:id/baseline/:logid` (and the same for reference).
+  UI: an Edit button on each segment in Baseline view, populating the
+  form with the original `params`.
+
+  Cascade warning: if any scale/recode/clone log entries exist *after*
+  this segment was added, those operations were calibrated against
+  the old totals and will no longer reconcile cleanly. Show a banner
+  like "3 forecast operations applied after this segment may be
+  invalidated. View / Delete / Continue." Probably want a CASCADE
+  option that deletes downstream forecast entries too, plus a plain
+  "edit only" option for the user who knows what they're doing.
+
+  Implementation order: API + cascade detection first (compare
+  pf.log.stamp ordering); UI second.
+
+- [ ] be able to copy an existing forecast and its segments to adjust some parameters without having to start from scratch.
+
+  Notes: A version is the unit of copy. Need a `POST /versions/:id/copy`
+  endpoint that creates a new pf.version row with the same
+  source/col_meta, creates the new fc__ table via the same DDL
+  path, and replays each pf.log entry's INSERT against the new table
+  (preserving stamp ordering). Each log entry gets re-inserted
+  pointing at the new version_id; the new pf_logid feeds the row
+  inserts. Notes/users come along.
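The cascade detection called out under the first item (compare `pf.log.stamp` ordering) could be sketched roughly like this. The entry shape and op names (`scale`, `recode`, `clone`) are assumptions matching the notes above, not the actual schema:

```javascript
// Hypothetical sketch: given the full pf.log for a version and the
// pf_logid of the segment being edited, return every later entry whose
// operation was calibrated against the old totals.
function findInvalidatedDownstream(logEntries, editedLogId) {
  const edited = logEntries.find((e) => e.pf_logid === editedLogId);
  if (!edited) return [];
  const calibratedOps = new Set(["scale", "recode", "clone"]);
  return logEntries.filter(
    (e) => e.stamp > edited.stamp && calibratedOps.has(e.op)
  );
}
```

The warning banner's count would then just be the returned array's length, and a CASCADE delete would drop exactly those pf_logids plus their forecast rows.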
+
+  UI: "Copy" button next to each version in Baseline. Copy modal
+  asks for a new name and optional description, then runs the API
+  call (likely 5–30s for a 350k-row version since every segment is
+  re-evaluated). Show progress.
+
+  Two design questions worth deciding up front:
+  - Copy as-of-now (re-fetch source data, so freshly-arrived rows
+    show up in baseline)? Or freeze (replay from existing forecast
+    rows, i.e. clone the forecast table directly)? Different
+    semantics, different SQL; pick one before building.
+  - Should the copy track its origin? A `parent_version_id` column
+    on pf.version makes "show me variants of FY2026 Plan" easy.
+
+- [ ] need the list of filters to have an and/or specification
+
+  Notes: Spec already covers this in `pf_spec.md:245`: `filters` is
+  an array of groups; conditions within a group are AND-ed, groups
+  OR-ed. Backend has `buildFilterClause` in
+  `lib/sql_generator.js:247` but it's not wired into the routes
+  (baseline currently takes raw `where_clause`). Wiring + UI is the
+  remaining work.
+
+  UI: each group is a card with a header ("Group 1", "Group 2 (OR)"),
+  rows of `column / operator / values`, a `+ Add condition` link,
+  and a `+ Add OR group` button at the bottom. The Baseline view
+  already has a single-group filter builder; extend it to wrap the
+  current rows in a group container and allow adding more groups.
+
+- [ ] the filters should have the option to just write the WHERE clause SQL
+
+  Notes: Spec covers this too (`pf_spec.md:251`, `:454`) as the
+  `raw_where` admin-only escape hatch. The current baseline endpoint
+  *already* takes `where_clause` as a raw string, so the API is
+  effectively in "raw only" mode today; it's the structured side
+  that's missing. Two things to add:
+
+  - Once structured `filters` is wired in, gate `raw_where` behind
+    an admin check (`pf_user` in admin list; needs admin-list
+    config) and reject 400 if both are sent.
+  - UI toggle: a "Switch to manual SQL" link in the Baseline filter
+    builder swaps the structured rows for a `
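The mutual-exclusion and admin-gating rules for `raw_where` could look roughly like this. Function and field names are illustrative, and where the admin list comes from is still the open config question noted above:

```javascript
// Hypothetical request-validation sketch for the baseline endpoint:
// structured filters and raw_where are mutually exclusive, and
// raw_where is admin-only.
function validateFilterInput(body, pfUser, adminUsers) {
  const hasStructured = Array.isArray(body.filters) && body.filters.length > 0;
  const hasRaw =
    typeof body.raw_where === "string" && body.raw_where.trim() !== "";
  if (hasStructured && hasRaw) {
    return { status: 400, error: "send either filters or raw_where, not both" };
  }
  if (hasRaw && !adminUsers.includes(pfUser)) {
    return { status: 403, error: "raw_where is admin-only" };
  }
  return { status: 200 };
}
```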
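For the and/or filter item, a minimal sketch of the group compilation, assuming the spec's shape (an array of groups, each holding a `conditions` list). The real `buildFilterClause` in `lib/sql_generator.js:247` may differ, so this uses a different name:

```javascript
// Sketch only: compiles [{conditions: [{column, operator, values}]}]
// into a parameterized WHERE body. Conditions within a group are
// AND-ed; groups are OR-ed. Column and operator names would need
// validating against col_meta before use -- never trust them raw.
function buildWhereBody(filters) {
  if (!Array.isArray(filters) || filters.length === 0) return "";
  const groups = filters.map((group) => {
    const conds = group.conditions.map((c) => {
      if (c.operator === "in") {
        const marks = c.values.map(() => "?").join(", ");
        return `${c.column} IN (${marks})`;
      }
      return `${c.column} ${c.operator} ?`; // value bound as a parameter
    });
    return `(${conds.join(" AND ")})`;
  });
  return groups.join(" OR ");
}
```

Values never appear in the string; they ride along as bind parameters, which is also what makes the structured path safer than `raw_where`.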