The server streams rows from a pg cursor in 10k-row batches, building Arrow record batches incrementally and piping them as a chunked HTTP response, so the Node.js heap stays bounded regardless of dataset size. The client fetches the body as an arrayBuffer() and loads it directly into the Perspective worker (native Arrow path, no JSON deserialization). An X-Row-Count header drives a non-blocking banner for datasets >= 500k rows. validCols is now derived from col_meta rather than from row keys.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
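The bounded-memory batching loop described above can be sketched as follows. This is a simplified illustration, not the actual server code: the async row source stands in for the pg cursor, the batch size is shrunk for readability, and the Arrow serialization / HTTP write step is reduced to a comment. All names here (rowSource, batches, main) are hypothetical.

```javascript
const BATCH_SIZE = 3; // the real server uses 10_000

// Hypothetical row source standing in for cursor.read(): yields rows one at a time.
async function* rowSource(total) {
  for (let i = 0; i < total; i++) yield { id: i, value: i * 2 };
}

// Accumulate rows into fixed-size batches. At most one batch is ever
// held in memory, which is why the heap stays bounded regardless of
// how many rows the cursor ultimately produces.
async function* batches(rows, size) {
  let batch = [];
  for await (const row of rows) {
    batch.push(row);
    if (batch.length === size) {
      yield batch;
      batch = [];
    }
  }
  if (batch.length > 0) yield batch; // flush the trailing partial batch
}

async function main() {
  const sizes = [];
  for await (const b of batches(rowSource(8), BATCH_SIZE)) {
    // Real code: build an Arrow record batch from `b` and write it
    // to the chunked HTTP response here.
    sizes.push(b.length);
  }
  console.log(JSON.stringify(sizes)); // → [3,3,2]
  return sizes;
}

main();
```

In the real pipeline each flushed batch is serialized into the Arrow IPC stream and written to the response before the next batch is read, so backpressure from the HTTP socket naturally throttles the cursor.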
{
  "name": "pf_app",
  "version": "1.0.0",
  "description": "Pivot Forecast Application",
  "main": "server.js",
  "scripts": {
    "start": "node server.js",
    "dev": "nodemon server.js",
    "build": "cd ui && npm run build"
  },
  "dependencies": {
    "apache-arrow": "^21.1.0",
    "cors": "^2.8.5",
    "dotenv": "^16.0.0",
    "express": "^4.18.2",
    "pg": "^8.11.3"
  },
  "devDependencies": {
    "nodemon": "^3.0.0"
  }
}