One user reported that the load of the "countries" table exceeded
max_allowed_packet, which in some configurations can be as low as 1MB.
Changing the chunk size from 500 to 50 adds a small cost to the initial
load of the data (one additional second on top of the 17 taken previously)
while being more universally usable without changing the configuration
of the MySQL server.
The new packet size is estimated to be about 500KB.
The committer has not checked other tables.
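A minimal sketch of the kind of change described, assuming the data load goes through pandas `DataFrame.to_sql` (the engine URL and sample data here are illustrative stand-ins, not the project's actual loader):

```python
import pandas as pd
from sqlalchemy import create_engine

# Illustrative only: a smaller chunksize keeps each multi-row INSERT
# well below MySQL's max_allowed_packet (as low as 1MB in some setups).
engine = create_engine("sqlite://")  # stand-in for the real MySQL engine
df = pd.DataFrame(
    {"country": ["FR", "DE"], "population": [67_000_000, 83_000_000]}
)

# chunksize=50 instead of 500: roughly 10x smaller packets per INSERT
df.to_sql("countries", engine, if_exists="replace", index=False, chunksize=50)

row_count = pd.read_sql("SELECT COUNT(*) AS n FROM countries", engine)["n"][0]
```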
* Generalize switch between different datasources.
* Fix previous migration since slice model changed
* Fix warm up cache and other small stuff
* Adding modules and datasources through config
* Replace tabs w/ spaces
* Fix other style issues
* Change add method for SliceModelView to pick the first non-empty ds
* Remove tests on slice add redirect
* Change way of db migration
* Fix styling
* Fix create slice
* Small fixes
* Fix code climate check
* Adding notes on how to create new datasource in CONTRIBUTING.md
* Fix last merge
* A commit just to trigger travis build again
* Add migration to merge two heads
* Fix codeclimate
* Simplify source_registry
* Fix codeclimate
* Remove all getter methods
* [SQL Lab] Adding DB options for SQL Lab
each DB can be exposed or not in SQL Lab
CTAS is an option
target_schema placeholder (not hooked up yet, but would force the CTAS to
target a specific schema)
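A hedged sketch of what the CTAS (CREATE TABLE AS SELECT) option with a target schema amounts to; the helper function here is hypothetical, for illustration only, not Superset's actual implementation:

```python
from typing import Optional


def create_table_as(select_sql: str, table_name: str,
                    schema: Optional[str] = None) -> str:
    """Wrap a SELECT statement in a CREATE TABLE AS statement,
    optionally targeting a specific schema (the target_schema idea)."""
    target = f"{schema}.{table_name}" if schema else table_name
    return f"CREATE TABLE {target} AS {select_sql}"


# With a target schema, the new table lands in that schema:
stmt = create_table_as("SELECT * FROM logs", "tmp_logs", schema="scratch")
# CREATE TABLE scratch.tmp_logs AS SELECT * FROM logs
```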
* Addressing comments
* time format minor features added
* add description for datetime format input
* db version bug workaround
* removed unnecessary comments and fixed minor bug
* fixed code style
* minor fix
* fixed missing time format column in DruidDatasource
* Update models.py
Minor style fix
* Revert "Update models.py"
This reverts commit 6897c388e0.
* removed timestamp_format from druid and removed try catch in migration
* Using spaces, not tabs
* get the most updated migration and add the migration on the head of it
* remove vscode setting file
* use column-based dttm_format
* modify dttm_converter
* modify datetime viz
* added comments and documents
* fixed some description and removed unnecessary import
* fix migration head
* minor style
* minor style
* deleted empty lines
* delete print statement
* add epoch converter
* error fixed
* fixed epoch parsing issue
* delete unnecessary lines
* fixed typo
* fix minor error
* fix styling issues
* fix styling error
* fixed typo
* support epoch_ms and did some refactoring
* fixed styling error
* fixed styling error
* add one more dataset to test dttm_format and db_expr
* add more slices
* styling
* specified String() length
* simple mapbox viz
use react-map-gl
superclustering of long/lat points
Added hook for map style, huge performance boost from bounding box fix, added count text on clusters
variable gradient size based on metric count
Ability to aggregate over any point property
This needed a change in the supercluster npm module, a PR was placed here:
https://github.com/mapbox/supercluster/pull/12
Aggregator function option in explore, tweaked visual defaults
better radius size management
clustering radius, point metric/unit options
scale cluster labels that don't fit, non-numeric labels for points
Minor fixes, label field affects points, text changes
serve mapbox apikey for slice
global opacity, viewport saves (hacky), bug in point labels
fixing mapbox-gl dependency
mapbox_api_key in config
expose row_limit, fix minor bugs
Add renderWhileDragging flag, groupby. Only show numerical columns for point radius
Implicitly group by lng/lat columns and error when label doesn't match groupby
'Fix' radius in miles problem, still some jankiness
derived fields cannot be typed as of now -> reverting numerical number change
better grouping error checking, expose count(*) for labelling
Custom colour for clusters/points + smart text colouring
Fixed bad positioning and overflow in explore view + small bugs + added thumbnail
* landscaping & eslint & use izip
* landscapin'
* address js code review
* add unicode data to tests
* make tests pass on 2.7
* clean up data loading
- remove duplicate keys in slice_data
- reduce line length
* change manager option flag to -t, --load-test-data
* test --> load_test_data
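The epoch handling mentioned in the commits above ("add epoch converter", "support epoch_ms") can be sketched roughly as follows; the function name and exact dispatch are illustrative assumptions, not the project's actual code:

```python
from datetime import datetime, timezone


def parse_dttm(value, dttm_format):
    """Convert a raw time-column value to a datetime, honoring a
    per-column format: 'epoch_s', 'epoch_ms', or a strftime pattern."""
    if dttm_format == "epoch_s":
        # seconds since the Unix epoch
        return datetime.fromtimestamp(float(value), tz=timezone.utc)
    if dttm_format == "epoch_ms":
        # milliseconds since the Unix epoch
        return datetime.fromtimestamp(float(value) / 1000.0, tz=timezone.utc)
    # otherwise treat the format as a strftime/strptime pattern
    return datetime.strptime(value, dttm_format)


ts = parse_dttm("1483228800", "epoch_s")  # 2017-01-01 00:00:00+00:00
```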