* Use the query_obj as the basis for the cache key
When we recently moved from hashing form_data to using the rendered query
to define the cache_key, non-deterministic form control values, like
relative times specified in the "from" and "until" time bounds, started
rendering to a different query on every request, making those charts miss
the cache 100% of the time. Here we move away from using the rendered
query and use the query_obj instead.
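A minimal sketch of the approach (the helper below and its serialization details are illustrative, not the exact implementation):

```python
import hashlib
import json


def cache_key(query_obj):
    """Illustrative sketch: derive a deterministic cache key from a query_obj.

    Serializing the dict with sorted keys keeps the hash stable across
    requests, unlike a rendered SQL string where relative time bounds
    resolve to a different timestamp every time.
    """
    serialized = json.dumps(query_obj, default=str, sort_keys=True)
    return hashlib.md5(serialized.encode("utf-8")).hexdigest()
```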
* Deprecate the use of form_data in templates
Filters applied to deck_multi will be passed down to its layers.
If the column isn't set as "filterable", the filter is ignored.
Also note that dashboard configuration such as "filter_immune_slices"
is disregarded in this context, since it isn't the dashboard controller
passing down the filter, and that context is not easily accessible here.
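A rough sketch of that pass-down (the names `merge_filters` and `filterable_columns` are illustrative, not the actual code):

```python
def merge_filters(layer_form_data, filters, filterable_columns):
    """Sketch: fold deck_multi's filters into one layer's form_data,
    silently ignoring filters on columns not marked "filterable"."""
    merged = dict(layer_form_data)
    merged["filters"] = list(merged.get("filters", [])) + [
        flt for flt in filters if flt.get("col") in filterable_columns
    ]
    return merged
```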
Moving from having the user define an interceptor function that operates
on one object at a time to one that receives the entire array.
By passing the entire array, it's possible to do multiple passes where
needed. A common pattern might be to figure out the max value in order
to define a scaler function; that's only possible when dealing with the
whole array.
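The actual control takes a JavaScript function; here is the same two-pass idea sketched in Python, with a made-up `weight` field:

```python
def data_mutator(records):
    """Sketch of an interceptor that receives the whole array: first find
    the max weight, then use it to scale every record, something a
    one-object-at-a-time interceptor cannot do."""
    max_weight = max(rec["weight"] for rec in records)  # pass 1
    for rec in records:                                 # pass 2
        rec["scaled_weight"] = rec["weight"] / max_weight
    return records
```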
* conditional check on datatype of results before converting to df
fix type checking
fix conditional checks
remove trailing whitespace and fix df_data fallback def
actually remove trailing whitespace
generalized type check to check all columns for dict
refactor dict col check
* move df conversion to helper and add unit test
add missing newlines
another missing newline
fix quotes
more quote fixes
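Roughly what the helper might look like, assuming results arrive as a list of dicts (the name `results_to_df` and the first-row type probe are illustrative simplifications):

```python
import json

import pandas as pd


def results_to_df(data):
    """Sketch: JSON-serialize dict-valued columns before building a
    DataFrame, since nested dicts don't flatten cleanly into columns."""
    df_data = data if data else []
    if df_data and any(isinstance(v, dict) for v in df_data[0].values()):
        df_data = [
            {k: json.dumps(v) if isinstance(v, dict) else v for k, v in row.items()}
            for row in df_data
        ]
    return pd.DataFrame(df_data)
```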
Currently, even though `get_sqla_engine` calls get memoized, engines are
still short-lived since they are attached to a `models.Database` ORM
object. All engines created through this method have the scope of a web
request.
Knowing that the SQLAlchemy objects are short-lived means that
a related connection pool would also be short-lived and mostly useless.
I think it's pretty rare that connections get reused within the context
of a view or a Celery worker task.
We've noticed on Redshift that Superset was leaving many connections
open (hundreds). This is probably due to a combination of the current
process not garbage collecting connections properly, and perhaps the
absence of a connection timeout on the Redshift side of things. It
could also be related to the fact that we experience web request timeouts
(enforced by gunicorn), and that killing the process may not give
SQLAlchemy a chance to clean up connections (which this PR may not help
fix...).
For all these reasons, it seems like the right thing to do is to use
NullPool for external connections (but not for our connection to the
metadata db!).
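A minimal sketch of the change for external connections (the URI is illustrative and assumes the matching driver is installed):

```python
from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool

# Illustrative URI; assumes a driver such as psycopg2 is available.
database_url = "postgresql://user:password@redshift-host:5439/analytics"

# NullPool closes every connection on check-in, so nothing lingers after
# the short-lived engine (scoped to the web request) is garbage collected.
engine = create_engine(database_url, poolclass=NullPool)
```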
Opening the PR for conversation. Putting this change into our staging
environment today to run some tests.
* Working polygon layer for deckGL
* add js controls
* add thumbnail
* better description
* refactor to leverage line_column controls
* templates: open code and documentation in a new tab (#4217)
As they are external resources.
* Fix "tutorial doesn't match the current interface" #4138 (#4215)
* [bugfix] markup and iframe viz raise 'Empty query' (#4225)
closes https://github.com/apache/incubator-superset/issues/4222
Related to: https://github.com/apache/incubator-superset/pull/4016
* [bugfix] time_pivot entry went missing in merge conflict (#4221)
The PR at https://github.com/apache/incubator-superset/pull/3518 dropped a
line of code while resolving merge conflicts with the time_pivot viz.
* Improve deck.gl GeoJSON visualization (#4220)
* Improve geoJSON
* Addressing comments
* lint
* refactor to leverage line_column controls
* refactor to use DeckPathViz
* oops
Funky datatypes in some databases, like BLOBs, will have the DBAPI return
Python types that can't be serialized to JSON out of the box.
Currently, when this happens, SQL Lab fails in a bad way with a gigantic
HTML error message.
This allows specifying a pessimistic JSON serializer handler that will
simply show "Unserializable [type]".
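A minimal sketch of such a handler using the standard `json.dumps(default=...)` hook (`pessimistic_default` is an illustrative name):

```python
import json


def pessimistic_default(obj):
    """Sketch of a pessimistic fallback for json.dumps: instead of
    raising TypeError on e.g. BLOB values, just name the type."""
    return "Unserializable [{}]".format(type(obj).__name__)


# memoryview stands in for a BLOB-like value the encoder can't handle.
payload = json.dumps({"blob_col": memoryview(b"\x00\x01")},
                     default=pessimistic_default)
```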
* Using JS to customize spatial viz and tooltips
* Add missing deck_multi.png
* Improve GeoJSON layer with JS support and extra controls
* Addressing comments