* Initial test
* Save
* Working version
* Use since/until from payload
* Option to prefix metric name
* Rename LineMultiLayer to MultiLineViz
* Add more styles
* Explicit nulls
* Add more x controls
* Refactor to reuse nvd3_vis
* Fix x ticks
* Fix spacing
* Fix for druid datasource
* Rename file
* Small fixes and cleanup
* Fix margins
* Add proper thumbnails
* Align yaxis1 and yaxis2 ticks
* Improve code
* Trigger tests
* Move file
* Small fixes plus example
* Fix unit test
* Remove SQL and Filter sections
* Fix percent_metrics ZeroDivisionError and the issue where metrics could not be fetched by label
* convert iterator to list
* get percentage metrics with get_metric_label method
* Add test cases for expression-type metrics
* Simplify expression
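The ZeroDivisionError guard mentioned above can be sketched roughly as follows (a minimal illustration; the function and field names here are hypothetical, not Superset's actual code):

```python
def compute_percent_metrics(rows, metric_labels):
    """Derive percentage columns from raw metric columns, guarding
    against ZeroDivisionError when a metric sums to zero."""
    for label in metric_labels:
        total = sum(row[label] for row in rows)
        for row in rows:
            # A zero total would raise ZeroDivisionError; emit 0 instead.
            row["%" + label] = row[label] / total if total else 0
    return rows
```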
* Make port number optional in superset for druid
It appears that urllib throws an error with SSL if the port number is provided:
```
import urllib.request

url = "https://example.com:443/druid/v2"
req = urllib.request.Request(url, druid_query_str, headers)
res = urllib.request.urlopen(req)
```
The above call fails with the following error:
```
urllib2.HTTPError: HTTP Error 404: Not Found
```
If the url is set to https://example.com/druid/v2 it works, so this change
makes the port number optional.
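The optional-port behavior can be sketched as below (the `druid_url` helper is purely illustrative, shown only to demonstrate the change):

```python
def druid_url(host, port=None, scheme="https", endpoint="druid/v2"):
    """Build the broker URL, including the port only when one is
    explicitly configured."""
    netloc = "{}:{}".format(host, port) if port else host
    return "{}://{}/{}".format(scheme, netloc, endpoint)
```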
* Rewrite if/else in a concise, Pythonic way
* [WiP] make MetricsControl the standard across visualizations
This spreads MetricsControl across visualizations.
* Addressing comments
* Fix deepcopy issue using shallow copy
* Fix tests
* [sql lab] a better approach at limiting queries
Currently there are two mechanisms that we use to enforce the row
limiting constraints, depending on the database engine:
1. use dbapi's `cursor.fetchmany()`
2. wrap the SQL into a limiting subquery
Method 1 isn't great as it can result in the database server holding a
larger-than-required result set in memory, expecting another fetch
command when we know we don't need one.
Method 2 has a positive side of working with all database engines,
whether they use LIMIT, ROWNUM, TOP or whatever else since sqlalchemy
does the work as specified for the dialect. On the downside though
the query optimizer might not be able to optimize this as much as an
approach that doesn't use a subquery.
Since most modern DBs use the LIMIT syntax, this adds a regex-based
approach to modify the query and force a LIMIT clause without using a
subquery for the databases that support this syntax, and falls back to
method 2 for all others.
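The regex approach can be sketched roughly as follows (a simplification; `apply_limit` is illustrative and the real dialect handling in Superset is more involved):

```python
import re

def apply_limit(sql, limit):
    """If the query already ends with a LIMIT clause, replace it with
    min(existing, limit); otherwise append a LIMIT clause."""
    sql = sql.strip().rstrip(";")
    match = re.search(r"\bLIMIT\s+(\d+)\s*$", sql, re.IGNORECASE)
    if match:
        # Never raise the limit the user already set.
        new_limit = min(int(match.group(1)), limit)
        return sql[: match.start()] + "LIMIT {}".format(new_limit)
    return "{} LIMIT {}".format(sql, limit)
```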
* Fixing build
* Fix lint
* Added more tests
* Fix tests
* Force lowercase column names for Snowflake and Oracle
* Force lowercase column names for Snowflake and Oracle
* Remove lowercasing of DB2 columns
* Remove DB2 lowercasing
* Fix test cases
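The lowercasing behavior can be sketched as below (a hypothetical helper shown only to illustrate the change; Superset implements this per database engine spec):

```python
# Snowflake and Oracle fold unquoted identifiers to uppercase, so force
# lowercase for consistent column matching. DB2 was removed from this set
# in a follow-up commit.
LOWERCASE_ENGINES = {"snowflake", "oracle"}

def normalize_column_name(name, engine):
    """Lowercase column names only for engines that need it."""
    return name.lower() if engine in LOWERCASE_ENGINES else name
```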
* use session context manager
* contextlib2 added to requirements.txt
* Fixing error: Import statements are in the wrong order. from contextlib2 import contextmanager should be before import sqlalchemy
* Fixing return inside generator
* fixed C812 missing trailing comma
* E501 line too long
* fixed E127 continuation line over-indented for visual indent
* E722 do not use bare except
* reorganized imports
* added context manager contextlib2.contextmanager
* fixed import ordering
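The session context manager described in these commits follows a standard pattern, roughly like this (shown with the stdlib `contextlib`; the commits use the `contextlib2` backport, presumably for Python 2 compatibility):

```python
from contextlib import contextmanager

@contextmanager
def session_scope(session_factory):
    """Commit on success, roll back on error, always close the session."""
    session = session_factory()
    try:
        yield session
        session.commit()
    except Exception:
        session.rollback()
        raise
    finally:
        session.close()
```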
* Changes "Import the dashboards." to "Import dashboards"
* Cleans up the HTML to add quotes, self close tags, etc.
* Adds a class to the `<submit>` button to utilize bootstrap style
* Remove the `<title>` tag in the body, as it's not valid HTML and is redundant with `{% block %}`
* add extraction fn support for Druid queries
* bump pydruid version to get extraction fn commits
* update and add tests for druid for filters with extraction fns
* conform to flake8 rules
* fix flake8 issues
* bump pydruid version for extraction function features
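For reference, a Druid native-query selector filter with an extraction function has the following JSON shape (built here as a plain dict for illustration; pydruid provides helpers for constructing these):

```python
def selector_filter_with_extraction(dimension, value, regex):
    """Build a native Druid selector filter that applies a regex
    extraction function to the dimension before comparing values."""
    return {
        "type": "selector",
        "dimension": dimension,
        "value": value,
        # The extraction fn transforms the dimension value pre-comparison.
        "extractionFn": {"type": "regex", "expr": regex},
    }
```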
It appears the officially maintained fork of flask-cache is
flask-caching (https://github.com/sh4nks/flask-caching). It is fully
compatible with flask-cache.
Fixes https://github.com/apache/incubator-superset/issues/4926
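Since flask-caching is a drop-in replacement, the migration is essentially a rename; assuming a standard Flask app, the wiring looks like this (a configuration sketch, not Superset's exact code):

```python
# Before: from flask_cache import Cache
from flask_caching import Cache

cache = Cache(config={"CACHE_TYPE": "simple"})
# cache.init_app(app)  # bind the cache to the Flask app at startup
```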
In rare cases where the query is stopped before it is started, SQL Lab
returns an unexpected string payload instead of a normal dictionary.
This aligns the process to handle the error in a homogeneous fashion.
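The homogeneous handling can be sketched as a helper that always produces a dictionary payload (names here are illustrative):

```python
import json

def error_payload(message):
    """Always return a JSON-serialized dictionary for errors, so callers
    never receive a bare string payload."""
    return json.dumps({"status": "failed", "error": message})
```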