Compare commits

...

36 Commits

Author SHA1 Message Date
M1s4k1 919d96153e
Merge 9154a4f08f into f5843fe588 2024-05-05 02:08:29 -03:00
Maxime Beauchemin f5843fe588
fix: database logos look stretched (#28340) 2024-05-03 17:23:42 -07:00
Maxime Beauchemin 49231da42f
docs: various improvements across the docs (#28285) 2024-05-03 15:27:40 -07:00
Frank Zimper 517f254726
fix(website): links corrected (#28333) 2024-05-03 10:18:05 -07:00
dependabot[bot] f95d9cde40
build(deps): bump ws from 8.16.0 to 8.17.0 in /superset-websocket (#28288)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-02 22:31:30 -06:00
Lily Kuang 49992dd9d2
docs: add npm publish steps to release/readme (#23730)
Co-authored-by: Nicolas Charpentier <nicolas.charpentier079@gmail.com>
Co-authored-by: Evan Rusackas <evan@preset.io>
2024-05-02 22:28:23 -06:00
Asaf Levy 3e74ff174c
refactor(helm): Allow chart operators to exclude the creation of the secret manifest (#28308) 2024-05-02 22:08:32 -06:00
John Bodley b4c4ab7790
fix: Rename legacy line and area charts (#28113) 2024-05-02 17:04:22 -03:00
Đỗ Trọng Hải 5331dc740a
chore(dev): remove obsolete image reference to `superset-websocket` + fix minor typo (#28321)
Signed-off-by: hainenber <dotronghai96@gmail.com>
2024-05-02 11:42:00 -07:00
John Bodley 27952e7057
fix: Ignore USE SQL keyword when determining SELECT statement (#28279) 2024-05-02 11:25:55 -07:00
John Bodley 0ce5864fc7
chore: Move #26288 from "Database Migration" to "Other" (#28311) 2024-05-02 10:17:46 -07:00
Đỗ Trọng Hải 593c653ab5
fix(docs): prevent browser to download the entire video in first page load + fix empty `controls` attribute (#28319) 2024-05-02 11:15:39 -06:00
John Bodley d36bccdc8c
fix(sql_parse): Add Apache Spark to SQLGlot dialect mapping (#28322) 2024-05-02 09:53:20 -07:00
Maxime Beauchemin 513852b7c3
fix: all_database_access should enable access to all datasets/charts/dashboards (#28205) 2024-05-02 09:25:14 -07:00
John Bodley e94360486e
chore(commands): Remove unnecessary commit (#28154) 2024-05-01 16:09:50 -06:00
dependabot[bot] b17db6d669
build(deps): bump markdown-to-jsx from 7.4.1 to 7.4.7 in /superset-frontend (#28298)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-01 09:22:53 -06:00
dependabot[bot] f4b6c3049b
build(deps): bump clsx from 2.1.0 to 2.1.1 in /docs (#28301)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-01 09:22:09 -06:00
dependabot[bot] 55391bb587
build(deps-dev): bump eslint-plugin-testing-library from 6.2.0 to 6.2.2 in /superset-frontend (#28306)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-01 09:21:07 -06:00
Beto Dealmeida 38e2843b24
chore: clean up DB create command (#28246) 2024-05-01 11:06:26 -04:00
JUST.in DO IT 7c8423a522
fix(explore): cannot reorder dnd of Metrics (#28269) 2024-05-01 10:13:12 -03:00
Maxime Beauchemin ec8351d336
feat: accelerate webpack builds with filesystem cache (#28282) 2024-04-30 23:18:20 -07:00
Evan Rusackas e4f93b293f
chore(docs): video now hosted by ASF instead of GitHub (#28284) 2024-04-30 23:41:16 -06:00
Maxime Beauchemin 2b4b771449
fix: silence docker-compose useless warnings (#28283) 2024-04-30 19:35:02 -07:00
Maxime Beauchemin 538d1bb245
docs: merge database config under Configuration section (#28281) 2024-04-30 18:39:32 -07:00
Maxime Beauchemin 3ac387bb66
chore: enable ruff's isort equivalent (#28267) 2024-04-30 18:29:49 -07:00
Beto Dealmeida fe37d914e5
fix: % replace in `values_for_column` (#28271) 2024-04-30 16:15:56 -07:00
Maxime Beauchemin 51da5adbc7
chore: allow codecov to detect SHA (#28278) 2024-04-30 15:32:33 -07:00
Evan Rusackas 3cc8434c5a
fix(ci): adding codecov token (#28277) 2024-04-30 14:23:35 -06:00
Radek Antoniuk c641bbfb9e
chore: use depth=1 for cloning (#28276) 2024-04-30 14:20:29 -06:00
Đỗ Trọng Hải 2e9cc654ef
docs(intro): embed overview video into Intro document (#28163)
Signed-off-by: hainenber <dotronghai96@gmail.com>
2024-04-30 12:59:35 -06:00
Mathias Bögl f03de27a92
docs(upgrading): clarify upgrade process (#28275) 2024-04-30 12:28:15 -04:00
Ross Mabbett 601896b1fc
chore(superset-ui-core and NoResultsComponent): Migrate to RTL, add RTL modules to the ui-core (#28187) 2024-04-30 16:10:35 +03:00
Ross Mabbett 2e5f3ed851
fix(Dev-Server): Edit ChartPropsConfig reexport to be a type object (#28225) 2024-04-30 16:09:33 +03:00
Ross Mabbett a38dc90abe
fix(Webpack dev-sever warnings): Add ignoreWarning to webpack config for @data-ui error (#28232) 2024-04-30 16:07:55 +03:00
Ross Mabbett efda57e8a5
chore(AlteredSliceTag): Migrate to functional (#27891)
Co-authored-by: Elizabeth Thompson <eschutho@gmail.com>
2024-04-30 14:55:11 +03:00
M1s4k1 9154a4f08f fix #22162 2022-11-18 15:59:54 +08:00
213 changed files with 6698 additions and 8372 deletions

View File

@ -117,12 +117,6 @@ testdata() {
say "::endgroup::"
}
codecov() {
say "::group::Upload code coverage"
bash ".github/workflows/codecov.sh" "$@"
say "::endgroup::"
}
cypress-install() {
cd "$GITHUB_WORKSPACE/superset-frontend/cypress-base"
@ -191,11 +185,6 @@ cypress-run-all() {
cypress-run "sqllab/*" "Backend persist"
# Upload code coverage separately so each page can have separate flags
# -c will clean existing coverage reports, -F means add flags
# || true to prevent CI failure on codecov upload
codecov -c -F "cypress" || true
say "::group::Flask log for backend persist"
cat "$flasklog"
say "::endgroup::"
@ -225,8 +214,6 @@ cypress-run-applitools() {
$cypress --spec "cypress/e2e/*/**/*.applitools.test.ts" --browser "$browser" --headless
codecov -c -F "cypress" || true
say "::group::Flask log for default run"
cat "$flasklog"
say "::endgroup::"

File diff suppressed because it is too large.

View File

@ -4,6 +4,7 @@ on:
push:
paths:
- "docs/**"
- "README.md"
branches:
- "master"

View File

@ -20,7 +20,6 @@ jobs:
- name: "Checkout ${{ github.ref }} ( ${{ github.sha }} )"
uses: actions/checkout@v4
with:
fetch-depth: 0 # Fetch all history
persist-credentials: false
submodules: recursive
- name: Check npm lock file version
@ -79,6 +78,8 @@ jobs:
working-directory: ./superset-frontend/packages/generator-superset
run: npx jest
- name: Upload code coverage
if: steps.check.outputs.frontend
working-directory: ./superset-frontend
run: ../.github/workflows/codecov.sh -c -F javascript
uses: codecov/codecov-action@v4
with:
flags: javascript
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true

View File

@ -66,9 +66,11 @@ jobs:
run: |
./scripts/python_tests.sh
- name: Upload code coverage
if: steps.check.outputs.python
run: |
bash .github/workflows/codecov.sh -c -F python -F mysql
uses: codecov/codecov-action@v4
with:
flags: python,mysql
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true
test-postgres:
runs-on: ubuntu-20.04
strategy:
@ -123,9 +125,11 @@ jobs:
run: |
./scripts/python_tests.sh
- name: Upload code coverage
if: steps.check.outputs.python
run: |
bash .github/workflows/codecov.sh -c -F python -F postgres
uses: codecov/codecov-action@v4
with:
flags: python,postgres
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true
test-sqlite:
runs-on: ubuntu-20.04
@ -171,6 +175,8 @@ jobs:
run: |
./scripts/python_tests.sh
- name: Upload code coverage
if: steps.check.outputs.python
run: |
bash .github/workflows/codecov.sh -c -F python -F sqlite
uses: codecov/codecov-action@v4
with:
flags: python,sqlite
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true

View File

@ -75,9 +75,11 @@ jobs:
run: |
./scripts/python_tests.sh -m 'chart_data_flow or sql_json_flow'
- name: Upload code coverage
if: steps.check.outputs.python
run: |
bash .github/workflows/codecov.sh -c -F python -F presto
uses: codecov/codecov-action@v4
with:
flags: python,presto
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true
test-postgres-hive:
runs-on: ubuntu-20.04
@ -138,6 +140,8 @@ jobs:
run: |
./scripts/python_tests.sh -m 'chart_data_flow or sql_json_flow'
- name: Upload code coverage
if: steps.check.outputs.python
run: |
bash .github/workflows/codecov.sh -c -F python -F hive
uses: codecov/codecov-action@v4
with:
flags: python,hive
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true

View File

@ -46,6 +46,8 @@ jobs:
run: |
pytest --durations-min=0.5 --cov-report= --cov=superset ./tests/common ./tests/unit_tests --cache-clear
- name: Upload code coverage
if: steps.check.outputs.python
run: |
bash .github/workflows/codecov.sh -c -F python -F unit
uses: codecov/codecov-action@v4
with:
flags: python,unit
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true

View File

@ -47,7 +47,7 @@ repos:
hooks:
- id: check-docstring-first
- id: check-added-large-files
exclude: \.(geojson)$
exclude: ^.*\.(geojson)$|^docs/static/img/screenshots/.*
- id: check-yaml
exclude: ^helm/superset/templates/
- id: debug-statements

View File

@ -64,6 +64,7 @@ temporary_superset_ui/*
# docs overrides for third party logos we don't have the rights to
google-big-query.svg
google-sheets.svg
ibm-db2.svg
postgresql.svg
snowflake.svg

View File

@ -32,7 +32,6 @@ under the License.
- [#26369](https://github.com/apache/superset/pull/26369) refactor: Removes the filters set feature (@michael-s-molina)
- [#26416](https://github.com/apache/superset/pull/26416) fix: improve performance on reports log queries (@dpgaspar)
- [#26290](https://github.com/apache/superset/pull/26290) feat(echarts-funnel): Implement % calculation type (@kgabryje)
- [#26288](https://github.com/apache/superset/pull/26288) chore: Ensure Mixins are ordered according to the MRO (@john-bodley)
**Features**
@ -470,3 +469,4 @@ under the License.
- [#26100](https://github.com/apache/superset/pull/26100) build(deps-dev): bump @types/node from 20.9.4 to 20.10.0 in /superset-websocket (@dependabot[bot])
- [#26099](https://github.com/apache/superset/pull/26099) build(deps-dev): bump @types/cookie from 0.5.4 to 0.6.0 in /superset-websocket (@dependabot[bot])
- [#26104](https://github.com/apache/superset/pull/26104) docs: update CVEs fixed on 2.1.2 (@dpgaspar)
- [#26288](https://github.com/apache/superset/pull/26288) chore: Ensure Mixins are ordered according to the MRO (@john-bodley)

View File

@ -20,5 +20,5 @@ Contributions are welcome and are greatly appreciated! Every
little bit helps, and credit will always be given.
All matters related to contributions have moved to [this section of
the official Superset documentation](https://superset.apache.org/docs/contributing/contributing/). Source for the documentation is
the official Superset documentation](https://superset.apache.org/docs/contributing/). Source for the documentation is
[located here](https://github.com/apache/superset/tree/master/docs/docs).

108
README.md
View File

@ -1,3 +1,7 @@
---
hide_title: true
sidebar_position: 1
---
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
@ -30,12 +34,14 @@ under the License.
<picture width="500">
<source
width="600"
media="(prefers-color-scheme: dark)"
src="https://github.com/apache/superset/raw/master/superset-frontend/src/assets/branding/superset-logo-horiz-apache-dark.png"
src="https://superset.apache.org/img/superset-logo-horiz-dark.svg"
alt="Superset logo (dark)"
/>
<img
src="https://github.com/apache/superset/raw/master/superset-frontend/src/assets/branding/superset-logo-horiz-apache.png"
width="600"
src="https://superset.apache.org/img/superset-logo-horiz-apache.svg"
alt="Superset logo (light)"
/>
</picture>
@ -45,11 +51,11 @@ A modern, enterprise-ready business intelligence web application.
[**Why Superset?**](#why-superset) |
[**Supported Databases**](#supported-databases) |
[**Installation and Configuration**](#installation-and-configuration) |
[**Release Notes**](RELEASING/README.md#release-notes-for-recent-releases) |
[**Release Notes**](https://github.com/apache/superset/blob/master/RELEASING/README.md#release-notes-for-recent-releases) |
[**Get Involved**](#get-involved) |
[**Contributor Guide**](#contributor-guide) |
[**Resources**](#resources) |
[**Organizations Using Superset**](RESOURCES/INTHEWILD.md)
[**Organizations Using Superset**](https://github.com/apache/superset/blob/master/RESOURCES/INTHEWILD.md)
## Why Superset?
@ -71,69 +77,69 @@ Superset provides:
**Video Overview**
<!-- File hosted here https://github.com/apache/superset-site/raw/lfs/superset-video-4k.mp4 -->
https://user-images.githubusercontent.com/64562059/234390129-321d4f35-cb4b-45e8-89d9-20ae292f34fc.mp4
https://superset.staged.apache.org/superset-video-4k.mp4
<br/>
**Large Gallery of Visualizations**
<kbd><img title="Gallery" src="superset-frontend/src/assets/images/screenshots/gallery.jpg"/></kbd><br/>
<kbd><img title="Gallery" src="https://superset.apache.org/img/screenshots/gallery.jpg"/></kbd><br/>
**Craft Beautiful, Dynamic Dashboards**
<kbd><img title="View Dashboards" src="superset-frontend/src/assets/images/screenshots/slack_dash.jpg"/></kbd><br/>
<kbd><img title="View Dashboards" src="https://superset.apache.org/img/screenshots/slack_dash.jpg"/></kbd><br/>
**No-Code Chart Builder**
<kbd><img title="Slice & dice your data" src="superset-frontend/src/assets/images/screenshots/explore.jpg"/></kbd><br/>
<kbd><img title="Slice & dice your data" src="https://superset.apache.org/img/screenshots/explore.jpg"/></kbd><br/>
**Powerful SQL Editor**
<kbd><img title="SQL Lab" src="superset-frontend/src/assets/images/screenshots/sql_lab.jpg"/></kbd><br/>
<kbd><img title="SQL Lab" src="https://superset.apache.org/img/screenshots/sql_lab.jpg"/></kbd><br/>
## Supported Databases
Superset can query data from any SQL-speaking datastore or data engine (Presto, Trino, Athena, [and more](https://superset.apache.org/docs/databases/installing-database-drivers/)) that has a Python DB-API driver and a SQLAlchemy dialect.
Superset can query data from any SQL-speaking datastore or data engine (Presto, Trino, Athena, [and more](https://superset.apache.org/docs/configuration/databases)) that has a Python DB-API driver and a SQLAlchemy dialect.
Here are some of the major database solutions that are supported:
<p align="center">
<img src="superset-frontend/src/assets/images/redshift.png" alt="redshift" border="0" width="200" height="80"/>
<img src="superset-frontend/src/assets/images/google-biquery.png" alt="google-biquery" border="0" width="200" height="80"/>
<img src="superset-frontend/src/assets/images/snowflake.png" alt="snowflake" border="0" width="200" height="80"/>
<img src="superset-frontend/src/assets/images/trino.png" alt="trino" border="0" width="200" height="80"/>
<img src="superset-frontend/src/assets/images/presto.png" alt="presto" border="0" width="200" height="80"/>
<img src="superset-frontend/src/assets/images/databricks.png" alt="databricks" border="0" width="200" height="80" />
<img src="superset-frontend/src/assets/images/druid.png" alt="druid" border="0" width="200" height="80" />
<img src="superset-frontend/src/assets/images/firebolt.png" alt="firebolt" border="0" width="200" height="80" />
<img src="superset-frontend/src/assets/images/timescale.png" alt="timescale" border="0" width="200" height="80" />
<img src="superset-frontend/src/assets/images/rockset.png" alt="rockset" border="0" width="200" height="80" />
<img src="superset-frontend/src/assets/images/postgresql.png" alt="postgresql" border="0" width="200" height="80" />
<img src="superset-frontend/src/assets/images/mysql.png" alt="mysql" border="0" width="200" height="80" />
<img src="superset-frontend/src/assets/images/mssql-server.png" alt="mssql-server" border="0" width="200" height="80" />
<img src="superset-frontend/src/assets/images/db2.png" alt="db2" border="0" width="200" height="80" />
<img src="superset-frontend/src/assets/images/sqlite.png" alt="sqlite" border="0" width="200" height="80" />
<img src="superset-frontend/src/assets/images/sybase.png" alt="sybase" border="0" width="200" height="80" />
<img src="superset-frontend/src/assets/images/mariadb.png" alt="mariadb" border="0" width="200" height="80" />
<img src="superset-frontend/src/assets/images/vertica.png" alt="vertica" border="0" width="200" height="80" />
<img src="superset-frontend/src/assets/images/oracle.png" alt="oracle" border="0" width="200" height="80" />
<img src="superset-frontend/src/assets/images/firebird.png" alt="firebird" border="0" width="200" height="80" />
<img src="superset-frontend/src/assets/images/greenplum.png" alt="greenplum" border="0" width="200" height="80" />
<img src="superset-frontend/src/assets/images/clickhouse.png" alt="clickhouse" border="0" width="200" height="80" />
<img src="superset-frontend/src/assets/images/exasol.png" alt="exasol" border="0" width="200" height="80" />
<img src="superset-frontend/src/assets/images/monet-db.png" alt="monet-db" border="0" width="200" height="80" />
<img src="superset-frontend/src/assets/images/apache-kylin.png" alt="apache-kylin" border="0" width="200" height="80"/>
<img src="superset-frontend/src/assets/images/hologres.png" alt="hologres" border="0" width="200" height="80"/>
<img src="superset-frontend/src/assets/images/netezza.png" alt="netezza" border="0" width="200" height="80"/>
<img src="superset-frontend/src/assets/images/pinot.png" alt="pinot" border="0" width="200" height="80"/>
<img src="superset-frontend/src/assets/images/teradata.png" alt="teradata" border="0" width="200" height="80"/>
<img src="superset-frontend/src/assets/images/yugabyte.png" alt="yugabyte" border="0" width="200" height="80"/>
<img src="superset-frontend/src/assets/images/databend.png" alt="databend" border="0" width="200" height="80"/>
<img src="superset-frontend/src/assets/images/starrocks.png" alt="starrocks" border="0" width="200" height="80"/>
<img src="superset-frontend/src/assets/images/doris.png" alt="doris" border="0" width="200" height="80"/>
<img src="https://superset.apache.org/img/databases/redshift.png" alt="redshift" border="0" width="200"/>
<img src="https://superset.apache.org/img/databases/google-biquery.png" alt="google-biquery" border="0" width="200"/>
<img src="https://superset.apache.org/img/databases/snowflake.png" alt="snowflake" border="0" width="200"/>
<img src="https://superset.apache.org/img/databases/trino.png" alt="trino" border="0" width="150" />
<img src="https://superset.apache.org/img/databases/presto.png" alt="presto" border="0" width="200"/>
<img src="https://superset.apache.org/img/databases/databricks.png" alt="databricks" border="0" width="160" />
<img src="https://superset.apache.org/img/databases/druid.png" alt="druid" border="0" width="200" />
<img src="https://superset.apache.org/img/databases/firebolt.png" alt="firebolt" border="0" width="200" />
<img src="https://superset.apache.org/img/databases/timescale.png" alt="timescale" border="0" width="200" />
<img src="https://superset.apache.org/img/databases/rockset.png" alt="rockset" border="0" width="200" />
<img src="https://superset.apache.org/img/databases/postgresql.png" alt="postgresql" border="0" width="200" />
<img src="https://superset.apache.org/img/databases/mysql.png" alt="mysql" border="0" width="200" />
<img src="https://superset.apache.org/img/databases/mssql-server.png" alt="mssql-server" border="0" width="200" />
<img src="https://superset.apache.org/img/databases/imb-db2.svg" alt="db2" border="0" width="220" />
<img src="https://superset.apache.org/img/databases/sqlite.png" alt="sqlite" border="0" width="200" />
<img src="https://superset.apache.org/img/databases/sybase.png" alt="sybase" border="0" width="200" />
<img src="https://superset.apache.org/img/databases/mariadb.png" alt="mariadb" border="0" width="200" />
<img src="https://superset.apache.org/img/databases/vertica.png" alt="vertica" border="0" width="200" />
<img src="https://superset.apache.org/img/databases/oracle.png" alt="oracle" border="0" width="200" />
<img src="https://superset.apache.org/img/databases/firebird.png" alt="firebird" border="0" width="200" />
<img src="https://superset.apache.org/img/databases/greenplum.png" alt="greenplum" border="0" width="200" />
<img src="https://superset.apache.org/img/databases/clickhouse.png" alt="clickhouse" border="0" width="200" />
<img src="https://superset.apache.org/img/databases/exasol.png" alt="exasol" border="0" width="160" />
<img src="https://superset.apache.org/img/databases/monet-db.png" alt="monet-db" border="0" width="200" />
<img src="https://superset.apache.org/img/databases/apache-kylin.png" alt="apache-kylin" border="0" width="80"/>
<img src="https://superset.apache.org/img/databases/hologres.png" alt="hologres" border="0" width="80"/>
<img src="https://superset.apache.org/img/databases/netezza.png" alt="netezza" border="0" width="80"/>
<img src="https://superset.apache.org/img/databases/pinot.png" alt="pinot" border="0" width="200" />
<img src="https://superset.apache.org/img/databases/teradata.png" alt="teradata" border="0" width="200" />
<img src="https://superset.apache.org/img/databases/yugabyte.png" alt="yugabyte" border="0" width="200" />
<img src="https://superset.apache.org/img/databases/databend.png" alt="databend" border="0" width="200" />
<img src="https://superset.apache.org/img/databases/starrocks.png" alt="starrocks" border="0" width="200" />
<img src="https://superset.apache.org/img/databases/doris.png" alt="doris" border="0" width="200" />
</p>
**A more comprehensive list of supported databases** along with the configuration instructions can be found [here](https://superset.apache.org/docs/databases/installing-database-drivers).
**A more comprehensive list of supported databases** along with the configuration instructions can be found [here](https://superset.apache.org/docs/configuration/databases).
Want to add support for your datastore or data engine? Read more [here](https://superset.apache.org/docs/frequently-asked-questions#does-superset-work-with-insert-database-engine-here) about the technical requirements.
@ -159,9 +165,9 @@ how to set up a development environment.
## Resources
- [Superset "In the Wild"](RESOURCES/INTHEWILD.md) - open a PR to add your org to the list!
- [Feature Flags](RESOURCES/FEATURE_FLAGS.md) - the status of Superset's Feature Flags.
- [Standard Roles](RESOURCES/STANDARD_ROLES.md) - How RBAC permissions map to roles.
- [Superset "In the Wild"](https://github.com/apache/superset/blob/master/RESOURCES/INTHEWILD.md) - open a PR to add your org to the list!
- [Feature Flags](https://github.com/apache/superset/blob/master/RESOURCES/FEATURE_FLAGS.md) - the status of Superset's Feature Flags.
- [Standard Roles](https://github.com/apache/superset/blob/master/RESOURCES/STANDARD_ROLES.md) - How RBAC permissions map to roles.
- [Superset Wiki](https://github.com/apache/superset/wiki) - Tons of additional community resources: best practices, community content and other information.
- [Superset SIPs](https://github.com/orgs/apache/projects/170) - The status of Superset's SIPs (Superset Improvement Proposals) for both consensus and implementation status.
@ -172,7 +178,7 @@ Understanding the Superset Points of View
- Getting Started with Superset
- [Superset in 2 Minutes using Docker Compose](https://superset.apache.org/docs/installation/docker-compose#installing-superset-locally-using-docker-compose)
- [Installing Database Drivers](https://superset.apache.org/docs/databases/docker-add-drivers/)
- [Installing Database Drivers](https://superset.apache.org/docs/configuration/databases#installing-database-drivers)
- [Building New Database Connectors](https://preset.io/blog/building-database-connector/)
- [Create Your First Dashboard](https://superset.apache.org/docs/using-superset/creating-your-first-dashboard/)
- [Comprehensive Tutorial for Contributing Code to Apache Superset
@ -198,10 +204,10 @@ Understanding the Superset Points of View
- [Superset API](https://superset.apache.org/docs/rest-api)
## Repo Activity
<a href="https://next.ossinsight.io/widgets/official/compose-last-28-days-stats?repo_id=39464018" target="_blank" style="display: block" align="center">
<a href="https://next.ossinsight.io/widgets/official/compose-last-28-days-stats?repo_id=39464018" target="_blank" align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://next.ossinsight.io/widgets/official/compose-last-28-days-stats/thumbnail.png?repo_id=39464018&image_size=auto&color_scheme=dark" width="655" height="auto">
<img alt="Performance Stats of apache/superset - Last 28 days" src="https://next.ossinsight.io/widgets/official/compose-last-28-days-stats/thumbnail.png?repo_id=39464018&image_size=auto&color_scheme=light" width="655" height="auto">
<source media="(prefers-color-scheme: dark)" srcset="https://next.ossinsight.io/widgets/official/compose-last-28-days-stats/thumbnail.png?repo_id=39464018&image_size=auto&color_scheme=dark" width="655" height="auto" />
<img alt="Performance Stats of apache/superset - Last 28 days" src="https://next.ossinsight.io/widgets/official/compose-last-28-days-stats/thumbnail.png?repo_id=39464018&image_size=auto&color_scheme=light" width="655" height="auto" />
</picture>
</a>

View File

@ -506,3 +506,18 @@ and re-push the proper images and tags through this interface. The action
takes the version (ie `3.1.1`), the git reference (any SHA, tag or branch
reference), and whether to force the `latest` Docker tag on the
generated images.
### Npm Release
You might want to publish the latest @superset-ui release to npm
```bash
cd superset/superset-frontend
```
An automated GitHub action will run and generate a new tag, which will contain a version number provided as a parameter.
```bash
export GH_TOKEN={GITHUB_TOKEN}
npx lerna version {VERSION} --conventional-commits --create-release github --no-private --yes --message {COMMIT_MESSAGE}
```
This action will publish the specified version to npm registry.
```bash
npx lerna publish from-package --yes
```

View File

@ -46,7 +46,7 @@ These features are **finished** but currently being tested. They are usable, but
- CONFIRM_DASHBOARD_DIFF
- DRILL_TO_DETAIL
- DYNAMIC_PLUGINS: [(docs)](https://superset.apache.org/docs/configuration/running-on-kubernetes)
- ENABLE_SUPERSET_META_DB: [(docs)](https://superset.apache.org/docs/databases/meta-database/)
- ENABLE_SUPERSET_META_DB: [(docs)]()
- ESTIMATE_QUERY_COST
- GLOBAL_ASYNC_QUERIES [(docs)](https://github.com/apache/superset/blob/master/CONTRIBUTING.md#async-chart-queries)
- HORIZONTAL_FILTER_BAR

View File

@ -43,6 +43,10 @@ assists people when migrating to a new version.
set `SLACK_API_TOKEN` to fetch and serve Slack avatar links
- [28134](https://github.com/apache/superset/pull/28134/) The default logging level was changed
from DEBUG to INFO - which is the normal/sane default logging level for most software.
- [28205](https://github.com/apache/superset/pull/28205) The permission `all_database_access` now
more clearly provides access to all databases, as specified in its name. Before, it only allowed
listing all databases in the CRUD view and dropdown, and didn't provide access to the data itself,
as the name would seem to imply.
## 4.0.0

databases/trino.png — new binary file (10 KiB), not shown.

View File

@ -39,7 +39,6 @@ x-common-build: &common-build
cache_from:
- apache/superset-cache:3.10-slim-bookworm
version: "4.0"
services:
nginx:
image: nginx:latest
@ -94,12 +93,11 @@ services:
depends_on: *superset-depends-on
volumes: *superset-volumes
environment:
CYPRESS_CONFIG: "${CYPRESS_CONFIG}"
CYPRESS_CONFIG: "${CYPRESS_CONFIG:-}"
superset-websocket:
container_name: superset_websocket
build: ./superset-websocket
image: superset-websocket
ports:
- 8080:8080
extra_hosts:
@ -122,7 +120,7 @@ services:
- /home/superset-websocket/dist
# Mounting a config file that contains a dummy secret required to boot up.
# do no not use this docker-compose in production
# do not use this docker-compose in production
- ./docker/superset-websocket/config.json:/home/superset-websocket/config.json
environment:
- PORT=8080
@ -144,7 +142,7 @@ services:
user: *superset-user
volumes: *superset-volumes
environment:
CYPRESS_CONFIG: "${CYPRESS_CONFIG}"
CYPRESS_CONFIG: "${CYPRESS_CONFIG:-}"
healthcheck:
disable: true
@ -153,7 +151,7 @@ services:
environment:
# set this to false if you have perf issues running the npm i; npm run dev in-docker
# if you do so, you have to run this manually on the host, which should perform better!
SCARF_ANALYTICS: "${SCARF_ANALYTICS}"
SCARF_ANALYTICS: "${SCARF_ANALYTICS:-}"
container_name: superset_node
command: ["/app/docker/docker-frontend.sh"]
env_file:

3
docs/.gitignore vendored
View File

@ -20,3 +20,6 @@ yarn-debug.log*
yarn-error.log*
docs/.zshrc
# Gets copied from the root of the project at build time (yarn start / yarn build)
docs/intro.md

View File

@ -5,7 +5,7 @@ sidebar_position: 2
version: 2
---
## Alerts and Reports
# Alerts and Reports
Users can configure automated alerts and reports to send dashboards or charts to an email recipient or Slack channel.
@ -14,11 +14,11 @@ Users can configure automated alerts and reports to send dashboards or charts to
Alerts and reports are disabled by default. To turn them on, you need to do some setup, described here.
### Requirements
## Requirements
#### Commons
### Commons
##### In your `superset_config.py` or `superset_config_docker.py`
#### In your `superset_config.py` or `superset_config_docker.py`
- `"ALERT_REPORTS"` [feature flag](/docs/configuration/configuring-superset#feature-flags) must be turned to True.
- `beat_schedule` in CeleryConfig must contain schedule for `reports.scheduler`.
@ -26,11 +26,11 @@ Alerts and reports are disabled by default. To turn them on, you need to do some
- emails: `SMTP_*` settings
- Slack messages: `SLACK_API_TOKEN`
###### Disable dry-run mode
##### Disable dry-run mode
Screenshots will be taken but no messages actually sent as long as `ALERT_REPORTS_NOTIFICATION_DRY_RUN = True`, its default value in `docker/pythonpath_dev/superset_config.py`. To disable dry-run mode and start receiving email/Slack notifications, set `ALERT_REPORTS_NOTIFICATION_DRY_RUN` to `False` in [superset config](https://github.com/apache/superset/blob/master/docker/pythonpath_dev/superset_config.py).
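Taken together, the requirements above typically translate into a `superset_config.py` along these lines — a minimal sketch with placeholder hosts, credentials, and schedules; adjust to your deployment:

```python
from celery.schedules import crontab

FEATURE_FLAGS = {"ALERT_REPORTS": True}

REDIS_HOST = "superset_cache"  # placeholder broker host
REDIS_PORT = 6379

class CeleryConfig:
    broker_url = f"redis://{REDIS_HOST}:{REDIS_PORT}/0"
    result_backend = f"redis://{REDIS_HOST}:{REDIS_PORT}/0"
    imports = ("superset.sql_lab", "superset.tasks.scheduler")
    beat_schedule = {
        # Required: the scheduler that triggers alerts & reports.
        "reports.scheduler": {
            "task": "reports.scheduler",
            "schedule": crontab(minute="*", hour="*"),
        },
        "reports.prune_log": {
            "task": "reports.prune_log",
            "schedule": crontab(minute=0, hour=0),
        },
    }

CELERY_CONFIG = CeleryConfig

# Email delivery (SMTP_* settings) — placeholder values
SMTP_HOST = "smtp.example.com"
SMTP_PORT = 587
SMTP_STARTTLS = True
SMTP_USER = "superset"
SMTP_PASSWORD = "change-me"
SMTP_MAIL_FROM = "superset@example.com"

# Slack delivery — placeholder token
SLACK_API_TOKEN = "xoxb-..."

# Flip to False to actually send notifications (see "Disable dry-run mode" above).
ALERT_REPORTS_NOTIFICATION_DRY_RUN = False
```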
##### In your `Dockerfile`
#### In your `Dockerfile`
- You must install a headless browser, for taking screenshots of the charts and dashboards. Only Firefox and Chrome are currently supported.
> If you choose Chrome, you must also change the value of `WEBDRIVER_TYPE` to `"chrome"` in your `superset_config.py`.
@ -43,7 +43,7 @@ You can either install and configure the headless browser - see "Custom Dockerfi
*Note*: In this context, a "dev image" is the same application software as its corresponding non-dev image, just bundled with additional tools. So an image like `3.1.0-dev` is identical to `3.1.0` when it comes to stability, functionality, and running in production. The actual "in-development" versions of Superset - cutting-edge and unstable - are not tagged with version numbers on Docker Hub and will display version `0.0.0-dev` within the Superset UI.
#### Slack integration
### Slack integration
To send alerts and reports to Slack channels, you need to create a new Slack Application on your workspace.
@ -61,14 +61,14 @@ To send alerts and reports to Slack channels, you need to create a new Slack App
Note: when you configure an alert or a report, the Slack channel list takes channel names without the leading '#' e.g. use `alerts` instead of `#alerts`.
#### Kubernetes-specific
### Kubernetes-specific
- You must have a `celery beat` pod running. If you're using the chart included in the GitHub repository under [helm/superset](https://github.com/apache/superset/tree/master/helm/superset), you need to put `supersetCeleryBeat.enabled = true` in your values override.
- You can see the dedicated docs about [Kubernetes installation](/docs/installation/kubernetes) for more details.
#### Docker Compose specific
### Docker Compose specific
##### You must have in your `docker-compose.yml`
#### You must have in your `docker-compose.yml`
- A Redis message broker
- PostgreSQL DB instead of SQLite
@ -195,14 +195,14 @@ Please refer to `ExecutorType` in the codebase for other executor types.
its default value of `http://0.0.0.0:8080/`.
### Custom Dockerfile
## Custom Dockerfile
If you're running the dev version of a released Superset image, like `apache/superset:3.1.0-dev`, you should be set with the above.
But if you're building your own image, or starting with a non-dev version, a webdriver (and headless browser) is needed to capture screenshots of the charts and dashboards which are then sent to the recipient.
Here's how you can modify your Dockerfile to take the screenshots either with Firefox or Chrome.
#### Using Firefox
### Using Firefox
```docker
FROM apache/superset:3.1.0
@ -223,7 +223,7 @@ RUN pip install --no-cache gevent psycopg2 redis
USER superset
```
#### Using Chrome
### Using Chrome
```docker
FROM apache/superset:3.1.0
@ -248,21 +248,21 @@ USER superset
Don't forget to set `WEBDRIVER_TYPE` and `WEBDRIVER_OPTION_ARGS` in your config if you use Chrome.
### Troubleshooting
## Troubleshooting
There are many reasons that reports might not be working. Try these steps to check for specific issues.
#### Confirm feature flag is enabled and you have sufficient permissions
### Confirm feature flag is enabled and you have sufficient permissions
If you don't see "Alerts & Reports" under the *Manage* section of the Settings dropdown in the Superset UI, you need to enable the `ALERT_REPORTS` feature flag (see above). Enable another feature flag and check to see that it took effect, to verify that your config file is getting loaded.
Log in as an admin user to ensure you have adequate permissions.
#### Check the logs of your Celery worker
### Check the logs of your Celery worker
This is the best source of information about the problem. In a docker compose deployment, you can do this with a command like `docker logs superset_worker --since 1h`.
#### Check web browser and webdriver installation
### Check web browser and webdriver installation
To take a screenshot, the worker visits the dashboard or chart using a headless browser, then takes a screenshot. If you are able to send a chart as CSV or text but can't send as PNG, your problem may lie with the browser.
@ -270,7 +270,7 @@ Superset docker images that have a tag ending with `-dev` have the Firefox headl
If you are handling the installation of that software on your own, or wish to use Chromium instead, do your own verification to ensure that the headless browser opens successfully in the worker environment.
#### Send a test email
### Send a test email
One symptom of an invalid connection to an email server is receiving an error of `[Errno 110] Connection timed out` in your logs when the report tries to send.
@ -301,7 +301,7 @@ Possible fixes:
- Some cloud hosts disable outgoing unauthenticated SMTP email to prevent spam. For instance, [Azure blocks port 25 by default on some machines](https://learn.microsoft.com/en-us/azure/virtual-network/troubleshoot-outbound-smtp-connectivity). Enable that port or use another sending method.
- Use another set of SMTP credentials that you verify works in this setup.
#### Browse to your report from the worker
### Browse to your report from the worker
The worker may be unable to reach the report. It will use the value of `WEBDRIVER_BASEURL` to browse to the report. If that route is invalid, or presents an authentication challenge that the worker can't pass, the report screenshot will fail.
@ -309,7 +309,7 @@ Check this by attempting to `curl` the URL of a report that you see in the error
In a deployment with authentication measures enabled like HTTPS and Single Sign-On, it may make sense to have the worker navigate directly to the Superset application running in the same location, avoiding the need to sign in. For instance, you could use `WEBDRIVER_BASEURL="http://superset_app:8088"` for a docker compose deployment, and set `"force_https": False,` in your `TALISMAN_CONFIG`.
### Scheduling Queries as Reports
## Scheduling Queries as Reports
You can optionally allow your users to schedule queries directly in SQL Lab. This is done by adding
extra metadata to saved queries, which are then picked up by an external scheduler (like

View File

@ -5,9 +5,9 @@ sidebar_position: 4
version: 1
---
## Async Queries via Celery
# Async Queries via Celery
### Celery
## Celery
On large analytic databases, it's common to run queries that execute for minutes or hours. To enable
support for long running queries that execute beyond the typical web requests timeout (30-60
@ -89,7 +89,7 @@ issues arise. Please clear your existing results cache store when upgrading an e
- SQL Lab will _only run your queries asynchronously if_ you enable **Asynchronous Query Execution**
in your database settings (Sources > Databases > Edit record).
### Celery Flower
## Celery Flower
Flower is a web based tool for monitoring the Celery cluster which you can install from pip:

View File

@ -5,7 +5,7 @@ sidebar_position: 3
version: 1
---
## Caching
# Caching
Superset uses [Flask-Caching](https://flask-caching.readthedocs.io/) for caching purposes.
Flask-Caching supports various caching backends, including Redis (recommended), Memcached,
@ -33,7 +33,7 @@ FILTER_STATE_CACHE_CONFIG = {
}
```
### Dependencies
## Dependencies
In order to use dedicated cache stores, additional python libraries must be installed
@ -43,7 +43,7 @@ In order to use dedicated cache stores, additional python libraries must be inst
These libraries can be installed using pip.
### Fallback Metastore Cache
## Fallback Metastore Cache
Note that some form of Filter State and Explore caching is required. If either of these caches
is undefined, Superset falls back to using a built-in cache that stores data in the metadata
@ -60,7 +60,7 @@ DATA_CACHE_CONFIG = {
}
```
### Chart Cache Timeout
## Chart Cache Timeout
The cache timeout for charts may be overridden by the settings for an individual chart, dataset, or
database. Each of these configurations will be checked in order before falling back to the default
@ -69,7 +69,7 @@ value defined in `DATA_CACHE_CONFIG`.
Note that by setting the cache timeout to `-1`, caching for charting data can be disabled, either
per chart, dataset, or database, or by default if set in `DATA_CACHE_CONFIG`.
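For reference, a Redis-backed `DATA_CACHE_CONFIG` with an explicit default timeout might look like the sketch below; the URL and timeout are placeholders, and backend-specific keys follow Flask-Caching conventions:

```python
DATA_CACHE_CONFIG = {
    "CACHE_TYPE": "RedisCache",
    "CACHE_DEFAULT_TIMEOUT": 60 * 60 * 24,  # one day; -1 disables caching
    "CACHE_KEY_PREFIX": "superset_data_",
    "CACHE_REDIS_URL": "redis://localhost:6379/1",  # placeholder Redis instance
}
```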
### SQL Lab Query Results
## SQL Lab Query Results
Caching for SQL Lab query results is used when async queries are enabled and is configured using
`RESULTS_BACKEND`.
@ -79,7 +79,7 @@ instead requires a cachelib object.
See [Async Queries via Celery](/docs/configuration/async-queries-celery) for details.
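Unlike the other cache configs, `RESULTS_BACKEND` takes a cachelib object rather than a config dictionary. A Redis-backed sketch, with placeholder host, port, and prefix:

```python
from cachelib.redis import RedisCache

# Stores SQL Lab query results when asynchronous query execution is enabled.
RESULTS_BACKEND = RedisCache(
    host="localhost",
    port=6379,
    key_prefix="superset_results",
)
```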
### Caching Thumbnails
## Caching Thumbnails
This is an optional feature that can be turned on by activating its [feature flag](/docs/configuration/configuring-superset#feature-flags) on config:

View File

@ -5,28 +5,40 @@ sidebar_position: 1
version: 1
---
## Configuring Superset
# Configuring Superset
### Configuration
## superset_config.py
To configure your application, you need to create a file `superset_config.py`. Add this file to your
Superset exposes hundreds of configurable parameters through its
[config.py module](https://github.com/apache/superset/blob/master/superset/config.py). The
variables and objects exposed act as a public interface of the bulk of what you may want
to configure, alter and interface with. In this python module, you'll find all these
parameters, sensible defaults, as well as rich documentation in the form of comments
`PYTHONPATH` or create an environment variable `SUPERSET_CONFIG_PATH` specifying the full path of the `superset_config.py`.
To configure your application, you need to create your own configuration module, which
will allow you to override a few or many of these parameters. Instead of altering the core module,
you'll want to define your own module (typically a file named `superset_config.py`).
Add this file to your `PYTHONPATH` or create an environment variable
`SUPERSET_CONFIG_PATH` specifying the full path of the `superset_config.py`.
For example, if deploying on Superset directly on a Linux-based system where your `superset_config.py` is under `/app` directory, you can run:
For example, if deploying on Superset directly on a Linux-based system where your
`superset_config.py` is under `/app` directory, you can run:
```bash
export SUPERSET_CONFIG_PATH=/app/superset_config.py
```
If you are using your own custom Dockerfile with official Superset image as base image, then you can add your overrides as shown below:
If you are using your own custom Dockerfile with official Superset image as base image,
then you can add your overrides as shown below:
```bash
COPY --chown=superset superset_config.py /app/
ENV SUPERSET_CONFIG_PATH /app/superset_config.py
```
Docker compose deployments handle application configuration differently. See [https://github.com/apache/superset/tree/master/docker#readme](https://github.com/apache/superset/tree/master/docker#readme) for details.
Docker Compose deployments handle application configuration differently, using specific conventions.
Refer to the [docker-compose tips & configuration](/docs/installation/docker-compose#docker-compose-tips--configuration)
for details.
The following is an example of just a few of the parameters you can set in your `superset_config.py` file:
@ -63,33 +75,39 @@ WTF_CSRF_TIME_LIMIT = 60 * 60 * 24 * 365
MAPBOX_API_KEY = ''
```
All the parameters and default values defined in
[https://github.com/apache/superset/blob/master/superset/config.py](https://github.com/apache/superset/blob/master/superset/config.py)
:::tip
Note that it is typical to copy and paste [only] the portions of the
core [superset/config.py](https://github.com/apache/superset/blob/master/superset/config.py) that
you want to alter, along with the related comments into your own `superset_config.py` file.
:::
All the parameters and default values defined
in [superset/config.py](https://github.com/apache/superset/blob/master/superset/config.py)
can be altered in your local `superset_config.py`. Administrators will want to read through the file
to understand what can be configured locally as well as the default values in place.
Since `superset_config.py` acts as a Flask configuration module, it can be used to alter the
settings Flask itself, as well as Flask extensions like `flask-wtf`, `flask-caching`, `flask-migrate`,
and `flask-appbuilder`. Flask App Builder, the web framework used by Superset, offers many
settings of Flask itself, as well as Flask extensions that Superset bundles, like
`flask-wtf`, `flask-caching`, `flask-migrate`,
and `flask-appbuilder`. Each one of these extensions offers intricate configurability.
Flask App Builder, the web framework used by Superset, also offers many
configuration settings. Please consult the
[Flask App Builder Documentation](https://flask-appbuilder.readthedocs.org/en/latest/config.html)
for more information on how to configure it.
Make sure to change:
You'll want to change:
- `SQLALCHEMY_DATABASE_URI`: by default it is stored at ~/.superset/superset.db
- `SECRET_KEY`: to a long random string
If you need to exempt endpoints from CSRF (e.g. if you are running a custom auth postback endpoint),
you can add the endpoints to `WTF_CSRF_EXEMPT_LIST`:
- `SQLALCHEMY_DATABASE_URI`: which by default points to a SQLite database located at
~/.superset/superset.db
```
WTF_CSRF_EXEMPT_LIST = []
```
### Specifying a SECRET_KEY
## Specifying a SECRET_KEY
#### Adding an initial SECRET_KEY
### Adding an initial SECRET_KEY
Superset requires a user-specified SECRET_KEY to start up. This requirement was [added in version 2.1.0 to force secure configurations](https://preset.io/blog/superset-security-update-default-secret_key-vulnerability/). Add a strong SECRET_KEY to your `superset_config.py` file like:
@ -104,7 +122,7 @@ This key will be used for securely signing session cookies and encrypting sensit
Your deployment must use a complex, unique key.
:::
#### Rotating to a newer SECRET_KEY
### Rotating to a newer SECRET_KEY
If you wish to change your existing SECRET_KEY, add the existing SECRET_KEY to your `superset_config.py` file as
`PREVIOUS_SECRET_KEY = ` and provide your new key as `SECRET_KEY =`. You can find your current SECRET_KEY with these
@ -117,9 +135,13 @@ from flask import current_app; print(current_app.config["SECRET_KEY"])
Save your `superset_config.py` with these values and then run `superset re-encrypt-secrets`.
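Put concretely, the two rotation entries in `superset_config.py` look like this sketch; both values are placeholders:

```python
# superset_config.py — SECRET_KEY rotation (placeholder values)
PREVIOUS_SECRET_KEY = "old-key-currently-in-use"  # the key Superset was started with
SECRET_KEY = "a-new-long-random-string"           # the key to rotate to
```

With both keys in place, `superset re-encrypt-secrets` re-encrypts stored secrets under the new key.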
### Using a production metastore
## Setting up a production metadata database
By default, Superset is configured to use SQLite, which is a simple and fast way to get started
Superset needs a database to store the information it manages, like the definitions of
charts, dashboards, and many other things.
By default, Superset is configured to use [SQLite](https://www.sqlite.org/),
a self-contained, single-file database that offers a simple and fast way to get started
(without requiring any installation). However, for production environments,
using SQLite is highly discouraged due to security, scalability, and data integrity reasons.
It's important to use only the supported database engines and consider using a different
@ -139,10 +161,17 @@ Use the following database drivers and connection strings:
| [PostgreSQL](https://www.postgresql.org/) | `pip install psycopg2` | `postgresql://<UserName>:<DBPassword>@<Database Host>/<Database Name>` |
| [MySQL](https://www.mysql.com/) | `pip install mysqlclient` | `mysql://<UserName>:<DBPassword>@<Database Host>/<Database Name>` |
:::tip
Properly setting up the metadata store is beyond the scope of this documentation. We recommend
using a hosted managed service such as [Amazon RDS](https://aws.amazon.com/rds/) or
[Google Cloud Databases](https://cloud.google.com/products/databases?hl=en) to handle
the service, supporting infrastructure, and backup strategy.
:::
To configure the Superset metastore, set the `SQLALCHEMY_DATABASE_URI` config key in `superset_config.py`
to the appropriate connection string.
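For instance, pointing the metastore at PostgreSQL (placeholder credentials and host, following the driver table above):

```python
# superset_config.py — metadata database connection (placeholder values)
SQLALCHEMY_DATABASE_URI = "postgresql://superset:superset@db.example.com:5432/superset"
```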
### Running on a WSGI HTTP Server
## Running on a WSGI HTTP Server
While you can run Superset on NGINX or Apache, we recommend using Gunicorn in async mode. This
enables impressive concurrency and is fairly easy to install and configure. Please refer to the
@ -171,12 +200,12 @@ If you're not using Gunicorn, you may want to disable the use of `flask-compress
Currently, the Google BigQuery Python SDK is not compatible with `gevent`, due to some dynamic monkeypatching of the Python core library by `gevent`.
So, when you use a `BigQuery` datasource on Superset, you have to use a `gunicorn` worker type other than `gevent`.
### HTTPS Configuration
## HTTPS Configuration
You can configure HTTPS upstream via a load balancer or a reverse proxy (such as nginx) and do SSL/TLS Offloading before traffic reaches the Superset application. In this setup, local traffic from a Celery worker taking a snapshot of a chart for Alerts & Reports can access Superset at a `http://` URL, from behind the ingress point.
You can also configure [SSL in Gunicorn](https://docs.gunicorn.org/en/stable/settings.html#ssl) (the Python webserver) if you are using an official Superset Docker image.
### Configuration Behind a Load Balancer
## Configuration Behind a Load Balancer
If you are running superset behind a load balancer or reverse proxy (e.g. NGINX or ELB on AWS), you
may need to utilize a healthcheck endpoint so that your load balancer knows if your superset
@ -194,7 +223,7 @@ In case the reverse proxy is used for providing SSL encryption, an explicit defi
RequestHeader set X-Forwarded-Proto "https"
```
### Custom OAuth2 Configuration
## Custom OAuth2 Configuration
Superset is built on Flask-AppBuilder (FAB), which supports many providers out of the box
(GitHub, Twitter, LinkedIn, Google, Azure, etc). Beyond those, Superset can be configured to connect
@ -293,19 +322,19 @@ CUSTOM_SECURITY_MANAGER = CustomSsoSecurityManager
]
```
### LDAP Authentication
## LDAP Authentication
FAB supports authenticating user credentials against an LDAP server.
To use LDAP you must install the [python-ldap](https://www.python-ldap.org/en/latest/installing.html) package.
See [FAB's LDAP documentation](https://flask-appbuilder.readthedocs.io/en/latest/security.html#authentication-ldap)
for details.
### Mapping LDAP or OAUTH groups to Superset roles
## Mapping LDAP or OAUTH groups to Superset roles
AUTH_ROLES_MAPPING in Flask-AppBuilder is a dictionary that maps from LDAP/OAUTH group names to FAB roles.
It is used to assign roles to users who authenticate using LDAP or OAuth.
#### Mapping OAUTH groups to Superset roles
### Mapping OAUTH groups to Superset roles
The following `AUTH_ROLES_MAPPING` dictionary would map the OAUTH group "superset_users" to the Superset roles "Gamma" as well as "Alpha", and the OAUTH group "superset_admins" to the Superset role "Admin".
@ -315,7 +344,7 @@ AUTH_ROLES_MAPPING = {
"superset_admins": ["Admin"],
}
```
#### Mapping LDAP groups to Superset roles
### Mapping LDAP groups to Superset roles
The following `AUTH_ROLES_MAPPING` dictionary would map the LDAP DN "cn=superset_users,ou=groups,dc=example,dc=com" to the Superset roles "Gamma" as well as "Alpha", and the LDAP DN "cn=superset_admins,ou=groups,dc=example,dc=com" to the Superset role "Admin".
@ -327,11 +356,11 @@ AUTH_ROLES_MAPPING = {
```
Note: This requires `AUTH_LDAP_SEARCH` to be set. For more details, please see the [FAB Security documentation](https://flask-appbuilder.readthedocs.io/en/latest/security.html).
#### Syncing roles at login
### Syncing roles at login
You can also use the `AUTH_ROLES_SYNC_AT_LOGIN` configuration variable to control how often Flask-AppBuilder syncs the user's roles with the LDAP/OAUTH groups. If `AUTH_ROLES_SYNC_AT_LOGIN` is set to True, Flask-AppBuilder will sync the user's roles each time they log in. If `AUTH_ROLES_SYNC_AT_LOGIN` is set to False, Flask-AppBuilder will only sync the user's roles when they first register.
### Flask app Configuration Hook
## Flask app Configuration Hook
`FLASK_APP_MUTATOR` is a configuration function that can be provided in your environment, receives
the app object and can alter it in any way. For example, add `FLASK_APP_MUTATOR` into your
@ -354,7 +383,7 @@ def FLASK_APP_MUTATOR(app: Flask) -> None:
app.before_request_funcs.setdefault(None, []).append(make_session_permanent)
```
### Feature Flags
## Feature Flags
To support a diverse set of users, Superset has some features that are not enabled by default. For
example, some users have stronger security restrictions, while some others may not. So Superset

View File

@ -1,13 +1,12 @@
---
title: Country Map Tools
hide_title: true
sidebar_position: 10
version: 1
---
import countriesData from '../../data/countries.json';
## The Country Map Visualization
# The Country Map Visualization
The Country Map visualization allows you to plot lightweight choropleth maps of
your countries by province, states, or other subdivision types. It does not rely

File diff suppressed because it is too large.

View File

@ -1,18 +1,18 @@
---
title: Event Logging
hide_title: true
sidebar_position: 9
version: 1
---
## Logging
# Logging
### Event Logging
## Event Logging
Superset by default logs special action events in its internal database (DBEventLogger). These logs can be accessed
on the UI by navigating to **Security > Action Log**. You can freely customize these logs by
implementing your own event log class.
**When custom log class is enabled DBEventLogger is disabled and logs stop being populated in UI logs view.**
**When a custom log class is enabled, DBEventLogger is disabled and logs
stop being populated in the UI logs view.**
To achieve both, the custom log class should extend the built-in DBEventLogger log class.
Here's an example of a simple JSON-to-stdout class:
@ -44,9 +44,10 @@ End by updating your config to pass in an instance of the logger you want to use
EVENT_LOGGER = JSONStdOutEventLogger()
```
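The docs' full example is not reproduced in this hunk; as a rough illustration, a JSON-to-stdout logger extending `DBEventLogger` could look like the sketch below. The import path and the exact `log()` signature are assumptions and should be verified against `superset.utils.log` for your version:

```python
import json
import sys

from superset.utils.log import DBEventLogger


class JSONStdOutEventLogger(DBEventLogger):
    """Print each action event as a JSON line, then persist it as usual."""

    def log(self, user_id, action, *args, **kwargs):
        record = {"user_id": user_id, "action": action, **kwargs}
        print(json.dumps(record, default=str), file=sys.stdout)
        # Keep the built-in behavior so events still appear under Security > Action Log.
        super().log(user_id, action, *args, **kwargs)
```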
### StatsD Logging
## StatsD Logging
Superset can be instrumented to log events to StatsD if desired. Most endpoints hit are logged as
Superset can be configured to log events to [StatsD](https://github.com/statsd/statsd)
if desired. Most endpoints hit are logged as
well as key events like query start and end in SQL Lab.
To set up StatsD logging, it's a matter of configuring the logger in your `superset_config.py`.
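That configuration is typically a couple of lines, sketched below; host, port, and prefix are placeholders, and the `StatsdStatsLogger` helper is assumed to live in `superset.stats_logger`:

```python
from superset.stats_logger import StatsdStatsLogger

# Emit counters and timers (endpoint hits, SQL Lab query start/stop) to a StatsD daemon.
STATS_LOGGER = StatsdStatsLogger(host="localhost", port=8125, prefix="superset")
```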

View File

@ -5,7 +5,7 @@ sidebar_position: 11
version: 1
---
## Importing and Exporting Datasources
# Importing and Exporting Datasources
The superset cli allows you to import and export datasources from and to YAML. Datasources include
databases. The data is expected to be organized in the following hierarchy:
@ -26,7 +26,7 @@ databases. The data is expected to be organized in the following hierarchy:
| └── ... (more databases)
```
### Exporting Datasources to YAML
## Exporting Datasources to YAML
You can print your current datasources to stdout by running:
@ -61,7 +61,7 @@ superset export_datasource_schema
As a reminder, you can use the `-b` flag to include back references.
### Importing Datasources
## Importing Datasources
In order to import datasources from a ZIP file, run:
@ -75,9 +75,9 @@ The optional username flag **-u** sets the user used for the datasource import.
superset import_datasources -p <path / filename> -u 'admin'
```
### Legacy Importing Datasources
## Legacy Importing Datasources
#### From older versions of Superset to current version
### From older versions of Superset to current version
When using Superset version 4.x.x to import from an older version (2.x.x or 3.x.x), importing is supported via the `legacy_import_datasources` command, which expects a JSON file or a directory of JSONs. The options are `-r` for recursive and `-u` for specifying a user. Example of legacy import without options:
@ -85,7 +85,7 @@ When using Superset version 4.x.x to import from an older version (2.x.x or 3.x.
superset legacy_import_datasources -p <path or filename>
```
#### From older versions of Superset to older versions
### From older versions of Superset to older versions
When using an older version of Superset (2.x.x or 3.x.x), the command is `import_datasources`. ZIP and YAML files are supported, and the feature flag `VERSIONED_EXPORT` is used to switch between them. When `VERSIONED_EXPORT` is `True`, `import_datasources` expects a ZIP file, otherwise YAML. Example:

View File

@ -1,13 +1,12 @@
---
title: Additional Networking Settings
hide_title: true
title: Network and Security Settings
sidebar_position: 7
version: 1
---
## Additional Networking Settings
# Network and Security Settings
### CORS
## CORS
To configure CORS, or cross-origin resource sharing, the following dependency must be installed:
@ -21,7 +20,37 @@ The following keys in `superset_config.py` can be specified to configure CORS:
- `CORS_OPTIONS`: options passed to Flask-CORS
([documentation](https://flask-cors.corydolphin.com/en/latest/api.html#extension))
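A minimal sketch of the corresponding `superset_config.py` entries, assuming the `ENABLE_CORS` flag alongside `CORS_OPTIONS`; the origin is a placeholder:

```python
ENABLE_CORS = True
CORS_OPTIONS = {
    "supports_credentials": True,
    "allow_headers": ["*"],
    "resources": ["*"],
    "origins": ["https://dashboards.example.com"],  # placeholder origin
}
```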
### Domain Sharding
## HTTP headers
Note that Superset bundles [flask-talisman](https://pypi.org/project/talisman/), self-described
as a small Flask extension that handles setting HTTP headers that can help
protect against a few common web application security issues.
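Its behavior is driven from `superset_config.py`; a sketch assuming the `TALISMAN_ENABLED` and `TALISMAN_CONFIG` keys from `superset/config.py`, with illustrative values:

```python
TALISMAN_ENABLED = True
TALISMAN_CONFIG = {
    "content_security_policy": None,  # provide a CSP dict to emit CSP headers
    "force_https": False,             # keep False when TLS terminates at a proxy
}
```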
## CSRF settings
Similarly, [flask-wtf](https://flask-wtf.readthedocs.io/en/0.15.x/config/) is used to manage
some CSRF configurations. If you need to exempt endpoints from CSRF (e.g. if you are
running a custom auth postback endpoint), you can add the endpoints to `WTF_CSRF_EXEMPT_LIST`:
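For example — the listed endpoint is illustrative; entries are dotted "module.view_function" paths:

```python
WTF_CSRF_EXEMPT_LIST = [
    "superset.views.core.log",  # illustrative entry
]
```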
## SSH Tunneling
1. Turn on feature flag
- Change [`SSH_TUNNELING`](https://github.com/apache/superset/blob/eb8386e3f0647df6d1bbde8b42073850796cc16f/superset/config.py#L489) to `True` (see the sketch after this list)
- If you want to add more security when establishing the tunnel, we allow users to overwrite the `SSHTunnelManager` class [here](https://github.com/apache/superset/blob/eb8386e3f0647df6d1bbde8b42073850796cc16f/superset/config.py#L507)
- You can also set the [`SSH_TUNNEL_LOCAL_BIND_ADDRESS`](https://github.com/apache/superset/blob/eb8386e3f0647df6d1bbde8b42073850796cc16f/superset/config.py#L508); this is the host address where the tunnel will be accessible on your VPC
2. Create a database with SSH tunnel enabled
- With the feature flag enabled, you should now see the SSH tunnel toggle.
- Click the toggle to enable SSH tunneling and add your credentials accordingly.
- Superset allows two different types of authentication (Basic + Private Key). These credentials should come from your service provider.
3. Verify data is flowing
- Once SSH tunneling has been enabled, go to SQL Lab and write a query to verify data is properly flowing.
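Enabling the flag from step 1 in `superset_config.py` can look like this minimal sketch; the bind address is optional and shown with a placeholder value:

```python
FEATURE_FLAGS = {
    "SSH_TUNNELING": True,
}
# Optional: where the local end of the tunnel binds (see superset/config.py for the default).
SSH_TUNNEL_LOCAL_BIND_ADDRESS = "127.0.0.1"
```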
## Domain Sharding
Chrome allows up to 6 open connections per domain at a time. When there are more than 6 slices in
a dashboard, a lot of fetch requests are queued up waiting for the next available socket.
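As a sketch, assuming shard hostnames that all resolve to the same Superset deployment, the relevant `superset_config.py` keys look like:

```python
# Hypothetical shard hostnames; all must point at the same Superset deployment.
SUPERSET_WEBSERVER_DOMAINS = [
    "superset-1.mydomain.com",
    "superset-2.mydomain.com",
    "superset-3.mydomain.com",
]
# Needed so the session cookie is shared across the shard domains.
SESSION_COOKIE_DOMAIN = ".mydomain.com"
```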
@ -42,7 +71,7 @@ or add the following setting in your `superset_config.py` file if domain shards
- `SESSION_COOKIE_DOMAIN = '.mydomain.com'`
### Middleware
## Middleware
Superset allows you to add your own middleware. To add your own middleware, update the
`ADDITIONAL_MIDDLEWARE` key in your `superset_config.py`. `ADDITIONAL_MIDDLEWARE` should be a list
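As a sketch, a hypothetical WSGI middleware class could be registered like this:

```python
# superset_config.py sketch; RequestTaggingMiddleware is a hypothetical example.
class RequestTaggingMiddleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        environ["myapp.request_tagged"] = True  # hypothetical marker
        return self.app(environ, start_response)

ADDITIONAL_MIDDLEWARE = [RequestTaggingMiddleware]
```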

View File

@ -4,18 +4,3 @@ hide_title: true
sidebar_position: 8
version: 1
---
## SSH Tunneling
1. Turn on feature flag
- Change [`SSH_TUNNELING`](https://github.com/apache/superset/blob/eb8386e3f0647df6d1bbde8b42073850796cc16f/superset/config.py#L489) to `True`
- If you want to add more security when establishing the tunnel we allow users to overwrite the `SSHTunnelManager` class [here](https://github.com/apache/superset/blob/eb8386e3f0647df6d1bbde8b42073850796cc16f/superset/config.py#L507)
- You can also set the [`SSH_TUNNEL_LOCAL_BIND_ADDRESS`](https://github.com/apache/superset/blob/eb8386e3f0647df6d1bbde8b42073850796cc16f/superset/config.py#L508) this the host address where the tunnel will be accessible on your VPC
2. Create database w/ ssh tunnel enabled
- With the feature flag enabled you should now see ssh tunnel toggle.
- Click the toggle to enables ssh tunneling and add your credentials accordingly.
- Superset allows for 2 different type authentication (Basic + Private Key). These credentials should come from your service provider.
3. Verify data is flowing
- Once SSH tunneling has been enabled, go to SQL Lab and write a query to verify data is properly flowing.

View File

@ -5,9 +5,9 @@ sidebar_position: 5
version: 1
---
## SQL Templating
# SQL Templating
### Jinja Templates
## Jinja Templates
SQL Lab and Explore supports [Jinja templating](https://jinja.palletsprojects.com/en/2.11.x/) in queries.
To enable templating, the `ENABLE_TEMPLATE_PROCESSING` [feature flag](/docs/configuration/configuring-superset#feature-flags) needs to be enabled in
@ -168,7 +168,7 @@ FEATURE_FLAGS = {
The available validators and names can be found in
[sql_validators](https://github.com/apache/superset/tree/master/superset/sql_validators).
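For example, a sketch that enables the Presto validator shipped in that package (assuming Presto is the engine you want to validate):

```python
# superset_config.py sketch mapping an engine to a validator class name.
SQL_VALIDATORS_BY_ENGINE = {
    "presto": "PrestoDBSQLValidator",
}
```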
### Available Macros
## Available Macros
In this section, we'll walk through the pre-defined Jinja macros in Superset.

View File

@ -5,7 +5,7 @@ sidebar_position: 6
version: 1
---
## Timezones
# Timezones
There are four distinct timezone components which relate to Apache Superset,
@ -20,7 +20,7 @@ To help make the problem somewhat tractable—given that Apache Superset has no
To strive for data consistency (regardless of the timezone of the client) the Apache Superset backend tries to ensure that any timestamp sent to the client has an explicit (or semi-explicit as in the case with [Epoch time](https://en.wikipedia.org/wiki/Unix_time) which is always in reference to UTC) timezone encoded within.
The challenge however lies with the slew of [database engines](/docs/databases/installing-database-drivers#install-database-drivers) which Apache Superset supports and various inconsistencies between their [Python Database API (DB-API)](https://www.python.org/dev/peps/pep-0249/) implementations combined with the fact that we use [Pandas](https://pandas.pydata.org/) to read SQL into a DataFrame prior to serializing to JSON. Regrettably Pandas ignores the DB-API [type_code](https://www.python.org/dev/peps/pep-0249/#type-objects) relying by default on the underlying Python type returned by the DB-API. Currently only a subset of the supported database engines work correctly with Pandas, i.e., ensuring timestamps without an explicit timestamp are serializd to JSON with the server timezone, thus guaranteeing the client will display timestamps in a consistent manner irrespective of the client's timezone.
The challenge however lies with the slew of [database engines](/docs/configuration/databases#installing-drivers-in-docker-images) which Apache Superset supports and various inconsistencies between their [Python Database API (DB-API)](https://www.python.org/dev/peps/pep-0249/) implementations combined with the fact that we use [Pandas](https://pandas.pydata.org/) to read SQL into a DataFrame prior to serializing to JSON. Regrettably Pandas ignores the DB-API [type_code](https://www.python.org/dev/peps/pep-0249/#type-objects) relying by default on the underlying Python type returned by the DB-API. Currently only a subset of the supported database engines work correctly with Pandas, i.e., ensuring timestamps without an explicit timezone are serialized to JSON with the server timezone, thus guaranteeing the client will display timestamps in a consistent manner irrespective of the client's timezone.
For example the following is a comparison of MySQL and Presto,

View File

@ -88,7 +88,7 @@ text strings from Superset's UI. You can jump into the existing
language dictionaries at
`superset/translations/<language_code>/LC_MESSAGES/messages.po`, or
even create a dictionary for a new language altogether.
See [Translating](#translating) for more details.
See [Translating](howtos#contribute-translations) for more details.
### Ask Questions

View File

@ -1,16 +0,0 @@
---
title: Ascend.io
hide_title: true
sidebar_position: 10
version: 1
---
## Ascend.io
The recommended connector library to Ascend.io is [impyla](https://github.com/cloudera/impyla).
The expected connection string is formatted as follows:
```
ascend://{username}:{password}@{hostname}:{port}/{database}?auth_mechanism=PLAIN;use_ssl=true
```

View File

@ -1,39 +0,0 @@
---
title: Amazon Athena
hide_title: true
sidebar_position: 4
version: 1
---
## AWS Athena
### PyAthenaJDBC
[PyAthenaJDBC](https://pypi.org/project/PyAthenaJDBC/) is a Python DB 2.0 compliant wrapper for the
[Amazon Athena JDBC driver](https://docs.aws.amazon.com/athena/latest/ug/connect-with-jdbc.html).
The connection string for Amazon Athena is as follows:
```
awsathena+jdbc://{aws_access_key_id}:{aws_secret_access_key}@athena.{region_name}.amazonaws.com/{schema_name}?s3_staging_dir={s3_staging_dir}&...
```
Note that you'll need to escape and URL-encode values such as the `s3_staging_dir` when forming the connection string, like so:
```
s3://... -> s3%3A//...
```
### PyAthena
You can also use the [PyAthena library](https://pypi.org/project/PyAthena/) (no Java required) with the
following connection string:
```
awsathena+rest://{aws_access_key_id}:{aws_secret_access_key}@athena.{region_name}.amazonaws.com/{schema_name}?s3_staging_dir={s3_staging_dir}&...
```
The PyAthena library also allows you to assume a specific IAM role, which you can define by adding the following parameters in Superset's Athena database connection UI under ADVANCED --> Other --> ENGINE PARAMETERS.
```
{"connect_args":{"role_arn":"<role arn>"}}
```

View File

@ -1,92 +0,0 @@
---
title: Google BigQuery
hide_title: true
sidebar_position: 20
version: 1
---
## Google BigQuery
The recommended connector library for BigQuery is
[sqlalchemy-bigquery](https://github.com/googleapis/python-bigquery-sqlalchemy).
### Install BigQuery Driver
Follow the steps [here](/docs/databases/docker-add-drivers) about how to
install new database drivers when setting up Superset locally via docker compose.
```
echo "sqlalchemy-bigquery" >> ./docker/requirements-local.txt
```
### Connecting to BigQuery
When adding a new BigQuery connection in Superset, you'll need to add the GCP Service Account
credentials file (as a JSON).
1. Create your Service Account via the Google Cloud Platform control panel, provide it access to the
appropriate BigQuery datasets, and download the JSON configuration file for the service account.
2. In Superset, you can either upload that JSON or add the JSON blob in the following format (this should be the content of your credential JSON file):
```
{
"type": "service_account",
"project_id": "...",
"private_key_id": "...",
"private_key": "...",
"client_email": "...",
"client_id": "...",
"auth_uri": "...",
"token_uri": "...",
"auth_provider_x509_cert_url": "...",
"client_x509_cert_url": "..."
}
```
![CleanShot 2021-10-22 at 04 18 11](https://user-images.githubusercontent.com/52086618/138352958-a18ef9cb-8880-4ef1-88c1-452a9f1b8105.gif)
3. Additionally, you can connect via SQLAlchemy URI instead
The connection string for BigQuery looks like:
```
bigquery://{project_id}
```
Go to the **Advanced** tab and add a JSON blob to the **Secure Extra** field in the database configuration form with
the following format:
```
{
"credentials_info": <contents of credentials JSON file>
}
```
The resulting file should have this structure:
```
{
"credentials_info": {
"type": "service_account",
"project_id": "...",
"private_key_id": "...",
"private_key": "...",
"client_email": "...",
"client_id": "...",
"auth_uri": "...",
"token_uri": "...",
"auth_provider_x509_cert_url": "...",
"client_x509_cert_url": "..."
}
}
```
You should then be able to connect to your BigQuery datasets.
![CleanShot 2021-10-22 at 04 47 08](https://user-images.githubusercontent.com/52086618/138354340-df57f477-d3e5-42d4-b032-d901c69d2213.gif)
To be able to upload CSV or Excel files to BigQuery in Superset, you'll need to also add the
[pandas_gbq](https://github.com/pydata/pandas-gbq) library.
Currently, the Google BigQuery Python SDK is not compatible with `gevent`, due to some dynamic monkey patching of the Python core library by `gevent`.
So, when you deploy Superset with the `gunicorn` server, you have to use a worker type other than `gevent`.

View File

@ -1,42 +0,0 @@
---
title: ClickHouse
hide_title: true
sidebar_position: 15
version: 1
---
## ClickHouse
To use ClickHouse with Superset, you will need to add the following Python library:
```
clickhouse-connect>=0.6.8
```
If running Superset using Docker Compose, add the following to your `./docker/requirements-local.txt` file:
```
clickhouse-connect>=0.6.8
```
The recommended connector library for ClickHouse is
[clickhouse-connect](https://github.com/ClickHouse/clickhouse-connect).
The expected connection string is formatted as follows:
```
clickhousedb://<user>:<password>@<host>:<port>/<database>[?options…]
clickhouse://{username}:{password}@{hostname}:{port}/{database}
```
Here's a concrete example of a real connection string:
```
clickhousedb://demo:demo@github.demo.trial.altinity.cloud/default?secure=true
```
If you're using ClickHouse locally on your computer, you can get away with using an http protocol URL that
uses the default user without a password (and doesn't encrypt the connection):
```
clickhousedb://localhost/default
```

View File

@ -1,17 +0,0 @@
---
title: CockroachDB
hide_title: true
sidebar_position: 16
version: 1
---
## CockroachDB
The recommended connector library for CockroachDB is
[sqlalchemy-cockroachdb](https://github.com/cockroachdb/sqlalchemy-cockroachdb).
The expected connection string is formatted as follows:
```
cockroachdb://root@{hostname}:{port}/{database}?sslmode=disable
```

View File

@ -1,24 +0,0 @@
---
title: CrateDB
hide_title: true
sidebar_position: 36
version: 1
---
## CrateDB
The recommended connector library for CrateDB is
[crate](https://pypi.org/project/crate/).
You also need to install the extras for this library.
We recommend adding something like the following
text to your requirements file:
```
crate[sqlalchemy]==0.26.0
```
The expected connection string is formatted as follows:
```
crate://crate@127.0.0.1:4200
```

View File

@ -1,23 +0,0 @@
---
title: Databend
hide_title: true
sidebar_position: 39
version: 1
---
## Databend
The recommended connector library for Databend is [databend-sqlalchemy](https://pypi.org/project/databend-sqlalchemy/).
Superset has been tested on `databend-sqlalchemy>=0.2.3`.
The recommended connection string is:
```
databend://{username}:{password}@{host}:{port}/{database_name}
```
Here's a connection string example of Superset connecting to a Databend database:
```
databend://user:password@localhost:8000/default?secure=false
```

View File

@ -1,89 +0,0 @@
---
title: Databricks
hide_title: true
sidebar_position: 37
version: 1
---
## Databricks
Databricks now offers a native DB API 2.0 driver, `databricks-sql-connector`, which can be used with the `sqlalchemy-databricks` dialect. You can install both with:
```bash
pip install "apache-superset[databricks]"
```
To use the Hive connector you need the following information from your cluster:
- Server hostname
- Port
- HTTP path
These can be found under "Configuration" -> "Advanced Options" -> "JDBC/ODBC".
You also need an access token from "Settings" -> "User Settings" -> "Access Tokens".
Once you have all this information, add a database of type "Databricks Native Connector" and use the following SQLAlchemy URI:
```
databricks+connector://token:{access_token}@{server_hostname}:{port}/{database_name}
```
You also need to add the following configuration to "Other" -> "Engine Parameters", with your HTTP path:
```json
{
"connect_args": {"http_path": "sql/protocolv1/o/****"}
}
```
## Older driver
Originally Superset used `databricks-dbapi` to connect to Databricks. You might want to try it if you're having problems with the official Databricks connector:
```bash
pip install "databricks-dbapi[sqlalchemy]"
```
There are two ways to connect to Databricks when using `databricks-dbapi`: using a Hive connector or an ODBC connector. Both ways work similarly, but only ODBC can be used to connect to [SQL endpoints](https://docs.databricks.com/sql/admin/sql-endpoints.html).
### Hive
To connect to a Hive cluster add a database of type "Databricks Interactive Cluster" in Superset, and use the following SQLAlchemy URI:
```
databricks+pyhive://token:{access_token}@{server_hostname}:{port}/{database_name}
```
You also need to add the following configuration to "Other" -> "Engine Parameters", with your HTTP path:
```json
{"connect_args": {"http_path": "sql/protocolv1/o/****"}}
```
### ODBC
For ODBC you first need to install the [ODBC drivers for your platform](https://databricks.com/spark/odbc-drivers-download).
For a regular connection use this as the SQLAlchemy URI after selecting either "Databricks Interactive Cluster" or "Databricks SQL Endpoint" for the database, depending on your use case:
```
databricks+pyodbc://token:{access_token}@{server_hostname}:{port}/{database_name}
```
And for the connection arguments:
```json
{"connect_args": {"http_path": "sql/protocolv1/o/****", "driver_path": "/path/to/odbc/driver"}}
```
The driver path should be:
- `/Library/simba/spark/lib/libsparkodbc_sbu.dylib` (Mac OS)
- `/opt/simba/spark/lib/64/libsparkodbc_sb64.so` (Linux)
For a connection to a SQL endpoint you need to use the HTTP path from the endpoint:
```json
{"connect_args": {"http_path": "/sql/1.0/endpoints/****", "driver_path": "/path/to/odbc/driver"}}
```

View File

@ -1,76 +0,0 @@
---
title: Using Database Connection UI
hide_title: true
sidebar_position: 3
version: 1
---
Here is the documentation on how to leverage the new DB Connection UI. This gives admins the ability to enhance the UX for users who want to connect to new databases.
![db-conn-docs](https://user-images.githubusercontent.com/27827808/125499607-94e300aa-1c0f-4c60-b199-3f9de41060a3.gif)
There are now 3 steps when connecting to a database in the new UI:
Step 1: First the admin must inform Superset which engine they want to connect to. This page is powered by the `/available` endpoint, which reports the engines currently installed in your environment, so that only supported databases are shown.
Step 2: Next, the admin is prompted to enter database-specific parameters. Depending on whether there is a dynamic form available for that specific engine, the admin will either see the new custom form or the legacy SQLAlchemy form. We have currently built dynamic forms for Redshift, MySQL, Postgres, and BigQuery. The new form prompts the user for the parameters needed to connect (for example, username, password, host, port, etc.) and provides immediate feedback on errors.
Step 3: Finally, once the admin has connected to their DB using the dynamic form they have the opportunity to update any optional advanced settings.
We hope this feature will help eliminate a huge bottleneck for users to get into the application and start crafting datasets.
### How to set up preferred database options and images
We added a new configuration option where the admin can define their preferred databases, in order:
```python
# A list of preferred databases, in order. These databases will be
# displayed prominently in the "Add Database" dialog. You should
# use the "engine_name" attribute of the corresponding DB engine spec
# in `superset/db_engine_specs/`.
PREFERRED_DATABASES: list[str] = [
"PostgreSQL",
"Presto",
"MySQL",
"SQLite",
]
```
For copyright reasons the logos for each database are not distributed with Superset.
### Setting images
- To set the images of your preferred database, admins must create a mapping in the `superset_text.yml` file with the engine and the location of the image. The image can be hosted locally inside your static/file directory or online (e.g. S3)
```yaml
DB_IMAGES:
postgresql: "path/to/image/postgres.jpg"
bigquery: "path/to/s3bucket/bigquery.jpg"
snowflake: "path/to/image/snowflake.jpg"
```
### How to add new database engines to the available endpoint
Currently the new modal supports the following databases:
- Postgres
- Redshift
- MySQL
- BigQuery
When the user selects a database not in this list they will see the old dialog asking for the SQLAlchemy URI. New databases can be added gradually to the new flow. In order to support the rich configuration a DB engine spec needs to have the following attributes:
1. `parameters_schema`: a Marshmallow schema defining the parameters needed to configure the database. For Postgres this includes username, password, host, port, etc. ([see](https://github.com/apache/superset/blob/accee507c0819cd0d7bcfb5a3e1199bc81eeebf2/superset/db_engine_specs/base.py#L1309-L1320)).
2. `default_driver`: the name of the recommended driver for the DB engine spec. Many SQLAlchemy dialects support multiple drivers, but usually one is the official recommendation. For Postgres we use "psycopg2".
3. `sqlalchemy_uri_placeholder`: a string that helps the user in case they want to type the URI directly.
4. `encryption_parameters`: parameters used to build the URI when the user opts for an encrypted connection. For Postgres this is `{"sslmode": "require"}`.
In addition, the DB engine spec must implement these class methods:
- `build_sqlalchemy_uri(cls, parameters, encrypted_extra)`: this method receives the distinct parameters and builds the URI from them.
- `get_parameters_from_uri(cls, uri, encrypted_extra)`: this method does the opposite, extracting the parameters from a given URI.
- `validate_parameters(cls, parameters)`: this method is used for `onBlur` validation of the form. It should return a list of `SupersetError` indicating which parameters are missing, and which parameters are definitely incorrect ([example](https://github.com/apache/superset/blob/accee507c0819cd0d7bcfb5a3e1199bc81eeebf2/superset/db_engine_specs/base.py#L1404)).
For databases like MySQL and Postgres that use the standard format of `engine+driver://user:password@host:port/dbname` all you need to do is add the `BasicParametersMixin` to the DB engine spec, and then define the parameters 2-4 (`parameters_schema` is already present in the mixin).
For other databases you need to implement these methods yourself. The BigQuery DB engine spec is a good example of how to do that.
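As a rough sketch (the dialect, driver, and class names below are hypothetical), a spec reusing the mixin might look like:

```python
from superset.db_engine_specs.base import BaseEngineSpec, BasicParametersMixin

class MyDBEngineSpec(BaseEngineSpec, BasicParametersMixin):
    engine = "mydb"                    # hypothetical SQLAlchemy dialect name
    engine_name = "MyDB"               # name shown in the Superset UI
    default_driver = "mydriver"        # hypothetical recommended driver
    sqlalchemy_uri_placeholder = "mydb+mydriver://user:password@host:port/dbname"
    encryption_parameters = {"sslmode": "require"}
    # parameters_schema is inherited from BasicParametersMixin
```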

View File

@ -1,63 +0,0 @@
---
title: Adding New Drivers in Docker
hide_title: true
sidebar_position: 2
version: 1
---
## Adding New Database Drivers in Docker
Superset requires a Python database driver to be installed for each additional type of database you want to connect to.
In this example, we'll walk through how to install the MySQL connector library. The connector library installation process is the same for all additional libraries.
### 1. Determine the driver you need
Consult the [list of database drivers](/docs/databases/installing-database-drivers) and find the PyPI package needed to connect to your database. In this example, we're connecting to a MySQL database, so we'll need the `mysqlclient` connector library.
### 2. Install the driver in the container
We need to get the `mysqlclient` library installed into the Superset docker container (it doesn't matter if it's installed on the host machine). We could enter the running container with `docker exec -it <container_name> bash` and run `pip install mysqlclient` there, but that wouldn't persist permanently.
To address this, the Superset `docker compose` deployment uses the convention of a `requirements-local.txt` file. All packages listed in this file will be installed into the container from PyPI at runtime. This file will be ignored by Git for the purposes of local development.
Create the file `requirements-local.txt` in a subdirectory called `docker` that exists in the directory with your `docker-compose.yml` or `docker-compose-non-dev.yml` file.
```
# Run from the repo root:
touch ./docker/requirements-local.txt
```
Add the driver identified in the step above. You can use a text editor or do it from the command line like:
```
echo "mysqlclient" >> ./docker/requirements-local.txt
```
**If you are running a stock (non-customized) Superset image**, you are done. Launch Superset with `docker compose -f docker-compose-non-dev.yml up` and the driver should be present.
You can check its presence by entering the running container with `docker exec -it <container_name> bash` and running `pip freeze`. The PyPI package should be present in the printed list.
**If you're running a customized docker image**, rebuild your local image with the new driver baked in:
```
docker compose build --force-rm
```
After the rebuild of the Docker images is complete, relaunch Superset by running `docker compose up`.
### 3. Connect to MySQL
Now that you've got a MySQL driver installed in your container, you should be able to connect to your database via the Superset web UI.
As an admin user, go to Settings -> Data: Database Connections and click the +DATABASE button. From there, follow the steps on the [Using Database Connection UI page](/docs/databases/db-connection-ui).
Consult the page for your specific database type in the Superset documentation to determine the connection string and any other parameters you need to input. For instance, on the [MySQL page](/docs/databases/mysql), we see that the connection string to a local MySQL database differs depending on whether the setup is running on Linux or Mac.
Click the “Test Connection” button, which should result in a popup message saying, "Connection looks good!".
### 4. Troubleshooting
If the test fails, review your docker logs for error messages. Superset uses SQLAlchemy to connect to databases; to troubleshoot the connection string for your database, you might start Python in the Superset application container or host environment and try to connect directly to the desired database and fetch data. This eliminates Superset for the purposes of isolating the problem.
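For example, a minimal sketch of such a direct test (the URI below is a placeholder for the connection string you are troubleshooting):

```python
from sqlalchemy import create_engine, text

# Placeholder URI; substitute the connection string you are troubleshooting.
engine = create_engine("mysql://superset:superset@mysql-host:3306/superset")
with engine.connect() as conn:
    print(conn.execute(text("SELECT 1")).scalar())
```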
Repeat this process for each different type of database you want Superset to be able to connect to.

View File

@ -1,26 +0,0 @@
---
title: Apache Doris
hide_title: true
sidebar_position: 5
version: 1
---
## Doris
The [sqlalchemy-doris](https://pypi.org/project/pydoris/) library is the recommended way to connect to Apache Doris through SQLAlchemy.
You'll need the following setting values to form the connection string:
- **User**: User Name
- **Password**: Password
- **Host**: Doris FE Host
- **Port**: Doris FE port
- **Catalog**: Catalog Name
- **Database**: Database Name
Here's what the connection string looks like:
```
doris://<User>:<Password>@<Host>:<Port>/<Catalog>.<Database>
```

View File

@ -1,26 +0,0 @@
---
title: Dremio
hide_title: true
sidebar_position: 17
version: 1
---
## Dremio
The recommended connector library for Dremio is
[sqlalchemy_dremio](https://pypi.org/project/sqlalchemy-dremio/).
The expected connection string for ODBC (Default port is 31010) is formatted as follows:
```
dremio://{username}:{password}@{host}:{port}/{database_name}/dremio?SSL=1
```
The expected connection string for Arrow Flight (Dremio 4.9.1+. Default port is 32010) is formatted as follows:
```
dremio+flight://{username}:{password}@{host}:{port}/dremio
```
This [blog post by Dremio](https://www.dremio.com/tutorials/dremio-apache-superset/) has some
additional helpful instructions on connecting Superset to Dremio.

View File

@ -1,47 +0,0 @@
---
title: Apache Drill
hide_title: true
sidebar_position: 6
version: 1
---
## Apache Drill
### SQLAlchemy
The recommended way to connect to Apache Drill is through SQLAlchemy. You can use the
[sqlalchemy-drill](https://github.com/JohnOmernik/sqlalchemy-drill) package.
Once that is done, you can connect to Drill in two ways, either via the REST interface or by JDBC.
If you are connecting via JDBC, you must have the Drill JDBC Driver installed.
The basic connection string for Drill looks like this:
```
drill+sadrill://<username>:<password>@<host>:<port>/<storage_plugin>?use_ssl=True
```
To connect to Drill running on a local machine running in embedded mode you can use the following
connection string:
```
drill+sadrill://localhost:8047/dfs?use_ssl=False
```
### JDBC
Connecting to Drill through JDBC is more complicated and we recommend following
[this tutorial](https://drill.apache.org/docs/using-the-jdbc-driver/).
The connection string looks like:
```
drill+jdbc://<username>:<password>@<host>:<port>
```
### ODBC
We recommend reading the
[Apache Drill documentation](https://drill.apache.org/docs/installing-the-driver-on-linux/) and reading
the [GitHub README](https://github.com/JohnOmernik/sqlalchemy-drill#usage-with-odbc) to learn how to
work with Drill through ODBC.

View File

@ -1,71 +0,0 @@
---
title: Apache Druid
hide_title: true
sidebar_position: 7
version: 1
---
import useBaseUrl from "@docusaurus/useBaseUrl";
## Apache Druid
A native connector to Druid ships with Superset (behind the `DRUID_IS_ACTIVE` flag) but this is
slowly getting deprecated in favor of the SQLAlchemy / DBAPI connector made available in the
[pydruid library](https://pythonhosted.org/pydruid/).
The connection string looks like:
```
druid://<User>:<password>@<Host>:<Port-default-9088>/druid/v2/sql
```
Here's a breakdown of the key components of this connection string:
- `User`: username portion of the credentials needed to connect to your database
- `Password`: password portion of the credentials needed to connect to your database
- `Host`: IP address (or URL) of the host machine that's running your database
- `Port`: specific port that's exposed on your host machine where your database is running
### Customizing Druid Connection
When adding a connection to Druid, you can customize the connection a few different ways in the
**Add Database** form.
**Custom Certificate**
You can add certificates in the **Root Certificate** field when configuring the new database
connection to Druid:
<img src={useBaseUrl("/img/root-cert-example.png")} />{" "}
When using a custom certificate, pydruid will automatically use the https scheme.
**Disable SSL Verification**
To disable SSL verification, add the following to the **Extras** field:
```
engine_params:
{"connect_args":
{"scheme": "https", "ssl_verify_cert": false}}
```
### Aggregations
Common aggregations or Druid metrics can be defined and used in Superset. The first and simpler use
case is to use the checkbox matrix exposed in your datasources edit view (**Sources -> Druid
Datasources -> [your datasource] -> Edit -> [tab] List Druid Column**).
Clicking the GroupBy and Filterable checkboxes will make the column appear in the related dropdowns
while in the Explore view. Checking Count Distinct, Min, Max or Sum will result in creating new
metrics that will appear in the **List Druid Metric** tab upon saving the datasource.
By editing these metrics, you'll notice that their JSON element corresponds to a Druid aggregation
definition. You can create your own aggregations manually from the **List Druid Metric** tab
following the Druid documentation.
### Post-Aggregations
Druid supports post aggregation and this works in Superset. All you have to do is create a metric,
much like you would create an aggregation manually, but specify `postagg` as a `Metric Type`. You
then have to provide a valid json post-aggregation definition (as specified in the Druid docs) in
the JSON field.

View File

@ -1,20 +0,0 @@
---
title: Amazon DynamoDB
hide_title: true
sidebar_position: 4
version: 1
---
## AWS DynamoDB
### PyDynamoDB
[PyDynamoDB](https://pypi.org/project/PyDynamoDB/) is a Python DB API 2.0 (PEP 249) client for Amazon DynamoDB.
The connection string for Amazon DynamoDB is as follows:
```
dynamodb://{aws_access_key_id}:{aws_secret_access_key}@dynamodb.{region_name}.amazonaws.com:443?connector=superset
```
To get more documentation, please visit: [PyDynamoDB WIKI](https://github.com/passren/PyDynamoDB/wiki/5.-Superset).

View File

@ -1,76 +0,0 @@
---
title: Elasticsearch
hide_title: true
sidebar_position: 18
version: 1
---
## Elasticsearch
The recommended connector library for Elasticsearch is
[elasticsearch-dbapi](https://github.com/preset-io/elasticsearch-dbapi).
The connection string for Elasticsearch looks like this:
```
elasticsearch+http://{user}:{password}@{host}:9200/
```
**Using HTTPS**
```
elasticsearch+https://{user}:{password}@{host}:9200/
```
Elasticsearch has a default limit of 10000 rows, so you can increase this limit on your cluster or
set Superset's row limit in the config:
```
ROW_LIMIT = 10000
```
You can query multiple indices in SQL Lab, for example:
```
SELECT timestamp, agent FROM "logstash"
```
But, to use visualizations for multiple indices, you need to create an alias index on your cluster:
```
POST /_aliases
{
"actions" : [
{ "add" : { "index" : "logstash-**", "alias" : "logstash_all" } }
]
}
```
Then register your table with the alias name `logstash_all`.
**Time zone**
By default, Superset uses the UTC time zone for Elasticsearch queries. If you need to specify a time zone,
please edit your Database and enter the settings for your specified time zone in Other > ENGINE PARAMETERS:
```
{
"connect_args": {
"time_zone": "Asia/Shanghai"
}
}
```
Another issue to note about time zones is that before Elasticsearch 7.8, if you want to convert a string into a `DATETIME` object,
you need to use the `CAST` function, but this function does not support our `time_zone` setting. So it is recommended to upgrade to a version after Elasticsearch 7.8.
After Elasticsearch 7.8, you can use the `DATETIME_PARSE` function to solve this problem.
The `DATETIME_PARSE` function supports our `time_zone` setting; you need to fill in your Elasticsearch version number in the Other > VERSION setting,
and Superset will then use the `DATETIME_PARSE` function for the conversion.
**Disable SSL Verification**
To disable SSL verification, add the following to the **SQLALCHEMY URI** field:
```
elasticsearch+https://{user}:{password}@{host}:9200/?verify_certs=False
```

View File

@ -1,17 +0,0 @@
---
title: Exasol
hide_title: true
sidebar_position: 19
version: 1
---
## Exasol
The recommended connector library for Exasol is
[sqlalchemy-exasol](https://github.com/exasol/sqlalchemy-exasol).
The connection string for Exasol looks like this:
```
exa+pyodbc://{username}:{password}@{hostname}:{port}/my_schema?CONNECTIONLCALL=en_US.UTF-8&driver=EXAODBC
```

View File

@ -1,69 +0,0 @@
---
title: Extra Database Settings
hide_title: true
sidebar_position: 40
version: 1
---
## Extra Database Settings
### Deeper SQLAlchemy Integration
It is possible to tweak the database connection information using the parameters exposed by
SQLAlchemy. In the **Database edit** view, you can edit the **Extra** field as a JSON blob.
This JSON string contains extra configuration elements. The `engine_params` object gets unpacked
into the `sqlalchemy.create_engine` call, while the `metadata_params` get unpacked into the
`sqlalchemy.MetaData` call. Refer to the SQLAlchemy docs for more information.
### Schemas
Databases like Postgres and Redshift use the **schema** as the logical entity on top of the
**database**. For Superset to connect to a specific schema, you can set the **schema** parameter in
the **Edit Tables** form (Sources > Tables > Edit record).
### External Password Store for SQLAlchemy Connections
Superset can be configured to use an external store for database passwords. This is useful if you are
running a custom secret distribution framework and do not wish to store secrets in Superset's meta
database.
Example: Write a function that takes a single argument of type `sqla.engine.url` and returns the
password for the given connection string. Then set `SQLALCHEMY_CUSTOM_PASSWORD_STORE` in your config
file to point to that function.
```python
def example_lookup_password(url):
    secret = <<get password from external framework>>
    return 'secret'

SQLALCHEMY_CUSTOM_PASSWORD_STORE = example_lookup_password
```
A common pattern is to use environment variables to make secrets available.
`SQLALCHEMY_CUSTOM_PASSWORD_STORE` can also be used for that purpose.
```python
def example_password_as_env_var(url):
    # assuming the uri looks like
    # mysql://localhost?superset_user:{SUPERSET_PASSWORD}
    return url.password.format(**os.environ)

SQLALCHEMY_CUSTOM_PASSWORD_STORE = example_password_as_env_var
```
### SSL Access to Databases
You can use the `Extra` field in the **Edit Databases** form to configure SSL:
```JSON
{
"metadata_params": {},
"engine_params": {
"connect_args":{
"sslmode":"require",
"sslrootcert": "/path/to/my/pem"
}
}
}
```

View File

@ -1,23 +0,0 @@
---
title: Firebird
hide_title: true
sidebar_position: 38
version: 1
---
## Firebird
The recommended connector library for Firebird is [sqlalchemy-firebird](https://pypi.org/project/sqlalchemy-firebird/).
Superset has been tested on `sqlalchemy-firebird>=0.7.0, <0.8`.
The recommended connection string is:
```
firebird+fdb://{username}:{password}@{host}:{port}//{path_to_db_file}
```
Here's a connection string example of Superset connecting to a local Firebird database:
```
firebird+fdb://SYSDBA:masterkey@192.168.86.38:3050//Library/Frameworks/Firebird.framework/Versions/A/Resources/examples/empbuild/employee.fdb
```

View File

@ -1,26 +0,0 @@
---
title: Firebolt
hide_title: true
sidebar_position: 39
version: 1
---
## Firebolt
The recommended connector library for Firebolt is [firebolt-sqlalchemy](https://pypi.org/project/firebolt-sqlalchemy/).
The recommended connection string is:
```
firebolt://{username}:{password}@{database}?account_name={name}
or
firebolt://{username}:{password}@{database}/{engine_name}?account_name={name}
```
It's also possible to connect using a service account:
```
firebolt://{client_id}:{client_secret}@{database}?account_name={name}
or
firebolt://{client_id}:{client_secret}@{database}/{engine_name}?account_name={name}
```

View File

@ -1,16 +0,0 @@
---
title: Google Sheets
hide_title: true
sidebar_position: 21
version: 1
---
## Google Sheets
Google Sheets has a very limited
[SQL API](https://developers.google.com/chart/interactive/docs/querylanguage). The recommended
connector library for Google Sheets is [shillelagh](https://github.com/betodealmeida/shillelagh).
There are a few steps involved in connecting Superset to Google Sheets. This
[tutorial](https://preset.io/blog/2020-06-01-connect-superset-google-sheets/) has the most up to date
instructions on setting up this connection.

View File

@ -1,16 +0,0 @@
---
title: Hana
hide_title: true
sidebar_position: 22
version: 1
---
## Hana
The recommended connector library is [sqlalchemy-hana](https://github.com/SAP/sqlalchemy-hana).
The connection string is formatted as follows:
```
hana://{username}:{password}@{host}:{port}
```

View File

@ -1,16 +0,0 @@
---
title: Apache Hive
hide_title: true
sidebar_position: 8
version: 1
---
## Apache Hive
The [pyhive](https://pypi.org/project/PyHive/) library is the recommended way to connect to Hive through SQLAlchemy.
The expected connection string is formatted as follows:
```
hive://hive@{hostname}:{port}/{database}
```

View File

@ -1,24 +0,0 @@
---
title: Hologres
hide_title: true
sidebar_position: 33
version: 1
---
## Hologres
Hologres is a real-time interactive analytics service developed by Alibaba Cloud. It is fully compatible with PostgreSQL 11 and integrates seamlessly with the big data ecosystem.
Hologres sample connection parameters:
- **User Name**: The AccessKey ID of your Alibaba Cloud account.
- **Password**: The AccessKey secret of your Alibaba Cloud account.
- **Database Host**: The public endpoint of the Hologres instance.
- **Database Name**: The name of the Hologres database.
- **Port**: The port number of the Hologres instance.
The connection string looks like:
```
postgresql+psycopg2://{username}:{password}@{host}:{port}/{database}
```

View File

@ -1,23 +0,0 @@
---
title: IBM DB2
hide_title: true
sidebar_position: 23
version: 1
---
## IBM DB2
The [IBM_DB_SA](https://github.com/ibmdb/python-ibmdbsa/tree/master/ibm_db_sa) library provides a
Python / SQLAlchemy interface to IBM Data Servers.
Here's the recommended connection string:
```
db2+ibm_db://{username}:{password}@{hostname}:{port}/{database}
```
There are two DB2 dialect versions implemented in SQLAlchemy. If you are connecting to a DB2 version without `LIMIT [n]` syntax, the recommended connection string to be able to use the SQL Lab is:
```
ibm_db_sa://{username}:{password}@{hostname}:{port}/{database}
```

View File

@ -1,16 +0,0 @@
---
title: Apache Impala
hide_title: true
sidebar_position: 9
version: 1
---
## Apache Impala
The recommended connector library to Apache Impala is [impyla](https://github.com/cloudera/impyla).
The expected connection string is formatted as follows:
```
impala://{hostname}:{port}/{database}
```

View File

@ -1,81 +0,0 @@
---
title: Installing Database Drivers
hide_title: true
sidebar_position: 1
version: 1
---
## Install Database Drivers
Superset requires a Python DB-API database driver and a SQLAlchemy
dialect to be installed for each datastore you want to connect to.
You can read more [here](/docs/databases/docker-add-drivers) about how to
install new database drivers into your Superset configuration.
### Supported Databases and Dependencies
Superset does not ship bundled with connectivity to databases, except for SQLite,
which is part of the Python standard library.
You'll need to install the required packages for the database you want to use as your metadata database
as well as the packages needed to connect to the databases you want to access through Superset.
Some of the recommended packages are shown below. Please refer to
[pyproject.toml](https://github.com/apache/superset/blob/master/pyproject.toml) for the versions that
are compatible with Superset.
| Database | PyPI package | Connection String |
| --------------------------------------------------------- | ---------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |
| [Amazon Athena](/docs/databases/athena) | `pip install pyathena[pandas]` , `pip install PyAthenaJDBC` | `awsathena+rest://{aws_access_key_id}:{aws_secret_access_key}@athena.{region_name}.amazonaws.com/{schema_name}?s3_staging_dir={s3_staging_dir}&... ` |
| [Apache Doris](/docs/databases/doris) | `pip install pydoris` | `doris://<User>:<Password>@<Host>:<Port>/<Catalog>.<Database>` |
| [Amazon DynamoDB](/docs/databases/dynamodb) | `pip install pydynamodb` | `dynamodb://{access_key_id}:{secret_access_key}@dynamodb.{region_name}.amazonaws.com?connector=superset` |
| [Amazon Redshift](/docs/databases/redshift) | `pip install sqlalchemy-redshift` | ` redshift+psycopg2://<userName>:<DBPassword>@<AWS End Point>:5439/<Database Name>` |
| [Apache Drill](/docs/databases/drill) | `pip install sqlalchemy-drill` | `drill+sadrill:// For JDBC drill+jdbc://` |
| [Apache Druid](/docs/databases/druid) | `pip install pydruid` | `druid://<User>:<password>@<Host>:<Port-default-9088>/druid/v2/sql` |
| [Apache Hive](/docs/databases/hive) | `pip install pyhive` | `hive://hive@{hostname}:{port}/{database}` |
| [Apache Impala](/docs/databases/impala) | `pip install impyla` | `impala://{hostname}:{port}/{database}` |
| [Apache Kylin](/docs/databases/kylin) | `pip install kylinpy` | `kylin://<username>:<password>@<hostname>:<port>/<project>?<param1>=<value1>&<param2>=<value2>` |
| [Apache Pinot](/docs/databases/pinot) | `pip install pinotdb` | `pinot://BROKER:5436/query?server=http://CONTROLLER:5983/` |
| [Apache Solr](/docs/databases/solr) | `pip install sqlalchemy-solr` | `solr://{username}:{password}@{hostname}:{port}/{server_path}/{collection}` |
| [Apache Spark SQL](/docs/databases/spark-sql) | `pip install pyhive` | `hive://hive@{hostname}:{port}/{database}` |
| [Ascend.io](/docs/databases/ascend) | `pip install impyla` | `ascend://{username}:{password}@{hostname}:{port}/{database}?auth_mechanism=PLAIN;use_ssl=true` |
| [Azure MS SQL](/docs/databases/sql-server) | `pip install pymssql` | `mssql+pymssql://UserName@presetSQL:TestPassword@presetSQL.database.windows.net:1433/TestSchema` |
| [Big Query](/docs/databases/bigquery) | `pip install sqlalchemy-bigquery` | `bigquery://{project_id}` |
| [ClickHouse](/docs/databases/clickhouse) | `pip install clickhouse-connect` | `clickhousedb://{username}:{password}@{hostname}:{port}/{database}` |
| [CockroachDB](/docs/databases/cockroachdb) | `pip install cockroachdb` | `cockroachdb://root@{hostname}:{port}/{database}?sslmode=disable` |
| [Dremio](/docs/databases/dremio) | `pip install sqlalchemy_dremio` | `dremio://user:pwd@host:31010/` |
| [Elasticsearch](/docs/databases/elasticsearch) | `pip install elasticsearch-dbapi` | `elasticsearch+http://{user}:{password}@{host}:9200/` |
| [Exasol](/docs/databases/exasol) | `pip install sqlalchemy-exasol` | `exa+pyodbc://{username}:{password}@{hostname}:{port}/my_schema?CONNECTIONLCALL=en_US.UTF-8&driver=EXAODBC` |
| [Google Sheets](/docs/databases/google-sheets) | `pip install shillelagh[gsheetsapi]` | `gsheets://` |
| [Firebolt](/docs/databases/firebolt) | `pip install firebolt-sqlalchemy` | `firebolt://{client_id}:{client_secret}@{database}/{engine_name}?account_name={name}` |
| [Hologres](/docs/databases/hologres) | `pip install psycopg2` | `postgresql+psycopg2://<UserName>:<DBPassword>@<Database Host>/<Database Name>` |
| [IBM Db2](/docs/databases/ibm-db2) | `pip install ibm_db_sa` | `db2+ibm_db://` |
| [IBM Netezza Performance Server](/docs/databases/netezza) | `pip install nzalchemy` | `netezza+nzpy://<UserName>:<DBPassword>@<Database Host>/<Database Name>` |
| [MySQL](/docs/databases/mysql) | `pip install mysqlclient` | `mysql://<UserName>:<DBPassword>@<Database Host>/<Database Name>` |
| [Oracle](/docs/databases/oracle) | `pip install cx_Oracle` | `oracle://` |
| [PostgreSQL](/docs/databases/postgres) | `pip install psycopg2` | `postgresql://<UserName>:<DBPassword>@<Database Host>/<Database Name>` |
| [Presto](/docs/databases/presto) | `pip install pyhive` | `presto://` |
| [Rockset](/docs/databases/rockset) | `pip install rockset-sqlalchemy` | `rockset://<api_key>:@<api_server>` |
| [SAP Hana](/docs/databases/hana) | `pip install hdbcli sqlalchemy-hana or pip install apache-superset[hana]` | `hana://{username}:{password}@{host}:{port}` |
| [StarRocks](/docs/databases/starrocks) | `pip install starrocks` | `starrocks://<User>:<Password>@<Host>:<Port>/<Catalog>.<Database>` |
| [Snowflake](/docs/databases/snowflake) | `pip install snowflake-sqlalchemy` | `snowflake://{user}:{password}@{account}.{region}/{database}?role={role}&warehouse={warehouse}` |
| SQLite | No additional library needed | `sqlite://path/to/file.db?check_same_thread=false` |
| [SQL Server](/docs/databases/sql-server) | `pip install pymssql` | `mssql+pymssql://` |
| [Teradata](/docs/databases/teradata) | `pip install teradatasqlalchemy` | `teradatasql://{user}:{password}@{host}` |
| [TimescaleDB](/docs/databases/timescaledb) | `pip install psycopg2` | `postgresql://<UserName>:<DBPassword>@<Database Host>:<Port>/<Database Name>` |
| [Trino](/docs/databases/trino) | `pip install trino` | `trino://{username}:{password}@{hostname}:{port}/{catalog}` |
| [Vertica](/docs/databases/vertica) | `pip install sqlalchemy-vertica-python` | `vertica+vertica_python://<UserName>:<DBPassword>@<Database Host>/<Database Name>` |
| [YugabyteDB](/docs/databases/yugabytedb) | `pip install psycopg2` | `postgresql://<UserName>:<DBPassword>@<Database Host>/<Database Name>` |
---
Note that many other databases are supported, the main criterion being the existence of a functional
SQLAlchemy dialect and Python driver. Searching for the keyword "sqlalchemy + (database name)"
should help get you to the right place.
If your database or data engine isn't on the list but a SQL interface
exists, please file an issue on the
[Superset GitHub repo](https://github.com/apache/superset/issues), so we can work on documenting and
supporting it.
If you'd like to build a database connector for Superset integration,
read the [following tutorial](https://preset.io/blog/building-database-connector/).

View File

@ -1,26 +0,0 @@
---
name: Kusto
hide_title: true
sidebar_position: 41
version: 2
---
## Kusto
The recommended connector library for Kusto is
[sqlalchemy-kusto](https://pypi.org/project/sqlalchemy-kusto/2.0.0/)>=2.0.0.
The connection string for Kusto (sql dialect) looks like this:
```
kustosql+https://{cluster_url}/{database}?azure_ad_client_id={azure_ad_client_id}&azure_ad_client_secret={azure_ad_client_secret}&azure_ad_tenant_id={azure_ad_tenant_id}&msi=False
```
The connection string for Kusto (kql dialect) looks like this:
```
kustokql+https://{cluster_url}/{database}?azure_ad_client_id={azure_ad_client_id}&azure_ad_client_secret={azure_ad_client_secret}&azure_ad_tenant_id={azure_ad_tenant_id}&msi=False
```
Make sure the user has privileges to access and use all required
databases/tables/views.

View File

@ -1,17 +0,0 @@
---
title: Apache Kylin
hide_title: true
sidebar_position: 11
version: 1
---
## Apache Kylin
The recommended connector library for Apache Kylin is
[kylinpy](https://github.com/Kyligence/kylinpy).
The expected connection string is formatted as follows:
```
kylin://<username>:<password>@<hostname>:<port>/<project>?<param1>=<value1>&<param2>=<value2>
```

View File

@ -1,48 +0,0 @@
---
title: Querying across databases
hide_title: true
sidebar_position: 42
version: 1
---
## Querying across databases
Superset offers an experimental feature for querying across different databases. This is done via a special database called "Superset meta database" that uses the "superset://" SQLAlchemy URI. When using the database it's possible to query any table in any of the configured databases using the following syntax:
```sql
SELECT * FROM "database name.[[catalog.].schema].table name";
```
For example:
```sql
SELECT * FROM "examples.birth_names";
```
Spaces are allowed, but periods in the names must be replaced by `%2E`, e.g.:
```sql
SELECT * FROM "Superset meta database.examples%2Ebirth_names";
```
The query above returns the same rows as `SELECT * FROM "examples.birth_names"`, and also shows that the meta database can query tables from any database — even itself!
## Considerations
Before enabling this feature, there are a few considerations that you should have in mind. First, the meta database enforces permissions on the queried tables, so users should only have access via the database to tables that they originally have access to. Nevertheless, the meta database is a new surface for potential attacks, and bugs could allow users to see data they should not.
Second, there are performance considerations. The meta database will push any filtering, sorting, and limiting to the underlying databases, but any aggregations and joins will happen in memory in the process running the query. Because of this, it's recommended to run the database in async mode, so queries are executed in Celery workers, instead of the web workers. Additionally, it's possible to specify a hard limit on how many rows are returned from the underlying databases.
## Enabling the meta database
To enable the Superset meta database, first you need to set the `ENABLE_SUPERSET_META_DB` feature flag to true. Then, add a new database of type "Superset meta database" with the SQLAlchemy URI "superset://".
If you enable DML in the meta database, users will be able to run DML queries on the underlying databases **as long as DML is also enabled in them**. This allows users to run queries that move data across databases.
Second, you might want to change the value of `SUPERSET_META_DB_LIMIT`. The default value is 1000, and it defines how many rows are read from each database before any aggregations and joins are executed. You can also set this value to `None` if you only have small tables.
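A minimal `superset_config.py` sketch combining both settings described above:

```python
FEATURE_FLAGS = {
    "ENABLE_SUPERSET_META_DB": True,
}
# Rows read from each underlying database before joins/aggregations run in memory;
# set to None only if all your tables are small.
SUPERSET_META_DB_LIMIT = 1000
```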
Additionally, you might want to restrict the databases that the meta database has access to. This can be done in the database configuration, under "Advanced" -> "Other" -> "ENGINE PARAMETERS", by adding:
```json
{"allowed_dbs":["Google Sheets","examples"]}
```

View File

@ -1,30 +0,0 @@
---
title: MySQL
hide_title: true
sidebar_position: 25
version: 1
---
## MySQL
The recommended connector library for MySQL is [mysqlclient](https://pypi.org/project/mysqlclient/).
Here's the connection string:
```
mysql://{username}:{password}@{host}/{database}
```
Host:
- For Localhost: `localhost` or `127.0.0.1`
- Docker running on Linux: `172.18.0.1`
- For On Prem: IP address or Host name
- For Docker running in OSX: `docker.for.mac.host.internal`
Port: `3306` by default
One problem with `mysqlclient` is that it will fail to connect to newer MySQL databases using `caching_sha2_password` for authentication, since the plugin is not included in the client. In this case, you should use [mysql-connector-python](https://pypi.org/project/mysql-connector-python/) instead:
```
mysql+mysqlconnector://{username}:{password}@{host}/{database}
```

View File

@ -1,17 +0,0 @@
---
title: IBM Netezza Performance Server
hide_title: true
sidebar_position: 24
version: 1
---
## IBM Netezza Performance Server
The [nzalchemy](https://pypi.org/project/nzalchemy/) library provides a
Python / SQLAlchemy interface to IBM Netezza Performance Server (aka Netezza).
Here's the recommended connection string:
```
netezza+nzpy://{username}:{password}@{hostname}:{port}/{database}
```

View File

@ -1,37 +0,0 @@
---
title: Ocient DB
hide_title: true
sidebar_position: 20
version: 1
---
## Ocient DB
The recommended connector library for Ocient is [sqlalchemy-ocient](https://pypi.org/project/sqlalchemy-ocient).
## Install the Ocient Driver
```
pip install sqlalchemy-ocient
```
## Connecting to Ocient
The format of the Ocient DSN is:
```shell
ocient://user:password@[host][:port][/database][?param1=value1&...]
```
The DSN for connecting to an `exampledb` database hosted at `examplehost:4050` with TLS enabled is:
```shell
ocient://admin:abc123@examplehost:4050/exampledb?tls=on
```
**NOTE**: You must enter the `user` and `password` credentials. `host` defaults to localhost,
port defaults to 4050, database defaults to `system` and `tls` defaults
to `unverified`.
## User Access Control
Make sure the user has privileges to access and use all required databases, schemas, tables, views, and warehouses, as the Ocient SQLAlchemy engine does not test for user or role rights by default.

View File

@ -1,17 +0,0 @@
---
title: Oracle
hide_title: true
sidebar_position: 26
version: 1
---
## Oracle
The recommended connector library is
[cx_Oracle](https://cx-oracle.readthedocs.io/en/latest/user_guide/installation.html).
The connection string is formatted as follows:
```
oracle://<username>:<password>@<hostname>:<port>
```

View File

@ -1,22 +0,0 @@
---
title: Apache Pinot
hide_title: true
sidebar_position: 12
version: 1
---
## Apache Pinot
The recommended connector library for Apache Pinot is [pinotdb](https://pypi.org/project/pinotdb/).
The expected connection string is formatted as follows:
```
pinot+http://<pinot-broker-host>:<pinot-broker-port>/query?controller=http://<pinot-controller-host>:<pinot-controller-port>/
```
The expected connection string using username and password is formatted as follows:
```
pinot://<username>:<password>@<pinot-broker-host>:<pinot-broker-port>/query/sql?controller=http://<pinot-controller-host>:<pinot-controller-port>/verify_ssl=true
```

View File

@ -1,42 +0,0 @@
---
title: Postgres
hide_title: true
sidebar_position: 27
version: 1
---
## Postgres
Note that, if you're using docker compose, the Postgres connector library [psycopg2](https://www.psycopg.org/docs/)
comes out of the box with Superset.
Postgres sample connection parameters:
- **User Name**: UserName
- **Password**: DBPassword
- **Database Host**:
- For Localhost: localhost or 127.0.0.1
- For On Prem: IP address or Host name
- For AWS Endpoint
- **Database Name**: Database Name
- **Port**: default 5432
The connection string looks like:
```
postgresql://{username}:{password}@{host}:{port}/{database}
```
You can require SSL by adding `?sslmode=require` at the end:
```
postgresql://{username}:{password}@{host}:{port}/{database}?sslmode=require
```
You can read about the other SSL modes that Postgres supports in
[Table 31-1 from this documentation](https://www.postgresql.org/docs/9.1/libpq-ssl.html).
More information about PostgreSQL connection options can be found in the
[SQLAlchemy docs](https://docs.sqlalchemy.org/en/13/dialects/postgresql.html#module-sqlalchemy.dialects.postgresql.psycopg2)
and the
[PostgreSQL docs](https://www.postgresql.org/docs/9.1/libpq-connect.html#LIBPQ-PQCONNECTDBPARAMS).

View File

@ -1,48 +0,0 @@
---
title: Presto
hide_title: true
sidebar_position: 28
version: 1
---
## Presto
The [pyhive](https://pypi.org/project/PyHive/) library is the recommended way to connect to Presto through SQLAlchemy.
The expected connection string is formatted as follows:
```
presto://{hostname}:{port}/{database}
```
You can pass in a username and password as well:
```
presto://{username}:{password}@{hostname}:{port}/{database}
```
Here is an example connection string with values:
```
presto://datascientist:securepassword@presto.example.com:8080/hive
```
By default, Superset assumes the most recent version of Presto is being used when querying the
datasource. If you're using an older version of Presto, you can configure it in the extra parameter:
```
{
"version": "0.123"
}
```
To connect over SSL, add the following JSON config to the **Secure Extra** connection information:
```
{
"connect_args":
{"protocol": "https",
"requests_kwargs":{"verify":false}
}
}
```

View File

@ -1,66 +0,0 @@
---
title: Amazon Redshift
hide_title: true
sidebar_position: 5
version: 1
---
## AWS Redshift
The [sqlalchemy-redshift](https://pypi.org/project/sqlalchemy-redshift/) library is the recommended
way to connect to Redshift through SQLAlchemy.
This dialect requires either [redshift_connector](https://pypi.org/project/redshift-connector/) or [psycopg2](https://pypi.org/project/psycopg2/) to work properly.
You'll need to set the following values to form the connection string:
- **User Name**: userName
- **Password**: DBPassword
- **Database Host**: AWS Endpoint
- **Database Name**: Database Name
- **Port**: default 5439
### psycopg2
Here's what the SQLALCHEMY URI looks like:
```
redshift+psycopg2://<userName>:<DBPassword>@<AWS End Point>:5439/<Database Name>
```
### redshift_connector
Here's what the SQLALCHEMY URI looks like:
```
redshift+redshift_connector://<userName>:<DBPassword>@<AWS End Point>:5439/<Database Name>
```
#### Using IAM-based credentials with Redshift cluster:
[Amazon redshift cluster](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html) also supports generating temporary IAM-based database user credentials.
Your superset app's [IAM role should have permissions](https://docs.aws.amazon.com/redshift/latest/mgmt/generating-iam-credentials-role-permissions.html) to call the `redshift:GetClusterCredentials` operation.
You have to define the following arguments in Superset's redshift database connection UI under ADVANCED --> Others --> ENGINE PARAMETERS.
```
{"connect_args":{"iam":true,"database":"<database>","cluster_identifier":"<cluster_identifier>","db_user":"<db_user>"}}
```
and SQLALCHEMY URI should be set to `redshift+redshift_connector://`
#### Using IAM-based credentials with Redshift serverless:
[Redshift serverless](https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-whatis.html) supports connection using IAM roles.
Your superset app's IAM role should have `redshift-serverless:GetCredentials` and `redshift-serverless:GetWorkgroup` permissions on Redshift serverless workgroup.
You have to define the following arguments in Superset's redshift database connection UI under ADVANCED --> Others --> ENGINE PARAMETERS.
```
{"connect_args":{"iam":true,"is_serverless":true,"serverless_acct_id":"<aws account number>","serverless_work_group":"<redshift work group>","database":"<database>","user":"IAMR:<superset iam role name>"}}
```

View File

@ -1,17 +0,0 @@
---
title: RisingWave
hide_title: true
sidebar_position: 16
version: 1
---
## RisingWave
The recommended connector library for RisingWave is
[sqlalchemy-risingwave](https://github.com/risingwavelabs/sqlalchemy-risingwave).
The expected connection string is formatted as follows:
```
risingwave://root@{hostname}:{port}/{database}?sslmode=disable
```

View File

@ -1,25 +0,0 @@
---
title: Rockset
hide_title: true
sidebar_position: 35
version: 1
---
## Rockset
The connection string for Rockset is:
```
rockset://{api key}:@{api server}
```
Get your API key from the [Rockset console](https://console.rockset.com/apikeys).
Find your API server from the [API reference](https://rockset.com/docs/rest-api/#introduction). Omit the `https://` portion of the URL.
To target a specific virtual instance, use this URI format:
```
rockset://{api key}:@{api server}/{VI ID}
```
For more complete instructions, we recommend the [Rockset documentation](https://docs.rockset.com/apache-superset/).

View File

@ -1,68 +0,0 @@
---
title: Snowflake
hide_title: true
sidebar_position: 29
version: 1
---
## Snowflake
### Install Snowflake Driver
Follow the steps [here](/docs/databases/docker-add-drivers) to learn how to
install new database drivers when setting up Superset locally via Docker Compose.
```
echo "snowflake-sqlalchemy" >> ./docker/requirements-local.txt
```
The recommended connector library for Snowflake is
[snowflake-sqlalchemy](https://pypi.org/project/snowflake-sqlalchemy/).
The connection string for Snowflake looks like this:
```
snowflake://{user}:{password}@{account}.{region}/{database}?role={role}&warehouse={warehouse}
```
The schema is not necessary in the connection string, as it is defined per table/query. The role and
warehouse can be omitted if defaults are defined for the user, i.e.
```
snowflake://{user}:{password}@{account}.{region}/{database}
```
Make sure the user has privileges to access and use all required
databases/schemas/tables/views/warehouses, as the Snowflake SQLAlchemy engine does not test for
user/role rights during engine creation by default. However, when pressing the "Test Connection"
button in the Create or Edit Database dialog, user/role credentials are validated by passing
`"validate_default_parameters": True` to the `connect()` method during engine creation. If the user/role
is not authorized to access the database, an error is recorded in the Superset logs.
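For context, a minimal sketch of that validation outside of Superset (assuming `snowflake-sqlalchemy` is installed; all URI values are placeholders):
```python
# A minimal sketch: validate_default_parameters is forwarded to the Snowflake
# connector's connect() call, so missing or unauthorized defaults raise errors early.
from sqlalchemy import create_engine, text

engine = create_engine(
    "snowflake://my_user:my_password@my_account.us-east-1/my_database"
    "?role=my_role&warehouse=my_warehouse",
    connect_args={"validate_default_parameters": True},
)

with engine.connect() as conn:
    conn.execute(text("SELECT 1"))
```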
If you want to connect to Snowflake with [Key Pair Authentication](https://docs.snowflake.com/en/user-guide/key-pair-auth.html#step-6-configure-the-snowflake-client-to-use-key-pair-authentication),
please make sure you have the key pair and that the public key is registered in Snowflake.
To connect with Key Pair Authentication, you need to add the following parameters to the "SECURE EXTRA" field.
***Please note that you need to merge the multi-line private key content into one line and insert `\n` between each line.***
```
{
"auth_method": "keypair",
"auth_params": {
"privatekey_body": "-----BEGIN ENCRYPTED PRIVATE KEY-----\n...\n...\n-----END ENCRYPTED PRIVATE KEY-----",
"privatekey_pass":"Your Private Key Password"
}
}
```
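The flattening described in the note above can be scripted; a minimal sketch (the key file path is a placeholder):
```python
# A minimal sketch: join the PEM key's lines with a literal "\n" so the result can
# be pasted into "privatekey_body" as a single line.
with open("rsa_key.p8") as f:  # placeholder path to your encrypted private key
    flattened = "\\n".join(line.strip() for line in f.read().strip().splitlines())
print(flattened)
```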
If your private key is stored on the server, you can replace "privatekey_body" with "privatekey_path" in the parameters:
```
{
"auth_method": "keypair",
"auth_params": {
"privatekey_path":"Your Private Key Path",
"privatekey_pass":"Your Private Key Password"
}
}
```

View File

@ -1,17 +0,0 @@
---
title: Apache Solr
hide_title: true
sidebar_position: 13
version: 1
---
## Apache Solr
The [sqlalchemy-solr](https://pypi.org/project/sqlalchemy-solr/) library provides a
Python / SQLAlchemy interface to Apache Solr.
The connection string for Solr looks like this:
```
solr://{username}:{password}@{host}:{port}/{server_path}/{collection}[/?use_ssl=true|false]
```

View File

@ -1,16 +0,0 @@
---
title: Apache Spark SQL
hide_title: true
sidebar_position: 14
version: 1
---
## Apache Spark SQL
The recommended connector library for Apache Spark SQL is [pyhive](https://pypi.org/project/PyHive/).
The expected connection string is formatted as follows:
```
hive://hive@{hostname}:{port}/{database}
```

View File

@ -1,23 +0,0 @@
---
title: Microsoft SQL Server
hide_title: true
sidebar_position: 30
version: 1
---
## SQL Server
The recommended connector library for SQL Server is [pymssql](https://github.com/pymssql/pymssql).
The connection string for SQL Server looks like this:
```
mssql+pymssql://<Username>:<Password>@<Host>:<Port-default:1433>/<Database Name>/?Encrypt=yes
```
It is also possible to connect using [pyodbc](https://pypi.org/project/pyodbc) with the [odbc_connect](https://docs.sqlalchemy.org/en/14/dialects/mssql.html#pass-through-exact-pyodbc-string) parameter.
In that case, the connection string looks like this:
```
mssql+pyodbc:///?odbc_connect=Driver%3D%7BODBC+Driver+17+for+SQL+Server%7D%3BServer%3Dtcp%3A%3Cmy_server%3E%2C1433%3BDatabase%3Dmy_database%3BUid%3Dmy_user_name%3BPwd%3Dmy_password%3BEncrypt%3Dyes%3BConnection+Timeout%3D30
```
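The percent-encoded value above can be generated from a plain ODBC connection string; a minimal sketch (server, database, and credentials are placeholders):
```python
# A minimal sketch: URL-encode a readable ODBC connection string and append it
# to the mssql+pyodbc URI as the odbc_connect parameter.
from urllib.parse import quote_plus

odbc_str = (
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=tcp:<my_server>,1433;"
    "Database=my_database;"
    "Uid=my_user_name;"
    "Pwd=my_password;"
    "Encrypt=yes;"
    "Connection Timeout=30"
)
sqlalchemy_uri = "mssql+pyodbc:///?odbc_connect=" + quote_plus(odbc_str)
print(sqlalchemy_uri)
```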

View File

@ -1,26 +0,0 @@
---
title: StarRocks
hide_title: true
sidebar_position: 5
version: 1
---
## StarRocks
The [sqlalchemy-starrocks](https://pypi.org/project/starrocks/) library is the recommended
way to connect to StarRocks through SQLAlchemy.
You'll need the following setting values to form the connection string:
- **User**: User Name
- **Password**: DBPassword
- **Host**: StarRocks FE Host
- **Catalog**: Catalog Name
- **Database**: Database Name
- **Port**: StarRocks FE port
Here's what the connection string looks like:
```
starrocks://<User>:<Password>@<Host>:<Port>/<Catalog>.<Database>
```

View File

@ -1,36 +0,0 @@
---
title: Teradata
hide_title: true
sidebar_position: 31
version: 1
---
## Teradata
The recommended connector library is
[teradatasqlalchemy](https://pypi.org/project/teradatasqlalchemy/).
The connection string for Teradata looks like this:
```
teradatasql://{user}:{password}@{host}
```
## ODBC Driver
There's also an older connector named
[sqlalchemy-teradata](https://github.com/Teradata/sqlalchemy-teradata) that
requires the installation of ODBC drivers. The Teradata ODBC Drivers
are available
here: https://downloads.teradata.com/download/connectivity/odbc-driver/linux
Here are the required environment variables:
```
export ODBCINI=/.../teradata/client/ODBC_64/odbc.ini
export ODBCINST=/.../teradata/client/ODBC_64/odbcinst.ini
```
We recommend using the first library (teradatasqlalchemy) because it doesn't
require ODBC drivers and is updated more regularly.

View File

@ -1,38 +0,0 @@
---
title: TimescaleDB
hide_title: true
sidebar_position: 31
version: 1
---
## TimescaleDB
[TimescaleDB](https://www.timescale.com) is an open-source relational database for time-series data and analytics, designed for building powerful data-intensive applications.
TimescaleDB is a PostgreSQL extension, and you can use the standard PostgreSQL connector library, [psycopg2](https://www.psycopg.org/docs/), to connect to the database.
If you're using docker compose, psycopg2 comes out of the box with Superset.
TimescaleDB sample connection parameters:
- **User Name**: User
- **Password**: Password
- **Database Host**:
- For Localhost: localhost or 127.0.0.1
- For On Prem: IP address or Host name
- For [Timescale Cloud](https://console.cloud.timescale.com) service: Host name
- For [Managed Service for TimescaleDB](https://portal.managed.timescale.com) service: Host name
- **Database Name**: Database Name
- **Port**: default 5432 or Port number of the service
The connection string looks like:
```
postgresql://{username}:{password}@{host}:{port}/{database name}
```
You can require SSL by adding `?sslmode=require` at the end (e.g. in case you use [Timescale Cloud](https://www.timescale.com/cloud)):
```
postgresql://{username}:{password}@{host}:{port}/{database name}?sslmode=require
```
[Learn more about TimescaleDB!](https://docs.timescale.com/)

View File

@ -1,117 +0,0 @@
---
title: Trino
hide_title: true
sidebar_position: 34
version: 1
---
## Trino
Trino version 352 and higher is supported.
### Connection String
The connection string format is as follows:
```
trino://{username}:{password}@{hostname}:{port}/{catalog}
```
If you are running Trino with Docker on your local machine, please use the following connection URL:
```
trino://trino@host.docker.internal:8080
```
### Authentication
#### 1. Basic Authentication
You can provide `username`/`password` in the connection string or in the `Secure Extra` field at `Advanced / Security`.
* In Connection String
```
trino://{username}:{password}@{hostname}:{port}/{catalog}
```
* In `Secure Extra` field
```json
{
"auth_method": "basic",
"auth_params": {
"username": "<username>",
"password": "<password>"
}
}
```
NOTE: if both are provided, `Secure Extra` always takes precedence.
#### 2. Kerberos Authentication
In the `Secure Extra` field, configure it as in the following example:
```json
{
"auth_method": "kerberos",
"auth_params": {
"service_name": "superset",
"config": "/path/to/krb5.config",
...
}
}
```
All fields in `auth_params` are passed directly to the [`KerberosAuthentication`](https://github.com/trinodb/trino-python-client/blob/0.306.0/trino/auth.py#L40) class.
NOTE: Kerberos authentication requires installing the [`trino-python-client`](https://github.com/trinodb/trino-python-client) locally with either the `all` or `kerberos` optional features, i.e., installing `trino[all]` or `trino[kerberos]` respectively.
#### 3. Certificate Authentication
In the `Secure Extra` field, configure it as in the following example:
```json
{
"auth_method": "certificate",
"auth_params": {
"cert": "/path/to/cert.pem",
"key": "/path/to/key.pem"
}
}
```
All fields in `auth_params` are passed directly to the [`CertificateAuthentication`](https://github.com/trinodb/trino-python-client/blob/0.315.0/trino/auth.py#L416) class.
#### 4. JWT Authentication
Set `auth_method` and provide the token in the `Secure Extra` field:
```json
{
"auth_method": "jwt",
"auth_params": {
"token": "<your-jwt-token>"
}
}
```
#### 5. Custom Authentication
To use custom authentication, you first need to add it to the
`ALLOWED_EXTRA_AUTHENTICATIONS` allow list in the Superset config file:
```python
from your.module import AuthClass
from another.extra import auth_method
ALLOWED_EXTRA_AUTHENTICATIONS: Dict[str, Dict[str, Callable[..., Any]]] = {
"trino": {
"custom_auth": AuthClass,
"another_auth_method": auth_method,
},
}
```
Then, in the `Secure Extra` field:
```json
{
"auth_method": "custom_auth",
"auth_params": {
...
}
}
```
You can also use custom authentication by providing a reference to your `trino.auth.Authentication` class
or to a factory function (which returns an `Authentication` instance) in `auth_method`.
All fields in `auth_params` are passed directly to your class/function.
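As an illustration, here is a hypothetical custom authentication class (the class name, constructor parameters, and header scheme are assumptions, not part of Superset or Trino):
```python
# A hypothetical sketch: each key in "auth_params" is passed to the constructor as a
# keyword argument; the class follows trino-python-client's Authentication interface.
from requests import Session
from trino.auth import Authentication


class StaticHeaderAuthentication(Authentication):
    def __init__(self, header_name: str, header_value: str):
        self.header_name = header_name
        self.header_value = header_value

    def set_http_session(self, http_session: Session) -> Session:
        # Attach a static header to every request issued by the Trino client.
        http_session.headers[self.header_name] = self.header_value
        return http_session

    def get_exceptions(self):
        # No additional retryable exceptions for this scheme.
        return ()
```
Registered under a key in `ALLOWED_EXTRA_AUTHENTICATIONS`, the `auth_params` for this sketch would carry `header_name` and `header_value`.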
**Reference**:
* [Trino-Superset-Podcast](https://trino.io/episodes/12.html)

View File

@ -1,31 +0,0 @@
---
title: Vertica
hide_title: true
sidebar_position: 32
version: 1
---
## Vertica
The recommended connector library is
[sqlalchemy-vertica-python](https://pypi.org/project/sqlalchemy-vertica-python/). The
[Vertica](http://www.vertica.com/) connection parameters are:
- **User Name:** UserName
- **Password:** DBPassword
- **Database Host:**
- For Localhost: localhost or 127.0.0.1
- For On Prem: IP address or host name
- For Cloud: IP address or host name
- **Database Name:** Database Name
- **Port:** default 5433
The connection string is formatted as follows:
```
vertica+vertica_python://{username}:{password}@{host}/{database}
```
Other parameters:
- Load Balancer - Backup Host

View File

@ -1,20 +0,0 @@
---
title: YugabyteDB
hide_title: true
sidebar_position: 38
version: 1
---
## YugabyteDB
[YugabyteDB](https://www.yugabyte.com/) is a distributed SQL database built on top of PostgreSQL.
Note that if you're using Docker Compose, the
Postgres connector library [psycopg2](https://www.psycopg.org/docs/)
comes out of the box with Superset.
The connection string looks like:
```
postgresql://{username}:{password}@{host}:{port}/{database}
```

View File

@ -159,7 +159,7 @@ table afterwards to configure the Columns tab, check the appropriate boxes and s
To clarify, the database backend is an OLTP database used by Superset to store its internal
information like your list of users and dashboard definitions. While Superset supports a
[variety of databases as data *sources*](/docs/databases/installing-database-drivers/),
[variety of databases as data *sources*](/docs/configuration/databases#installing-database-drivers),
only a few database engines are supported for use as the OLTP backend / metadata store.
Superset is tested using MySQL, PostgreSQL, and SQLite backends. It's recommended you install
@ -189,7 +189,7 @@ Metadata attribute using the `label_colors` key.
## Does Superset work with [insert database engine here]?
The [Connecting to Databases section](/docs/databases/installing-database-drivers) provides the best
The [Connecting to Databases section](/docs/configuration/databases) provides the best
overview for supported databases. Database engines not listed on that page may work too. We rely on
the community to contribute to this knowledge base.

View File

@ -5,23 +5,29 @@ sidebar_position: 3
version: 1
---
## Using Docker Compose
import useBaseUrl from "@docusaurus/useBaseUrl";
**DO NOT USE THIS FOR PRODUCTION!**
# Using Docker Compose
The fastest way to try Superset locally is using Docker Compose on a Linux or Mac OSX
computer. Superset does not have official support for Windows. It's also the easiest
way to launch a fully functioning **development environment** quickly.
<img src={useBaseUrl("/img/docker-compose.webp" )} width="150" />
<br /><br />
:::caution
Since `docker-compose` is primarily designed to run a set of containers on **a single host**
and can't credibly support **high availability** as a result, we do not support nor recommend
and can't support requirements for **high availability**, we do not support nor recommend
using our `docker-compose` constructs to support production-type use-cases. For single host
environments, we recommend using [minikube](https://minikube.sigs.k8s.io/docs/start/) along with
our [installing on k8s](https://superset.apache.org/docs/installation/running-on-kubernetes)
documentation.
:::
As mentioned in our [quickstart guide](/docs/quickstart), the fastest way to try
Superset locally is using Docker Compose on a Linux or Mac OSX
computer. Superset does not have official support for Windows. It's also the easiest
way to launch a fully functioning **development environment** quickly.
Note that there are 3 major ways we support running docker-compose:
1. **docker-compose.yml:** for interactive development, where we mount your local folder with the
frontend/backend files that you can edit and experience the changes you
@ -37,26 +43,26 @@ Note that there are 3 major ways we support to run docker-compose:
More on these two approaches after setting up the requirements for either.
### Requirements
## Requirements
Note that this documentation assumes that you have [Docker](https://www.docker.com),
[docker-compose](https://docs.docker.com/compose/), and
[git](https://git-scm.com/) installed.
### 1. Clone Superset's GitHub repository
## 1. Clone Superset's GitHub repository
[Clone Superset's repo](https://github.com/apache/superset) in your terminal with the
following command:
```bash
git clone https://github.com/apache/superset.git
git clone --depth=1 https://github.com/apache/superset.git
```
Once that command completes successfully, you should see a new `superset` folder in your
current directory.
### 2. Launch Superset Through Docker Compose
## 2. Launch Superset Through Docker Compose
First let's assume you're familiar with docker-compose mechanics. Here we'll refer generally
to `docker compose up` even though in some cases you may want to force a check for newer remote
@ -86,13 +92,13 @@ perform those operations. In this case, we recommend you set the env var
Simply trigger `npm i && npm run dev`; this should be MUCH faster.
:::
### Option #2 - build an immutable image from the local branch
### Option #2 - build a set of immutable images from the local branch
```bash
docker compose -f docker-compose-non-dev.yml up
```
### Option #3 - pull and build a release image from docker-hub
### Option #3 - boot up an official release
```bash
export TAG=3.1.1
@ -103,7 +109,7 @@ Here various release tags, github SHA, and latest `master` can be referenced by
Refer to the docker-related documentation to learn more about existing tags you can point to
from Docker Hub.
## General tips & configuration
## docker-compose tips & configuration
:::caution
All of the content belonging to a Superset instance - charts, dashboards, users, etc. - is stored in
@ -119,7 +125,7 @@ You should see a wall of logging output from the containers being launched on yo
this output slows, you should have a running instance of Superset on your local machine! To avoid
the wall of text on future runs, add the `-d` option to the end of the `docker compose up` command.
#### Configuring Further
### Configuring Further
The following is for users who want to configure how Superset runs in Docker Compose; otherwise, you
can skip to the next section.
@ -170,7 +176,7 @@ To disable the Scarf telemetry pixel, set the `SCARF_ANALYTICS` environment vari
your terminal and/or in your `docker/.env` file.
:::
### 3. Log in to Superset
## 3. Log in to Superset
Your local Superset instance also includes a Postgres server to store your data and is already
pre-loaded with some example datasets that ship with Superset. You can access Superset now via your
@ -187,7 +193,7 @@ username: admin
password: admin
```
### 4. Connecting Superset to your local database instance
## 4. Connecting Superset to your local database instance
When running Superset using `docker` or `docker compose`, it runs in its own docker container, as if
Superset were running on a separate machine entirely. Therefore attempts to connect to your local

View File

@ -5,12 +5,18 @@ sidebar_position: 1
version: 1
---
## Installing on Kubernetes
import useBaseUrl from "@docusaurus/useBaseUrl";
# Installing on Kubernetes
<img src={useBaseUrl("/img/k8s.png" )} width="150" />
<br /><br />
Running Superset on Kubernetes is supported with the provided [Helm](https://helm.sh/) chart
found in the official [Superset helm repository](https://apache.github.io/superset/index.yaml).
### Prerequisites
## Prerequisites
- A Kubernetes cluster
- Helm installed
@ -22,7 +28,7 @@ and works fantastically well with the Helm chart referenced here.
:::
### Running
## Running
1. Add the Superset helm repository
@ -89,9 +95,9 @@ Depending how you configured external access, the URL will vary. Once you've ide
- user: `admin`
- password: `admin`
### Important settings
## Important settings
#### Security settings
### Security settings
Default security settings and passwords are included but you **MUST** update them to run `prod` instances, in particular:
@ -135,7 +141,7 @@ Superset uses [Scarf Gateway](https://about.scarf.sh/scarf-gateway) to collect t
To opt-out of this data collection in your Helm-based installation, edit the `repository:` line in your `helm/superset/values.yaml` file, replacing `apachesuperset.docker.scarf.sh/apache/superset` with `apache/superset` to pull the image directly from Docker Hub.
:::
#### Dependencies
### Dependencies
Install additional packages and do any other bootstrap configuration in the bootstrap script.
For production clusters it's recommended to build your own image with this step done in CI.
@ -145,7 +151,7 @@ For production clusters it's recommended to build own image with this step done
Superset requires a Python DB-API database driver and a SQLAlchemy
dialect to be installed for each datastore you want to connect to.
See [Install Database Drivers](/docs/databases/installing-database-drivers) for more information.
See [Install Database Drivers](/docs/configuration/databases) for more information.
:::
@ -161,7 +167,7 @@ bootstrapScript: |
if [ ! -f ~/bootstrap ]; then echo "Running Superset with uid {{ .Values.runAsUser }}" > ~/bootstrap; fi
```
#### superset_config.py
### superset_config.py
The default `superset_config.py` is fairly minimal and you will very likely need to extend it. This is done by specifying one or more key/value entries in `configOverrides`, e.g.:
@ -181,7 +187,7 @@ The entire `superset_config.py` will be installed as a secret, so it is safe to
Full python files can be provided by running `helm upgrade --install --values my-values.yaml --set-file configOverrides.oauth=set_oauth.py`
#### Environment Variables
### Environment Variables
These can be passed as key/value pairs with `extraEnv`, or with `extraSecretEnv` if they're sensitive. They can then be referenced from `superset_config.py` using e.g. `os.environ.get("VAR")`.
@ -206,7 +212,7 @@ configOverrides:
SMTP_PASSWORD = os.getenv("SMTP_PASSWORD","superset")
```
#### System packages
### System packages
If new system packages are required, they can be installed before application startup by overriding the container's `command`, e.g.:
@ -225,7 +231,7 @@ supersetWorker:
. {{ .Values.configMountPath }}/superset_bootstrap.sh; celery --app=superset.tasks.celery_app:app worker
```
#### Data sources
### Data sources
Data source definitions can be automatically declared by providing key/value yaml definitions in `extraConfigs`:
@ -246,9 +252,9 @@ extraConfigs:
Those will also be mounted as secrets and can include sensitive parameters.
### Configuration Examples
## Configuration Examples
#### Setting up OAuth
### Setting up OAuth
:::note
@ -302,11 +308,11 @@ configOverrides:
AUTH_USER_REGISTRATION_ROLE = "Admin"
```
#### Enable Alerts and Reports
### Enable Alerts and Reports
For this, as per the [Alerts and Reports doc](/docs/configuration/alerts-reports), you will need to:
##### Install a supported webdriver in the Celery worker
#### Install a supported webdriver in the Celery worker
This is done either by using a custom image that has the webdriver pre-installed, or installing at startup time by overriding the `command`. Here's a working example for `chromedriver`:
@ -335,7 +341,7 @@ supersetWorker:
. {{ .Values.configMountPath }}/superset_bootstrap.sh; celery --app=superset.tasks.celery_app:app worker
```
##### Run the Celery beat
#### Run the Celery beat
This pod will trigger the scheduled tasks configured in the alerts and reports UI section:
@ -344,7 +350,7 @@ supersetCeleryBeat:
enabled: true
```
##### Configure the appropriate Celery jobs and SMTP/Slack settings
#### Configure the appropriate Celery jobs and SMTP/Slack settings
```yaml
extraEnv:
@ -428,7 +434,7 @@ configOverrides:
"--disable-extensions",
]
```
#### Load the Examples data and dashboards
### Load the Examples data and dashboards
If you are trying Superset out and want some data and dashboards to explore, you can load some examples by creating a `my_values.yaml` and deploying it as described above in the **Configure your setting overrides** step of the **Running** section.
To load the examples, add the following to the `my_values.yaml` file:
```yaml

View File

@ -5,11 +5,16 @@ sidebar_position: 2
version: 1
---
## Installing Superset from PyPI
import useBaseUrl from "@docusaurus/useBaseUrl";
# Installing Superset from PyPI
<img src={useBaseUrl("/img/pypi.png" )} width="150" />
<br /><br />
This page describes how to install Superset using the `apache-superset` package [published on PyPI](https://pypi.org/project/apache-superset/).
### OS Dependencies
## OS Dependencies
Superset stores database connection information in its metadata database. For that purpose, we use
the cryptography Python library to encrypt connection passwords. Unfortunately, this library has OS
@ -91,7 +96,7 @@ export CFLAGS="-I$(brew --prefix openssl)/include"
These will now be available when pip installing requirements.
### Python Virtual Environment
## Python Virtual Environment
We highly recommend installing Superset inside of a virtual environment. Python ships with
`virtualenv` out of the box. If you're using [pyenv](https://github.com/pyenv/pyenv), you can install [pyenv-virtualenv](https://github.com/pyenv/pyenv-virtualenv). Or you can install it with `pip`:

View File

@ -5,9 +5,9 @@ sidebar_position: 4
version: 1
---
## Upgrading Superset
# Upgrading Superset
### Docker Compose
## Docker Compose
First make sure to wind down the running containers in Docker Compose:
@ -27,12 +27,19 @@ Then, restart the containers and any changed Docker images will be automatically
docker compose up
```
### Updating Superset Manually
## Updating Superset Manually
To upgrade Superset in a native installation, run the following command:
```bash
pip install apache-superset --upgrade
```
## Upgrading the Metadata Database
Migrate the metadata database by running:
```bash
superset db upgrade
superset init
```

View File

@ -1,65 +0,0 @@
---
title: Introduction
hide_title: true
sidebar_position: 1
---
## What is Apache Superset?
Apache Superset is a modern, enterprise-ready business intelligence web application. It
is fast, lightweight, intuitive, and loaded with options that make it easy for users of all skill
sets to explore and visualize their data, from simple pie charts to highly detailed deck.gl
geospatial charts.
Here are a **few different ways you can get started with Superset**:
- Try a [Quickstart deployment](/docs/quickstart), powered by [Docker Compose](https://docs.docker.com/compose/)
- Install Superset [from PyPI](/docs/installation/pypi/)
- Deploy Superset [with Kubernetes](/docs/installation/kubernetes)
- Download the [source code from Apache Foundation's website](https://dist.apache.org/repos/dist/release/superset/)
#### Video Overview
https://user-images.githubusercontent.com/64562059/234390129-321d4f35-cb4b-45e8-89d9-20ae292f34fc.mp4
#### Features
Superset provides:
- An intuitive interface for visualizing datasets and crafting interactive dashboards
- A wide array of beautiful visualizations to showcase your data
- Code-free visualization builder to extract and present datasets
- A world-class SQL IDE for preparing data for visualization, including a rich metadata browser
- A lightweight semantic layer which empowers data analysts to quickly define custom dimensions and metrics
- Out-of-the-box support for [most SQL-speaking databases](/docs/databases/installing-database-drivers/)
- Seamless, in-memory asynchronous caching and queries
- An extensible security model that allows configuration of very intricate rules on who can access which product features and datasets.
- Integration with major authentication backends (database, OpenID, LDAP, OAuth, REMOTE_USER, etc)
- The ability to add custom visualization plugins
- An API for programmatic customization
- A cloud-native architecture designed from the ground up for scale
#### Backend Technology
Superset is cloud-native and designed to be highly available. It was designed to scale out to large,
distributed environments and works very well inside containers. While you can easily test drive
Superset on a modest setup or simply on your laptop, there's virtually no limit to scaling out
the platform.
Superset is also cloud-native in the sense that it is flexible and lets you choose the:
- Web server (Gunicorn, Nginx, Apache),
- Metadata database engine (PostgreSQL, MySQL, MariaDB),
- Message queue (Celery, Redis, RabbitMQ, SQS, etc.),
- Results backend (Redis, S3, Memcached, etc.),
- Caching layer (Redis, Memcached, etc.)
Superset also works well with [event-logging](/docs/configuration/event-logging/)
services like StatsD, NewRelic, and DataDog.
Superset is currently run at scale at many companies. For example, Superset is run in Airbnb's
production environment inside Kubernetes and serves 600+ daily active users viewing over 100K charts
a day.
You can find a partial list of industries and companies embracing Superset
[on this page in GitHub](https://github.com/apache/superset/blob/master/RESOURCES/INTHEWILD.md).

View File

@ -61,7 +61,7 @@ processes by running Docker Compose `stop` command. By doing so, you can avoid d
From this point on, you can head on to:
- [Create your first Dashboard](/docs/using-superset/creating-your-first-dashboard)
- [Connect to a Database](/docs/databases/installing-database-drivers)
- [Connect to a Database](/docs/configuration/databases)
- [Using Docker Compose](/docs/installation/docker-compose)
- [Configure Superset](/docs/configuration/configuring-superset/)
- [Installing on Kubernetes](/docs/installation/kubernetes/)

View File

@ -80,7 +80,7 @@ const config = {
from: '/gallery.html',
},
{
to: '/docs/databases/druid',
to: '/docs/configuration/databases',
from: '/druid.html',
},
{
@ -128,7 +128,7 @@ const config = {
from: '/docs/contributing/contribution-page',
},
{
to: '/docs/databases/yugabytedb',
to: '/docs/configuration/databases',
from: '/docs/databases/yugabyte/',
},
{
@ -231,6 +231,7 @@ const config = {
items: [
{
label: 'Documentation',
to: '/docs/intro',
items: [
{
label: 'Getting Started',
@ -244,6 +245,7 @@ const config = {
},
{
label: 'Community',
to: '/community',
items: [
{
label: 'Resources',

View File

@ -5,8 +5,9 @@
"license": "Apache-2.0",
"scripts": {
"docusaurus": "docusaurus",
"start": "docusaurus start",
"build": "docusaurus build",
"_init": "cp ../README.md docs/intro.md",
"start": "npm run _init && docusaurus start",
"build": "npm run _init && docusaurus build",
"swizzle": "docusaurus swizzle",
"deploy": "docusaurus deploy",
"clear": "docusaurus clear",
@ -30,7 +31,7 @@
"@svgr/webpack": "^8.1.0",
"antd": "^4.19.3",
"buffer": "^6.0.3",
"clsx": "^2.1.0",
"clsx": "^2.1.1",
"docusaurus-plugin-less": "^2.0.2",
"file-loader": "^6.2.0",
"less": "^4.2.0",

View File

@ -60,14 +60,6 @@ const sidebars = {
dirName: 'using-superset',
}]
},
{
type: 'category',
label: 'Databases',
items: [{
type: 'autogenerated',
dirName: 'databases',
}]
},
{
type: 'category',
label: 'Contributing',

View File

@ -16,6 +16,7 @@
* specific language governing permissions and limitations
* under the License.
*/
// @ts-nocheck
import React, { useRef, useState, useEffect } from 'react';
import Layout from '@theme/Layout';
import Link from '@docusaurus/Link';
@ -28,6 +29,8 @@ import SectionHeader from '../components/SectionHeader';
import BlurredSection from '../components/BlurredSection';
import '../styles/main.less';
// @ts-ignore
const features = [
{
image: 'powerful-yet-easy.jpg',
@ -644,7 +647,7 @@ export default function Home(): JSX.Element {
</div>
</Carousel>
<video autoPlay muted controls loop>
<source src="https://user-images.githubusercontent.com/64562059/234390129-321d4f35-cb4b-45e8-89d9-20ae292f34fc.mp4" type="video/mp4" />
<source src="https://superset.staged.apache.org/superset-video-4k.mp4" type="video/mp4" />
</video>
</StyledSliderSection>
<StyledKeyFeatures>
@ -731,7 +734,7 @@ export default function Home(): JSX.Element {
</div>
<span className="database-sub">
...and many other{' '}
<a href="/docs/databases/installing-database-drivers">
<a href="/docs/configuration/databases#installing-database-drivers">
compatible databases
</a>
</span>

View File

@ -39,6 +39,12 @@
font-weight: 700;
font-style: bold;
}
/* Hiding ugly linkout icons */
ul.dropdown__menu svg {
display: none;
}
:root {
--ifm-color-primary: #20a7c9;
--ifm-color-primary-dark: #1985a0;
@ -60,4 +66,6 @@
--ifm-border-color: #ededed;
--ifm-primary-text: #484848;
--ifm-secondary-text: #5f5f5f;
--ifm-code-padding-vertical: 3px;
--ifm-code-padding-horizontal: 5px;
}

docs/static/.htaccess vendored (42 lines)
View File

@ -23,3 +23,45 @@ RewriteCond %{HTTP_HOST} ^superset.incubator.apache.org$ [NC]
RewriteRule ^(.*)$ https://superset.apache.org/$1 [R=301,L]
Header set Content-Security-Policy "default-src data: blob: 'self' *.apache.org *.githubusercontent.com *.scarf.sh *.googleapis.com *.github.com *.algolia.net *.algolianet.com 'unsafe-inline' 'unsafe-eval'; frame-src *; frame-ancestors 'self' *.google.com https://sidebar.bugherd.com; form-action 'self'; worker-src blob:; img-src 'self' blob: data: https:; font-src 'self'; object-src 'none'"
# REDIRECTS
RewriteEngine On
RewriteRule ^installation\.html$ /docs/installation/docker-compose [R=301,L]
RewriteRule ^tutorials\.html$ /docs/intro [R=301,L]
RewriteRule ^admintutorial\.html$ /docs/using-superset/creating-your-first-dashboard [R=301,L]
RewriteRule ^usertutorial\.html$ /docs/using-superset/creating-your-first-dashboard [R=301,L]
RewriteRule ^security\.html$ /docs/security/ [R=301,L]
RewriteRule ^sqllab\.html$ /docs/configuration/sql-templating [R=301,L]
RewriteRule ^gallery\.html$ /docs/intro [R=301,L]
RewriteRule ^druid\.html$ /docs/configuration/databases [R=301,L]
RewriteRule ^misc\.html$ /docs/configuration/country-map-tools [R=301,L]
RewriteRule ^visualization\.html$ /docs/configuration/country-map-tools [R=301,L]
RewriteRule ^videos\.html$ /docs/faq [R=301,L]
RewriteRule ^faq\.html$ /docs/faq [R=301,L]
RewriteRule ^tutorial\.html$ /docs/using-superset/creating-your-first-dashboard [R=301,L]
RewriteRule ^docs/creating-charts-dashboards/first-dashboard$ /docs/using-superset/creating-your-first-dashboard [R=301,L]
RewriteRule ^docs/rest-api$ /docs/api [R=301,L]
RewriteRule ^docs/installation/email-reports$ /docs/configuration/alerts-reports [R=301,L]
RewriteRule ^docs/roadmap$ /docs/intro [R=301,L]
RewriteRule ^docs/contributing/contribution-guidelines$ /docs/contributing/ [R=301,L]
RewriteRule ^docs/contributing/contribution-page$ /docs/contributing/ [R=301,L]
RewriteRule ^docs/databases/yugabyte/$ /docs/configuration/databases [R=301,L]
RewriteRule ^docs/frequently-asked-questions$ /docs/faq [R=301,L]
RewriteRule ^docs/installation/running-on-kubernetes/$ /docs/installation/kubernetes [R=301,L]
RewriteRule ^docs/contributing/testing-locally/$ /docs/contributing/howtos [R=301,L]
RewriteRule ^docs/creating-charts-dashboards/creating-your-first-dashboard/$ /docs/using-superset/creating-your-first-dashboard [R=301,L]
RewriteRule ^docs/creating-charts-dashboards/exploring-data/$ /docs/using-superset/creating-your-first-dashboard [R=301,L]
RewriteRule ^docs/installation/installing-superset-using-docker-compose/$ /docs/installation/docker-compose [R=301,L]
RewriteRule ^docs/contributing/creating-viz-plugins/$ /docs/contributing/howtos [R=301,L]
RewriteRule ^docs/installation/$ /docs/installation/kubernetes [R=301,L]
RewriteRule ^docs/installation/installing-superset-from-pypi/$ /docs/installation/pypi [R=301,L]
RewriteRule ^docs/installation/configuring-superset/$ /docs/configuration/configuring-superset [R=301,L]
RewriteRule ^docs/installation/cache/$ /docs/configuration/cache [R=301,L]
RewriteRule ^docs/installation/async-queries-celery/$ /docs/configuration/async-queries-celery [R=301,L]
RewriteRule ^docs/installation/event-logging/$ /docs/configuration/event-logging [R=301,L]
RewriteRule ^docs/databases.*$ /docs/configuration/databases [R=301,L]
RewriteRule ^docs/configuration/setup-ssh-tunneling$ /docs/configuration/networking-settings [R=301,L]

BIN docs/static/img/databases/databend.png vendored (new file, 7.1 KiB; binary not shown)

BIN docs/static/img/databases/db2.png vendored (new file, 7.3 KiB; binary not shown)

BIN docs/static/img/databases/firebolt.png vendored (new file, 8.8 KiB; binary not shown)

BIN (binary file not shown; 15 KiB)

docs/static/img/databases/ibm-db2.svg vendored Normal file (51 lines)
View File

@ -0,0 +1,51 @@
<svg width="600" height="263" viewBox="0 0 600 263" fill="none" xmlns="http://www.w3.org/2000/svg">
<g clip-path="url(#clip0)">
<path fill-rule="evenodd" clip-rule="evenodd" d="M0.0488281 224.047H300.809V38.0058H0.0488281V224.047Z" fill="#1B1918"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M246.245 96.0585H211.238L213.393 90.0245H246.245V96.0585Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M152.919 90.0216L185.422 90.0188L187.551 96.0698C187.566 96.0613 152.93 96.0782 152.93 96.0698C152.93 96.0613 152.908 90.0245 152.919 90.0245V90.0216Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M127.104 90.0217C132.462 90.5372 137.118 92.4471 141.085 96.0698C141.085 96.0698 82.1699 96.0754 82.1699 96.0698C82.1699 96.0613 82.1699 90.0217 82.1699 90.0217H127.104Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M35.9355 96.0585H76.8667V90.0217H35.9355V96.0585Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M246.244 107.386H207.223C207.223 107.386 209.352 101.38 209.338 101.377H246.244V107.386Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M191.571 107.391H152.916V101.377H189.453L191.571 107.391Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M145.674 101.374C146.452 103.357 147.37 105.053 147.37 107.383H82.1816V101.374H145.674Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M35.9355 107.383H76.8667V101.374H35.9355V107.383Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M203.23 118.738L205.374 112.704L234.671 112.713V118.724L203.23 118.738Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M193.453 112.701L195.585 118.738H164.443V112.701H193.453Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M147.886 112.701C147.886 114.769 147.627 116.927 147.024 118.738H129.176V112.701H147.886Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M47.3438 118.735H64.936V112.73H47.3438V118.735Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M94.0215 118.735H111.583V112.701H94.0215V118.735Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M181.743 125.254C181.743 125.254 181.745 129.823 181.743 129.832H164.443V123.826H197.411L199.459 129.426C199.467 129.429 201.431 123.812 201.436 123.823H234.663V129.832H217.445C217.445 129.823 217.437 125.257 217.437 125.257L215.842 129.832L183.331 129.823L181.743 125.254Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M94.0215 123.823H144.437C143.333 125.834 141.333 128.279 139.434 129.832C139.434 129.832 94.0215 129.84 94.0215 129.832C94.0215 129.823 94.0215 123.834 94.0215 123.823Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M47.3438 129.832H64.936V123.823H47.3438V129.832Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M217.447 141.156H234.665V135.15H217.447V141.156Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M181.743 135.15H164.443V141.156C164.432 141.153 181.745 141.159 181.745 141.156C181.745 141.153 181.765 135.15 181.743 135.15Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M213.97 135.15C213.891 135.142 211.953 141.153 211.871 141.156L187.349 141.167C187.33 141.159 185.194 135.142 185.191 135.15H213.97Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M94.0195 135.147H139.351C141.418 136.874 143.289 139.001 144.754 141.156C144.838 141.153 94.0223 141.159 94.0223 141.156C94.0223 141.153 94.0195 135.156 94.0195 135.147Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M64.9323 141.156V135.147H47.3711C47.3711 135.147 47.3767 141.153 47.3711 141.153C47.3683 141.153 64.9239 141.156 64.9323 141.156Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M217.447 152.483H234.662V146.474H217.447V152.483Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M164.441 152.48H181.743V146.474H164.441V152.48Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M207.846 152.511C207.851 152.483 209.97 146.469 210.001 146.474H189.155C189.118 146.472 191.293 152.483 191.293 152.483C191.293 152.483 207.84 152.537 207.846 152.511Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M94.0227 152.48C94.0227 152.506 94.0199 146.474 94.0227 146.474H111.846C111.846 146.474 111.86 152.483 111.846 152.483C111.829 152.483 94.0227 152.478 94.0227 152.48Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M147.254 146.474C148.116 148.283 148.204 150.441 148.375 152.509H129.49V146.474H147.254Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M47.3711 152.48H64.9323V146.474H47.3711V152.48Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M217.447 163.577H246.246V157.568H217.447V163.577Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M152.916 163.577H181.742V157.568H152.916V163.577Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M203.972 163.577H195.18L193.07 157.568H206.006L203.972 163.577Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M35.9317 157.568V163.574H76.8346C76.8515 163.585 76.8149 157.571 76.8346 157.571C76.8515 157.571 35.8866 157.568 35.9317 157.568Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M147.627 157.568C147.196 159.551 146.802 161.965 145.354 163.577L144.813 163.574H82.1797V157.568H147.627Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M199.986 174.915H199.178L197.121 168.892H202.096L199.986 174.915Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M217.447 174.929H246.246V168.892H217.447V174.929Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M82.1699 174.915V168.904C82.1699 168.904 141.262 168.912 141.431 168.912C137.462 172.704 132.02 174.845 126.157 174.929L82.1784 174.918L82.1699 174.915Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M152.916 174.929H181.742V168.892H152.916V174.929Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M76.8348 168.92C76.8348 168.92 76.8151 174.915 76.8348 174.915C76.8517 174.915 35.9459 174.94 35.9319 174.926C35.9234 174.915 35.9431 168.904 35.9319 168.904C35.9234 168.904 76.8151 168.94 76.8348 168.92Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M256.196 162.731C257.3 162.731 258.379 163.013 259.436 163.585C260.486 164.148 261.309 164.963 261.895 166.019C262.484 167.075 262.777 168.177 262.777 169.323C262.777 170.456 262.489 171.551 261.909 172.597C261.329 173.645 260.52 174.459 259.472 175.036C258.427 175.616 257.334 175.907 256.196 175.907C255.055 175.907 253.962 175.616 252.917 175.036C251.869 174.459 251.058 173.645 250.475 172.597C249.895 171.551 249.602 170.456 249.602 169.323C249.602 168.177 249.897 167.075 250.489 166.019C251.08 164.963 251.903 164.148 252.954 163.585C254.01 163.013 255.089 162.731 256.196 162.731ZM256.196 163.822C255.269 163.822 254.368 164.061 253.492 164.534C252.616 165.008 251.931 165.686 251.435 166.568C250.94 167.45 250.695 168.368 250.695 169.323C250.695 170.273 250.937 171.18 251.424 172.05C251.909 172.921 252.588 173.599 253.461 174.087C254.334 174.571 255.247 174.816 256.196 174.816C257.143 174.816 258.055 174.571 258.929 174.087C259.802 173.599 260.481 172.921 260.963 172.05C261.447 171.18 261.687 170.273 261.687 169.323C261.687 168.368 261.439 167.45 260.948 166.568C260.455 165.686 259.771 165.008 258.892 164.534C258.013 164.061 257.115 163.822 256.196 163.822ZM253.303 172.957V165.878H255.74C256.574 165.878 257.176 165.943 257.548 166.075C257.923 166.205 258.219 166.433 258.441 166.76C258.664 167.084 258.774 167.43 258.774 167.796C258.774 168.312 258.588 168.765 258.216 169.146C257.847 169.532 257.354 169.746 256.743 169.794C256.991 169.898 257.193 170.022 257.343 170.168C257.63 170.444 257.979 170.912 258.393 171.571L259.255 172.957H257.867L257.236 171.842C256.743 170.963 256.34 170.413 256.036 170.191C255.827 170.03 255.523 169.951 255.12 169.954H254.447V172.957H253.303ZM254.447 168.971H255.836C256.5 168.971 256.954 168.873 257.193 168.675C257.436 168.478 257.557 168.216 257.557 167.895C257.557 167.687 257.498 167.498 257.382 167.334C257.267 167.168 257.106 167.044 256.9 166.965C256.692 166.884 256.309 166.844 255.748 166.844H254.447V168.971Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M300.809 222.993H600.056V37.9718H300.809V222.993Z" fill="#09AF05"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M352.369 169.633V86.9004H375.215C404.36 86.9004 418.952 100.332 418.952 127.232C418.952 139.993 414.895 150.255 406.802 158.013C398.709 165.751 387.878 169.633 374.288 169.633H352.369ZM362.06 95.6641V160.869H374.407C385.258 160.869 393.686 157.954 399.731 152.145C405.78 146.334 408.791 138.102 408.791 127.448C408.791 106.259 397.526 95.6641 374.978 95.6641H362.06ZM435.22 169.633V86.9004H458.753C465.903 86.9004 471.593 88.6526 475.768 92.14C479.963 95.6444 482.07 100.214 482.07 105.825C482.07 110.513 480.791 114.589 478.25 118.054C475.709 121.521 472.224 123.983 467.754 125.44V125.677C473.326 126.327 477.796 128.434 481.146 131.978C484.492 135.542 486.166 140.17 486.166 145.863C486.166 152.931 483.625 158.664 478.546 163.055C473.464 167.444 467.064 169.633 459.325 169.633H435.22ZM444.908 95.6641V122.389H454.835C460.131 122.389 464.306 121.107 467.359 118.547C470.393 115.986 471.909 112.383 471.909 107.715C471.909 99.6812 466.63 95.6641 456.035 95.6641H444.908ZM444.908 131.091V160.869H458.063C463.756 160.869 468.168 159.509 471.298 156.833C474.45 154.134 476.005 150.43 476.005 145.745C476.005 135.975 469.348 131.091 456.035 131.091H444.908ZM547.492 169.633H497.628V161.202L521.753 137.04C527.897 130.877 532.308 125.755 534.928 121.659C537.565 117.564 538.886 113.192 538.886 108.524C538.886 103.757 537.525 100.076 534.79 97.4557C532.052 94.8359 528.173 93.5373 523.13 93.5373C515.744 93.5373 508.676 96.6867 501.902 102.988V93.064C508.361 88.0216 515.882 85.5032 524.469 85.5032C531.855 85.5032 537.644 87.5117 541.878 91.509C546.112 95.5064 548.238 100.884 548.238 107.617C548.238 112.915 546.762 118.093 543.847 123.155C540.914 128.217 535.675 134.499 528.094 142.001L508.992 160.869V161.086H547.492V169.633Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M557.768 163.258C558.873 163.258 559.952 163.543 561.005 164.112C562.059 164.678 562.881 165.489 563.467 166.546C564.053 167.602 564.349 168.706 564.349 169.85C564.349 170.985 564.059 172.078 563.479 173.123C562.901 174.171 562.09 174.985 561.045 175.563C560 176.146 558.907 176.433 557.768 176.433C556.628 176.433 555.535 176.146 554.487 175.563C553.442 174.985 552.627 174.171 552.047 173.123C551.467 172.078 551.174 170.985 551.174 169.85C551.174 168.706 551.47 167.602 552.058 166.546C552.653 165.489 553.475 164.678 554.526 164.112C555.58 163.543 556.659 163.258 557.768 163.258ZM557.768 164.348C556.842 164.348 555.937 164.588 555.061 165.061C554.185 165.534 553.501 166.213 553.008 167.098C552.512 167.98 552.264 168.895 552.264 169.85C552.264 170.799 552.509 171.709 552.994 172.58C553.481 173.447 554.16 174.129 555.033 174.614C555.906 175.098 556.816 175.343 557.768 175.343C558.715 175.343 559.628 175.098 560.501 174.614C561.371 174.129 562.053 173.447 562.535 172.58C563.017 171.709 563.259 170.799 563.259 169.85C563.259 168.895 563.011 167.98 562.521 167.098C562.028 166.213 561.343 165.534 560.464 165.061C559.585 164.588 558.687 164.348 557.768 164.348ZM554.875 173.484V166.405H557.312C558.146 166.405 558.749 166.47 559.121 166.602C559.495 166.732 559.791 166.96 560.014 167.287C560.233 167.61 560.346 167.957 560.346 168.323C560.346 168.839 560.16 169.292 559.788 169.673C559.419 170.058 558.926 170.273 558.312 170.32C558.563 170.425 558.766 170.549 558.915 170.695C559.199 170.971 559.549 171.442 559.963 172.098L560.828 173.484H559.439L558.808 172.371C558.312 171.49 557.912 170.94 557.608 170.72C557.399 170.557 557.095 170.478 556.692 170.481H556.019V173.484H554.875ZM556.019 169.498H557.408C558.073 169.498 558.523 169.399 558.766 169.202C559.008 169.005 559.129 168.743 559.129 168.422C559.129 168.213 559.07 168.025 558.954 167.861C558.836 167.695 558.676 167.574 558.47 167.492C558.261 167.413 557.881 167.371 557.321 167.371H556.019V169.498Z" fill="white"/>
</g>
<defs>
<clipPath id="clip0">
<rect width="600" height="186.058" fill="white" transform="translate(0 38)"/>
</clipPath>
</defs>
</svg>


BIN docs/static/img/databases/mariadb.png vendored (new file, 15 KiB; binary not shown)

Some files were not shown because too many files have changed in this diff.