diff --git a/RELEASING/release-notes-2-0/README.md b/RELEASING/release-notes-2-0/README.md
index 265dd32941..617159100d 100644
--- a/RELEASING/release-notes-2-0/README.md
+++ b/RELEASING/release-notes-2-0/README.md
@@ -34,7 +34,7 @@ Superset 2.0 is a big step forward. This release cleans up many legacy code path
 
 - New GitHub workflow to test Storybook Netlify instance nightly ([#19852](https://github.com/apache/superset/pull/19852))
 
-- Minimum requirement for Superset is now Python 3.8 ([#19017](https://github.com/apache/superset/pull/19017)
+- Minimum requirement for Superset is now Python 3.8 ([#19017](https://github.com/apache/superset/pull/19017))
 
 ## Features
 
diff --git a/docs/docs/frequently-asked-questions.mdx b/docs/docs/frequently-asked-questions.mdx
index 3007584ab1..8c9fa034cf 100644
--- a/docs/docs/frequently-asked-questions.mdx
+++ b/docs/docs/frequently-asked-questions.mdx
@@ -154,7 +154,7 @@ Table schemas evolve, and Superset needs to reflect that. It’s pretty common i
 dashboard to want to add a new dimension or metric. To get Superset to discover your new columns,
 all you have to do is to go to **Data -> Datasets**, click the edit icon next to the dataset whose
 schema has changed, and hit **Sync columns from source** from the **Columns** tab.
-Behind the scene, the new columns will get merged it. Following this, you may want to re-edit the
+Behind the scene, the new columns will get merged. Following this, you may want to re-edit the
 table afterwards to configure the Columns tab, check the appropriate boxes and save again.
 
 ### What database engine can I use as a backend for Superset?
@@ -220,7 +220,7 @@ and write your own connector. The only example of this at the moment is the Drui
 is getting superseded by Druid’s growing SQL support and the recent availability of a DBAPI and
 SQLAlchemy driver. If the database you are considering integrating has any kind of of SQL support,
 it’s probably preferable to go the SQLAlchemy route. Note that for a native connector to be possible
-the database needs to have support for running OLAP-type queries and should be able to things that
+the database needs to have support for running OLAP-type queries and should be able to do things that
 are typical in basic SQL:
 
 - aggregate data
diff --git a/docs/docs/installation/event-logging.mdx b/docs/docs/installation/event-logging.mdx
index e6b0f8b356..f5dcb53c8d 100644
--- a/docs/docs/installation/event-logging.mdx
+++ b/docs/docs/installation/event-logging.mdx
@@ -56,5 +56,5 @@ from superset.stats_logger import StatsdStatsLogger
 STATS_LOGGER = StatsdStatsLogger(host='localhost', port=8125, prefix='superset')
 ```
 
-Note that it’s also possible to implement you own logger by deriving
+Note that it’s also possible to implement your own logger by deriving
 `superset.stats_logger.BaseStatsLogger`.
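
The last hunk above touches the note about implementing a custom logger by deriving `superset.stats_logger.BaseStatsLogger`. For context, here is a minimal sketch of such a subclass; it assumes the base class exposes StatsD-style `incr`, `decr`, `timing`, and `gauge` hooks, and the `LoggingStatsLogger` name and the `superset_config.py` wiring are illustrative only, so check `superset/stats_logger.py` for the exact interface.

```python
# Illustrative sketch only: assumes BaseStatsLogger defines StatsD-style hooks
# (incr/decr/timing/gauge); verify the exact signatures in superset/stats_logger.py.
import logging

from superset.stats_logger import BaseStatsLogger

logger = logging.getLogger(__name__)


class LoggingStatsLogger(BaseStatsLogger):  # hypothetical example class
    """Send every metric event to the standard Python logger instead of StatsD."""

    def incr(self, key):
        logger.info("[stats_logger] incr: %s", key)

    def decr(self, key):
        logger.info("[stats_logger] decr: %s", key)

    def timing(self, key, value):
        logger.info("[stats_logger] timing %s: %s", key, value)

    def gauge(self, key, value):
        logger.info("[stats_logger] gauge %s: %s", key, value)


# In superset_config.py the custom logger would be wired up the same way as
# StatsdStatsLogger in the docs above (assumption):
# STATS_LOGGER = LoggingStatsLogger()
```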