docs: fix spelling and grammar (#26381)

Fenil Mehta 2024-01-02 23:36:45 +05:30 committed by GitHub
parent 6e443ad1eb
commit 24e6ec3dca
3 changed files with 4 additions and 4 deletions


@@ -34,7 +34,7 @@ Superset 2.0 is a big step forward. This release cleans up many legacy code path
- New GitHub workflow to test Storybook Netlify instance nightly ([#19852](https://github.com/apache/superset/pull/19852))
-- Minimum requirement for Superset is now Python 3.8 ([#19017](https://github.com/apache/superset/pull/19017)
+- Minimum requirement for Superset is now Python 3.8 ([#19017](https://github.com/apache/superset/pull/19017))
## Features


@@ -154,7 +154,7 @@ Table schemas evolve, and Superset needs to reflect that. It's pretty common i
dashboard to want to add a new dimension or metric. To get Superset to discover your new columns,
all you have to do is to go to **Data -> Datasets**, click the edit icon next to the dataset
whose schema has changed, and hit **Sync columns from source** from the **Columns** tab.
-Behind the scenes, the new columns will get merged it. Following this, you may want to re-edit the
+Behind the scenes, the new columns will get merged. Following this, you may want to re-edit the
table afterwards to configure the Columns tab, check the appropriate boxes and save again.
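
If clicking through the UI is not practical (for example in automation), the same sync can presumably be triggered through Superset's REST API. The sketch below is only a rough illustration: the `/api/v1/dataset/<pk>/refresh` endpoint, host, credentials, and dataset id are assumptions, not part of the original docs.

```python
# Hypothetical sketch: trigger "Sync columns from source" via the REST API instead of the UI.
# Host, credentials, dataset id, and the exact endpoint path are assumptions for illustration.
import requests

BASE = "http://localhost:8088/api/v1"

# Obtain a JWT access token (placeholder admin credentials).
login = requests.post(
    f"{BASE}/security/login",
    json={"username": "admin", "password": "admin", "provider": "db", "refresh": True},
)
login.raise_for_status()
token = login.json()["access_token"]

# Ask Superset to re-read the table schema and merge in any new columns.
dataset_id = 42  # placeholder: id of the dataset whose schema changed
resp = requests.put(
    f"{BASE}/dataset/{dataset_id}/refresh",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
```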
### What database engine can I use as a backend for Superset?
@@ -220,7 +220,7 @@ and write your own connector. The only example of this at the moment is the Drui
is getting superseded by Druid's growing SQL support and the recent availability of a DBAPI and
SQLAlchemy driver. If the database you are considering integrating has any kind of SQL support,
it's probably preferable to go the SQLAlchemy route. Note that for a native connector to be possible
-the database needs to have support for running OLAP-type queries and should be able to things that
+the database needs to have support for running OLAP-type queries and should be able to do things that
are typical in basic SQL:
- aggregate data
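
As a rough, self-contained illustration of the SQLAlchemy route mentioned above (the connection URI, table, and column names are invented for the example), an OLAP-style aggregation boils down to ordinary SQL once a dialect and DBAPI exist:

```python
# Illustrative sketch only: with a SQLAlchemy dialect available, OLAP-type queries
# are plain SQL. The URI, table, and columns below are made up for the example.
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://user:secret@localhost:5432/analytics")  # placeholder URI

# A typical basic-SQL aggregation: filter, group, aggregate, order, limit.
query = text("""
    SELECT country, SUM(revenue) AS total_revenue
    FROM sales
    WHERE order_date >= :start
    GROUP BY country
    ORDER BY total_revenue DESC
    LIMIT 10
""")

with engine.connect() as conn:
    for row in conn.execute(query, {"start": "2024-01-01"}):
        print(row.country, row.total_revenue)
```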


@@ -56,5 +56,5 @@ from superset.stats_logger import StatsdStatsLogger
STATS_LOGGER = StatsdStatsLogger(host='localhost', port=8125, prefix='superset')
```
-Note that it's also possible to implement you own logger by deriving
+Note that it's also possible to implement your own logger by deriving
`superset.stats_logger.BaseStatsLogger`.
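
For example, a minimal custom logger might look like the sketch below; the method names (`incr`, `decr`, `timing`, `gauge`) are assumed from how `StatsdStatsLogger` is used above, so check `superset/stats_logger.py` for the exact signatures in your version.

```python
# Hypothetical sketch of a custom stats logger deriving from BaseStatsLogger.
# Method names/signatures are assumptions mirroring the StatsD logger; verify
# against superset/stats_logger.py for your Superset version.
import logging

from superset.stats_logger import BaseStatsLogger

log = logging.getLogger("superset.custom_stats")


class LoggingStatsLogger(BaseStatsLogger):
    """Forward every metric event to the standard Python logging system."""

    def incr(self, key):
        log.info("[stats] incr %s", key)

    def decr(self, key):
        log.info("[stats] decr %s", key)

    def timing(self, key, value):
        log.info("[stats] timing %s=%s", key, value)

    def gauge(self, key, value):
        log.info("[stats] gauge %s=%s", key, value)


# Then, in superset_config.py (assumed usage):
# STATS_LOGGER = LoggingStatsLogger()
```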