Alerts and Reports

(version 1.0.1 and above)

Users can configure automated alerts and reports to send dashboards or charts to an email recipient or Slack channel.

  • Alerts are sent when a SQL condition is met
  • Reports are sent on a schedule

Alerts and reports are disabled by default. To turn them on, you need to do some setup, described here.

Requirements

Common requirements

In your superset_config.py:
  • the ALERT_REPORTS feature flag must be set to True.
  • the beat schedule in your CeleryConfig must contain an entry for reports.scheduler (see beat_schedule in the Detailed config example below).
  • at least one of the following must be configured, depending on what you want to use:
    • emails: SMTP_* settings
    • Slack messages: SLACK_API_TOKEN
In your Dockerfile:
  • You must install a headless browser to take screenshots of the charts and dashboards. Only Firefox and Chrome are currently supported.

    If you choose Chrome, you must also change the value of WEBDRIVER_TYPE to "chrome" in your superset_config.py.

Note: all the required components (headless browser, Redis, PostgreSQL database, Celery worker, and Celery beat) are already present in the Docker image if you are following Installing Superset Locally. All you need to do is add the required config (see Detailed config below). Set ALERT_REPORTS_NOTIFICATION_DRY_RUN to False in your superset config to disable dry-run mode and start receiving email/Slack notifications.
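For orientation, a minimal sketch of the two settings just mentioned (the full configuration is covered under Detailed config below):

  # superset_config.py -- minimal sketch; see "Detailed config" below for
  # the full Celery, SMTP, and webdriver settings.
  FEATURE_FLAGS = {
      "ALERT_REPORTS": True,
  }
  # Set to False to leave dry-run mode and actually deliver notifications.
  ALERT_REPORTS_NOTIFICATION_DRY_RUN = False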

Slack integration

To send alerts and reports to Slack channels, you need to create a new Slack Application on your workspace.

  1. Connect to your Slack workspace, then head to https://api.slack.com/apps.
  2. Create a new app.
  3. Go to the “OAuth & Permissions” section and give the following scopes to your app:
    • incoming-webhook
    • files:write
    • chat:write
  4. At the top of the “OAuth & Permissions” section, click “Install to Workspace”.
  5. Select a default channel for your app and continue. (You can post to any channel by inviting your Superset app into that channel).
  6. The app should now be installed in your workspace, and a “Bot User OAuth Access Token” should have been created. Copy that token into the SLACK_API_TOKEN variable of your superset_config.py.
  7. Restart the service (or run superset init) to pull in the new configuration.

Note: when you configure an alert or a report, the Slack channel list takes channel names without the leading ‘#’, e.g. use alerts instead of #alerts.
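If you want to sanity-check the token before wiring it into Superset, a quick sketch using the slack_sdk package (an assumption on our part; Superset itself only needs SLACK_API_TOKEN) could look like this:

  # Sketch: verify the bot token and channel outside of Superset.
  # Assumes `pip install slack_sdk`; the channel name is illustrative.
  from slack_sdk import WebClient

  client = WebClient(token="xoxb-...")  # the Bot User OAuth Access Token
  print(client.auth_test())             # confirms the token is valid
  # Channel name without the leading '#', matching Superset's convention:
  client.chat_postMessage(channel="alerts", text="Superset alert test")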

Kubernetes specific

  • You must have a Celery beat pod running. If you’re using the chart included in the GitHub repository under helm/superset, you need to put supersetCeleryBeat.enabled = true in your values override (see the sketch below).
  • See the dedicated docs on Kubernetes installation for more general details.
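For example, a minimal values override enabling the beat pod (the key name follows the chart mentioned above):

  # values-override.yaml -- sketch for the helm/superset chart
  supersetCeleryBeat:
    enabled: true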

Docker-compose specific

Your docker-compose.yaml must include:
  • a Redis message broker
  • a PostgreSQL database instead of SQLite
  • one or more Celery workers
  • a single Celery beat

Detailed config

The following configurations need to be added to the superset_config.py file. This file is loaded when the image runs, and any configurations in it will override the default configurations found in config.py.

You can find documentation about each field in the default config.py in the GitHub repository under superset/config.py.

You need to replace default values with your custom Redis, Slack and/or SMTP config.

In the CeleryConfig, only the beat_schedule is relevant to this feature; the rest of the CeleryConfig can be changed to suit your needs.

  from celery.schedules import crontab

  FEATURE_FLAGS = {
      "ALERT_REPORTS": True
  }

  REDIS_HOST = "redis-superset"
  REDIS_PORT = "6379"

  class CeleryConfig:
      broker_url = 'redis://%s:%s/0' % (REDIS_HOST, REDIS_PORT)
      imports = ('superset.sql_lab', "superset.tasks", "superset.tasks.thumbnails", )
      result_backend = 'redis://%s:%s/0' % (REDIS_HOST, REDIS_PORT)
      worker_prefetch_multiplier = 10
      task_acks_late = True
      task_annotations = {
          'sql_lab.get_sql_results': {
              'rate_limit': '100/s',
          },
          'email_reports.send': {
              'rate_limit': '1/s',
              'time_limit': 600,
              'soft_time_limit': 600,
              'ignore_result': True,
          },
      }
      beat_schedule = {
          'reports.scheduler': {
              'task': 'reports.scheduler',
              'schedule': crontab(minute='*', hour='*'),
          },
          'reports.prune_log': {
              'task': 'reports.prune_log',
              'schedule': crontab(minute=0, hour=0),
          },
      }

  CELERY_CONFIG = CeleryConfig

  SCREENSHOT_LOCATE_WAIT = 100
  SCREENSHOT_LOAD_WAIT = 600

  # Slack configuration
  SLACK_API_TOKEN = "xoxb-"

  # Email configuration
  SMTP_HOST = "smtp.sendgrid.net"  # change to your host
  SMTP_STARTTLS = True
  SMTP_SSL = False
  SMTP_USER = "your_user"
  SMTP_PORT = 2525  # your port, e.g. 587
  SMTP_PASSWORD = "your_password"
  SMTP_MAIL_FROM = "noreply@youremail.com"

  # WebDriver configuration
  # If you use Firefox, you can stick with the default values
  # If you use Chrome, then add the following WEBDRIVER_TYPE and WEBDRIVER_OPTION_ARGS
  WEBDRIVER_TYPE = "chrome"
  WEBDRIVER_OPTION_ARGS = [
      "--force-device-scale-factor=2.0",
      "--high-dpi-support=2.0",
      "--headless",
      "--disable-gpu",
      "--disable-dev-shm-usage",
      "--no-sandbox",
      "--disable-setuid-sandbox",
      "--disable-extensions",
  ]

  # This is for internal use, you can keep http
  WEBDRIVER_BASEURL = "http://superset:8088"
  # This is the link sent to the recipient, change to your domain, e.g. https://superset.mydomain.com
  WEBDRIVER_BASEURL_USER_FRIENDLY = "http://localhost:8088"

Custom Dockerfile

A webdriver (and headless browser) is needed to capture screenshots of the charts and dashboards, which are then sent to the recipient. Since the base Superset image does not include a webdriver, we need to extend the image and install one.

Using Firefox

  FROM apache/superset:1.0.1

  USER root

  RUN apt-get update && \
      apt-get install --no-install-recommends -y firefox-esr

  ENV GECKODRIVER_VERSION=0.29.0
  RUN wget -q https://github.com/mozilla/geckodriver/releases/download/v${GECKODRIVER_VERSION}/geckodriver-v${GECKODRIVER_VERSION}-linux64.tar.gz && \
      tar -x geckodriver -zf geckodriver-v${GECKODRIVER_VERSION}-linux64.tar.gz -O > /usr/bin/geckodriver && \
      chmod 755 /usr/bin/geckodriver && \
      rm geckodriver-v${GECKODRIVER_VERSION}-linux64.tar.gz

  RUN pip install --no-cache gevent psycopg2 redis

  USER superset

Using Chrome

  FROM apache/superset:1.0.1

  USER root

  RUN apt-get update && \
      wget -q https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb && \
      apt-get install -y --no-install-recommends ./google-chrome-stable_current_amd64.deb && \
      rm -f google-chrome-stable_current_amd64.deb

  RUN export CHROMEDRIVER_VERSION=$(curl --silent https://chromedriver.storage.googleapis.com/LATEST_RELEASE_88) && \
      wget -q https://chromedriver.storage.googleapis.com/${CHROMEDRIVER_VERSION}/chromedriver_linux64.zip && \
      unzip chromedriver_linux64.zip -d /usr/bin && \
      chmod 755 /usr/bin/chromedriver && \
      rm -f chromedriver_linux64.zip

  RUN pip install --no-cache gevent psycopg2 redis

  USER superset

Don’t forget to set WEBDRIVER_TYPE and WEBDRIVER_OPTION_ARGS in your config if you use Chrome.

Summary of steps to turn on alerts and reporting:

Using the templates below,

  1. Create a new directory and create the Dockerfile
  2. Build the extended image using the Dockerfile (a consolidated command sketch follows this list)
  3. Create the docker-compose.yaml file in the same directory
  4. Create a new subdirectory called config
  5. Create the superset_config.py file in the config subdirectory
  6. Run the image using docker-compose up in the same directory as the docker-compose.yaml file
  7. In a new terminal window, upgrade the DB by running docker exec -it superset-1.0.1-extended superset db upgrade
  8. Then run docker exec -it superset-1.0.1-extended superset init
  9. Then setup your admin user if need be, docker exec -it superset-1.0.1-extended superset fab create-admin
  10. Finally, restart the running instance - CTRL-C, then docker-compose up
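As a concrete sketch of those commands, run from the directory that holds the Dockerfile and docker-compose.yaml (the image tag and container name follow the templates in this guide):

  # Step 2: tag the image so it matches the docker-compose.yaml below
  docker build -t superset-1.0.1-extended .
  # The template declares the `db` volume as external, so create it once:
  docker volume create db
  # Step 6: start everything
  docker-compose up
  # Steps 7-9, in a second terminal:
  docker exec -it superset-1.0.1-extended superset db upgrade
  docker exec -it superset-1.0.1-extended superset init
  docker exec -it superset-1.0.1-extended superset fab create-admin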

(note: v1.0.1 was current at the time of writing; you can change the version number to the latest if a newer version is available)

Docker compose

The docker compose file lists the services that will be used when running the image. The specific services needed for alerts and reporting are outlined below.

Redis message broker

To ferry requests between the Celery worker and the Superset instance, we use a message broker. This template uses Redis.

Replacing SQLite with Postgres

While it might be possible to use SQLite for alerts and reporting, we highly recommend using a more production-ready database for Superset in general. Our template uses PostgreSQL. Superset also needs to be pointed at that database; a sketch of the connection string follows.
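A minimal sketch, assuming the service name and credentials from the docker-compose.yaml template below:

  # In superset_config.py -- assumes the postgres service, user, password,
  # and database name from the compose template in this guide.
  SQLALCHEMY_DATABASE_URI = "postgresql+psycopg2://superset:superset@postgres/superset"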

Celery worker

The worker will process the tasks that need to be performed when an alert or report is fired.

Celery beat

The beat is the scheduler that tells the worker when to perform its tasks. This schedule is defined when you create the alert or report.

Full docker-compose.yaml configuration

The Redis, Postgres, Celery worker and Celery beat services are defined in the template:

Config for docker-compose.yaml:

  version: '3.6'
  services:
    redis:
      image: redis:6.0.9-buster
      restart: on-failure
      volumes:
        - redis:/data
    postgres:
      image: postgres
      restart: on-failure
      environment:
        POSTGRES_DB: superset
        POSTGRES_PASSWORD: superset
        POSTGRES_USER: superset
      volumes:
        - db:/var/lib/postgresql/data
    worker:
      image: superset-1.0.1-extended
      restart: on-failure
      healthcheck:
        disable: true
      depends_on:
        - superset
        - postgres
        - redis
      command: "celery --app=superset.tasks.celery_app:app worker --pool=gevent --concurrency=500"
      volumes:
        - ./config/:/app/pythonpath/
    beat:
      image: superset-1.0.1-extended
      restart: on-failure
      healthcheck:
        disable: true
      depends_on:
        - superset
        - postgres
        - redis
      command: "celery --app=superset.tasks.celery_app:app beat --pidfile /tmp/celerybeat.pid --schedule /tmp/celerybeat-schedule"
      volumes:
        - ./config/:/app/pythonpath/
    superset:
      image: superset-1.0.1-extended
      restart: on-failure
      environment:
        - SUPERSET_PORT=8088
      ports:
        - "8088:8088"
      depends_on:
        - postgres
        - redis
      command: "gunicorn --bind 0.0.0.0:8088 --access-logfile - --error-logfile - --workers 5 --worker-class gthread --threads 4 --timeout 200 --limit-request-line 4094 --limit-request-field_size 8190 superset.app:create_app()"
      volumes:
        - ./config/:/app/pythonpath/
  volumes:
    db:
      external: true
    redis:
      external: false

Summary

With the extended image created using the Dockerfile, that image running via docker-compose.yaml, and the required configurations in superset_config.py, you should now have alerts and reporting working correctly.

  • The above templates also work in a Docker swarm environment; you would just need to add a deploy: section to the Superset, Redis, and Postgres services, along with the specific configs for your swarm. A sketch follows.
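For example, a minimal deploy section (the values are placeholders, not recommendations):

  # Sketch: swarm-mode deploy keys added to a service
  superset:
    image: superset-1.0.1-extended
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure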

Old Reports feature

Scheduling and Emailing Reports

(version 0.38 and below)

Email Reports

Email reports allow users to schedule email reports for:

  • chart and dashboard visualizations (attachment or inline)
  • chart data (CSV attachment or inline table)

Enable email reports in your superset_config.py file:

  ENABLE_SCHEDULED_EMAIL_REPORTS = True

This flag enables some permissions that are stored in your database, so you’ll want to run superset init again if you are running this in a dev environment. Now you will find two new items in the navigation bar that allow you to schedule email reports:

  • Manage > Dashboard Emails
  • Manage > Chart Email Schedules

Schedules are defined in crontab format and each schedule can have a list of recipients (all of them can receive a single mail, or separate mails). For audit purposes, all outgoing mails can have a mandatory BCC.

In order for reports to get picked up, you need to configure a Celery worker and a Celery beat (see the section above, “Celery Tasks”). Your Celery configuration also needs an entry email_reports.schedule_hourly in CELERYBEAT_SCHEDULE, as sketched below.
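A sketch of that entry, following the crontab style used elsewhere in this guide (the exact schedule is up to you):

  # Inside your Celery config
  from celery.schedules import crontab

  CELERYBEAT_SCHEDULE = {
      'email_reports.schedule_hourly': {
          'task': 'email_reports.schedule_hourly',
          'schedule': crontab(hour='*', minute=1),
      },
  }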

To send emails you need to configure SMTP settings in your superset_config.py configuration file.

  import os

  EMAIL_NOTIFICATIONS = True

  SMTP_HOST = "email-smtp.eu-west-1.amazonaws.com"
  SMTP_STARTTLS = True
  SMTP_SSL = False
  SMTP_USER = "smtp_username"
  SMTP_PORT = 25
  SMTP_PASSWORD = os.environ.get("SMTP_PASSWORD")
  SMTP_MAIL_FROM = "insights@komoot.com"

To render dashboards, you need a local browser installed on your Superset instance, and you’ll need to adjust WEBDRIVER_TYPE accordingly in your configuration. You also need to specify on behalf of which username to render the dashboards. In general, dashboards and charts are not accessible to unauthorized requests, which is why the worker needs to take over the credentials of an existing user to take a snapshot.

  THUMBNAIL_SELENIUM_USER = 'username_with_permission_to_access_dashboards'

Important notes

  • Be mindful of the concurrency setting for Celery (e.g. using -c 4). Selenium/webdriver instances can consume a lot of CPU / memory on your servers.
  • In some cases, if you notice a lot of leaked geckodriver processes, try running your Celery processes with celery worker --pool=prefork --max-tasks-per-child=128 ...
  • It is recommended to run separate workers for the sql_lab and email_reports tasks. This can be done using the queue field in CELERY_ANNOTATIONS (see the sketch after this list).
  • Adjust WEBDRIVER_BASEURL in your configuration file if your Celery workers can’t access Superset via its default value of http://0.0.0.0:8080/.
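A sketch of that queue routing, with illustrative queue names:

  # Sketch: pin the tasks to dedicated queues via annotations.
  CELERY_ANNOTATIONS = {
      'sql_lab.get_sql_results': {'queue': 'sql_lab'},
      'email_reports.send': {'queue': 'email_reports'},
  }
  # Then start one worker per queue, e.g.:
  #   celery worker --app=superset.tasks.celery_app:app -Q sql_lab
  #   celery worker --app=superset.tasks.celery_app:app -Q email_reports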

Scheduling Queries

You can optionally allow your users to schedule queries directly in SQL Lab. This is done by adding extra metadata to saved queries, which are then picked up by an external scheduler (like Apache Airflow).

To allow scheduled queries, add a SCHEDULED_QUERIES block to your configuration file:

  SCHEDULED_QUERIES = {
      # This information is collected when the user clicks "Schedule query",
      # and saved into the `extra` field of saved queries.
      # See: https://github.com/mozilla-services/react-jsonschema-form
      'JSONSCHEMA': {
          'title': 'Schedule',
          'description': (
              'In order to schedule a query, you need to specify when it '
              'should start running, when it should stop running, and how '
              'often it should run. You can also optionally specify '
              'dependencies that should be met before the query is '
              'executed. Please read the documentation for best practices '
              'and more information on how to specify dependencies.'
          ),
          'type': 'object',
          'properties': {
              'output_table': {
                  'type': 'string',
                  'title': 'Output table name',
              },
              'start_date': {
                  'type': 'string',
                  'title': 'Start date',
                  # date-time is parsed using the chrono library, see
                  # https://www.npmjs.com/package/chrono-node#usage
                  'format': 'date-time',
                  'default': 'tomorrow at 9am',
              },
              'end_date': {
                  'type': 'string',
                  'title': 'End date',
                  # date-time is parsed using the chrono library, see
                  # https://www.npmjs.com/package/chrono-node#usage
                  'format': 'date-time',
                  'default': '9am in 30 days',
              },
              'schedule_interval': {
                  'type': 'string',
                  'title': 'Schedule interval',
              },
              'dependencies': {
                  'type': 'array',
                  'title': 'Dependencies',
                  'items': {
                      'type': 'string',
                  },
              },
          },
      },
      'UISCHEMA': {
          'schedule_interval': {
              'ui:placeholder': '@daily, @weekly, etc.',
          },
          'dependencies': {
              'ui:help': (
                  'Check the documentation for the correct format when '
                  'defining dependencies.'
              ),
          },
      },
      'VALIDATION': [
          # ensure that start_date <= end_date
          {
              'name': 'less_equal',
              'arguments': ['start_date', 'end_date'],
              'message': 'End date cannot be before start date',
              # this is where the error message is shown
              'container': 'end_date',
          },
      ],
      # link to the scheduler; this example links to an Airflow pipeline
      # that uses the query id and the output table as its name
      'linkback': (
          'https://airflow.example.com/admin/airflow/tree?'
          'dag_id=query_${id}_${extra_json.schedule_info.output_table}'
      ),
  }

This configuration is based on react-jsonschema-form and will add a menu item called “Schedule” to SQL Lab. When the menu item is clicked, a modal will show up where the user can add the metadata required for scheduling the query.

This information can then be retrieved from the endpoint /savedqueryviewapi/api/read and used to schedule the queries that have scheduled_queries in their JSON metadata. For schedulers other than Airflow, additional fields can be easily added to the configuration file above.
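For example, a sketch of pulling that metadata with the requests package (an assumption on our part; the authentication details are deployment-specific and the session cookie below is a placeholder):

  # Sketch: list saved queries that carry scheduling metadata.
  import json
  import requests

  AUTH_COOKIES = {"session": "..."}  # placeholder: obtain a valid session first

  resp = requests.get(
      "https://superset.example.com/savedqueryviewapi/api/read",
      cookies=AUTH_COOKIES,
  )
  # Assumes the rows come back under "result", per Flask-AppBuilder's api/read.
  for saved_query in resp.json()["result"]:
      extra = json.loads(saved_query.get("extra_json") or "{}")
      if "schedule_info" in extra:
          print(saved_query["id"], extra["schedule_info"])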