Alerts and Reports

Users can configure automated alerts and reports to send dashboards or charts to an email recipient or Slack channel.

  • Alerts are sent when a SQL condition is reached
  • Reports are sent on a schedule

Alerts and reports are disabled by default. To turn them on, you need to do some setup, described here.

Requirements

Commons

In your superset_config.py or superset_config_docker.py
  • The "ALERT_REPORTS" feature flag must be set to True.
  • beat_schedule in CeleryConfig must contain a schedule for reports.scheduler.
  • At least one of the following must be configured, depending on what you want to use:
    • emails: SMTP_* settings
    • Slack messages: SLACK_API_TOKEN
Disable dry-run mode

Screenshots will be taken, but no messages will actually be sent, as long as ALERT_REPORTS_NOTIFICATION_DRY_RUN = True, which is its default value in docker/pythonpath_dev/superset_config.py. To disable dry-run mode and start receiving email/Slack notifications, set ALERT_REPORTS_NOTIFICATION_DRY_RUN to False in your Superset config.
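
For example, in your superset_config.py:

    ALERT_REPORTS_NOTIFICATION_DRY_RUN = False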

In your Dockerfile
  • You must install a headless browser for taking screenshots of the charts and dashboards. Only Firefox and Chrome are currently supported.

    If you choose Chrome, you must also change the value of WEBDRIVER_TYPE to "chrome" in your superset_config.py.

Note: All the components required (Firefox headless browser, Redis, Postgres db, celery worker and celery beat) are present in the dev docker image if you are following Installing Superset Locally. All you need to do is add the required config variables described in this guide (See Detailed Config).

If you are running a non-dev docker image, e.g., a stable release like apache/superset:2.0.1, that image does not include a headless browser. Only the superset_worker container needs this headless browser to browse to the target chart or dashboard. You can either install and configure the headless browser - see “Custom Dockerfile” section below - or when deploying via docker compose, modify your docker-compose.yml file to use a dev image for the worker container and a stable release image for the superset_app container.

Note: In this context, a “dev image” is the same application software as its corresponding non-dev image, just bundled with additional tools. So an image like 2.0.1-dev is identical to 2.0.1 when it comes to stability, functionality, and running in production. The actual “in-development” versions of Superset - cutting-edge and unstable - are not tagged with version numbers on Docker Hub and will display version 0.0.0-dev within the Superset UI.

Slack integration

To send alerts and reports to Slack channels, you need to create a new Slack Application on your workspace.

  1. Connect to your Slack workspace, then head to https://api.slack.com/apps.
  2. Create a new app.
  3. Go to the “OAuth & Permissions” section, and give the following scopes to your app:
    • incoming-webhook
    • files:write
    • chat:write
  4. At the top of the “OAuth & Permissions” section, click “Install to Workspace”.
  5. Select a default channel for your app and continue. (You can post to any channel by inviting your Superset app into that channel).
  6. The app should now be installed in your workspace, and a “Bot User OAuth Access Token” should have been created. Copy that token into the SLACK_API_TOKEN variable of your superset_config.py.
  7. Restart the service (or run superset init) to pull in the new configuration.

Note: when you configure an alert or a report, the Slack channel list takes channel names without the leading ‘#’, e.g. use alerts instead of #alerts.
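
With the token in hand, the relevant line in superset_config.py is just the following (the token value is a placeholder; paste your own Bot User OAuth Access Token):

    SLACK_API_TOKEN = "xoxb-..."  # placeholder; use your app's Bot User OAuth Access Token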

Kubernetes-specific

  • You must have a celery beat pod running. If you’re using the chart included in the GitHub repository under helm/superset, you need to set supersetCeleryBeat.enabled = true in your values override.
  • See the dedicated docs about Kubernetes installation for more general details.

Docker Compose specific

You must have in your docker-compose.yml
  • A Redis message broker
  • A PostgreSQL DB instead of SQLite
  • One or more Celery workers
  • A single Celery beat

This process also works in a Docker Swarm environment; you would just need to add deploy: to the Superset, Redis, and Postgres services, along with your specific configs for your swarm.

Detailed config

The following configurations need to be added to the superset_config.py file. This file is loaded when the image runs, and any configurations in it will override the default configurations found in config.py.

You can find documentation about each field in the default config.py in the GitHub repository under superset/config.py.

You need to replace default values with your custom Redis, Slack and/or SMTP config.

Superset uses Celery beat and Celery worker(s) to send alerts and reports.

  • The beat is the scheduler that tells the worker when to perform its tasks. This schedule is defined when you create the alert or report.
  • The worker will process the tasks that need to be performed when an alert or report is fired.

In the CeleryConfig, only the beat_schedule is relevant to this feature; the rest of the CeleryConfig can be changed to fit your needs.

    from celery.schedules import crontab

    FEATURE_FLAGS = {
        "ALERT_REPORTS": True
    }

    REDIS_HOST = "superset_cache"
    REDIS_PORT = "6379"

    class CeleryConfig:
        broker_url = f"redis://{REDIS_HOST}:{REDIS_PORT}/0"
        imports = (
            "superset.sql_lab",
            "superset.tasks.scheduler",
        )
        result_backend = f"redis://{REDIS_HOST}:{REDIS_PORT}/0"
        worker_prefetch_multiplier = 10
        task_acks_late = True
        task_annotations = {
            "sql_lab.get_sql_results": {
                "rate_limit": "100/s",
            },
        }
        beat_schedule = {
            "reports.scheduler": {
                "task": "reports.scheduler",
                "schedule": crontab(minute="*", hour="*"),
            },
            "reports.prune_log": {
                "task": "reports.prune_log",
                "schedule": crontab(minute=0, hour=0),
            },
        }

    CELERY_CONFIG = CeleryConfig

    SCREENSHOT_LOCATE_WAIT = 100
    SCREENSHOT_LOAD_WAIT = 600

    # Slack configuration
    SLACK_API_TOKEN = "xoxb-"

    # Email configuration
    SMTP_HOST = "smtp.sendgrid.net"  # change to your host
    SMTP_PORT = 2525  # your port, e.g. 587
    SMTP_STARTTLS = True
    SMTP_SSL_SERVER_AUTH = True  # If you're using an SMTP server with a valid certificate
    SMTP_SSL = False
    SMTP_USER = "your_user"  # use the empty string "" if using an unauthenticated SMTP server
    SMTP_PASSWORD = "your_password"  # use the empty string "" if using an unauthenticated SMTP server
    SMTP_MAIL_FROM = "noreply@youremail.com"
    EMAIL_REPORTS_SUBJECT_PREFIX = "[Superset] "  # optional - overwrites default value in config.py of "[Report] "

    # WebDriver configuration
    # If you use Firefox, you can stick with default values
    # If you use Chrome, then add the following WEBDRIVER_TYPE and WEBDRIVER_OPTION_ARGS
    WEBDRIVER_TYPE = "chrome"
    WEBDRIVER_OPTION_ARGS = [
        "--force-device-scale-factor=2.0",
        "--high-dpi-support=2.0",
        "--headless",
        "--disable-gpu",
        "--disable-dev-shm-usage",
        "--no-sandbox",
        "--disable-setuid-sandbox",
        "--disable-extensions",
    ]

    # This is for internal use, you can keep http
    WEBDRIVER_BASEURL = "http://superset:8088"
    # This is the link sent to the recipient. Change to your domain, e.g. https://superset.mydomain.com
    WEBDRIVER_BASEURL_USER_FRIENDLY = "http://localhost:8088"

You also need to specify which username the dashboards should be rendered as. In general, dashboards and charts are not accessible to unauthorized requests, which is why the worker needs to assume the credentials of an existing user to take a snapshot.

By default, Alerts and Reports are executed as the owner of the alert/report object. To use a fixed user account, just change the config as follows (admin in this example):

    from superset.tasks.types import ExecutorType

    THUMBNAIL_SELENIUM_USER = 'admin'
    ALERT_REPORTS_EXECUTE_AS = [ExecutorType.SELENIUM]

Please refer to ExecutorType in the codebase for other executor types.
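
For instance, a hedged sketch that prefers the owner of the alert/report and falls back to the fixed selenium user (verify the available enum members against superset/tasks/types.py in your release):

    # Executors are tried in order. Members other than SELENIUM are assumed
    # here -- confirm them against ExecutorType in your Superset version.
    ALERT_REPORTS_EXECUTE_AS = [ExecutorType.OWNER, ExecutorType.SELENIUM]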

Important notes

  • Be mindful of the concurrency setting for celery (using -c 4). Selenium/webdriver instances can consume a lot of CPU / memory on your servers.
  • In some cases, if you notice a lot of leaked geckodriver processes, try running your celery processes with celery worker --pool=prefork --max-tasks-per-child=128 ...
  • It is recommended to run separate workers for the sql_lab and email_reports tasks. This can be done using the queue field in task_annotations (see the sketch after this list).
  • Adjust WEBDRIVER_BASEURL in your configuration file if celery workers can’t access Superset via its default value of http://0.0.0.0:8080/.
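
Here is a hedged sketch of that queue separation inside CeleryConfig; the queue names and the email_reports task name are illustrative, so match them to the task names actually registered in your deployment:

    # Route tasks to dedicated queues so separate workers can consume them.
    task_annotations = {
        "sql_lab.get_sql_results": {
            "rate_limit": "100/s",
            "queue": "sql_lab_queue",  # illustrative queue name
        },
        "email_reports.send": {        # illustrative task name
            "queue": "reports_queue",  # illustrative queue name
        },
    }

Each worker can then be started against its own queue, e.g. celery worker -Q sql_lab_queue for SQL Lab and celery worker -Q reports_queue for reports.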

Custom Dockerfile

If you’re running the dev version of a released Superset image, like apache/superset:2.0.1-dev, you should be set with the above.

But if you’re building your own image, or starting with a non-dev version, a webdriver (and headless browser) is needed to capture screenshots of the charts and dashboards which are then sent to the recipient. Here’s how you can modify your Dockerfile to take the screenshots either with Firefox or Chrome.

Using Firefox

    FROM apache/superset:2.0.1

    USER root

    RUN apt-get update && \
        apt-get install --no-install-recommends -y firefox-esr

    ENV GECKODRIVER_VERSION=0.29.0
    RUN wget -q https://github.com/mozilla/geckodriver/releases/download/v${GECKODRIVER_VERSION}/geckodriver-v${GECKODRIVER_VERSION}-linux64.tar.gz && \
        tar -x geckodriver -zf geckodriver-v${GECKODRIVER_VERSION}-linux64.tar.gz -O > /usr/bin/geckodriver && \
        chmod 755 /usr/bin/geckodriver && \
        rm geckodriver-v${GECKODRIVER_VERSION}-linux64.tar.gz

    RUN pip install --no-cache gevent psycopg2 redis

    USER superset

Using Chrome

    FROM apache/superset:2.0.1

    USER root

    RUN apt-get update && \
        wget -q https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb && \
        apt-get install -y --no-install-recommends ./google-chrome-stable_current_amd64.deb && \
        rm -f google-chrome-stable_current_amd64.deb

    RUN export CHROMEDRIVER_VERSION=$(curl --silent https://chromedriver.storage.googleapis.com/LATEST_RELEASE_102) && \
        wget -q https://chromedriver.storage.googleapis.com/${CHROMEDRIVER_VERSION}/chromedriver_linux64.zip && \
        unzip chromedriver_linux64.zip -d /usr/bin && \
        chmod 755 /usr/bin/chromedriver && \
        rm -f chromedriver_linux64.zip

    RUN pip install --no-cache gevent psycopg2 redis

    USER superset

Don’t forget to set WEBDRIVER_TYPE and WEBDRIVER_OPTION_ARGS in your config if you use Chrome.

Troubleshooting

There are many reasons that reports might not be working. Try these steps to check for specific issues.

Confirm feature flag is enabled and you have sufficient permissions

If you don’t see “Alerts & Reports” under the Manage section of the Settings dropdown in the Superset UI, you need to enable the ALERT_REPORTS feature flag (see above). Enable another feature flag and check that it took effect, to verify that your config file is being loaded.
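
For example, as a sanity check that your config file is loaded, you could temporarily enable a second flag whose effect you can observe in the UI (DASHBOARD_RBAC is used here purely as an illustration):

    FEATURE_FLAGS = {
        "ALERT_REPORTS": True,
        "DASHBOARD_RBAC": True,  # illustrative second flag, only to confirm the file loads
    }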

Log in as an admin user to ensure you have adequate permissions.

Check the logs of your Celery worker

This is the best source of information about the problem. In a docker compose deployment, you can do this with a command like docker logs superset_worker --since 1h.

Check web browser and webdriver installation

To take a screenshot, the worker visits the dashboard or chart using a headless browser, then takes a screenshot. If you are able to send a chart as CSV or text but can’t send as PNG, your problem may lie with the browser.

Superset docker images that have a tag ending with -dev have the Firefox headless browser and geckodriver already installed. You can test that these are installed and in the proper path by entering your Superset worker and running firefox --headless and then geckodriver. Both commands should start those applications.

If you are handling the installation of that software on your own, or wish to use Chromium instead, do your own verification to ensure that the headless browser opens successfully in the worker environment.
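
One hedged way to run that verification from Python, assuming a Firefox/geckodriver setup and the selenium package already present in the worker image (the URL is the internal docker compose address used elsewhere in this guide):

    from selenium import webdriver
    from selenium.webdriver.firefox.options import Options

    options = Options()
    options.add_argument("--headless")

    # If instantiation fails, the browser or webdriver installation is the problem.
    driver = webdriver.Firefox(options=options)
    driver.get("http://superset_app:8088/health")  # assumed internal URL
    print(driver.page_source)  # the health check body should contain "OK"
    driver.quit()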

Send a test email

One symptom of an invalid connection to an email server is receiving an error of [Errno 110] Connection timed out in your logs when the report tries to send.

Confirm via testing that your outbound email configuration is correct. Here is the simplest test, for an unauthenticated SMTP email service running on port 25. If you are sending over SSL, for instance, study how Superset’s codebase sends emails and then test with those commands and arguments.

Start Python in your worker environment, replace all example values, and run:

    import smtplib
    from email.mime.multipart import MIMEMultipart
    from email.mime.text import MIMEText

    from_email = 'superset_emails@example.com'
    to_email = 'your_email@example.com'

    msg = MIMEMultipart()
    msg['From'] = from_email
    msg['To'] = to_email
    msg['Subject'] = 'Superset SMTP config test'
    message = 'It worked'
    msg.attach(MIMEText(message))

    mailserver = smtplib.SMTP('smtpmail.example.com', 25)
    mailserver.sendmail(from_email, to_email, msg.as_string())
    mailserver.quit()

This should send an email.
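
If your server requires STARTTLS and authentication instead, a hedged variant of the last three lines (host, port, and credentials are placeholders) looks like:

    mailserver = smtplib.SMTP('smtpmail.example.com', 587)  # placeholder host and port
    mailserver.starttls()
    mailserver.login('your_user', 'your_password')  # placeholder credentials
    mailserver.sendmail(from_email, to_email, msg.as_string())
    mailserver.quit()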

Possible fixes:

  • Some cloud hosts disable outgoing unauthenticated SMTP email to prevent spam. For instance, Azure blocks port 25 by default on some machines. Enable that port or use another sending method.
  • Use another set of SMTP credentials that you verify works in this setup.

Browse to your report from the worker

The worker may be unable to reach the report. It will use the value of WEBDRIVER_BASEURL to browse to the report. If that route is invalid, or presents an authentication challenge that the worker can’t pass, the report screenshot will fail.

Check this by attempting to curl the URL of a report that you see in the error logs of your worker. For instance, from the worker environment, run curl http://superset_app:8088/superset/dashboard/1/. You may get different responses depending on whether the dashboard exists - for example, you may need to change the 1 in that URL. If there’s a URL in your logs from a failed report screenshot, that’s a good place to start. The goal is to determine a valid value for WEBDRIVER_BASEURL and determine if an issue like HTTPS or authentication is redirecting your worker.

In a deployment with authentication measures enabled like HTTPS and Single Sign-On, it may make sense to have the worker navigate directly to the Superset application running in the same location, avoiding the need to sign in. For instance, you could use WEBDRIVER_BASEURL="http://superset_app:8088" for a docker compose deployment, and set "force_https": False, in your TALISMAN_CONFIG.
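
Concretely, that setup might look like this in superset_config.py (a sketch assuming docker compose service names; merge "force_https" into your existing TALISMAN_CONFIG rather than replacing it wholesale):

    WEBDRIVER_BASEURL = "http://superset_app:8088"  # worker reaches the app directly
    TALISMAN_CONFIG = {
        "force_https": False,
        # ...keep the rest of your existing Talisman settings here
    }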

Scheduling Queries as Reports

You can optionally allow your users to schedule queries directly in SQL Lab. This is done by adding extra metadata to saved queries, which are then picked up by an external scheduler (like Apache Airflow).

To allow scheduled queries, add SCHEDULED_QUERIES to your configuration file:

    SCHEDULED_QUERIES = {
        # This information is collected when the user clicks "Schedule query",
        # and saved into the `extra` field of saved queries.
        # See: https://github.com/mozilla-services/react-jsonschema-form
        'JSONSCHEMA': {
            'title': 'Schedule',
            'description': (
                'In order to schedule a query, you need to specify when it '
                'should start running, when it should stop running, and how '
                'often it should run. You can also optionally specify '
                'dependencies that should be met before the query is '
                'executed. Please read the documentation for best practices '
                'and more information on how to specify dependencies.'
            ),
            'type': 'object',
            'properties': {
                'output_table': {
                    'type': 'string',
                    'title': 'Output table name',
                },
                'start_date': {
                    'type': 'string',
                    'title': 'Start date',
                    # date-time is parsed using the chrono library, see
                    # https://www.npmjs.com/package/chrono-node#usage
                    'format': 'date-time',
                    'default': 'tomorrow at 9am',
                },
                'end_date': {
                    'type': 'string',
                    'title': 'End date',
                    # date-time is parsed using the chrono library, see
                    # https://www.npmjs.com/package/chrono-node#usage
                    'format': 'date-time',
                    'default': '9am in 30 days',
                },
                'schedule_interval': {
                    'type': 'string',
                    'title': 'Schedule interval',
                },
                'dependencies': {
                    'type': 'array',
                    'title': 'Dependencies',
                    'items': {
                        'type': 'string',
                    },
                },
            },
        },
        'UISCHEMA': {
            'schedule_interval': {
                'ui:placeholder': '@daily, @weekly, etc.',
            },
            'dependencies': {
                'ui:help': (
                    'Check the documentation for the correct format when '
                    'defining dependencies.'
                ),
            },
        },
        'VALIDATION': [
            # ensure that start_date <= end_date
            {
                'name': 'less_equal',
                'arguments': ['start_date', 'end_date'],
                'message': 'End date cannot be before start date',
                # this is where the error message is shown
                'container': 'end_date',
            },
        ],
        # link to the scheduler; this example links to an Airflow pipeline
        # that uses the query id and the output table as its name
        'linkback': (
            'https://airflow.example.com/admin/airflow/tree?'
            'dag_id=query_${id}_${extra_json.schedule_info.output_table}'
        ),
    }

This configuration is based on react-jsonschema-form and will add a menu item called “Schedule” to SQL Lab. When the menu item is clicked, a modal will show up where the user can add the metadata required for scheduling the query.

This information can then be retrieved from the endpoint /savedqueryviewapi/api/read and used to schedule the queries that have scheduled_queries in their JSON metadata. For schedulers other than Airflow, additional fields can be easily added to the configuration file above.
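
As a hedged sketch of the scheduler side, the snippet below polls that endpoint and extracts the schedule metadata; the authentication mechanism and the exact response shape are assumptions, so inspect a real response from your deployment first:

    import json
    import requests

    BASE_URL = "https://superset.example.com"  # hypothetical Superset URL

    # Authentication is deployment-specific; a session cookie is assumed here.
    resp = requests.get(
        f"{BASE_URL}/savedqueryviewapi/api/read",
        cookies={"session": "your-session-cookie"},  # hypothetical auth
    )
    resp.raise_for_status()

    for saved_query in resp.json().get("result", []):
        # Scheduled queries carry schedule_info in their extra_json metadata
        # (the same field referenced by the linkback URL above). The field
        # may be serialized as a JSON string, depending on the endpoint.
        extra = saved_query.get("extra_json") or "{}"
        if isinstance(extra, str):
            extra = json.loads(extra)
        schedule_info = extra.get("schedule_info")
        if schedule_info:
            print(saved_query.get("id"), schedule_info.get("schedule_interval"))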