Troubleshooting

This section contains some ideas for troubleshooting common problems experienced with data retention.

Failed to start a background worker

```
"<TYPE_OF_BACKGROUND_JOB>": failed to start a background worker
```

You might see this error message in the logs if background workers aren’t properly configured.

To fix this error, make sure that `max_worker_processes`, `max_parallel_workers`, and `timescaledb.max_background_workers` are properly set. `timescaledb.max_background_workers` should equal the number of databases plus the number of concurrent background workers. `max_worker_processes` should equal the sum of `timescaledb.max_background_workers` and `max_parallel_workers`.
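
As a rough sketch, assume a deployment with 2 databases and 6 concurrent background jobs (8 background workers) plus 8 parallel workers. The values below are illustrative only; tune them for your own workload:

```
-- Illustrative values: 2 databases + 6 concurrent background jobs = 8
ALTER SYSTEM SET timescaledb.max_background_workers = 8;
ALTER SYSTEM SET max_parallel_workers = 8;
-- Sum of the two settings above
ALTER SYSTEM SET max_worker_processes = 16;
-- Changing max_worker_processes requires a PostgreSQL restart to take effect
```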

For more information, see the worker configuration docs.

Hypertable retention policy isn’t applying to continuous aggregates

A retention policy set on a hypertable does not apply to any continuous aggregates made from the hypertable. This allows you to set different retention periods for raw and summarized data. To apply a retention policy to a continuous aggregate, set the policy on the continuous aggregate itself.
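
For example, assuming a continuous aggregate named `conditions_summary_daily` (a hypothetical name), you could add a policy directly on it:

```
-- 'conditions_summary_daily' is a hypothetical continuous aggregate name
SELECT add_retention_policy('conditions_summary_daily', INTERVAL '12 months');
```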

Dropping chunks times out

Dropping a chunk requires an exclusive lock on that chunk. If another session is accessing the chunk, the drop operation can't acquire the lock, so it times out and fails. To resolve this problem, check what is locking the chunk. In some cases, this could be caused by a continuous aggregate or another process accessing the chunk. When the drop operation can get an exclusive lock on the chunk, it completes as expected.

For more information about locks, see the PostgreSQL lock monitoring documentation.
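
A minimal sketch of a lock check, using the standard `pg_locks` and `pg_stat_activity` views; the chunk name is a hypothetical example and should be replaced with the chunk that fails to drop:

```
-- Show sessions holding or waiting on locks for a given chunk
SELECT l.pid, l.mode, l.granted, a.state, a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.relation = '_timescaledb_internal._hyper_1_1_chunk'::regclass;
```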

Scheduled jobs stop running

Your scheduled jobs might stop running for various reasons. On self-hosted TimescaleDB, you can fix this by restarting background workers:

```
SELECT _timescaledb_internal.start_background_workers();
```

On Timescale Cloud and Managed Service for TimescaleDB, restart background workers by doing one of the following:

  • Run `SELECT timescaledb_pre_restore();`, followed by `SELECT timescaledb_post_restore();`, as shown in the sketch after this list.
  • Power the service off and on again. This might cause a few minutes of downtime while the service restores from backup and replays the write-ahead log.
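
A minimal sketch of the first option, run while connected to the service with psql:

```
-- Stops background workers, then starts them again
SELECT timescaledb_pre_restore();
SELECT timescaledb_post_restore();
```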