3 Database upgrade to primary keys

Overview

Since Zabbix 6.0, primary keys are used for all tables in new installations.

This section provides instructions for manually upgrading the history tables in existing installations to primary keys.
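For reference, the primary key added to the history tables is a composite key on (itemid, clock, ns), as reflected by the ON CONFLICT clauses used later in this section. Conceptually the change is equivalent to the simplified sketch below (for illustration only; the actual DDL is produced by history_pk_prepare.sql, not by this statement):

  -- simplified illustration only; run history_pk_prepare.sql instead of this
  ALTER TABLE history ADD PRIMARY KEY (itemid, clock, ns);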

Instructions are available for:

  • MySQL
  • PostgreSQL
  • TimescaleDB (v1.x and v2.x)
  • Oracle

Important notes

  • Make sure to back up the database before the upgrade (see the example after this list)
  • If your database uses partitions, contact your DB administrator or Zabbix support team for help
  • The CSV files can be removed after a successful upgrade to primary keys
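For example, a backup can be taken with the standard dump tools (a minimal sketch, assuming the database and user are named zabbix; adjust credentials, paths, and file names to your setup):

  # MySQL/MariaDB
  mysqldump -uzabbix -p zabbix > /tmp/zabbix_backup.sql
  # PostgreSQL
  sudo -u zabbix pg_dump zabbix > /tmp/zabbix_backup.sql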

MySQL

Export and import must be performed in tmux/screen, so that the session isn’t dropped.
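For example, the export/import commands can be run inside a named tmux session (the session name below is arbitrary):

  tmux new -s zabbix_pk_upgrade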

See also: Important notes

MySQL 5.7+/8.0+
  • Rename the old tables and create new tables by running history_pk_prepare.sql:
  mysql -uzabbix -p<password> zabbix < /usr/share/doc/zabbix-sql-scripts/mysql/history_pk_prepare.sql
  • Export and import data

mysqlsh (MySQL Shell) must be installed and able to connect to the DB. If the connection is made through a socket, the path to it may need to be stated explicitly.

Connect via mysqlsh:

  sudo mysqlsh -uroot -S /run/mysqld/mysqld.sock --no-password -Dzabbix

Run the following (CSVPATH can be adjusted as needed):

  CSVPATH="/var/lib/mysql-files";

  util.exportTable("history_old", CSVPATH + "/history.csv", { dialect: "csv" });
  util.importTable(CSVPATH + "/history.csv", {"dialect": "csv", "table": "history" });

  util.exportTable("history_uint_old", CSVPATH + "/history_uint.csv", { dialect: "csv" });
  util.importTable(CSVPATH + "/history_uint.csv", {"dialect": "csv", "table": "history_uint" });

  util.exportTable("history_str_old", CSVPATH + "/history_str.csv", { dialect: "csv" });
  util.importTable(CSVPATH + "/history_str.csv", {"dialect": "csv", "table": "history_str" });

  util.exportTable("history_log_old", CSVPATH + "/history_log.csv", { dialect: "csv" });
  util.importTable(CSVPATH + "/history_log.csv", {"dialect": "csv", "table": "history_log" });

  util.exportTable("history_text_old", CSVPATH + "/history_text.csv", { dialect: "csv" });
  util.importTable(CSVPATH + "/history_text.csv", {"dialect": "csv", "table": "history_text" });
  • Verify that everything works as expected (for example, by comparing row counts, as sketched below)
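A minimal sanity check (an illustrative query, not part of the official procedure) is to confirm that the row counts of the new and old tables match:

  SELECT (SELECT COUNT(*) FROM history)     AS history_new,
         (SELECT COUNT(*) FROM history_old) AS history_old;
  -- repeat for history_uint, history_str, history_log and history_text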

  • Drop old tables

  DROP TABLE history_old;
  DROP TABLE history_uint_old;
  DROP TABLE history_str_old;
  DROP TABLE history_log_old;
  DROP TABLE history_text_old;

MySQL <5.7, MariaDB (or if mysqlsh cannot be used for some reason)

This option is slower and more time-consuming; use it only if there is a reason not to use mysqlsh.

  • Rename the old tables and create new tables by running history_pk_prepare.sql:
  mysql -uzabbix -p<password> zabbix < /usr/share/doc/zabbix-sql-scripts/mysql/history_pk_prepare.sql
  • Export and import data

Check whether import/export is enabled only for files in a specific path:

  mysql> SELECT @@secure_file_priv;
  +-----------------------+
  | @@secure_file_priv    |
  +-----------------------+
  | /var/lib/mysql-files/ |
  +-----------------------+

If the value is a path to a directory, export/import can only be performed for files in that directory, and the file paths in the queries should be edited accordingly. Alternatively, secure_file_priv can be disabled (set to an empty string) for the duration of the upgrade, as sketched below. If the value is empty, export/import can be performed to/from files located anywhere.
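A minimal sketch of disabling the restriction (secure_file_priv is not a dynamic variable, so the server must be restarted after changing it; remember to restore the original value after the upgrade):

  [mysqld]
  secure_file_priv=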

max_execution_time should be disabled before exporting data to avoid a timeout during the export:

  SET @@max_execution_time=0;

  SELECT * INTO OUTFILE '/var/lib/mysql-files/history.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_old;
  LOAD DATA INFILE '/var/lib/mysql-files/history.csv' IGNORE INTO TABLE history FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';

  SELECT * INTO OUTFILE '/var/lib/mysql-files/history_uint.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_uint_old;
  LOAD DATA INFILE '/var/lib/mysql-files/history_uint.csv' IGNORE INTO TABLE history_uint FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';

  SELECT * INTO OUTFILE '/var/lib/mysql-files/history_str.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_str_old;
  LOAD DATA INFILE '/var/lib/mysql-files/history_str.csv' IGNORE INTO TABLE history_str FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';

  SELECT * INTO OUTFILE '/var/lib/mysql-files/history_log.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_log_old;
  LOAD DATA INFILE '/var/lib/mysql-files/history_log.csv' IGNORE INTO TABLE history_log FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';

  SELECT * INTO OUTFILE '/var/lib/mysql-files/history_text.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_text_old;
  LOAD DATA INFILE '/var/lib/mysql-files/history_text.csv' IGNORE INTO TABLE history_text FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';
  • Verify that everything works as expected

  • Drop old tables

  DROP TABLE history_old;
  DROP TABLE history_uint_old;
  DROP TABLE history_str_old;
  DROP TABLE history_log_old;
  DROP TABLE history_text_old;

Improving performance

Additional hints for improving performance in both cases:

  • Increase the bulk_insert_buffer_size buffer in the [mysqld] section of the configuration file, or set it before the import with SET:
  [mysqld]
  bulk_insert_buffer_size=256M

  mysql cli > SET SESSION bulk_insert_buffer_size= 1024 * 1024 * 256;
  mysql cli > ... import queries ...
  • See “Optimizing InnoDB bulk data loading” (MySQL 5.7, MySQL 8.0)

  • Disable binary logging (should not be used if there are replica servers, since the data will not be replicated to them):

  mysql cli > SET SESSION SQL_LOG_BIN=0;
  mysql cli > ... import queries ...

PostgreSQL

Export and import must be performed in tmux/screen, so that the session isn’t dropped.

See also: Important notes

Upgrading tables
  • Rename tables using history_pk_prepare.sql:
  sudo -u zabbix psql zabbix < /usr/share/doc/zabbix-sql-scripts/postgresql/history_pk_prepare.sql
  • Export the current history, import it into a temp table, and insert it into the new tables while ignoring duplicates:
  \copy history_old TO '/tmp/history.csv' DELIMITER ',' CSV
  CREATE TEMP TABLE temp_history (
      itemid bigint NOT NULL,
      clock integer DEFAULT '0' NOT NULL,
      value DOUBLE PRECISION DEFAULT '0.0000' NOT NULL,
      ns integer DEFAULT '0' NOT NULL
  );
  \copy temp_history FROM '/tmp/history.csv' DELIMITER ',' CSV
  INSERT INTO history SELECT * FROM temp_history ON CONFLICT (itemid,clock,ns) DO NOTHING;

  \copy history_uint_old TO '/tmp/history_uint.csv' DELIMITER ',' CSV
  CREATE TEMP TABLE temp_history_uint (
      itemid bigint NOT NULL,
      clock integer DEFAULT '0' NOT NULL,
      value numeric(20) DEFAULT '0' NOT NULL,
      ns integer DEFAULT '0' NOT NULL
  );
  \copy temp_history_uint FROM '/tmp/history_uint.csv' DELIMITER ',' CSV
  INSERT INTO history_uint SELECT * FROM temp_history_uint ON CONFLICT (itemid,clock,ns) DO NOTHING;

  \copy history_str_old TO '/tmp/history_str.csv' DELIMITER ',' CSV
  CREATE TEMP TABLE temp_history_str (
      itemid bigint NOT NULL,
      clock integer DEFAULT '0' NOT NULL,
      value varchar(255) DEFAULT '' NOT NULL,
      ns integer DEFAULT '0' NOT NULL
  );
  \copy temp_history_str FROM '/tmp/history_str.csv' DELIMITER ',' CSV
  INSERT INTO history_str (itemid,clock,value,ns) SELECT * FROM temp_history_str ON CONFLICT (itemid,clock,ns) DO NOTHING;

  \copy history_log_old TO '/tmp/history_log.csv' DELIMITER ',' CSV
  CREATE TEMP TABLE temp_history_log (
      itemid bigint NOT NULL,
      clock integer DEFAULT '0' NOT NULL,
      timestamp integer DEFAULT '0' NOT NULL,
      source varchar(64) DEFAULT '' NOT NULL,
      severity integer DEFAULT '0' NOT NULL,
      value text DEFAULT '' NOT NULL,
      logeventid integer DEFAULT '0' NOT NULL,
      ns integer DEFAULT '0' NOT NULL
  );
  \copy temp_history_log FROM '/tmp/history_log.csv' DELIMITER ',' CSV
  INSERT INTO history_log SELECT * FROM temp_history_log ON CONFLICT (itemid,clock,ns) DO NOTHING;

  \copy history_text_old TO '/tmp/history_text.csv' DELIMITER ',' CSV
  CREATE TEMP TABLE temp_history_text (
      itemid bigint NOT NULL,
      clock integer DEFAULT '0' NOT NULL,
      value text DEFAULT '' NOT NULL,
      ns integer DEFAULT '0' NOT NULL
  );
  \copy temp_history_text FROM '/tmp/history_text.csv' DELIMITER ',' CSV
  INSERT INTO history_text SELECT * FROM temp_history_text ON CONFLICT (itemid,clock,ns) DO NOTHING;
  • Verify that everything works as expected

  • Drop old tables

  DROP TABLE history_old;
  DROP TABLE history_uint_old;
  DROP TABLE history_str_old;
  DROP TABLE history_log_old;
  DROP TABLE history_text_old;

See also: Tips for improving PostgreSQL insert performance
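As a local illustration (not part of the official instructions), one session-level setting commonly used to speed up bulk loads is to relax synchronous commits while the INSERT ... SELECT statements run:

  -- run in the same psql session as the INSERT ... SELECT statements above
  SET synchronous_commit = off;  -- fewer synchronous WAL flushes during the bulk load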

TimescaleDB v1.x

Export and import must be performed in tmux/screen, so that the session isn’t dropped.

See also: Important notes

Upgrading tables
  • Rename tables using history_pk_prepare.sql:
  sudo -u zabbix psql zabbix < /usr/share/doc/zabbix-sql-scripts/postgresql/history_pk_prepare.sql
  • Example of upgrading one table (the same steps should be repeated for each of the history tables):
  -- Verify that there is enough space to allow export of uncompressed data
  select sum(before_compression_total_bytes)/1024/1024 as before_compression_total_mbytes, sum(after_compression_total_bytes)/1024/1024 as after_compression_total_mbytes FROM chunk_compression_stats('history_uint_old');

  -- Export data
  \copy (select * from history_uint_old) TO '/tmp/history_uint.csv' DELIMITER ',' CSV

  CREATE TEMP TABLE temp_history_uint (
      itemid bigint NOT NULL,
      clock integer DEFAULT '0' NOT NULL,
      value numeric(20) DEFAULT '0' NOT NULL,
      ns integer DEFAULT '0' NOT NULL
  );

  -- Import data
  \copy temp_history_uint FROM '/tmp/history_uint.csv' DELIMITER ',' CSV

  -- Create the hypertable and populate it
  select create_hypertable('history_uint', 'clock', chunk_time_interval => 86400, migrate_data => true);
  INSERT INTO history_uint SELECT * FROM temp_history_uint ON CONFLICT (itemid,clock,ns) DO NOTHING;

  -- Enable compression
  select set_integer_now_func('history_uint', 'zbx_ts_unix_now', true);
  alter table history_uint set (timescaledb.compress,timescaledb.compress_segmentby='itemid',timescaledb.compress_orderby='clock,ns');

  -- The job ID will be returned; pass it to run_job below
  select add_compress_chunks_policy('history_uint', (
      select (p.older_than).integer_interval from _timescaledb_config.bgw_policy_compress_chunks p
      inner join _timescaledb_catalog.hypertable h on (h.id=p.hypertable_id) where h.table_name='history_uint'
  )::integer
  );
  select alter_job((select job_id from timescaledb_information.jobs where hypertable_schema='public' and hypertable_name='history_uint'), scheduled => true);

  -- Run the compression job
  call run_job(<JOB_ID>);
  -- May show 'NOTICE: no chunks for hypertable public.history_uint that satisfy compress chunk policy'; this is fine
  • Verify that everything works as expected

  • Drop old tables

  DROP TABLE history_old;
  DROP TABLE history_uint_old;
  DROP TABLE history_str_old;
  DROP TABLE history_log_old;
  DROP TABLE history_text_old;

See also: Tips for improving PostgreSQL insert performance

TimescaleDB v2.x

Export and import must be performed in tmux/screen, so that the session isn’t dropped.

See also: Important notes

Upgrading tables
  • Rename tables using history_pk_prepare.sql:
  sudo -u zabbix psql zabbix < /usr/share/doc/zabbix-sql-scripts/postgresql/history_pk_prepare.sql
  • Example of upgrading one table (the same steps should be repeated for each of the history tables):
  -- Verify that there is enough space to allow export of uncompressed data
  select sum(before_compression_total_bytes)/1024/1024 as before_compression_total_mbytes, sum(after_compression_total_bytes)/1024/1024 as after_compression_total_mbytes FROM chunk_compression_stats('history_uint_old');

  -- Export data
  \copy (select * from history_uint_old) TO '/tmp/history_uint.csv' DELIMITER ',' CSV

  CREATE TEMP TABLE temp_history_uint (
      itemid bigint NOT NULL,
      clock integer DEFAULT '0' NOT NULL,
      value numeric(20) DEFAULT '0' NOT NULL,
      ns integer DEFAULT '0' NOT NULL
  );

  -- Import data
  \copy temp_history_uint FROM '/tmp/history_uint.csv' DELIMITER ',' CSV

  -- Create the hypertable and populate it
  select create_hypertable('history_uint', 'clock', chunk_time_interval => 86400, migrate_data => true);
  INSERT INTO history_uint SELECT * FROM temp_history_uint ON CONFLICT (itemid,clock,ns) DO NOTHING;

  -- Enable compression
  select set_integer_now_func('history_uint', 'zbx_ts_unix_now', true);
  alter table history_uint set (timescaledb.compress,timescaledb.compress_segmentby='itemid',timescaledb.compress_orderby='clock,ns');

  -- Substitute your schema in hypertable_schema
  -- The job ID will be returned; pass it to run_job below
  select add_compression_policy('history_uint', (
      select extract(epoch from (config::json->>'compress_after')::interval) from timescaledb_information.jobs where application_name like 'Compression%%' and hypertable_schema='public' and hypertable_name='history_uint_old'
  )::integer
  );
  select alter_job((select job_id from timescaledb_information.jobs where hypertable_schema='public' and hypertable_name='history_uint'), scheduled => true);

  -- Run the compression job
  call run_job(<JOB_ID>);
  -- May show 'NOTICE: no chunks for hypertable public.history_uint that satisfy compress chunk policy'; this is fine
  • Verify that everything works as expected

  • Drop old tables

  DROP TABLE history_old;
  DROP TABLE history_uint_old;
  DROP TABLE history_str_old;
  DROP TABLE history_log_old;
  DROP TABLE history_text_old;

See also: Tips for improving PostgreSQL insert performance

Oracle

Export and import must be performed in tmux/screen, so that the session isn’t dropped.

See also: Important notes

Importing/exporting history tables in one attempt

Additionally, consider performance tips for Oracle Data Pump.
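For instance, one commonly cited Data Pump tip is to give each parallel worker its own dump file (via the %U substitution variable) and to skip exporting optimizer statistics; the command below is an illustrative sketch with placeholder credentials, not part of the official instructions:

  expdp zabbix/password@127.0.0.1:1521/z DIRECTORY=history DUMPFILE=history_%U.dmp \
      TABLES=history_old,history_uint_old,history_str_old,history_log_old,history_text_old \
      PARALLEL=4 EXCLUDE=statistics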

  • Rename tables using history_pk_prepare.sql:
  shell> cd /path/to/zabbix-sources/database/oracle
  shell> sqlplus zabbix/password@oracle_host/ORCL
  sqlplus> @history_pk_prepare.sql
  • Prepare directories for Data Pump

Example:

  # mkdir -pv /export/history
  # chown -R oracle:oracle /export
  • Create a directory object and grant permissions on it. Run the following under the sysdba role:
  create directory history as '/export/history';
  grant read,write on directory history to zabbix;
  • Export tables. Replace N with your desired thread count.
  expdp zabbix/password@127.0.0.1:1521/z \
      DIRECTORY=history \
      TABLES=history_old,history_uint_old,history_str_old,history_log_old,history_text_old \
      PARALLEL=N
  • Import tables. Replace N with your desired thread count.
  impdp zabbix/password@127.0.0.1:1521/z \
      DIRECTORY=history \
      TABLES=history_uint_old \
      REMAP_TABLE=history_old:history,history_uint_old:history_uint,history_str_old:history_str,history_log_old:history_log,history_text_old:history_text \
      data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only
  • Verify that everything works as expected

  • Drop old tables

  DROP TABLE history_old;
  DROP TABLE history_uint_old;
  DROP TABLE history_str_old;
  DROP TABLE history_log_old;
  DROP TABLE history_text_old;

Importing/exporting history tables individually

Additionally, consider performance tips for Oracle Data Pump.

  • Rename tables using history_pk_prepare.sql:
  shell> cd /path/to/zabbix-sources/database/oracle
  shell> sqlplus zabbix/password@oracle_host/ORCL
  sqlplus> @history_pk_prepare.sql
  • Prepare directories for Data Pump

Example:

  # mkdir -pv /export/history /export/history_uint /export/history_str /export/history_log /export/history_text
  # chown -R oracle:oracle /export
  • Create directory objects and grant permissions on them. Run the following under the sysdba role:
  create directory history as '/export/history';
  grant read,write on directory history to zabbix;

  create directory history_uint as '/export/history_uint';
  grant read,write on directory history_uint to zabbix;

  create directory history_str as '/export/history_str';
  grant read,write on directory history_str to zabbix;

  create directory history_log as '/export/history_log';
  grant read,write on directory history_log to zabbix;

  create directory history_text as '/export/history_text';
  grant read,write on directory history_text to zabbix;
  • Export and import each table. Replace N with your desired thread count.
  expdp zabbix/password@127.0.0.1:1521/xe DIRECTORY=history TABLES=history_old PARALLEL=N
  impdp zabbix/password@127.0.0.1:1521/xe DIRECTORY=history TABLES=history_old REMAP_TABLE=history_old:history data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only

  expdp zabbix/password@127.0.0.1:1521/xe DIRECTORY=history_uint TABLES=history_uint_old PARALLEL=N
  impdp zabbix/password@127.0.0.1:1521/xe DIRECTORY=history_uint TABLES=history_uint_old REMAP_TABLE=history_uint_old:history_uint data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only

  expdp zabbix/password@127.0.0.1:1521/xe DIRECTORY=history_str TABLES=history_str_old PARALLEL=N
  impdp zabbix/password@127.0.0.1:1521/xe DIRECTORY=history_str TABLES=history_str_old REMAP_TABLE=history_str_old:history_str data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only

  expdp zabbix/password@127.0.0.1:1521/xe DIRECTORY=history_log TABLES=history_log_old PARALLEL=N
  impdp zabbix/password@127.0.0.1:1521/xe DIRECTORY=history_log TABLES=history_log_old REMAP_TABLE=history_log_old:history_log data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only

  expdp zabbix/password@127.0.0.1:1521/xe DIRECTORY=history_text TABLES=history_text_old PARALLEL=N
  impdp zabbix/password@127.0.0.1:1521/xe DIRECTORY=history_text TABLES=history_text_old REMAP_TABLE=history_text_old:history_text data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only
  • Verify that everything works as expected

  • Drop old tables

  DROP TABLE history_old;
  DROP TABLE history_uint_old;
  DROP TABLE history_str_old;
  DROP TABLE history_log_old;
  DROP TABLE history_text_old;