3 Upgrading to primary keys

Overview

Since Zabbix 6.0, primary keys are used for all tables in new installations of the Zabbix database.

For installations upgraded from earlier versions, this section provides instructions on manually upgrading all tables to primary keys.

Instructions are available for the following databases:

  • MySQL (MariaDB)
  • PostgreSQL
  • TimescaleDB
  • Oracle

Important notes

  • Make sure to back up the database before the upgrade
  • If the database uses partitions, contact the DB administrator or the Zabbix support team for help
  • The CSV files can be deleted after a successful upgrade to primary keys
  • Zabbix must not be running during the upgrade
  • Optionally, the Zabbix frontend may be switched to maintenance mode
  • The upgrade to primary keys can only be performed after upgrading Zabbix server to 6.0
  • For a proxy, history_pk_prepare.sql can be executed to upgrade the (unused) history tables to primary keys
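
For example, a minimal backup sketch (the target paths here are placeholders; use your own backup strategy and locations):

  # MySQL/MariaDB: dump the zabbix database to a file
  mysqldump -uzabbix -p zabbix > /tmp/zabbix_backup.sql

  # PostgreSQL: dump the zabbix database to a custom-format archive
  sudo -u postgres pg_dump -Fc -f /tmp/zabbix_backup.dump zabbix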

MySQL

Export and import should be performed in tmux/screen, so that the session is not dropped.
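
For example, with tmux (the session name pk_upgrade is arbitrary):

  tmux new -s pk_upgrade      # start a named session and run the migration inside it
  tmux attach -t pk_upgrade   # reattach later if the connection drops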

See also: Important notes

MySQL 5.7+/8.0+
  • Rename the old tables and create new tables by running history_pk_prepare.sql:

  mysql -uzabbix -p<password> zabbix < /usr/share/doc/zabbix-sql-scripts/mysql/history_pk_prepare.sql
  • Export and import data

Install mysqlsh (MySQL Shell). mysqlsh must be able to connect to the database; if the connection is made via a socket, it may be necessary to explicitly state the path to it.
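
One way to look up the socket path, assuming the regular mysql client can already connect:

  mysql -uroot -p -e "SELECT @@socket;"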

Connect via mysqlsh:

  sudo mysqlsh -uroot -S /run/mysqld/mysqld.sock --no-password -Dzabbix

Run (adjust CSVPATH to the actual path of your CSV files, if needed):

  CSVPATH="/var/lib/mysql-files";
  util.exportTable("history_old", CSVPATH + "/history.csv", { dialect: "csv" });
  util.importTable(CSVPATH + "/history.csv", {"dialect": "csv", "table": "history" });
  util.exportTable("history_uint_old", CSVPATH + "/history_uint.csv", { dialect: "csv" });
  util.importTable(CSVPATH + "/history_uint.csv", {"dialect": "csv", "table": "history_uint" });
  util.exportTable("history_str_old", CSVPATH + "/history_str.csv", { dialect: "csv" });
  util.importTable(CSVPATH + "/history_str.csv", {"dialect": "csv", "table": "history_str" });
  util.exportTable("history_log_old", CSVPATH + "/history_log.csv", { dialect: "csv" });
  util.importTable(CSVPATH + "/history_log.csv", {"dialect": "csv", "table": "history_log" });
  util.exportTable("history_text_old", CSVPATH + "/history_text.csv", { dialect: "csv" });
  util.importTable(CSVPATH + "/history_text.csv", {"dialect": "csv", "table": "history_text" });
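
Before dropping the old tables, a quick sanity check (a suggested spot check, not part of the official procedure) is to compare row counts between each old and new table, for example:

  SELECT
      (SELECT COUNT(*) FROM history)     AS history_new_rows,
      (SELECT COUNT(*) FROM history_old) AS history_old_rows;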
  • Verify that everything works as expected (e.g. with the row-count check sketched above)

  • Drop the old tables:

  DROP TABLE history_old;
  DROP TABLE history_uint_old;
  DROP TABLE history_str_old;
  DROP TABLE history_log_old;
  DROP TABLE history_text_old;

MySQL/MariaDB before 5.7 (or if mysqlsh cannot be used for some reason)

Use this approach only if mysqlsh cannot be used, as it is slow and very time-consuming.

You must be logged in as the root user (recommended) or as any user that has the FILE privilege.

MySQL must be started with the local_infile variable enabled.
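
For example, it can be enabled at runtime (requires sufficient privileges; it can also be set under [mysqld] in the server configuration):

  SET GLOBAL local_infile = 1;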

  • Rename the old tables and create new tables by running history_pk_prepare.sql:

  mysql -uzabbix -p<password> zabbix < /usr/share/doc/zabbix-sql-scripts/mysql/history_pk_prepare.sql
  • Export and import data

Check whether import/export is enabled only for files in a specific path:

  mysql> SELECT @@secure_file_priv;
  +-----------------------+
  | @@secure_file_priv    |
  +-----------------------+
  | /var/lib/mysql-files/ |
  +-----------------------+

If the value is a path to a directory, export/import can be performed for files in that directory; in that case, edit the file paths in the queries below accordingly. Alternatively, secure_file_priv can be disabled (set to an empty string) for the time of the upgrade. If the value is empty, export/import can be performed for files at any location.
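A minimal configuration sketch for disabling the restriction (requires a server restart; remember to revert it after the upgrade):

  [mysqld]
  secure_file_priv = ""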

max_execution_time must be disabled before exporting data, to avoid timeouts on long-running exports.

  SET @@max_execution_time=0;
  SELECT * INTO OUTFILE '/var/lib/mysql-files/history.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_old;
  LOAD DATA INFILE '/var/lib/mysql-files/history.csv' IGNORE INTO TABLE history FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';
  SELECT * INTO OUTFILE '/var/lib/mysql-files/history_uint.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_uint_old;
  LOAD DATA INFILE '/var/lib/mysql-files/history_uint.csv' IGNORE INTO TABLE history_uint FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';
  SELECT * INTO OUTFILE '/var/lib/mysql-files/history_str.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_str_old;
  LOAD DATA INFILE '/var/lib/mysql-files/history_str.csv' IGNORE INTO TABLE history_str FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';
  SELECT * INTO OUTFILE '/var/lib/mysql-files/history_log.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_log_old;
  LOAD DATA INFILE '/var/lib/mysql-files/history_log.csv' IGNORE INTO TABLE history_log FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';
  SELECT * INTO OUTFILE '/var/lib/mysql-files/history_text.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_text_old;
  LOAD DATA INFILE '/var/lib/mysql-files/history_text.csv' IGNORE INTO TABLE history_text FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';
  • Verify that everything works as expected

  • Drop the old tables:

  DROP TABLE history_old;
  DROP TABLE history_uint_old;
  DROP TABLE history_str_old;
  DROP TABLE history_log_old;
  DROP TABLE history_text_old;

PostgreSQL

Export and import should be performed in tmux/screen, so that the session is not dropped.

See also: Important notes

Upgrading tables
  • Rename tables using history_pk_prepare.sql:

  sudo -u zabbix psql zabbix < /usr/share/doc/zabbix-sql-scripts/postgresql/history_pk_prepare.sql
  • Export current history to CSV, import it into temporary tables, then insert the data into the new tables while ignoring duplicate records:
  \copy history_old TO '/tmp/history.csv' DELIMITER ',' CSV

  CREATE TEMP TABLE temp_history (
      itemid bigint NOT NULL,
      clock integer DEFAULT '0' NOT NULL,
      value DOUBLE PRECISION DEFAULT '0.0000' NOT NULL,
      ns integer DEFAULT '0' NOT NULL
  );
  \copy temp_history FROM '/tmp/history.csv' DELIMITER ',' CSV
  INSERT INTO history SELECT * FROM temp_history ON CONFLICT (itemid,clock,ns) DO NOTHING;

  \copy history_uint_old TO '/tmp/history_uint.csv' DELIMITER ',' CSV

  CREATE TEMP TABLE temp_history_uint (
      itemid bigint NOT NULL,
      clock integer DEFAULT '0' NOT NULL,
      value numeric(20) DEFAULT '0' NOT NULL,
      ns integer DEFAULT '0' NOT NULL
  );
  \copy temp_history_uint FROM '/tmp/history_uint.csv' DELIMITER ',' CSV
  INSERT INTO history_uint SELECT * FROM temp_history_uint ON CONFLICT (itemid,clock,ns) DO NOTHING;

  \copy history_str_old TO '/tmp/history_str.csv' DELIMITER ',' CSV

  CREATE TEMP TABLE temp_history_str (
      itemid bigint NOT NULL,
      clock integer DEFAULT '0' NOT NULL,
      value varchar(255) DEFAULT '' NOT NULL,
      ns integer DEFAULT '0' NOT NULL
  );
  \copy temp_history_str FROM '/tmp/history_str.csv' DELIMITER ',' CSV
  INSERT INTO history_str (itemid,clock,value,ns) SELECT * FROM temp_history_str ON CONFLICT (itemid,clock,ns) DO NOTHING;

  \copy history_log_old TO '/tmp/history_log.csv' DELIMITER ',' CSV

  CREATE TEMP TABLE temp_history_log (
      itemid bigint NOT NULL,
      clock integer DEFAULT '0' NOT NULL,
      timestamp integer DEFAULT '0' NOT NULL,
      source varchar(64) DEFAULT '' NOT NULL,
      severity integer DEFAULT '0' NOT NULL,
      value text DEFAULT '' NOT NULL,
      logeventid integer DEFAULT '0' NOT NULL,
      ns integer DEFAULT '0' NOT NULL
  );
  \copy temp_history_log FROM '/tmp/history_log.csv' DELIMITER ',' CSV
  INSERT INTO history_log SELECT * FROM temp_history_log ON CONFLICT (itemid,clock,ns) DO NOTHING;

  \copy history_text_old TO '/tmp/history_text.csv' DELIMITER ',' CSV

  CREATE TEMP TABLE temp_history_text (
      itemid bigint NOT NULL,
      clock integer DEFAULT '0' NOT NULL,
      value text DEFAULT '' NOT NULL,
      ns integer DEFAULT '0' NOT NULL
  );
  \copy temp_history_text FROM '/tmp/history_text.csv' DELIMITER ',' CSV
  INSERT INTO history_text SELECT * FROM temp_history_text ON CONFLICT (itemid,clock,ns) DO NOTHING;
  • Verify that everything works as expected

  • Drop the old tables:

  DROP TABLE history_old;
  DROP TABLE history_uint_old;
  DROP TABLE history_str_old;
  DROP TABLE history_log_old;
  DROP TABLE history_text_old;

To improve insert performance, consider the tips in: improving PostgreSQL insert performance.
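
For instance, one common session-level tweak (shown here as a suggestion, not as part of the official procedure; it trades crash durability for speed, which may be acceptable while Zabbix is stopped):

  -- speed up the bulk INSERTs for this session only
  SET synchronous_commit = off;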

TimescaleDB v1.x

Export and import should be performed in tmux/screen, so that the session is not dropped.

See also: Important notes

Upgrading tables
  • Rename tables using history_pk_prepare.sql:

  sudo -u zabbix psql zabbix < /usr/share/doc/zabbix-sql-scripts/postgresql/history_pk_prepare.sql

  • Example of migrating one table:

  -- Verify that there is enough space to allow the export of uncompressed data
  select sum(before_compression_total_bytes)/1024/1024 as before_compression_total_mbytes,
         sum(after_compression_total_bytes)/1024/1024 as after_compression_total_mbytes
  FROM chunk_compression_stats('history_uint_old');

  -- Export the data
  \copy (select * from history_uint_old) TO '/tmp/history_uint.csv' DELIMITER ',' CSV

  CREATE TEMP TABLE temp_history_uint (
      itemid bigint NOT NULL,
      clock integer DEFAULT '0' NOT NULL,
      value numeric(20) DEFAULT '0' NOT NULL,
      ns integer DEFAULT '0' NOT NULL
  );

  -- Import the data
  \copy temp_history_uint FROM '/tmp/history_uint.csv' DELIMITER ',' CSV

  -- Create the hypertable and insert the data
  select create_hypertable('history_uint', 'clock', chunk_time_interval => 86400, migrate_data => true);
  INSERT INTO history_uint SELECT * FROM temp_history_uint ON CONFLICT (itemid,clock,ns) DO NOTHING;

  -- Enable compression
  select set_integer_now_func('history_uint', 'zbx_ts_unix_now', true);
  alter table history_uint set (timescaledb.compress,timescaledb.compress_segmentby='itemid',timescaledb.compress_orderby='clock,ns');

  -- Pass the returned job id to run_job
  select add_compress_chunks_policy('history_uint', (
      select (p.older_than).integer_interval
      from _timescaledb_config.bgw_policy_compress_chunks p
      inner join _timescaledb_catalog.hypertable h on (h.id=p.hypertable_id)
      where h.table_name='history_uint'
  )::integer
  );

  select alter_job((select job_id from timescaledb_information.jobs where hypertable_schema='public' and hypertable_name='history_uint'), scheduled => true);

  -- Run the compression job
  call run_job(<JOB_ID>);
  -- May show 'NOTICE: no chunks for hypertable public.history_uint that satisfy compress chunk policy', it is fine.

  • Verify that everything works as expected

  • Drop the old tables:

  DROP TABLE history_old;
  DROP TABLE history_uint_old;
  DROP TABLE history_str_old;
  DROP TABLE history_log_old;
  DROP TABLE history_text_old;

See also: improving PostgreSQL insert performance (https://blog.timescale.com/blog/13-tips-to-improve-postgresql-insert-performance/)

TimescaleDB v2.x

Export and import should be performed in tmux/screen, so that the session is not dropped.

See also: Important notes

Upgrading tables
  • Rename tables using history_pk_prepare.sql:

  sudo -u zabbix psql zabbix < /usr/share/doc/zabbix-sql-scripts/postgresql/history_pk_prepare.sql
  • Example of migrating one table:
  -- Verify that there is enough space to allow the export of uncompressed data
  select sum(before_compression_total_bytes)/1024/1024 as before_compression_total_mbytes,
         sum(after_compression_total_bytes)/1024/1024 as after_compression_total_mbytes
  FROM chunk_compression_stats('history_uint_old');

  -- Export the data
  \copy (select * from history_uint_old) TO '/tmp/history_uint.csv' DELIMITER ',' CSV

  CREATE TEMP TABLE temp_history_uint (
      itemid bigint NOT NULL,
      clock integer DEFAULT '0' NOT NULL,
      value numeric(20) DEFAULT '0' NOT NULL,
      ns integer DEFAULT '0' NOT NULL
  );

  -- Import the data
  \copy temp_history_uint FROM '/tmp/history_uint.csv' DELIMITER ',' CSV

  -- Create the hypertable and insert the data
  select create_hypertable('history_uint', 'clock', chunk_time_interval => 86400, migrate_data => true);
  INSERT INTO history_uint SELECT * FROM temp_history_uint ON CONFLICT (itemid,clock,ns) DO NOTHING;

  -- Enable compression
  select set_integer_now_func('history_uint', 'zbx_ts_unix_now', true);
  alter table history_uint set (timescaledb.compress,timescaledb.compress_segmentby='itemid',timescaledb.compress_orderby='clock,ns');

  -- Substitute your schema in hypertable_schema
  -- Pass the returned job id to run_job
  select add_compression_policy('history_uint', (
      select extract(epoch from (config::json->>'compress_after')::interval)
      from timescaledb_information.jobs
      where application_name like 'Compression%%' and hypertable_schema='public' and hypertable_name='history_uint_old'
  )::integer
  );

  select alter_job((select job_id from timescaledb_information.jobs where hypertable_schema='public' and hypertable_name='history_uint'), scheduled => true);

  -- Run the compression job
  call run_job(<JOB_ID>);
  -- May show 'NOTICE: no chunks for hypertable public.history_uint that satisfy compress chunk policy', it is fine.
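
The job id for run_job can be looked up with the same query used in the alter_job call above, for example:

  select job_id from timescaledb_information.jobs where hypertable_schema='public' and hypertable_name='history_uint';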
  • Verify that everything works as expected

  • Drop the old tables:

  DROP TABLE history_old;
  DROP TABLE history_uint_old;
  DROP TABLE history_str_old;
  DROP TABLE history_log_old;
  DROP TABLE history_text_old;

See also: improving PostgreSQL insert performance

Oracle

Export and import should be performed in tmux/screen, so that the session is not dropped.

See also: Important notes

Export/import history tables in a single batch

When using Oracle Data Pump, also consider its performance tips.

  • Rename the old tables using history_pk_prepare.sql:

  shell> cd /path/to/zabbix-sources/database/oracle
  shell> sqlplus zabbix/password@oracle_host/ORCL
  sqlplus> @history_pk_prepare.sql
  • Create a directory for Data Pump

Example:

  # mkdir -pv /export/history
  # chown -R oracle:oracle /export
  • Create a directory object and grant read/write permissions on it to the zabbix user. Execute the following as sysdba:

  create directory history as '/export/history';
  grant read,write on directory history to zabbix;
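
To confirm that the directory object was created (a suggested spot check; directory names are stored in uppercase), run as a privileged user:

  select directory_name, directory_path from dba_directories where directory_name='HISTORY';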
  • Export the tables. Replace N with the desired thread count:
  expdp zabbix/password@oracle_host:1521/z \
      DIRECTORY=history \
      TABLES=history_old,history_uint_old,history_str_old,history_log_old,history_text_old \
      PARALLEL=N
  • Import the tables. Replace N with the desired thread count:
  impdp zabbix/password@oracle_host:1521/z \
      DIRECTORY=history \
      TABLES=history_old,history_uint_old,history_str_old,history_log_old,history_text_old \
      REMAP_TABLE=history_old:history,history_uint_old:history_uint,history_str_old:history_str,history_log_old:history_log,history_text_old:history_text \
      data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only
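
Data Pump runs as a server-side job, so its progress can be checked from another session (a suggested monitoring query, run as a privileged user):

  select job_name, state from dba_datapump_jobs;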
  • Verify that everything works as expected

  • Drop the old tables:

  DROP TABLE history_old;
  DROP TABLE history_uint_old;
  DROP TABLE history_str_old;
  DROP TABLE history_log_old;
  DROP TABLE history_text_old;

Export/import history tables individually

When using Oracle Data Pump, also consider its performance tips.

  • Rename the tables using history_pk_prepare.sql:

  shell> cd /path/to/zabbix-sources/database/oracle
  shell> sqlplus zabbix/password@oracle_host/ORCL
  sqlplus> @history_pk_prepare.sql
  • Create directories for Data Pump

Example:

  # mkdir -pv /export/history /export/history_uint /export/history_str /export/history_log /export/history_text
  # chown -R oracle:oracle /export
  • Create directory objects and grant read/write permissions on them to the zabbix user. Execute the following as sysdba:

  create directory history as '/export/history';
  grant read,write on directory history to zabbix;
  create directory history_uint as '/export/history_uint';
  grant read,write on directory history_uint to zabbix;
  create directory history_str as '/export/history_str';
  grant read,write on directory history_str to zabbix;
  create directory history_log as '/export/history_log';
  grant read,write on directory history_log to zabbix;
  create directory history_text as '/export/history_text';
  grant read,write on directory history_text to zabbix;
  • Export and import each table. Replace N with the desired thread count:

  expdp zabbix/password@oracle_host:1521/xe DIRECTORY=history TABLES=history_old PARALLEL=N
  impdp zabbix/password@oracle_host:1521/xe DIRECTORY=history TABLES=history_old REMAP_TABLE=history_old:history data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only
  expdp zabbix/password@oracle_host:1521/xe DIRECTORY=history_uint TABLES=history_uint_old PARALLEL=N
  impdp zabbix/password@oracle_host:1521/xe DIRECTORY=history_uint TABLES=history_uint_old REMAP_TABLE=history_uint_old:history_uint data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only
  expdp zabbix/password@oracle_host:1521/xe DIRECTORY=history_str TABLES=history_str_old PARALLEL=N
  impdp zabbix/password@oracle_host:1521/xe DIRECTORY=history_str TABLES=history_str_old REMAP_TABLE=history_str_old:history_str data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only
  expdp zabbix/password@oracle_host:1521/xe DIRECTORY=history_log TABLES=history_log_old PARALLEL=N
  impdp zabbix/password@oracle_host:1521/xe DIRECTORY=history_log TABLES=history_log_old REMAP_TABLE=history_log_old:history_log data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only
  expdp zabbix/password@oracle_host:1521/xe DIRECTORY=history_text TABLES=history_text_old PARALLEL=N
  impdp zabbix/password@oracle_host:1521/xe DIRECTORY=history_text TABLES=history_text_old REMAP_TABLE=history_text_old:history_text data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only
  • Verify that everything works as expected

  • Drop the old tables:

  DROP TABLE history_old;
  DROP TABLE history_uint_old;
  DROP TABLE history_str_old;
  DROP TABLE history_log_old;
  DROP TABLE history_text_old;