Zabbix 4.0 MySQL Optimization (Zabbix Table Partitioning)


The biggest bottleneck in Zabbix is usually not the Zabbix server itself but the load on the MySQL database, so tuning MySQL is effectively tuning the Zabbix deployment.
There are two common ways to deal with the Zabbix database:

  • Clear out the data in the history, history_uint and trends_uint tables (this approach is very time-consuming).
  • Use MySQL table partitioning to split large tables such as history. Partition while the data volume is still small; once the tables have grown to tens or hundreds of GB, fall back to the first method, clean the data out, and only then partition.

The partitioning procedure is as follows:

Background:

MySQL table partitioning does not support foreign keys. In Zabbix 2.0 and later the history- and trends-related tables no longer use foreign keys, so they can be partitioned.
MySQL table partitioning logically splits one large table into several physical pieces. It has the following benefits:
1. In some scenarios it noticeably improves query performance, especially for heavily used tables whose hot data fits in one partition or a few partitions, because the data and indexes of a single partition are much easier to keep in memory than those of the whole table. The general log of the Zabbix database shows that the history-related tables are queried extremely frequently, so these large history tables are the main target when optimizing the Zabbix database.
2. If queries or updates mostly touch a single partition, performance can improve simply because that partition can be read sequentially from disk instead of using an index and random access across the whole table.
3. Bulk inserts and deletes can be performed by simply adding or dropping partitions, provided the partitions were planned with this in mind; the corresponding ALTER TABLE operations are also fast.
4. The housekeeper is no longer needed for some data types. It can be disabled per data type under Administration -> General -> Housekeeping, for example by turning off housekeeping for history.
5. When adding new partitions, make sure the new range does not overlap an existing one, otherwise an error is returned.
  A MySQL table is either partitioned completely or not at all.
  When splitting a table into a large number of partitions, increase open_files_limit.
  Partitioned tables do not support foreign keys, so any foreign keys must be dropped before partitioning (see the check after this list).
  Partitioned tables do not use the query cache.
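Before partitioning, it is worth double-checking that none of the target tables still carries a foreign key. A minimal check, assuming the Zabbix schema is simply named zabbix (any constraint it lists would have to be dropped before partitioning):

-- List foreign keys defined on the tables we plan to partition.
-- On Zabbix 2.0+ this should return no rows for the history/trends tables.
SELECT table_name, constraint_name
FROM information_schema.key_column_usage
WHERE table_schema = 'zabbix'
  AND referenced_table_name IS NOT NULL
  AND table_name IN ('history', 'history_log', 'history_str',
                     'history_text', 'history_uint', 'trends', 'trends_uint');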
1. Check how much space each table uses
select table_name, (data_length+index_length)/1024/1024 as total_mb, table_rows from information_schema.tables where table_schema='zabbix'; 
 
2. The large Zabbix tables: history, history_log, history_str, history_text, history_uint, trends, trends_uint

The script defines the following stored procedures:
partition_create - creates a partition on the given table in the given schema.
partition_drop - drops the partitions older than the given timestamp on the given table in the given schema.
partition_maintenance - the procedure the user calls; it parses the given arguments and then creates/drops partitions as needed.
partition_verify - checks whether partitioning is enabled on the given table in the given schema; if it is not, it creates an initial single partition.
partition_maintenance_all - a wrapper that calls partition_maintenance for each of the Zabbix history and trends tables.
The full script is as follows:

DELIMITER $$
CREATE PROCEDURE `partition_create`(SCHEMANAME varchar(64), TABLENAME varchar(64), PARTITIONNAME varchar(64), CLOCK int)
BEGIN
    /*
       SCHEMANAME = The DB schema in which to make changes
       TABLENAME = The table with partitions to potentially delete
       PARTITIONNAME = The name of the partition to create
    */
    /* Verify that the partition does not already exist */
    DECLARE RETROWS INT;

    SELECT COUNT(1) INTO RETROWS
    FROM information_schema.partitions
    WHERE table_schema = SCHEMANAME AND table_name = TABLENAME AND partition_description >= CLOCK;

    IF RETROWS = 0 THEN
        /*
           1. Print a message indicating that a partition was created.
           2. Create the SQL to create the partition.
           3. Execute the SQL from #2.
        */
        SELECT CONCAT( "partition_create(", SCHEMANAME, ",", TABLENAME, ",", PARTITIONNAME, ",", CLOCK, ")" ) AS msg;
        SET @sql = CONCAT( 'ALTER TABLE ', SCHEMANAME, '.', TABLENAME, ' ADD PARTITION (PARTITION ', PARTITIONNAME, ' VALUES LESS THAN (', CLOCK, '));' );
        PREPARE STMT FROM @sql;
        EXECUTE STMT;
        DEALLOCATE PREPARE STMT;
    END IF;
END$$
DELIMITER ;

DELIMITER $$
CREATE PROCEDURE `partition_drop`(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), DELETE_BELOW_PARTITION_DATE BIGINT)
BEGIN
    /*
       SCHEMANAME = The DB schema in which to make changes
       TABLENAME = The table with partitions to potentially delete
       DELETE_BELOW_PARTITION_DATE = Delete any partitions with names that are dates older than this one (yyyy-mm-dd)
    */
    DECLARE done INT DEFAULT FALSE;
    DECLARE drop_part_name VARCHAR(16);

    /*
       Get a list of all the partitions that are older than the date
       in DELETE_BELOW_PARTITION_DATE.  All partitions are prefixed with
       a "p", so use SUBSTRING TO get rid of that character.
    */
    DECLARE myCursor CURSOR FOR
        SELECT partition_name
        FROM information_schema.partitions
        WHERE table_schema = SCHEMANAME AND table_name = TABLENAME AND CAST(SUBSTRING(partition_name FROM 2) AS UNSIGNED) < DELETE_BELOW_PARTITION_DATE;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;

    /*
       Create the basics for when we need to drop the partition.  Also, create
       @drop_partitions to hold a comma-delimited list of all partitions that
       should be deleted.
    */
    SET @alter_header = CONCAT("ALTER TABLE ", SCHEMANAME, ".", TABLENAME, " DROP PARTITION ");
    SET @drop_partitions = "";

    /* Start looping through all the partitions that are too old. */
    OPEN myCursor;
    read_loop: LOOP
        FETCH myCursor INTO drop_part_name;
        IF done THEN
            LEAVE read_loop;
        END IF;
        SET @drop_partitions = IF(@drop_partitions = "", drop_part_name, CONCAT(@drop_partitions, ",", drop_part_name));
    END LOOP;
    IF @drop_partitions != "" THEN
        /*
           1. Build the SQL to drop all the necessary partitions.
           2. Run the SQL to drop the partitions.
           3. Print out the table partitions that were deleted.
        */
        SET @full_sql = CONCAT(@alter_header, @drop_partitions, ";");
        PREPARE STMT FROM @full_sql;
        EXECUTE STMT;
        DEALLOCATE PREPARE STMT;

        SELECT CONCAT(SCHEMANAME, ".", TABLENAME) AS `table`, @drop_partitions AS `partitions_deleted`;
    ELSE
        /*
           No partitions are being deleted, so print out "N/A" (Not applicable)
           to indicate that no changes were made.
        */
        SELECT CONCAT(SCHEMANAME, ".", TABLENAME) AS `table`, "N/A" AS `partitions_deleted`;
    END IF;
END$$
DELIMITER ;

DELIMITER $$
CREATE PROCEDURE `partition_maintenance`(SCHEMA_NAME VARCHAR(32), TABLE_NAME VARCHAR(32), KEEP_DATA_DAYS INT, HOURLY_INTERVAL INT, CREATE_NEXT_INTERVALS INT)
BEGIN
    DECLARE OLDER_THAN_PARTITION_DATE VARCHAR(16);
    DECLARE PARTITION_NAME VARCHAR(16);
    DECLARE OLD_PARTITION_NAME VARCHAR(16);
    DECLARE LESS_THAN_TIMESTAMP INT;
    DECLARE CUR_TIME INT;

    CALL partition_verify(SCHEMA_NAME, TABLE_NAME, HOURLY_INTERVAL);
    SET CUR_TIME = UNIX_TIMESTAMP(DATE_FORMAT(NOW(), '%Y-%m-%d 00:00:00'));

    SET @__interval = 1;
    create_loop: LOOP
        IF @__interval > CREATE_NEXT_INTERVALS THEN
            LEAVE create_loop;
        END IF;

        SET LESS_THAN_TIMESTAMP = CUR_TIME + (HOURLY_INTERVAL * @__interval * 3600);
        SET PARTITION_NAME = FROM_UNIXTIME(CUR_TIME + HOURLY_INTERVAL * (@__interval - 1) * 3600, 'p%Y%m%d%H00');
        IF(PARTITION_NAME != OLD_PARTITION_NAME) THEN
            CALL partition_create(SCHEMA_NAME, TABLE_NAME, PARTITION_NAME, LESS_THAN_TIMESTAMP);
        END IF;
        SET @__interval=@__interval+1;
        SET OLD_PARTITION_NAME = PARTITION_NAME;
    END LOOP;

    SET OLDER_THAN_PARTITION_DATE=DATE_FORMAT(DATE_SUB(NOW(), INTERVAL KEEP_DATA_DAYS DAY), '%Y%m%d0000');
    CALL partition_drop(SCHEMA_NAME, TABLE_NAME, OLDER_THAN_PARTITION_DATE);
END$$
DELIMITER ;

DELIMITER $$
CREATE PROCEDURE `partition_verify`(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), HOURLYINTERVAL INT(11))
BEGIN
    DECLARE PARTITION_NAME VARCHAR(16);
    DECLARE RETROWS INT(11);
    DECLARE FUTURE_TIMESTAMP TIMESTAMP;

    /*
     * Check if any partitions exist for the given SCHEMANAME.TABLENAME.
     */
    SELECT COUNT(1) INTO RETROWS
    FROM information_schema.partitions
    WHERE table_schema = SCHEMANAME AND table_name = TABLENAME AND partition_name IS NULL;

    /*
     * If partitions do not exist, go ahead and partition the table
     */
    IF RETROWS = 1 THEN
        /*
         * Take the current date at 00:00:00 and add HOURLYINTERVAL to it.  This is the timestamp below which we will store values.
         * We begin partitioning based on the beginning of a day.  This is because we don't want to generate a random partition
         * that won't necessarily fall in line with the desired partition naming (ie: if the hour interval is 24 hours, we could
         * end up creating a partition now named "p201403270600" when all other partitions will be like "p201403280000").
         */
        SET FUTURE_TIMESTAMP = TIMESTAMPADD(HOUR, HOURLYINTERVAL, CONCAT(CURDATE(), " ", '00:00:00'));
        SET PARTITION_NAME = DATE_FORMAT(CURDATE(), 'p%Y%m%d%H00');

        -- Create the partitioning query
        SET @__PARTITION_SQL = CONCAT("ALTER TABLE ", SCHEMANAME, ".", TABLENAME, " PARTITION BY RANGE(`clock`)");
        SET @__PARTITION_SQL = CONCAT(@__PARTITION_SQL, "(PARTITION ", PARTITION_NAME, " VALUES LESS THAN (", UNIX_TIMESTAMP(FUTURE_TIMESTAMP), "));");

        -- Run the partitioning query
        PREPARE STMT FROM @__PARTITION_SQL;
        EXECUTE STMT;
        DEALLOCATE PREPARE STMT;
    END IF;
END$$
DELIMITER ;

DELIMITER $$
CREATE PROCEDURE `partition_maintenance_all`(SCHEMA_NAME VARCHAR(32))
BEGIN
    CALL partition_maintenance(SCHEMA_NAME, 'history', 90, 24, 14);
    CALL partition_maintenance(SCHEMA_NAME, 'history_log', 90, 24, 14);
    CALL partition_maintenance(SCHEMA_NAME, 'history_str', 90, 24, 14);
    CALL partition_maintenance(SCHEMA_NAME, 'history_text', 90, 24, 14);
    CALL partition_maintenance(SCHEMA_NAME, 'history_uint', 90, 24, 14);
    CALL partition_maintenance(SCHEMA_NAME, 'trends', 730, 24, 14);
    CALL partition_maintenance(SCHEMA_NAME, 'trends_uint', 730, 24, 14);
END$$
DELIMITER ;
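For reference when reading the script: partition_maintenance takes (schema, table, days of data to keep, partition size in hours, number of future partitions to pre-create), which is exactly how partition_maintenance_all drives it. A single table can also be maintained on its own, for example:

-- Keep 90 days of history_uint, use daily (24-hour) partitions,
-- and pre-create partitions for the next 14 days.
CALL partition_maintenance('zabbix', 'history_uint', 90, 24, 14);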
3. The stored procedures above create and maintain the partitions. Copy the full script into a file named partition.sql.
a. Then load it into the zabbix database:

mysql -uzabbix -pzabbix zabbix < partition.sql

b. Add a crontab entry so the maintenance runs every day at 01:01:

crontab -l > crontab.txt
cat >> crontab.txt <<EOF
#zabbix partition_maintenance
01 01 * * * mysql -uzabbix -pzabbix zabbix -e "CALL partition_maintenance_all('zabbix')" &>/dev/null
EOF
cat crontab.txt | crontab

Note: replace the zabbix MySQL user's password with the one used in your environment.

c. Run the maintenance once by hand first (the first run takes a long time, so use nohup):

nohup mysql -uzabbix -pzabbix zabbix -e "CALL partition_maintenance_all('zabbix')" &> /root/partition.log &
4. Verify that partitioning worked
1. Check the partition definition:
mysql> show create table history_uint;
| history_uint | CREATE TABLE `history_uint` (
  `itemid` bigint(20) unsigned NOT NULL,
  `clock` int(11) NOT NULL DEFAULT '0',
  `value` bigint(20) unsigned NOT NULL DEFAULT '0',
  `ns` int(11) NOT NULL DEFAULT '0',
  KEY `history_uint_1` (`itemid`,`clock`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin
/*!50100 PARTITION BY RANGE (`clock`)
(PARTITION p201904160000 VALUES LESS THAN (1555430400) ENGINE = InnoDB,
 PARTITION p201904170000 VALUES LESS THAN (1555516800) ENGINE = InnoDB,
 PARTITION p201904180000 VALUES LESS THAN (1555603200) ENGINE = InnoDB,
 PARTITION p201904190000 VALUES LESS THAN (1555689600) ENGINE = InnoDB,
 PARTITION p201904200000 VALUES LESS THAN (1555776000) ENGINE = InnoDB,
 PARTITION p201904210000 VALUES LESS THAN (1555862400) ENGINE = InnoDB,
 PARTITION p201904220000 VALUES LESS THAN (1555948800) ENGINE = InnoDB,
 PARTITION p201904230000 VALUES LESS THAN (1556035200) ENGINE = InnoDB,
 PARTITION p201904240000 VALUES LESS THAN (1556121600) ENGINE = InnoDB,
 PARTITION p201904250000 VALUES LESS THAN (1556208000) ENGINE = InnoDB,
 PARTITION p201904260000 VALUES LESS THAN (1556294400) ENGINE = InnoDB,
 PARTITION p201904270000 VALUES LESS THAN (1556380800) ENGINE = InnoDB,
 PARTITION p201904280000 VALUES LESS THAN (1556467200) ENGINE = InnoDB,
 PARTITION p201904290000 VALUES LESS THAN (1556553600) ENGINE = InnoDB) */ |
1 row in set (0.00 sec)

2. Look at the table files in the MySQL data directory: after partitioning, the single history_uint.ibd file has been replaced by one .ibd file per date-named partition.

[root@VM_0_3_centos zabbix]# pwd
/data/mysql/zabbix
[root@VM_0_3_centos zabbix]# ls -lh | grep history_uint
-rw-r----- 1 mysql mysql 8.5K Apr 16 16:16 history_uint.frm
-rw-r----- 1 mysql mysql 124M Apr 16 17:11 history_uint#P#p201904160000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904170000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904180000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904190000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904200000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904210000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904220000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904230000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904240000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904250000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904260000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904270000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904280000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904290000.ibd
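The partition layout can also be read straight out of information_schema instead of parsing SHOW CREATE TABLE; a small sketch, assuming the schema is named zabbix:

-- One row per partition of history_uint, with its upper clock bound
-- and an approximate row count.
SELECT partition_name, partition_description AS less_than_clock, table_rows
FROM information_schema.partitions
WHERE table_schema = 'zabbix'
  AND table_name = 'history_uint'
ORDER BY partition_ordinal_position;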
5. Web interface settings: Administration -> General -> Housekeeping (turn off internal housekeeping for the history and trends data types, since old data is now removed by dropping partitions)
 
6. MySQL tuning

a. Use a separate tablespace per table

  vi /etc/my.cnf
  Under [mysqld] set innodb_file_per_table=1. Reference: https://blog.51cto.com/yuweibing/1656425
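To confirm the option is in effect after restarting MySQL (note that innodb_file_per_table only applies to tables created or rebuilt after it is enabled), a quick check:

-- Should report ON once the option is active.
SHOW VARIABLES LIKE 'innodb_file_per_table';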

b. Increase innodb_log_file_size
While Zabbix works with the database, and in particular while it deletes historical data, large transactions are involved; if the redo log files are too small, those operations can fail and be rolled back.

Edit my.cnf and add or raise the setting, e.g. innodb_log_file_size=20M
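The current value can be checked before and after the change (a quick sketch; the variable is reported in bytes, and resizing the redo log requires a MySQL restart):

-- 20971520 bytes corresponds to the 20M suggested above.
SHOW VARIABLES LIKE 'innodb_log_file_size';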
7. Problems encountered

After running for a while, several agents reporting to the Zabbix server started triggering the alert: Zabbix agent on zabbix server is unreachable for 5 minutes.
The Zabbix server log showed the following:

 32757:20190430:095456.588 [Z3005] query failed: [1526] Table has no partition for value 1556589295 [insert into history_uint (itemid,clock,ns,value) values (29578,1556589295,330422837,0);]
 32757:20190430:095457.602 [Z3005] query failed: [1526] Table has no partition for value 1556589297 [insert into history_uint (itemid,clock,ns,value) values (29274,1556589297,446885606,200),(29270,1556589297,447245729,0),(29457,1556589297,493976343,0),(29517,1556589297,498202149,1);]
 32758:20190430:095458.603 [Z3005] query failed: [1526] Table has no partition for value 1556589298 [insert into history_uint (itemid,clock,ns,value) values (29518,1556589298,496847550,154),(29675,1556589298,502218685,200),(29671,1556589298,502513014,0);]
 32751:20190430:104927.004 executing housekeeper
 32751:20190430:104927.018 housekeeper [deleted 0 hist/trends, 0 items/triggers, 0 events, 0 problems, 0 sessions, 0 alarms, 0 audit items in 0.013509 sec, idle for 1 hour(s)]

The inserts into the affected tables were failing.
query failed: [1526] Table has no partition means there is no partition covering the value being inserted. The history, history_log, history_str, history_text and related history tables in this database had already been partitioned and the cron job had been added.
Under normal operation a new partition is created every day and this error should not occur; the log above shows that the next day's partition had not been created, and running the maintenance procedure manually restored normal operation (see the call after this section).
Root cause:
The crontab entry was written incorrectly and never ran; correcting it fixed the issue.
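When the "Table has no partition" error does show up, the fastest recovery is to run the maintenance procedure by hand, which immediately creates the missing partitions; this is the same call the cron job should have been executing:

-- Create any missing partitions (and drop expired ones) for all Zabbix history/trends tables.
CALL partition_maintenance_all('zabbix');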

References:
https://cloud.tencent.com/developer/article/1006301
https://www.kancloud.cn/devops-centos/centos-linux-devops/375488
https://zabbix.org/wiki/Docs/howto/MySQL_Table_Partitioning_(variant)

Reposted from www.cnblogs.com/magita/p/12922958.html