
MySQL High Availability Deployment


I. MHA (Single-Master, Multi-Slave Mode)

1. Environment Preparation (All Nodes)

# Disable the firewall and SELinux
systemctl stop firewalld
systemctl disable firewalld
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0

# Configure the hostname and hosts file
# Master
hostnamectl set-hostname mysql-master
cat >> /etc/hosts << EOF
192.168.10.71 mha-manager
192.168.10.72 mysql-master
192.168.10.73 mysql-slave1
192.168.10.74 mysql-slave2
EOF
# Slave 1 and slave 2: run the same commands, changing the hostnamectl hostname accordingly

# Install dependencies
yum install -y perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager

2. Deploy MySQL Master-Slave Replication (Master + Slaves)

Refer to the CSDN blog post "Mysql主从复制部署_mysql replication部署操作" for the full procedure; a minimal sketch is shown below.
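The following is only a rough sketch of position-based replication setup, not the referenced procedure. It reuses the replication account yang/admin and the master IP 192.168.10.72 from the MHA configuration further below, and assumes log_bin and a unique server_id are already configured on every node; the MASTER_LOG_FILE and MASTER_LOG_POS values are placeholders to be taken from SHOW MASTER STATUS.

# On mysql-master: create the replication account and note the current binlog coordinates
mysql -uroot -p -e "CREATE USER 'yang'@'%' IDENTIFIED BY 'admin';
GRANT REPLICATION SLAVE ON *.* TO 'yang'@'%';
SHOW MASTER STATUS;"

# On mysql-slave1 and mysql-slave2: point replication at the master and start it
# (replace the file/position placeholders with the values from SHOW MASTER STATUS)
mysql -uroot -p -e "CHANGE MASTER TO
  MASTER_HOST='192.168.10.72',
  MASTER_USER='yang',
  MASTER_PASSWORD='admin',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=154;
START SLAVE;
SHOW SLAVE STATUS\G"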

3. Deploy MHA Manager (Management Node)

(1) Install MHA Manager

# The manager host needs both the node and the manager packages; download both RPMs first
wget https://github.com/yoshinorim/mha4mysql-manager/releases/download/v0.58/mha4mysql-manager-0.58-0.el7.noarch.rpm
wget https://github.com/yoshinorim/mha4mysql-node/releases/download/v0.58/mha4mysql-node-0.58-0.el7.noarch.rpm
yum install -y mha4mysql-node-0.58-0.el7.noarch.rpm
yum install -y mha4mysql-manager-0.58-0.el7.noarch.rpm

(2) Configure Passwordless SSH Login

Refer to the CSDN blog post "ssh免密登录" for details; a minimal sketch is shown below.
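A minimal sketch of the key distribution, assuming the four hosts from the hosts file above and that MHA connects as root; each ssh-copy-id call prompts once for the target host's root password.

# Run on every node (mha-manager, mysql-master, mysql-slave1, mysql-slave2)
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
for host in mha-manager mysql-master mysql-slave1 mysql-slave2; do
    ssh-copy-id -o StrictHostKeyChecking=no root@$host
done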

(3) Create the MHA Configuration File

mkdir -p /etc/mha/masterha
vi /etc/mha/masterha/app1.cnf

Configuration content (comments are kept on their own lines because MHA's Config::Tiny parser does not strip trailing comments from values):

[server default]
manager_workdir=/var/log/masterha/app1
manager_log=/var/log/masterha/app1/manager.log
master_binlog_dir=/var/lib/mysql
# MHA management user and password
user=mha_admin
password=admin
# Health-check interval in seconds
ping_interval=1
# Replication user and password
repl_user=yang
repl_password=admin
# SSH user
ssh_user=root
# Secondary check through both slaves (hostnames must match the [serverN] entries below)
secondary_check_script=masterha_secondary_check -s mysql-slave1 -s mysql-slave2
master_ip_failover_script=/usr/local/bin/master_ip_failover
# Email alert script (optional)
# report_script=/usr/local/bin/send_report

[server1]
hostname=mysql-master
candidate_master=1
check_repl_delay=0

[server2]
hostname=mysql-slave1
port=3306
# Candidate master; do not check replication delay
candidate_master=1
check_repl_delay=0

[server3]
hostname=mysql-slave2
no_master=1

4. Deploy MHA Node (All MySQL Nodes)

# 1. Install MHA Node
# Run on all MySQL nodes (master, slave1, slave2)
wget https://github.com/yoshinorim/mha4mysql-node/releases/download/v0.58/mha4mysql-node-0.58-0.el7.noarch.rpm
yum install -y mha4mysql-node-0.58-0.el7.noarch.rpm

# 2. Create the MHA management user
# Run in MySQL on all MySQL nodes (the password must match password= in app1.cnf)
CREATE USER 'mha_admin'@'%' IDENTIFIED WITH mysql_native_password BY 'admin';
GRANT ALL PRIVILEGES ON *.* TO 'mha_admin'@'%';
FLUSH PRIVILEGES;

5. Configure the VIP Failover Script

On the mha-manager node, create /usr/local/bin/master_ip_failover and make it executable:

vi /usr/local/bin/master_ip_failover
chmod +x /usr/local/bin/master_ip_failover

Script content:

#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;

my ($command, $ssh_user, $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip, $new_master_port);

my $vip = '192.168.10.100/24';    # Virtual IP
my $gateway = '192.168.10.251';   # Gateway used for arping
my $interface = 'ens192';         # NIC that carries the VIP
my $key = "1";
my $ssh_start_vip = "/sbin/ifconfig $interface:$key $vip;/sbin/arping -I $interface -c 3 -s $vip $gateway >/dev/null 2>&1";
my $ssh_stop_vip = "/sbin/ifconfig $interface:$key down";

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        # $orig_master_host, $orig_master_ip, $orig_master_port are passed.
        # If you manage master ip address at global catalog database,
        # invalidate orig_master_ip here.
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        # all arguments are passed.
        # If you manage master ip address at global catalog database,
        # activate new_master_ip here.
        # You can also grant write access (create user, set read_only=0, etc) here.
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        `ssh $ssh_user\@$orig_master_host \" $ssh_start_vip \"`;
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

# A simple system call that enables the VIP on the new master
sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}

# A simple system call that disables the VIP on the old master
sub stop_vip() {
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print
    "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}

Note: the script can also be run manually; its common invocations are as follows:

# Move the VIP to the new master
/usr/local/bin/master_ip_failover \
  --command=start \
  --ssh_user=root \
  --orig_master_host=mysql-slave2 \
  --orig_master_ip=192.168.10.74 \
  --orig_master_port=3306 \
  --new_master_host=mysql-master \
  --new_master_ip=192.168.10.72 \
  --new_master_port=3306

# Remove the VIP from the original master
/usr/local/bin/master_ip_failover \
  --command=stop \
  --ssh_user=root \
  --orig_master_host=mysql-slave1 \
  --orig_master_ip=192.168.10.73 \
  --orig_master_port=3306

# Check the VIP status
/usr/local/bin/master_ip_failover \
  --command=status \
  --ssh_user=root \
  --orig_master_host=mysql-master

6. Verify and Start MHA

# Check SSH connectivity
# Run on the mha-manager node
masterha_check_ssh --conf=/etc/mha/masterha/app1.cnf

# Check the replication status
masterha_check_repl --conf=/etc/mha/masterha/app1.cnf

# Start MHA Manager
nohup masterha_manager --conf=/etc/mha/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &

# Check the MHA status
masterha_check_status --conf=/etc/mha/masterha/app1.cnf

Notes:
Binary log retention: the master must keep enough binlogs (set expire_logs_days to a larger value) so that slaves never need binlogs that have already been purged during a failover (see the sketch after this list).
VIP management: in production, it is recommended to combine this with Keepalived or LVS for automatic VIP failover.
Monitoring and alerting: monitor the MHA Manager log (/var/log/masterha/app1/manager.log) and the MySQL replication status.
Parameter tuning: adjust ping_interval (check frequency) and secondary_check_script (secondary checks) according to your environment.
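A minimal sketch of the binlog retention setting from the first note above, assuming MySQL 5.7; the 7-day value is only an example.

# In /etc/my.cnf on the master and candidate masters, under [mysqld]:
#   expire_logs_days = 7
# Or adjust it at runtime (persists only until restart unless also written to my.cnf):
mysql -uroot -p -e "SET GLOBAL expire_logs_days = 7; SHOW VARIABLES LIKE 'expire_logs_days';"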

To run MHA again after a failover, first delete the log file /var/log/masterha/app1/app1.failover.complete and make sure the failed node has been removed from the replication topology; by design, MHA needs at least one master and one slave. A minimal sketch of the restart is shown below.
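The sketch below simply reuses the paths and the manager start command from the sections above.

# On mha-manager: remove the failover marker, then start and check the manager again
rm -f /var/log/masterha/app1/app1.failover.complete
nohup masterha_manager --conf=/etc/mha/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &
masterha_check_status --conf=/etc/mha/masterha/app1.cnf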

II. PXC (Multi-Master Mode)

1. Environment Preparation

# Disable the firewall and SELinux
systemctl stop firewalld
systemctl disable firewalld
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0

# Configure the hostname and hosts file
# Node 1
hostnamectl set-hostname pxc-node1
echo "192.168.4.1 pxc-node1" >> /etc/hosts
echo "192.168.4.2 pxc-node2" >> /etc/hosts
echo "192.168.4.3 pxc-node3" >> /etc/hosts
# Node 2 and node 3: run the same commands, changing the hostnamectl hostname accordingly

2. Install the PXC Software (All Nodes)

# 1. Install dependencies
yum install -y epel-release
yum install -y libev lsof perl-Compress-Raw-Bzip2 perl-Compress-Raw-Zlib perl-DBD-MySQL perl-DBI perl-Digest perl-Digest-MD5 perl-IO-Compress perl-Net-Daemon perl-PlRPC qpress socat openssl openssl-devel

# 2. Remove mariadb (if installed)
rpm -e mariadb-libs --nodeps

# 3. Install XtraBackup
wget https://repo.percona.com/yum/percona-release-latest.noarch.rpm
rpm -ivh percona-release-latest.noarch.rpm
percona-release enable tools release
yum install -y percona-xtrabackup-24

# 4. Create the MySQL user and group
groupadd -r mysql
useradd -M -s /bin/false -r -g mysql mysql

# 5. Install Percona XtraDB Cluster
# (percona-release is already installed from step 3; enable the PXC 5.7 repository)
percona-release enable-only pxc-57 release
yum install -y Percona-XtraDB-Cluster-57

3. Configure PXC (All Nodes)

(1) Create Data Directories and Set Permissions

mkdir -p /data/local/percona-xtradb-cluster/data
chown -R mysql:mysql /data/local/percona-xtradb-cluster/data
mkdir -p /data/local/percona-xtradb-cluster/run
chown -R mysql:mysql /data/local/percona-xtradb-cluster/run
mkdir -p /data/logs/mysql
touch /data/logs/mysql/error.log
chown -R mysql:mysql /data/logs/mysql

(2) Modify the Configuration File

        The default /etc/my.cnf can be deleted or pointed at a different path. Write the following content into it (node 1 is used as the example; node 2 and node 3 must change server_id, wsrep_node_name, and wsrep_node_address, as sketched after the configuration):

[mysqld]
server_id=1
datadir = /data/local/percona-xtradb-cluster/data
socket = /data/local/percona-xtradb-cluster/run/mysql.sock
pid-file = /data/local/percona-xtradb-cluster/run/mysql.pid
log-error = /data/logs/mysql/error.log
wsrep_cluster_address=gcomm://192.168.4.1,192.168.4.2,192.168.4.3
pxc_strict_mode=ENFORCING
wsrep_cluster_name=test-pxc
wsrep_node_name=pxc-node1
wsrep_node_address=192.168.4.1
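A minimal sketch of adjusting the per-node values on pxc-node2, assuming the node 1 template above has been copied to /etc/my.cnf first; pxc-node3 is analogous (server_id=3, pxc-node3, 192.168.4.3).

# On pxc-node2: only these three values differ from node 1's configuration
sed -i -e 's/^server_id=1/server_id=2/' \
       -e 's/^wsrep_node_name=pxc-node1/wsrep_node_name=pxc-node2/' \
       -e 's/^wsrep_node_address=192.168.4.1/wsrep_node_address=192.168.4.2/' /etc/my.cnf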

4. Start the PXC Cluster

# Bootstrap the first node (node 1 in this example)
systemctl start mysql@bootstrap.service

# Start the other nodes (node 2 and node 3)
systemctl start mysql
systemctl disable mysql

5. Initialize and Test

Get the initial password and change it:
        On any node, obtain the temporary password with grep password /data/logs/mysql/error.log, then log in to MySQL and change it (a sketch follows the SST grant below).
Create the SST transfer account:
        After logging in to MySQL, run the following to create the account required for SST transfers:

grant all privileges on *.* to 'sst'@'localhost' identified by 'password';
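For the password change described above, a minimal sketch assuming PXC 5.7 syntax, the socket path from the configuration, and a placeholder new password (NewPass_123):

# Look up the temporary password, then log in with it and set a new one
grep password /data/logs/mysql/error.log
mysql -uroot -p --socket=/data/local/percona-xtradb-cluster/run/mysql.sock \
      --connect-expired-password \
      -e "ALTER USER 'root'@'localhost' IDENTIFIED BY 'NewPass_123';"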

Check the cluster status:
        Log in to MySQL on any node and run show status like 'wsrep%'; to review the cluster status. Make sure wsrep_cluster_size shows the correct number of nodes, wsrep_local_state is 4 (Synced), and wsrep_ready is ON.
Test the cluster:
        Create a database and table on one node, insert data, and check that the other nodes replicate it. You can also stop the MySQL service on one node, confirm that the remaining nodes' cluster status and data stay healthy, then restart the stopped node and verify that it resynchronizes. A minimal sketch is shown below.
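A minimal sketch of the write test described above; the database and table names are illustrative, and --socket points at the non-default socket from the configuration.

SOCK=/data/local/percona-xtradb-cluster/run/mysql.sock

# On pxc-node1: create a test table (PXC strict mode requires a primary key) and insert a row
mysql -uroot -p --socket=$SOCK -e "CREATE DATABASE IF NOT EXISTS pxc_test;
CREATE TABLE IF NOT EXISTS pxc_test.t1 (id INT PRIMARY KEY, note VARCHAR(50));
INSERT INTO pxc_test.t1 VALUES (1, 'written on pxc-node1');"

# On pxc-node2 / pxc-node3: the row should be visible immediately
mysql -uroot -p --socket=$SOCK -e "SELECT * FROM pxc_test.t1;"

# Check the status counters mentioned above
mysql -uroot -p --socket=$SOCK -e "SHOW STATUS WHERE Variable_name IN ('wsrep_cluster_size','wsrep_local_state','wsrep_ready');"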