MySQL High Availability Architecture - MMM Environment Installation Notes (Part 2)

6) Start the agent and monitor services

Finally, start the agent on db-master1, db-master2 and db-slave respectively:
[root@db-master1 ~]# /etc/init.d/mysql-mmm-agent start //replace start with status to check whether the agent process is up
Daemon bin: '/usr/sbin/mmm_agentd'
Daemon pid: '/var/run/mmm_agentd.pid'
Starting MMM Agent daemon... Ok

[root@db-master2 ~]# /etc/init.d/mysql-mmm-agent start
Daemon bin: '/usr/sbin/mmm_agentd'
Daemon pid: '/var/run/mmm_agentd.pid'
Starting MMM Agent daemon... Ok

[root@db-slave ~]# /etc/init.d/mysql-mmm-agent start
Daemon bin: '/usr/sbin/mmm_agentd'
Daemon pid: '/var/run/mmm_agentd.pid'
Starting MMM Agent daemon... Ok
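Beyond running the init script with status, you can confirm on each node that the agent actually stays up by checking its pid file; a minimal sketch (the agent_running helper is hypothetical; the pid path is the default shown in the daemon output above):

```shell
# agent_running: true if the pid recorded in the given pid file is a live process
agent_running() {
    pidfile="$1"
    [ -f "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null
}

# default pid path taken from the daemon output above
agent_running /var/run/mmm_agentd.pid && echo "agent up" || echo "agent down"
```

This catches the case where the init script reported Ok but the daemon died right after startup.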

Then start the monitor program on mmm-monit:
[root@mmm-monit ~]# mkdir /var/run/mysql-mmm
[root@mmm-monit ~]# /etc/init.d/mysql-mmm-monitor start
Daemon bin: '/usr/sbin/mmm_mond'
Daemon pid: '/var/run/mmm_mond.pid'
Starting MMM Monitor daemon: Ok
......

If the monitor program fails to start with an error (typically a missing Perl module), fix it as follows:
[root@mmm-monit ~]# rpm -q perl-CPAN
package perl-CPAN is not installed
[root@mmm-monit ~]# yum install perl-CPAN
......
Then open the CPAN shell with "perl -MCPAN -e shell" and enter the install commands below one by one:
……
cpan[1]> install MIME::Entity
cpan[2]> install MIME::Parser
cpan[3]> install Crypt::PasswdMD5
cpan[4]> install Term::ReadPassword
cpan[5]> install Crypt::CBC
cpan[6]> install Crypt::Blowfish
cpan[7]> install Daemon::Generic
cpan[8]> install DateTime
cpan[9]> install SOAP::Lite

Alternatively, run the equivalent one-liners directly:
[root@mmm-monit ~]# perl -MCPAN -e 'install HTML::Template'
[root@mmm-monit ~]# perl -MCPAN -e 'install MIME::Entity'
[root@mmm-monit ~]# perl -MCPAN -e 'install Crypt::PasswdMD5'
[root@mmm-monit ~]# perl -MCPAN -e 'install Term::ReadPassword'
[root@mmm-monit ~]# perl -MCPAN -e 'install Crypt::CBC'
[root@mmm-monit ~]# perl -MCPAN -e 'install Crypt::Blowfish'
[root@mmm-monit ~]# perl -MCPAN -e 'install Daemon::Generic'
[root@mmm-monit ~]# perl -MCPAN -e 'install DateTime'
[root@mmm-monit ~]# perl -MCPAN -e 'install SOAP::Lite'
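Before (or after) running those installs, you can quickly see which of the required modules are still missing, so you only install what is needed; a sketch assuming perl is in PATH:

```shell
# report which of the Perl modules the monitor needs are missing on this host
required="MIME::Entity MIME::Parser Crypt::PasswdMD5 Term::ReadPassword Crypt::CBC Crypt::Blowfish Daemon::Generic DateTime SOAP::Lite"

for mod in $required; do
    # `perl -MFoo -e1` exits 0 only if module Foo loads cleanly
    if perl -M"$mod" -e1 2>/dev/null; then
        echo "OK      $mod"
    else
        echo "MISSING $mod"
    fi
done
```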

Also make sure to rule out the following configuration mistakes:

The bin_path in mmm_mon.conf is wrong:
[root@mmm-monit ~]# cat /etc/mysql-mmm/mmm_mon.conf|grep bin_path
bin_path /usr/libexec/mysql-mmm
Change bin_path to /usr/lib/mysql-mmm to fix it, i.e.:
[root@mmm-monit ~]# cat /etc/mysql-mmm/mmm_mon.conf|grep bin_path
bin_path /usr/lib/mysql-mmm

Or the status_path in mmm_mon.conf is wrong:
[root@mmm-monit ~]# cat /etc/mysql-mmm/mmm_mon.conf |grep status_path
status_path /var/lib/mysql-mmm/mmm_mond.status
Change status_path to /var/lib/misc/mmm_mond.status to fix it, i.e.:
[root@mmm-monit ~]# cat /etc/mysql-mmm/mmm_mon.conf|grep status_path
status_path /var/lib/misc/mmm_mond.status
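Both paths can be sanity-checked in one pass before starting the monitor; a sketch (the check_mon_conf helper is hypothetical):

```shell
# check_mon_conf: verify that bin_path and status_path point at the working locations
check_mon_conf() {
    conf="$1"
    grep -qE '^[[:space:]]*bin_path[[:space:]]+/usr/lib/mysql-mmm' "$conf" 2>/dev/null \
        && grep -qE '^[[:space:]]*status_path[[:space:]]+/var/lib/misc/mmm_mond\.status' "$conf" 2>/dev/null \
        && echo "paths OK" || echo "fix bin_path/status_path in $conf"
}

check_mon_conf /etc/mysql-mmm/mmm_mon.conf
```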

Then restart the monitor process:
[root@mmm-monit ~]# /etc/init.d/mysql-mmm-monitor restart
……..
2017/06/01 20:57:14 DEBUG Sending command 'SET_STATUS(ONLINE, reader(182.48.115.235), db-master1)' to db-master2 (182.48.115.237:9989)
2017/06/01 20:57:14 DEBUG Received Answer: OK: Status applied successfully!|UP:885492.82
2017/06/01 20:57:14 DEBUG Sending command 'SET_STATUS(ONLINE, writer(182.48.115.234), db-master1)' to db-master1 (182.48.115.236:9989)
2017/06/01 20:57:14 DEBUG Received Answer: OK: Status applied successfully!|UP:65356.14
2017/06/01 20:57:14 DEBUG Sending command 'SET_STATUS(ONLINE, reader(182.48.115.239), db-master1)' to db-slave (182.48.115.238:9989)
2017/06/01 20:57:14 DEBUG Received Answer: OK: Status applied successfully!|UP:945625.05
2017/06/01 20:57:15 DEBUG Listener: Waiting for connection...
2017/06/01 20:57:17 DEBUG Sending command 'SET_STATUS(ONLINE, reader(182.48.115.235), db-master1)' to db-master2 (182.48.115.237:9989)
2017/06/01 20:57:17 DEBUG Received Answer: OK: Status applied successfully!|UP:885495.95
2017/06/01 20:57:17 DEBUG Sending command 'SET_STATUS(ONLINE, writer(182.48.115.234), db-master1)' to db-master1 (182.48.115.236:9989)
2017/06/01 20:57:17 DEBUG Received Answer: OK: Status applied successfully!|UP:65359.27
2017/06/01 20:57:17 DEBUG Sending command 'SET_STATUS(ONLINE, reader(182.48.115.239), db-master1)' to db-slave (182.48.115.238:9989)
2017/06/01 20:57:17 DEBUG Received Answer: OK: Status applied successfully!|UP:945628.17
2017/06/01 20:57:18 DEBUG Listener: Waiting for connection…
………

As long as the checks during startup report no errors and the "successfully" messages appear, the monitor process is working normally.
[root@mmm-monit ~]# ps -ef|grep monitor
root 30651 30540 0 20:59 ? 00:00:00 perl /usr/lib/mysql-mmm/monitor/checker ping_ip
root 30654 30540 0 20:59 ? 00:00:00 perl /usr/lib/mysql-mmm/monitor/checker mysql
root 30656 30540 0 20:59 ? 00:00:00 perl /usr/lib/mysql-mmm/monitor/checker ping
root 30658 30540 0 20:59 ? 00:00:00 perl /usr/lib/mysql-mmm/monitor/checker rep_backlog
root 30660 30540 0 20:59 ? 00:00:00 perl /usr/lib/mysql-mmm/monitor/checker rep_threads
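The monitor forks one checker process per test (ping_ip, mysql, ping, rep_backlog, rep_threads), as the ps output shows. A quick way to confirm all five are present (a sketch; the checker command-line format is taken from the ps output above):

```shell
# list the checker names currently running and flag any that are absent
expected="ping_ip mysql ping rep_backlog rep_threads"

# the checker name is the last field of each "monitor/checker <name>" command line;
# exclude the awk process itself, whose own command line contains the pattern
running=$(ps -ef 2>/dev/null | awk '/monitor\/checker/ && !/awk/ {print $NF}')

for c in $expected; do
    echo "$running" | grep -qx "$c" || echo "checker missing: $c"
done
```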

7) Check the status of the cluster hosts on the monitor host

[root@mmm-monit ~]# mmm_control checks all
db-master2 ping [last change: 2017/06/01 20:59:39] OK
db-master2 mysql [last change: 2017/06/01 20:59:39] OK
db-master2 rep_threads [last change: 2017/06/01 20:59:39] OK
db-master2 rep_backlog [last change: 2017/06/01 20:59:39] OK: Backlog is null
db-master1 ping [last change: 2017/06/01 20:59:39] OK
db-master1 mysql [last change: 2017/06/01 20:59:39] OK
db-master1 rep_threads [last change: 2017/06/01 20:59:39] OK
db-master1 rep_backlog [last change: 2017/06/01 20:59:39] OK: Backlog is null
db-slave ping [last change: 2017/06/01 20:59:39] OK
db-slave mysql [last change: 2017/06/01 20:59:39] OK
db-slave rep_threads [last change: 2017/06/01 20:59:39] OK
db-slave rep_backlog [last change: 2017/06/01 20:59:39] OK: Backlog is null
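For a monitoring script, the same check can be automated: every line of `mmm_control checks all` output should carry OK after the timestamp. A sketch of a parser (failed_checks is a hypothetical helper, fed here with sample lines in the same format as above):

```shell
# failed_checks: print any non-OK line from `mmm_control checks all`, exit 1 if found
failed_checks() {
    awk '!/\] +OK/ {print; bad=1} END {exit bad}'
}

# in production: mmm_control checks all | failed_checks
printf '%s\n' \
    'db-master1  ping         [last change: 2017/06/01 20:59:39]  OK' \
    'db-slave    rep_backlog  [last change: 2017/06/01 20:59:39]  OK: Backlog is null' \
    | failed_checks && echo "all checks passed"
```

Note that "OK: Backlog is null" still counts as OK, which is why the pattern anchors on the closing bracket rather than on the end of the line.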

8) Check the cluster's online status on the monitor host

[root@mmm-monit ~]# mmm_control show
db-master1(182.48.115.236) master/ONLINE. Roles: writer(182.48.115.234)
db-master2(182.48.115.237) master/ONLINE. Roles: reader(182.48.115.235)
db-slave(182.48.115.238) slave/ONLINE. Roles: reader(182.48.115.239)

Then check on the mmm agent machines and you will find the VIPs already bound:
[root@db-master1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:5f:58:dc brd ff:ff:ff:ff:ff:ff
inet 182.48.115.236/27 brd 182.48.115.255 scope global eth0
inet 182.48.115.234/32 scope global eth0
inet6 fe80::5054:ff:fe5f:58dc/64 scope link
valid_lft forever preferred_lft forever

[root@db-master2 mysql-mmm]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:1b:6e:53 brd ff:ff:ff:ff:ff:ff
inet 182.48.115.237/27 brd 182.48.115.255 scope global eth0
inet 182.48.115.235/32 scope global eth0
inet6 fe80::5054:ff:fe1b:6e53/64 scope link
valid_lft forever preferred_lft forever

[root@db-slave ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:ca:d5:f8 brd ff:ff:ff:ff:ff:ff
inet 182.48.115.238/27 brd 182.48.115.255 scope global eth0
inet 182.48.115.239/27 brd 182.48.115.255 scope global secondary eth0:1
inet6 fe80::5054:ff:feca:d5f8/64 scope link
valid_lft forever preferred_lft forever

The output above shows that the virtual IPs have been bound on each agent:
182.48.115.234 is bound on 182.48.115.236 (db-master1), which serves writes as the active master
182.48.115.235 is bound on 182.48.115.237 (db-master2), which serves reads
182.48.115.239 is bound on 182.48.115.238 (db-slave), which serves reads
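A cron job or monitoring probe can verify the binding directly instead of eyeballing `ip addr`; a sketch (vip_bound is a hypothetical helper; substitute your own VIP and interface):

```shell
# vip_bound: true if the given address is configured on the given interface
vip_bound() {
    ip -o addr show dev "$2" 2>/dev/null | grep -qw "$1"
}

# on db-master1 the writer VIP from the output above should sit on eth0
vip_bound 182.48.115.234 eth0 && echo "writer VIP bound" || echo "writer VIP not bound"
```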

9) Bring all hosts online

Here the hosts are already online; if they were not, the following commands would bring them online:
[root@mmm-monit ~]# mmm_control set_online db-master1
OK: This host is already ONLINE. Skipping command.
[root@mmm-monit ~]# mmm_control set_online db-master2
OK: This host is already ONLINE. Skipping command.
[root@mmm-monit ~]# mmm_control set_online db-slave
OK: This host is already ONLINE. Skipping command.

The messages show the hosts are already ONLINE, so the commands are skipped. At this point the whole cluster is configured.
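If several nodes were offline (for example after maintenance), the set_online calls can be looped; a sketch to run on the monitor host:

```shell
# bring every cluster node online; degrade gracefully where mmm_control is absent
hosts="db-master1 db-master2 db-slave"

if command -v mmm_control >/dev/null 2>&1; then
    for h in $hosts; do
        mmm_control set_online "$h"
    done
else
    echo "mmm_control not found - run this on the monitor host"
fi
```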
