1. Environment
OS: CentOS 6.4 x86_64, minimal install
node1 192.168.3.61 node1.test.com
node2 192.168.3.62 node2.test.com
vip 192.168.3.63
2. Base configuration
a. Set up SSH mutual trust
node1:
[root@node1 ~]# ssh-keygen
[root@node1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.3.62
node2:
[root@node2 ~]# ssh-keygen
[root@node2 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.3.61
b. Perform the same steps on both node1 and node2; only node1 is shown here
Configure local name resolution in /etc/hosts
[root@node1 ~]# cat /etc/hosts
127.0.0.1    localhost localhost.localdomain localhost4 localhost4.localdomain4
::1          localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.3.61 node1.test.com node1
192.168.3.62 node2.test.com node2
Disable the firewall and SELinux
[root@node1 ~]# service iptables stop
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Unloading modules:                               [  OK  ]
[root@node1 ~]# getenforce
Disabled
Install the EPEL repository
[root@node1 ~]# rpm -ivh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
Retrieving http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
warning: /var/tmp/rpm-tmp.J6fbZA: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY
Preparing...                ########################################### [100%]
   1:epel-release           ########################################### [100%]
[root@node1 ~]# sed -i 's@#b@b@g' /etc/yum.repos.d/epel.repo
[root@node1 ~]# sed -i 's@mirrorlist@#mirrorlist@g' /etc/yum.repos.d/epel.repo
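The two sed one-liners switch the repo file from the mirrorlist service to the baseurl mirrors directly. A quick demonstration of what they do to a sample epel.repo stanza:

```shell
# 's@#b@b@g' uncomments the baseurl line; 's@mirrorlist@#mirrorlist@g'
# comments out the mirrorlist line.
repo='#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch'

echo "$repo" | sed 's@#b@b@g' | sed 's@mirrorlist@#mirrorlist@g'
# prints:
# baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch
# #mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
```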
Configure NTP time synchronization
[root@node1 ~]# echo "*/10 * * * * /usr/sbin/ntpdate asia.pool.ntp.org &>/dev/null" >/var/spool/cron/root
[root@node1 ~]# ntpdate asia.pool.ntp.org
11 Jun 14:42:40 ntpdate[1529]: step time server 120.119.31.1 offset 167.549052 sec
[root@node1 ~]# hwclock -w
3. Installing and configuring corosync
a. Install corosync
node1:
[root@node1 ~]# yum install corosync -y
node2:
[root@node2 ~]# yum install corosync -y
b. Configure corosync
[root@node1 ~]# cd /etc/corosync/
[root@node1 corosync]# cp corosync.conf.example corosync.conf
[root@node1 corosync]# egrep -v "^$|^#|^[[:space:]]+#" /etc/corosync/corosync.conf
compatibility: whitetank
totem {
    version: 2
    secauth: on                       # enable authentication
    threads: 0
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.3.0      # heartbeat network
        mcastaddr: 239.255.11.49      # multicast address
        mcastport: 5405
        ttl: 1
    }
}
logging {
    fileline: off
    to_stderr: no
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: no
    debug: off
    timestamp: on
    logger_subsys {
        subsys: AMF
        debug: off
    }
}
amf {
    mode: disabled
}
service {                             # start pacemaker as a corosync plugin
    ver: 0
    name: pacemaker
}
aisexec {
    user: root
    group: root
}
c. Generate the authentication key
[root@node1 corosync]# mv /dev/{random,random.bak}
[root@node1 corosync]# ln -s /dev/urandom /dev/random    # speeds up key generation (avoids blocking on entropy)
# generate the key file
[root@node1 corosync]# corosync-keygen
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/random.
Press keys on your keyboard to generate entropy.
Writing corosync key to /etc/corosync/authkey.
[root@node1 corosync]# ll
total 24
-r-------- 1 root root  128 Jun 11 15:05 authkey    # the newly generated key
-rw-r--r-- 1 root root 2769 Jun 11 14:59 corosync.conf
-rw-r--r-- 1 root root 2663 Oct 15  2014 corosync.conf.example
-rw-r--r-- 1 root root 1073 Oct 15  2014 corosync.conf.example.udpu
drwxr-xr-x 2 root root 4096 Oct 15  2014 service.d
drwxr-xr-x 2 root root 4096 Oct 15  2014 uidgid.d
d. Copy the configuration file and key from node1 to node2
[root@node1 corosync]# scp authkey corosync.conf node2:/etc/corosync/
authkey                       100%  128     0.1KB/s   00:00
corosync.conf                 100% 2769     2.7KB/s   00:00
Because pacemaker is launched from the corosync configuration file, we wait until pacemaker is installed before starting corosync.
4. Installing and configuring pacemaker
a. Install pacemaker
node1:
[root@node1 corosync]# yum install pacemaker -y
node2:
[root@node2 ~]# yum install pacemaker -y
b. Install crmsh
The steps are identical on node1 and node2
[root@node1 corosync]# wget http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/RedHat_RHEL-6/x86_64/crmsh-2.1-1.2.x86_64.rpm
[root@node1 corosync]# yum install python-dateutil python-lxml redhat-rpm-config pssh -y
[root@node1 corosync]# rpm -ivh crmsh-2.1-1.2.x86_64.rpm
warning: crmsh-2.1-1.2.x86_64.rpm: Header V3 RSA/SHA1 Signature, key ID 17280ddf: NOKEY
Preparing...                ########################################### [100%]
   1:crmsh                  ########################################### [100%]
c. Start corosync
[root@node1 ~]# ssh node2 service corosync start
Starting Corosync Cluster Engine (corosync):               [  OK  ]
[root@node1 ~]# service corosync start
Starting Corosync Cluster Engine (corosync):               [  OK  ]
d. Verify startup
(1) Check that the corosync engine started correctly
[root@node1 ~]# egrep "Corosync Cluster Engine|configuration file" /var/log/cluster/corosync.log
Jun 11 15:23:10 corosync [MAIN  ] Corosync Cluster Engine ('1.4.7'): started and ready to provide service.
Jun 11 15:23:10 corosync [MAIN  ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
(2) Check the initial membership information
[root@node1 ~]# grep TOTEM /var/log/cluster/corosync.log
Jun 11 15:35:57 corosync [TOTEM ] Initializing transport (UDP/IP Multicast).
Jun 11 15:35:57 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Jun 11 15:35:57 corosync [TOTEM ] The network interface [192.168.3.61] is now up.
Jun 11 15:35:57 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Jun 11 15:35:58 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
(3) Check for errors during startup
[root@node1 ~]# grep ERROR: /var/log/cluster/corosync.log
Jun 11 15:23:10 corosync [pcmk  ] ERROR: process_ais_conf: You have configured a cluster using the Pacemaker plugin for Corosync. The plugin is not supported in this environment and will be removed very soon.
Jun 11 15:23:10 corosync [pcmk  ] ERROR: process_ais_conf: Please see Chapter 8 of 'Clusters from Scratch' (http://www.clusterlabs.org/doc) for details on using Pacemaker with CMAN
These two plugin-deprecation messages are expected when running pacemaker as a corosync plugin on CentOS 6 and can be ignored here.
(4) Check that pacemaker started correctly
[root@node1 ~]# grep pcmk_startup /var/log/cluster/corosync.log
Jun 11 15:23:10 corosync [pcmk  ] info: pcmk_startup: CRM: Initialized
Jun 11 15:23:10 corosync [pcmk  ] Logging: Initialized pcmk_startup
Jun 11 15:23:10 corosync [pcmk  ] info: pcmk_startup: Maximum core file size is: 18446744073709551615
Jun 11 15:23:10 corosync [pcmk  ] info: pcmk_startup: Service: 9
Jun 11 15:23:10 corosync [pcmk  ] info: pcmk_startup: Local hostname: node1.test.com
(5) Check the cluster status
[root@node1 ~]# crm status
Last updated: Thu Jun 11 15:36:15 2015
Last change: Thu Jun 11 15:36:09 2015
Stack: classic openais (with plugin)
Current DC: node2.test.com - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
0 Resources configured
Online: [ node1.test.com node2.test.com ]
The output shows that node1 and node2 are both online.
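For scripted health checks, the member list can be pulled out of the crm status output; a minimal sketch (the online_nodes helper is an illustrative assumption, not part of crmsh):

```shell
# Hypothetical helper: extract the online member list from `crm status` output.
online_nodes() {
    sed -n 's/^Online: \[ \(.*\) \]$/\1/p'
}

# fed with the status line shown above:
echo "Online: [ node1.test.com node2.test.com ]" | online_nodes
# prints: node1.test.com node2.test.com
```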
5. Installing DRBD
DRBD is built from source here; the steps are the same on node1 and node2.
Building DRBD requires the kernel-devel and kernel-headers RPMs, and their version must match the output of uname -r; we extract these two packages from the installation DVD. DRBD download: http://oss.linbit.com/drbd/8.4/drbd-8.4.4.tar.gz
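The version match is worth checking mechanically before committing to a long build; a minimal sketch (the kernel_matches helper and its messages are assumptions for illustration, not from the original post):

```shell
# Compare the running kernel against the kernel-devel version before building.
kernel_matches() {
    # $1 = output of `uname -r`, $2 = version-release.arch of kernel-devel
    [ "$1" = "$2" ]
}

running="2.6.32-358.el6.x86_64"   # what `uname -r` reports on these nodes
devel="2.6.32-358.el6.x86_64"     # from kernel-devel-2.6.32-358.el6.x86_64.rpm
if kernel_matches "$running" "$devel"; then
    echo "kernel-devel matches the running kernel; safe to build the module"
else
    echo "mismatch: install kernel-devel-$running before building" >&2
fi
```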
a. Build and install DRBD
[root@node1 ~]# uname -r
2.6.32-358.el6.x86_64
[root@node1 ~]# ll |grep rpm
-r--r--r-- 1 root root 8548160 Jun 11 15:44 kernel-devel-2.6.32-358.el6.x86_64.rpm
-r--r--r-- 1 root root 2426756 Jun 11 15:44 kernel-headers-2.6.32-358.el6.x86_64.rpm
[root@node1 ~]# rpm -ivh kernel-devel-2.6.32-358.el6.x86_64.rpm kernel-headers-2.6.32-358.el6.x86_64.rpm
Preparing...                ########################################### [100%]
   1:kernel-headers         ########################################### [ 50%]
   2:kernel-devel           ########################################### [100%]
[root@node1 ~]# yum install gcc make flex -y
[root@node1 ~]# tar xf drbd-8.4.4.tar.gz && cd drbd-8.4.4
[root@node1 drbd-8.4.4]# ./configure --prefix=/usr/local/drbd --with-km --with-pacemaker --with-heartbeat
[root@node1 drbd-8.4.4]# make KDIR=/usr/src/kernels/2.6.32-358.el6.x86_64/
[root@node1 drbd-8.4.4]# make install
[root@node1 drbd-8.4.4]# mkdir -p /usr/local/drbd/var/run/drbd
[root@node1 drbd-8.4.4]# cp /usr/local/drbd/etc/rc.d/init.d/drbd /etc/init.d/
[root@node1 drbd-8.4.4]# chkconfig --add drbd
[root@node1 drbd-8.4.4]# chkconfig drbd off
# build and install the DRBD kernel module
[root@node1 drbd-8.4.4]# cd drbd
[root@node1 drbd]# make clean
rm -rf .tmp_versions Module.markers Module.symvers modules.order
rm -f *.[oas] *.ko .*.cmd .*.d .*.tmp *.mod.c .*.flags .depend .kernel*
rm -f compat/*.[oas] compat/.*.cmd
[root@node1 drbd]# make KDIR=/usr/src/kernels/2.6.32-358.el6.x86_64/
[root@node1 drbd]# cp drbd.ko /lib/modules/2.6.32-358.el6.x86_64/kernel/lib/
[root@node1 drbd]# depmod
[root@node1 drbd]# modprobe drbd
[root@node1 drbd]# lsmod |grep drbd
drbd                  340519  0
libcrc32c               1246  1 drbd
b. Configure DRBD
[root@node1 ~]# egrep -v "^$|^#|^[[:space:]]+#" /usr/local/drbd/etc/drbd.d/global_common.conf
global {
    usage-count no;
}
common {
    protocol C;
    handlers {
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
    }
    startup {
    }
    options {
    }
    disk {
        on-io-error detach;
        resync-rate 200M;
        no-disk-flushes;
        no-md-flushes;
    }
    net {
        cram-hmac-alg "sha1";
        shared-secret "123456";
        sndbuf-size 512k;
        max-buffers 8000;
        unplug-watermark 1024;
        max-epoch-size 8000;
        after-sb-0pri disconnect;
        after-sb-1pri disconnect;
        after-sb-2pri disconnect;
        rr-conflict disconnect;
    }
}
c. On node1 and node2, split /dev/sdb into two partitions, /dev/sdb1 = 48G and /dev/sdb2 = the remaining space, and format /dev/sdb1 as ext4 (/dev/sdb2 stays raw; it will hold the DRBD metadata)
node1:
[root@node1 ~]# fdisk -l |grep /dev/sdb
Disk /dev/sdb: 53.7 GB, 53687091200 bytes
/dev/sdb1               1        6267    50339646   83  Linux    # ~48G
/dev/sdb2            6268        6527     2088450   83  Linux    # remaining space, used for DRBD metadata
[root@node1 ~]# mkfs.ext4 /dev/sdb1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
3147760 inodes, 12584911 blocks
629245 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
385 block groups
32768 blocks per group, 32768 fragments per group
8176 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 39 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@node1 ~]# tune2fs -c -1 /dev/sdb1
tune2fs 1.41.12 (17-May-2010)
Setting maximal mount count to -1
node2:
[root@node2 drbd]# fdisk -l |grep dev/sdb
Disk /dev/sdb: 53.7 GB, 53687091200 bytes
/dev/sdb1               1        6267    50339646   83  Linux
/dev/sdb2            6268        6527     2088450   83  Linux
[root@node2 ~]# mkfs.ext4 /dev/sdb1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
3147760 inodes, 12584911 blocks
629245 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
385 block groups
32768 blocks per group, 32768 fragments per group
8176 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 24 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@node2 ~]# tune2fs -c -1 /dev/sdb1
tune2fs 1.41.12 (17-May-2010)
Setting maximal mount count to -1
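As a sanity check on the fdisk output, the block counts confirm the sizes claimed above (fdisk reports partitions in 1 KiB blocks):

```shell
# 50339646 KiB-blocks for /dev/sdb1 and 2088450 for /dev/sdb2:
echo "sdb1: $((50339646 / 1024 / 1024)) GiB"   # prints: sdb1: 48 GiB
echo "sdb2: $((2088450 / 1024)) MiB"           # prints: sdb2: 2039 MiB
```

Roughly 2 GiB of metadata space is far more than needed: DRBD external metadata takes about 32 KiB per GiB of replicated storage, so a 48 GiB device needs only a couple of MiB.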
d. Define the resource
[root@node1 ~]# cat /usr/local/drbd/etc/drbd.d/web.res
resource web {
    on node1.test.com {
        device    /dev/drbd1;
        disk      /dev/sdb1;
        address   192.168.3.61:7789;
        meta-disk /dev/sdb2 [0];
    }
    on node2.test.com {
        device    /dev/drbd1;
        disk      /dev/sdb1;
        address   192.168.3.62:7789;
        meta-disk /dev/sdb2 [0];
    }
}
e. Copy the configuration and resource files to node2
[root@node1 ~]# cd /usr/local/drbd/etc/drbd.d/
[root@node1 drbd.d]# scp global_common.conf web.res node2:/usr/local/drbd/etc/drbd.d/
global_common.conf            100% 2542     2.5KB/s   00:00
web.res                       100%  255     0.3KB/s   00:00
f. Initialize the DRBD metadata on node1 and node2
node1:
[root@node1 ~]# drbdadm create-md web
Writing meta data...
initializing activity log
NOT initializing bitmap
New drbd meta data block successfully created.
node2:
[root@node2 ~]# drbdadm create-md web
Writing meta data...
initializing activity log
NOT initializing bitmap
New drbd meta data block successfully created.
g. Start DRBD
node1:
[root@node1 ~]# /etc/init.d/drbd start
Starting DRBD resources: [ create res: web prepare disk: web adjust disk: web adjust net: web ]
node2:
[root@node2 ~]# /etc/init.d/drbd start
Starting DRBD resources: [ create res: web prepare disk: web adjust disk: web adjust net: web ]
h. Check DRBD status
[root@node1 ~]# ln -s /usr/local/drbd/sbin/* /usr/bin/
[root@node1 ~]# drbd-overview
  1:web/0  Connected Secondary/Secondary Inconsistent/Inconsistent C r-----
i. Promote node1 to primary
[root@node1 ~]# drbdadm -- --overwrite-data-of-peer primary web
[root@node1 ~]# drbd-overview
  1:web/0  SyncSource Primary/Secondary UpToDate/Inconsistent C r---n-
        [>....................] sync'ed:  0.9% (48744/49156)M
# mount the DRBD device on /mnt and write a test file
[root@node1 ~]# mount /dev/drbd1 /mnt
[root@node1 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        18G  1.7G   16G  10% /
tmpfs           495M   22M  473M   5% /dev/shm
/dev/sda1       194M   28M  156M  16% /boot
/dev/drbd1       48G  180M   45G   1% /mnt
[root@node1 ~]# touch /mnt/test.txt
[root@node1 ~]# ll /mnt/
total 16
drwx------ 2 root root 16384 Jun 11 16:26 lost+found
-rw-r--r-- 1 root root     0 Jun 11 16:45 test.txt
[root@node1 ~]# umount /mnt
# the sync is still in progress
[root@node1 ~]# drbd-overview
  1:web/0  SyncSource Primary/Secondary UpToDate/Inconsistent C r-----
        [====>...............] sync'ed: 27.2% (35808/49156)M
# sync finished
[root@node1 ~]# drbd-overview
  1:web/0  Connected Primary/Secondary UpToDate/UpToDate C r-----
# on node2, mount /dev/sdb1 directly to verify the data was replicated
[root@node2 ~]# drbdadm down web
[root@node2 ~]# mount /dev/sdb1 /mnt
[root@node2 ~]# ll /mnt
total 16
drwx------ 2 root root 16384 Jun 11 16:26 lost+found
-rw-r--r-- 1 root root     0 Jun 11 16:45 test.txt    # the data has been replicated
[root@node2 ~]# umount /mnt
[root@node2 ~]# drbdadm up web
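When scripting around DRBD, the status line is easier to consume once split into labeled fields; a small sketch (drbd_roles is an illustrative helper, not a DRBD tool):

```shell
# Parse a drbd-overview status line into labeled fields.
drbd_roles() {
    # expects one drbd-overview line on stdin, e.g.
    #   1:web/0  Connected Primary/Secondary UpToDate/UpToDate C r-----
    awk '{print "conn=" $2, "roles=" $3, "disks=" $4}'
}

echo "  1:web/0  Connected Primary/Secondary UpToDate/UpToDate C r-----" | drbd_roles
# prints: conn=Connected roles=Primary/Secondary disks=UpToDate/UpToDate
```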
6. Installing and configuring MySQL
node1:
# install build dependencies
[root@node1 ~]# yum -y install make gcc-c++ cmake bison-devel ncurses-devel
# create the mysql user and group
[root@node1 ~]# groupadd mysql
[root@node1 ~]# useradd -g mysql mysql -s /sbin/nologin
# unpack the source tarball (mysql-5.5.37.tar.gz, downloaded beforehand)
[root@node1 ~]# tar xf mysql-5.5.37.tar.gz
# create the MySQL data directory; DRBD provides the HA storage, so the
# directory must live on the DRBD device
[root@node1 ~]# mkdir /data
[root@node1 ~]# mount /dev/drbd1 /data/
[root@node1 ~]# mkdir -p /data/mysql/data    # data directory on the DRBD device
# build and install mysql
[root@node1 ~]# cd mysql-5.5.37
[root@node1 mysql-5.5.37]# cmake \
> -DCMAKE_INSTALL_PREFIX=/usr/local/mysql-5.5.37 \
> -DMYSQL_DATADIR=/data/mysql/data \
> -DSYSCONFDIR=/etc \
> -DWITH_MYISAM_STORAGE_ENGINE=1 \
> -DWITH_INNOBASE_STORAGE_ENGINE=1 \
> -DWITH_MEMORY_STORAGE_ENGINE=1 \
> -DWITH_READLINE=1 \
> -DMYSQL_UNIX_ADDR=/var/lib/mysql/mysql.sock \
> -DMYSQL_TCP_PORT=3306 \
> -DENABLED_LOCAL_INFILE=1 \
> -DWITH_PARTITION_STORAGE_ENGINE=1 \
> -DEXTRA_CHARSETS=all \
> -DDEFAULT_CHARSET=utf8 \
> -DDEFAULT_COLLATION=utf8_general_ci
[root@node1 mysql-5.5.37]# make && make install
# initialize the data directory
[root@node1 mysql-5.5.37]# scripts/mysql_install_db --datadir=/data/mysql/data/ --user=mysql --basedir=/usr/local/mysql-5.5.37/
# install the mysql configuration file
[root@node1 mysql-5.5.37]# cp -rf support-files/my-large.cnf /etc/my.cnf
# install the init script
[root@node1 ~]# cd /usr/local/mysql-5.5.37/
[root@node1 mysql-5.5.37]# cp support-files/mysql.server /etc/init.d/mysqld
[root@node1 mysql-5.5.37]# chmod +x /etc/init.d/mysqld
# convenience symlinks
[root@node1 mysql-5.5.37]# ln -s /usr/local/mysql-5.5.37/ /usr/local/mysql
[root@node1 mysql-5.5.37]# ln -s /usr/local/mysql-5.5.37/bin/* /usr/bin/
# start mysql, making sure the DRBD device is mounted on /data
[root@node1 mysql-5.5.37]# mount /dev/drbd1 /data/
[root@node1 mysql-5.5.37]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        18G  2.6G   15G  16% /
tmpfs           495M   22M  473M   5% /dev/shm
/dev/sda1       194M   28M  156M  16% /boot
/dev/drbd1       48G  182M   45G   1% /data
[root@node1 ~]# /etc/init.d/mysqld start
Starting MySQL... SUCCESS!
[root@node1 ~]# netstat -anpt |grep 3306
tcp        0      0 0.0.0.0:3306      0.0.0.0:*      LISTEN      2313/mysqld
# connect to the database
[root@node1 ~]# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.5.37-log Source distribution
Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.01 sec)
# create a new database
mysql> create database weyee;
Query OK, 1 row affected (0.02 sec)
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
| weyee              |
+--------------------+
5 rows in set (0.00 sec)
# mysql setup on node1 is complete
node2:
With this architecture, node2 only needs MySQL installed to be able to serve.
# install build dependencies
[root@node2 ~]# yum -y install make gcc-c++ cmake bison-devel ncurses-devel
# create the mysql user and group
[root@node2 ~]# groupadd mysql
[root@node2 ~]# useradd -g mysql mysql -s /sbin/nologin
# create the MySQL data directory
[root@node2 ~]# drbdadm down web
[root@node2 ~]# mkdir /data
[root@node2 ~]# mount /dev/sdb1 /data/
[root@node2 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        18G  1.6G   16G  10% /
tmpfs           495M   37M  458M   8% /dev/shm
/dev/sda1       194M   28M  156M  16% /boot
/dev/sdb1        48G  210M   45G   1% /data
[root@node2 ~]# ll /data/mysql/data/
total 28704
-rw-rw---- 1 mysql mysql 18874368 Jun 12 09:55 ibdata1
-rw-rw---- 1 mysql mysql  5242880 Jun 12 09:55 ib_logfile0
-rw-rw---- 1 mysql mysql  5242880 Jun 12 09:55 ib_logfile1
drwx------ 2 mysql root      4096 Jun 11 17:30 mysql
-rw-rw---- 1 mysql mysql      192 Jun 12 09:56 mysql-bin.000001
-rw-rw---- 1 mysql mysql       19 Jun 12 09:55 mysql-bin.index
-rw-r----- 1 mysql root      1798 Jun 12 09:55 node1.test.com.err
-rw-rw---- 1 mysql mysql        5 Jun 12 09:55 node1.test.com.pid
drwx------ 2 mysql mysql     4096 Jun 11 17:30 performance_schema
drwx------ 2 mysql root      4096 Jun 11 17:30 test
drwx------ 2 mysql mysql     4096 Jun 12 09:56 weyee
# build and install mysql
[root@node2 ~]# tar xf mysql-5.5.37.tar.gz
[root@node2 ~]# cd mysql-5.5.37
[root@node2 mysql-5.5.37]# cmake \
> -DCMAKE_INSTALL_PREFIX=/usr/local/mysql-5.5.37 \
> -DMYSQL_DATADIR=/data/mysql/data \
> -DSYSCONFDIR=/etc \
> -DWITH_MYISAM_STORAGE_ENGINE=1 \
> -DWITH_INNOBASE_STORAGE_ENGINE=1 \
> -DWITH_MEMORY_STORAGE_ENGINE=1 \
> -DWITH_READLINE=1 \
> -DMYSQL_UNIX_ADDR=/var/lib/mysql/mysql.sock \
> -DMYSQL_TCP_PORT=3306 \
> -DENABLED_LOCAL_INFILE=1 \
> -DWITH_PARTITION_STORAGE_ENGINE=1 \
> -DEXTRA_CHARSETS=all \
> -DDEFAULT_CHARSET=utf8 \
> -DDEFAULT_COLLATION=utf8_general_ci
[root@node2 mysql-5.5.37]# make && make install
# copy the init script and my.cnf over from node1
[root@node1 ~]# scp /etc/init.d/mysqld node2:/etc/init.d/
mysqld                        100%   11KB  10.7KB/s   00:00
[root@node1 ~]# scp /etc/my.cnf node2:/etc
my.cnf                        100% 4675     4.6KB/s   00:00
# convenience symlinks
[root@node2 ~]# ln -s /usr/local/mysql-5.5.37/ /usr/local/mysql
[root@node2 ~]# ln -s /usr/local/mysql-5.5.37/bin/* /usr/bin/
# mysql should now start and show the weyee database created on node1
[root@node2 ~]# /etc/init.d/mysqld start
Starting MySQL.. SUCCESS!
[root@node2 ~]# netstat -anpt |grep 3306
tcp        0      0 0.0.0.0:3306      0.0.0.0:*      LISTEN      14324/mysqld
# verify
[root@node2 ~]# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.5.37-log Source distribution
Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
| weyee              |    # the weyee database created on node1 is visible
+--------------------+
5 rows in set (0.04 sec)
MySQL is now installed on both node1 and node2. Stop MySQL on both nodes and bring the DRBD resource on node2 back up.
[root@node1 ~]# /etc/init.d/mysqld stop
Shutting down MySQL. SUCCESS!
[root@node2 ~]# /etc/init.d/mysqld stop
Shutting down MySQL. SUCCESS!
# bring the DRBD resource back up on node2
[root@node2 ~]# umount /data
[root@node2 ~]# drbdadm up web
[root@node2 ~]# drbd-overview
  1:web/0  Connected Secondary/Primary UpToDate/UpToDate C r-----
7. Configuring cluster resources with crm
a. Stop DRBD and disable it at boot (pacemaker will manage it from now on)
node1:
[root@node1 ~]# service drbd stop
[root@node1 ~]# chkconfig drbd off
node2:
[root@node2 ~]# service drbd stop
[root@node2 ~]# chkconfig drbd off
b. Configure the DRBD resource in pacemaker
[root@node1 ~]# crm
crm(live)# configure
crm(live)configure# property stonith-enabled=false
crm(live)configure# property no-quorum-policy=ignore
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# primitive mysqldrbd ocf:heartbeat:drbd params drbd_resource=web op start timeout=240 op stop timeout=100 op monitor role=Master interval=20 timeout=30 op monitor role=Slave interval=30 timeout=30
crm(live)configure# ms ms_mysqldrbd mysqldrbd meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
crm(live)configure# show
node node1.test.com
node node2.test.com
xml <primitive id="mysqldrbd" class="ocf" provider="heartbeat" type="drbd"> \
  <instance_attributes id="mysqldrbd-instance_attributes"> \
    <nvpair name="drbd_resource" value="web" id="mysqldrbd-instance_attributes-drbd_resource"/> \
  </instance_attributes> \
  <operations> \
    <op name="start" timeout="240" interval="0" id="mysqldrbd-start-0"/> \
    <op name="stop" timeout="100" interval="0" id="mysqldrbd-stop-0"/> \
    <op name="monitor" role="Master" interval="20" timeout="30" id="mysqldrbd-monitor-20"/> \
    <op name="monitor" role="Slave" interval="30" timeout="30" id="mysqldrbd-monitor-30"/> \
  </operations> \
</primitive>
ms ms_mysqldrbd mysqldrbd \
  meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
property cib-bootstrap-options: \
  dc-version=1.1.11-97629de \
  cluster-infrastructure="classic openais (with plugin)" \
  expected-quorum-votes=2 \
  stonith-enabled=false \
  no-quorum-policy=ignore
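The original post defers the rest of the configuration to the link below. For orientation, here is a sketch of what typically follows for this stack: a filesystem resource, the mysqld service, and the VIP, all tied to the DRBD master. Resource names, timeouts, and intervals here are illustrative assumptions, not taken from the original:

```
crm(live)configure# primitive mystore ocf:heartbeat:Filesystem params device=/dev/drbd1 directory=/data fstype=ext4 op start timeout=60 op stop timeout=60
crm(live)configure# colocation mystore_with_ms_mysqldrbd inf: mystore ms_mysqldrbd:Master
crm(live)configure# order mystore_after_ms_mysqldrbd mandatory: ms_mysqldrbd:promote mystore:start
crm(live)configure# primitive mysqld lsb:mysqld op monitor interval=20 timeout=20
crm(live)configure# colocation mysqld_with_mystore inf: mysqld mystore
crm(live)configure# order mysqld_after_mystore mandatory: mystore mysqld
crm(live)configure# primitive myip ocf:heartbeat:IPaddr params ip=192.168.3.63 op monitor interval=10 timeout=20
crm(live)configure# colocation myip_with_ms_mysqldrbd inf: myip ms_mysqldrbd:Master
crm(live)configure# verify
crm(live)configure# commit
```

The ordering constraints make pacemaker promote DRBD first, then mount the filesystem, then start mysqld, with the VIP following the DRBD master.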
For the remaining steps, see http://freeloda.blog.51cto.com/2033581/1275528