Preface: DRBD (Distributed Replicated Block Device) is an open-source package that synchronizes and mirrors data between remote servers at the block-device level, much like RAID 1 over the network. It is usually paired with HA software such as keepalived or heartbeat to provide high availability. These are brief notes, for reference only.

1. Environment

OS version: CentOS 5.8
DRBD version: drbd-8.3.15
Keepalived: keepalived-1.1.15
Master: 192.168.149.128
Backup: 192.168.149.129

2. Initial configuration

1) On both 128 and 129, add the following entries to /etc/hosts:

192.168.149.128 node1
192.168.149.129 node2

2) Tune the kernel parameters; the full /etc/sysctl.conf used here is:

net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.ipv4.tcp_max_tw_buckets = 10000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 262144
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_keepalive_time = 30
net.ipv4.ip_local_port_range = 1024 65530
net.ipv4.icmp_echo_ignore_all = 1

Apply the changes with sysctl -p.

3) Add an extra disk to each server as the DRBD backing device; here it is /dev/sdb, a 30 GB disk. Run:

mkfs.ext3 /dev/sdb; dd if=/dev/zero of=/dev/sdb bs=1M count=1; sync

3. DRBD installation and configuration

yum -y install drbd83* kmod-drbd83; modprobe drbd

After the install completes and the drbd module is loaded, edit /etc/drbd.conf; the file used in this article is:

global {
usage-count yes;
}
common {
syncer { rate 100M; }
}
resource r0 {
protocol C;
startup {
}
disk {
on-io-error detach;
#size 1G;
}
net {
}
on node1 {
device /dev/drbd0;
disk /dev/sdb;
address 192.168.149.128:7898;
meta-disk internal;
}
on node2 {
device /dev/drbd0;
disk /dev/sdb;
address 192.168.149.129:7898;
meta-disk internal;
}
}
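With the resource defined on both nodes, the current state can be read from /proc/drbd. The helper below is a sketch, not part of the original article: it assumes the standard /proc/drbd status-line format and only extracts the connection state (cs:) and roles (ro:).

```shell
# parse_drbd_state: pull the connection state (cs:) and roles (ro:) out of
# a /proc/drbd resource line such as:
#   0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
parse_drbd_state() {
    line="$1"
    cs=$(printf '%s\n' "$line" | sed -n 's/.*cs:\([A-Za-z]*\).*/\1/p')
    ro=$(printf '%s\n' "$line" | sed -n 's/.*ro:\([A-Za-z]*\/[A-Za-z]*\).*/\1/p')
    echo "$cs $ro"
}

# Typical use on a live node:
#   parse_drbd_state "$(grep 'cs:' /proc/drbd)"
```

A result of "Connected Primary/Secondary" on node1 means replication is up and node1 holds the primary role.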
配置修改完畢后執(zhí)行如下命令初始化: drbdadm create-md r0 ;/etc/init.d/drbd restart ;/etc/init.d/status 如下圖: 以上步驟,需要在兩臺服務(wù)器都執(zhí)行,兩臺都配置完畢后,在node2從上面執(zhí)行如下命令:/etc/init.d/drbd status 看到如下信息,表示目前兩臺都為從,我們需要設(shè)置node1為master,命令如下: drbdadm -- --overwrite-data-of-peer primary all mkfs.ext3 /dev/drbd0 mkdir /app ;mount /dev/drbd0 /app 自此,DRBD配置完畢,我們可以往/app目錄寫入任何東西,當(dāng)master出現(xiàn)宕機(jī)或者其他故障,手動切換到backup,數(shù)據(jù)沒有任何丟失,相當(dāng)于兩臺服務(wù)器做網(wǎng)絡(luò)RAID1。 四、Keepalived配置 wget http://www./software/keepalived-1.1.15.tar.gz; tar -xzvf keepalived-1.1.15.tar.gz ;cd keepalived-1.1.15 ; ./configure ; make ;make install DIR=/usr/local/ ;cp $DIR/etc/rc.d/init.d/keepalived /etc/rc.d/init.d/ ; cp $DIR/etc/sysconfig/keepalived /etc/sysconfig/ ; mkdir -p /etc/keepalived ; cp $DIR/sbin/keepalived /usr/sbin/ 兩臺服務(wù)器均安裝keepalived,并進(jìn)行配置,首先在node1(master)上配置,keepalived.conf內(nèi)容如下: ! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_script check_mysql {
script "/data/sh/check_mysql.sh"
interval 5
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 52
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.149.100
}
track_script {
check_mysql
}
}
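Once keepalived is running on node1, the VIP 192.168.149.100 from the block above should be bound on eth0. A quick way to verify is to filter `ip addr` output; the helper below is a sketch (not part of the original setup) that reads the interface dump on stdin so the check stays pipe-friendly.

```shell
# vip_bound: read `ip addr show` output on stdin and succeed if the given
# address is bound on the interface. grep -F makes the dotted IP a literal
# match rather than a regular expression.
vip_bound() {
    grep -qF "inet $1/"
}

# Typical use on the master node:
#   ip addr show dev eth0 | vip_bound 192.168.149.100 && echo "VIP is here"
```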
然后創(chuàng)建check_mysql.sh檢測腳本,內(nèi)容如下: #!/bin/sh A=`ps -C mysqld --no-header |wc -l` if [ $A -eq 0 ];then /bin/umount /app/ drbdadm secondary r0 killall keepalived fi 添加node2(backup)上配置,keepalived.conf內(nèi)容如下: ! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_sync_group VI {
group {
VI_1
}
notify_master /data/sh/master.sh
notify_backup /data/sh/backup.sh
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 52
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.149.100
}
}
創(chuàng)建master.sh檢測腳本,內(nèi)容如下: #!/bin/bash drbdadm primary r0 /bin/mount /dev/drbd0 /app/ /etc/init.d/mysqld start 創(chuàng)建backup.sh檢測腳本,內(nèi)容如下: #!/bin/bash /etc/init.d/mysqld stop /bin/umount /dev/drbd0 drbdadm secondary r0 發(fā)生腦裂恢復(fù)步驟如下: Master執(zhí)行命令: drbdadm secondary r0 drbdadm -- --discard-my-data connect r0 Backup上執(zhí)行命令: drbdadm connect r0 文章僅供參考,本文參考,特別感謝分享奉獻(xiàn)的IT人。 http://oldboy.blog.51cto.com/2561410/1240412