Quickly Setting Up a DB2 pureScale Cluster Test Environment (VMware Virtual Machines)

Test environment: On a Windows 10 desktop, install VMware Workstation 14 and build an IBM pureScale environment with two virtual hosts; node01 provides the shared storage. Create an instance with one CF on node01 and one member on node02.
Test software: VMware Workstation 14 Pro
64-bit Linux: [Red Hat Enterprise Linux 6.4 Server] rhel-server-6.4-x86_64-dvd.iso
DB2 database software: v10.5_linuxx64_server.tar.gz

1. Do a minimal Linux install (node01, node02). Remember to add an extra new disk to the node01 VM for shared storage.
2. Set the IP addresses, and mind the gateway! (node01 ends in .21, node02 in .22)
vi /etc/sysconfig/network-scripts/ifcfg-eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.179.21 ## .21 on node01, .22 on node02
NETMASK=255.255.255.0
GATEWAY=192.168.179.2
## Be sure to set the gateway; set it on the host side as well.
3. Disable the firewall (node01, node02):
chkconfig iptables off
4. Disable SELinux (node01, node02):
vi /etc/selinux/config ## change to SELINUX=disabled
vi /etc/grub.conf ## find the line whose last word is quiet, append selinux=0 after quiet, then reboot
5. Install the required packages with yum (node01, node02)
1.cd /mnt/
2.mkdir cdrom
3.mount /dev/cdrom /mnt/cdrom
4.vi /etc/yum.repos.d/rhel-source.repo
[rhel-source]
baseurl=file:///mnt/cdrom/ ## change this line to this
enabled=1 ## change this line to this
5.yum clean all
6.yum repolist
7. Install the packages with yum; first cd to /mnt/cdrom/Packages/:
1. yum -y install gcc automake autoconf libtool make openssh-clients iscsi-initiator-utils libnes libmthca libipathverbs libcxgb3 libibcm libaio ibsim ibutils rdma pam dapl* compat-libstdc++* perl-Config-General scsi-target-utils librdmacm-devel*
2. yum -y install libstdc++* glibc* gcc* kernel* ntp* sg3* binutils* openssh* cpp* ksh* pam*
3. yum -y install pam-devel.i686 pam_ldap.i686 pam.i686
(Check that every package installed successfully.)
6. Create the groups and users (node01, node02)
groupadd -g 1001 db2fadm1
groupadd -g 1002 db2iadm1
useradd -g db2fadm1 -u 1001 -m -d /home/db2fenc1 -p db2fenc1 db2fenc1
useradd -g db2iadm1 -u 1002 -m -d /home/db2inst1 -p db2inst1 db2inst1
mkdir /root/.ssh
su - db2inst1 -c "mkdir -p /home/db2inst1/.ssh"
Set the db2inst1 password: passwd db2inst1 (useradd's -p option expects an already-encrypted string, so the plaintext passed above does not take effect; set the password here).
7. Set up passwordless SSH trust
Map IPs to hostnames on node01 and node02:
vi /etc/hosts
127.0.0.1 localhost
192.168.179.21 node01
192.168.179.22 node02
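A quick sanity check for the mapping above can be sketched like this. It runs against a temporary sample copy of the file; on a real node you would point hosts_file at /etc/hosts instead:

```shell
# Sketch: verify both node entries are present in the hosts file.
# Uses a temporary sample copy; on a real node, set hosts_file=/etc/hosts.
hosts_file=$(mktemp)
cat > "$hosts_file" <<'EOF'
127.0.0.1 localhost
192.168.179.21 node01
192.168.179.22 node02
EOF
ok=0
grep -qw node01 "$hosts_file" && grep -qw node02 "$hosts_file" && ok=1
echo "hosts mapping ok=$ok"
rm -f "$hosts_file"
```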

Run the following on node01:
ssh-keygen -t rsa
cp -v /root/.ssh/id_rsa.pub /root/.ssh/id01
scp /root/.ssh/id01 node02:/root/.ssh
su - db2inst1
ssh-keygen -t rsa
cp -v /home/db2inst1/.ssh/id_rsa.pub /home/db2inst1/.ssh/id01
scp /home/db2inst1/.ssh/id01 node02:/home/db2inst1/.ssh

Run the following on node02:
ssh-keygen -t rsa
cp -v /root/.ssh/id_rsa.pub /root/.ssh/id02
scp /root/.ssh/id02 node01:/root/.ssh/
su - db2inst1
ssh-keygen -t rsa
cp -v /home/db2inst1/.ssh/id_rsa.pub /home/db2inst1/.ssh/id02
scp /home/db2inst1/.ssh/id02 node01:/home/db2inst1/.ssh/

Run the following on node01 and node02 (as root):
cd /root/.ssh
cat id01 id02 >authorized_keys
chmod 600 authorized_keys
su - db2inst1
cd /home/db2inst1/.ssh
cat id01 id02 > authorized_keys
chmod 600 authorized_keys
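The merge that the cat commands above perform can be sketched in isolation. A throwaway temp directory and placeholder key strings stand in for the real ~/.ssh and public keys:

```shell
# Sketch of the authorized_keys merge, done in a throwaway directory.
# The key strings below are placeholders, not real public keys.
demo=$(mktemp -d)
echo "ssh-rsa AAAA...node01key root@node01" > "$demo/id01"
echo "ssh-rsa AAAA...node02key root@node02" > "$demo/id02"
cat "$demo/id01" "$demo/id02" > "$demo/authorized_keys"
chmod 600 "$demo/authorized_keys"   # sshd rejects key files with loose permissions
keys=$(wc -l < "$demo/authorized_keys")
echo "authorized_keys now holds $keys keys"
rm -rf "$demo"
```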
Verify the setup on node01 and node02; if no password or yes/no prompt is required, trust is configured correctly:
ssh node01 ls
ssh node02 ls
su - db2inst1
ssh node01 ls
ssh node02 ls

8. Set up the shared storage
1. On node01 and node02, enable the tgtd service at boot:
chkconfig --add tgtd
chkconfig --level 2345 tgtd on
chkconfig --list
2. On node01 (this uses the extra 20 GB disk /dev/sdb added when the VM was created; if you did not add it then, power off and add it now). So that device names line up on both hosts, also add a small 25 MB disk /dev/sdb to node02; that way the disk shared out over iSCSI shows up as /dev/sdc on both node01 and node02.
vi /etc/tgt/targets.conf
Add the following (the target name must match the one used in the iscsiadm login command later):

<target iqn.2014-05.localdomain:node01>
backing-store /dev/sdb
initiator-address 192.168.179.21
initiator-address 192.168.179.22
</target>
After editing, restart the service on node01: service tgtd restart

3. On node01 and node02:
vi /etc/rc.local
Add the following:
iscsiadm --mode discoverydb --type sendtargets --portal 192.168.179.21 --discover
iscsiadm -m node --targetname iqn.2014-05.localdomain:node01 --portal 192.168.179.21:3260 --login
Then reboot, or run the two commands above by hand. Afterwards, fdisk -l should show an extra disk, /dev/sdc.
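The check in the last step can also be scripted. The sketch below greps a canned sample of fdisk -l output (hypothetical sizes) for the new device; on a real node you would pipe fdisk -l itself:

```shell
# Sketch: detect the iSCSI-attached disk in fdisk -l output.
# Canned sample used here; on a real node: fdisk -l | grep '^Disk /dev/sdc'
sample=$(mktemp)
cat > "$sample" <<'EOF'
Disk /dev/sda: 21.5 GB, 21474836480 bytes
Disk /dev/sdb: 26 MB, 26214400 bytes
Disk /dev/sdc: 21.5 GB, 21474836480 bytes
EOF
found=0
grep -q '^Disk /dev/sdc' "$sample" && found=1
echo "shared disk present=$found"
rm -f "$sample"
```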

9. Configure environment variables and install the DB2 software (node01, node02)
Configure the environment variables:
vi /root/.bash_profile
Add the following:
export PATH=/root/bin:/usr/sbin/rsct/bin:/opt/ibm/db2/V10.5/bin:$PATH
export PATH=/usr/lpp/mmfs/bin:$PATH
export DB2USENONIB=TRUE
export DB2_CFS_GPFS_NO_REFRESH_DATA=true
export IBM_RDAC=NO
export CT_MANAGEMENT_SCOPE=2

Install the DB2 software: ./db2_install -> yes -> SERVER -> yes

10. Set up the GPFS file system (on node01)
cd /opt/ibm/db2/V10.5/instance/
./db2cluster_prepare -instance_shared_dev /dev/sdc ## my shared disk appears as /dev/sdc

cd /opt/ibm/db2/V10.5/bin
./db2cluster -cfs -add -host node02
./db2cluster -cfs -add -license
mmstartup -a
mmgetstate -a ## all nodes should show "active"
mmmount all -a
After the steps above, df -h on both machines shows a new file system such as /db2sd_20140526123355.

11. Create the instance, set registry variables, start the instance (on either node)
cd /opt/ibm/db2/V10.5/instance

Create the instance (-tbdev is the tiebreaker device; the gateway IP is used here): ./db2icrt -cf node01 -cfnet node01 -m node02 -mnet node02 -instance_shared_dir /db2sd_20140526123355/ -tbdev 192.168.179.2 -u db2fenc1 db2inst1

Set the DB2 registry variables:
db2set DB2_SD_SOCKETS_RESTRICTIONS=false
db2set DB2_CA_TRANSPORT_METHOD=SOCKETS

Start the instance: db2start
Note: the procedure above is only for testing on virtual machines. Comments and corrections are welcome, thanks!

A complete /etc/tgt/targets.conf example from a later run that used different addresses:

<target iqn.1994-05.com.redhat:iscsidisk>
backing-store /dev/sdb
initiator-address 192.168.179.133
initiator-address 192.168.179.134
</target>

The matching discovery and login commands (the line between them is the discovery command's output):

iscsiadm --mode discoverydb --type sendtargets --portal 192.168.179.135 --discover

192.168.179.133:3260,1 iqn.1994-05.com.redhat:iscsidisk

iscsiadm -m node --targetname iqn.1994-05.com.redhat:iscsidisk --portal 192.168.179.135:3260 --login

The matching db2icrt command:
./db2icrt -cf node01 -cfnet node01 -m node02 -mnet node02 -instance_shared_dir /db2sd_20180719223517/ -tbdev 192.168.179.2 -u db2fenc1 db2inst1

A caution: when both machines ran poweroff at the same time, they unexpectedly rebooted instead of shutting down, and in the end had to be powered off forcibly. Afterwards the shared disk no longer attached properly: fdisk -l could not see the shared disk, and the GPFS file system did not mount.

In the end I disabled the automatic run of the two iscsiadm commands in /etc/rc.local, then shut both machines down again to simulate a real power cut. (Note: after two tests, it seems the two nodes should not run poweroff at the same time.) After powering back on, I ran the two iscsiadm commands by hand on both machines. fdisk -l then showed the shared disk again on both, but only node01 had mounted the GPFS shared file system; node02 had not. As an article online suggested, I ran the same commands again:
mmstartup -a
mmgetstate -a ## all nodes should show "active"
mmmount all -a

Then, running fdisk -l and df -h on both machines shows that node01 and node02 have both mounted /db2sd_20180721073322.

Then on node01: su - db2inst1

Running db2start then failed with errors:
[db2inst1@node01 ~]$ db2start
ADM12026W The DB2 server has detected that a valid license for the product "DB2 Enterprise Server Edition" has not been registered.

08/01/2018 06:32:48 129 0 SQL1677N DB2START or DB2STOP processing failed due to a DB2 cluster services error.
08/01/2018 06:32:50 0 0 SQL1685N An error was encountered during DB2START processing of DB2 member with identifier “0” because the database manager failed to start one or more CFs.
08/01/2018 06:32:50 1 0 SQL1685N An error was encountered during DB2START processing of DB2 member with identifier “1” because the database manager failed to start one or more CFs.
SQL1032N No start database manager command was issued. SQLSTATE=57019
[db2inst1@node01 ~]$
[db2inst1@node01 ~]$

[db2inst1@node01 ~]$ db2instance -list
ID TYPE STATE HOME_HOST CURRENT_HOST ALERT PARTITION_NUMBER LOGICAL_PORT NETNAME
-- ---- ----- --------- ------------ ----- ---------------- ------------ -------
0 MEMBER STOPPED node02 node02 NO 0 0 node02
1 MEMBER STOPPED node01 node01 NO 0 0 node01
128 CF STOPPED node01 node01 NO - 0 node01
129 CF STOPPED node02 node02 NO - 0 node02

HOSTNAME STATE INSTANCE_STOPPED ALERT
-------- ----- ---------------- -----
node01 ACTIVE NO YES
node02 ACTIVE NO YES
There is currently an alert for a member, CF, or host in the data-sharing instance. For more information on the alert, its impact, and how to clear it, run the following command: 'db2cluster -cm -list -alert'.
[db2inst1@node01 ~]$

So before running halt / init 0 / poweroff / shutdown -h now, shut down GPFS first, then the operating system:

mmumount all -a

mmshutdown -a

mmgetstate -a

To start the instance on each host, and then the CFs individually:

db2start INSTANCE ON node01
db2start INSTANCE ON node02

db2start CF 128
db2start CF 129

db2 update dbm cfg using CF_TRANSPORT_METHOD RDMA

Enable automatic GPFS startup at node boot: mmchconfig autoload=yes
Disable automatic GPFS startup at node boot: mmchconfig autoload=no
