Quickly Setting Up a DB2 pureScale Cluster Test Environment (VMware Virtual Machines)

Test environment: a Windows 10 desktop running VMware Workstation 14, hosting an IBM pureScale environment of two virtual machines; node01 also serves the shared storage. The instance has one CF on node01 and one member on node02.
Test software: VMware Workstation 14 Pro
64-bit Linux: Red Hat Enterprise Linux Server 6.4, rhel-server-6.4-x86_64-dvd.iso
DB2 database software: v10.5_linuxx64_server.tar.gz

1. Do a minimal Linux install on node01 and node02 (remember to add an extra disk to VM node01 for the shared storage).
2. Configure the IP addresses, and mind the gateway! (node01 = .21, node02 = .22)
vi /etc/sysconfig/network-scripts/ifcfg-eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.179.21 ## .21 on node01, .22 on node02
NETMASK=255.255.255.0
GATEWAY=192.168.179.2
## Be sure to set the gateway; the host machine needs it set as well.
3. Disable the firewall (node01, node02):
chkconfig iptables off
4. Disable SELinux (node01, node02):
vi /etc/selinux/config ## set SELINUX=disabled
vi /etc/grub.conf ## find the kernel line ending in "quiet" and append selinux=0 after quiet; reboot afterwards!
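For reference, a sketch of what the two files should contain after the edits (the kernel version shown is just the RHEL 6.4 default; yours may differ):

```shell
# /etc/selinux/config -- the key to change is SELINUX, not SELINUXTYPE:
#   SELINUX=disabled
#   SELINUXTYPE=targeted      # leave this line alone
#
# /etc/grub.conf kernel line with selinux=0 appended after quiet:
#   kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=/dev/mapper/... rhgb quiet selinux=0
#
# After rebooting, `getenforce` should print "Disabled".
```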
5. Install the required packages with yum (node01, node02)
1.cd /mnt/
2.mkdir cdrom
3.mount /dev/cdrom /mnt/cdrom
4.vi /etc/yum.repos.d/rhel-source.repo
[rhel-source]
baseurl=file:///mnt/cdrom/ ## point this at the mounted install DVD
enabled=1 ## enable the repo
5.yum clean all
6.yum repolist
7. Install the packages with yum (first cd into /mnt/cdrom/Packages/):
1、yum -y install gcc automake autoconf libtool make openssh-clients iscsi-initiator-utils libnes libmthca libipathverbs libcxgb3 libibcm libaio ibsim ibutils rdma pam dapl* compat-libstdc++* perl-Config-General scsi-target-utils librdmacm-devel*
2、 yum -y install libstdc++* glibc* gcc* kernel* ntp* sg3* binutils* openssh* cpp* ksh* pam*
3、 yum install -y pam-devel.i686 pam_ldap.i686 pam.i686
(Verify that every package installed successfully.)
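The verification can be scripted; a sketch (the package list below is only a sample of the critical ones, run it on both nodes):

```shell
# for p in gcc ksh pam glibc ntp openssh sg3_utils libstdc++; do
#   rpm -q "$p" >/dev/null || echo "MISSING: $p"
# done
```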
6. Create groups and users (node01, node02)
groupadd -g 1001 db2fadm1
groupadd -g 1002 db2iadm1
useradd -g db2fadm1 -u 1001 -m -d /home/db2fenc1 -p db2fenc1 db2fenc1
useradd -g db2iadm1 -u 1002 -m -d /home/db2inst1 -p db2inst1 db2inst1
mkdir /root/.ssh
su - db2inst1 -c "mkdir -p /home/db2inst1/.ssh"
Set db2inst1's password explicitly (useradd -p expects an encrypted hash, so the plaintext -p values above do not take effect): passwd db2inst1
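pureScale requires the instance users to have identical UIDs and GIDs on every host, which is why the -u and -g values are pinned above. A minimal sketch of the consistency check (the helper name check_ids and the sample id strings are illustrative; on the real cluster you would compare `id db2inst1` against `ssh node02 id db2inst1`):

```shell
# check_ids: compare two `id` output strings and report whether they match.
check_ids() {
  if [ "$1" = "$2" ]; then
    echo match
  else
    echo MISMATCH
  fi
}

# Sample values standing in for the real `id db2inst1` output on each node:
local_id="uid=1002(db2inst1) gid=1002(db2iadm1)"
remote_id="uid=1002(db2inst1) gid=1002(db2iadm1)"
check_ids "$local_id" "$remote_id"   # prints: match
```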
7. Set up SSH trust
Map IPs to hostnames on node01 and node02:
vi /etc/hosts
127.0.0.1 localhost
192.168.179.21 node01
192.168.179.22 node02

Run the following on node01:
ssh-keygen -t rsa
cp -v /root/.ssh/id_rsa.pub /root/.ssh/id01
scp /root/.ssh/id01 node02:/root/.ssh
su - db2inst1
ssh-keygen -t rsa
cp -v /home/db2inst1/.ssh/id_rsa.pub /home/db2inst1/.ssh/id01
scp /home/db2inst1/.ssh/id01 node02:/home/db2inst1/.ssh

Run the following on node02:
ssh-keygen -t rsa
cp -v /root/.ssh/id_rsa.pub /root/.ssh/id02
scp /root/.ssh/id02 node01:/root/.ssh/
su - db2inst1
ssh-keygen -t rsa
cp -v /home/db2inst1/.ssh/id_rsa.pub /home/db2inst1/.ssh/id02
scp /home/db2inst1/.ssh/id02 node01:/home/db2inst1/.ssh/

Run the following on node01 and node02 (as root):
cd /root/.ssh
cat id01 id02 >authorized_keys
chmod 600 authorized_keys
su - db2inst1
cd /home/db2inst1/.ssh
cat id01 id02 >authorized_keys
chmod 600 authorized_keys
Test the configuration: if neither a password nor a yes/no prompt appears, trust is set up (node01, node02):
ssh node01 ls
ssh node02 ls
su - db2inst1
ssh node01 ls
ssh node02 ls
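The authorized_keys merging done above can be rehearsed locally with dummy key files (the mktemp directory and the placeholder key strings are illustrative; on the real nodes the files live under ~/.ssh):

```shell
# Merge the two collected public keys into authorized_keys, as done on each node.
tmp=$(mktemp -d)
echo "ssh-rsa AAAA...key01 root@node01" > "$tmp/id01"   # dummy public key
echo "ssh-rsa AAAA...key02 root@node02" > "$tmp/id02"   # dummy public key

cat "$tmp/id01" "$tmp/id02" > "$tmp/authorized_keys"
chmod 600 "$tmp/authorized_keys"   # sshd ignores key files that are too permissive

wc -l < "$tmp/authorized_keys"     # one line per node's key
```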

8. Configure the shared storage
1. Enable the tgtd service at boot on node01 and node02:
chkconfig --add tgtd
chkconfig --level 2345 tgtd on
chkconfig --list
2. On node01 (this uses the extra 20 GB disk /dev/sdb added when the VM was created; if you did not add it then, shut down and add it now). To keep device names consistent, also add a small 25 MB disk /dev/sdb on node02, so that the disk shared over iSCSI appears as /dev/sdc on both node01 and node02.
vi /etc/tgt/targets.conf
Add the following (the target name is the one used by the iscsiadm login command below):

<target iqn.2014-05.localdomain:node01>
backing-store /dev/sdb
initiator-address 192.168.179.21
initiator-address 192.168.179.22
</target>

After editing, restart the service on node01: service tgtd restart

3. On node01 and node02:
vi /etc/rc.local
Add the following:
iscsiadm --mode discoverydb --type sendtargets --portal 192.168.179.21 --discover
iscsiadm -m node --targetname iqn.2014-05.localdomain:node01 --portal 192.168.179.21:3260 --login
After that, reboot (or run the two commands above by hand); fdisk -l should then show an extra disk, /dev/sdc.
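After a reboot it can take a moment for the iSCSI login to surface the disk; a small polling helper makes scripts more robust (wait_for_disk is my own illustrative helper, not part of iscsiadm; it is demonstrated against /dev/null here because /dev/sdc only exists on the nodes):

```shell
# wait_for_disk: poll until the given device node appears, or give up.
wait_for_disk() {
  dev=$1
  tries=${2:-10}              # number of 1-second polls, default 10
  while [ "$tries" -gt 0 ]; do
    if [ -e "$dev" ]; then
      echo "found $dev"
      return 0
    fi
    tries=$((tries - 1))
    sleep 1
  done
  echo "missing $dev"
  return 1
}

wait_for_disk /dev/null 1     # on the nodes you would use: wait_for_disk /dev/sdc
```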

9. Set environment variables and install the DB2 software (node01, node02)
Set the environment variables:
vi /root/.bash_profile
Add the following:
export PATH=/root/bin:/usr/sbin/rsct/bin:/opt/ibm/db2/V10.5/bin:$PATH
export PATH=/usr/lpp/mmfs/bin:$PATH
export DB2USENONIB=TRUE
export DB2_CFS_GPFS_NO_REFRESH_DATA=true
export IBM_RDAC=NO
export CT_MANAGEMENT_SCOPE=2

Install the DB2 software: ./db2_install -> yes -> SERVER -> yes
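For completeness, the steps implied by the one-liner above, as I remember them (the extracted directory name server is from memory and may differ by fix pack):

```shell
# tar -xzf v10.5_linuxx64_server.tar.gz   # unpack the DB2 install image
# cd server                               # directory name may vary by fix pack
# ./db2_install                           # answer: yes -> SERVER -> yes
```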

10. Set up the GPFS file system (on node01)
cd /opt/ibm/db2/V10.5/instance/
./db2cluster_prepare -instance_shared_dev /dev/sdc ## my iSCSI-shared disk is /dev/sdc

cd /opt/ibm/db2/V10.5/bin
./db2cluster -cfs -add -host node02
./db2cluster -cfs -add -license
mmstartup -a
mmgetstate -a (all nodes should be active)
mmmount all -a
After these steps, df -h on both machines shows a new file system named something like /db2sd_20140526123355.

11. Create the instance, set registry variables, start the instance (on either node)
cd /opt/ibm/db2/V10.5/instance

Create the instance: ./db2icrt -cf node01 -cfnet node01 -m node02 -mnet node02 -instance_shared_dir /db2sd_20140526123355/ -tbdev 192.168.179.2 -u db2fenc1 db2inst1
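An annotated reading of the db2icrt options used above (my interpretation; see the DB2 pureScale documentation for authoritative definitions):

```shell
# ./db2icrt \
#   -cf node01                 # host that runs the cluster caching facility (CF)
#   -cfnet node01              # network name the CF communicates on
#   -m node02                  # host that runs the DB2 member
#   -mnet node02               # network name the member communicates on
#   -instance_shared_dir /db2sd_20140526123355/   # GPFS path from db2cluster_prepare
#   -tbdev 192.168.179.2       # tiebreaker for quorum (here the gateway IP, for a 2-host test)
#   -u db2fenc1                # fenced user
#   db2inst1                   # instance owner / instance name
```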

Set the registry variables:
db2set DB2_SD_SOCKETS_RESTRICTIONS=false
db2set DB2_CA_TRANSPORT_METHOD=SOCKETS

Start the instance: db2start
Note: the procedure above is only meant for testing on virtual machines. Comments and corrections are welcome!

Addendum: notes from a later rerun (different subnet and target name).

<target iqn.1994-05.com.redhat:iscsidisk>
backing-store /dev/sdb
initiator-address 192.168.179.133
initiator-address 192.168.179.134
</target>

iscsiadm --mode discoverydb --type sendtargets --portal 192.168.179.135 --discover

192.168.179.133:3260,1 iqn.1994-05.com.redhat:iscsidisk

iscsiadm -m node --targetname iqn.1994-05.com.redhat:iscsidisk --portal 192.168.179.135:3260 --login

./db2icrt -cf node01 -cfnet node01 -m node02 -mnet node02 -instance_shared_dir /db2sd_20180719223517/ -tbdev 192.168.179.2 -u db2fenc1 db2inst1

When poweroff was issued on both machines at the same time, they unexpectedly rebooted instead of shutting down, and in the end I had to force them off. After that the shared disk would no longer load: fdisk -l could not see the shared disk, and the GPFS file system was gone.

In the end I removed the two iscsiadm commands from /etc/rc.local so they no longer run at boot, shut both machines down again to simulate a real power-off (note: after two tests, it seems the two nodes must not be powered off at the same time), and powered them back on. I then ran the two iscsiadm commands by hand on both machines; fdisk -l found the shared disk again on both. However, only node01 mounted the GPFS shared file system; node02 did not. Following the same hint as an article I found online, I ran:
mmstartup -a
mmgetstate -a (all nodes should be active)
mmmount all -a

After that, running fdisk -l and then df -h on both machines shows /db2sd_20180721073322 mounted on both node01 and node02.

Then on node01: su - db2inst1

Running db2start then failed with the following errors:
[db2inst1@node01 ~]$ db2start
ADM12026W The DB2 server has detected that a valid license for the product "DB2 Enterprise Server Edition" has not been registered.

08/01/2018 06:32:48 129 0 SQL1677N DB2START or DB2STOP processing failed due to a DB2 cluster services error.
08/01/2018 06:32:50 0 0 SQL1685N An error was encountered during DB2START processing of DB2 member with identifier "0" because the database manager failed to start one or more CFs.
08/01/2018 06:32:50 1 0 SQL1685N An error was encountered during DB2START processing of DB2 member with identifier "1" because the database manager failed to start one or more CFs.
SQL1032N No start database manager command was issued. SQLSTATE=57019
[db2inst1@node01 ~]$

[db2inst1@node01 ~]$ db2instance -list
ID   TYPE    STATE    HOME_HOST  CURRENT_HOST  ALERT  PARTITION_NUMBER  LOGICAL_PORT  NETNAME
--   ----    -----    ---------  ------------  -----  ----------------  ------------  -------
0    MEMBER  STOPPED  node02     node02        NO     0                 0             node02
1    MEMBER  STOPPED  node01     node01        NO     0                 0             node01
128  CF      STOPPED  node01     node01        NO     -                 0             node01
129  CF      STOPPED  node02     node02        NO     -                 0             node02

HOSTNAME  STATE   INSTANCE_STOPPED  ALERT
--------  ------  ----------------  -----
node01    ACTIVE  NO                YES
node02    ACTIVE  NO                YES
There is currently an alert for a member, CF, or host in the data-sharing instance. For more information on the alert, its impact, and how to clear it, run the following command: 'db2cluster -cm -list -alert'.
[db2inst1@node01 ~]$

So: before issuing halt / init 0 / poweroff / shutdown -h now, shut down GPFS first, then shut down the system:

mmumount all -a

mmshutdown -a

mmgetstate -a
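The safe shutdown order can be wrapped in a small script. A sketch with a DRY_RUN guard so it can be previewed without actually powering off (the run helper and script name are my own; running db2stop before unmounting GPFS is my addition, following the usual stop order):

```shell
#!/bin/sh
# safe_poweroff.sh -- stop DB2 and GPFS cleanly before powering off,
# to avoid the unloadable-shared-disk problem described above.
DRY_RUN=1   # set to 0 on a real node; with 1 the commands are only printed

run() {
  if [ "$DRY_RUN" = 1 ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run su - db2inst1 -c db2stop   # stop the DB2 instance first
run mmumount all -a            # unmount the GPFS file system everywhere
run mmshutdown -a              # stop the GPFS daemons
run poweroff                   # now it is safe to power off
```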

To bring the cluster back up afterwards, start the instance on each host:
db2start INSTANCE ON node01
db2start INSTANCE ON node02

Then start the CFs:
db2start CF 128
db2start CF 129

db2 update dbm cfg using CF_TRANSPORT_METHOD RDMA

Enable automatic GPFS startup at node boot: mmchconfig autoload=yes
Disable automatic GPFS startup at node boot: mmchconfig autoload=no
