Installing Oracle 11g RAC on CentOS 7.6


一、Pre-Installation Preparation

CentOS 7.6 ISO download: https://vault.centos.org/7.6.1810/isos/x86_64/

  • Database plan
Hostname   Instance   ASM Instance   DB Name   Data Diskgroup   OCR/Voting Diskgroup   Archive (FRA) Diskgroup
server01   mydb1      +ASM1          mydb      DATA             OCRVOTING              FRA
server02   mydb2      +ASM2          mydb      DATA             OCRVOTING              FRA
  • Network plan
Host       IP Address      OS           Notes
FreeNas    192.168.1.200   FreeNAS 11   Web access (port 446)
rac1       192.168.1.21    CentOS 7.6   Public IP
rac2       192.168.1.22    CentOS 7.6   Public IP
rac1       10.0.0.21       CentOS 7.6   Private IP
rac2       10.0.0.22       CentOS 7.6   Private IP
rac1       192.168.1.23    CentOS 7.6   VIP
rac2       192.168.1.24    CentOS 7.6   VIP
rac-scan   192.168.1.25    CentOS 7.6   SCAN IP
  • Shared storage
Managed By   Disk Group   Purpose
ASM          DATA         Data disks
ASM          OCRVOTING    OCR and voting disks
ASM          FRA          Fast recovery area (archive)
  • Oracle recommended sizing (OCR/Voting disks)
Block Device ASM Name Size Comments
/dev/sda OCR_VOTE01 1 GB ASM Diskgroup for OCR and Voting Disks
/dev/sdb OCR_VOTE02 1 GB ASM Diskgroup for OCR and Voting Disks
/dev/sdc OCR_VOTE03 1 GB ASM Diskgroup for OCR and Voting Disks
  • Swap sizing
grep MemTotal /proc/meminfo
grep SwapTotal /proc/meminfo
Available RAM              Swap Space Required
Between 2.5 GB and 32 GB   Equal to the size of RAM
More than 32 GB            32 GB
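
A quick way to compare the current memory and swap sizes against the guideline above (a minimal sketch; it only reports the values and changes nothing):
MEM_GB=$(( $(grep MemTotal /proc/meminfo | awk '{print $2}') / 1024 / 1024 ))
SWAP_GB=$(( $(grep SwapTotal /proc/meminfo | awk '{print $2}') / 1024 / 1024 ))
if [ "$MEM_GB" -gt 32 ]; then REQ=32; else REQ=$MEM_GB; fi
echo "RAM=${MEM_GB}GB  SWAP=${SWAP_GB}GB  required swap >= ${REQ}GB"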
  • Check /tmp space

Ensure that you have at least 1 GB of space in /tmp. If this space is not available, then increase the size, or delete unnecessary files in /tmp.

df -h /tmp
  • Media

CentOS-7-x86_64-DVD-1810.iso

p13390677_112040_Linux-x86-64_1of7.zip

p13390677_112040_Linux-x86-64_2of7.zip

p13390677_112040_Linux-x86-64_3of7.zip

rlwrap-0.43-2.el7.x86_64.rpm

二、Configure the Network
(1) Node 1
[root@server01 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=fa2870bb-e0df-4745-8cee-2cd8ce3a6770
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.1.21
PREFIX=24
IPV6_PRIVACY=no


[root@server01 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens37
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens37
UUID=fa2870bb-e0df-4745-8cee-2cd8ce3a6770
DEVICE=ens37
ONBOOT=yes
IPADDR=10.0.0.21
PREFIX=24
IPV6_PRIVACY=no

(2) Node 2
[root@server02 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=3cdc3cda-8bac-44ff-9e50-828882f77d1d
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.1.22
PREFIX=24
IPV6_PRIVACY=no


[root@server02 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens37
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens37
UUID=3cdc3cda-8bac-44ff-9e50-828882f77d1d
DEVICE=ens37
ONBOOT=yes
IPADDR=10.0.0.22
PREFIX=24
IPV6_PRIVACY=no


(3) Restart the network service:
[root@server01 ~]# systemctl restart network
[root@server02 ~]# systemctl restart network

(4) Parameter reference:
DEVICE="eth0"               # device name
BOOTPROTO="static"          # address assignment: dhcp (dynamic) or static
HWADDR="00:22:15:3A:F4:7E"  # MAC address
BROADCAST=192.168.1.255     # broadcast address
IPADDR=192.168.1.4          # IP address
NETMASK=255.255.255.0       # netmask
GATEWAY=192.168.1.3         # gateway
NM_CONTROLLED="no"          # whether NetworkManager manages this interface (no here, since NetworkManager is not used)
ONBOOT="yes"                # bring the interface up at boot
TYPE="Ethernet"             # network type
ARPCHECK=no
三、Disable SELinux and the Firewall
(1) Disable SELinux
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/" /etc/selinux/config
getenforce

(2) Disable the firewall
systemctl stop firewalld.service
systemctl disable firewalld.service

--- To open a specific port instead (if the firewall stays enabled):
firewall-cmd --zone=public --add-port=1521/tcp --permanent
--- List permanently opened ports:
firewall-cmd --list-ports --permanent
四、Configure YUM
Two options:
(1) Configure a local YUM repository (from the installation DVD):
[root@server01 ~]# mkdir /opt/yum
[root@server01 ~]# cp -r /run/media/admin/CentOS\ 7\ x86_64/* /opt/yum/

[root@server01 ~]# cat > /etc/yum.repos.d/CentOS-Base.repo <<EOF
[base]
name=CentOS-Base
baseurl=file:///opt/yum
gpgcheck=0
enabled=1

EOF

(2) Configure a remote YUM repository (163 mirror):
[root@server01 soft]# cd /etc/yum.repos.d/
[root@server01 soft]# mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

[root@centos yum.repos.d]# wget http://mirrors.163.com/.help/CentOS7-Base-163.repo

[root@centos yum.repos.d]# yum list

[root@centos yum.repos.d]# yum clean all && yum makecache
五、Install OS Dependency Packages
(1) Check the required packages
To check which RPMs are installed:
rpm -q --qf '%{name}-%{VERSION}-%{RELEASE} (%{ARCH})\n' \
binutils \
compat-libcap1 \
compat-libstdc++ \
gcc \
gcc-c++ \
glibc \
glibc-devel \
ksh \
libgcc \
libstdc++ \
libstdc++-devel \
libaio \
libaio-devel \
libXi \
libXtst \
make \
sysstat \
| grep installed

(2) Bulk install via yum (note: pdksh is not shipped with CentOS 7; ksh satisfies that requirement)
yum -y install gcc gcc-c++ make binutils compat-libstdc++ compat-libcap1 elfutils-libelf elfutils-libelf-devel glibc glibc-common glibc-devel libaio libaio-devel libstdc++ libstdc++-devel libXi libXtst unixODBC unixODBC-devel pdksh ksh sysstat kernel dracut


(3) Manually install compat-libstdc++
[root@server01 soft]# rpm -ivh compat-libstdc++-33-3.2.3-69.el6.x86_64.rpm
[root@server02 soft]# rpm -ivh compat-libstdc++-33-3.2.3-69.el6.x86_64.rpm
六、Disable Transparent HugePages

Note:

If Transparent HugePages is removed from the kernel, then the /sys/kernel/mm/transparent_hugepage or /sys/kernel/mm/redhat_transparent_hugepage files do not exist.

Red Hat Enterprise Linux kernels:

# cat /sys/kernel/mm/redhat_transparent_hugepage/enabled

Other kernels:

# cat /sys/kernel/mm/transparent_hugepage/enabled

The following is a sample output that shows Transparent HugePages memory being used as the [always] flag is enabled.

[always] never

To disable Transparent HugePages, perform the following steps:

  1. Add the following entry to the kernel boot line in the /etc/grub.conf or /etc/default/grub file:

    transparent_hugepage=never

    For example:

    title Oracle Linux Server (2.6.32-300.25.1.el6uek.x86_64)
    root (hd0,0)
    kernel /vmlinuz-2.6.32-300.25.1.el6uek.x86_64 ro root=LABEL=/ transparent_hugepage=never
    initrd /initramfs-2.6.32-300.25.1.el6uek.x86_64.img
  2. Restart the system to make the changes permanent.

--- One-shot commands (CentOS 7, grub2):
sed -i 's/quiet/quiet transparent_hugepage=never numa=off/' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot
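
After the reboot, verify on both nodes that Transparent HugePages is disabled; [never] should now be the selected value (using the non-Red-Hat kernel path shown above):
cat /sys/kernel/mm/transparent_hugepage/enabled
# expected: always madvise [never]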
七、Install the rlwrap Tool
(1) Install rlwrap
[root@server01 ~]# yum -y install readline-devel
[root@server01 ~]# cd /opt/soft
[root@server01 soft]# tar xvf rlwrap-0.45.2.tar.gz -C /usr/local/
[root@server01 soft]# cd /usr/local/
[root@server01 local]# ln -s rlwrap-0.45.2 rlwrap
[root@server01 local]# cd rlwrap
[root@server01 rlwrap]# ./configure && make && make install

(2) Configure aliases in /etc/profile
[root@server01 rlwrap]# cat >> /etc/profile <<EOF


alias sqlplus='rlwrap sqlplus'
alias rman='rlwrap rman'

EOF

[root@server01 rlwrap]# source /etc/profile
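
With the aliases in place, sqlplus and rman started from a login shell go through rlwrap, so arrow-key history and line editing work, for example:
sqlplus / as sysdba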
八、Create Users and Directories
(1) Create the oracle and grid users and groups
groupadd -g 700 oinstall;
groupadd -g 701 dba;
groupadd -g 702 oper;
groupadd -g 703 asmadmin;
groupadd -g 704 asmoper;
groupadd -g 705 asmdba;
useradd -g oinstall -G asmdba,asmadmin,asmoper,dba -u 800 grid;
useradd -g oinstall -G dba,oper,asmdba -u 900 oracle;



[root@server01 ~]# id oracle;id grid
uid=900(oracle) gid=700(oinstall) groups=700(oinstall),701(dba),702(oper),705(asmdba)
uid=800(grid) gid=700(oinstall) groups=700(oinstall),701(dba),703(asmadmin),704(asmoper),705(asmdba)

[root@server02 ~]# id oracle;id grid
uid=900(oracle) gid=700(oinstall) groups=700(oinstall),701(dba),702(oper),705(asmdba)
uid=800(grid) gid=700(oinstall) groups=700(oinstall),701(dba),703(asmadmin),704(asmoper),705(asmdba)


(2) Set the user passwords
echo -n oracle |passwd --stdin oracle;
echo -n grid |passwd --stdin grid;

(3) Create the directories
mkdir -p /opt/app/oracle/product/11.2.0/db_1 ;
mkdir -p /opt/app/grid;
mkdir -p /opt/app/11.2.0/grid;

chown -R oracle.oinstall /opt/app/oracle;
chmod -R 775 /opt/app/oracle;

chown grid.oinstall /opt/app/grid;
chmod 775 /opt/app/grid;

chown -R grid.oinstall /opt/app/11.2.0 ;
chmod -R 775 /opt/app/11.2.0;

mkdir -p /opt/app/oraInventory;
chown grid.oinstall /opt/app/oraInventory;
九、Configure Environment Variables
(1) Oracle user environment variables
Node 1:
[oracle@rac1 ~]$ vi .bash_profile


if [ -t 0 ]; then
stty intr ^C
fi

export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_TERM=xterm
export ORACLE_BASE=/opt/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export ORACLE_SID=mydb1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export PATH=$PATH:$ORACLE_HOME/bin
umask 022



[oracle@rac1 ~]$ source .bash_profile

Node 2:
[oracle@rac2 ~]$ vi .bash_profile

if [ -t 0 ]; then
stty intr ^C
fi

export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_TERM=xterm
export ORACLE_BASE=/opt/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export ORACLE_SID=mydb2
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export PATH=$PATH:$ORACLE_HOME/bin
umask 022


[oracle@rac2 ~]$ source .bash_profile



(2) Grid user environment variables
Node 1:
[grid@rac1 ~]$ vi .bash_profile

if [ -t 0 ]; then
stty intr ^C
fi

export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=+ASM1
export ORACLE_BASE=/opt/app/grid
export ORACLE_HOME=/opt/app/11.2.0/grid
export PATH=$PATH:$ORACLE_HOME/bin
umask 022

[grid@rac1 ~]$ source .bash_profile

Node 2:
[grid@rac2 ~]$ vi .bash_profile


if [ -t 0 ]; then
stty intr ^C
fi

export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=+ASM2
export ORACLE_BASE=/opt/app/grid
export ORACLE_HOME=/opt/app/11.2.0/grid
export PATH=$PATH:$ORACLE_HOME/bin
umask 022


[grid@rac2 ~]$ source .bash_profile
十、Resource Limits and Kernel Parameters
(1) Resource limits
--- limits.conf
Node 1:
[root@server01 ~]# cat >> /etc/security/limits.conf <<EOF

oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768

EOF


Node 2:
[root@server02 ~]# cat >> /etc/security/limits.conf <<EOF

oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768

EOF


--- PAM login limits
Node 1:
[root@server01 ~]# cat >> /etc/pam.d/login <<EOF


session required pam_limits.so

EOF

Node 2:
[root@server02 ~]# cat >> /etc/pam.d/login <<EOF


session required pam_limits.so

EOF



--- /etc/profile limits (note the quoted 'EOF' delimiter so $USER and $SHELL are written literally instead of being expanded while appending)
Node 1:
[root@server01 ~]# cat >> /etc/profile <<'EOF'


if [ $USER = "oracle" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
fi


EOF

[root@server01 ~]# source /etc/profile


Node 2:
[root@server02 ~]# cat >> /etc/profile <<'EOF'


if [ $USER = "oracle" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
fi


EOF


[root@server02 ~]# source /etc/profile



(2) Kernel parameters
Node 1:
[root@server01 ~]# cat >> /etc/sysctl.conf <<EOF

net.ipv4.ip_local_port_range= 9000 65500
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 1073741824
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.core.rmem_default=262144
net.core.wmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_max=1048576
fs.aio-max-nr = 1048576


EOF

[root@server01 ~]# sysctl -p


Node 2:
[root@server02 ~]# cat >> /etc/sysctl.conf <<EOF

net.ipv4.ip_local_port_range= 9000 65500
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 1073741824
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.core.rmem_default=262144
net.core.wmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_max=1048576
fs.aio-max-nr = 1048576


EOF

[root@server02 ~]# sysctl -p

# kernel.shmall: the total number of shared memory pages allowed system-wide. If it is too small the database may fail to start; many people tune only SHMMAX and overlook SHMALL. The recommended value is physical memory divided by the page size.
# getconf PAGE_SIZE
Use getconf to obtain the page size, then compute a reasonable SHMALL:
SQL> select 2*1024*1024*1024/4096 from dual;
2*1024*1024*1024/4096
----------------------
524288
For a system with 2 GB of RAM and a 4 KB page size, SHMALL should therefore be set to 524288.

# kernel.shmmax: the maximum size, in bytes, of a single shared memory segment a Linux process can allocate. It is usually set to about half of total memory; it must be larger than SGA_MAX_TARGET or MEMORY_MAX_TARGET, so on a database server it should be somewhat more than half of RAM.
SQL> select 2*1024*1024*1024/2 from dual;
2*1024*1024*1024/2
----------------------
1073741824
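
The same arithmetic can be done directly on the host (a minimal sketch; the printed values are suggestions to compare against the SGA sizing before editing /etc/sysctl.conf):
PAGE_SIZE=$(getconf PAGE_SIZE)
MEM_BYTES=$(( $(grep MemTotal /proc/meminfo | awk '{print $2}') * 1024 ))
echo "suggested kernel.shmall = $(( MEM_BYTES / PAGE_SIZE )) pages"
echo "suggested kernel.shmmax = $(( MEM_BYTES / 2 )) bytes"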

十一、Configure /etc/hosts
Node 1:
[root@server01 ~]# cat > /etc/hosts <<EOF
127.0.0.1 localhost
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

#public
192.168.1.21 server01
192.168.1.22 server02

#private
10.0.0.21 server01-priv
10.0.0.22 server02-priv

#vip
192.168.1.23 server01-vip
192.168.1.24 server02-vip

#rac-scan
192.168.1.25 rac-scan

EOF

Note: reduce the loopback entry to 127.0.0.1 localhost; the host name must not be mapped to the loopback address.

Node 2:
[root@server02 ~]# cat > /etc/hosts <<EOF
127.0.0.1 localhost
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

#public
192.168.1.21 server01
192.168.1.22 server02

#private
10.0.0.21 server01-priv
10.0.0.22 server02-priv

#vip
192.168.1.23 server01-vip
192.168.1.24 server02-vip

#rac-scan
192.168.1.25 rac-scan

EOF

Note: reduce the loopback entry to 127.0.0.1 localhost; the host name must not be mapped to the loopback address.
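
A quick resolution check from both nodes (a minimal sketch using the names defined above; the VIP and SCAN addresses will only answer ping once Grid Infrastructure is running):
for h in server01 server02 server01-priv server02-priv; do
    ping -c 1 $h > /dev/null && echo "$h ok" || echo "$h FAILED"
done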

十二、Configure SSH User Equivalence

If you do not want to configure the SSH trust manually, Oracle's own installer tooling can set up the equivalence for you.

(1) Oracle user equivalence
Node 1:
[oracle@rac1 ~]$ ssh-keygen -t rsa
[oracle@rac1 ~]$ ssh-keygen -t dsa
[oracle@rac1 ~]$ cd .ssh
[oracle@rac1 .ssh]$ cat *.pub > /tmp/authorized_keys
[oracle@rac1 .ssh]$ scp /tmp/authorized_keys rac2:`pwd`
[oracle@rac1 .ssh]$ rm -f /tmp/authorized_keys
Node 2:
[oracle@rac2 ~]$ cd .ssh/
[oracle@rac2 .ssh]$ ssh-keygen -t rsa
[oracle@rac2 .ssh]$ ssh-keygen -t dsa
[oracle@rac2 .ssh]$ cat *.pub >> authorized_keys
[oracle@rac2 .ssh]$ scp authorized_keys rac1:`pwd`

(2) Grid user equivalence
Node 1:
[grid@rac1 ~]$ ssh-keygen -t rsa
[grid@rac1 ~]$ ssh-keygen -t dsa
[grid@rac1 ~]$ cd .ssh
[grid@rac1 .ssh]$ cat *.pub > /tmp/authorized_keys
[grid@rac1 .ssh]$ scp /tmp/authorized_keys rac2:`pwd`
[grid@rac1 .ssh]$ rm -f /tmp/authorized_keys
Node 2:
[grid@rac2 ~]$ cd .ssh
[grid@rac2 ~]$ ssh-keygen -t rsa
[grid@rac2 ~]$ ssh-keygen -t dsa
[grid@rac2 .ssh]$ cat *.pub >> authorized_keys
[grid@rac2 .ssh]$ scp authorized_keys rac1:`pwd`

(3) Test the equivalence:
[oracle@rac1 ~]$ ssh rac1
[oracle@rac1 ~]$ ssh rac2
[oracle@rac2 ~]$ ssh rac2
[oracle@rac2 ~]$ ssh rac1

[oracle@rac1 ~]$ ssh rac2-pri
[oracle@rac2 ~]$ ssh rac1-pri



[grid@rac1 ~]$ ssh rac1
[grid@rac1 ~]$ ssh rac2
[grid@rac2 ~]$ ssh rac2
[grid@rac2 ~]$ ssh rac1

[grid@rac1 ~]$ ssh rac2-pri
[grid@rac2 ~]$ ssh rac1-pri
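
The per-host checks above can be wrapped in a small loop (a sketch using the hostnames from /etc/hosts; run it as both oracle and grid on each node, answering the host-key prompts on the first pass so known_hosts is populated before running the installer):
for h in server01 server02 server01-priv server02-priv; do
    ssh $h "hostname; date"
done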

十三、Configure Time Synchronization

You can use either NTP or Oracle's own CTSS to keep the two nodes' clocks synchronized.

Two options:
Option 1: configure NTP
(1) Check the NTP packages
[root@server01 ~]# rpm -qa | grep ntp
fontpackages-filesystem-1.41-1.1.el6.noarch
ntpdate-4.2.6p5-5.el6.centos.x86_64
ntp-4.2.6p5-5.el6.centos.x86_64

(2) Sync the rac1 system clock with the hardware (CMOS) clock
Check the system time:
[root@server01 ~]# date
Sun Jul 16 21:09:36 CST 2017
Check the hardware clock:
[root@server01 ~]# hwclock
Sun 16 Jul 2017 01:09:44 PM CST -0.233209 seconds
The system clock and the hardware clock differ; sync the hardware clock to the system clock with:
[root@server01 ~]# clock --systohc
[root@server01 ~]# hwclock
Sun 16 Jul 2017 09:11:42 PM CST -0.243015 seconds
[root@server01 ~]# date
Sun Jul 16 21:11:45 CST 2017

Force the system time into CMOS:
[root@server01 ~]# clock -w

Manually set the time:
[root@server01 ~]# date -s 13:12:00
[root@server01 ~]# clock -w

(3) Configure ntp.conf on the rac1 server side
[root@server01 ~]# vim /etc/ntp.conf

server 127.127.1.0
fudge 127.127.1.0 stratum 11
#driftfile /var/lib/ntp/drift
broadcastdelay 0.008
logfile /var/log/ntp.log

(4) Configure ntp.conf on the rac2 client side
server 192.168.1.21
server 127.127.1.0
fudge 127.127.1.0 stratum 10

#driftfile /var/lib/ntp/drift
authenticate no
broadcastdelay 0.008
logfile /var/log/ntp.log

(5) Set the ntpd daemon options on both nodes
Node 1:
[root@server01 ~]# vim /etc/sysconfig/ntpd
SYNC_HWCLOCK=yes
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

Node 2:
[root@server02 ~]# vim /etc/sysconfig/ntpd
SYNC_HWCLOCK=yes
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

(6) Enable NTP at boot on both nodes with chkconfig:
Node 1:
[root@server01 ~]# chkconfig --list |grep ntp
ntpd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
ntpdate 0:off 1:off 2:off 3:off 4:off 5:off 6:off
[root@server01 ~]# chkconfig ntpd on

Node 2:
[root@server02 ~]# chkconfig --list |grep ntp
ntpd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
ntpdate 0:off 1:off 2:off 3:off 4:off 5:off 6:off
[root@server02 ~]# chkconfig ntpd on

(7) Start the ntpd service:
Node 1:
[root@server01 ~]# service ntpd start
Starting ntpd: [ OK ]

Node 2:
[root@server02 ~]# service ntpd start
Starting ntpd: [ OK ]


(8) Check the NTP status

[root@server01 ~]# ntpstat
synchronised to local net at stratum 12
time correct to within 1948 ms
polling server every 64 s

[root@server02 ~]# ntpstat
synchronised to local net at stratum 11
time correct to within 3948 ms
polling server every 64 s


[root@server01 ~]# ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
*LOCAL(0) .LOCL. 11 l 36 64 17 0.000 0.000 0.000


[root@server02 ~]# ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
rac1 LOCAL(0) 12 u 16 64 17 0.394 0.100 0.014
*LOCAL(0) .LOCL. 10 l 19 64 17 0.000 0.000 0.000


(9) Check the NTP logs

[root@server02 ~]# tail -200 /var/log/ntp.log
[root@server02 ~]# tail -200 /var/log/messages

(10) Notes
Use ntpdate to synchronize the client against the time server.
Before configuring ntpd, run ntpdate manually once so that this host is not too far off from the time server; otherwise ntpd will not synchronize. The command prints the offset between the client and the server; if the offset is larger than 1000 seconds, adjust the clock with ntpdate first.
Current time on the time server:
[root@openfiler ~]# date
Fri Jan 13 15:50:54 CST 2017
Time on node 1 and node 2:
[oracle@rac1 ~]$ date;ssh rac2 date
Fri Jan 13 07:53:53 CST 2017
Fri Jan 13 07:53:53 CST 2017

[root@server02 ~]# ntpdate -d 192.168.1.21
13 Jan 10:00:26 ntpdate[6959]: step time server 192.168.1.21 offset 1544.267089 sec
By default ntpd will not synchronize automatically once the offset exceeds 1000 seconds.

[root@server02 ~]# ntpdate -u 192.168.1.21
13 Jan 10:27:24 ntpdate[6986]: step time server 192.168.1.21 offset 1544.267112 sec

Error when syncing:
[root@server02 ~]# ntpdate 192.168.1.21
13 Jan 19:43:38 ntpdate[1985]: the NTP socket is in use, exiting
[root@server02 ~]#
[root@server02 ~]#
[root@server02 ~]# service ntpd stop
Shutting down ntpd: [ OK ]
[root@server02 ~]# ntpdate 192.168.1.21
13 Jan 19:43:58 ntpdate[1997]: adjust time server 192.168.1.21 offset -0.000006 sec
[root@server02 ~]# service ntpd start
Starting ntpd: [ OK ]

Cause:
ntpdate and the ntpd service cannot run at the same time (the NTP socket is in use).


Option 2: use Oracle's built-in Cluster Time Synchronization Service (CTSS):

To have the Cluster Time Synchronization Service provide time synchronization for the cluster, the Network Time Protocol (NTP) and its configuration must be deactivated.
To deactivate NTP, stop the current ntpd service, disable it from the init sequence, and move the ntp.conf file aside. To complete these steps on Oracle Enterprise Linux, run the following commands as root on both Oracle RAC nodes:
[root@server01 ~]# service ntpd stop
[root@server01 ~]# chkconfig ntpd off
[root@server01 ~]# mv /etc/ntp.conf /etc/ntp.conf.org
[root@server01 ~]# rm /var/run/ntpd.pid

When the installer finds that NTP is inactive, the Cluster Time Synchronization Service is installed in active mode and synchronizes the time across all nodes. If NTP is found to be configured, CTSS is started in observer mode and Oracle Clusterware does not perform active time synchronization in the cluster.
After installation, to confirm that ctssd is active, run the following as the Grid installation owner (grid):
[grid@rcahadb1 ~]$ crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): 0
[grid@rcahadb2 ~]$ crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): 0
Note: the CTSS configuration above was not tested by the author and is provided for reference only.
十四、DNS Configuration

If no DNS server is available, the SCAN name can be faked with the nslookup wrapper below.

[root@server01 ~]# mv /usr/bin/nslookup /usr/bin/nslookup.original
[root@server01 ~]# vi /usr/bin/nslookup

#!/bin/bash
HOSTNAME=${1}
if [[ $HOSTNAME = "rac-scan" ]]; then
echo "Server: 192.168.1.23 "
echo "Address: 192.168.1.23 #53"
echo "Non-authoritative answer:"
echo "Name: rac-scan"
echo "Address: 192.168.1.25 "
else
/usr/bin/nslookup.original $HOSTNAME
fi



[root@server01 ~]# chmod 755 /usr/bin/nslookup

[root@server02 ~]# mv /usr/bin/nslookup /usr/bin/nslookup.original
[root@server02 ~]# vi /usr/bin/nslookup

#!/bin/bash
HOSTNAME=${1}
if [[ $HOSTNAME = "rac-scan" ]]; then
echo "Server: 192.168.1.24 "
echo "Address: 192.168.1.24 #53"
echo "Non-authoritative answer:"
echo "Name: rac-scan"
echo "Address: 192.168.1.25 "
else
/usr/bin/nslookup.original $HOSTNAME
fi



[root@server02 ~]# chmod 755 /usr/bin/nslookup

Test:
[root@server02 ~]# nslookup rac-scan
十五、Configure Shared Storage
1. Configure shared storage with FreeNAS


2. iSCSI basics
  • Install the iSCSI initiator
[root@server01 Packages]# yum list |grep -i iscsi
iscsi-initiator-utils.x86_64 6.2.0.874-10.el7 @anaconda
iscsi-initiator-utils-iscsiuio.x86_64 6.2.0.874-10.el7 @anaconda
libiscsi.x86_64 1.9.0-7.el7 @anaconda
libvirt-daemon-driver-storage-iscsi.x86_64

[root@server01 ~]# cd /opt/yum/Packages/
[root@server01 Packages]# ll |grep -i iscsi
-rw-r--r--. 1 root root 431292 Sep 27 11:34 iscsi-initiator-utils-6.2.0.874-10.el7.x86_64.rpm
-rw-r--r--. 1 root root 93832 Sep 27 11:34 iscsi-initiator-utils-iscsiuio-6.2.0.874-10.el7.x86_64.rpm
-rw-r--r--. 1 root root 61360 Sep 27 11:34 libiscsi-1.9.0-7.el7.x86_64.rpm
-rw-r--r--. 1 root root 213548 Sep 27 11:34 libvirt-daemon-driver-storage-iscsi-4.5.0-10.el7.x86_64.rpm


[root@server01 Packages]# rpm -ivh iscsi-initiator-utils-6.2.0.874-10.el7.x86_64.rpm
warning: iscsi-initiator-utils-6.2.0.874-10.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
Preparing... ################################# [100%]
package iscsi-initiator-utils-6.2.0.874-10.el7.x86_64 is already installed


[root@server02 Packages]# rpm -ivh iscsi-initiator-utils-6.2.0.874-10.el7.x86_64.rpm
warning: iscsi-initiator-utils-6.2.0.874-10.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
Preparing... ################################# [100%]
package iscsi-initiator-utils-6.2.0.874-10.el7.x86_64 is already installed


[root@server01 Packages]# iscsi-iname
iqn.1994-05.com.redhat:5e85877f708d

[root@server02 Packages]# iscsi-iname
iqn.1994-05.com.redhat:56ab2ce2e0f7

[root@server01 Packages]# service iscsid status
Redirecting to /bin/systemctl status iscsid.service
● iscsid.service - Open-iSCSI
Loaded: loaded (/usr/lib/systemd/system/iscsid.service; disabled; vendor preset: disabled)
Active: inactive (dead)
Docs: man:iscsid(8)
man:iscsiuio(8)
man:iscsiadm(8)

[root@server02 Packages]# service iscsid status
Redirecting to /bin/systemctl status iscsid.service
● iscsid.service - Open-iSCSI
Loaded: loaded (/usr/lib/systemd/system/iscsid.service; disabled; vendor preset: disabled)
Active: inactive (dead)
Docs: man:iscsid(8)
man:iscsiuio(8)
man:iscsiadm(8)


[root@server01 Packages]# systemctl status iscsid
● iscsid.service - Open-iSCSI
Loaded: loaded (/usr/lib/systemd/system/iscsid.service; disabled; vendor preset: disabled)
Active: inactive (dead)
Docs: man:iscsid(8)
man:iscsiuio(8)
man:iscsiadm(8)

  • Enable the service at boot
Enable a service at boot:            systemctl enable iscsid.service
Disable a service at boot:           systemctl disable iscsid.service
Check whether a service is enabled:  systemctl is-enabled iscsid.service
List enabled services:               systemctl list-unit-files | grep enabled

[root@server01 Packages]# systemctl is-enabled iscsid.service
disabled
[root@server01 Packages]# systemctl enable iscsid.service
[root@server01 Packages]# systemctl is-enabled iscsid.service
enabled


[root@server02 Packages]# systemctl is-enabled iscsid.service
disabled
[root@server02 Packages]# systemctl enable iscsid.service
[root@server02 Packages]# systemctl is-enabled iscsid.service
enabled
  • View the discovered iSCSI node records
iscsiadm -m node
iscsiadm -m node -L all
  • Discover targets and start iscsid
[root@server01 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.200
192.168.1.200:3260,1 iqn.2021-09.org.freenas.oracle19c
192.168.1.200:3260,1 iqn.2021-09.org.freenas.oracle11g

[root@server01 Packages]# systemctl status iscsid.service
● iscsid.service - Open-iSCSI
Loaded: loaded (/usr/lib/systemd/system/iscsid.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2021-09-17 14:39:45 CST; 3min 37s ago
Docs: man:iscsid(8)
man:iscsiuio(8)
man:iscsiadm(8)
Main PID: 62468 (iscsid)
Status: "Ready to process requests"
Tasks: 1
CGroup: /system.slice/iscsid.service
└─62468 /sbin/iscsid -f

Sep 17 14:39:45 server01 systemd[1]: Starting Open-iSCSI...
Sep 17 14:39:45 server01 systemd[1]: Started Open-iSCSI.



[root@mysql Packages]# systemctl start iscsid.service

  • Check the disks
[root@mysql Packages]# fdisk -l
If fdisk -l does not show the new disks, reboot.
  • Show node details
[root@mysql Packages]# iscsiadm -m node -o show 
  • Log in to a specific IQN
iscsiadm --mode node --targetname iqn.2021-09.org.freenas.oracle11g  --portal 192.168.1.200 --login

iscsiadm -m node -T iqn.2021-09.org.freenas.oracle11g -p 192.168.1.200:3260 -l
  • Remove iSCSI storage
(1) Log out of an iSCSI target:
iscsiadm -m node -T iqn.2021-09.org.freenas.oracle11g -p 192.168.1.200 -u
(2) Log out of all sessions:
iscsiadm -m node --logoutall=all
(3) Delete the discovery record:
iscsiadm -m node -o delete -T iqn.2021-09.org.freenas.oracle11g -p 192.168.1.200
(1) Discover targets
[root@server01 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.200
192.168.1.234:3260,1 iqn.2005-10.org.freenas.ctl
(2) Log in
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl -p 192.168.1.200:3260 -l
(3) Log out
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl -p 192.168.1.200 -u
iscsiadm -m node --logoutall=all
(4) Delete the discovery record
iscsiadm -m node -o delete -T iqn.2005-10.org.freenas.ctl -p 192.168.1.200
3. Configure Oracle shared storage
(1) Install the iSCSI initiator
[root@server01 Packages]# yum list |grep -i iscsi
iscsi-initiator-utils.x86_64 6.2.0.874-19.el7 @anaconda
iscsi-initiator-utils-iscsiuio.x86_64 6.2.0.874-19.el7 @anaconda
libiscsi.x86_64 1.9.0-7.el7 @anaconda
libvirt-daemon-driver-storage-iscsi.x86_64

[root@server01 ~]# cd /opt/yum/Packages/
[root@server01 Packages]# ll |grep -i iscsi
-rw-r--r--. 1 root root 433204 Sep 17 11:52 iscsi-initiator-utils-6.2.0.874-19.el7.x86_64.rpm
-rw-r--r--. 1 root root 95924 Sep 17 11:52 iscsi-initiator-utils-iscsiuio-6.2.0.874-19.el7.x86_64.rpm
-rw-r--r--. 1 root root 61360 Sep 17 11:52 libiscsi-1.9.0-7.el7.x86_64.rpm
-rw-r--r--. 1 root root 235128 Sep 17 11:52 libvirt-daemon-driver-storage-iscsi-4.5.0-36.el7.x86_64.rpm


[root@server01 Packages]# rpm -ivh iscsi-initiator-utils-6.2.0.874-19.el7.x86_64.rpm
warning: iscsi-initiator-utils-6.2.0.874-19.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
Preparing... ################################# [100%]
package iscsi-initiator-utils-6.2.0.874-19.el7.x86_64 is already installed

[root@server02 Packages]# rpm -ivh iscsi-initiator-utils-6.2.0.874-19.el7.x86_64.rpm
warning: iscsi-initiator-utils-6.2.0.874-19.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
Preparing... ################################# [100%]
package iscsi-initiator-utils-6.2.0.874-19.el7.x86_64 is already installed


(2) Enable the service at boot
[root@server01 Packages]# systemctl is-enabled iscsid.service
disabled
[root@server01 Packages]# systemctl enable iscsid.service
[root@server01 Packages]# systemctl is-enabled iscsid.service
enabled


[root@server02 Packages]# systemctl is-enabled iscsid.service
disabled
[root@server02 Packages]# systemctl enable iscsid.service
[root@server02 Packages]# systemctl is-enabled iscsid.service
enabled


(3) Discover the iSCSI targets
[root@server01 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.200
192.168.1.200:3260,1 iqn.2021-09.org.freenas.oracle19c
192.168.1.200:3260,1 iqn.2021-09.org.freenas.oracle11g

(4) Check the disks
[root@server01 ~]# fdisk -l


Tips:
If fdisk -l does not show the new disks, reboot.
fdisk -l lists the disks of every logged-in target. To keep things tidy, log out of all sessions first and then log in only to the required IQN; discovery records that are not needed can also be deleted.

(5) Log out of all sessions, then log in only to the 11g IQN
[root@server01 ~]# iscsiadm -m node --logoutall=all

(6) Log in to the specific IQN
iscsiadm --mode node --targetname iqn.2021-09.org.freenas.oracle11g --portal 192.168.1.200 --login
iscsiadm -m node -T iqn.2021-09.org.freenas.oracle11g -p 192.168.1.200:3260 -l

(7) Delete an IQN so it does not reappear, even after a reboot
To delete a specific IQN, use:
iscsiadm -m node -o delete -T iqn.2021-09.org.freenas.oracle19c -p 192.168.1.200
iscsiadm -m node -o delete -T iqn.2021-09.org.freenas.oracle11g -p 192.168.1.200
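
Optionally, mark the remaining node record for automatic login so the LUNs reappear after a reboot (a sketch; adjust the IQN and portal to your environment):
iscsiadm -m node -T iqn.2021-09.org.freenas.oracle11g -p 192.168.1.200:3260 --op update -n node.startup -v automatic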
4. Bind SCSI IDs with UDEV rules
(1) Check the disks:
[root@server01 ~]# fdisk -l

Disk /dev/sda: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000d27a6

Device Boot Start End Blocks Id System
/dev/sda1 * 2048 616447 307200 83 Linux
/dev/sda2 616448 83886079 41634816 8e Linux LVM

Disk /dev/mapper/centos-root: 38.3 GB, 38335938560 bytes, 74874880 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/centos-swap: 4294 MB, 4294967296 bytes, 8388608 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdb: 1073 MB, 1073758208 bytes, 2097184 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 16384 bytes
I/O size (minimum/optimal): 16384 bytes / 1048576 bytes


Disk /dev/sdd: 1073 MB, 1073758208 bytes, 2097184 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 16384 bytes
I/O size (minimum/optimal): 16384 bytes / 1048576 bytes


Disk /dev/sdc: 1073 MB, 1073758208 bytes, 2097184 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 16384 bytes
I/O size (minimum/optimal): 16384 bytes / 1048576 bytes


Disk /dev/sde: 4294 MB, 4294983680 bytes, 8388640 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 16384 bytes
I/O size (minimum/optimal): 16384 bytes / 1048576 bytes


Disk /dev/sdf: 4294 MB, 4294983680 bytes, 8388640 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 16384 bytes
I/O size (minimum/optimal): 16384 bytes / 1048576 bytes


Disk /dev/sdg: 10.7 GB, 10737434624 bytes, 20971552 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 16384 bytes
I/O size (minimum/optimal): 16384 bytes / 1048576 bytes


Disk /dev/sdh: 10.7 GB, 10737434624 bytes, 20971552 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 16384 bytes
I/O size (minimum/optimal): 16384 bytes / 1048576 bytes


(2) Create the rules (each generated rule must stay on a single line)
for i in b c d e f g h;
do
echo "KERNEL==\"sd*\", SUBSYSTEM==\"block\", PROGRAM==\"/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", RUN+=\"/bin/sh -c 'mknod /dev/asm-disk$i b \$major \$minor; chown grid:asmadmin /dev/asm-disk$i; chmod 0660 /dev/asm-disk$i'\"" >> /etc/udev/rules.d/99-oracle-asmdevices.rules
done


# Alternative: generate symlink-based rules (/dev/asmdisk*) instead of mknod device nodes; use one approach or the other.
for i in b c d e f g;
do
echo "KERNEL==\"sd?\",SUBSYSTEM==\"block\", PROGRAM==\"/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\",RESULT==\"`/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", SYMLINK+=\"asmdisk$i\",OWNER=\"grid\", GROUP=\"asmadmin\",MODE=\"0660\"" >> /etc/udev/rules.d/99-dm-devices.rules
done
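
Before loading the rules, it is worth confirming that each shared disk reports the same WWID on both nodes (a quick sketch that reuses the scsi_id call embedded in the rules; run it on server01 and server02 and compare the output):
for i in b c d e f g h; do
    echo -n "sd$i: "
    /usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i
done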



[root@server01 ~]# more /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/de
v/$name",
RESULT=="36589cfc000000a4bf63af9f1c3bcba19", RUN+="/bin/sh -c 'mknod /dev/asm-diskb b $major $minor; chown grid:as
madmin /dev/asm-diskb; chmod 0660 /dev/asm-diskb '"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/de
v/$name",
RESULT=="36589cfc0000009c255a5acae0dc286d6", RUN+="/bin/sh -c 'mknod /dev/asm-diskc b $major $minor; chown grid:as
madmin /dev/asm-diskc; chmod 0660 /dev/asm-diskc '"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/de
v/$name",
RESULT=="36589cfc00000085f08f49e93c5d1bffe", RUN+="/bin/sh -c 'mknod /dev/asm-diskd b $major $minor; chown grid:as
madmin /dev/asm-diskd; chmod 0660 /dev/asm-diskd '"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/de
v/$name",
RESULT=="36589cfc00000060afbb1a786a17951f7", RUN+="/bin/sh -c 'mknod /dev/asm-diske b $major $minor; chown grid:as
madmin /dev/asm-diske; chmod 0660 /dev/asm-diske '"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/de
v/$name",
RESULT=="36589cfc000000b0f61853b173012667d", RUN+="/bin/sh -c 'mknod /dev/asm-diskf b $major $minor; chown grid:as
madmin /dev/asm-diskf; chmod 0660 /dev/asm-diskf '"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/de
v/$name",
RESULT=="36589cfc00000064f0808278ad18e0119", RUN+="/bin/sh -c 'mknod /dev/asm-diskg b $major $minor; chown grid:as
madmin /dev/asm-diskg; chmod 0660 /dev/asm-diskg '"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/de
v/$name",
RESULT=="36589cfc000000692a4376018dafc7afe", RUN+="/bin/sh -c 'mknod /dev/asm-diskh b $major $minor; chown grid:as
madmin /dev/asm-diskh; chmod 0660 /dev/asm-diskh '"

(3) Reload the UDEV rules and trigger them
[root@server01 ~]# udevadm control --reload-rules
[root@server01 ~]# udevadm trigger --type=devices --action=change
[root@server01 ~]# ll /dev/asm*
brw-rw---- 1 grid asmadmin 8, 16 Sep 24 11:27 /dev/asm-diskb
brw-rw---- 1 grid asmadmin 8, 32 Sep 24 11:27 /dev/asm-diskc
brw-rw---- 1 grid asmadmin 8, 48 Sep 24 11:27 /dev/asm-diskd
brw-rw---- 1 grid asmadmin 8, 64 Sep 24 11:27 /dev/asm-diske
brw-rw---- 1 grid asmadmin 8, 80 Sep 24 11:27 /dev/asm-diskf
brw-rw---- 1 grid asmadmin 8, 96 Sep 24 11:27 /dev/asm-diskg
brw-rw---- 1 grid asmadmin 8, 112 Sep 24 11:27 /dev/asm-diskh


[root@server01 ~]# udevadm test /sys/block/sdb

(4) Check the ASM disks
Node 1:
[root@server01 ~]# ll /dev/asm*
brw-rw----. 1 grid asmadmin 8, 16 Jul 16 22:32 /dev/asm-diskb
brw-rw----. 1 grid asmadmin 8, 32 Jul 16 22:32 /dev/asm-diskc
brw-rw----. 1 grid asmadmin 8, 48 Jul 16 22:32 /dev/asm-diskd
brw-rw----. 1 grid asmadmin 8, 64 Jul 16 22:32 /dev/asm-diske
brw-rw----. 1 grid asmadmin 8, 80 Jul 16 22:32 /dev/asm-diskf
brw-rw----. 1 grid asmadmin 8, 96 Jul 16 22:32 /dev/asm-diskg
brw-rw----. 1 grid asmadmin 8, 112 Jul 16 22:32 /dev/asm-diskh

Node 2:
[root@server02 ~]# ll /dev/asm*
brw-rw----. 1 grid asmadmin 8, 16 Jul 16 22:32 /dev/asm-diskb
brw-rw----. 1 grid asmadmin 8, 32 Jul 16 22:32 /dev/asm-diskc
brw-rw----. 1 grid asmadmin 8, 48 Jul 16 22:32 /dev/asm-diskd
brw-rw----. 1 grid asmadmin 8, 64 Jul 16 22:32 /dev/asm-diske
brw-rw----. 1 grid asmadmin 8, 80 Jul 16 22:32 /dev/asm-diskf
brw-rw----. 1 grid asmadmin 8, 96 Jul 16 22:32 /dev/asm-diskg
brw-rw----. 1 grid asmadmin 8, 112 Jul 16 22:32 /dev/asm-diskh


(5) Make sure the udev service is enabled
systemctl status systemd-udevd.service
systemctl start systemd-udevd.service
systemctl enable systemd-udevd.service

十六、Install Grid Infrastructure

If you are reinstalling, wipe the ASM disk headers manually first:

[root@server01 ~]# dd if=/dev/zero of=/dev/asm-diskb bs=1024 count=512
512+0 records in
512+0 records out
524288 bytes (524 kB) copied, 0.0805612 s, 6.5 MB/s
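
If every ASM disk needs to be wiped (for example after a failed Grid installation), the same dd can be looped over all the udev-named devices. A sketch only, and destructive, so double-check the device list before running it:
for d in /dev/asm-disk{b..h}; do
    dd if=/dev/zero of=$d bs=1024 count=512
done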

1. Installation

Run the installer on node 1.

(1) Prerequisite checks
[root@server01 soft]# cd /opt/soft/
[root@server01 soft]# unzip p13390677_112040_Linux-x86-64_3of7.zip

[root@server01 soft]# chown -R grid.oinstall grid
[root@server01 soft]# chmod -R 775 grid

[root@server01 soft]# cd grid/rpm/
[root@server01 soft]# export CVUQDISK_GRP=oinstall
[root@server01 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm


cvuqdisk must be installed on both nodes.
Install the operating system package cvuqdisk. Without cvuqdisk, Cluster Verification Utility cannot discover shared disks, and you receive the error message "Package cvuqdisk not installed" when you run Cluster Verification Utility.


[root@server01 soft]# su - grid
[grid@server01 ~]$ cd /opt/soft/grid
[grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose

(2) Run the installer
On macOS, use XQuartz for X11 forwarding to display the GUI:
admin@MacOSdeiMac ~ % ssh -X grid@192.168.1.21
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
SHA256:2cFhKGm2gZUFQd8o1Mj71gzRC5l40tQmcZrHNyxxYLg.
Please contact your system administrator.
Add correct host key in /Users/admin/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /Users/admin/.ssh/known_hosts:1
ECDSA host key for 192.168.1.21 has changed and you have requested strict checking.
Host key verification failed.

admin@MacOSdeiMac ~ % ssh-keygen -R 192.168.1.21
# Host 192.168.1.21 found: line 1
/Users/admin/.ssh/known_hosts updated.
Original contents retained as /Users/admin/.ssh/known_hosts.old

admin@MacOSdeiMac ~ % ssh -X grid@192.168.1.21
The authenticity of host '192.168.1.21 (192.168.1.21)' can't be established.
ECDSA key fingerprint is SHA256:2cFhKGm2gZUFQd8o1Mj71gzRC5l40tQmcZrHNyxxYLg.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.1.21' (ECDSA) to the list of known hosts.
grid@192.168.1.21's password:

[grid@server01 ~]$ cd /opt/soft/grid
[grid@server01 grid]$ ./runInstaller



[grid@rac1 grid]$ export LANG=en_US
[grid@rac1 grid]$ export DISPLAY=192.168.1.137:0.0
[grid@rac1 grid]$ ./runInstaller
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB. Actual 15214 MB Passed
Checking swap space: must be greater than 150 MB. Actual 4095 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2017-07-16_10-43-16PM. Please wait ...


The root scripts must be run in the correct order: run the first script (orainstRoot.sh) on RAC1, then on RAC2; then run the second script (root.sh) on RAC1, and finally run it on RAC2.

2. Install the patch

Before running the root scripts above, apply Patch 18370031 (Patch for Bug# 18370031 for the Linux x86-64 platform), which fixes the "ohasd failed to start" problem.

Patch 18370031 must be applied on both node 1 and node 2.
[root@server01 ~]# cd /opt/soft/Patch/
[root@server01 Patch]# ll
total 290912
-rw-r--r--. 1 root root 174911877 Sep 28 14:22 p18370031_112040_Linux-x86-64.zip
-rw-r--r--. 1 root root 122976179 Sep 28 14:22 p6880880_112000_Linux-x86-64.zip

[root@server01 Patch]# mv /opt/app/11.2.0/grid/OPatch /opt/app/11.2.0/grid/OPatch.20210928
[root@server01 Patch]# unzip p6880880_112000_Linux-x86-64.zip -d /opt/app/11.2.0/grid/

[root@server01 Patch]# ll /opt/app/11.2.0/grid/ |grep -i opatch
drwxr-x---. 16 root root 4096 Jul 30 22:29 OPatch
drwxr-xr-x. 8 grid oinstall 212 Sep 28 14:13 OPatch.20210928

[root@server01 Patch]# chown -R grid.oinstall /opt/app/11.2.0/grid/OPatch
[root@server01 Patch]# chmod -R 775 /opt/app/11.2.0/grid/OPatch

[root@server01 Patch]# unzip p18370031_112040_Linux-x86-64.zip
[root@server01 Patch]# chown -R grid.oinstall 18370031
[root@server01 Patch]# chmod -R 775 18370031

[root@server01 ~]# su - grid
[grid@server01 ~]$ /opt/app/11.2.0/grid/OPatch/opatch version
OPatch Version: 11.2.0.3.31

[grid@server01 ~]$ export PATH=$PATH:/opt/app/11.2.0/grid/OPatch

[grid@server01 ~]$ /opt/app/11.2.0/grid/OPatch/opatch lsinventory -detail -oh /opt/app/11.2.0/grid

[grid@server01 ~]$ cd /opt/soft/Patch/18370031/

[grid@server01 18370031]$ opatch apply
Oracle Interim Patch Installer version 11.2.0.3.31
Copyright (c) 2021, Oracle Corporation. All rights reserved.


Oracle Home : /opt/app/11.2.0/grid
Central Inventory : /opt/app/oraInventory
from : /opt/app/11.2.0/grid/oraInst.loc
OPatch version : 11.2.0.3.31
OUI version : 11.2.0.4.0
Log file location : /opt/app/11.2.0/grid/cfgtoollogs/opatch/opatch2021-09-28_15-12-16PM_1.log

Verifying environment and performing prerequisite checks...

--------------------------------------------------------------------------------
Start OOP by Prereq process.
Launch OOP...

Oracle Interim Patch Installer version 11.2.0.3.31
Copyright (c) 2021, Oracle Corporation. All rights reserved.


Oracle Home : /opt/app/11.2.0/grid
Central Inventory : /opt/app/oraInventory
from : /opt/app/11.2.0/grid/oraInst.loc
OPatch version : 11.2.0.3.31
OUI version : 11.2.0.4.0
Log file location : /opt/app/11.2.0/grid/cfgtoollogs/opatch/opatch2021-09-28_15-12-24PM_1.log

Verifying environment and performing prerequisite checks...
OPatch continues with these patches: 18370031

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/opt/app/11.2.0/grid')


Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch '18370031' to OH '/opt/app/11.2.0/grid'

Patching component oracle.crs, 11.2.0.4.0...
Patch 18370031 successfully applied.
Log file location: /opt/app/11.2.0/grid/cfgtoollogs/opatch/opatch2021-09-28_15-12-24PM_1.log

OPatch succeeded.










Patch notes:
Installation walk-through - Oracle Grid/RAC 11.2.0.4 on Oracle Linux 7 (Doc ID 1951613.1)

According to that document, patch 19404309 is applied to the unzipped GI installation media, not to the installed $GI_HOME, for example:

cp grid/cvu_prereq.xml $ORA_SHIPS/grid/stage/cvu




Oracle Grid Infrastructure - Installation Notes

Patch 19404309

Note: It is presumed that the user has already reviewed the Oracle Grid Infrastructure Installation Guide and associated Release Notes; instructions and/or recommendations from those documents will not be repeated here.



After downloading the Oracle Grid Infrastructure software, and before attempting any installation, download Patch 19404309 from , and apply the patch using the instructions in the patch README.



Patch 18370031

Download Patch 18370031 from . Then, start an interactive Oracle Grid Infrastructure installation through the Oracle Universal Installer (OUI), but do not execute root.sh on any node until after the application of Patch 18370031 . When the OUI prompts the user to execute the root.sh scripts*, Patch 18370031 should be applied by following the instructions in Section 2.3, Case 5 - Patching a Software Only GI Home Installation or Before the GI Home Is Configured - of the patch README. Note: The README should be reviewed in full, as it contains other requirements (e.g. upgrading OPatch, etc.).

* If executing a software-only installation, the patch should be applied after the installation concludes, but before any configuration is attempted.

Once Patch 18370031 has been applied, proceed with the remainder of the installation (or configuration).

============================

============================

Oracle Database/RAC - Installation Notes

Note: As the title suggests, this section applies both to installations of Oracle Database and Oracle Real Application Clusters (RAC).



Patch 19404309

Note: It is presumed that the user has already reviewed the Oracle Database, Oracle RAC Installation Guides and associated Release Notes; instructions and/or recommendations from those documents will not be repeated here.



After downloading the Oracle Database/RAC software, and before attempting any installation, download Patch 19404309 from , and apply the patch using the instructions in the patch README.



Patch 19692824

During installation of Oracle Database or Oracle RAC on OL7, the following linking error may be encountered:

Error in invoking target 'agent nmhs' of makefile '<ORACLE_HOME>/sysman/lib/ins_emagent.mk'. See '<installation log>' for details.

If this error is encountered, the user should select Continue . Then, after the installation has completed, the user must download Patch 19692824 from and apply it per the instructions included in the patch README.

3. Run the root scripts
[root@server01 Patch]# /opt/app/oraInventory/orainstRoot.sh
Changing permissions of /opt/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /opt/app/oraInventory to oinstall.
The execution of the script is complete.


[root@server02 Patch]#
[root@server02 Patch]# /opt/app/oraInventory/orainstRoot.sh
Changing permissions of /opt/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /opt/app/oraInventory to oinstall.
The execution of the script is complete.


[root@server01 Patch]# /opt/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /opt/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to oracle-ohasd.service
CRS-2672: Attempting to start 'ora.mdnsd' on 'server01'
CRS-2676: Start of 'ora.mdnsd' on 'server01' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'server01'
CRS-2676: Start of 'ora.gpnpd' on 'server01' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'server01'
CRS-2672: Attempting to start 'ora.gipcd' on 'server01'
CRS-2676: Start of 'ora.cssdmonitor' on 'server01' succeeded
CRS-2676: Start of 'ora.gipcd' on 'server01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'server01'
CRS-2672: Attempting to start 'ora.diskmon' on 'server01'
CRS-2676: Start of 'ora.diskmon' on 'server01' succeeded
CRS-2676: Start of 'ora.cssd' on 'server01' succeeded

ASM created and started successfully.

Disk Group OcrVoting created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 8e678582c6754f61bff84e58c6a31c08.
Successful addition of voting disk 16a25e5d73f44fcbbfd1cb25d48fa77c.
Successful addition of voting disk a12d7d8513c84f2fbf320185abea2263.
Successfully replaced voting disk group with +OcrVoting.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 8e678582c6754f61bff84e58c6a31c08 (/dev/asm-diskb) [OCRVOTING]
2. ONLINE 16a25e5d73f44fcbbfd1cb25d48fa77c (/dev/asm-diskc) [OCRVOTING]
3. ONLINE a12d7d8513c84f2fbf320185abea2263 (/dev/asm-diskd) [OCRVOTING]
Located 3 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'server01'
CRS-2676: Start of 'ora.asm' on 'server01' succeeded
CRS-2672: Attempting to start 'ora.OCRVOTING.dg' on 'server01'
CRS-2676: Start of 'ora.OCRVOTING.dg' on 'server01' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded




[root@server02 Patch]# /opt/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /opt/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to oracle-ohasd.service
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node server01, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded


4. Troubleshooting installation errors
Error when running the root.sh script:
[root@server01 soft]# /opt/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /opt/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
Failed to create keys in the OLR, rc = 127, Message:
/opt/app/11.2.0/grid/bin/clscfg.bin: error while loading shared libraries: libcap.so.1: cannot open shared object file: No such file or directory

Failed to create keys in the OLR at /opt/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 7660.
[root@server01 soft]# /opt/app/11.2.0/grid/perl/bin/perl -I/opt/app/11.2.0/grid/perl/lib -I/opt/app/11.2.0/grid/crs/install /opt/app/11.2.0/grid/crs/install/rootcrs.pl execution failed



Solution:
[root@server01 ~]# yum install compat-libcap1
[root@server02 ~]# yum install compat-libcap1

[root@server01 ~]# /opt/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force
Can't locate Env.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 . /opt/app/11.2.0/grid/crs/install) at /opt/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 703.
BEGIN failed--compilation aborted at /opt/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 703.
Compilation failed in require at /opt/app/11.2.0/grid/crs/install/rootcrs.pl line 305.
BEGIN failed--compilation aborted at /opt/app/11.2.0/grid/crs/install/rootcrs.pl line 305.

[root@server01 ~]# find / -name Env.pm -print
/opt/app/11.2.0/grid/perl/lib/5.10.0/Env.pm
[root@server01 ~]# cp -p /opt/app/11.2.0/grid/perl/lib/5.10.0/Env.pm /usr/share/perl5/vendor_perl/


Or:
/opt/app/11.2.0/grid/perl/bin/perl /opt/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force
/opt/app/11.2.0/grid/perl/bin/perl /opt/app/11.2.0/grid/crs/install/roothas.pl -deconfig -force



[root@server01 soft]# /opt/app/oraInventory/orainstRoot.sh
Changing permissions of /opt/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /opt/app/oraInventory to oinstall.
The execution of the script is complete.
[root@server01 soft]# /opt/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /opt/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to inittab
ohasd failed to start
Failed to start the Clusterware. Last 20 lines of the alert log follow:
2021-09-27 13:30:38.485:
[client(73572)]CRS-2101:The OLR was formatted using version 3.

解决方法:
安装18370031补丁
https://fatdba.com/2016/01/06/oracle-gi-11-2-installation-on-rhel-7-error-ohasd-failed-to-start-failed-to-start-the-clusterware-last-20-lines-of-the-alert-log-follow-ohasd-failed-to-start-at-u01app11-2-0gridcrsinstallr/


(1)Deinstall previous GRID configuration

[root@server01 ~]# /opt/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force

(2)安装补丁

[grid@server01 OPatch]$ ./opatch napply -local /opt/app/11.2.0/grid/OPatch/18370031

(3)重新执行root.sh脚本

[root@server01 ~]# /opt/app/11.2.0/grid/root.sh
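
root.sh执行成功后,可以检查集群栈状态(验证示例):
[root@server01 ~]# /opt/app/11.2.0/grid/bin/crsctl check crs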


十七、grid安装后的工作
(1)备份root.sh脚本

Oracle recommends that you back up the root.sh script after you complete an installation. If you install other products in the same Oracle home directory, then the installer updates the contents of the existing root.sh script during the installation. If you require information contained in the original root.sh script, then you can recover it from the root.sh file copy.
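
备份示例(假定GRID_HOME为/opt/app/11.2.0/grid):
[root@server01 ~]# cp -p /opt/app/11.2.0/grid/root.sh /opt/app/11.2.0/grid/root.sh.$(date +%Y%m%d)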

(2)调整信号量

Oracle recommends that you refer to the operating system documentation for more information about setting semaphore parameters.

Calculate the minimum total semaphore requirements using the following formula:

2 * sum (process parameters of all database instances on the system) + overhead for background processes + system and other application requirements

Set semmns (total semaphores systemwide) to this total.

Set semmsl (semaphores for each set) to 250.

Set semmni (total semaphores sets) to semmns divided by semmsl, rounded up to the nearest multiple of 1024.
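
下面是查看与设置信号量参数的示例(示例值为11.2安装检查常见的最低要求,实际应按上述公式计算):
# 当前值依次为 semmsl semmns semopm semmni
[root@server01 ~]# cat /proc/sys/kernel/sem
# 在/etc/sysctl.conf中按计算结果追加,例如:
kernel.sem = 250 32000 100 128
# 使配置生效
[root@server01 ~]# sysctl -p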

十八、安装数据库软件
[oracle@rac1 database]$ export DISPLAY=192.168.1.2:0.0
[oracle@rac1 database]$ ./runInstaller
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB. Actual 6805 MB Passed
Checking swap space: must be greater than 150 MB. Actual 4095 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2017-07-17_02-17-29PM. Please wait ...
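
如果图形界面无法弹出,可在运行X server的机器(本例为192.168.1.2)上放开访问控制后重试(示例,仅建议临时使用):
xhost +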

Error in invoking target 'agent nmhs' of makefile '/opt/app/oracle/product/11.2.0/db_1/sysman/lib/ins_emagent.mk'.

解决方法(只在节点1上编辑):
[oracle@server01 ~]$ vim /opt/app/oracle/product/11.2.0/db_1/sysman/lib/ins_emagent.mk

加入 -lnnz11:

在$ORACLE_HOME/sysman/lib/ins_emagent.mk文件中找到字符串
$(MK_EMAGENT_NMECTL)
替换为
$(MK_EMAGENT_NMECTL) -lnnz11
注意:-lnnz11和$(MK_EMAGENT_NMECTL)之间有空格。
修改保存后,在安装界面点击“重试”按钮即可。
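
也可以用sed直接完成上述替换(示例,修改前先备份):
[oracle@server01 ~]$ cd $ORACLE_HOME/sysman/lib
[oracle@server01 lib]$ cp ins_emagent.mk ins_emagent.mk.bak
[oracle@server01 lib]$ sed -i 's/\$(MK_EMAGENT_NMECTL)/$(MK_EMAGENT_NMECTL) -lnnz11/g' ins_emagent.mk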

[root@server01 ~]# /opt/app/oracle/product/11.2.0/db_1/root.sh 
Performing root user operation for Oracle 11g

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /opt/app/oracle/product/11.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.




[root@server02 Patch]# /opt/app/oracle/product/11.2.0/db_1/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /opt/app/oracle/product/11.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.

十九、创建ASM磁盘组
  • 创建DATA磁盘组和FRA磁盘组
[grid@rac1 ~]$ asmca

[grid@server01 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Wed Sep 29 13:43:42 2021

Copyright (c) 1982, 2013, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> select name,state,type from v$asm_diskgroup;

NAME STATE TYPE
------------------------------ ----------- ------
OCRVOTING MOUNTED NORMAL
FRA MOUNTED EXTERN
DATA MOUNTED EXTERN


[grid@server01 ~]$ crs_stat -t -v
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
ora.DATA.dg ora....up.type 0/5 0/ ONLINE ONLINE server01
ora.FRA.dg ora....up.type 0/5 0/ ONLINE ONLINE server01
ora....ER.lsnr ora....er.type 0/5 0/ ONLINE ONLINE server01
ora....N1.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE server01
ora....TING.dg ora....up.type 0/5 0/ ONLINE ONLINE server01
ora.asm ora.asm.type 0/5 0/ ONLINE ONLINE server01
ora.cvu ora.cvu.type 0/5 0/0 ONLINE ONLINE server01
ora.gsd ora.gsd.type 0/5 0/ OFFLINE OFFLINE
ora....network ora....rk.type 0/5 0/ ONLINE ONLINE server01
ora.oc4j ora.oc4j.type 0/1 0/2 ONLINE ONLINE server01
ora.ons ora.ons.type 0/3 0/ ONLINE ONLINE server01
ora.scan1.vip ora....ip.type 0/0 0/0 ONLINE ONLINE server01
ora....SM1.asm application 0/5 0/0 ONLINE ONLINE server01
ora....01.lsnr application 0/5 0/0 ONLINE ONLINE server01
ora....r01.gsd application 0/5 0/0 OFFLINE OFFLINE
ora....r01.ons application 0/3 0/0 ONLINE ONLINE server01
ora....r01.vip ora....t1.type 0/0 0/0 ONLINE ONLINE server01
ora....SM2.asm application 0/5 0/0 ONLINE ONLINE server02
ora....02.lsnr application 0/5 0/0 ONLINE ONLINE server02
ora....r02.gsd application 0/5 0/0 OFFLINE OFFLINE
ora....r02.ons application 0/3 0/0 ONLINE ONLINE server02
ora....r02.vip ora....t1.type 0/0 0/0 ONLINE ONLINE server02
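
crs_stat在11.2中已标记为过时,也可以改用crsctl查看资源状态(示例):
[grid@server01 ~]$ crsctl stat res -t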


二十、安装数据库
[oracle@rac1 ~]$ dbca

  • 安装进度到70%时会卡很久,耐心等待即可。

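dbca建库完成后,可以用srvctl确认数据库在两个节点上的运行状态(示例,数据库名按规划为mydb):
[oracle@rac1 ~]$ srvctl status database -d mydb
[oracle@rac1 ~]$ srvctl config database -d mydb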

二十一、卸载RAC
  • 卸载数据库
(1) 卸载
[root@server01 ~]# su - oracle
[oracle@rac1 ~]$ cd $ORACLE_HOME/deinstall

[oracle@rac1 deinstall]$ ./deinstall
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /opt/app/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############


######################### CHECK OPERATION START #########################
## [START] Install check configuration ##


Checking for existence of the Oracle home location /opt/app/oracle/product/11.2.0/db_1
Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database
Oracle Base selected for deinstall is: /opt/app/oracle
Checking for existence of central inventory location /opt/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /opt/app/11.2.0/grid
The following nodes are part of this cluster: rac1,rac2
Checking for sufficient temp space availability on node(s) : 'rac1,rac2'

## [END] Install check configuration ##


Network Configuration check config START

Network de-configuration trace file location: /opt/app/oraInventory/logs/netdc_check2017-07-17_04-39-06-PM.log

Network Configuration check config END

Database Check Configuration START

Database de-configuration trace file location: /opt/app/oraInventory/logs/databasedc_check2017-07-17_04-39-09-PM.log

Use comma as separator when specifying list of values as input

Specify the list of database names that are configured in this Oracle home [mydb]:

###### For Database 'mydb' ######

Specify the type of this database (1.Single Instance Database|2.Oracle Restart Enabled Database|3.RAC Database|4.RAC One Node Database) [3]:
Specify the list of nodes on which this database has instances [rac1, rac2]:
Specify the list of instance names [mydb1, mydb2]:
Specify the local instance name on node rac1 [mydb1]:
Specify the diagnostic destination location of the database [/opt/app/oracle/diag/rdbms/mydb]:
Specify the storage type used by the Database ASM|FS []: ASM


Database Check Configuration END

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location: /opt/app/oraInventory/logs/emcadc_check2017-07-17_04-41-00-PM.log

Checking configuration for database mydb
Enterprise Manager Configuration Assistant END
Oracle Configuration Manager check START
OCM check log file location : /opt/app/oraInventory/logs//ocm_check7932.log
Oracle Configuration Manager check END

######################### CHECK OPERATION END #########################


####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /opt/app/11.2.0/grid
The cluster node(s) on which the Oracle home deinstallation will be performed are:rac1,rac2
Oracle Home selected for deinstall is: /opt/app/oracle/product/11.2.0/db_1
Inventory Location where the Oracle home registered is: /opt/app/oraInventory
The following databases were selected for de-configuration : mydb
Database unique name : mydb
Storage used : ASM
No Enterprise Manager configuration to be updated for any database(s)
No Enterprise Manager ASM targets to update
No Enterprise Manager listener targets to migrate
Checking the config status for CCR
rac1 : Oracle Home exists with CCR directory, but CCR is not configured
rac2 : Oracle Home exists with CCR directory, but CCR is not configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/opt/app/oraInventory/logs/deinstall_deconfig2017-07-17_04-38-50-PM.out'
Any error messages from this session will be written to: '/opt/app/oraInventory/logs/deinstall_deconfig2017-07-17_04-38-50-PM.err'

######################## CLEAN OPERATION START ########################

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location: /opt/app/oraInventory/logs/emcadc_clean2017-07-17_04-41-00-PM.log

Updating Enterprise Manager ASM targets (if any)
Updating Enterprise Manager listener targets (if any)
Enterprise Manager Configuration Assistant END
Database de-configuration trace file location: /opt/app/oraInventory/logs/databasedc_clean2017-07-17_04-42-20-PM.log
Database Clean Configuration START mydb
This operation may take few minutes.
Database Clean Configuration END mydb

Network Configuration clean config START

Network de-configuration trace file location: /opt/app/oraInventory/logs/netdc_clean2017-07-17_04-43-09-PM.log

De-configuring Listener configuration file on all nodes...
Listener configuration file de-configured successfully.

De-configuring Naming Methods configuration file on all nodes...
Naming Methods configuration file de-configured successfully.

De-configuring Local Net Service Names configuration file on all nodes...
Local Net Service Names configuration file de-configured successfully.

De-configuring Directory Usage configuration file on all nodes...
Directory Usage configuration file de-configured successfully.

De-configuring backup files on all nodes...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

Oracle Configuration Manager clean START
OCM clean log file location : /opt/app/oraInventory/logs//ocm_clean7932.log
Oracle Configuration Manager clean END
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/opt/app/oracle/product/11.2.0/db_1' from the central inventory on the local node : Done

Delete directory '/opt/app/oracle/product/11.2.0/db_1' on the local node : Done

The Oracle Base directory '/opt/app/oracle' will not be removed on local node. The directory is not empty.

Detach Oracle home '/opt/app/oracle/product/11.2.0/db_1' from the central inventory on the remote nodes 'rac2' : Done

Delete directory '/opt/app/oracle/product/11.2.0/db_1' on the remote nodes 'rac2' : Done

The Oracle Base directory '/opt/app/oracle' will not be removed on node 'rac2'. The directory is not empty.

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END


## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2017-07-17_04-38-22PM' on node 'rac1'
Clean install operation removing temporary directory '/tmp/deinstall2017-07-17_04-38-22PM' on node 'rac2'

## [END] Oracle install clean ##


######################### CLEAN OPERATION END #########################


####################### CLEAN OPERATION SUMMARY #######################
Successfully de-configured the following database instances : mydb
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
Successfully detached Oracle home '/opt/app/oracle/product/11.2.0/db_1' from the central inventory on the local node.
Successfully deleted directory '/opt/app/oracle/product/11.2.0/db_1' on the local node.
Successfully detached Oracle home '/opt/app/oracle/product/11.2.0/db_1' from the central inventory on the remote nodes 'rac2'.
Successfully deleted directory '/opt/app/oracle/product/11.2.0/db_1' on the remote nodes 'rac2'.
Oracle Universal Installer cleanup was successful.

Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################


############# ORACLE DEINSTALL & DECONFIG TOOL END #############


(2) 描述
此操作会卸载Oracle数据库软件及Oracle数据库,不会删除磁盘组信息。
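
卸载后可以确认ASM磁盘组仍然存在(验证示例):
[grid@server01 ~]$ asmcmd lsdg
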
  • 卸载grid
(1) 节点1执行deinstall脚本
[root@server01 ~]# su - grid
[grid@rac1 ~]$ cd $ORACLE_HOME/deinstall
[grid@rac1 deinstall]$ ./deinstall
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2017-07-17_10-03-25AM/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############


######################### CHECK OPERATION START #########################
## [START] Install check configuration ##


Checking for existence of the Oracle home location /opt/app/11.2.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /opt/app/grid
Checking for existence of central inventory location /opt/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /opt/app/11.2.0/grid
The following nodes are part of this cluster: rac1,rac2
Checking for sufficient temp space availability on node(s) : 'rac1,rac2'

## [END] Install check configuration ##

Traces log file: /tmp/deinstall2017-07-17_10-03-25AM/logs//crsdc.log

Network Configuration check config START

Network de-configuration trace file location: /tmp/deinstall2017-07-17_10-03-25AM/logs/netdc_check2017-07-17_10-03-51-AM.log

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER]:

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /tmp/deinstall2017-07-17_10-03-25AM/logs/asmcadc_check2017-07-17_10-05-11-AM.log

Automatic Storage Management (ASM) instance is detected in this Oracle home /opt/app/11.2.0/grid.
ASM Diagnostic Destination : /opt/app/grid
ASM Diskgroups : +OCRVOTING
ASM diskstring : /dev/asm*
Diskgroups will be dropped
De-configuring ASM will drop all the diskgroups and it's contents at cleanup time. This will affect all of the databases and ACFS that use this ASM instance(s).
If you want to retain the existing diskgroups or if any of the information detected is incorrect, you can modify by entering 'y'. Do you want to modify above information (y|n) [n]: y
Specify the ASM Diagnostic Destination [/opt/app/grid]:
Specify the diskstring [/dev/asm*]:
Specify the diskgroups that are managed by this ASM instance [+OCRVOTING]:

De-configuring ASM will drop the diskgroups at cleanup time. Do you want deconfig tool to drop the diskgroups y|n [y]: y


######################### CHECK OPERATION END #########################


####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /opt/app/11.2.0/grid
The cluster node(s) on which the Oracle home deinstallation will be performed are:rac1,rac2
Oracle Home selected for deinstall is: /opt/app/11.2.0/grid
Inventory Location where the Oracle home registered is: /opt/app/oraInventory
Following RAC listener(s) will be de-configured: LISTENER
ASM instance will be de-configured from this Oracle home
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2017-07-17_10-03-25AM/logs/deinstall_deconfig2017-07-17_10-03-40-AM.out'
Any error messages from this session will be written to: '/tmp/deinstall2017-07-17_10-03-25AM/logs/deinstall_deconfig2017-07-17_10-03-40-AM.err'

######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /tmp/deinstall2017-07-17_10-03-25AM/logs/asmcadc_clean2017-07-17_10-08-54-AM.log
ASM Clean Configuration START
ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /tmp/deinstall2017-07-17_10-03-25AM/logs/netdc_clean2017-07-17_10-10-36-AM.log

De-configuring RAC listener(s): LISTENER

De-configuring listener: LISTENER
Stopping listener: LISTENER
Listener stopped successfully.
Unregistering listener: LISTENER
Listener unregistered successfully.
Listener de-configured successfully.

De-configuring Naming Methods configuration file on all nodes...
Naming Methods configuration file de-configured successfully.

De-configuring Local Net Service Names configuration file on all nodes...
Local Net Service Names configuration file de-configured successfully.

De-configuring Directory Usage configuration file on all nodes...
Directory Usage configuration file de-configured successfully.

De-configuring backup files on all nodes...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END



---------------------------------------->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node "rac2".

/tmp/deinstall2017-07-17_10-03-25AM/perl/bin/perl -I/tmp/deinstall2017-07-17_10-03-25AM/perl/lib -I/tmp/deinstall2017-07-17_10-03-25AM/crs/install /tmp/deinstall2017-07-17_10-03-25AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2017-07-17_10-03-25AM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Run the following command as the root user or the administrator on node "rac1".

/tmp/deinstall2017-07-17_10-03-25AM/perl/bin/perl -I/tmp/deinstall2017-07-17_10-03-25AM/perl/lib -I/tmp/deinstall2017-07-17_10-03-25AM/crs/install /tmp/deinstall2017-07-17_10-03-25AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2017-07-17_10-03-25AM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode

Press Enter after you finish running the above commands

<----------------------------------------

(2) 节点2执行上面脚本
/tmp/deinstall2017-07-17_10-03-25AM/perl/bin/perl -I/tmp/deinstall2017-07-17_10-03-25AM/perl/lib -I/tmp/deinstall2017-07-17_10-03-25AM/crs/install /tmp/deinstall2017-07-17_10-03-25AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2017-07-17_10-03-25AM/response/deinstall_Ora11g_gridinfrahome1.rsp"


[root@server02 ~]# /tmp/deinstall2017-07-17_10-03-25AM/perl/bin/perl -I/tmp/deinstall2017-07-17_10-03-25AM/perl/lib -I/tmp/deinstall2017-07-17_10-03-25AM/crs/install /tmp/deinstall2017-07-17_10-03-25AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2017-07-17_10-03-25AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration parameter file: /tmp/deinstall2017-07-17_10-03-25AM/response/deinstall_Ora11g_gridinfrahome1.rsp
Network exists: 1/192.168.1.0/255.255.255.0/eth0, type static
VIP exists: /rac1-vip/192.168.1.23/192.168.1.0/255.255.255.0/eth0, hosting node rac1
VIP exists: /rac2-vip/192.168.1.24/192.168.1.0/255.255.255.0/eth0, hosting node rac2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac2' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac2'
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'
CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac2'
CRS-2673: Attempting to stop 'ora.crf' on 'rac2'
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2677: Stop of 'ora.crf' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node


(3) 节点1运行以下脚本
/tmp/deinstall2017-07-17_10-03-25AM/perl/bin/perl -I/tmp/deinstall2017-07-17_10-03-25AM/perl/lib -I/tmp/deinstall2017-07-17_10-03-25AM/crs/install /tmp/deinstall2017-07-17_10-03-25AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2017-07-17_10-03-25AM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode


[root@server01 ~]# /tmp/deinstall2017-07-17_10-03-25AM/perl/bin/perl -I/tmp/deinstall2017-07-17_10-03-25AM/perl/lib -I/tmp/deinstall2017-07-17_10-03-25AM/crs/install /tmp/deinstall2017-07-17_10-03-25AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2017-07-17_10-03-25AM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode
Using configuration parameter file: /tmp/deinstall2017-07-17_10-03-25AM/response/deinstall_Ora11g_gridinfrahome1.rsp
Network exists: 1/192.168.1.0/255.255.255.0/eth0, type static
VIP exists: /rac1-vip/192.168.1.23/192.168.1.0/255.255.255.0/eth0, hosting node rac1
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.oc4j' on 'rac1'
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'rac1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac1' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-4611: Successful deletion of voting disk +OcrVoting.
ASM de-configuration trace file location: /tmp/deinstall2017-07-17_10-03-25AM/logs/asmcadc_clean2017-07-17_10-29-46-AM.log
ASM Clean Configuration START
ASM Clean Configuration END

ASM with SID +ASM1 deleted successfully. Check /tmp/deinstall2017-07-17_10-03-25AM/logs/asmcadc_clean2017-07-17_10-29-46-AM.log for details.

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2673: Attempting to stop 'ora.crf' on 'rac1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.crf' on 'rac1' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node

(4) 返回步骤(1)的会话,按Enter继续
Press Enter after you finish running the above commands

<----------------------------------------

Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/opt/app/11.2.0/grid' from the central inventory on the local node : Done

Delete directory '/opt/app/11.2.0/grid' on the local node : Done

Delete directory '/opt/app/oraInventory' on the local node : Done

Failed to delete the directory '/opt/app/grid'. The directory is in use.
Delete directory '/opt/app/grid' on the local node : Failed <<<<

Detach Oracle home '/opt/app/11.2.0/grid' from the central inventory on the remote nodes 'rac2' : Done

Delete directory '/opt/app/11.2.0/grid' on the remote nodes 'rac2' : Done

Delete directory '/opt/app/oraInventory' on the remote nodes 'rac2' : Failed <<<<

The directory '/opt/app/oraInventory' could not be deleted on the nodes 'rac2'.
Delete directory '/opt/app/grid' on the remote nodes 'rac2' : Failed <<<<

The directory '/opt/app/grid' could not be deleted on the nodes 'rac2'.
Oracle Universal Installer cleanup completed with errors.

Oracle Universal Installer clean END


## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2017-07-17_10-03-25AM' on node 'rac1'
Clean install operation removing temporary directory '/tmp/deinstall2017-07-17_10-03-25AM' on node 'rac2'

## [END] Oracle install clean ##


######################### CLEAN OPERATION END #########################


####################### CLEAN OPERATION SUMMARY #######################
ASM instance was de-configured successfully from the Oracle home
Following RAC listener(s) were de-configured successfully: LISTENER
Oracle Clusterware is stopped and successfully de-configured on node "rac1"
Oracle Clusterware is stopped and successfully de-configured on node "rac2"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/opt/app/11.2.0/grid' from the central inventory on the local node.
Successfully deleted directory '/opt/app/11.2.0/grid' on the local node.
Successfully deleted directory '/opt/app/oraInventory' on the local node.
Failed to delete directory '/opt/app/grid' on the local node.
Successfully detached Oracle home '/opt/app/11.2.0/grid' from the central inventory on the remote nodes 'rac2'.
Successfully deleted directory '/opt/app/11.2.0/grid' on the remote nodes 'rac2'.
Failed to delete directory '/opt/app/oraInventory' on the remote nodes 'rac2'.
Failed to delete directory '/opt/app/grid' on the remote nodes 'rac2'.
Oracle Universal Installer cleanup completed with errors.


Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'rac1' at the end of the session.

Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'rac1,rac2' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################


############# ORACLE DEINSTALL & DECONFIG TOOL END #############

(5) 手工删除
被进程占用而未能自动删除的目录,卸载完成后可以手工删除:
[root@server01 ~]# rm -rf /opt/app/grid/*
[root@server02 ~]# rm -rf /opt/app/grid/*

[root@server01 ~]# rm -rf /etc/oraInst.loc
[root@server02 ~]# rm -rf /etc/oraInst.loc
  • 手工卸载11g RAC
思路来自于经典的《How to Proceed From a Failed 10g or 11.1 Oracle Clusterware (CRS) Installation (Doc ID 239998.1)》,并补充了一些11.2特有的内容。

卸载11.2 RAC的官方方法:
How to Proceed from Failed 11gR2 Grid Infrastructure (CRS) Installation (Doc ID 942166.1)。本次没有采用该方法,其主要是执行deinstall脚本,但在我的环境中执行时间很久。(注意:下面脚本中的/u01路径需根据实际环境调整,本文环境为/opt/app。)

最好先执行这个:

crsctl stop crs -f

cd /etc/oracle/

rm -rf scls_scr oprocd lastgasp o* setasmgid

vi /etc/inittab

去掉ohas的那一行(通常是最后一行)
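
该行通常类似于(示例):
h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null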

rm -f /etc/init.d/init.cssd

rm -f /etc/init.d/init.crs

rm -f /etc/init.d/init.crsd

rm -f /etc/init.d/init.evmd

rm -f /etc/rc2.d/K96init.crs

rm -f /etc/rc2.d/S96init.crs

rm -f /etc/rc3.d/K96init.crs

rm -f /etc/rc3.d/S96init.crs

rm -f /etc/rc5.d/K96init.crs

rm -f /etc/rc5.d/S96init.crs

rm -Rf /etc/oracle/scls_scr

rm -f /etc/inittab.crs

cp /etc/inittab.orig /etc/inittab

rm -rf /etc/init.d/ohasd

rm -rf /etc/init.d/init.ohasd

rm -rf /etc/oratab

rm -rf /etc/oraInst.loc

rm -rf /var/tmp/.oracle

rm -rf /tmp/.oracle

rm -rf /u01/app

cd /tmp

rm -rf CVU_11.2.0.3.0_grid logs Logs OraInstall*

mkdir -p /u01/app/11.2.0.3/grid

mkdir -p /u01/app/grid

mkdir -p /u01/app/oracle

chown -R grid:oinstall /u01/app/11.2.0.3/grid

chown -R grid:oinstall /u01/app/grid

chown -R grid:oinstall /u01

mkdir -p /u01/app/oracle/product/11.2.0.3/dbhome_1

chown -R oracle:oinstall /u01/app/oracle

chown -R oracle:oinstall /u01/app/oracle/product/11.2.0.3/dbhome_1

检查是否还有 d.bin 进程:
ps -ef|grep d.bin
如果还有,直接kill掉即可。
系统不会因此重启,因为相关的启动脚本和文件都已经删除。
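
例如(示例,强制清理残留进程):
ps -ef | grep d.bin | grep -v grep | awk '{print $2}' | xargs -r kill -9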

检查 ifconfig|grep 169.254,如果有类似下面的输出:

eth1:1 Link encap:Ethernet HWaddr 08:00:27:89:81:66

inet addr:169.254.159.3 Bcast:169.254.255.255 Mask:255.255.0.0

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

那么需要重启一下eth1网卡:

[root@dm01db01 cell]# ifdown eth1

[root@dm01db01 cell]# ifup eth1

[root@dm01db01 cell]# ifconfig|grep 169.254

  • Title: Centos7.6安装Oracle11g RAC
  • Author: 𝓓𝓸𝓷
  • Created at : 2024-07-04 08:50:49
  • Updated at : 2025-03-08 10:15:55
  • Link: https://www.zhangdong.me/oracle11g-rac-installation.html
  • License: This work is licensed under CC BY-NC-SA 4.0.