3PAR storage multipath (active-active) + RHEL 7.2 + Oracle 11g RAC environment build notes
IT engineer study notes
1. Topological structure
2. Network card TEAM configuration
Take team1, composed of eno51 and eno52, as an example:
nmcli con add type team con-name team1 ifname team1 config '{"runner": {"name": "activebackup"}}'
nmcli con mod team1 ipv4.addresses 192.168.1.1/24
nmcli con mod team1 ipv4.method manual
nmcli con add type team-slave con-name team1-port1 ifname eno51 master team1
nmcli con add type team-slave con-name team1-port2 ifname eno52 master team1
teamdctl team1 state (view status)
After the configuration is complete, change the startup mode of the other network
cards from the original dhcp to none.
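For example (a sketch, assuming a leftover interface named eno49; substitute the real interface names), edit its ifcfg file:
sed -i 's/^BOOTPROTO=dhcp/BOOTPROTO=none/' /etc/sysconfig/network-scripts/ifcfg-eno49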
3. Modify the hostname
hostnamectl set-hostname <hostname>
hostnamectl status (view status)
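For example, on node 1 (using the node names from the HOSTS section below):
hostnamectl set-hostname dyckrac1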
4. Configure the Yum source
There are two ways to use a local source:
(1) Mount the optical drive
mount /dev/cdrom /mnt
(2) If there is no CD-ROM drive, mount the ISO file of the system installation
disk instead. Upload the installation ISO rhel-server-7.2-x86_64-dvd.iso to the
specified directory first, then execute in that directory:
mount -o loop rhel-server-7.2-x86_64-dvd.iso /mnt
Create the configuration file /etc/yum.repos.d/local.repo with the following content:
[rhel]
name = rhel
gpgcheck = 0
enabled = 1
baseurl = file:///mnt
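To confirm the repository is usable, standard yum commands suffice:
yum clean all
yum repolist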
5. Install system packages
yum install -y elfutils-libelf-devel.x86_64
yum install -y compat-libstdc++-33.x86_64
yum install -y compat-libcap1.x86_64
yum install -y gcc.x86_64
yum install -y gcc-c++.x86_64
yum install -y glibc.i686
yum install -y glibc-devel.i686
yum install -y glibc-devel.x86_64
yum install -y ksh-*.x86_64
yum install -y libaio.i686
yum install -y libaio-devel.i686
yum install -y libaio-devel.x86_64
yum install -y smartmontools
yum install -y libgcc.i686
yum install -y libstdc++.i686
yum install -y libstdc++-devel.i686
yum install -y libstdc++-devel.x86_64
yum install -y libXi.i686
yum install -y libXi.x86_64
yum install -y libXtst.i686
yum install -y libXtst.x86_64
yum install -y sysstat.x86_64
yum install -y xorg-x11-xauth
yum install -y xterm
yum install -y ntpdate
yum install -y device-mapper-multipath
The following packages need to be installed manually:
rpm -ivh compat-libstdc++-33-3.2.3-72.el7.x86_64.rpm
rpm -e ksh-20120801-22.el7_1.2.x86_64
rpm -ivh pdksh-5.2.14-37.el5_8.1.x86_64.rpm
6. Modify /etc/hosts
Add the following entries on both nodes:
10.10.17.1 dyckrac1
10.10.17.2 dyckrac2
10.10.17.3 dyckrac1-vip
10.10.17.4 dyckrac2-vip
10.10.17.5 dyck-scan
192.168.1.1 dyckrac1-priv
192.168.1.2 dyckrac2-priv
7. Create users
/usr/sbin/groupadd -g 501 oinstall
/usr/sbin/groupadd -g 502 dba
/usr/sbin/groupadd -g 504 asmadmin
/usr/sbin/groupadd -g 506 asmdba
/usr/sbin/groupadd -g 507 asmoper
/usr/sbin/useradd -u 501 -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid -m grid
/usr/sbin/useradd -u 502 -g oinstall -G dba,asmdba -d /home/oracle -m oracle
echo oracle | passwd --stdin oracle
echo grid | passwd --stdin grid
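A quick sanity check that the users landed in the right groups:
id grid (should list oinstall, asmadmin, asmdba, asmoper)
id oracle (should list oinstall, dba, asmdba)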
8. Modify user environment variables
Use the su - oracle command to switch to the ORACLE user.
Edit the file ~/.bash_profile and add the following content (fill in the instance
name according to the actual situation, and modify each of the two nodes
accordingly):
export ORACLE_SID=orcl1
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export PATH=$PATH:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch
Use the su - grid command to switch to the GRID user.
Edit the file ~/.bash_profile and add the following content (fill in the instance
name according to the actual situation, and modify each of the two nodes
accordingly):
export ORACLE_SID=+ASM1
export ORACLE_BASE=/g01/app/grid
export ORACLE_HOME=/g01/app/11.2.0/grid
export PATH=$PATH:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch
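Reload the profile in the current shell so the variables take effect immediately:
source ~/.bash_profile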
9. Configure SSH mutual trust for the GRID and ORACLE users
Take the GRID user as an example.
First, on node 1, su from the root user to the grid user:
su - grid
mkdir ~/.ssh
cd ~/.ssh
touch authorized_keys
Then create the key pair:
ssh-keygen -t rsa
Press Enter through all the prompts, then:
cat id_rsa.pub >> authorized_keys
Then perform the same steps on node 2. Append the contents of node 2's
id_rsa.pub to the authorized_keys file on node 1, and append the contents of
node 1's id_rsa.pub to the authorized_keys file on node 2.
After that, execute the following on each of the two nodes:
ssh dyckrac1 date
ssh dyckrac2 date
ssh dyckrac1-priv date
ssh dyckrac2-priv date
The first run prompts you to enter yes. Once that completes, run these 4
commands again to test: if the date is displayed directly with no password
prompt, the mutual trust configuration succeeded.
Configure ORACLE user mutual trust in the same way.
10. Turn off the firewall
systemctl stop firewalld.service && systemctl disable firewalld.service
11. Disable SELINUX
sed -i -e "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
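The sed change only takes effect after a reboot; to also switch to permissive mode immediately in the running system:
setenforce 0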
12. Modify the limits restrictions
Modify the configuration file /etc/security/limits.conf, adding:
grid soft nproc 16384
grid hard nproc 16384
grid soft nofile 65536
grid hard nofile 65536
oracle soft nproc 16384
oracle hard nproc 16384
oracle soft nofile 65536
oracle hard nofile 65536
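The limits are read at session start; to verify after the change, check from a fresh login session:
su - grid -c 'ulimit -n' (should print 65536)
su - grid -c 'ulimit -u' (should print 16384)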
13. Modify kernel parameters
Modify the file /etc/sysctl.conf and add the following content:
fs.file-max = 6815744
kernel.shmmax = 4294967295
kernel.shmmni = 4096
kernel.shmall = 2097152
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
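Apply the parameters to the running kernel without a reboot:
sysctl -p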
14. Disable Transparent HugePages (leaving it enabled causes Oracle performance
problems)
[root@rac1 ~]# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never // currently enabled
[root@rac1 ~]# cd /etc/default/
[root@rac1 default]# cp grub grub.bak
[root@rac1 default]# cat /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet transparent_hugepage=never"
GRUB_DISABLE_RECOVERY="true"
There is a difference here depending on the firmware:
On BIOS-based machines (the system was installed in legacy BIOS mode):
grub2-mkconfig -o /boot/grub2/grub.cfg
On UEFI-based machines (the system was installed in UEFI mode):
grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
The output is as follows; reboot after execution:
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-327.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-327.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-a6225ccf9497470bb6051d6392773fc9
Found initrd image: /boot/initramfs-0-rescue-a6225ccf9497470bb6051d6392773fc9.img
done
[root@rac1 default]# reboot
Check again after restart
[root@rac1 ~]# cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never] // now disabled
15. Disable avahi-daemon
systemctl stop avahi-daemon
systemctl disable avahi-daemon
systemctl status avahi-daemon
16. Set RemoveIPC=no (the default of yes causes the ASM instance to CRASH)
# vi /etc/systemd/logind.conf
RemoveIPC=no
Restart the systemd-logind service or reboot the host:
systemctl daemon-reload
systemctl restart systemd-logind
For the reasoning behind this setting, refer to the official note: ALERT: Setting
RemoveIPC=yes on Redhat 7.2 Crashes ASM and Database Instances as Well as Any
Application That Uses a Shared Memory Segment (SHM) or Semaphores (SEM) (Doc ID 2081410.1)
17. Configure multipath disks
Use /usr/lib/udev/scsi_id -g -u -d /dev/sda to find the WWID of the system disk,
and blacklist that ID in the multipathd configuration.
Enable multipathd on both nodes and set it to start at boot:
systemctl enable multipathd
mpathconf --enable
Then modify the configuration file as follows (this configuration file is suited
to an environment where 3PAR storage uses multipath). The multipath configuration
file that comes with Linux is /etc/multipath.conf:
defaults {
    polling_interval 10
    user_friendly_names no
    find_multipaths yes
    path_checker tur
}
blacklist {
    wwid 3600508b1001c5b05f73bd869031e78f5
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z]"
}
multipaths {
    multipath {
        wwid 360002ac000000000000000030001f3c6
        alias ocrvote2
    }
    multipath {
        wwid 360002ac000000000000000020001f3c6
        alias ocrvote1
    }
    multipath {
        wwid 360002ac000000000000000040001f3c6
        alias ocrvote3
    }
    multipath {
        wwid 360002ac000000000000000050001f3c6
        alias asmdata01
    }
    multipath {
        wwid 360002ac000000000000000060001f3c6
        alias asmdata02
    }
    multipath {
        wwid 360002ac000000000000000070001f3c6
        alias asmdata03
    }
    multipath {
        wwid 360002ac000000000000000080001f3c6
        alias asmdata04
    }
    multipath {
        wwid 360002ac000000000000000090001f3c6
        alias asmdata05
    }
    multipath {
        wwid 360002ac0000000000000000a0001f3c6
        alias asmdata06
    }
    multipath {
        wwid 360002ac0000000000000000b0001f3c6
        alias asmdata07
    }
    multipath {
        wwid 360002ac0000000000000000c0001f3c6
        alias asmdata08
    }
    multipath {
        wwid 360002ac0000000000000000d0001f3c6
        alias asmdata09
    }
    multipath {
        wwid 360002ac0000000000000000e0001f3c6
        alias asmdata10
    }
    multipath {
        wwid 360002ac0000000000000000f0001f3c6
        alias asmdata11
    }
    multipath {
        wwid 360002ac000000000000000100001f3c6
        alias asmdata12
    }
}
devices {
    device {
        vendor "3PARdata"
        product "VV"
        path_grouping_policy group_by_prio
        path_selector "round-robin 0"
        path_checker tur
        features "0"
        hardware_handler "1 alua"
        prio alua
        failback immediate
        rr_weight uniform
        no_path_retry 18
        rr_min_io_rq 1
        detect_prio yes
        # fast_io_fail_tmo 10
        # dev_loss_tmo 14
    }
}
Execute service multipathd reload after modification
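To verify, each alias should show up with its path groups:
multipath -ll
ls -l /dev/mapper/ | grep -E 'ocrvote|asmdata'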
18. Bind disk permissions with UDEV
Create the file /etc/udev/rules.d/12-multipath-privs.rules with the following content:
ENV{DM_NAME}=="ocrvote*", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="asmdata*", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
19. Installation directory permissions
chown -R oracle:oinstall /u01
chown -R grid:oinstall /g01
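If /u01 and /g01 do not exist yet, create the layout first (a sketch matching the environment variables in section 8), then run the chown commands above:
mkdir -p /u01/app/oracle
mkdir -p /g01/app/grid /g01/app/11.2.0/grid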
20. Install GI software
Use the ROOT user to copy the software packages to the /tmp directory, then
decompress and install GI.
1. Unzip the grid software
unzip -q p13390677_112040_Linux-x86-64_3of7.zip
2. Patch the grid software with 19404309
unzip -q p19404309_112040_Linux-x86-64.zip
cd b19404309
export ORA_SHIPS=/tmp/soft
cp grid/cvu_prereq.xml $ORA_SHIPS/grid/stage/cvu
Install on a single node first. After the installation is complete, execute
root.sh as the root user as required.
Upgrade OPatch: replace the OPatch directory under $ORACLE_HOME with the latest OPatch.
Install patch 18370031
Unzip the patch.
As the GRID user, execute $ORACLE_HOME/OPatch/ocm/bin/emocmrsp to generate the
ocm.rsp file.
As the grid user, enter the patch package directory:
cd /tmp/soft/18370031
Then execute the following command to apply the patch:
opatch apply -oh $ORACLE_HOME -ocmrf $ORACLE_HOME/OPatch/ocm/bin/ocm.rsp
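Afterwards, the patch inventory can be checked with the standard OPatch command:
$ORACLE_HOME/OPatch/opatch lsinventory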
Copy the GRID software to node 2
On node 1, execute as the GRID user:
cd /g01
scp -r app grid@dyckrac2:/g01
After the copy is complete, execute the following scripts as the root user:
/g01/app/oraInventory/orainstRoot.sh
/g01/app/11.2.0/grid/root.sh
Clone the ORACLE_HOME directory
Execute on node 1:
cd /g01/app/oraInventory
rm -rf *
su - grid
cd $ORACLE_HOME/clone/bin
# The following is one single command; the OS group values follow the groups created in section 7
perl clone.pl -silent -debug ORACLE_BASE=/g01/app/grid ORACLE_HOME=/g01/app/11.2.0/grid ORACLE_HOME_NAME=Ora11g_gridinfrahome1 INVENTORY_LOCATION=/g01/app/oraInventory OSDBA_GROUP=asmdba OSOPER_GROUP=asmoper OSASM_GROUP=asmadmin "CLUSTER_NODES={dyckrac1,dyckrac2}" "LOCAL_NODE=dyckrac1" CRS=TRUE -ignoreSysPrereqs
Wait for the execution to succeed, then go to node 2.
Execute on node 2:
cd /g01/app/oraInventory
rm -rf *
su - grid
cd $ORACLE_HOME/clone/bin
# The following is one single command; the OS group values follow the groups created in section 7
perl clone.pl -silent -debug ORACLE_BASE=/g01/app/grid ORACLE_HOME=/g01/app/11.2.0/grid ORACLE_HOME_NAME=Ora11g_gridinfrahome1 INVENTORY_LOCATION=/g01/app/oraInventory OSDBA_GROUP=asmdba OSOPER_GROUP=asmoper OSASM_GROUP=asmadmin "CLUSTER_NODES={dyckrac1,dyckrac2}" "LOCAL_NODE=dyckrac2" CRS=TRUE -ignoreSysPrereqs
21. Configure the cluster
Use Xstart to log in to node 1 as the grid user and execute
$ORACLE_HOME/crs/config/config.sh to configure the cluster.
Then execute root.sh on node 1.
After it succeeds, execute root.sh on node 2.
Ignore the final INS-20802 error.
Check the cluster status, then proceed with the DB installation.
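A quick status check (standard clusterware commands, run as grid):
crsctl check cluster -all
crsctl stat res -t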
Install DB
unzip -q p13390677_112040_Linux-x86-64_1of7.zip
unzip -q p13390677_112040_Linux-x86-64_2of7.zip
Patch the db software with 19404309:
unzip -q p19404309_112040_Linux-x86-64.zip
cd /tmp/soft/b19404309
export ORA_SHIPS=/tmp/soft
cp database/cvu_prereq.xml $ORA_SHIPS/database/stage/cvu
Then log in through Xstart as the ORACLE user to perform the DB cluster installation.
An error will be reported during installation (the 'agent nmhs' make target in
ins_emagent.mk fails to link).
Open another window and directly modify the ins_emagent.mk file:
$ vi $ORACLE_HOME/sysman/lib/ins_emagent.mk
Find:
# ===========================
# emdctl
# ===========================
$(SYSMANBIN)emdctl:
	$(MK_EMAGENT_NMECTL)
Change it to:
# ===========================
# emdctl
# ===========================
$(SYSMANBIN)emdctl:
	$(MK_EMAGENT_NMECTL) -lnnz11
Then click Retry to continue the installation.