Rac Setup

The document describes the steps to install an Oracle RAC database across two nodes using shared storage. It involves setting up two Linux VMs, configuring an Openfiler SAN, installing Grid Infrastructure and ASM on both nodes, and using dbca to create a database spanning both nodes on the shared storage.

Uploaded by Rama
Example MAC addresses of the two node VMs:
00:0C:29:38:2B:99
00:0C:29:38:2B:A3

Q) Error PRVF-7617: node connectivity check failed between the nodes.
A) Disable the firewall on both nodes:

service iptables stop
chkconfig iptables off

Grid/database version used here: 11.2.0.3

Installation of RAC:
====================
1. We assume a 2-node RAC (2 VMs using VMware) with 1 common storage server
(Openfiler), which exposes the shared storage over iSCSI/NFS (NAS & SAN).

Software for the 2-node RAC:

-> VMware
-> Linux OS - 2 VMs (the cluster nodes)
     RAM       - min 4 GB each (the database itself needs at least 1 GB)
     networks  - 2 NICs (1 public, 1 private)
     hard disk - 50 GB

-> Openfiler - SAN - shared storage - 1 VM
     RAM       - 1 GB
     network   - 1 NIC (private)
     hard disks - 2: one 10 GB disk to install Openfiler on, and one 50-100 GB
     disk for the shared storage used by the RAC (OCR, voting disks, database)

-> Grid Infrastructure software 11gR2 (OCR & voting disks)

-> Oracle database software 11gR2 (CRD)

Example MAC addresses of the two node VMs:
00:0C:29:B9:F0:47
00:0C:29:B9:F0:51

2. Set hostnames & IP addresses on the cluster nodes (eth0 - public, eth1 - private).

hostname - check/set the hostname
ifconfig - check/set the IP configuration

NIC1 - eth0 - public  : node1 eth0 ---- node2 eth0 (public network)
NIC2 - eth1 - private : node1 eth1 ---- node2 eth1 (private interconnect)

3. Edit /etc/hosts with the node details.

From 11gR2, name resolution can come from /etc/hosts or a DNS server
(the DNS servers are listed in /etc/resolv.conf).

/etc/hosts:

#Public Ip's
192.168.1.101 linux1 linux1.oracle.com
192.168.1.102 linux2 linux2.oracle.com

#Private Ip's
192.168.2.101 linux1-priv linux1-priv.oracle.com
192.168.2.102 linux2-priv linux2-priv.oracle.com

#Virtual Ip's (VIP)


192.168.1.201 linux1-vip linux1-vip.oracle.com
192.168.1.202 linux2-vip linux2-vip.oracle.com

#SCAN Ip (three addresses need DNS round-robin; with /etc/hosts only one is used)
192.168.1.151 rac-scan rac-scan.oracle.com
192.168.1.152 rac-scan rac-scan.oracle.com
192.168.1.153 rac-scan rac-scan.oracle.com

#SAN
192.168.2.150 openfiler openfiler.oracle.com
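Before moving on, it helps to confirm that every cluster name above is actually present in the hosts file. A minimal sketch, assuming the entries are copied into a scratch file (/tmp/rac_hosts is a hypothetical path used for illustration; on the real nodes you would grep /etc/hosts itself):

```shell
# Write the cluster entries to a scratch file standing in for /etc/hosts.
cat > /tmp/rac_hosts <<'EOF'
192.168.1.101 linux1 linux1.oracle.com
192.168.1.102 linux2 linux2.oracle.com
192.168.2.101 linux1-priv linux1-priv.oracle.com
192.168.2.102 linux2-priv linux2-priv.oracle.com
192.168.1.201 linux1-vip linux1-vip.oracle.com
192.168.1.202 linux2-vip linux2-vip.oracle.com
192.168.1.151 rac-scan rac-scan.oracle.com
EOF

# Check that every name the installer will ask for resolves from the file.
for name in linux1 linux2 linux1-priv linux2-priv linux1-vip linux2-vip rac-scan; do
  if grep -qw "$name" /tmp/rac_hosts; then
    echo "$name ok"
  else
    echo "$name MISSING"
  fi
done
```

Any line printed as MISSING points at a hosts entry that must be added before running cluvfy.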

4. Install the required RPMs on both Linux servers (RHEL 6, Oracle 11gR2).

Individual packages can be installed with: #rpm -ivh packagename.rpm

yum install binutils* compat-libstdc* elfutils* gcc* glibc* ksh* libaio* libgcc* \
libstdc* make* pdksh* sysstat* *libcap* unixODBC* rsh* firefox* iscsi* -y

5. Edit the kernel parameters in /etc/sysctl.conf
(rule of thumb: about RAM/2 can be given to database shared memory):

fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 25097152
kernel.shmmax = 8536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

#sysctl -p   (apply the new values without a reboot)
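The shared-memory values above follow from the node's RAM. A hedged sketch of the arithmetic, assuming an 8 GB node and the usual 4 KB page size (shmmax about half of RAM, shmall = shmmax divided by the page size):

```shell
# Assumed node size for illustration: 8 GB of RAM.
RAM_BYTES=$((8 * 1024 * 1024 * 1024))

# On the real node, read the page size with: getconf PAGE_SIZE
PAGE_SIZE=4096

# shmmax: largest single shared-memory segment, roughly half of RAM.
SHMMAX=$((RAM_BYTES / 2))

# shmall: total shared memory, expressed in pages rather than bytes.
SHMALL=$((SHMMAX / PAGE_SIZE))

echo "kernel.shmmax = $SHMMAX"
echo "kernel.shmall = $SHMALL"
```

Plug the printed values into /etc/sysctl.conf for the actual RAM of your nodes.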

6. Create the oracle & grid users and the directory structure (as root, on both nodes).

GI software = owned by the grid user
RDBMS       = owned by the oracle user

#root
mkdir -p /u01/app/oracle/product/11.2.0/db_home
mkdir -p /u01/app/oraInventory
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/grid
groupadd -g 501 oinstall
groupadd -g 502 dba
groupadd -g 503 oper
groupadd -g 504 asmadmin
groupadd -g 506 asmdba
groupadd -g 507 asmoper
useradd -c "Oracle Grid Infrastructure Owner" -g oinstall -G asmadmin,asmdba,asmoper,dba grid
useradd -c "Oracle RDBMS Owner" -g oinstall -G dba,oper,asmdba oracle
chown -R grid:oinstall /u01/app/oraInventory
chmod -R 775 /u01/app/oraInventory
chown -R grid:oinstall /u01/app/11.2.0
chmod -R 775 /u01/app/11.2.0
chown -R grid:oinstall /u01/app/grid
chmod -R 775 /u01/app/grid
chown -R oracle:oinstall /u01/app/oracle/product/11.2.0/db_home
chmod -R 775 /u01/app/oracle
chmod -R 775 /u01/app/oracle/product/11.2.0/db_home
passwd grid
passwd oracle

7. Verify the ownership: the oracle directories under /u01 belong to
oracle:oinstall and the grid directories to grid:oinstall.
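The layout and modes can be rehearsed without root and without touching /u01. A minimal dry-run sketch under a throwaway directory (the scratch root is an assumption of this example, not part of the real procedure):

```shell
# Rehearse the /u01 tree under a scratch root so modes can be checked safely.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/u01/app/oracle/product/11.2.0/db_home" \
         "$ROOT/u01/app/oraInventory" \
         "$ROOT/u01/app/11.2.0/grid"

# Same mode the real steps apply: rwxrwxr-x.
chmod -R 775 "$ROOT/u01"

# Print the octal mode of one of the directories to confirm.
stat -c '%a' "$ROOT/u01/app/oraInventory"
```

On the real nodes, the same stat check (against /u01/app/oraInventory etc.) confirms the chmod/chown commands took effect.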

8. Edit the security limit files with the appropriate values.

Add the following lines to the /etc/security/limits.conf file:

grid soft nproc 2047


grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536

9. Add or edit the following line in the /etc/pam.d/login file, if it does not
already exist:

session required pam_limits.so

[root@dbhost1 ~]# more /etc/sysconfig/network

# Added NOZEROCONF as pre-requisite for 12c

NOZEROCONF=yes

NETWORKING=yes

NETWORKING_IPV6=yes

HOSTNAME=dbhost1.paramlabs.com

10. Enable SSH (secure shell) passwordless connectivity between the nodes, for
both the grid and oracle users.

$ ssh-keygen -t rsa
$ ssh-keygen -t dsa
$ cd ~/.ssh          (/home/oracle/.ssh or /home/grid/.ssh)
$ ls
id_dsa.pub  id_rsa.pub
$ cat id_dsa.pub  >  lix1
$ cat id_rsa.pub  >> lix1
$ scp lix1 linux2:~/.ssh/

On node 2: do the same, producing lix2, and copy it to node 1.

$ cat lix1 >  authorized_keys
$ cat lix2 >> authorized_keys
$ scp authorized_keys linux1:~/.ssh/

Test from each node (answer "yes" to the host-key prompt the first time):

$ ssh linux2 date
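The key-exchange steps can be rehearsed locally before touching the cluster. A sketch under an assumed throwaway directory, mirroring the lix1/authorized_keys shuffle with a single node's key:

```shell
# Throwaway directory standing in for ~/.ssh in this rehearsal.
TMP=$(mktemp -d)

# Generate a passphrase-less RSA keypair, as the per-user step does.
ssh-keygen -q -t rsa -N '' -f "$TMP/id_rsa"

# Collect the public key into a lix-style file, then into authorized_keys.
cat "$TMP/id_rsa.pub" > "$TMP/lix1"
cat "$TMP/lix1" >> "$TMP/authorized_keys"
chmod 600 "$TMP/authorized_keys"

# One key collected -> one line in authorized_keys.
wc -l < "$TMP/authorized_keys"
```

On the real nodes the same file would accumulate the keys of both users on both nodes, one line per key.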

11. Openfiler -- getting the ASM disks ready (configure Openfiler and register
the nodes).
ASM disks can be prepared using udev, oracleasm, device multipath, etc.
Regular filesystems (NTFS, VFAT, EXT3, EXT4, XFS) are not used here:
ASM works directly on raw block devices.

Configure Openfiler as below.

On each node, install the iSCSI initiator utilities:

yum install iscsi* -y

vi /etc/iscsi/iscsi.conf
line 63: discoveryaddress = 192.168.2.150   (the Openfiler server)

On the SAN box (Openfiler):

vi /etc/initiators.deny
comment out (#) the relevant line

service iscsi restart

vi /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2012-05.in.ktux91.centos:iscsi1

Discover and log in to the iSCSI targets from each node:

#iscsiadm -m discovery -t sendtargets -p 192.168.2.150
#iscsiadm -m node -p 192.168.2.150 -l

LUN layout on the shared storage:
3 - OCR & voting
2 - DATA
2 - FRA

UDEV rules:
----------
vi /etc/udev/rules.d/99-oracle-asm.rules

10 GB - 1st drive - OS - /dev/sda
50 GB - 2nd drive - db - /dev/sdb (we are using /dev/sdb for ASM)

KERNEL=="sdb*", SUBSYSTEM=="block", OWNER="grid", GROUP="asmadmin", MODE="0660"

RHEL 6:
# udevadm control --reload-rules
# start_udev

RHEL 7:
# udevadm control --reload-rules
# udevadm trigger --type=devices --action=change
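A quoting mistake in the rule file silently leaves the disks owned by root, so it is worth checking the file before reloading udev. A small sketch, written against a scratch copy of the rule (the scratch file is an assumption of this example; on the node you would grep the real 99-oracle-asm.rules):

```shell
# Scratch copy of the rule file for the check.
RULES=$(mktemp)
cat > "$RULES" <<'EOF'
KERNEL=="sdb*", SUBSYSTEM=="block", OWNER="grid", GROUP="asmadmin", MODE="0660"
EOF

# Every ASM rule line should carry the asmadmin group; count the matches.
grep -c 'GROUP="asmadmin"' "$RULES"
```

After reloading, ls -l /dev/sdb* on the node should show grid:asmadmin ownership.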

----From here the DBA job starts-------

12. Run cluvfy to check user equivalence & node reachability.
Log in as the grid user, go to the grid software location, and run:

$./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose

cluvfy verifies user equivalence (passwordless SSH) and node reachability.

13. Install the Grid software as the grid user (Clusterware + ASM) into
/u01/app/11.2.0/grid/

export LD_BIND_NOW=1
./runInstaller

After the Grid installation, create the ASM diskgroups (+DATA, +FRA) with ASMCA
on the shared devices:

/dev/sdb1
/dev/sdc1

$asmca - create ASM diskgroups (+DATA, +FRA)

14. Install the Oracle database (RDBMS) software as the oracle user.

15. dbca - create the database (CRD) on +DATA and +FRA.

Set the environment and connect:

. oraenv     (picks the SID from the entries in /etc/oratab)

$sqlplus / as sysdba

The whole setup takes about 3 to 4 hours.
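oraenv works by looking the SID up in /etc/oratab, whose lines have the form SID:ORACLE_HOME:startup-flag. A minimal sketch of that lookup against a sample oratab entry (the orcl1 SID and the scratch file are assumptions of this example):

```shell
# Scratch file standing in for /etc/oratab, with one sample entry.
ORATAB=$(mktemp)
echo 'orcl1:/u01/app/oracle/product/11.2.0/db_home:N' > "$ORATAB"

# Given a SID, pull its ORACLE_HOME out of the colon-separated fields,
# the same lookup oraenv performs.
SID=orcl1
OH=$(awk -F: -v sid="$SID" '$1 == sid {print $2}' "$ORATAB")
echo "$OH"
```

On the real nodes, dbca appends the entry for the new database to /etc/oratab, after which . oraenv can set the environment for it.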
