How To Set Up 12c R1 ACFS On ASM Flex Architecture (Demo)
1) This demo was performed on a 12.1.0.2 Flex ASM configuration on a Standard Cluster
(4 nodes) using ASM role separation:
[grid@cehaovmsp141 /]$ id
uid=1100(grid) gid=1000(oinstall)
groups=1000(oinstall),1100(asmadmin),1300(asmdba),1301(asmoper)
context=root:system_r:unconfined_t:s0-s0:c0.c1023
2) This Flex ASM configuration uses the default cardinality of 3, so ASM instances
are running on only 3 of the 4 nodes:
Node #1:
Node #2:
Node #3:
Node #4:
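The per-node output was elided above; one way to confirm which nodes host an ASM
instance, and to review the configured cardinality, is with the following standard
commands (from any node):
[grid@cehaovmsp141 ~]$ ps -ef | grep asm_pmon | grep -v grep
[grid@cehaovmsp141 ~]$ srvctl status asm
[grid@cehaovmsp141 ~]$ srvctl config asm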
4) The ASM ADVM proxy (ora.proxy_advm) needs to be running on every node in a
standard cluster, or on every Hub Node in a Flex Cluster, as shown in the
crsctl stat res -t output below:
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
ONLINE ONLINE cehaovmsp141 STABLE
ONLINE ONLINE cehaovmsp142 STABLE
ONLINE ONLINE cehaovmsp143 STABLE
ONLINE ONLINE cehaovmsp144 STABLE
ora.OCRVOTE.dg
ONLINE ONLINE cehaovmsp141 STABLE
ONLINE ONLINE cehaovmsp142 STABLE
ONLINE ONLINE cehaovmsp143 STABLE
OFFLINE OFFLINE cehaovmsp144 STABLE
ora.net1.network
ONLINE ONLINE cehaovmsp141 STABLE
ONLINE ONLINE cehaovmsp142 STABLE
ONLINE ONLINE cehaovmsp143 STABLE
ONLINE ONLINE cehaovmsp144 STABLE
ora.ons
ONLINE ONLINE cehaovmsp141 STABLE
ONLINE ONLINE cehaovmsp142 STABLE
ONLINE ONLINE cehaovmsp143 STABLE
ONLINE ONLINE cehaovmsp144 STABLE
ora.proxy_advm
ONLINE ONLINE cehaovmsp141 STABLE
ONLINE ONLINE cehaovmsp142 STABLE
ONLINE ONLINE cehaovmsp143 STABLE
ONLINE ONLINE cehaovmsp144 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE cehaovmsp143 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE cehaovmsp143 169.254.39.109,STABLE
ora.asm
1 ONLINE ONLINE cehaovmsp142 Started,STABLE
2 ONLINE ONLINE cehaovmsp141 Started,STABLE
3 ONLINE ONLINE cehaovmsp143 Started,STABLE
ora.cehaovmsp141.vip
1 ONLINE ONLINE cehaovmsp141 STABLE
ora.cehaovmsp142.vip
1 ONLINE ONLINE cehaovmsp142 STABLE
ora.cehaovmsp143.vip
1 ONLINE ONLINE cehaovmsp143 STABLE
ora.cehaovmsp144.vip
1 ONLINE ONLINE cehaovmsp144 STABLE
ora.cvu
1 ONLINE ONLINE cehaovmsp143 STABLE
ora.mgmtdb
1 ONLINE ONLINE cehaovmsp143 Open,STABLE
ora.oc4j
Note: If the ASM ADVM proxy is not available, create & configure it following the next
steps:
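A minimal sketch using the standard 12.1 srvctl options (the exact commands were not
captured in this demo; run as the grid user):
[grid@cehaovmsp141 ~]$ srvctl add asm -proxy
[grid@cehaovmsp141 ~]$ srvctl start asm -proxy
[grid@cehaovmsp141 ~]$ srvctl status asm -proxy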
5) Then a new diskgroup (+ACFSDG) needs to be created to host the ACFS filesystem;
verify its compatible attributes (compatible.advm must be set in order to create
ADVM volumes), checked here from SQL*Plus connected to one of the ASM instances:
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
NAME VALUE
-------------------- --------------------
compatible.asm 12.1.0.0.0
compatible.rdbms 12.1.0.0.0
compatible.advm 12.1.0.0.0
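The attribute listing above can be read from V$ASM_ATTRIBUTE, and the attributes can
be set when the diskgroup is created. A sketch, assuming a hypothetical disk path
(not taken from this demo):
SQL> SELECT name, value FROM v$asm_attribute
     WHERE group_number = (SELECT group_number FROM v$asm_diskgroup
                           WHERE name = 'ACFSDG')
     AND name LIKE 'compatible%';
SQL> CREATE DISKGROUP ACFSDG EXTERNAL REDUNDANCY
     DISK '/dev/mapper/acfs_disk1'  -- hypothetical disk path
     ATTRIBUTE 'compatible.asm'   = '12.1.0.0.0',
               'compatible.rdbms' = '12.1.0.0.0',
               'compatible.advm'  = '12.1.0.0.0';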
Note 1: If you manually create the diskgroup, then you will need to manually mount it on
all the other ASM instances (only the first time; afterwards CRS will automatically mount
the diskgroup on every ASM instance) as follows:
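For example, from each of the remaining ASM instances (or via ASMCMD):
SQL> ALTER DISKGROUP ACFSDG MOUNT;
or:
ASMCMD> mount ACFSDG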
Note 2: If you create the diskgroup using the ASMCA GUI, then ASMCA will
automatically mount the new diskgroup on all the ASM instances.
Note 3: The +ACFSDG diskgroup is not mounted on the cehaovmsp144 node since ASM is
running on only 3 nodes (cehaovmsp141, cehaovmsp142 & cehaovmsp143), because this
is a Flex ASM configuration with cardinality = 3.
6) Then a new ADVM volume needs to be created in the new +ACFSDG diskgroup:
ASMCMD>
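The volcreate command itself was not captured above; a typical invocation (the volume
name VOL1 and the 10G size are assumptions) would be:
ASMCMD> volcreate -G ACFSDG -s 10G VOL1
ASMCMD> volinfo -G ACFSDG VOL1
volinfo reports the generated ADVM device name (e.g. /dev/asm/vol1-<suffix>), which is
needed in the next steps.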
7) As a next step, the ACFS mount point directory needs to be created on the 4 nodes
(on every node in a standard cluster or every Hub Node in a Flex Cluster):
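For example, as root on each of the 4 nodes (using the /u01acfs mount point shown later
in this demo):
[root@cehaovmsp141 ~]# mkdir -p /u01acfs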
8) Then, a new ACFS filesystem can be created in the new ADVM volume as grid user
(Grid Infrastructure owner) as follows (from the first node):
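A sketch of the mkfs call; the numeric suffix in the ADVM device name is generated per
system (check it with volinfo), so the one shown here is hypothetical:
[grid@cehaovmsp141 ~]$ /sbin/mkfs -t acfs /dev/asm/vol1-123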
9) As a next step, the new ACFS filesystem needs to be registered as a new CRS
resource (from the first node) as root user, as follows:
[grid@cehaovmsp141 ~]$ su -
Password:
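The registration command was not captured; with the 12.1 srvctl syntax it would look
like this (same hypothetical device name as above):
[root@cehaovmsp141 ~]# srvctl add filesystem -device /dev/asm/vol1-123 -path /u01acfs -user grid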
10) Then, the ACFS filesystem needs to be started and mounted (from the first node) as
root user:
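A sketch, again with the hypothetical device name (the partially captured status output
below shows the filesystem mounted on /u01acfs):
[root@cehaovmsp141 ~]# srvctl start filesystem -device /dev/asm/vol1-123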
ONLINE ONLINE cehaovmsp144 mounted on /u01acfs,STABLE
--------------------------------------------------------------------------------
11) Verify/confirm the new ACFS filesystem is mounted on all the nodes:
Node #1:
Node #2:
Node #3:
Node #4:
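The per-node output was elided; a typical check on each node would be:
[grid@cehaovmsp141 ~]$ df -h /u01acfs
[grid@cehaovmsp141 ~]$ /sbin/acfsutil info fs /u01acfs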
12) Then set the ownership and permissions on the new ACFS filesystem (from the first
node):
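For example (the oracle:oinstall ownership and 775 mode are assumptions; use the owner
intended for this filesystem):
[root@cehaovmsp141 ~]# chown oracle:oinstall /u01acfs
[root@cehaovmsp141 ~]# chmod 775 /u01acfs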
13) Note that the new ownership and permissions were propagated to all the nodes:
Node #1:
Node #2:
Node #3:
Node #4:
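For example, on each node:
[grid@cehaovmsp142 ~]$ ls -ld /u01acfs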
14) All the new ACFS/ADVM resources are ONLINE & STABLE, and they can be listed as
follows:
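The listing itself was not captured; the new resources can be checked, for example, with:
[grid@cehaovmsp141 ~]$ srvctl status filesystem
[grid@cehaovmsp141 ~]$ crsctl stat res ora.proxy_advm -t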
15) Then CRS was stopped and restarted on all 4 nodes to confirm that the new ACFS
filesystem is automatically mounted, as follows:
Node #1:
Node #2:
Node #3:
Node #4:
Node #1:
Node #2:
Node #3:
Node #4:
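The restart itself uses the standard commands, run as root on every node:
[root@cehaovmsp141 ~]# crsctl stop crs
[root@cehaovmsp141 ~]# crsctl start crs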
16) Then, as the grid OS user, verify the ACFS filesystem was automatically mounted
again on all the nodes at OS level, as follows:
Node #1:
Node #2:
Node #3:
Node #4:
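For example, on each node:
[grid@cehaovmsp141 ~]$ mount | grep -i acfs
[grid@cehaovmsp141 ~]$ df -h /u01acfs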
Node #1:
+ASM1 is running, the ASM proxy is running, and the ACFS filesystem is mounted:
Node #2:
+ASM2 is running, the ASM proxy is running, and the ACFS filesystem is mounted:
Node #3:
+ASM3 is running, the ASM proxy is running, and the ACFS filesystem is mounted:
Node #4:
+ASM4 is NOT running, but the ASM proxy is running and the ACFS filesystem is mounted:
/u01acfs
grid 1934 1 0 Mar24 ? 00:00:00 apx_pmon_+APX4
grid 1937 1 0 Mar24 ? 00:00:00 apx_psp0_+APX4
grid 1946 1 1 Mar24 ? 00:00:19 apx_vktm_+APX4
grid 1950 1 0 Mar24 ? 00:00:00 apx_gen0_+APX4
grid 1953 1 0 Mar24 ? 00:00:00 apx_mman_+APX4
grid 1960 1 0 Mar24 ? 00:00:00 apx_diag_+APX4
grid 1962 1 0 Mar24 ? 00:00:00 apx_dia0_+APX4
grid 1965 1 0 Mar24 ? 00:00:00 apx_lreg_+APX4
grid 1969 1 0 Mar24 ? 00:00:00 apx_pxmn_+APX4
grid 1971 1 0 Mar24 ? 00:00:00 apx_rbal_+APX4
grid 1977 1 0 Mar24 ? 00:00:00 apx_vdbg_+APX4
grid 1979 1 0 Mar24 ? 00:00:00 apx_vubg_+APX4
grid 2063 1 0 Mar24 ? 00:00:00 oracle+APX4 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid 2077 1 0 Mar24 ? 00:00:00 apx_vbg0_+APX4
grid 2083 1 0 Mar24 ? 00:00:01 apx_acfs_+APX4
grid 2123 1 0 Mar24 ? 00:00:00 apx_vbg1_+APX4
grid 2125 1 0 Mar24 ? 00:00:00 apx_vbg2_+APX4
grid 2127 1 0 Mar24 ? 00:00:00 apx_vbg3_+APX4
grid 2129 1 0 Mar24 ? 00:00:00 apx_vmb0_+APX4