
How to set up 12c ACFS filesystems on ASM Flex Architecture
Author: Esteban D. Bernal


The following procedure explains in detail the steps required to set up 12c ACFS
filesystems on the ASM Flex Architecture.

1) This demo was performed on a 12.1.0.2 Flex ASM configuration on a Standard Cluster
(4 nodes) using ASM role separation:

[root@cehaovmsp141 ~]# . oraenv


ORACLE_SID = [+ASM1] ? +ASM1
The Oracle base remains unchanged with value /u01/app/grid

[root@cehaovmsp141 /]# env | grep ORA


ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/grid
ORACLE_HOME=/u01/app/12.1.0/grid_1

[grid@cehaovmsp141 /]$ id
uid=1100(grid) gid=1000(oinstall)
groups=1000(oinstall),1100(asmadmin),1300(asmdba),1301(asmoper)
context=root:system_r:unconfined_t:s0-s0:c0.c1023

[root@cehaovmsp141 /]# egrep 'SS_DBA_GRP|SS_OPER_GRP|SS_ASM_GRP' $ORACLE_HOME/rdbms/lib/config.c

#define SS_DBA_GRP "asmdba"


#define SS_OPER_GRP "asmoper"
#define SS_ASM_GRP "asmadmin"

2) This Flex ASM configuration uses the default cardinality=3, so ASM instances are
running on only 3 of the 4 nodes:

[root@cehaovmsp141 ~]# . oraenv


ORACLE_SID = [+ASM1] ? +ASM1
The Oracle base remains unchanged with value /u01/app/grid

[root@cehaovmsp141 ~]# asmcmd showclustermode


ASM cluster : Flex mode enabled

[root@cehaovmsp141 ~]# srvctl config asm -detail


ASM home: <CRS home>
Password file: +OCRVOTE/orapwASM
ASM listener: LISTENER
ASM is enabled.
ASM is individually enabled on nodes:
ASM is individually disabled on nodes:

ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM

[root@cehaovmsp141 ~]# srvctl status asm -detail


ASM is running on cehaovmsp141,cehaovmsp142,cehaovmsp143
ASM is enabled.
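For reference, the cardinality (ASM instance count) can be displayed and, if required, changed with srvctl. The following is only a hedged sketch and was not run as part of this demo:

# Show the ASM configuration, including "ASM instance count" (the Flex ASM cardinality)
srvctl config asm

# Optionally raise the cardinality so that an ASM instance runs on every Hub node
srvctl modify asm -count ALL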

3) ACFS/ADVM modules need to be enabled/loaded on every node in a Standard Cluster
or on every Hub Node in a Flex Cluster. In this example a Standard Cluster is configured,
therefore the modules need to be loaded on all 4 cluster nodes (cehaovmsp141,
cehaovmsp142, cehaovmsp143 & cehaovmsp144):

Node #1:

[root@cehaovmsp141 ~]# . oraenv


ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/grid

[root@cehaovmsp141 ~]# acfsload start


ACFS-9391: Checking for existing ADVM/ACFS installation.
ACFS-9392: Validating ADVM/ACFS installation files for operating system.
ACFS-9393: Verifying ASM Administrator setup.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
ACFS-9156: Detecting control device '/dev/ofsctl'.
ACFS-9322: completed

[root@cehaovmsp141 ~]# lsmod | grep oracle


oracleacfs 3257895 0
oracleadvm 509980 0
oracleoks 501309 2 oracleacfs,oracleadvm
oracleasm 53663 1

[root@cehaovmsp141 ~]# sh -x acfs_validate_modules.sh


+ acfsdriverstate loaded
ACFS-9203: true
+ acfsdriverstate installed
ACFS-9203: true
+ acfsdriverstate supported
ACFS-9200: Supported
+ acfsdriverstate version
ACFS-9325: Driver OS kernel version = 2.6.39-400.3.0.el5uek(x86_64).
ACFS-9326: Driver Oracle version = 140611.5.
+ acfsroot version_check
ACFS-9316: Valid ADVM/ACFS distribution media detected at:
'/u01/app/12.1.0/grid_1/usm/install/Oracle/EL5UEK/x86_64/2.6.39-400/2.6.39-400-x86_64/bin'

Node #2:

[root@cehaovmsp142 ~]# . oraenv


ORACLE_SID = [root] ? +ASM2
The Oracle base has been set to /u01/app/grid

[root@cehaovmsp142 ~]# acfsload start


ACFS-9391: Checking for existing ADVM/ACFS installation.
ACFS-9392: Validating ADVM/ACFS installation files for operating system.
ACFS-9393: Verifying ASM Administrator setup.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
ACFS-9156: Detecting control device '/dev/ofsctl'.
ACFS-9322: completed

[root@cehaovmsp142 ~]# lsmod | grep oracle


oracleacfs 3257895 0
oracleadvm 509980 0
oracleoks 501309 2 oracleacfs,oracleadvm
oracleasm 53663 1

[root@cehaovmsp142 ~]# sh -x acfs_validate_modules.sh


+ acfsdriverstate loaded
ACFS-9203: true
+ acfsdriverstate installed
ACFS-9203: true
+ acfsdriverstate supported
ACFS-9200: Supported
+ acfsdriverstate version
ACFS-9325: Driver OS kernel version = 2.6.39-400.3.0.el5uek(x86_64).
ACFS-9326: Driver Oracle version = 140611.5.
+ acfsroot version_check
ACFS-9316: Valid ADVM/ACFS distribution media detected at:
'/u01/app/12.1.0/grid_1/usm/install/Oracle/EL5UEK/x86_64/2.6.39-400/2.6.39-400-
x86_64/bin'

Node #3:

[root@cehaovmsp143 ~]# . oraenv


ORACLE_SID = [root] ? +ASM2
The Oracle base has been set to /u01/app/grid


[root@cehaovmsp143 ~]# acfsload start


ACFS-9391: Checking for existing ADVM/ACFS installation.
ACFS-9392: Validating ADVM/ACFS installation files for operating system.
ACFS-9393: Verifying ASM Administrator setup.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
ACFS-9156: Detecting control device '/dev/ofsctl'.
ACFS-9322: completed

[root@cehaovmsp143 ~]# lsmod | grep oracle


oracleacfs 3257895 0
oracleadvm 509980 0
oracleoks 501309 2 oracleacfs,oracleadvm
oracleasm 53663 1

[root@cehaovmsp143 ~]# sh -x acfs_validate_modules.sh


+ acfsdriverstate loaded
ACFS-9203: true
+ acfsdriverstate installed
ACFS-9203: true
+ acfsdriverstate supported
ACFS-9200: Supported
+ acfsdriverstate version
ACFS-9325: Driver OS kernel version = 2.6.39-400.3.0.el5uek(x86_64).
ACFS-9326: Driver Oracle version = 140611.5.
+ acfsroot version_check
ACFS-9316: Valid ADVM/ACFS distribution media detected at:
'/u01/app/12.1.0/grid_1/usm/install/Oracle/EL5UEK/x86_64/2.6.39-400/2.6.39-400-
x86_64/bin'

Node #4:

[root@cehaovmsp144 ~]# . oraenv


ORACLE_SID = [root] ? +ASM2
The Oracle base has been set to /u01/app/grid

[root@cehaovmsp144 ~]# acfsload start


ACFS-9391: Checking for existing ADVM/ACFS installation.
ACFS-9392: Validating ADVM/ACFS installation files for operating system.
ACFS-9393: Verifying ASM Administrator setup.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
ACFS-9156: Detecting control device '/dev/ofsctl'.
ACFS-9322: completed


[root@cehaovmsp144 ~]# lsmod | grep oracle


oracleacfs 3257895 0
oracleadvm 509980 0
oracleoks 501309 2 oracleacfs,oracleadvm
oracleasm 53663 1
[root@cehaovmsp144 ~]#

[root@cehaovmsp144 ~]# sh -x acfs_validate_modules.sh


+ acfsdriverstate loaded
ACFS-9203: true
+ acfsdriverstate installed
ACFS-9203: true
+ acfsdriverstate supported
ACFS-9200: Supported
+ acfsdriverstate version
ACFS-9325: Driver OS kernel version = 2.6.39-400.3.0.el5uek(x86_64).
ACFS-9326: Driver Oracle version = 140611.5.
+ acfsroot version_check
ACFS-9316: Valid ADVM/ACFS distribution media detected at:
'/u01/app/12.1.0/grid_1/usm/install/Oracle/EL5UEK/x86_64/2.6.39-400/2.6.39-400-
x86_64/bin'

4) The ASM ADVM proxy needs to be running on every node in a Standard Cluster or on
every Hub Node in a Flex Cluster.

[root@cehaovmsp141 ~]# srvctl status asm -proxy


ADVM proxy is running on node
cehaovmsp141,cehaovmsp142,cehaovmsp143,cehaovmsp144

[grid@cehaovmsp141 /]$ srvctl config asm -proxy -detail


ASM home: <CRS home>
ADVM proxy is enabled
ADVM proxy is individually enabled on nodes:
ADVM proxy is individually disabled on nodes:

[root@cehaovmsp141 ~]# crsctl stat res -t


--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE cehaovmsp141 STABLE
ONLINE ONLINE cehaovmsp142 STABLE
ONLINE ONLINE cehaovmsp143 STABLE
ONLINE ONLINE cehaovmsp144 STABLE


ora.LISTENER.lsnr
ONLINE ONLINE cehaovmsp141 STABLE
ONLINE ONLINE cehaovmsp142 STABLE
ONLINE ONLINE cehaovmsp143 STABLE
ONLINE ONLINE cehaovmsp144 STABLE
ora.OCRVOTE.dg
ONLINE ONLINE cehaovmsp141 STABLE
ONLINE ONLINE cehaovmsp142 STABLE
ONLINE ONLINE cehaovmsp143 STABLE
OFFLINE OFFLINE cehaovmsp144 STABLE
ora.net1.network
ONLINE ONLINE cehaovmsp141 STABLE
ONLINE ONLINE cehaovmsp142 STABLE
ONLINE ONLINE cehaovmsp143 STABLE
ONLINE ONLINE cehaovmsp144 STABLE
ora.ons
ONLINE ONLINE cehaovmsp141 STABLE
ONLINE ONLINE cehaovmsp142 STABLE
ONLINE ONLINE cehaovmsp143 STABLE
ONLINE ONLINE cehaovmsp144 STABLE
ora.proxy_advm
ONLINE ONLINE cehaovmsp141 STABLE
ONLINE ONLINE cehaovmsp142 STABLE
ONLINE ONLINE cehaovmsp143 STABLE
ONLINE ONLINE cehaovmsp144 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE cehaovmsp143 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE cehaovmsp143 169.254.39.109,STABL
E
ora.asm
1 ONLINE ONLINE cehaovmsp142 Started,STABLE
2 ONLINE ONLINE cehaovmsp141 Started,STABLE
3 ONLINE ONLINE cehaovmsp143 Started,STABLE
ora.cehaovmsp141.vip
1 ONLINE ONLINE cehaovmsp141 STABLE
ora.cehaovmsp142.vip
1 ONLINE ONLINE cehaovmsp142 STABLE
ora.cehaovmsp143.vip
1 ONLINE ONLINE cehaovmsp143 STABLE
ora.cehaovmsp144.vip
1 ONLINE ONLINE cehaovmsp144 STABLE
ora.cvu
1 ONLINE ONLINE cehaovmsp143 STABLE
ora.mgmtdb
1 ONLINE ONLINE cehaovmsp143 Open,STABLE
ora.oc4j

1 ONLINE ONLINE cehaovmsp143 STABLE


ora.scan1.vip
1 ONLINE ONLINE cehaovmsp143 STABLE
--------------------------------------------------------------------------------

Note: If the ASM ADVM proxy resource does not exist, create and configure it as follows:

[root@cehaovmsp141 grid_1]# srvctl config asm -proxy


PRCR-1001 : Resource ora.proxy_advm does not exist

[root@cehaovmsp141 grid_1]# srvctl add asm -proxy

[root@cehaovmsp141 grid_1]# srvctl start asm -proxy

[root@cehaovmsp141 ~]# crsctl stat res -t


--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
.
.
.
ora.proxy_advm
ONLINE ONLINE cehaovmsp141 STABLE
ONLINE ONLINE cehaovmsp142 STABLE
ONLINE ONLINE cehaovmsp143 STABLE
ONLINE ONLINE cehaovmsp144 STABLE
--------------------------------------------------------------------------------

[root@cehaovmsp141 ~]# crsctl stat res ora.proxy_advm -t


--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.proxy_advm
ONLINE ONLINE cehaovmsp141 STABLE
ONLINE ONLINE cehaovmsp142 STABLE
ONLINE ONLINE cehaovmsp143 STABLE
ONLINE ONLINE cehaovmsp144 STABLE
--------------------------------------------------------------------------------


5) Then we can proceed with the ACFS diskgroup creation as follows:

[grid@cehaovmsp141 /]$ . oraenv


ORACLE_SID = [+ASM1] ? +ASM1
The Oracle base remains unchanged with value /u01/app/grid

[grid@cehaovmsp141 ~]$ sqlplus "/as sysasm"

SQL*Plus: Release 12.1.0.2.0 Production on Tue Jun 23 17:40:06 2015

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> CREATE DISKGROUP ACFSDG EXTERNAL REDUNDANCY
     DISK 'ORCL:ACFSDG' SIZE 3067M
     ATTRIBUTE 'compatible.asm'='12.1.0.0.0',
               'compatible.rdbms'='12.1.0.0.0',
               'compatible.advm'='12.1.0.0.0',
               'au_size'='4M';

SQL> select inst_id, name, total_mb, group_number from gv$asm_diskgroup
     where name like 'ACFSDG';

   INST_ID NAME                             TOTAL_MB GROUP_NUMBER
---------- ------------------------------ ---------- ------------
         1 ACFSDG                               3067            2
         2 ACFSDG                               3067            2
         3 ACFSDG                               3067            2

SQL> set linesize 100


SQL> col name format a20
SQL> col value format a20
SQL> select name, value from v$asm_attribute where GROUP_NUMBER = 2 and
name like 'compatible%';

NAME                 VALUE
-------------------- --------------------
compatible.asm       12.1.0.0.0
compatible.rdbms     12.1.0.0.0
compatible.advm      12.1.0.0.0

[root@cehaovmsp141 ~]# crsctl stat res ora.ACFSDG.dg -t


--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ACFSDG.dg
ONLINE ONLINE cehaovmsp141 STABLE
ONLINE ONLINE cehaovmsp142 STABLE
ONLINE ONLINE cehaovmsp143 STABLE
ONLINE OFFLINE cehaovmsp144 STABLE
--------------------------------------------------------------------------------

Note 1: If you create the diskgroup manually, then you will need to mount it manually on
all the other ASM instances (only the first time; afterwards CRS will mount the diskgroup
automatically on every ASM instance) as follows:

SQL> alter diskgroup ACFSDG mount;
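Alternatively, instead of connecting to each remaining ASM instance, the diskgroup resource can be started on the other nodes with srvctl. This is only a hedged example (not part of the original demo), using the node names of this configuration:

# Mount the ACFSDG diskgroup on the remaining ASM instances via its CRS resource
srvctl start diskgroup -diskgroup ACFSDG -node cehaovmsp142,cehaovmsp143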

Note 2: If you create the diskgroup using the ASMCA GUI, then ASMCA will
automatically mount the new diskgroup on all the ASM instances.

Note 3: The +ACFSDG diskgroup is not mounted on node cehaovmsp144 since ASM is
running on only 3 nodes (cehaovmsp141, cehaovmsp142 & cehaovmsp143), because this
is a Flex ASM configuration with cardinality = 3.

[root@cehaovmsp141 ~]# srvctl config asm


ASM home: <CRS home>
Password file: +OCRVOTE/orapwASM
ASM listener: LISTENER
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM

[root@cehaovmsp141 ~]# srvctl status asm -detail


ASM is running on cehaovmsp141,cehaovmsp142,cehaovmsp143
ASM is enabled.


6) Then a new ADVM volume needs to be created in the new +ACFSDG diskgroup (note that ADVM rounds the requested size up to a multiple of the volume resize unit, which is why the 2500M request below results in a 2560 MB volume):

[grid@cehaovmsp141 ~]$ asmcmd

ASMCMD> volcreate -G ACFSDG -s 2500M EBERNALVOL

ASMCMD> volinfo --all


Diskgroup Name: ACFSDG

Volume Name: EBERNALVOL


Volume Device: /dev/asm/ebernalvol-179
State: ENABLED
Size (MB): 2560
Resize Unit (MB): 64
Redundancy: UNPROT
Stripe Columns: 8
Stripe Width (K): 1024
Usage:
Mountpath:

ASMCMD>

[root@cehaovmsp141 ~]# crsctl stat res ora.ACFSDG.EBERNALVOL.advm -t


--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ACFSDG.EBERNALVOL.advm
ONLINE OFFLINE cehaovmsp141 STABLE
ONLINE OFFLINE cehaovmsp142 STABLE
ONLINE OFFLINE cehaovmsp143 STABLE
ONLINE OFFLINE cehaovmsp144 STABLE
--------------------------------------------------------------------------------

[grid@cehaovmsp141 ~]$ ssh cehaovmsp141 ls -l /dev/asm/*


brwxrwx--- 1 root asmadmin 251, 91649 Mar 24 22:54 /dev/asm/ebernalvol-179

[grid@cehaovmsp141 ~]$ ssh cehaovmsp142 ls -l /dev/asm/*


brwxrwx--- 1 root asmadmin 251, 91649 Mar 24 22:54 /dev/asm/ebernalvol-179

[grid@cehaovmsp141 ~]$ ssh cehaovmsp143 ls -l /dev/asm/*


brwxrwx--- 1 root asmadmin 251, 91649 Mar 24 22:54 /dev/asm/ebernalvol-179

[grid@cehaovmsp141 ~]$ ssh cehaovmsp144 ls -l /dev/asm/*


brwxrwx--- 1 root asmadmin 251, 91649 Mar 24 22:54 /dev/asm/ebernalvol-179
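The volume device can also be inspected from the operating system side with advmutil. A brief hedged example (not part of the original demo), using the device name created above:

# Display ADVM volume attributes for the device that will host the ACFS filesystem
/sbin/advmutil volinfo /dev/asm/ebernalvol-179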


7) As a next step, the ACFS mount point directory needs to be created on all 4 nodes
(on every node in a Standard Cluster or on every Hub Node in a Flex Cluster):

[grid@cehaovmsp141 /]$ ssh cehaovmsp141 mkdir /u01acfs

[grid@cehaovmsp141 /]$ ssh cehaovmsp142 mkdir /u01acfs

[grid@cehaovmsp141 /]$ ssh cehaovmsp143 mkdir /u01acfs

[grid@cehaovmsp141 /]$ ssh cehaovmsp144 mkdir /u01acfs

[grid@cehaovmsp141 /]$ ssh cehaovmsp141 chown oracle:oinstall /u01acfs

[grid@cehaovmsp141 /]$ ssh cehaovmsp142 chown oracle:oinstall /u01acfs

[grid@cehaovmsp141 /]$ ssh cehaovmsp143 chown oracle:oinstall /u01acfs

[grid@cehaovmsp141 /]$ ssh cehaovmsp144 chown oracle:oinstall /u01acfs

[grid@cehaovmsp141 /]$ ssh cehaovmsp141 ls -ld /u01acfs


drwxr-xr-x 2 oracle oinstall 4096 Mar 11 12:44 /u01acfs

[grid@cehaovmsp141 /]$ ssh cehaovmsp142 ls -ld /u01acfs


drwxr-xr-x 2 oracle oinstall 4096 Mar 11 12:45 /u01acfs

[grid@cehaovmsp141 /]$ ssh cehaovmsp143 ls -ld /u01acfs


drwxr-xr-x 2 oracle oinstall 4096 Mar 11 12:45 /u01acfs

[grid@cehaovmsp141 /]$ ssh cehaovmsp144 ls -ld /u01acfs


drwxr-xr-x 2 oracle oinstall 4096 Mar 11 12:45 /u01acfs
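For reference, the per-node commands above can be condensed into a single loop. A hedged sketch (not part of the original demo):

# Create, set ownership of, and verify the ACFS mount point on every node.
# Run from the first node as a user with ssh equivalence to all nodes and
# permission to chown (e.g. root); node names are those used in this demo.
for node in cehaovmsp141 cehaovmsp142 cehaovmsp143 cehaovmsp144; do
  ssh $node 'mkdir -p /u01acfs && chown oracle:oinstall /u01acfs && ls -ld /u01acfs'
done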

8) Then, a new ACFS filesystem can be created on the new ADVM volume as the grid user
(Grid Infrastructure owner), from the first node:

[grid@cehaovmsp141 ~]$ /sbin/mkfs -t acfs /dev/asm/ebernalvol-179


mkfs.acfs: version = 12.1.0.2.0
mkfs.acfs: on-disk version = 39.0
mkfs.acfs: volume = /dev/asm/ebernalvol-179
mkfs.acfs: volume size = 2684354560 ( 2.50 GB )
mkfs.acfs: Format complete.


9) Next, the new ACFS filesystem needs to be registered as a CRS resource, as the root
user from the first node:

[grid@cehaovmsp141 ~]$ su -
Password:

[root@cehaovmsp141 ~]# . oraenv


ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/grid

[root@cehaovmsp141 ~]# /u01/app/12.1.0/grid_1/bin/srvctl add filesystem -d /dev/asm/ebernalvol-179 -m /u01acfs -u oracle -fstype ACFS -description '"/u01acfs ACFS Filesystem"' -autostart ALWAYS

[root@cehaovmsp141 ~]# crsctl stat res ora.acfsdg.ebernalvol.acfs -t


--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.acfsdg.ebernalvol.acfs
OFFLINE OFFLINE cehaovmsp141 STABLE
OFFLINE OFFLINE cehaovmsp142 STABLE
OFFLINE OFFLINE cehaovmsp143 STABLE
OFFLINE OFFLINE cehaovmsp144 STABLE
--------------------------------------------------------------------------------
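Before starting the resource, the registration can be double-checked with srvctl. A hedged example (not part of the original demo), using the same device option as above:

# Display the CRS configuration of the newly registered ACFS filesystem resource
srvctl config filesystem -d /dev/asm/ebernalvol-179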

10) Then, the ACFS filesystem needs to be started and mounted (from the first node) as
root user:

[root@cehaovmsp141 ~]# /u01/app/12.1.0/grid_1/bin/srvctl start filesystem -d /dev/asm/ebernalvol-179

[root@cehaovmsp141 ~]# crsctl stat res ora.acfsdg.ebernalvol.acfs -t


--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.acfsdg.ebernalvol.acfs
ONLINE ONLINE cehaovmsp141 mounted on /u01acfs,
STABLE
ONLINE ONLINE cehaovmsp142 mounted on /u01acfs,
STABLE
ONLINE ONLINE cehaovmsp143 mounted on /u01acfs,
STABLE
ONLINE ONLINE cehaovmsp144 mounted on /u01acfs,
STABLE
--------------------------------------------------------------------------------

[root@cehaovmsp141 ~]# crsctl stat res ora.ACFSDG.EBERNALVOL.advm -t


--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ACFSDG.EBERNALVOL.advm
ONLINE ONLINE cehaovmsp141 Volume device /dev/a
sm/ebernalvol-179 is
online,STABLE
ONLINE ONLINE cehaovmsp142 Volume device /dev/a
sm/ebernalvol-179 is
online,STABLE
ONLINE ONLINE cehaovmsp143 Volume device /dev/a
sm/ebernalvol-179 is
online,STABLE
ONLINE ONLINE cehaovmsp144 Volume device /dev/a
sm/ebernalvol-179 is
online,STABLE

11) Verify that the new ACFS filesystem is mounted on all the nodes:

Node #1:

[root@cehaovmsp141 ~]# ssh cehaovmsp141 df -m /u01acfs


root@cehaovmsp141's password:
Filesystem 1M-blocks Used Available Use% Mounted on
/dev/asm/ebernalvol-179
2560 158 2403 7% /u01acfs

Node #2:

[root@cehaovmsp141 ~]# ssh cehaovmsp142 df -m /u01acfs


root@cehaovmsp142's password:
Filesystem 1M-blocks Used Available Use% Mounted on
/dev/asm/ebernalvol-179
2560 158 2403 7% /u01acfs


Node #3:

[root@cehaovmsp141 ~]# ssh cehaovmsp143 df -m /u01acfs


root@cehaovmsp143's password:
Filesystem 1M-blocks Used Available Use% Mounted on
/dev/asm/ebernalvol-179
2560 158 2403 7% /u01acfs

Node #4:

[root@cehaovmsp141 ~]# ssh cehaovmsp144 df -m /u01acfs


root@cehaovmsp144's password:
Filesystem 1M-blocks Used Available Use% Mounted on
/dev/asm/ebernalvol-179
2560 158 2403 7% /u01acfs

12) Then set the ownership and permissions on the new ACFS filesystem (from the first
node):

[root@cehaovmsp141 ~]# ls -ld /u01acfs


drwxr-xr-x 4 root root 4096 Mar 24 18:10 /u01acfs

[root@cehaovmsp141 ~]# chown oracle:oinstall /u01acfs

[root@cehaovmsp141 ~]# chmod 775 /u01acfs

13) Note that the new ownership and permissions were propagated to all the nodes (ACFS is a cluster filesystem, so a change made on one node is visible on all of them):

Node #1:

[root@cehaovmsp141 ~]# ssh cehaovmsp141 ls -ld /u01acfs


root@cehaovmsp141's password:
drwxrwxr-x 4 oracle oinstall 4096 Mar 24 18:10 /u01acfs

Node #2:

[root@cehaovmsp141 ~]# ssh cehaovmsp142 ls -ld /u01acfs


root@cehaovmsp142's password:
drwxrwxr-x 4 oracle oinstall 4096 Mar 24 18:10 /u01acfs


Node #3:

[root@cehaovmsp141 ~]# ssh cehaovmsp143 ls -ld /u01acfs


root@cehaovmsp143's password:
drwxrwxr-x 4 oracle oinstall 4096 Mar 24 18:10 /u01acfs

Node #4:

[root@cehaovmsp141 ~]# ssh cehaovmsp144 ls -ld /u01acfs


root@cehaovmsp144's password:
drwxrwxr-x 4 oracle oinstall 4096 Mar 24 18:10 /u01acfs

14) All the new ACFS/ADVM resources are now ONLINE & STABLE, and they are listed as
follows:

[root@cehaovmsp141 ~]# crsctl stat res -t


--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ACFSDG.EBERNALVOL.advm
ONLINE ONLINE cehaovmsp141 STABLE
ONLINE ONLINE cehaovmsp142 STABLE
ONLINE ONLINE cehaovmsp143 STABLE
ONLINE ONLINE cehaovmsp144 STABLE
ora.ACFSDG.dg
ONLINE ONLINE cehaovmsp141 STABLE
ONLINE ONLINE cehaovmsp142 STABLE
ONLINE ONLINE cehaovmsp143 STABLE
ONLINE OFFLINE cehaovmsp144 STABLE
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE cehaovmsp141 STABLE
ONLINE ONLINE cehaovmsp142 STABLE
ONLINE ONLINE cehaovmsp143 STABLE
ONLINE ONLINE cehaovmsp144 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE cehaovmsp141 STABLE
ONLINE ONLINE cehaovmsp142 STABLE
ONLINE ONLINE cehaovmsp143 STABLE
ONLINE ONLINE cehaovmsp144 STABLE
ora.OCRVOTE.dg
ONLINE ONLINE cehaovmsp141 STABLE
ONLINE ONLINE cehaovmsp142 STABLE
ONLINE ONLINE cehaovmsp143 STABLE
OFFLINE OFFLINE cehaovmsp144 STABLE
ora.acfsdg.ebernalvol.acfs
ONLINE ONLINE cehaovmsp141 mounted on /u01acfs,
STABLE
ONLINE ONLINE cehaovmsp142 mounted on /u01acfs,
STABLE
ONLINE ONLINE cehaovmsp143 mounted on /u01acfs,
STABLE
ONLINE ONLINE cehaovmsp144 mounted on /u01acfs,
STABLE
ora.net1.network
ONLINE ONLINE cehaovmsp141 STABLE
ONLINE ONLINE cehaovmsp142 STABLE
ONLINE ONLINE cehaovmsp143 STABLE
ONLINE ONLINE cehaovmsp144 STABLE
ora.ons
ONLINE ONLINE cehaovmsp141 STABLE
ONLINE ONLINE cehaovmsp142 STABLE
ONLINE ONLINE cehaovmsp143 STABLE
ONLINE ONLINE cehaovmsp144 STABLE
ora.proxy_advm
ONLINE ONLINE cehaovmsp141 STABLE
ONLINE ONLINE cehaovmsp142 STABLE
ONLINE ONLINE cehaovmsp143 STABLE
ONLINE ONLINE cehaovmsp144 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE cehaovmsp143 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE cehaovmsp143 169.254.39.109,STABL
E
ora.asm
1 ONLINE ONLINE cehaovmsp142 Started,STABLE
2 ONLINE ONLINE cehaovmsp141 Started,STABLE
3 ONLINE ONLINE cehaovmsp143 Started,STABLE
ora.cehaovmsp141.vip
1 ONLINE ONLINE cehaovmsp141 STABLE
ora.cehaovmsp142.vip
1 ONLINE ONLINE cehaovmsp142 STABLE
ora.cehaovmsp143.vip
1 ONLINE ONLINE cehaovmsp143 STABLE
ora.cehaovmsp144.vip
1 ONLINE ONLINE cehaovmsp144 STABLE
ora.cvu
1 ONLINE ONLINE cehaovmsp143 STABLE
ora.mgmtdb
1 ONLINE ONLINE cehaovmsp143 Open,STABLE
ora.oc4j
1 ONLINE ONLINE cehaovmsp143 STABLE
ora.scan1.vip
1 ONLINE ONLINE cehaovmsp143 STABLE


--------------------------------------------------------------------------------

15) Then CRS was stopped and restarted on all 4 nodes to confirm that the new ACFS
filesystem is mounted automatically:

Node #1:

[root@cehaovmsp141 ~]# srvctl stop filesystem -d /dev/asm/ebernalvol-179 [-f]

[root@cehaovmsp141 ~]# crsctl stop crs

Node #2:

[root@cehaovmsp142 ~]# crsctl stop crs

Node #3:

[root@cehaovmsp143 ~]# crsctl stop crs

Node #4:

[root@cehaovmsp144 ~]# crsctl stop crs

Node #1:

[root@cehaovmsp141 ~]# crsctl start crs

Node #2:

[root@cehaovmsp142 ~]# crsctl start crs

Node #3:

[root@cehaovmsp143 ~]# crsctl start crs


Node #4:

[root@cehaovmsp144 ~]# crsctl start crs
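Because the filesystem was registered with -autostart ALWAYS, it is expected to be remounted when CRS comes back up. A hedged way to confirm the start policy from the resource profile (not part of the original demo; standard crsctl attribute output assumed):

# Check the auto-start policy recorded in the CRS resource profile
crsctl stat res ora.acfsdg.ebernalvol.acfs -p | grep AUTO_START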

16) Then, as the grid OS user, verify that the ACFS filesystem was automatically mounted
again on all the nodes at the OS level, as follows:

Node #1:

[root@cehaovmsp141 ~]# su - grid

[grid@cehaovmsp141 ~]$ . oraenv


ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid
[grid@cehaovmsp141 ~]$ ssh cehaovmsp141 df -m
Filesystem 1M-blocks Used Available Use% Mounted on
/dev/xvda2 9451 4730 4332 53% /
/dev/xvda1 487 38 424 9% /boot
tmpfs 2005 1242 764 62% /dev/shm
/dev/xvdb1 30236 27451 1249 96% /u01
/dev/asm/ebernalvol-179
2560 158 2403 7% /u01acfs

Node #2:

[grid@cehaovmsp141 ~]$ ssh cehaovmsp142 df -m


Filesystem 1M-blocks Used Available Use% Mounted on
/dev/xvda2 9451 4549 4513 51% /
/dev/xvda1 487 38 424 9% /boot
tmpfs 2005 1238 767 62% /dev/shm
/dev/xvdb1 30236 16276 12424 57% /u01
/dev/asm/ebernalvol-179
2560 158 2403 7% /u01acfs

Node #3:

[grid@cehaovmsp141 ~]$ ssh cehaovmsp143 df -m


Filesystem 1M-blocks Used Available Use% Mounted on
/dev/xvda2 9451 4541 4521 51% /
/dev/xvda1 487 38 424 9% /boot
tmpfs 2005 609 1396 31% /dev/shm
/dev/xvdb1 30236 16307 12393 57% /u01
/dev/asm/ebernalvol-179
2560 158 2403 7% /u01acfs


Node #4:

[grid@cehaovmsp141 ~]$ ssh cehaovmsp144 df -m


Filesystem 1M-blocks Used Available Use% Mounted on
/dev/xvda2 9451 4549 4513 51% /
/dev/xvda1 487 38 424 9% /boot
tmpfs 2005 1238 767 62% /dev/shm
/dev/xvdb1 30236 14098 14603 50% /u01
/dev/asm/ebernalvol-179
2560 158 2403 7% /u01acfs

Node #1:

+ASM1 is running, the ASM proxy instance (+APX1) is running, and the ACFS filesystem is mounted:

[grid@cehaovmsp141 ~]$ ssh cehaovmsp141 ps -fea | egrep 'asm_|APX'; df -m /u01acfs
grid 4561 1 0 23:31 ? 00:00:00 asm_pmon_+ASM1
grid 4563 1 0 23:31 ? 00:00:00 asm_psp0_+ASM1
grid 4577 1 1 23:31 ? 00:00:24 asm_vktm_+ASM1
grid 4581 1 0 23:31 ? 00:00:00 asm_gen0_+ASM1
grid 4583 1 0 23:31 ? 00:00:00 asm_mman_+ASM1
grid 4587 1 0 23:31 ? 00:00:01 asm_diag_+ASM1
grid 4589 1 0 23:31 ? 00:00:00 asm_ping_+ASM1
grid 4591 1 0 23:31 ? 00:00:05 asm_dia0_+ASM1
grid 4593 1 0 23:31 ? 00:00:05 asm_lmon_+ASM1
grid 4595 1 0 23:31 ? 00:00:02 asm_lmd0_+ASM1
grid 4597 1 0 23:31 ? 00:00:04 asm_lms0_+ASM1
grid 4601 1 0 23:31 ? 00:00:01 asm_lmhb_+ASM1
grid 4603 1 0 23:31 ? 00:00:00 asm_lck1_+ASM1
grid 4605 1 0 23:31 ? 00:00:00 asm_dbw0_+ASM1
grid 4607 1 0 23:31 ? 00:00:00 asm_lgwr_+ASM1
grid 4609 1 0 23:31 ? 00:00:00 asm_ckpt_+ASM1
grid 4611 1 0 23:31 ? 00:00:00 asm_smon_+ASM1
grid 4613 1 0 23:31 ? 00:00:00 asm_lreg_+ASM1
grid 4615 1 0 23:31 ? 00:00:00 asm_pxmn_+ASM1
grid 4617 1 0 23:31 ? 00:00:00 asm_rbal_+ASM1
grid 4619 1 0 23:31 ? 00:00:00 asm_gmon_+ASM1
grid 4621 1 0 23:31 ? 00:00:00 asm_mmon_+ASM1
grid 4623 1 0 23:31 ? 00:00:00 asm_mmnl_+ASM1
grid 4625 1 0 23:31 ? 00:00:00 asm_lck0_+ASM1
grid 4627 1 0 23:31 ? 00:00:00 asm_gcr0_+ASM1
grid 4703 1 0 23:31 ? 00:00:00 asm_asmb_+ASM1

grid 4985 1 0 23:31 ? 00:00:00 apx_pmon_+APX1


grid 4987 1 0 23:31 ? 00:00:00 apx_psp0_+APX1


grid 4998 1 1 23:31 ? 00:00:24 apx_vktm_+APX1
grid 5002 1 0 23:31 ? 00:00:00 apx_gen0_+APX1
grid 5004 1 0 23:31 ? 00:00:00 apx_mman_+APX1
grid 5009 1 0 23:31 ? 00:00:00 apx_diag_+APX1
grid 5011 1 0 23:31 ? 00:00:00 apx_dia0_+APX1
grid 5013 1 0 23:31 ? 00:00:00 apx_lreg_+APX1
grid 5015 1 0 23:31 ? 00:00:00 apx_pxmn_+APX1
grid 5017 1 0 23:31 ? 00:00:00 apx_rbal_+APX1
grid 5019 1 0 23:31 ? 00:00:00 apx_vdbg_+APX1
grid 5021 1 0 23:31 ? 00:00:00 apx_vubg_+APX1
grid 5310 1 0 23:31 ? 00:00:00 oracle+APX1root
(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid 5322 1 0 23:31 ? 00:00:00 apx_vbg0_+APX1
grid 5325 1 0 23:31 ? 00:00:01 apx_acfs_+APX1
grid 6223 1 0 23:32 ? 00:00:00 apx_vbg1_+APX1
grid 6244 1 0 23:32 ? 00:00:00 apx_vbg2_+APX1
grid 6250 1 0 23:32 ? 00:00:00 apx_vbg3_+APX1
grid 6261 1 0 23:32 ? 00:00:00 apx_vmb0_+APX1

Filesystem 1M-blocks Used Available Use% Mounted on


/dev/asm/ebernalvol-179
2560 158 2403 7% /u01acfs

Node #2:

+ASM2 is running, the ASM proxy instance (+APX2) is running, and the ACFS filesystem is mounted:

[grid@cehaovmsp141 ~]$ ssh cehaovmsp142 ps -fea | egrep 'asm_|APX'; df -m /u01acfs
grid 20158 1 0 23:32 ? 00:00:00 asm_pmon_+ASM2
grid 20160 1 0 23:32 ? 00:00:00 asm_psp0_+ASM2
grid 20216 1 1 23:32 ? 00:00:24 asm_vktm_+ASM2
grid 20220 1 0 23:32 ? 00:00:00 asm_gen0_+ASM2
grid 20222 1 0 23:32 ? 00:00:00 asm_mman_+ASM2
grid 20226 1 0 23:32 ? 00:00:01 asm_diag_+ASM2
grid 20228 1 0 23:32 ? 00:00:00 asm_ping_+ASM2
grid 20230 1 0 23:32 ? 00:00:04 asm_dia0_+ASM2
grid 20232 1 0 23:32 ? 00:00:03 asm_lmon_+ASM2
grid 20234 1 0 23:32 ? 00:00:02 asm_lmd0_+ASM2
grid 20237 1 0 23:32 ? 00:00:04 asm_lms0_+ASM2
grid 20242 1 0 23:32 ? 00:00:01 asm_lmhb_+ASM2
grid 20244 1 0 23:32 ? 00:00:00 asm_lck1_+ASM2
grid 20248 1 0 23:32 ? 00:00:00 asm_dbw0_+ASM2
grid 20256 1 0 23:32 ? 00:00:00 asm_lgwr_+ASM2
grid 20268 1 0 23:32 ? 00:00:00 asm_ckpt_+ASM2
grid 20276 1 0 23:32 ? 00:00:00 asm_smon_+ASM2


grid 20279 1 0 23:32 ? 00:00:00 asm_lreg_+ASM2


grid 20281 1 0 23:32 ? 00:00:00 asm_pxmn_+ASM2
grid 20283 1 0 23:32 ? 00:00:00 asm_rbal_+ASM2
grid 20285 1 0 23:32 ? 00:00:00 asm_gmon_+ASM2
grid 20287 1 0 23:32 ? 00:00:00 asm_mmon_+ASM2
grid 20289 1 0 23:32 ? 00:00:00 asm_mmnl_+ASM2
grid 20333 1 0 23:32 ? 00:00:00 asm_lck0_+ASM2
grid 20345 1 0 23:32 ? 00:00:00 asm_gcr0_+ASM2
grid 26739 1 0 23:37 ? 00:00:00 asm_asmb_+ASM2

grid 20666 1 0 23:33 ? 00:00:00 apx_pmon_+APX2


grid 20668 1 0 23:33 ? 00:00:00 apx_psp0_+APX2
grid 20758 1 1 23:33 ? 00:00:24 apx_vktm_+APX2
grid 20762 1 0 23:33 ? 00:00:00 apx_gen0_+APX2
grid 20764 1 0 23:33 ? 00:00:00 apx_mman_+APX2
grid 20768 1 0 23:33 ? 00:00:00 apx_diag_+APX2
grid 20770 1 0 23:33 ? 00:00:00 apx_dia0_+APX2
grid 20774 1 0 23:33 ? 00:00:00 apx_lreg_+APX2
grid 20776 1 0 23:33 ? 00:00:00 apx_pxmn_+APX2
grid 20778 1 0 23:33 ? 00:00:00 apx_rbal_+APX2
grid 20782 1 0 23:33 ? 00:00:00 apx_vdbg_+APX2
grid 20789 1 0 23:33 ? 00:00:00 apx_vubg_+APX2
grid 21443 1 0 23:33 ? 00:00:00 oracle+APX2root
(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid 21454 1 0 23:33 ? 00:00:00 apx_vbg0_+APX2
grid 21470 1 0 23:33 ? 00:00:01 apx_acfs_+APX2
grid 21531 1 0 23:33 ? 00:00:00 apx_vbg1_+APX2
grid 21537 1 0 23:33 ? 00:00:00 apx_vbg2_+APX2
grid 21540 1 0 23:33 ? 00:00:00 apx_vbg3_+APX2
grid 21546 1 0 23:33 ? 00:00:00 apx_vmb0_+APX2

Filesystem 1M-blocks Used Available Use% Mounted on


/dev/asm/ebernalvol-179
2560 158 2403 7% /u01acfs

Node #3:

+ASM3 is running, the ASM proxy instance (+APX3) is running, and the ACFS filesystem is mounted:

[grid@cehaovmsp141 ~]$ ssh cehaovmsp143 ps -fea | egrep 'asm_|APX'; df -m /u01acfs
grid 2983 1 0 Mar24 ? 00:00:00 asm_pmon_+ASM3
grid 2985 1 0 Mar24 ? 00:00:00 asm_psp0_+ASM3
grid 3013 1 1 Mar24 ? 00:00:25 asm_vktm_+ASM3
grid 3017 1 0 Mar24 ? 00:00:00 asm_gen0_+ASM3
grid 3019 1 0 Mar24 ? 00:00:00 asm_mman_+ASM3
grid 3024 1 0 Mar24 ? 00:00:01 asm_diag_+ASM3
grid 3029 1 0 Mar24 ? 00:00:00 asm_ping_+ASM3
grid 3039 1 0 Mar24 ? 00:00:04 asm_dia0_+ASM3


grid 3063 1 0 Mar24 ? 00:00:06 asm_lmon_+ASM3


grid 3067 1 0 Mar24 ? 00:00:02 asm_lmd0_+ASM3
grid 3075 1 0 Mar24 ? 00:00:04 asm_lms0_+ASM3
grid 3079 1 0 Mar24 ? 00:00:01 asm_lmhb_+ASM3
grid 3081 1 0 Mar24 ? 00:00:00 asm_lck1_+ASM3
grid 3083 1 0 Mar24 ? 00:00:00 asm_dbw0_+ASM3
grid 3087 1 0 Mar24 ? 00:00:00 asm_lgwr_+ASM3
grid 3089 1 0 Mar24 ? 00:00:00 asm_ckpt_+ASM3
grid 3091 1 0 Mar24 ? 00:00:00 asm_smon_+ASM3
grid 3093 1 0 Mar24 ? 00:00:00 asm_lreg_+ASM3
grid 3097 1 0 Mar24 ? 00:00:00 asm_pxmn_+ASM3
grid 3120 1 0 Mar24 ? 00:00:00 asm_rbal_+ASM3
grid 3124 1 0 Mar24 ? 00:00:00 asm_gmon_+ASM3
grid 3126 1 0 Mar24 ? 00:00:00 asm_mmon_+ASM3
grid 3128 1 0 Mar24 ? 00:00:00 asm_mmnl_+ASM3
grid 3160 1 0 Mar24 ? 00:00:00 asm_gcr0_+ASM3
grid 3187 1 0 Mar24 ? 00:00:00 asm_lck0_+ASM3

grid 3543 1 0 Mar24 ? 00:00:00 apx_pmon_+APX3


grid 3545 1 0 Mar24 ? 00:00:00 apx_psp0_+APX3
grid 3635 1 1 Mar24 ? 00:00:24 apx_vktm_+APX3
grid 3639 1 0 Mar24 ? 00:00:00 apx_gen0_+APX3
grid 3641 1 0 Mar24 ? 00:00:00 apx_mman_+APX3
grid 3645 1 0 Mar24 ? 00:00:00 apx_diag_+APX3
grid 3647 1 0 Mar24 ? 00:00:00 apx_dia0_+APX3
grid 3650 1 0 Mar24 ? 00:00:00 apx_lreg_+APX3
grid 3653 1 0 Mar24 ? 00:00:00 apx_pxmn_+APX3
grid 3655 1 0 Mar24 ? 00:00:00 apx_rbal_+APX3
grid 3659 1 0 Mar24 ? 00:00:00 apx_vdbg_+APX3
grid 3678 1 0 Mar24 ? 00:00:00 apx_vubg_+APX3
grid 4340 1 0 Mar24 ? 00:00:00 oracle+APX3root
(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid 4351 1 0 Mar24 ? 00:00:00 apx_vbg0_+APX3
grid 4354 1 0 Mar24 ? 00:00:01 apx_acfs_+APX3
grid 4485 1 0 Mar24 ? 00:00:00 apx_vbg1_+APX3
grid 4487 1 0 Mar24 ? 00:00:00 apx_vbg2_+APX3
grid 4492 1 0 Mar24 ? 00:00:00 apx_vbg3_+APX3
grid 4498 1 0 Mar24 ? 00:00:00 apx_vmb0_+APX3

Filesystem 1M-blocks Used Available Use% Mounted on


/dev/asm/ebernalvol-179
2560 158 2403 7% /u01acfs

Node #4:

+ASM4 is NOT running (as expected with cardinality = 3), but the ASM proxy instance (+APX4) is running and the ACFS filesystem is still mounted:

[grid@cehaovmsp141 ~]$ ssh cehaovmsp144 ps -fea | egrep 'asm_|APX'; df -m /u01acfs
grid 1934 1 0 Mar24 ? 00:00:00 apx_pmon_+APX4
grid 1937 1 0 Mar24 ? 00:00:00 apx_psp0_+APX4
grid 1946 1 1 Mar24 ? 00:00:19 apx_vktm_+APX4
grid 1950 1 0 Mar24 ? 00:00:00 apx_gen0_+APX4
grid 1953 1 0 Mar24 ? 00:00:00 apx_mman_+APX4
grid 1960 1 0 Mar24 ? 00:00:00 apx_diag_+APX4
grid 1962 1 0 Mar24 ? 00:00:00 apx_dia0_+APX4
grid 1965 1 0 Mar24 ? 00:00:00 apx_lreg_+APX4
grid 1969 1 0 Mar24 ? 00:00:00 apx_pxmn_+APX4
grid 1971 1 0 Mar24 ? 00:00:00 apx_rbal_+APX4
grid 1977 1 0 Mar24 ? 00:00:00 apx_vdbg_+APX4
grid 1979 1 0 Mar24 ? 00:00:00 apx_vubg_+APX4
grid 2063 1 0 Mar24 ? 00:00:00 oracle+APX4root
(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid 2077 1 0 Mar24 ? 00:00:00 apx_vbg0_+APX4
grid 2083 1 0 Mar24 ? 00:00:01 apx_acfs_+APX4
grid 2123 1 0 Mar24 ? 00:00:00 apx_vbg1_+APX4
grid 2125 1 0 Mar24 ? 00:00:00 apx_vbg2_+APX4
grid 2127 1 0 Mar24 ? 00:00:00 apx_vbg3_+APX4
grid 2129 1 0 Mar24 ? 00:00:00 apx_vmb0_+APX4

Filesystem 1M-blocks Used Available Use% Mounted on


/dev/asm/ebernalvol-179
2560 158 2403 7% /u01acfs
