OCR File and Voting Disk Administration by Example - (Oracle 10g)

by Jeff Hunter, Sr. Database Administrator


Contents

  1. Overview
  2. Example Configuration
  3. Administering the OCR File
    View OCR Configuration Information
    Add an OCR File
    Relocate an OCR File
    Repair an OCR File on a Local Node
    Remove an OCR File
  4. Backup the OCR File
    Automatic OCR Backups
    Manual OCR Exports
  5. Recover the OCR File
    Recover OCR from Valid OCR Mirror
    Recover OCR from Automatically Generated Physical Backup
    Recover OCR from an OCR Export File
  6. Administering the Voting Disk
    View Voting Disk Configuration Information
    Add a Voting Disk
    Remove a Voting Disk
    Relocate a Voting Disk
  7. Backup the Voting Disk
  8. Recover the Voting Disk
  9. Move the Voting Disk and OCR from OCFS to RAW Devices
    Move the OCR
    Move the Voting Disk
  10. About the Author


Overview

Oracle Clusterware 10g, formerly known as Cluster Ready Services (CRS), is software that, when installed on servers running the same operating system, enables the servers to be bound together to operate and function as a single server or cluster. This infrastructure simplifies the requirements for an Oracle Real Application Clusters (RAC) database by providing cluster software that is tightly integrated with the Oracle Database.

Oracle Clusterware requires two critical components: a voting disk to record node membership information and the Oracle Cluster Registry (OCR) to record cluster configuration information:

Voting Disk

The voting disk is a shared partition that Oracle Clusterware uses to verify cluster node membership and status. Oracle Clusterware uses the voting disk to determine which instances are members of a cluster by way of a health check and arbitrates cluster ownership among the instances in case of network failures. The primary function of the voting disk is to manage node membership and prevent what is known as Split Brain Syndrome in which two or more instances attempt to control the RAC database. This can occur in cases where there is a break in communication between nodes through the interconnect.

The voting disk must reside on a shared disk(s) that is accessible by all of the nodes in the cluster. For high availability, Oracle recommends that you have multiple voting disks. Oracle Clusterware can be configured to maintain multiple voting disks (multiplexing) but you must have an odd number of voting disks, such as three, five, and so on. Oracle Clusterware supports a maximum of 32 voting disks. If you define a single voting disk, then you should use external mirroring to provide redundancy.

A node must be able to access more than half of the voting disks at any time. For example, if you have five voting disks configured, then a node must be able to access at least three of the voting disks at any time. If a node cannot access the minimum required number of voting disks it is evicted, or removed, from the cluster. After the cause of the failure has been corrected and access to the voting disks has been restored, you can instruct Oracle Clusterware to recover the failed node and restore it to the cluster.

Oracle Cluster Registry (OCR)

Maintains cluster configuration information as well as configuration information about any cluster database within the cluster. The OCR is the repository of configuration information for the cluster, managing details such as the cluster node list and instance-to-node mapping information. This configuration information is used by many of the processes that make up CRS as well as by other cluster-aware applications, which use this repository to share information among themselves. Some of the main components included in the OCR are:

  • Node membership information
  • Database instance, node, and other mapping information
  • ASM (if configured)
  • Application resource profiles such as VIP addresses, services, etc.
  • Service characteristics
  • Information about processes that Oracle Clusterware controls
  • Information about any third-party applications controlled by CRS (10g R2 and later)

The OCR stores configuration information in a series of key-value pairs within a directory tree structure. To view the contents of the OCR in a human-readable format, run the ocrdump command. This will dump the contents of the OCR into an ASCII text file in the current directory named OCRDUMPFILE.

The OCR must reside on a shared disk(s) that is accessible by all of the nodes in the cluster. Oracle Clusterware 10g Release 2 allows you to multiplex the OCR and Oracle recommends that you use this feature to ensure cluster high availability. Oracle Clusterware allows for a maximum of two OCR locations; one is the primary and the second is an OCR mirror. If you define a single OCR, then you should use external mirroring to provide redundancy. You can replace a failed OCR online, and you can update the OCR through supported APIs such as Enterprise Manager, the Server Control Utility (SRVCTL), or the Database Configuration Assistant (DBCA).

This article provides a detailed look at how to administer the two critical Oracle Clusterware components — the voting disk and the Oracle Cluster Registry (OCR). The examples described in this guide were tested with Oracle RAC 10g Release 2 (10.2.0.4) on the Linux x86 platform.

It is highly recommended to take a backup of the voting disk and OCR file before making any changes! Instructions on how to back up both components are included in this guide.

CRS_home

The Oracle Clusterware binaries used in this article (e.g., crs_stat, ocrcheck, crsctl) are executed from the Oracle Clusterware home directory, which for the purposes of this article is /u01/app/crs. The environment variable $ORA_CRS_HOME is set to this directory for both the oracle and root user accounts and is also included in the $PATH:

[root@racnode1 ~]# echo $ORA_CRS_HOME
/u01/app/crs

[root@racnode1 ~]# which ocrcheck
/u01/app/crs/bin/ocrcheck



Example Configuration

The example configuration used in this article consists of a two-node RAC with a clustered database named racdb.idevelopment.info running Oracle RAC 10g Release 2 on the Linux x86 platform. The two node names are racnode1 and racnode2, each hosting a single Oracle instance named racdb1 and racdb2 respectively. For a detailed guide on building the example clustered database environment, please see:

Building an Inexpensive Oracle RAC 10g Release 2 on Linux - (CentOS 5.3 / iSCSI)
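
To quickly confirm the node list and the state of the cluster resources for the configuration described above, the standard Oracle Clusterware utilities olsnodes and crs_stat can be run from any node (their output is omitted here):

[oracle@racnode1 ~]$ olsnodes -n
[oracle@racnode1 ~]$ crs_stat -t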

The example Oracle Clusterware environment is configured with a single voting disk and a single OCR file on an OCFS2 clustered file system. Note that the voting disk is owned by the oracle user in the oinstall group with 0644 permissions while the OCR file is owned by root in the oinstall group with 0640 permissions:

[oracle@racnode1 ~]$ ls -l /u02/oradata/racdb
total 16608

drwxr-xr-x 2 oracle oinstall     3896 Aug 26 23:45 dbs/

Check Current OCR File

[oracle@racnode1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4660
         Available space (kbytes) :     257460
         ID                       :    1331197
         Device/File Name         : 
                                    Device/File integrity check succeeded

                                    Device/File not configured

         Cluster registry integrity check succeeded

Check Current Voting Disk

[oracle@racnode1 ~]$ crsctl query css votedisk
 0.     0    

located 1 votedisk(s).

Preparation

To prepare for the examples used in this guide, five new iSCSI volumes were created from the SAN and will be bound to RAW devices on all nodes in the RAC cluster. These five new volumes will be used to demonstrate how to move the current voting disk and OCR file from an OCFS2 file system to RAW devices:

Five New iSCSI Volumes and their Local Device Name Mappings

iSCSI Target Name                           Local Device Name          Disk Size
iqn.2006-01.com.openfiler:racdb.ocr1        /dev/iscsi/ocr1/part       512 MB
iqn.2006-01.com.openfiler:racdb.ocr2        /dev/iscsi/ocr2/part       512 MB
iqn.2006-01.com.openfiler:racdb.voting1     /dev/iscsi/voting1/part    32 MB
iqn.2006-01.com.openfiler:racdb.voting2     /dev/iscsi/voting2/part    32 MB
iqn.2006-01.com.openfiler:racdb.voting3     /dev/iscsi/voting3/part    32 MB

After the new iSCSI volumes have been created on the SAN, they need to be configured for access and bound to RAW devices on all Oracle RAC nodes in the database cluster.

  1. From all Oracle RAC nodes in the cluster as root, discover the five new iSCSI volumes from the SAN which will be used to store the voting disks and OCR files.

    [root@racnode1 ~]# iscsiadm -m discovery -t sendtargets -p openfiler1-san
    192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm1
    192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm2
    192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.crs
    
    
    [root@racnode2 ~]# iscsiadm -m discovery -t sendtargets -p openfiler1-san
    192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm1
    192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm2
    192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.crs
    

  2. Manually login to the new iSCSI targets from all Oracle RAC nodes in the cluster.

    [root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.ocr1 -p 192.168.2.195 -l
    [root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.ocr2 -p 192.168.2.195 -l
    [root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.voting1 -p 192.168.2.195 -l
    [root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.voting2 -p 192.168.2.195 -l
    [root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.voting3 -p 192.168.2.195 -l
    
    [root@racnode2 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.ocr1 -p 192.168.2.195 -l
    [root@racnode2 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.ocr2 -p 192.168.2.195 -l
    [root@racnode2 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.voting1 -p 192.168.2.195 -l
    [root@racnode2 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.voting2 -p 192.168.2.195 -l
    [root@racnode2 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.voting3 -p 192.168.2.195 -l

  3. Create a single primary partition, spanning the entire disk, on each of the five new iSCSI volumes. Perform this from only one of the Oracle RAC nodes in the cluster (a non-interactive sketch follows the commands below):

    [root@racnode1 ~]# fdisk /dev/iscsi/ocr1/part
    [root@racnode1 ~]# fdisk /dev/iscsi/ocr2/part
    [root@racnode1 ~]# fdisk /dev/iscsi/voting1/part
    [root@racnode1 ~]# fdisk /dev/iscsi/voting2/part
    [root@racnode1 ~]# fdisk /dev/iscsi/voting3/part
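
    A non-interactive alternative, assuming the device names above, is to pipe the usual fdisk answers into each device (n for a new partition, p for primary, partition number 1, the default first and last cylinders, then w to write the table). This is only a sketch of the interactive session:

    [root@racnode1 ~]# for vol in ocr1 ocr2 voting1 voting2 voting3; do
    >   echo -e "n\np\n1\n\n\nw" | fdisk /dev/iscsi/${vol}/part
    > done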

  4. Re-scan the SCSI bus from all Oracle RAC nodes in the cluster:

    [root@racnode2 ~]# partprobe

  5. Create a shell script (/usr/local/bin/setup_raw_devices.sh) on all Oracle RAC nodes in the cluster to bind the five Oracle Clusterware component devices to RAW devices as follows:

    #!/bin/bash
    # +---------------------------------------------------------+
    # | FILE: /usr/local/bin/setup_raw_devices.sh               |
    # +---------------------------------------------------------+
    
    # +---------------------------------------------------------+
    # | Bind OCR files to RAW device files.                     |
    # +---------------------------------------------------------+
    /bin/raw /dev/raw/raw1 /dev/iscsi/ocr1/part1
    /bin/raw /dev/raw/raw2 /dev/iscsi/ocr2/part1
    sleep 3
    /bin/chown root:oinstall /dev/raw/raw1
    /bin/chown root:oinstall /dev/raw/raw2
    /bin/chmod 0640 /dev/raw/raw1
    /bin/chmod 0640 /dev/raw/raw2
    
    # +---------------------------------------------------------+
    # | Bind voting disks to RAW device files.                  |
    # +---------------------------------------------------------+
    /bin/raw /dev/raw/raw3 /dev/iscsi/voting1/part1
    /bin/raw /dev/raw/raw4 /dev/iscsi/voting2/part1
    /bin/raw /dev/raw/raw5 /dev/iscsi/voting3/part1
    sleep 3
    /bin/chown oracle:oinstall /dev/raw/raw3
    /bin/chown oracle:oinstall /dev/raw/raw4
    /bin/chown oracle:oinstall /dev/raw/raw5
    /bin/chmod 0644 /dev/raw/raw3
    /bin/chmod 0644 /dev/raw/raw4
    /bin/chmod 0644 /dev/raw/raw5

    From all Oracle RAC nodes in the cluster, change the permissions of the new shell script to execute:

    [root@racnode1 ~]# chmod 755 /usr/local/bin/setup_raw_devices.sh
    [root@racnode2 ~]# chmod 755 /usr/local/bin/setup_raw_devices.sh

    Manually execute the new shell script from all Oracle RAC nodes in the cluster to bind the OCR and voting disk devices to RAW devices:

    [root@racnode1 ~]# /usr/local/bin/setup_raw_devices.sh
    /dev/raw/raw1:  bound to major 8, minor 97
    /dev/raw/raw2:  bound to major 8, minor 17
    /dev/raw/raw3:  bound to major 8, minor 1
    /dev/raw/raw4:  bound to major 8, minor 49
    /dev/raw/raw5:  bound to major 8, minor 33
    
    [root@racnode2 ~]# /usr/local/bin/setup_raw_devices.sh
    /dev/raw/raw1:  bound to major 8, minor 65
    /dev/raw/raw2:  bound to major 8, minor 49
    /dev/raw/raw3:  bound to major 8, minor 33
    /dev/raw/raw4:  bound to major 8, minor 1
    /dev/raw/raw5:  bound to major 8, minor 17

    Check that the character (RAW) devices were created from all Oracle RAC nodes in the cluster:

    [root@racnode1 ~]# ls -l /dev/raw
    total 0
    crw-r----- 1 root   oinstall 162, 1 Sep 24 00:48 raw1
    crw-r----- 1 root   oinstall 162, 2 Sep 24 00:48 raw2
    crw-r--r-- 1 oracle oinstall 162, 3 Sep 24 00:48 raw3
    crw-r--r-- 1 oracle oinstall 162, 4 Sep 24 00:48 raw4
    crw-r--r-- 1 oracle oinstall 162, 5 Sep 24 00:48 raw5
    
    [root@racnode2 ~]# ls -l /dev/raw
    total 0
    crw-r----- 1 root   oinstall 162, 1 Sep 24 00:48 raw1
    crw-r----- 1 root   oinstall 162, 2 Sep 24 00:48 raw2
    crw-r--r-- 1 oracle oinstall 162, 3 Sep 24 00:48 raw3
    crw-r--r-- 1 oracle oinstall 162, 4 Sep 24 00:48 raw4
    crw-r--r-- 1 oracle oinstall 162, 5 Sep 24 00:48 raw5
    
    [root@racnode1 ~]# raw -qa
    /dev/raw/raw1:  bound to major 8, minor 97
    /dev/raw/raw2:  bound to major 8, minor 17
    /dev/raw/raw3:  bound to major 8, minor 1
    /dev/raw/raw4:  bound to major 8, minor 49
    /dev/raw/raw5:  bound to major 8, minor 33
    
    [root@racnode2 ~]# raw -qa
    /dev/raw/raw1:  bound to major 8, minor 65
    /dev/raw/raw2:  bound to major 8, minor 49
    /dev/raw/raw3:  bound to major 8, minor 33
    /dev/raw/raw4:  bound to major 8, minor 1
    /dev/raw/raw5:  bound to major 8, minor 17

    Include the new shell script in /etc/rc.local to run on each boot from all Oracle RAC nodes in the cluster:

    [root@racnode1 ~]# echo "/usr/local/bin/setup_raw_devices.sh" >> /etc/rc.local
    [root@racnode2 ~]# echo "/usr/local/bin/setup_raw_devices.sh" >> /etc/rc.local

  6. Once the raw devices are created, use the dd command to zero out each device and ensure that no stale data remains on them. Only perform this action from one of the Oracle RAC nodes in the cluster:

    [root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw1
    dd: writing to '/dev/raw/raw1': No space left on device
    1048516+0 records in
    1048515+0 records out
    536839680 bytes (537 MB) copied, 773.145 seconds, 694 kB/s
    
    [root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw2
    dd: writing to '/dev/raw/raw2': No space left on device
    1048516+0 records in
    1048515+0 records out
    536839680 bytes (537 MB) copied, 769.974 seconds, 697 kB/s
    
    [root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw3
    dd: writing to '/dev/raw/raw3': No space left on device
    65505+0 records in
    65504+0 records out
    33538048 bytes (34 MB) copied, 47.9176 seconds, 700 kB/s
    
    [root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw4
    dd: writing to '/dev/raw/raw4': No space left on device
    65505+0 records in
    65504+0 records out
    33538048 bytes (34 MB) copied, 47.9915 seconds, 699 kB/s
    
    [root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw5
    dd: writing to '/dev/raw/raw5': No space left on device
    65505+0 records in
    65504+0 records out
    33538048 bytes (34 MB) copied, 48.2684 seconds, 695 kB/s



Administering the OCR File
View OCR Configuration Information

Two methods exist to verify how many OCR files are configured for the cluster as well as their location. If the cluster is up and running, use the ocrcheck utility as either the oracle or root user account:

[oracle@racnode1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4660
         Available space (kbytes) :     257460
         ID                       :    1331197
         Device/File Name         : 

If CRS is down, you can still determine the location and number of OCR files by viewing the file ocr.loc, whose location is somewhat platform dependent. For example, on the Linux platform it is located in /etc/oracle/ocr.loc while on Sun Solaris it is located at /var/opt/oracle/ocr.loc:

[root@racnode1 ~]# cat /etc/oracle/ocr.loc
ocrconfig_loc=
local_only=FALSE

To view the actual contents of the OCR in a human-readable format, run the ocrdump command. This command requires the CRS stack to be running. Running the ocrdump command will dump the contents of the OCR into an ASCII text file in the current directory named OCRDUMPFILE:

[root@racnode1 ~]# ocrdump
[root@racnode1 ~]# ls -l OCRDUMPFILE
-rw-r--r-- 1 root root 250304 Oct  2 22:46 OCRDUMPFILE

The ocrdump utility also allows for different output options:

[root@racnode1 ~]# ocrdump /tmp/`hostname`_ocrdump_`date +%m%d%y:%H%M`



[root@racnode1 ~]# ocrdump -stdout -keyname SYSTEM.css



[root@racnode1 ~]# ocrdump -stdout -keyname SYSTEM.css -xml > ocrdump.xml

Add an OCR File

Starting with Oracle Clusterware 10g Release 2 (10.2), users now have the ability to multiplex (mirror) the OCR. Oracle Clusterware allows for a maximum of two OCR locations; one is the primary and the second is an OCR mirror. To avoid simultaneous loss of multiple OCR files, each copy of the OCR should be placed on a shared storage device that does not share any components (controller, interconnect, and so on) with the storage devices used for the other OCR file.

Before attempting to add a mirrored OCR, determine how many OCR files are currently configured for the cluster as well as their location. If the cluster is up and running, use the ocrcheck utility as either the oracle or root user account:

[oracle@racnode1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4660
         Available space (kbytes) :     257460
         ID                       :    1331197
         Device/File Name         : 

If CRS is down, you can still determine the location and number of OCR files by viewing the file ocr.loc, whose location is somewhat platform dependent. For example, on the Linux platform it is located in /etc/oracle/ocr.loc while on Sun Solaris it is located at /var/opt/oracle/ocr.loc:

[root@racnode1 ~]# cat /etc/oracle/ocr.loc
ocrconfig_loc=
local_only=FALSE

The results above indicate that I have only one OCR file and that it is located on an OCFS2 file system. Since we are allowed a maximum of two OCR locations, I intend to create an OCR mirror and locate it on the same OCFS2 file system in the same directory as the primary OCR. Please note that I am doing this only for the sake of brevity. The OCR mirror should always be placed on a separate device from the primary OCR file to guard against a single point of failure.

Note that the Oracle Clusterware stack should be online and running on all nodes in the cluster while adding, replacing, or removing the OCR location; these operations therefore do not require any system downtime.

The operations performed in this section affect the OCR for the entire cluster. However, the ocrconfig command cannot modify OCR configuration information for nodes that are shut down or for nodes on which Oracle Clusterware is not running. You should therefore avoid shutting down nodes while modifying the OCR using the ocrconfig command. If, for any reason, any of the nodes in the cluster are shut down while the OCR is being modified with ocrconfig, you will need to perform a repair on the stopped node before it can be brought online to join the cluster. Please see the section "Repair an OCR File on a Local Node" for instructions on repairing the OCR file on the affected node.

You can add an OCR mirror after an upgrade or after completing the Oracle Clusterware installation. The Oracle Universal Installer (OUI) allows you to configure either one or two OCR locations during the installation of Oracle Clusterware. If you already mirror the OCR, then you do not need to add a new OCR location; Oracle Clusterware automatically manages two OCRs when you configure normal redundancy for the OCR. As previously mentioned, Oracle RAC environments do not support more than two OCR locations; a primary OCR and a secondary (mirrored) OCR.

Run the following command to add or relocate an OCR mirror using either destination_file or disk to designate the target location of the additional OCR:

ocrconfig -replace ocrmirror <destination_file>
ocrconfig -replace ocrmirror <disk>

You must be logged in as the root user to run the ocrconfig command.

Please note that ocrconfig -replace is the only supported way to add or relocate OCR files and mirrors. Attempting to copy the existing OCR file to a new location and then manually adding or changing the file pointer in the ocr.loc file is not supported and will fail.

For example:

[root@racnode1 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy


[root@racnode2 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy


[root@racnode1 ~]# cp /dev/null /u02/oradata/racdb/OCRFile_mirror
[root@racnode1 ~]# chown root /u02/oradata/racdb/OCRFile_mirror
[root@racnode1 ~]# chgrp oinstall /u02/oradata/racdb/OCRFile_mirror
[root@racnode1 ~]# chmod 640 /u02/oradata/racdb/OCRFile_mirror


[root@racnode1 ~]# ocrconfig -replace ocrmirror /u02/oradata/racdb/OCRFile_mirror

After adding the new OCR mirror, check that it can be seen from all nodes in the cluster:

[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4668
         Available space (kbytes) :     257452
         ID                       :    1331197
         Device/File Name         : /u02/oradata/racdb/OCRFile
                                    Device/File integrity check succeeded
         Device/File Name         : 

As mentioned earlier, you can have at most two OCR files in the cluster; the primary OCR and a single OCR mirror. Attempting to add an extra mirror will actually relocate the current OCR mirror to the new location specified in the command:

[root@racnode1 ~]# cp /dev/null /u02/oradata/racdb/OCRFile_mirror2
[root@racnode1 ~]# chown root /u02/oradata/racdb/OCRFile_mirror2
[root@racnode1 ~]# chgrp oinstall /u02/oradata/racdb/OCRFile_mirror2
[root@racnode1 ~]# chmod 640 /u02/oradata/racdb/OCRFile_mirror2
[root@racnode1 ~]# ocrconfig -replace ocrmirror /u02/oradata/racdb/OCRFile_mirror2
[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4668
         Available space (kbytes) :     257452
         ID                       :    1331197
         Device/File Name         : /u02/oradata/racdb/OCRFile
                                    Device/File integrity check succeeded
         Device/File Name         : 

Relocate an OCR File

Just as we were able to add a new OCR mirror while the CRS stack was online, the same holds true when relocating an OCR file or OCR mirror; the operation therefore does not require any system downtime.

You can relocate OCR only when the OCR is mirrored. A mirror copy of the OCR file is required to move the OCR online. If there is no mirror copy of the OCR, first create the mirror using the instructions in the previous section.

Attempting to relocate the OCR when an OCR mirror does not exist will produce an error. For example, the following command fails when no mirror is configured:

ocrconfig -replace ocr /u02/oradata/racdb/OCRFile

If the OCR mirror is not required in the cluster after relocating the OCR, it can be safely removed.

Run the following command as the root account to relocate the current OCR file to a new location using either destination_file or disk to designate the new target location for the OCR:

ocrconfig -replace ocr <destination_file>
ocrconfig -replace ocr <disk>

Run the following command as the root account to relocate the current OCR mirror to a new location using either destination_file or disk to designate the new target location for the OCR mirror:

ocrconfig -replace ocrmirror <destination_file>
ocrconfig -replace ocrmirror <disk>

The following example assumes the OCR is mirrored and demonstrates how to relocate the current OCR file (/u02/oradata/racdb/OCRFile) from the OCFS2 file system to a new raw device (/dev/raw/raw1):

[root@racnode1 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy


[root@racnode2 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy


[root@racnode2 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4668
         Available space (kbytes) :     257452
         ID                       :    1331197
         Device/File Name         : 
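
With the CRS stack verified on both nodes and the current configuration confirmed, the relocation itself is performed as root using the syntax shown above, pointing the primary OCR at the new raw device:

[root@racnode1 ~]# ocrconfig -replace ocr /dev/raw/raw1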

After relocating the OCR file, check that the change can be seen from all nodes in the cluster:

[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4668
         Available space (kbytes) :     257452
         ID                       :    1331197
         Device/File Name         : 

After verifying the relocation was successful, remove the old OCR file at the OS level:

[root@racnode1 ~]# rm -v /u02/oradata/racdb/OCRFile
removed '/u02/oradata/racdb/OCRFile'

Repair an OCR File on a Local Node

It was mentioned in the previous section that the ocrconfig command cannot modify OCR configuration information for nodes that are shut down or for nodes on which Oracle Clusterware is not running. You may need to repair an OCR configuration on a particular node if your OCR configuration changes while that node is stopped. For example, you may need to repair the OCR on a node that was shut down while you were adding, replacing, or removing an OCR.

To repair an OCR configuration, run the following command as root from the node on which you have stopped the Oracle Clusterware daemon:

ocrconfig -repair ocr device_name

To repair an OCR mirror configuration, run the following command as root from the node on which you have stopped the Oracle Clusterware daemon:

ocrconfig -repair ocrmirror device_name

You cannot perform this operation on a node on which the Oracle Clusterware daemon is running. The CRS stack must be shut down before attempting to repair the OCR configuration on the local node.

The ocrconfig -repair command changes the OCR configuration only on the node from which you run it. For example, if the OCR mirror was relocated to a disk named /dev/raw/raw2 from racnode1 while the node racnode2 was down, then run ocrconfig -repair ocrmirror /dev/raw/raw2 on racnode2 while the CRS stack is down on that node to repair its OCR configuration:

[root@racnode2 ~]# crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.

[root@racnode2 ~]# ps -ef | grep d.bin | grep -v grep


[root@racnode1 ~]# ocrconfig -replace ocrmirror /dev/raw/raw2


[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4668
         Available space (kbytes) :     257452
         ID                       :    1331197
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded
         Device/File Name         : 
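
To complete the repair, the -repair command described above would then be run on racnode2 while its CRS stack is still down, after which Oracle Clusterware can be restarted on that node (commands only, output omitted):

[root@racnode2 ~]# ocrconfig -repair ocrmirror /dev/raw/raw2

[root@racnode2 ~]# crsctl start crs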

Remove an OCR File

To remove an OCR, at least one other OCR must remain online. You may need to do this to reduce overhead or for other storage reasons, such as taking a mirror offline to move it to a different SAN or RAID device. Carry out the following steps:

  • Check if at least one OCR is online
  • Verify the CRS stack is online — preferably on all nodes
  • Remove the OCR or OCR mirror
  • If using a clustered file system, remove the deleted file at the OS level

Run the following command as the root account to delete the current OCR or the current OCR mirror:

ocrconfig -replace ocr
or
ocrconfig -replace ocrmirror

For example:

[root@racnode1 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy


[root@racnode2 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy


[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4668
         Available space (kbytes) :     257452
         ID                       :    1331197
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded
         Device/File Name         : 
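
The OCR mirror is then removed by running ocrconfig -replace ocrmirror as root with no target location specified, exactly as shown in the syntax above:

[root@racnode1 ~]# ocrconfig -replace ocrmirror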

After removing the OCR mirror, check that the change is seen from all nodes in the cluster:

[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4668
         Available space (kbytes) :     257452
         ID                       :    1331197
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded

                                    

Removing the OCR or OCR mirror from the cluster configuration does not remove the physical file at the OS level when using a clustered file system.



Backup the OCR File

There are two methods for backing up the contents of the OCR and each backup method can be used for different recovery purposes. This section discusses how to ensure the stability of the cluster by implementing a robust backup strategy.

The first type of backup relies on automatically generated OCR file copies, sometimes referred to as physical backups. These physical OCR file copies are automatically generated by the CRSD process on the master node and are primarily used to recover the OCR from a lost or corrupt OCR file. Your backup strategy should include procedures to copy these automatically generated OCR file copies to a secure location which is accessible from all nodes in the cluster in the event the OCR needs to be restored.

The second type of backup uses manual procedures to create OCR export files, also known as logical backups. A manual OCR export should be performed both before and after making significant configuration changes to the cluster, such as adding or deleting nodes from your environment, modifying Oracle Clusterware resources, or creating a database. If a configuration change made to the OCR causes errors, the OCR can be restored to its previous state by importing the logical backup taken before the change. Please note that an OCR logical export can also be used to restore the OCR from a lost or corrupt OCR file.

Unlike the methods used to back up the voting disk, attempting to back up the OCR by copying the OCR file directly at the OS level is not a valid backup and will result in errors after the restore!

Because of the importance of OCR information, Oracle recommends that you make copies of the automatically created backup files and an OCR export at least once a day. The following is a working UNIX script that can be scheduled in CRON to back up the OCR file(s) and the voting disk(s) on a regular basis:

crs_components_backup_10g.ksh
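
The linked script is not reproduced here; the following is only a minimal sketch of what such a daily job might do. The backup directory and voting disk path are illustrative assumptions and must be adjusted for your environment (the voting disk path should be the one reported by crsctl query css votedisk):

#!/bin/bash
# Illustrative daily backup of the OCR and voting disk (Oracle Clusterware 10g).
# All paths below are assumptions for this sketch.
ORA_CRS_HOME=/u01/app/crs
BACKUP_DIR=/u03/crs_backup     # assumed secure destination reachable from all nodes
VOTEDISK=/path/to/votedisk     # set to the path shown by 'crsctl query css votedisk'
TS=$(date +%Y%m%d_%H%M)

mkdir -p ${BACKUP_DIR}

# Logical OCR export (must be run as root)
${ORA_CRS_HOME}/bin/ocrconfig -export ${BACKUP_DIR}/OCRFile_export_${TS}.dmp

# Copy the automatically generated physical OCR backups retained by CRSD
# (default location is CRS_home/cdata/cluster_name)
cp ${ORA_CRS_HOME}/cdata/*/*.ocr ${BACKUP_DIR}/ 2>/dev/null

# Back up the voting disk with dd
dd if=${VOTEDISK} of=${BACKUP_DIR}/votedisk_${TS}.bak bs=4k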

Automatic OCR Backups

Oracle Clusterware automatically creates OCR physical backups every four hours. At any one time, Oracle retains the last three of these four-hour backup copies. The CRSD process that creates these backups also creates and retains an OCR backup for each full day and at the end of each week. You cannot customize the backup frequencies or the number of OCR physical backup files that Oracle retains.

The default location for generating physical backups on UNIX-based systems is CRS_home/cdata/cluster_name, where cluster_name is the name of your cluster.
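
To list the automatic backups currently retained, along with the node and directory where each was written, run ocrconfig with the -showbackup option as root (output varies by cluster and is omitted here):

[root@racnode1 ~]# ocrconfig -showbackup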

Source: ITPUB blog, http://blog.itpub.net/10130206/viewspace-1058613/
