EAS Oracle Whitepaper
Contents

AUDIENCE
BACKGROUND
OBJECTIVE
SETUP
    Software
    Initial Configuration
    Network
    NFS Shares
    AIX Source Side
    ZFS Appliance
    Exadata Destination Side
    Configuration
    XTTDRIVER.PL Modification
    Directory Setup
    Custom Scripts Created
SUMMARY
REFERENCES
Other options would include things like Data Pump, ETL tools, etc., but these have a much larger downtime requirement and are much more customized for each implementation. We wanted to develop an approach that was much more universal and could be re-used at other clients in the future.

Therefore, we settled on using Oracle's XTTS utility with RMAN incremental roll-forward backups as our migration approach. We chose to use the RMAN transfer option, as our DBAs were more familiar with that.

We were utilizing 18 cores in each of the four compute nodes, which provided 36 vCPUs per compute node for a total of 144 vCPUs for the entire RAC environment. In addition, our Exadata was connected to an Oracle ZFS ZS4 appliance which stored all RMAN backups. This appliance has usable storage of 1.2 PBs. The two appliances are connected using standard InfiniBand network connectivity.

[Table: Oracle ZFS Storage ZS4-4 specifications]
THE PROCESS

Since the process that we ended up selecting is Oracle's XTTS utility, let's take a look at how this process works.

The XTTS process allows you to migrate the data that you are moving ahead of the actual "Cut-Over" process. This is accomplished by utilizing RMAN incremental and roll-forward processing that will "roll forward" your transferred data files with each incremental backup. Changes are tracked through the use of the database SCN process, with each incremental being backed up through a particular SCN. We will reference that aspect later on in the document.

In a nutshell, we are using the Oracle-provided xttdriver.pl, which is a Perl script, to execute RMAN incremental backup scripts based on where you are in the process. The initial RMAN backup is really an RMAN level 0. This makes an image copy of the data files from your source system. These data files will then be converted using the RMAN convert process on your destination system. The convert makes the data files compatible with the OEL operating system that is used on the Oracle Exadata platform. If you are using another type of operating system, they would be converted to that type. We will discuss how to set that up later in this document.

You will then run subsequent RMAN incremental backups on the source system until you are ready for a READ-ONLY window to occur. Each RMAN incremental is also converted and then rolled forward into each data file on the destination.

Once you are ready for your READ-ONLY window, all tablespaces that are being migrated must be placed in READ-ONLY mode. Then a final RMAN incremental backup is taken on the source system. This incremental backup is converted and rolled forward on the destination, bringing the data files up to date with the source side. At this point, you have the same data files on the source as on the destination.

However, you still need to "hook" everything up. That is where the Oracle Data Pump Transportable Tablespace process comes into play. There are a couple of ways you can do this. You need the meta-data about the tables and tablespaces that make up the structures that are currently in your data files. You could export this meta-data and output it to a dump file, and then import that meta-data into your destination system. Or, you can use a database link and read the information directly from the source system and import it into the destination system. In our case, we chose the latter approach, because we had issues getting the former to consistently work. This is covered in "THE ISSUES FOUND" section below.
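The phase flow described above can be sketched as a shell outline. The -i and -r -L -d flags are taken from this paper's own scripts; the -p (initial level 0 prepare) and -s (record the next FROM_SCN) flags are our recollection of the v2 utility and should be verified against the MOS note before use.

```shell
# Sketch of the XTTS phase commands described above. Flags other than
# -i and "-r -L -d" (which appear in this paper's own scripts) are
# assumptions about the v2 utility -- verify them against the MOS note.
xtt_cmd() {
  case "$1" in
    prepare)     echo "perl xttdriver.pl -p" ;;       # initial level 0 image copies (source)
    incremental) echo "perl xttdriver.pl -i" ;;       # incremental backup (source)
    rollforward) echo "perl xttdriver.pl -r -L -d" ;; # convert + roll forward (destination)
    newscn)      echo "perl xttdriver.pl -s" ;;       # record the next FROM_SCN (source)
    *)           echo "unknown phase: $1" >&2; return 1 ;;
  esac
}

# One pass of the repetitive loop, repeated until the READ-ONLY window:
for phase in incremental rollforward newscn; do
  xtt_cmd "$phase"
done
```

The sketch only prints the commands; in practice each phase runs in the directory holding the relevant xtt.properties file.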
Once the meta-data is imported into the destination system, the "hook-up" is complete, and you can now view all of the table structures, definitions, etc. of your database. The XTTS process only brings over the objects that are relevant to the migrated tablespaces and the tables in those tablespaces. The SYSTEM tablespace cannot be migrated as part of the XTTS process. Therefore, roles, grants, profiles, views, PL/SQL, sequences, triggers, etc. are not brought over.

Due to some meta-data information not being brought over, you will need to do a full meta-data export on the source side and import that data into the destination. This ensures that all objects are transferred to the destination system.

On this project, we utilized version 2 of the utility, since that was the latest version at the time our project began. However, a newer version is now available, version 3. This newer version of the utility has many additional features developed by Oracle in conjunction with our project. We recommend that you check for additional releases, as new functionality may be released at any time.

In version 2, you could not add a tablespace or data file once the initial RMAN level 0 backup image copy started. Therefore, if you went a week or more before finally entering the READ-ONLY window, you might have issues with space becoming a problem. In the new version, you can add tablespaces and/or data files at any time during the process.
SETUP

Software

Once you unzip the XTTS utility, you will find the following files:
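Among these files is xtt.properties, the utility's configuration file. As a hedged illustration only, a minimal file might contain entries like the following; the key names are our recollection of the v2 rman_xttconvert utility, the values are placeholders, and the authoritative list is in the MOS note referenced below.

```properties
# Hedged illustration only -- key names recalled from the v2 utility,
# values are placeholders for this paper's environment.
tablespaces=TS1,TS2
platformid=6
dfcopydir=/export/PROD/NFSAIX1B
backupformat=/export/PROD/NFSAIX1B
stageondest=/export/PROD/NFSAIX1B
storageondest=+DATA
parallel=2
```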
The xtt.properties file holds all of the configuration information about your environment: what data you want to migrate, where the files will be placed, and what RMAN configuration options you want to utilize.

Once the initial steps have been run on the source, you will need to copy this directory to the destination, since much of this information will be used there as well. Again, more on that later in this document.

Once all data files have had an image copy made for a given tablespace, the process moves on to the next tablespace in the list. In our specific case, the delivered process was not fast enough to meet our outage timeframe. Our client had over 530 tablespaces with over four thousand data files that needed to have an image copy made. Within our client's network, we were getting less than 100 MBs per second of throughput with a single xttdriver Perl script running. That would equate to over 23 days to create image copies of a 200 TBs database.

Initial Configuration
Network

Below is a diagram of our network environment.

[Figure: network environment diagram]

We did not enable Jumbo Frames, but if all of the network components in your path are rated for Jumbo Frames capability, this could potentially add additional network bandwidth.

We wanted to ensure the fastest throughput possible due to the size of the database we were attempting to migrate. Having the ability to re-run the process was very important to us. If it took a very long time to get the data pushed, the ability to re-run the process would be delayed.

NFS Shares

AIX Source Side

We mounted the shares on the AIX source side with the following entry:

ldcgrp01 /export/PROD/NFSAIX1B /export/PROD/NFSAIX1B nfs3 dio,bg,hard,intr,rsize=65536,wsize=65536,timeo=600

The DIO option was critical for us. Without it, we had significant performance issues with these mounts on the AIX box.

In addition, with a ZFS appliance, you need to ensure that the write bias parameter is set to LATENCY for writing data from the AIX side to the ZFS appliance. Again, we saw significant performance issues if that option was not set.
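On AIX the same mount can be made permanent with an /etc/filesystems stanza along these lines. This is a sketch reusing the mount entry above; verify the stanza attribute names against the AIX documentation for your release.

```
/export/PROD/NFSAIX1B:
        dev      = "/export/PROD/NFSAIX1B"
        vfs      = nfs
        nodename = ldcgrp01
        mount    = true
        options  = dio,bg,hard,intr,rsize=65536,wsize=65536,timeo=600
```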
ZFS Appliance

We followed Oracle's best practices for setting up our ZFS shares and projects; see the "REFERENCES" section below.

We created two ZFS pools as destinations for RMAN backup pieces. These would initially be used for the XTTS processing, but later would be utilized as the destination for the Exadata RMAN backups. These pools were created on each of the array controllers set up on the ZFS appliance. This ensured that we could read and write utilizing both array controllers for maximum performance. In addition, you need to ensure that you have a sufficient number of disks allocated to the pool for maximum I/O operations (IOPS).

As is standard practice, our ZFS controllers are set to cross-failover, so if one controller goes down, the shares are automatically mounted on the other controller.

Although during the XTTS process we do not write from Exadata back to the ZFS appliance, the write bias setting is critical if you use the ZFS appliance for RMAN backups from Exadata.

There are various dNFS views that you can query to ensure that your database is actually using the dNFS service when processing backup pieces. A couple of options are to monitor the gv$dnfs_stats view and/or to set up worksheets under the ZFS Analytics tab. Either will show you the throughput being generated by ZFS across the dNFS-utilized shares.

Exadata Destination Side

We mounted these NFS shares utilizing Oracle best practices for an Exadata environment. We are utilizing Direct NFS on the Exadata, but we still set up the fstab and the corresponding mount options.

Configuration

As we stated above, in order to get the required performance for our migration, we modified the way that the XTTS process works.

In our case, we decided to create forty (40) individual XTT directories. Each would hold a complete copy of the XTTS utility that we unzipped from Oracle. This would allow us to break up our 530+ tablespaces into forty processing groups. We would have PARALLEL=2 set in the xtt.properties file for each directory, so we would get eighty data files being written concurrently.
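The forty-directory layout described above can be sketched as follows. The base path is a placeholder, and the unzipped utility copy is represented here by just writing each directory's properties file.

```shell
# Sketch of the forty-directory setup described above. BASE is a
# placeholder; in practice each directory held a full copy of the
# unzipped XTTS utility plus its own xtt.properties.
BASE=$(mktemp -d)

for i in $(seq 1 40); do
  dir="$BASE/xtt$i"
  mkdir -p "$dir"
  # each directory gets PARALLEL=2, giving 40 x 2 = 80 concurrent copies
  printf 'parallel=2\n' > "$dir/xtt.properties"
done

ls "$BASE" | wc -l   # number of processing groups
```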
To complete the modified process, the following modifications and enhancements to the existing process were required:

XTTDRIVER.PL Modification

Directory Setup

Next, we created a custom Perl script that would read this spool file, pull out the tablespace string, and put it in the corresponding xtt.properties file for each directory.
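Our custom Perl script is not reproduced here, but its round-robin distribution of spooled tablespace names into the forty xtt.properties files can be sketched in shell. The spool file contents and directory layout below are placeholders.

```shell
# Sketch of distributing a spooled tablespace list across forty
# xtt.properties files, round-robin. The spool file here is a stand-in
# for the SQL*Plus output our (not reproduced) Perl script consumed.
WORK=$(mktemp -d)
for i in $(seq 1 40); do mkdir -p "$WORK/xtt$i"; done

# stand-in for the spooled list of tablespace names
printf 'TS_A\nTS_B\nTS_C\nTS_D\nTS_E\n' > "$WORK/tablespaces.lst"

n=0
while read -r ts; do
  dir=$(( n % 40 + 1 ))
  props="$WORK/xtt$dir/xtt.properties"
  if [ -s "$props" ]; then
    # append to the existing comma-separated tablespaces= line
    sed -i "s/^tablespaces=.*/&,$ts/" "$props"
  else
    printf 'tablespaces=%s\n' "$ts" > "$props"
  fi
  n=$(( n + 1 ))
done < "$WORK/tablespaces.lst"

cat "$WORK/xtt1/xtt.properties"   # tablespaces=TS_A
```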
[Table: XTT1 – XTT.PROPERTIES Key Values]
[Table: XTT2 – XTT.PROPERTIES Key Values]
We also created custom scripts to run the various steps of the XTTS process, since we would be running forty concurrent jobs instead of just one.

Below are all of the source side scripts that were used to run the xttdriver.pl script options.

RUNALL_B0.sh » This script runs the forty RMAN Prepare Phase jobs

xttbi.sh » This script sets up the env variables and runs the xttdriver.pl for the RMAN Incremental step

export ORACLE_HOME=/u01/app/oracle/11.2/11.2.0.3
export ORACLE_SID=EDWPRO
export TMPDIR=/home/edwpro/$1
cd $TMPDIR
$ORACLE_HOME/perl/bin/perl xttdriver.pl -i -L
Below are all of the destination side scripts that were used to run the xttdriver.pl script options.

On the Exadata side, we found that running the RMAN convert on all four nodes was extremely efficient. To take advantage of this, we broke up the jobs so that certain directories were run on each node. We actually kicked off four separate shell scripts on each node: 16 RMAN converts running concurrently across the RAC. This division allowed us to convert over 200 TBs of data in about 8 hours.

Here is a sample of the shell script on the destination side that we used for one of the sixteen convert jobs. There would be RMAN_CONV_PART1.sh thru RMAN_CONV_PART4.sh on each compute node.

RUNALL_CONV_PART1.sh » This script runs its share of the forty RMAN convert jobs

nohup xttconv.sh xtt1 1>run1 2>&1 &
wait
nohup xttconv.sh xtt5 1>run5 2>&1 &

xttconv.sh » This script sets up the env variables and runs the xttdriver.pl convert step

export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_1
export ORACLE_SID=EDWSTG1
export TMPDIR=/home/oracle/$1
cd $TMPDIR
$ORACLE_HOME/perl/bin/perl xttdriver.pl -c -L

The roll-forward processing, we found, was done just fine by running on a single Exadata node, so we did not break up that process.

RUNALL_FORWARD.sh » This script runs the forty RMAN convert and roll-forward jobs on a single node on target

date > runall.log
nohup xttforward.sh xtt1 1>run1 2>&1 &
...
nohup xttforward.sh xtt40 1>run40 2>&1 &

xttforward.sh » This script sets up the env variables and runs the xttdriver.pl script to convert and roll-forward each data file to make them up-to-date with the current incremental backup

export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_1
export ORACLE_SID=EDWSTG1
export TMPDIR=/home/oracle/$1
cd $TMPDIR
$ORACLE_HOME/perl/bin/perl xttdriver.pl -r -L -d
THE EXECUTION

Once everything was configured, we were ready to actually do the RMAN copy. The following is a list of the steps that were executed to migrate the database utilizing the XTTS process.

Kick off Prepare Phase [Source] » This kicks off 40 xttdriver.pl scripts that copy two data files at a time, so we are getting 80 data files copied concurrently.

Copy Directories from Source to Node 1 [Source / Target] » Copy the contents of XTT1 thru XTT40 to Node 1 on Exadata on the Target.

Copy Directories from Node 1 to Nodes 2, 3, and 4 [Target] » Copy key directories from node 1 to nodes 2, 3, and 4 as needed. This will allow for all four nodes to be used to do RMAN convert.

Kick off Incremental [Source] » This runs the incremental process on the source.

Kick off Convert / Roll-Forward [Target] » Kick off the RMAN convert and roll-forward on node 1 for all 40 XTT directories.
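Fanning the sixteen convert jobs out (four RMAN_CONV_PART scripts on each of the four compute nodes, as described earlier) can be sketched as follows. The node names are placeholders, and the ssh commands are printed rather than executed.

```shell
# Sketch of launching four RMAN_CONV_PART scripts on each of four nodes.
# Node names are placeholders; commands are printed, not run.
count=0
for node in node1 node2 node3 node4; do
  for part in 1 2 3 4; do
    echo "ssh $node 'nohup ./RMAN_CONV_PART$part.sh > conv_$part.log 2>&1 &'"
    count=$(( count + 1 ))
  done
done
echo "launched $count convert jobs"   # 16 in total
```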
These steps occur when the source tablespaces are in READ-ONLY mode:

Kick off full meta-data export [Source] » This runs a data pump export job to get all meta-data in the database; excludes tablespaces.

Create users [Target] » Load the roles, profiles, and users in the target system.

Kick off Convert / Roll-Forward [Target] » Kick off the RMAN convert and roll-forward on node 1 for all 40 XTT directories.

COMPLETE
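The meta-data plug-in over a database link described earlier can be sketched as a Data Pump command line. The connect string, link name, DIRECTORY object, tablespace list, and data file path are placeholders; NETWORK_LINK, TRANSPORT_TABLESPACES, and TRANSPORT_DATAFILES are standard Data Pump parameters, but verify the exact syntax for your release.

```shell
# Hedged sketch of the Import Plug-In step via a database link. All
# names (connect string, link, directory, tablespaces, .xtf path)
# are placeholders for illustration.
build_impdp_cmd() {
  # $1 = connect string, $2 = database link to source, $3 = tablespace list
  echo "impdp $1 NETWORK_LINK=$2 TRANSPORT_TABLESPACES=$3 TRANSPORT_DATAFILES='+DATA/ts1_1.xtf' DIRECTORY=DP_DIR LOGFILE=tts_plugin.log"
}

build_impdp_cmd "system@EDWSTG" "src_link" "TS1,TS2"
```

As with the export sketch, only the command string is built, so the fragment runs without a database.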
The initial Prepare phase takes the longest to execute. Using our modified version of the XTTS
process, we are unable to add tablespaces or data files during this entire process. Therefore,
getting the data migrated as quickly as possible is critical.
The modified process took about six days to complete and migrated close to 230 TBs of data. Using the existing XTTS process out-of-the-box would have taken a month or more to complete.

Once the initial process is complete, the RMAN convert process runs for about 8 to 10 hours on the Exadata system. By running the convert jobs on all four Exadata compute nodes, we were able to process about 25 TBs of data per hour.

Next are the repetitive steps to roll the destination database forward. These are the incremental backups, the copying of files, the convert / roll-forward processing, and then setting the new SCN. These steps are repeated over and over again until we are ready for the READ-ONLY window to occur. Basically, the first time the incremental backup is run, it will execute for the longest period of time, due to six days' worth of archive logs that have been generated to-date.

With each subsequent run, there will be fewer archive logs to process, so the time to run will go down. By the time the READ-ONLY window occurs, you should have less than 12 hours of archive logs to process. In our case, we had about 8 hours' worth, and it took about 2 hours for the incremental backup to run in the READ-ONLY window.

The entire READ-ONLY window takes around 16 hours to complete. At the end of this process, all of the data will have been successfully migrated to the new target system.

It is important to remember that your target database will be almost a mirror copy of your source. Your target system will have the same block size, the same national character set, the same encryption options, etc. In our case, we wanted to change a lot of things about our database structure that cannot be done as part of the XTTS process. Therefore, we had to do another migration, but that will be covered in another white paper.

THE ISSUES FOUND

During our migration utilizing Oracle's version 2 of the XTTS process, we did find a number of issues. I've listed these here in order to help clients out in the future, if they run into similar problems.

1. Ensuring that the DIO option was set on the NFS mounting options for the AIX server. Otherwise, performance was terrible and the AIX server started having memory issues.

2. Ensuring that the Write Bias is set to Latency on the ZFS appliance when writing from AIX to the ZFS NFS file shares. Again, otherwise, performance for the transfers was seriously degraded.

3. An issue with utilizing the Import Plug-In step via a database link required two patches to be applied: one to the Exadata target side, and one to the source AIX side. We were getting ORA- errors when attempting to use the plug-in after the July 2016 Bundle Patch was applied to our Exadata system.
4. We were unable to get the Export Transportable Tablespace dump files to work successfully when attempting to plug in these tablespaces on the target side. We consistently got errors that tablespaces were not self-contained, even though we knew they were. This was not an issue if we used the database link option.

5. A number of the .xtf extension files created in ASM as the new data files came over in the wrong format. The format should be: <tablespace_name>_<data file number>.xtf. Instead, we had about thirty data files that consistently did not follow this format. We had to manually fix our dynamically created Import Plug-In par file to match these anomalies in ASM.

6. Obviously, the speed of a single xttdriver.pl script was an issue that we had to overcome. So if you have a significantly large database or your network is fairly slow, running a single xttdriver.pl script may take too long for your migration.

7. The amount of time it takes to actually plug in the meta-data seems significantly longer than it should. It took nine hours for our meta-data about the tablespaces to get loaded. If you have a very large number of tables and indexes, something like Oracle EBS, this might take significantly longer to plug in.

SUMMARY

Hopefully, you found some insights from this paper. We learned quite a bit during this project and look forward to utilizing many of these techniques again on future projects.

This project has shown that the Cross Platform Transportable Tablespace (XTTS) utility delivered by Oracle does work and gives you a repeatable process for the future. In order to move very large databases, some modifications will need to be made. However, once in place, it is very possible to "cut-over" a very large database within a day.

There is no guarantee that you will not run into bugs and require patches, so plan for that in your project estimates. Network capacity and the ability to move the backup pieces easily are also key considerations. Although you don't need a ZFS appliance, having some type of mountable NFS share was quite helpful.

Thanks for reading our paper. If you have any questions or would like to discuss how Centric can help with your database migration, please reach out to me at:

[email protected]

REFERENCES

Refer to the following MOS note for additional details:
CROSS-PLATFORM TRANSPORTABLE TABLESPACES UTILIZING INCREMENTAL BACKUPS
About Centric Consulting
CENTRIC AT A GLANCE
Centric is a management and technology consulting company. We have over 700 consultants with
extensive experience delivering high-profile projects for clients of all shapes and sizes, including
Fortune 500 companies.
Please visit our web site for a complete description of our ERP offerings. www.centricconsulting.com
100% of Centric's clients are willing to reference our work.
CONTACT INFORMATION
Andy Park, VP and Partner, EAS Chris Szaz, VP and Partner, EAS Jay Barnhart, Sr. Tech Manager, EAS
[email protected] [email protected] [email protected]
(513) 382-3011 mobile (513) 235-2648 mobile (740) 501-2551 mobile