Session id: 40043

Data Pump in Oracle Database 10g:
Foundation for Ultra-High Speed Data Movement Utilities

George H. Claborn
Data Pump Technical Project Leader
Oracle Corporation, New England Development Center
Data Pump: Overview

• What is it?
• Main features
• Architecture
• Performance
• Things to keep in mind
• Some thoughts on original exp / imp
Data Pump: What is it?

• Server-based facility for high-performance loading and unloading of data and metadata
• Callable: DBMS_DATAPUMP. Internally uses DBMS_METADATA
• Data is written in Direct Path stream format; metadata is written as XML
• New clients expdp and impdp: supersets of original exp / imp
• Foundation for initial instantiation of Streams, Logical Standby, Grid, Transportable Tablespaces and Data Mining
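Because the engine is callable, an export can be driven entirely from PL/SQL. A minimal sketch, assuming a DMPDIR directory object already exists; the job, file and schema names are illustrative:

```sql
DECLARE
  h         NUMBER;
  job_state VARCHAR2(30);
BEGIN
  -- Open a schema-mode export job
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT',
                          job_mode  => 'SCHEMA',
                          job_name  => 'DEMO_EXP');
  -- Write the dump file through the DMPDIR directory object
  DBMS_DATAPUMP.ADD_FILE(handle    => h,
                         filename  => 'demo01.dmp',
                         directory => 'DMPDIR');
  -- Restrict the job to a single schema
  DBMS_DATAPUMP.METADATA_FILTER(handle => h,
                                name   => 'SCHEMA_EXPR',
                                value  => 'IN (''SCOTT'')');
  DBMS_DATAPUMP.START_JOB(h);
  -- Block until the job finishes
  DBMS_DATAPUMP.WAIT_FOR_JOB(h, job_state);
END;
/
```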
Features: Performance!!

• Automatic, two-level parallelism
  – Direct Path for inter-partition parallelism
  – External Tables for intra-partition parallelism
  – Simple: parallel=<number of active threads>
  – Dynamic: workers can be added to and removed from a running job in Enterprise Edition
  – Index builds automatically “parallelized” up to the degree of the job
• Simultaneous data and metadata unload
• Single thread of data unload: 1.5-2X exp
• Single thread of data load: 15X-40X imp
• With index builds: 4-10X imp
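On the command line, the two-level parallelism collapses to a single parameter. A sketch parameter file, assuming a DMPDIR directory object (all names are illustrative); the %u wildcard lets each worker write its own file:

```
# hr_exp.par -- illustrative; run as: expdp system parfile=hr_exp.par
schemas=HR
directory=DMPDIR
dumpfile=hr%u.dmp
parallel=4
logfile=hr_exp.log
```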
Features: Checkpoint / Restart

• Job progress recorded in a “Master Table”
• May be explicitly stopped and restarted later:
  – Stop after the current item finishes, or stop immediate
• Abnormally terminated jobs are also restartable
• Problematic objects can be skipped on restart
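A stop-and-restart round trip might look like this from the client. The job name is illustrative; by default the server generates one such as SYS_EXPORT_SCHEMA_01:

```
$ expdp system attach=SYS_EXPORT_SCHEMA_01
Export> stop_job=immediate     -- interrupted work items are re-done at restart

$ expdp system attach=SYS_EXPORT_SCHEMA_01
Export> start_job
```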
Features: Network Mode

• Network import: load one database directly from another
• Network export: unload a remote database to a local dump file set
  – Allows export of read-only databases
• Data Pump runs locally; the Metadata API runs remotely
• Uses DB links / listener service names, not pipes. Data is moved as
  ‘insert into <local table> select from <remote table>@service_name’
• The direct path engine is used on both ends
• It’s easy to swamp network bandwidth: be careful!
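Network mode hinges on an ordinary database link. A sketch, assuming a service name SRC_SERVICE and link name src_link (both illustrative):

```sql
-- On the importing (local) database: create a link to the source
CREATE DATABASE LINK src_link
  CONNECT TO hr IDENTIFIED BY hr USING 'SRC_SERVICE';

-- A network import then needs no dump file at all, e.g.:
--   impdp hr network_link=src_link schemas=HR directory=DMPDIR
-- Rows stream straight across the link via insert-select.
```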
Features: Fine-Grained Object Selection

• All object types are supported for both operations: export and import
• Exclude: specified object types are excluded from the operation
• Include: only the specified object types are included.
  E.g., just retrieve packages, functions and procedures
• More than one of each can be specified, but use of both is prohibited by the new clients
• Both take an optional name filter for even finer granularity:
  – INCLUDE=PACKAGE:"LIKE 'PAYROLL%'"
  – EXCLUDE=TABLE:"IN ('FOO','BAR', … )"
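As a parameter file, an include-only job that retrieves just the PL/SQL program units might look like this (schema and file names are illustrative). Note that the name filter rides on the object type, and that include and exclude cannot be combined:

```
# payroll_exp.par -- illustrative
schemas=PAYROLL
directory=DMPDIR
dumpfile=payroll.dmp
include=PACKAGE:"LIKE 'PAYROLL%'"
include=PROCEDURE
include=FUNCTION
```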
Features: Monitoring

• Flexible GET_STATUS call
• Per-worker status showing current object and percent done
• Initial job space estimate and overall percent done
• Job state and description
• Work-in-progress and errors
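Alongside GET_STATUS, running jobs can also be watched from the data dictionary. A sketch for a privileged user:

```sql
-- One row per active Data Pump job
SELECT owner_name, job_name, operation, job_mode, state, degree
FROM   dba_datapump_jobs;
```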
Features: Dump File Set Management

• Directory based: e.g., DMPDIR:export01.dmp, where DMPDIR is created as:
  SQL> create directory dmpdir as ‘/data/dumps’;
• Multiple, wildcarded file specifications supported:
  dumpfile=dmp1dir:full1%u.dmp, dmp2dir:full2%u.dmp
  – Files are created as needed on a round-robin basis from the available file specifications
• File size can be limited for manageability
• Dump file set coherency is automatically maintained
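Directory objects carry their own privileges, so a non-privileged user needs an explicit grant before exporting. A sketch; the user name is illustrative:

```sql
-- As a privileged user: create the directory and grant it out
CREATE DIRECTORY dmpdir AS '/data/dumps';
GRANT READ, WRITE ON DIRECTORY dmpdir TO hr;
```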
New Clients – expdp / impdp

• Similar (but not identical) look and feel to exp / imp
• All modes supported: full, schema, table, tablespace, transportable. Superset of exp / imp
• Flashback is supported
• Query supported by both expdp and impdp… and on a per-table basis!
• Detach from and attach to running jobs
• Multiple clients per job allowed, but a single client can attach to only one job at a time
• If privileged, attach to and control other users’ jobs
New Clients – expdp / impdp

• Interactive mode entered via Ctrl-C:
  – ADD_FILE: add dump files and wildcard specs to the job
  – PARALLEL: dynamically add or remove workers
  – STATUS: get detailed per-worker status and change the reporting interval
  – STOP_JOB[=IMMEDIATE]: stop the job, leaving it restartable. Immediate doesn’t wait for workers to finish their current work items… they’ll be re-done at restart
  – START_JOB: restart a previously stopped job
  – KILL_JOB: stop the job and delete all its resources (master table, dump files), leaving it unrestartable
  – CONTINUE: leave interactive mode, continue logging
  – EXIT: exit the client, leaving the job running
Features: Other Cool Stuff…

• DDL transformations are easy with XML:
  – REMAP_SCHEMA
  – REMAP_TABLESPACE
  – REMAP_DATAFILE
  – Segment and storage attributes can be suppressed
• Can extract and load just data, just metadata or both
• SQLFILE operation generates an executable DDL script
• If a table pre-exists at load time, you can skip it (default), replace it, truncate then load, or append to it
• Space estimates based on allocated blocks (default) or statistics if available
• Enterprise Manager interface integrates 9i and 10g
• Callable!
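The transformations combine naturally on import. A sketch parameter file that rewrites one schema's DDL into another and, via SQLFILE, writes the script without loading anything (all names are illustrative):

```
# hr_clone.par -- illustrative; run as: impdp system parfile=hr_clone.par
directory=DMPDIR
dumpfile=hr01.dmp
remap_schema=HR:HR_TEST
sqlfile=hr_ddl.sql
```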
Architecture: Block Diagram

[Diagram: clients (expdp, impdp, Enterprise Manager, and other clients such as Data Mining) sit on top of the Data Pump engine (DBMS_DATAPUMP, the data/metadata movement engine), which in turn draws on the External Table API (Oracle_Loader and Oracle_DataPump access drivers), the Direct Path API, and the Metadata API (DBMS_METADATA).]
Architecture: Flow Diagram

[Diagram: each user's client (User A: expdp; User B: OEM) talks to its own shadow process. Dynamic commands (stop, start, parallel, etc.) travel over a command-and-control queue to the Master Control Process, which writes the log file, maintains the master table, and coordinates workers (Worker A: metadata; Worker B: direct path with parallel processes 01 and 02; Worker C: external table). Work-in-progress and errors flow back on a status queue; data, metadata and the master table land in the dump file set.]
No Clients Required!

[Diagram: once started, the job runs entirely server-side: the Master Control Process, log file, master table, workers (metadata; direct path with parallel processes 01 and 02; external table) and the dump file set all continue with no client attached.]
Data Pump: Performance Tuning

• Default initialization parameters are fine!
  – Make sure disk_asynch_io remains TRUE
• Spread the I/O!
• Parallel = no more than 2X the number of CPUs: do not exceed disk spindle capacity
  – Corollary: SPREAD THE I/O!!!
• Sufficient SGA for AQ messaging and Metadata API queries
• Sufficient rollback for long-running queries

That’s it!
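The one setting worth double-checking can be inspected from SQL*Plus:

```sql
-- Should report TRUE on platforms that support asynchronous I/O
SHOW PARAMETER disk_asynch_io
```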
Large Internet Company
2 Fact Tables: 16.2M rows, 2 GB

Program                                           Elapsed
exp out of the box: direct=y                      0 hr 10 min 40 sec
exp tuned: direct=y buffer=2M recordlength=64K    0 hr 04 min 08 sec
expdp out of the box: parallel=1                  0 hr 03 min 12 sec
imp out of the box                                2 hr 26 min 10 sec
imp tuned: buffer=2M recordlength=64K             2 hr 18 min 37 sec
impdp out of the box: parallel=1                  0 hr 03 min 05 sec

With one index per table:
imp tuned: buffer=2M recordlength=64K             2 hr 40 min 17 sec
impdp: parallel=1                                 0 hr 25 min 10 sec
Oracle Applications Seed Database

• Metadata intensive: 392K objects, 200 schemas, 10K tables, 2.1 GB of data total
• Original exp / imp total: 32 hrs 50 min
  – exp: 2 hr 13 min
  – imp: 30 hrs 37 min
• Data Pump expdp / impdp total: 15 hrs 40 min
  – expdp: 1 hr 55 min
  – impdp: 13 hrs 45 min
  – Parallel=2 for both expdp and impdp
Keep in Mind:

• Designed for *big* jobs with lots of data
  – Metadata performance is about the same
  – More complex infrastructure, longer startup
• XML is bigger than DDL, but much more flexible
• Data format in dump files is ~15% more compact than exp’s
• Import subsetting is accomplished by pruning the Master Table
Original exp and imp

• Original imp will be supported forever to allow loading of V5 – V9i dump files
• Original exp will ship at least in 10g, but may not support all new functionality
• 9i exp may be used for downgrades from 10g
• Original and Data Pump dump file formats are not compatible
10g Beta Feedback

• British Telecom:
  Ian Crocker, Performance & Storage Consultant
  “We have tested Oracle Data Pump, the new Oracle10g Export and Import Utilities. Data Pump Export performed twice as fast as Original Export, and Data Pump Import performed ten times faster than Original Import. The new manageability features should give us much greater flexibility in monitoring job status.”

• Airbus Deutschland:
  Werner Kawollek, Operation Application Management
  “We have tested the Oracle Data Pump Export and Import utilities and are impressed by their rich functionality. First tests have shown tremendous performance gains in comparison to the original export and import utilities.”
Please visit our Demo!
Oracle Database 10g Data Pump: Faster and Better Export / Import

Try out Data Pump’s tutorial in the Oracle By Example (OBE) area:
“Unloading and Loading Data Base Contents”
Questions & Answers

Editor's Notes

• #5: Server, not client. Write your own export and import! The Data Pump stream format mirrors what’s on disk… minimal conversions required. Caveat the “superset” comment.
• #6: Describe requirements gathering: performance was customer requirements 1, 2 & 3. 1.5-2X exp direct path! Even faster against exp conventional. See me after for some tricks on how to increase original imp performance.
• #7: 2nd most requested enhancement.
• #8: Ask, “How many do the pipe trick?” Both operations support a “network mode” where the source is a remote instance.
• #9: Original exp: grants, indexes, triggers and constraints *only*.
• #11: Explain the security issue. If dmpdir1 points to data1 and dmpdir2 to data2, then files will be created as /data1/full101.dmp, /data2/full201.dmp, /data1/full102.dmp, /data2/full202.dmp. However, the specified size must be large enough to hold the entire Master Table at the end of export. File specs must span the entire dump set at import time.
• #12: XML schemas and views are not yet supported in Data Pump. Start a job up when leaving the office and monitor it from home.
• #13: A whole slide for the very cool interactive mode…
• #14: Explain XML. Original exp can’t do just data. Caveat on “truncate”: target of referential constraints. Append is the default for data-only.
• #18: Important on those few platforms that don’t support asynch I/O by default.