Kimball University: The Subsystems of ETL Revisited
These 34 subsystems cover the crucial extract, transform and load architecture
components required in almost every dimensional data warehouse environment.
Understanding the breadth of requirements is the first step to putting an effective
architecture in place.
By Bob Becker
October 21, 2007
Through education and consulting work, Kimball Group has been exposed to hundreds
of successful data warehouses. Careful study of these successes has revealed a set of
extract, transformation, and load (ETL) best practices. We first described these best
practices in an Intelligent Enterprise column three years ago (see "The 38 Subsystems
of ETL"). Since then we have continued to refine the practices based on client
experiences, feedback from students and continued research. As a result, we have
carefully restructured these best practices into 34 subsystems that represent the key
ETL architecture components required in almost every dimensional data warehouse
environment. No wonder the ETL system takes such a large percentage of data
warehouse and BI project resources!
The good news is that if you study these 34 subsystems, you'll recognize almost all of
them and will be on the way to leveraging your experience as you build your ETL
system. While we accept the industry's well-established acronym, the "ETL"
process really has four major components: Extracting, Cleaning and Conforming,
Delivering and Managing. Each of these components and all 34 subsystems contained
therein are explained below.
EXTRACTING: GETTING DATA INTO THE DATA WAREHOUSE
Not surprisingly, the initial subsystems of the ETL architecture address the issues of
understanding your source data, extracting the data and transferring it to the data
warehouse environment where the ETL system can operate on it independent of the
operational systems. While the remaining subsystems focus on the transforming, loading
and system management within the ETL environment, the initial subsystems interface with
the source systems to access the required data. The extract-related ETL subsystems
include:
Data Profiling (subsystem 1) — Explores a data source to determine its fit for
inclusion as a source and the associated cleaning and conforming requirements.
Change Data Capture (subsystem 2) — Isolates the changes that occurred in the
source system to reduce the ETL processing burden (see the sketch following this
list).
Extract System (subsystem 3) — Extracts and moves source data into the data
warehouse environment for further processing.
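To make change data capture concrete, here is a minimal Python sketch of the common timestamp-based approach, assuming the source table carries a last-modified column and that the high-water mark from the previous successful extract is persisted between runs; the table and column names are invented for illustration.

import sqlite3

def extract_changed_rows(conn: sqlite3.Connection, last_extract_ts: str) -> list:
    """Pull only the rows touched since the previous successful extract."""
    cursor = conn.execute(
        "SELECT customer_id, name, last_modified FROM customers "
        "WHERE last_modified > ?",
        (last_extract_ts,),
    )
    return cursor.fetchall()

# Demo against an in-memory source: only the row changed after the saved
# high-water mark comes back. Advance the mark only after the downstream
# load commits successfully.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (customer_id, name, last_modified)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?, ?)",
    [(1, "Ada", "2007-10-01"), (2, "Bob", "2007-10-20")],
)
print(extract_changed_rows(conn, "2007-10-15"))  # -> [(2, 'Bob', '2007-10-20')]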
CLEANING AND CONFORMING DATA
These critical steps are where the ETL system adds value to the data. The other
activities, extracting and delivering data, are obviously important, but they simply move
and load the data. The cleaning and conforming subsystems change data and enhance
its value to the organization. In addition, these subsystems should be architected to
create metadata used to diagnose source-system problems. Such diagnoses can
eventually lead to business process reengineering initiatives to address the root causes
of dirty data and to improve data quality over time.
The ETL data cleaning process is often expected to fix dirty data, yet at the same time
the data warehouse is expected to provide an accurate picture of the data as it was
captured by the organization's production systems (see related article, "Data
Stewardship 101: First Step to Quality and Consistency"). It's essential to strike the
proper balance between these conflicting goals. The key is to develop an ETL system
capable of correcting, rejecting or loading data as is, and then highlighting, with easy-to-
use structures, the modifications, standardizations, rules and assumptions of the
underlying cleaning apparatus so the system is self-documenting.
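As a minimal illustration of that correct-reject-or-load-as-is decision, here is a Python sketch of a single data quality screen; the postal-code rule, row layout and error-event log are illustrative assumptions, not a prescribed design.

from dataclasses import dataclass, field

@dataclass
class ErrorEventLog:
    """Stands in for the error event tracking described in subsystem 5."""
    events: list = field(default_factory=list)

    def record(self, screen: str, row_id, action: str) -> None:
        self.events.append({"screen": screen, "row_id": row_id, "action": action})

def screen_postal_code(row: dict, log: ErrorEventLog):
    """Correct a fixable violation, reject a fatal one, or pass the row as is."""
    code = (row.get("postal_code") or "").strip()
    if code.isdigit() and len(code) == 5:
        row["postal_code"] = code                 # load as is
        return row
    if code.isdigit() and len(code) == 9:
        row["postal_code"] = code[:5]             # correct: fold ZIP+4 down to ZIP
        log.record("postal_code", row["id"], "corrected")
        return row
    log.record("postal_code", row["id"], "rejected")  # reject: unusable value
    return None

Every recorded event, whether a correction or a rejection, becomes part of the self-documenting metadata described above.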
The five major subsystems in the cleaning and conforming step include:
Data Cleansing System (subsystem 4) — Implements data quality processes to
catch quality violations.
Error Event Tracking (subsystem 5) — Captures all error events that are vital
inputs to data quality improvement.
Audit Dimension Creation (subsystem 6) — Attaches metadata to each fact table
as a dimension. This metadata is available to BI applications for visibility into
data quality.
Deduplication (subsystem 7) — Eliminates redundant members of core
dimensions, such as customers or products. This may require integration across
multiple sources and application of survivorship rules to identify the most
appropriate version of a duplicate row (see the sketch following this list).
Data Conformance (subsystem 8) — Enforces common dimension attributes
across conformed master dimensions and common metrics across related fact
tables (see related article, "Kimball University: Data Integration for Real People").
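Here is a minimal Python sketch of deduplication with a survivorship rule, assuming duplicate rows have already been matched on a shared natural key; the "most complete, then most recent" policy shown is just one plausible survivorship rule, and the field names are invented.

from itertools import groupby
from operator import itemgetter

def survive(duplicates: list) -> dict:
    """Pick the surviving row: prefer completeness, break ties on recency."""
    def score(row: dict):
        completeness = sum(1 for v in row.values() if v not in (None, ""))
        return (completeness, row["updated_at"])  # assumes comparable timestamps
    return max(duplicates, key=score)

def deduplicate(rows: list) -> list:
    """Collapse each group of matched rows down to its survivor."""
    rows = sorted(rows, key=itemgetter("natural_key"))
    return [survive(list(group))
            for _, group in groupby(rows, key=itemgetter("natural_key"))]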
DELIVERING: PREPARE FOR PRESENTATION
The primary mission of the ETL system is the handoff of the dimension and fact tables in
the delivery step. There is considerable variation in source data structures and cleaning
and conforming logic, but the delivery processing techniques are more defined and
disciplined. Careful and consistent use of these techniques is critical to building a
successful dimensional data warehouse that is reliable, scalable and maintainable.
Many of these subsystems focus on dimension table processing. Dimension tables are
the heart of the data warehouse. They provide the context for the fact tables and hence
for all the measurements. For many dimensions, the basic load plan is relatively simple:
perform basic transformations to the data to build dimension rows to be loaded into the
target presentation table.
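A minimal Python sketch of that basic load plan might look like the following; the column names, default values and in-memory key counter are assumptions for illustration (a real system would draw keys from the surrogate key generator, subsystem 10 below).

import itertools

surrogate_keys = itertools.count(start=1)  # stand-in for subsystem 10

def build_dimension_row(source: dict) -> dict:
    """Apply light transformations to shape a source record into a dimension row."""
    return {
        "product_key": next(surrogate_keys),              # surrogate, not natural, key
        "product_natural_key": source["sku"],
        "product_name": source["name"].strip().title(),   # standardize casing
        "category": source.get("category") or "Unknown",  # avoid nulls in dimensions
    }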
Preparing fact tables is certainly important as they hold the key measurements of the
business that users want to see. Fact tables can be very large and time consuming to
load. However, preparing fact tables for presentation is typically more straightforward
than preparing dimension tables.
The delivery systems in the ETL architecture consist of:
Slowly Changing Dimension (SCD) Manager (subsystem 9) — Implements logic
for slowly changing dimension attributes.
Surrogate Key Generator (subsystem 10) — Produces surrogate keys
independently for every dimension.
Hierarchy Manager (subsystem 11) — Delivers multiple, simultaneous,
embedded hierarchical structures in a dimension.
Special Dimensions Manager (subsystem 12) — Creates placeholders in the ETL
architecture for repeatable processes supporting an organization's specific
dimensional design characteristics, including standard dimensional design
constructs such as junk dimensions, mini-dimensions and behavior tags.
Fact Table Builders (subsystem 13) — Construct the three primary types of fact
tables: transaction grain, periodic snapshot and accumulating snapshot.
Surrogate Key Pipeline (subsystem 14) — Replaces operational natural keys in
the incoming fact table record with the appropriate dimension surrogate keys (see
the sketch following this list).
Multi-Valued Bridge Table Builder (subsystem 15) — Builds and maintains bridge
tables to support multi-valued relationships.
Late Arriving Data Handler (subsystem 16) — Applies special modifications to
the standard processing procedures to deal with late-arriving fact and dimension
data.
Dimension Manager (subsystem 17) — Centralized authority who prepares and
publishes conformed dimensions to the data warehouse community.
Fact Table Provider (subsystem 18) — Owns the administration of one or more
fact tables and is responsible for their creation, maintenance and use.
Aggregate Builder (subsystem 19) — Builds and maintains aggregates to be
used seamlessly with aggregate navigation technologies for enhanced query
performance.
OLAP Cube Builder (subsystem 20) — Feeds data from the relational
dimensional schema to populate OLAP cubes.
Data Propagation Manager (subsystem 21) — Prepares conformed, integrated
data from the data warehouse presentation server for delivery to other
environments for special purposes.
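To illustrate the surrogate key pipeline, here is a minimal Python sketch that swaps each fact record's natural keys for the current dimension surrogate keys and parks unresolvable rows for the late arriving data handler; the lookup tables, key names and suspense list are invented for illustration.

suspense = []  # rows parked until their dimension member arrives (subsystem 16)

def handle_late_arriving(row: dict) -> None:
    suspense.append(row)

def surrogate_key_pipeline(fact_rows, customer_lookup: dict, product_lookup: dict):
    """Yield load-ready fact rows; divert rows whose keys cannot be resolved."""
    for row in fact_rows:
        try:
            yield {
                "customer_key": customer_lookup[row["customer_id"]],
                "product_key": product_lookup[row["product_sku"]],
                "quantity": row["quantity"],
                "amount": row["amount"],
            }
        except KeyError:
            handle_late_arriving(row)  # keep the measurement, resolve it later

# Usage: the second row references an unknown customer, so it is parked
# rather than lost.
facts = [
    {"customer_id": "C-100", "product_sku": "SKU-7", "quantity": 2, "amount": 19.98},
    {"customer_id": "C-999", "product_sku": "SKU-7", "quantity": 1, "amount": 9.99},
]
print(list(surrogate_key_pipeline(facts, {"C-100": 1}, {"SKU-7": 42})))
print(suspense)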
MANAGING THE ETL ENVIRONMENT
A data warehouse will not be a success until it can be relied upon as a dependable
source for business decision making. To achieve this goal, the ETL system must
constantly work toward fulfilling three criteria:
Reliability. The ETL processes must run consistently to provide data on a timely basis
that is trustworthy at any level of detail.
Availability. The data warehouse must meet its service level agreements. The
warehouse should be up and available as promised.
Manageability. A successful data warehouse is never done; it constantly grows and
changes along with the business. Thus, ETL processes need to evolve gracefully as
well.
The ETL management subsystems are the key architectural components that help
achieve the goals of reliability, availability and manageability. Operating and maintaining
a data warehouse in a professional manner is not much different from other systems
operations: follow standard best practices, plan for disaster and practice your
recovery procedures (see related article, "Don't Forget the Owner's Manual"). Many
of you will be very familiar with the
following requisite management subsystems:
Job Scheduler (subsystem 22) — Reliably manages the ETL execution strategy,
including the relationships and dependencies between ETL jobs (see the sketch
following this list).
Backup System (subsystem 23) — Backs up the ETL environment for recovery,
restart and archival purposes.
Recovery and Restart (subsystem 24) — Processes for recovering the ETL
environment or restarting a process in the event of failure.
Version Control (subsystem 25) — Takes snapshots for archiving and recovering
all the logic and metadata of the ETL pipeline.
Version Migration (subsystem 26) — Migrates a complete version of the ETL
pipeline from development into test and finally into production.
Workflow Monitor (subsystem 27) — Ensures that the ETL processes are
operating efficiently and that the warehouse is being loaded on a consistently
timely basis.
Sorting (subsystem 28) — Provides the fundamental high-performance sorting
capability that many ETL processing steps depend on.
Lineage and Dependency (subsystem 29) — Identifies the source of a data
element and all intermediate locations and transformations for that data element
or, conversely, starts with a specific data element in a source table and reveals all
activities performed on that data element.
Problem Escalation (subsystem 30) — Support structure that elevates ETL
problems to appropriate levels for resolution.
Paralleling and Pipelining (subsystem 31) — Enables the ETL system to
automatically leverage multiple processors or grid computing resources to deliver
within time constraints.
Security (subsystem 32) — Ensures authorized access to (and a historical record
of access to) all ETL data and metadata by individual and role.
Compliance Manager (subsystem 33) — Supports the organization's compliance
requirements typically through maintaining the data's chain of custody and
tracking who had authorized access to the data.
Metadata Repository (subsystem 34) — Captures ETL metadata including the
process metadata, technical metadata and business metadata, which make up
much of the metadata of the total DW/BI environment.
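As a small illustration of the job scheduler's dependency management, here is a Python sketch that orders ETL jobs with a topological sort so no job runs before its prerequisites; the job graph itself is an invented example.

from graphlib import TopologicalSorter

# Each job maps to the set of jobs it depends on.
jobs = {
    "load_customer_dim": {"extract_customers"},
    "load_product_dim": {"extract_products"},
    "load_sales_facts": {"load_customer_dim", "load_product_dim"},
    "build_aggregates": {"load_sales_facts"},
}

for job in TopologicalSorter(jobs).static_order():
    print("run:", job)  # a real scheduler adds retries, alerting and SLA tracking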
SUMMING IT UP
As you may now better appreciate, building an ETL system is unusually challenging.
The ETL architecture must include a host of subsystems to meet the demanding
requirements placed on the data warehouse. To succeed, carefully consider each of
these 34 subsystems: understand the breadth of requirements and then put an
appropriate and effective architecture in place. ETL is more than just extract,
transform and load; it's a host of complex and important tasks.
Bob Becker is a member of Kimball Group. He has focused on dimensional data
warehouse consulting and education since 1989. Contact him at
bob@kimballgroup.com.