
Unit 3

Introduction to Database Management Systems
Learning Objectives

• Describe how the problems of managing data resources in a traditional file environment are solved by a database management system
• Describe the capabilities and value of a database management system
• Apply important database design principles
• Evaluate tools and technologies for accessing information from databases
to improve business performance and decision making
• Assess the role of information policy, data administration, and data
quality assurance in the management of a firm’s data resources
RR Donnelley Tries to Master Its Data

• Problem: Explosive growth created information management challenges.


• Solution: Use MDM to create an enterprise-wide set of data, preventing unnecessary data duplication.
• Master data management (MDM) enables companies like RR Donnelley to eliminate outdated, incomplete, or incorrectly formatted data.
• Demonstrates IT’s role in successful data management.
• Illustrates digital technology’s role in storing and organizing data.
Organizing Data in a Traditional File Environment

• File organization concepts


Database: Group of related files
File: Group of records of same type
Record: Group of related fields
Field: Group of characters as word(s) or number
 Describes an entity (person, place, thing on which we store
information)
 Attribute: Each characteristic, or quality, describing entity
– E.g., Attributes Date or Grade belong to entity COURSE
Organizing Data in a Traditional File Environment

THE DATA
HIERARCHY
A computer system organizes
data in a hierarchy that starts
with the bit, which represents
either a 0 or a 1. Bits can be
grouped to form a byte to
represent one character,
number, or symbol. Bytes can be
grouped to form a field, and
related fields can be grouped to
form a record. Related records
can be collected to form a file,
and related files can be
organized into a database.
Organizing Data in a Traditional File Environment

• Problems with the traditional file environment (files maintained separately by different departments)
Data redundancy:
 Presence of duplicate data in multiple files
Data inconsistency:
 Same attribute has different values
Program-data dependence:
 When changes in a program require changes to the data accessed by that program
Lack of flexibility
Poor security
Lack of data sharing and availability
Organizing Data in a Traditional File Environment

TRADITIONAL FILE PROCESSING

The use of a traditional approach to file processing encourages each functional area in a
corporation to develop specialized applications. Each application requires a unique data file
that is likely to be a subset of the master file. These subsets of the master file lead to data
redundancy and inconsistency, processing inflexibility, and wasted storage resources.
The Database Approach to Data Management

• Database
Serves many applications by centralizing data and controlling
redundant data
• Database management system (DBMS)
Interfaces between applications and physical data files
Separates logical and physical views of data
Solves problems of traditional file environment
 Controls redundancy
 Eliminates inconsistency
 Uncouples programs and data
 Enables organization to centrally manage data and data security
The Database Approach to Data Management

HUMAN RESOURCES DATABASE WITH MULTIPLE VIEWS

A single human resources database provides many different views of data, depending on the
information requirements of the user. Illustrated here are two possible views, one of interest
to a benefits specialist and one of interest to a member of the company’s payroll
department.
The Database Approach to Data Management

• Relational DBMS
Represents data as two-dimensional tables called relations or files
Each table contains data on an entity and its attributes
• Table: grid of columns and rows
Rows (tuples): Records for different entities
Fields (columns): Represent an attribute of the entity
Key field: Field used to uniquely identify each record
Primary key: The key field designated to uniquely identify each record in the table
Foreign key: Primary key used in a second table as a look-up field to identify records from the original table (see the SQL sketch below)
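A minimal sketch of how the SUPPLIER and PART tables from the figure might be declared in SQL; attribute names other than Supplier_Number are assumptions added for illustration:

  CREATE TABLE SUPPLIER (
      Supplier_Number  INTEGER PRIMARY KEY,   -- primary key: uniquely identifies each supplier
      Supplier_Name    VARCHAR(50),
      Supplier_Street  VARCHAR(50)
  );

  CREATE TABLE PART (
      Part_Number      INTEGER PRIMARY KEY,   -- primary key of the PART table
      Part_Name        VARCHAR(50),
      Supplier_Number  INTEGER,               -- foreign key: look-up field pointing back to SUPPLIER
      FOREIGN KEY (Supplier_Number) REFERENCES SUPPLIER (Supplier_Number)
  );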
The Database Approach to Data Management

RELATIONAL DATABASE TABLES

A relational database organizes data in the form of two-dimensional tables. Illustrated here
are tables for the entities SUPPLIER and PART showing how they represent each entity and
its attributes. Supplier Number is a primary key for the SUPPLIER table and a foreign key for
the PART table.
The Database Approach to Data Management

• Operations of a Relational DBMS


Three basic operations used to develop useful sets of data
 SELECT: Creates subset of data of all records that meet stated criteria
 JOIN: Combines relational tables to provide user with more
information than available in individual tables
 PROJECT: Creates subset of columns in table, creating tables with only
the information specified
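In SQL terms, these three operations map roughly onto the WHERE clause (select), the JOIN clause (join), and the column list (project). A hedged example against the SUPPLIER and PART tables sketched earlier; the part numbers are invented values:

  -- select:  keep only rows meeting the stated criteria
  -- join:    combine PART with SUPPLIER on their shared Supplier_Number
  -- project: display only the columns named in the SELECT list
  SELECT PART.Part_Number,
         PART.Part_Name,
         SUPPLIER.Supplier_Name
  FROM   PART
  JOIN   SUPPLIER ON PART.Supplier_Number = SUPPLIER.Supplier_Number
  WHERE  PART.Part_Number IN (137, 150);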
The Database Approach to Data Management

THE THREE BASIC OPERATIONS OF A RELATIONAL DBMS

The select, join, and project operations enable data from two different tables to be
combined and only selected attributes to be displayed.
The Database Approach to Data Management

• Object-Oriented DBMS (OODBMS)


Stores data and procedures as objects
Objects can be graphics, multimedia, Java applets
Relatively slow compared with relational DBMS for processing large
numbers of transactions
Hybrid object-relational DBMS: Provide capabilities of both OODBMS
and relational DBMS
• Databases in the cloud
Typically less functionality than on-premises DBs
Amazon Web Services, Microsoft SQL Azure
The Database Approach to Data Management

• Capabilities of Database Management Systems


Data definition capability: Specifies structure of database content, used
to create tables and define characteristics of fields
Data dictionary: Automated or manual file storing definitions of data
elements and their characteristics
Data manipulation language: Used to add, change, delete, retrieve data
from database
 Structured Query Language (SQL)
 Microsoft Access provides user tools for generating SQL
Many DBMSs have report-generation capabilities for creating polished reports (e.g., Crystal Reports)
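A short, hedged illustration of data manipulation language statements against the tables sketched earlier; all values are invented:

  -- add records (the supplier row is inserted first so the foreign key is satisfied)
  INSERT INTO SUPPLIER (Supplier_Number, Supplier_Name, Supplier_Street)
  VALUES (8259, 'Acme Metals', '12 Main St.');

  INSERT INTO PART (Part_Number, Part_Name, Supplier_Number)
  VALUES (137, 'Door latch', 8259);

  -- change a record
  UPDATE PART SET Part_Name = 'Door latch assembly' WHERE Part_Number = 137;

  -- retrieve data
  SELECT Part_Number, Part_Name FROM PART WHERE Supplier_Number = 8259;

  -- delete a record
  DELETE FROM PART WHERE Part_Number = 137;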
The Database Approach to Data Management

• Designing Databases
Conceptual (logical) design: Abstract model from business perspective
Physical design: How database is arranged on direct-access storage
devices
• Design process identifies
Relationships among data elements, redundant database elements
Most efficient way to group data elements to meet business
requirements, needs of application programs
• Normalization
Streamlining complex groupings of data to minimize redundant data
elements and awkward many-to-many relationships
The Database Approach to Data Management

AN UNNORMALIZED RELATION FOR ORDER

An unnormalized relation contains repeating groups. For example, there can be many parts
and suppliers for each order. There is only a one-to-one correspondence between
Order_Number and Order_Date.
The Database Approach to Data Management

NORMALIZED TABLES CREATED FROM ORDER

After normalization, the original ORDER relation has been broken down into smaller relations: the repeating part and supplier data are moved into tables of their own, and ORDER retains only the attributes, such as Order_Number and Order_Date, that have a one-to-one correspondence with each order.
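A hedged SQL sketch of what such a decomposition might look like; the ORDER and LINE_ITEM entities appear in the entity-relationship diagram later in this unit, while the exact columns (e.g., Part_Quantity) are assumptions for illustration. PART and SUPPLIER are as sketched earlier.

  CREATE TABLE ORDERS (              -- named ORDERS because ORDER is a reserved word in SQL
      Order_Number  INTEGER PRIMARY KEY,
      Order_Date    DATE
  );

  CREATE TABLE LINE_ITEM (           -- resolves the many-to-many link between orders and parts
      Order_Number  INTEGER REFERENCES ORDERS (Order_Number),
      Part_Number   INTEGER REFERENCES PART (Part_Number),
      Part_Quantity INTEGER,
      PRIMARY KEY (Order_Number, Part_Number)
  );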
The Database Approach to Data Management

• Entity-relationship diagram
Used by database designers to document the data model
Illustrates relationships between entities
• Distributing databases: Storing database in more than one place
Partitioned: Separate locations store different parts of database
Replicated: Central database duplicated in entirety at different
locations
The Database Approach to Data Management

AN ENTITY-RELATIONSHIP DIAGRAM

This diagram shows the relationships between the entities SUPPLIER, PART, LINE_ITEM, and
ORDER that might be used to model the database in Figure 6-10.
Using Databases to Improve Business Performance
and Decision Making

• Very large databases and systems require special capabilities, tools


To analyze large quantities of data
To access data from multiple systems
• Three key techniques
Data warehousing
Data mining
Tools for accessing internal databases through the Web
Using Databases to Improve Business Performance
and Decision Making
• Data warehouse:
Stores current and historical data from many core operational
transaction systems
Consolidates and standardizes information for use across enterprise,
but data cannot be altered
Data warehouse system will provide query, analysis, and reporting
tools
• Data marts:
Subset of data warehouse
Summarized or highly focused portion of firm’s data for use by specific
population of users
Typically focuses on single subject or line of business
Using Databases to Improve Business Performance and Decision Making

COMPONENTS OF A DATA WAREHOUSE

FIGURE 6-12

The data warehouse extracts current and historical data from multiple operational systems inside
the organization. These data are combined with data from external sources and reorganized into a
central database designed for management reporting and analysis. The information directory
provides users with information about the data available in the warehouse.
Using Databases to Improve Business Performance
and Decision Making

• Business Intelligence:
Tools for consolidating, analyzing, and providing access to vast
amounts of data to help users make better business decisions
E.g., Harrah’s Entertainment analyzes customers to develop gambling
profiles and identify most profitable customers
Principal tools include:
 Software for database query and reporting
 Online analytical processing (OLAP)
 Data mining
Using Databases to Improve Business Performance
and Decision Making

• Online analytical processing (OLAP)


Supports multidimensional data analysis
 Viewing data using multiple dimensions
 Each aspect of information (product, pricing, cost, region, time period)
is different dimension
 E.g., how many washers sold in the East in June compared with other
regions?
OLAP enables rapid, online answers to ad hoc queries
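The washer example can be approximated with an aggregate SQL query over a hypothetical SALES table with Product, Region, Sales_Month, and Units_Sold columns (all names here are assumptions):

  -- units sold per region for one product in one month:
  -- a single slice of the product x region x time cube
  SELECT Region,
         SUM(Units_Sold) AS Total_Units
  FROM   SALES
  WHERE  Product = 'Washer'
    AND  Sales_Month = 'June'
  GROUP  BY Region
  ORDER  BY Total_Units DESC;

OLAP tools typically precompute such aggregations across many dimension combinations so that ad hoc questions like this can be answered rapidly.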
The Database Approach to Data Management

MULTIDIMENSIONAL DATA MODEL

FIGURE 6-13

The view currently showing is product versus region. If you rotate the cube 90 degrees, the face that will show is product versus actual and projected sales. If you rotate the cube 90 degrees again, you will see region versus actual and projected sales. Other views are possible.
Using Databases to Improve Business Performance
and Decision Making
• Data mining:
More discovery driven than OLAP
Finds hidden patterns, relationships in large databases and infers rules
to predict future behavior
E.g., Finding patterns in customer data for one-to-one marketing
campaigns or to identify profitable customers.
Types of information obtainable from data mining
 Associations
 Sequences
 Classification
 Clustering
 Forecasting
Using Databases to Improve Business Performance
and Decision Making
WHAT CAN BUSINESSES LEARN FROM TEXT MINING?
Read the Interactive Session and discuss the following questions

• What challenges does the increase in unstructured data present for businesses?
• How does text mining improve decision making?
• What kinds of companies are most likely to benefit from text mining
software? Explain your answer.
• In what ways could text mining potentially lead to the erosion of personal
information privacy? Explain.
Using Databases to Improve Business Performance
and Decision Making

• Web mining
Discovery and analysis of useful patterns and information from WWW
 E.g., to understand customer behavior, evaluate effectiveness of Web
site, etc.
Web content mining
 Knowledge extracted from content of Web pages
Web structure mining
 E.g., links to and from Web page
Web usage mining
 User interaction data recorded by Web server
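Web usage mining often starts from the Web server log loaded into a database table; a minimal sketch, assuming a hypothetical WEB_LOG table with Visitor_ID, Page_URL, and Visit_Time columns:

  -- which pages are requested most, and by how many distinct visitors
  SELECT Page_URL,
         COUNT(*) AS Page_Views,
         COUNT(DISTINCT Visitor_ID) AS Unique_Visitors
  FROM   WEB_LOG
  GROUP  BY Page_URL
  ORDER  BY Page_Views DESC;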
Using Databases to Improve Business Performance
and Decision Making

• Databases and the Web


Many companies use the Web to make some internal databases available to customers or partners
Typical configuration includes:
 Web server
 Application server/middleware/CGI scripts
 Database server (hosting the DBMS)
Advantages of using Web for database access:
 Ease of use of browser software
 Web interface requires few or no changes to database
 Inexpensive to add Web interface to system
Using Databases to Improve Business Performance and Decision Making

LINKING INTERNAL DATABASES TO THE WEB

Users access an organization’s internal database through the Web using their desktop PCs
and Web browser software.
Managing Data Resources

• Establishing an information policy


Firm’s rules, procedures, roles for sharing, managing, standardizing
data
Data administration:
 Firm function responsible for specific policies and procedures to
manage data
Data governance:
 Policies and processes for managing availability, usability, integrity,
and security of enterprise data, especially as it relates to government
regulations
Database administration:
 Defining, organizing, implementing, maintaining database; performed
by database design and management group
Managing Data Resources

• Ensuring data quality


More than 25% of critical data in Fortune 1000 company databases are
inaccurate or incomplete
Most data quality problems stem from faulty input
Before a new database is in place, organizations need to:
 Identify and correct faulty data
 Establish better routines for editing data once the database is in operation

Managing Data Resources

• Data quality audit:


Structured survey of the accuracy and level of completeness of the
data in an information system
 Survey samples from data files, or
 Survey end users for perceptions of quality
• Data cleansing
Software to detect and correct data that are incorrect, incomplete,
improperly formatted, or redundant
Enforces consistency among different sets of data from separate
information systems
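Parts of a data quality audit or cleansing pass can be expressed directly as SQL checks; a hedged sketch against a hypothetical CUSTOMER table:

  -- incomplete data: records with no postal code
  SELECT COUNT(*) FROM CUSTOMER WHERE Postal_Code IS NULL;

  -- redundant data: the same name and address entered more than once
  SELECT Customer_Name, Street_Address, COUNT(*) AS Copies
  FROM   CUSTOMER
  GROUP  BY Customer_Name, Street_Address
  HAVING COUNT(*) > 1;

  -- improperly formatted data: standardize inconsistent state values
  UPDATE CUSTOMER SET State = 'NY' WHERE State IN ('N.Y.', 'New York');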
Managing Data Resources

CREDIT BUREAU ERRORS—BIG PEOPLE PROBLEMS

Read the Interactive Session and discuss the following questions

• Assess the business impact of credit bureaus’ data quality problems for
the credit bureaus, for lenders, for individuals.
• Are any ethical issues raised by credit bureaus’ data quality problems?
Explain your answer.
• Analyze the people, organization, and technology factors responsible for
credit bureaus’ data quality problems.
• What can be done to solve these problems?
Data Warehouse and Data Mart
Introduction
• Data warehousing covers both data management and data analysis
• Goal: to integrate enterprise-wide corporate data into a single repository from which users can easily run queries
Benefits
• The major benefit of data warehousing is a high return on investment
• Increased productivity of corporate decision-makers
Problems
• Underestimation of resources for data loading
• Hidden problems with source systems
• Required data not captured
• Increased end-user demands
• Data homogenization
• High demand for resources
• Data ownership
• High maintenance
• Long-duration projects
• Complexity of integration
Architecture
Main Components
• Operational data sources
• Operational data store (ODS)
• Query manager
• End-user access tools: data reporting and query tools, application development tools, executive information system (EIS) tools, online analytical processing (OLAP) tools, and data mining tools
Data flow
• Inflow - The processes associated with the extraction, cleansing, and loading of the data from the source systems into the data warehouse.
• Upflow - The processes associated with adding value to the data in the warehouse through summarizing, packaging, and distribution of the data.
• Downflow - The processes associated with archiving and backing up of data in the warehouse.
Tools and Technologies
• Extraction
• Cleansing
• Transformation
• Loading: after these critical steps, the results are loaded into the target system (a combined sketch follows below)
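A hedged sketch of these steps in SQL, using a staging table; all table and column names are assumptions, and CREATE TABLE ... AS SELECT is widely but not universally supported:

  -- extraction: copy raw rows from a source system into a staging table
  CREATE TABLE STAGE_SALES AS SELECT * FROM SOURCE_SALES;

  -- cleansing: discard rows that are unusable
  DELETE FROM STAGE_SALES WHERE Sale_Amount IS NULL OR Sale_Date IS NULL;

  -- transformation and loading: standardize values and move them into the warehouse table
  INSERT INTO DW_SALES (Sale_Date, Region, Sale_Amount)
  SELECT Sale_Date, UPPER(Region), ROUND(Sale_Amount, 2)
  FROM   STAGE_SALES;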
Data Mart
• A data mart is a simple form of a data warehouse
that is focused on a single subject (or functional
area), such as sales, finance or marketing.
• Data marts are often built and controlled by a
single department within an organization.
• Given their single-subject focus, data marts usually draw data from only a few sources. The sources could be internal operational systems, a central data warehouse, or external data sources.
Dependent and Independent Data Marts

• Dependent data marts draw data from a central data warehouse that has already been created.
• Independent data marts, in contrast, are
standalone systems built by drawing data
directly from operational or external sources
of data, or both.
Extraction, Transformation, and Loading (ETL)

• The extraction, transformation, and loading (ETL) process involves moving data from operational systems, filtering it, and loading it into the data mart.
• With dependent data marts, this process is somewhat
simplified because formatted and summarized (clean) data
has already been loaded into the central data warehouse.
• The ETL process for dependent data marts is mostly a
process of identifying the right subset of data relevant to
the chosen data mart subject and moving a copy of it,
perhaps in a summarized form.
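For a dependent data mart, that step can reduce to selecting and summarizing the relevant subset with a single statement; a hedged sketch for a sales-subject mart, with all names assumed:

  -- identify the subset relevant to the sales subject and copy it in summarized form
  INSERT INTO SALES_MART (Sales_Month, Region, Total_Amount)
  SELECT Sales_Month, Region, SUM(Sale_Amount)
  FROM   DW_SALES          -- central data warehouse table, assumed to carry a Sales_Month column
  GROUP  BY Sales_Month, Region;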
Steps in Implementing a Data Mart

• Designing
• Constructing
• Populating
• Accessing
• Managing
Designing
• Gathering the business and technical requirements
• Identifying data sources
• Selecting the appropriate subset of data
• Designing the logical and physical structure of the data mart
Constructing
• Creating the physical database and storage structures, such as tablespaces, associated with the data mart
• Creating the schema objects, such as tables and indexes, defined in the design step
• Determining how best to set up the tables and the access structures
Populating
• Mapping data sources to target data structures
• Extracting data
• Cleansing and transforming the data
• Loading data into the data mart
• Creating and storing metadata
Accessing
• Set up an intermediate layer for the front-end tool to use.
• Maintain and manage these business interfaces.
• Set up and manage database structures, like summarized tables, that help queries submitted through the front-end tool execute quickly and efficiently (see the sketch below).
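One common intermediate structure is a summarized view (or table) that the front-end tool queries instead of scanning the detail data; a minimal sketch, reusing the assumed SALES_MART table from above:

  -- pre-summarized view so front-end queries avoid the detail rows
  CREATE VIEW MONTHLY_SALES_SUMMARY AS
  SELECT Sales_Month,
         SUM(Total_Amount) AS Month_Total   -- roll up across all regions per month
  FROM   SALES_MART
  GROUP  BY Sales_Month;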
Managing
• Providing secure access to the data
• Managing the growth of the data
• Optimizing the system for better performance
• Ensuring the availability of data even with system failures
Data Mart issues
• Data mart functionality
• Data mart size: performance deteriorates as data marts grow in size, so their size needs to be kept down to preserve performance
• Data mart load performance: two critical components are end-user response time and data-loading performance
Data Mining
• Data mining refers to extracting or mining knowledge from large amounts of data. The term is actually a misnomer; the process would be more appropriately named knowledge mining, which emphasizes mining knowledge from large amounts of data.
Properties
• The key properties of data mining are
– Automatic discovery of patterns
– Prediction of likely outcomes
– Creation of actionable information
– Focus on large datasets and databases
Tasks
• Anomaly detection (outlier/change/deviation detection) – The identification of unusual data records that might be interesting, or of data errors that require further investigation.

• Association rule learning (dependency modelling) – Searches for relationships between variables. For example, a supermarket might gather data on customer purchasing habits.

• Clustering – The task of discovering groups and structures in the data that are in some way "similar", without using known structures in the data.

• Classification – The task of generalizing known structure to apply to new data. For example, an e-mail program might attempt to classify an e-mail as "legitimate" or as "spam".

• Regression – Attempts to find a function which models the data with the least error.

• Summarization – Providing a more compact representation of the data set, including visualization and report generation.
