DBMS_Unit 5-1

Transaction processing in database systems ensures data integrity and consistency through the ACID properties: Atomicity, Consistency, Isolation, and Durability. It allows multiple users to access and modify data simultaneously while preventing errors and inconsistencies, utilizing mechanisms like concurrency control and serializability. The document also discusses transaction states, execution types, and methods for implementing atomicity and durability in database management systems.


Database Management System

MBA (Business Analytics)


Transaction Processing: Introduction
Transaction processing is a fundamental concept in database systems that ensures data is handled in
a safe, consistent, and reliable way.
A transaction is a single logical unit of work that may consist of one or more operations —
such as inserting, updating, or deleting data—that must all be completed successfully for
the database to remain in a consistent state. For example, consider transferring money from
one bank account to another. The process involves debiting one account and crediting another.
Both steps must succeed together; if one fails, the entire transaction must be rolled back to
prevent data inconsistency.
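The all-or-nothing behaviour of such a transfer can be sketched in Python (a hypothetical in-memory account table; the names and amounts are illustrative, and a real DBMS achieves this with logs and locks rather than a dictionary copy):

```python
# Minimal sketch of an atomic transfer over a toy in-memory "database".
accounts = {"A": 100, "B": 50}

def transfer(src, dst, amount):
    # Work on a copy so a failure leaves the original state untouched.
    snapshot = dict(accounts)
    try:
        snapshot[src] -= amount
        if snapshot[src] < 0:
            raise ValueError("insufficient funds")
        snapshot[dst] += amount
        accounts.update(snapshot)   # "commit": both changes become visible together
        return True
    except (KeyError, ValueError):
        return False                # "rollback": accounts was never modified

transfer("A", "B", 30)   # succeeds: A=70, B=80
transfer("A", "B", 999)  # fails: balances are left unchanged
```

Either both the debit and the credit take effect, or neither does; there is no intermediate state in which money has left one account but not arrived in the other.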

Transaction processing helps maintain data integrity, especially in multi-user environments, by ensuring that:
➢Data remains consistent even when multiple users access or modify it simultaneously.
➢Transactions are isolated from one another until completed.
➢Changes are permanent only when a transaction is successfully completed.
➢The system can recover to a consistent state in case of failures.
To achieve this, databases follow the ACID properties, also known as transaction properties, which are used to maintain the integrity, consistency, and accuracy of the database during transaction processing.
ACID stands for:

Atomicity:
• Either all operations of a transaction complete successfully, or none of them do.
• Success or failure applies to the transaction as a whole, never to part of it.
Consistency:
• Before and after each transaction, the database must be in a consistent state.
Isolation:
• Each transaction should be isolated from the others until it completes.
Durability:
• Once a transaction reaches the committed state, it remains committed.
• Its changes persist even if the system fails afterwards.
ATOMICITY

1. All operations of a transaction must be executed together; if that is not possible, the transaction is aborted.
2. A transaction cannot be executed partially.
3. Each transaction is treated as one unit.
4. Once a transaction starts to run, it must either complete or not execute at all.

Atomicity involves the following two operations:

Abort:
If a transaction aborts, none of its changes become visible.

Commit:
If a transaction commits, all of its changes become visible.


CONSISTENCY

• Integrity constraints are maintained so that the database is consistent both before and after the transaction.
• The execution of a transaction leaves the database in either its prior stable state or a new stable state.
• The consistency property states that every transaction sees a consistent database instance.
• A transaction transforms the database from one consistent state to another consistent state.
ISOLATION

• Data being used during the execution of one transaction cannot be used by a second transaction until the first one completes.
• If transaction T1 is executing and using data item X, then X cannot be accessed by any other transaction T2 until T1 ends.
• The concurrency control subsystem of the DBMS enforces the isolation property.
DURABILITY
• This property ensures that once the transaction has completed execution, the
updates and modifications to the database are stored in and written to disk
and they persist even if a system failure occurs.
• These updates now become permanent and are stored in non-volatile
memory. The effects of the transaction, thus, are never lost.
• When a transaction is completed, then the database reaches a state known as
the consistent state. That consistent state cannot be lost, even in the event of
a system's failure.
• The recovery subsystem of the DBMS is responsible for the durability property.
Transaction State
A transaction can be in any one of the following states:
1. Active
2. Partially committed
3. Committed
4. Failed
5. Aborted (terminated)
Transaction State
Active - This is the first state of a transaction where the operations are being
carried out, but nothing is finalized yet.
Example: You open the movie ticket app, select the movie, date, and seats.
These actions are part of the transaction, but the booking isn’t confirmed or
saved yet.
Partially Committed - At this point, the last step of the transaction has been
executed, but the changes are not yet saved in the database.
Example: You’ve reached the payment page and successfully entered your
card details. The system is now ready to finalize the transaction but hasn’t
actually booked the ticket yet.
Committed - In this state, all the changes made by the transaction are permanently saved to the database. This is the final step of a transaction if it executes without failure.
Example: The payment is successful, and the ticket is booked. Your selected
seats are now locked in the system, and a confirmation is sent to you.
Transaction State
4. Failed - If the transaction cannot proceed due to a system crash, power
failure, or an internal error, it enters the failed state.
Example: If the system crashes just as you click on the “Pay” button,
before the payment is processed, the transaction fails. No seat is booked
and the process stops.

5. Aborted - If the transaction has failed at any point, the system will roll
back all the changes made during the transaction to maintain database
consistency.
Example: Suppose the payment went through, but the server crashed
before booking confirmation. The system will cancel the payment and
return your money, ensuring that either the entire transaction completes or
nothing changes at all. The transaction can be retried or canceled
completely.
Implementation of Atomicity
1.Undo Logging (Rollback Logs) : Before any data page is changed, the DBMS
writes an undo log record containing the “before image” of that change.

How it enforces atomicity: If a transaction aborts or the system crashes before its
commit record, the recovery process walks the log backwards, using the
before-images to undo any partial updates. Ensures that no part of an uncommitted
transaction “sticks.”
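The before-image mechanism can be sketched in Python (a toy in-memory model; the record format is an illustrative assumption, not a real DBMS log format):

```python
# Toy undo log: before each write, record the before-image; on abort,
# walk the log backwards and restore the old values.
db = {"X": 10, "Y": 20}
undo_log = []  # list of (key, before_image) records

def write(key, value):
    undo_log.append((key, db[key]))  # log the before-image first
    db[key] = value                  # then apply the change

def abort():
    # Undo partial updates in reverse order of the writes.
    for key, before in reversed(undo_log):
        db[key] = before
    undo_log.clear()

write("X", 99)
write("Y", 77)
abort()   # the transaction aborts: both writes are undone
```

After the abort, the database is back in exactly the state it held before the transaction began, so no part of the uncommitted work "sticks".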

2. Shadow Paging (Shadow Copy): The system keeps two page maps: the current one and a shadow copy. Updates go to new pages; the old (shadow) pages remain untouched.

How it enforces atomicity: If the transaction fails, the DBMS simply discards the
modified in-memory map—no rollback of data pages is ever needed because the
on-disk shadow pages were never overwritten. Either the old state (shadow) or the
new state (updated map) becomes visible—nothing in between.
Implementation of Atomicity

3. Write-Ahead Logging (WAL) with Combined Undo/Redo Records: Every update generates a log entry before the actual data page write ("write-ahead").

How it supports atomicity: During recovery, any transaction without a committed log record is undone using the logged old values. (Note: Though WAL also supports durability via its redo capability, its undo component is what enforces atomicity.)
Implementation of Durability
A. Log-Based Methods
Rely on redo information to replay committed work after a crash.
1.Write-Ahead Logging (WAL / Redo Logs)
Mechanism
•Every update writes a log record containing the after-image (newValue) before
the data page is ever flushed to disk.
•On commit, the COMMIT record is forced to disk (Log-Force)
How It Ensures Durability
•Even if the system crashes before data pages land on disk, on recovery the
redo phase replays every “newValue” for transactions that had a logged
COMMIT.
Example:
An e-commerce order writes inventory and billing updates to the redo log, flushes the COMMIT record, returns "Success" to the user, and only then lets the background writer flush data pages. If a crash occurs immediately afterwards, recovery's redo phase ensures the order still appears in the database.
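The redo phase can be sketched in Python (a toy model; the log record shape is an illustrative assumption, not a real DBMS format):

```python
# Toy redo log: replay after-images only for transactions whose COMMIT
# record reached the log before the crash.
log = [
    ("T1", "write", "X", 5),
    ("T1", "commit", None, None),
    ("T2", "write", "Y", 9),   # T2 never logged a COMMIT -> not replayed
]

def recover(log):
    committed = {t for (t, op, _, _) in log if op == "commit"}
    db = {}
    for t, op, key, new_value in log:
        if op == "write" and t in committed:
            db[key] = new_value  # redo the after-image
    return db

recovered = recover(log)   # T1's write survives; T2's write is ignored
```

T1's update is durable because its COMMIT record is in the log; T2's update is discarded because T2 had not committed at the time of the crash.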
Implementation of Durability
B. Checkpointing – Snapshot-Based
•Periodically writes all modified (dirty) data pages to disk.
•Helps reduce recovery time by allowing the system to skip reprocessing older logs.
•Can be done as:
•Consistent checkpoints: freeze activity briefly to flush all changes.
•Fuzzy checkpoints: flush in the background while allowing ongoing transactions.

C. Redundancy – Storage-Based
•Stores duplicate copies of data/logs across different disks or locations.
•Techniques include:
•RAID: Hardware-based redundancy (e.g., mirroring or parity) for disk failure tolerance.
•Replication: Sends live copies to standby servers or other sites.
•Backups: Periodic copies to off-site/cloud storage for disaster recovery.
•Ensures data remains safe even if hardware or entire site fails.
Single-User versus Multiuser Systems
• One criterion for classifying a database system is according to the
number of users who can use the system concurrently.
• A DBMS is single-user if at most one user at a time can use the
system, and it is multiuser if many users can use the system—and
hence access the database—concurrently. Single-user DBMSs are
mostly restricted to personal computer systems; most other DBMSs
are multiuser.
• For example, an airline reservations system is used by hundreds of
travel agents and reservation clerks concurrently. Database systems
used in banks, insurance agencies, stock exchanges, supermarkets, and
many other applications are multiuser systems. In these systems,
hundreds or thousands of users are typically operating on the database
by submitting transactions concurrently to the system.
Concurrent execution of processes is actually interleaved

[Figure: interleaved execution vs. parallel execution]

Interleaving means mixing the operations of two or more transactions so that they run together, not one fully after the other.
Concurrent Executions
• Concurrency control is a concept in Database Management Systems
(DBMS) that ensures multiple transactions can simultaneously access or
modify data without causing errors or inconsistencies.
• It provides mechanisms to handle the concurrent execution in a way that
maintains ACID properties.
• By implementing concurrency control, a DBMS allows transactions to
execute concurrently while avoiding issues such as deadlocks, race
conditions, and conflicts between operations.
• The main goal of concurrency control is to ensure that simultaneous
transactions do not lead to data conflicts or violate the consistency of the
database. The concept of serializability is often used to achieve this goal.
What is Concurrent Executions
• In a multi-user system, several users can access and work on the same database at the same time. This
is known as concurrent execution, where the database is used simultaneously by different users
for various operations. For instance, one user might be updating data while another is retrieving it.
• When multiple transactions are performed on the database simultaneously, it is important that these
operations are executed in an interleaved manner. This means that the actions of one user should not
interfere with or affect the actions of another. This helps in maintaining the consistency of the database.
However, managing such simultaneous operations can be challenging, and certain problems may arise
if not handled properly. These challenges need to be addressed to ensure smooth and error-free
concurrent execution.
Concurrent Execution can lead to various challenges:
• Dirty Reads: One transaction reads uncommitted data from another transaction, leading to potential
inconsistencies if the changes are later rolled back.
• Lost Updates: When two or more transactions update the same data simultaneously, one update may
overwrite the other, causing data loss.
• Inconsistent Reads: A transaction may read the same data multiple times during its execution, and the
data might change between reads due to another transaction, leading to inconsistency.
Serial Schedule: The serial schedule is a type of Schedule where
one transaction is executed completely before starting another
transaction.
Non-serial Schedule: In a Non-serial schedule, multiple
transactions execute concurrently/simultaneously.

S1 - Serial Schedule (always gives consistent results):

  T1        T2
  R(A)
  W(A)
  R(B)
  W(B)
            R(A)
            W(A)
            R(B)
            W(B)

S2 - Non-Serial Schedule (operations of T1 and T2 are interleaved):

  T1        T2
  R(A)
  W(A)
            R(A)
            W(A)
  R(B)
  W(B)
            R(B)
            W(B)
Serial Schedule
• As the name says, all the transactions are executed serially one after
the other.
• In serial Schedule, a transaction does not start execution until the
currently running transaction finishes execution.
• This type of execution of the transaction is also known as non-
interleaved execution.
• Serial schedules are always recoverable, cascadeless, strict, and consistent. A serial schedule always gives the correct result.

Cascading Rollback — the rollback of one transaction causes a chain of rollbacks in others.
Non-Serial Schedule
• In a non-serial Schedule, multiple transactions execute
concurrently/simultaneously, unlike the serial Schedule, where one
transaction must wait for another to complete all its operations.
• In the Non-Serial Schedule, the other transaction proceeds without
the completion of the previous transaction.
• All the transaction operations are interleaved or mixed with each
other.
• Non-serial schedules are NOT always recoverable, cascadeless, strict, or consistent.
Serializability

• Serializability is a concept used to ensure correctness when multiple transactions run at the same time (concurrently) in a database.
• Serializability means the final result of a set of concurrent transactions is the same as if the transactions had run one after another, in some order (serially).
• A serializable schedule always leaves the database in a consistent state.
Types of Serializability

Conflict Serializable: A schedule is conflict serializable if its transactions can be rearranged into a serial order without changing the conflicting operations (like read/write on the same data item). It only considers conflicts between operations.

View Serializable: A schedule is view serializable if its transactions can be rearranged into a serial order where the final result of the transactions (the data they produce) is the same, even if some operations don't conflict directly.
Types of Serializability

A schedule is called conflict serializable if it can be transformed into a serial schedule by swapping non-conflicting operations.

A pair of operations is conflicting if all of the following conditions hold:
1. They belong to different transactions.
2. They access the same data item.
3. At least one of them is a write operation.
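A schedule can be tested for conflict serializability by building a precedence graph from these conflicts and checking it for cycles. A minimal Python sketch (the schedule encoding is an illustrative assumption):

```python
from itertools import combinations

# A schedule as a list of (transaction, operation, data_item) steps.
# Two steps conflict if they come from different transactions, touch the
# same data item, and at least one of them is a write.
schedule = [
    ("T1", "R", "A"), ("T1", "W", "A"),
    ("T2", "R", "A"), ("T2", "W", "A"),
    ("T1", "R", "B"), ("T1", "W", "B"),
    ("T2", "R", "B"), ("T2", "W", "B"),
]

def precedence_edges(schedule):
    edges = set()
    for (i, (ti, oi, xi)), (j, (tj, oj, xj)) in combinations(enumerate(schedule), 2):
        if ti != tj and xi == xj and "W" in (oi, oj):
            edges.add((ti, tj))  # earlier conflicting op gives edge ti -> tj
    return edges

def has_cycle(edges):
    # Depth-first search for a cycle in the precedence graph.
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
    def visit(node, path):
        if node in path:
            return True
        return any(visit(n, path | {node}) for n in graph.get(node, ()))
    return any(visit(n, frozenset()) for n in graph)

edges = precedence_edges(schedule)            # only T1 -> T2 here
conflict_serializable = not has_cycle(edges)  # acyclic, so serializable
```

For this schedule every conflict points from T1 to T2, the graph is acyclic, and the schedule is conflict serializable (equivalent to the serial order T1, T2).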
Testing for Serializability

S2:
  T1        T2
  R(A)
  W(A)
            R(A)
            W(A)
  R(B)
  W(B)
            R(B)
            W(B)

Precedence graph: T1 -> T2 (no cycle).

No cycle is made; therefore the schedule can be transformed into a serial schedule by swapping non-conflicting operations.

If the precedence graph (serialization graph) has no cycle, then the schedule is conflict serializable.
How to Convert -> Swapping of Non-Conflicting Operations

Before swap (S2):            After swap (now serial, T1 then T2):
  T1        T2                 T1        T2
  R(A)                         R(A)
  W(A)                         W(A)
            R(A)               R(B)
            W(A)               W(B)
  R(B)                                   R(A)
  W(B)                                   W(A)
            R(B)                         R(B)
            W(B)                         W(B)

Swap only when:
• The operations are on different data items.
• There is no read-write or write-write conflict.


Examples – Testing Serializability

S3: an interleaved schedule of transactions T1, T2, and T3 over data items X, Y, and Z (operations, in schedule order: R(X), R(Z), W(Z), R(Y), R(Y), W(Y), W(X), W(Z), W(X)).

[Figure: precedence graph over T1, T2, and T3 containing a cycle]

The precedence graph is cyclic, and therefore the schedule is NOT conflict serializable.
Non-Serializability in DBMS
A non-serial schedule that is not serializable is called a non-serializable schedule. Non-serializable schedules may or may not be consistent or recoverable. Non-serializable schedules are divided into two types:
1. Recoverable schedules
2. Non-recoverable schedules
Recoverability
• For some schedules it is easy to recover from transaction and system
failures, whereas for other schedules the recovery process can be quite
involved.
• In some cases, it is even not possible to recover correctly after a failure.
Hence, it is important to characterize the types of schedules for which
recovery is possible, as well as those for which recovery is relatively
simple.
• These characterizations do not actually provide the recovery algorithm; they
only attempt to theoretically characterize the different types of schedules.
• Once a transaction T is committed, it should never be necessary to roll
back T. This ensures that the durability property of transactions is not
violated. The schedules that theoretically meet this criterion are called
recoverable schedules; those that do not are called nonrecoverable and
hence should not be permitted by the DBMS.
Recoverable Schedule
A recoverable schedule ensures that the database can return to a consistent
state after a transaction failure. In this type of schedule:
1. A transaction cannot use (read) data updated by another transaction
until that transaction commits.

2. If a transaction fails before committing, all its changes must be rolled back, and any other transactions that have used its uncommitted data must also be rolled back.

A recoverable schedule prevents data inconsistencies and ensures that no transaction commits based on unverified changes, making recovery easier and safer.
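Rule 1 can be checked mechanically: the schedule is recoverable only if every transaction commits after the transactions it read from. A Python sketch (the event encoding, with "C" marking a commit, is an illustrative assumption):

```python
# A schedule as (transaction, operation, data_item) events; "C" = commit.
# Recoverable: if Tj read an item last written by Ti, then Ti's commit
# must appear in the schedule before Tj's commit.
def is_recoverable(schedule):
    last_writer = {}   # data item -> transaction that last wrote it
    reads_from = {}    # reader -> set of transactions it read from
    commit_pos = {}    # transaction -> position of its commit
    for pos, (t, op, item) in enumerate(schedule):
        if op == "W":
            last_writer[item] = t
        elif op == "R" and item in last_writer and last_writer[item] != t:
            reads_from.setdefault(t, set()).add(last_writer[item])
        elif op == "C":
            commit_pos[t] = pos
    for reader, writers in reads_from.items():
        for writer in writers:
            if commit_pos.get(reader, float("inf")) < commit_pos.get(writer, float("inf")):
                return False  # reader committed before the writer it depends on
    return True

good = [("T1", "W", "A"), ("T2", "R", "A"), ("T1", "C", None), ("T2", "C", None)]
bad  = [("T1", "W", "A"), ("T2", "R", "A"), ("T2", "C", None), ("T1", "C", None)]
```

In `good`, T2 reads T1's uncommitted value but waits to commit until after T1 does, so the schedule is recoverable; in `bad`, T2 commits first, so a later failure of T1 could not be undone.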
Recoverable Schedule

A recoverable schedule:
• Handles failures properly (Failure Classification)
• Supports proper isolation levels (Implementation of Isolation)
• Enables smooth UNDO/REDO (Recovery Techniques)
• Ensures data consistency in stored files (Storage)
Failure Classification
Failure in a database means:
• The database cannot complete a transaction (like transferring money, booking a ticket, etc.)
• Data is lost, damaged, or becomes wrong.
Common causes of database failure:

• Network failures (Internet connection lost)
• System crashes (server or database software suddenly stops)
• Natural disasters (floods, earthquakes damaging servers)
• Human carelessness (deleting important files by mistake)
• Sabotage (someone intentionally harming the database)
• Software errors (bugs in the database program)

[Figure: failure classification: Failure splits into Transaction Failure (Logical Errors, System Errors), System Crash, and Data-Transfer/Disk Failure]
Failure Classification
1. Transaction Failure - A transaction failure happens when a transaction cannot
continue or complete its task.
Reasons for transaction failure:
• Logical Error: Mistakes in the code or internal faults that stop the transaction.
• System Error: The database system itself stops the transaction due to issues like deadlock or resource unavailability. (Example: the system detects two transactions waiting for each other forever.)
2. System Crash - A system crash occurs due to hardware or software breakdown or
external problems like: Transaction failure, Operating system errors, Power outages, Main
memory failure.
• These failures are called soft failures and mostly affect volatile memory (like RAM). Usually, non-volatile memory (like hard disks) remains safe; this assumption is called the fail-stop assumption.
3. Data-Transfer Failure - A data-transfer failure happens when a problem occurs during
the transfer of data between memory and disk.
Reasons for data-transfer failure: Disk head crash, Read/write errors
For solution, Keep regular backups of your data on other storage devices to recover
quickly if a failure happens.
Storage
• Databases are stored in file formats, which contain records. At the physical level, the actual data is stored in electromagnetic format on some device. These storage devices can be broadly categorized into three types:
Storage
• Primary Storage - Primary storage is the fastest and closest memory
to the CPU (RAM, Cache)
Features:

• Ultra-fast access: the CPU can directly read/write data.
• Small in size: only important or active data is kept here.
• Volatile: data is lost when power is switched off.
• Needs a continuous power supply to maintain data.

Usage in DBMS:

• Running transactions
• Buffering database pages
• Executing queries
Storage
• Secondary Storage - Secondary storage is used to permanently store data even if power is lost (Hard Disk Drives (HDD), Solid State Drives (SSD), CDs, DVDs, Flash Drives (pen drives)).
Features:

• Non-volatile: data remains safe even without power.
• Slower than primary storage, but larger in size.
• Cheaper compared to primary memory.

Usage in DBMS:

• Storing full database files
• Keeping indexes
• Saving transaction logs
Storage
• Tertiary Storage - Tertiary storage is used for massive, rarely used
data or full backups. (Magnetic Tapes)
Features:

• Very large capacity (can store terabytes of data).
• Slowest access speed compared to primary and secondary storage.
• Mainly used for backup and archival purposes.

Usage in DBMS:

• Full system backups
• Archiving old database snapshots
Recovery algorithm
What is Recovery?
• Whenever a transaction fails, it may result in loss of information, leaving the database in an inconsistent state. Recovery is the process of restoring the database to the consistent state that existed before the failure, using recovery algorithms.
What are Recovery Algorithms?
• Algorithms that perform the recovery process are termed recovery algorithms.
• These are techniques that ensure database consistency and the atomicity and durability properties of transactions despite failures.
Recovery Techniques
1. Log-Based Recovery
In log-based recovery, every change made to the database during a transaction is
recorded in a special file called a log. The log keeps information such as:
• When a transaction starts,
• What changes it makes (old value and new value),
• Whether it successfully commits or aborts.
TO REMEMBER:
Before the database itself is updated, the change is first written to the log. This is known
as Write-Ahead Logging (WAL).
During recovery:
• If a transaction is found to have committed, its changes are redone.
• If a transaction did not commit, its changes are undone.
Recovery Techniques
2. Checkpoint
Checkpointing is a technique used to reduce the time needed for recovery. At
regular intervals, the DBMS saves a snapshot of the current state of the
database and the transaction log.
TO REMEMBER:
• All changes made up to the checkpoint are permanently saved to disk.
• Recovery only needs to start from the last checkpoint, not from the very
beginning of the transaction log.
Example:
If a checkpoint was taken at 3:00 PM, and a crash occurs at 3:30 PM, during
recovery, the system only needs to look at the logs from 3:00 PM onwards.
This saves time and speeds up the recovery process.
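The effect of a checkpoint on recovery time can be sketched as a filter over the log (the timestamps and record strings are illustrative assumptions):

```python
# Toy log of (time, record) entries; recovery scans only the entries
# after the last checkpoint instead of replaying the whole log.
log = [(1, "T1 start"), (2, "T1 commit"), (3, "CHECKPOINT"),
       (4, "T2 start"), (5, "T2 write X=9")]

def records_to_replay(log):
    # Find the position of the last checkpoint, then keep what follows it.
    last_cp = max((i for i, (_, rec) in enumerate(log) if rec == "CHECKPOINT"),
                  default=-1)
    return [rec for _, rec in log[last_cp + 1:]]

replay = records_to_replay(log)   # only T2's activity needs to be examined
```

Everything before the checkpoint (here, all of T1's work) is already safely on disk, so recovery can ignore it.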
Recovery Techniques
3. Shadow Paging
Shadow paging is a recovery method where the database maintains two copies of
data:
• A current page table (original copy),
• A shadow page table (copy for updates).
TO REMEMBER:
• When a transaction updates data, it modifies only the shadow pages.
• If the transaction completes successfully, the shadow page table becomes the
current page table.
• If a failure occurs, the system simply discards the shadow copy and uses the
original stable copy.
Example:
Suppose a bank account balance is being updated. If a crash happens before the transaction commits, the DBMS will ignore the updated shadow page and use the old balance data.
Recovery Techniques
4. ARIES (Algorithm for Recovery and Isolation Exploiting Semantics)
ARIES is a widely used, advanced recovery technique implemented in many real-world databases
like IBM DB2. It combines the ideas of logging, checkpointing, and sophisticated undo/redo
processes.
ARIES follows three phases during recovery:
1.Analysis Phase: Identify active transactions and dirty pages (pages that were modified but not yet
saved to disk) at the time of the crash.
2.Redo Phase: Repeat all actions from the logs to reconstruct the database state exactly as it was
before the crash.
3.Undo Phase: Rollback any transactions that were active (not committed) at the time of the crash.
Example:
Suppose transaction T2 had written some values but had not committed.
After a crash, ARIES will redo all committed transactions and undo T2 to maintain consistency.
Implementation of Isolation
1. Lock-Based Protocol
In lock-based protocols, transactions use locks to control access to
data items. A lock ensures that only one transaction can use the data
in a particular way at a time.
There are two main types of locks:
• Shared Lock (S-lock):
• For reading a data item.
• Many transactions can read the same data at the same time.
• Exclusive Lock (X-lock):
• For writing (updating) a data item.
• Only one transaction can have the exclusive lock — no other transaction can
even read it.
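These two lock modes can be sketched as a toy lock table in Python (illustrative only, not a real DBMS lock manager; deadlock handling and lock upgrades are omitted):

```python
# Toy lock table: many shared (S) holders OR one exclusive (X) holder per item.
locks = {}  # item -> {"mode": "S" or "X", "holders": set of transactions}

def acquire(txn, item, mode):
    entry = locks.get(item)
    if entry is None:
        locks[item] = {"mode": mode, "holders": {txn}}
        return True
    if mode == "S" and entry["mode"] == "S":
        entry["holders"].add(txn)   # shared locks are compatible with each other
        return True
    return False                    # any exclusive involvement is incompatible

def release(txn, item):
    entry = locks.get(item)
    if entry and txn in entry["holders"]:
        entry["holders"].discard(txn)
        if not entry["holders"]:
            del locks[item]

acquire("T1", "A", "S")             # granted
acquire("T2", "A", "S")             # granted: two readers may share
granted = acquire("T3", "A", "X")   # denied while readers hold the lock
```

The compatibility rule is exactly the one above: S is compatible with S, but X is compatible with nothing, so a writer must wait until all readers release the item.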
Implementation of Isolation
2. Timestamp-Based Protocol
• In timestamp-based protocols, every transaction is given a timestamp when
it starts.
• This timestamp shows how old the transaction is - lower timestamp
means older.
• The DBMS uses timestamps to decide the order of transaction operations
automatically.
• Timestamp-based protocol gives each transaction a unique time value to
decide the correct order of operations.
• It avoids deadlocks because transactions do not wait for each other; they
rollback if needed.
In timestamp protocol, an older transaction cannot read or write a data item if a
newer transaction has already updated it, to avoid reading outdated data.
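The rule above can be sketched as the basic timestamp-ordering check (a simplified version; the data structures are illustrative assumptions):

```python
# Basic timestamp ordering: each item tracks the largest read and write
# timestamps seen so far; an operation that arrives "too late" forces a
# rollback instead of waiting, so deadlocks cannot occur.
read_ts = {}   # item -> largest timestamp that has read it
write_ts = {}  # item -> largest timestamp that has written it

def read(ts, item):
    if ts < write_ts.get(item, 0):
        return "rollback"          # a younger txn already wrote this item
    read_ts[item] = max(read_ts.get(item, 0), ts)
    return "ok"

def write(ts, item):
    if ts < read_ts.get(item, 0) or ts < write_ts.get(item, 0):
        return "rollback"          # a younger txn already read or wrote it
    write_ts[item] = ts
    return "ok"

write(10, "X")     # accepted
r = read(5, "X")   # the older txn (timestamp 5) arrived too late
```

The transaction with timestamp 5 is older than the writer with timestamp 10, so its read is rejected and it must restart rather than see outdated data.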
PL/SQL
Introduction
• PL/SQL (Procedural Language/Structured Query Language) is a block-structured
language developed by Oracle that allows developers to combine the strength
of SQL with procedural programming constructs.
Basics of PL/SQL
• PL/SQL stands for Procedural Language extensions to the Structured Query
Language (SQL).
• PL/SQL is a combination of SQL along with the procedural features of
programming languages.
• PL/SQL includes procedural language elements such as conditions and loops. It allows declaration of constants and variables, procedures and functions, types and variables of those types, and triggers.
Differences Between SQL and PL/SQL
SQL                                                PL/SQL

SQL is a single query that is used to perform      PL/SQL is a block of code that is used to write
DML and DDL operations.                            entire program blocks, procedures, functions, etc.

It is declarative: it defines what needs to be     PL/SQL is procedural: it defines how things
done, rather than how it needs to be done.         need to be done.

Executes as a single statement.                    Executes as a whole block.

Mainly used to manipulate data.                    Mainly used to create applications.

Cannot contain PL/SQL code in it.                  It is an extension of SQL, so it can contain
                                                   SQL inside it.
Structure of PL/SQL Block
• The basic unit in PL/SQL is a block.
• All PL/SQL programs are made up of blocks, which can be nested within each other.
• Typically, each block performs a logical action in the program.
Structure of PL/SQL Block
• The declare section starts with the DECLARE keyword, in which variables, constants, records, and cursors can be declared to store data temporarily. It consists of definitions of PL/SQL identifiers. This part of the code is optional.
• The execution section starts with BEGIN and ends with the END keyword. This is a mandatory section, and here the program logic is written to perform any task, such as loops and conditional statements. It supports all DML commands, DDL commands, and SQL*Plus built-in functions as well.
• The exception section starts with the EXCEPTION keyword. This section is optional and contains statements that are executed when a run-time error occurs. Any exceptions can be handled in this section.
Types of Blocks
1. Anonymous Block:
• Not stored in the database.
• Used for one-time or ad hoc operations.
• No name; just written and executed.

2. Named Block - stored in the database for reuse. Named blocks can be:
• Procedures
• Functions
• Packages
• Triggers
Writing a basic program
Use Case : Addition of 2 numbers
DECLARE
    n1 NUMBER(10);
    n2 NUMBER(10);
BEGIN
    n1 := 6;
    n2 := 5;
    DBMS_OUTPUT.PUT_LINE('The addition of given numbers is ' || (n1 + n2));
END;
/
Writing a basic program
Use Case : Addition of 2 numbers

DECLARE
n1 NUMBER(10) := 10;
n2 NUMBER(10) := 20;
BEGIN
DBMS_OUTPUT.PUT_LINE('The addition of given numbers is ' || (n1 + n2));
END;
/
Writing a basic program
Use Case : Product of 2 numbers

DECLARE
n1 NUMBER(10) := 10;
n2 NUMBER(10) := 20;
BEGIN
DBMS_OUTPUT.PUT_LINE('The multiplication of given numbers is '
|| (n1*n2));
END;
/
Writing a basic program
Use Case : Show Product of 2 numbers. If results are greater than 100, print High
Result value and lower than 100 print Low Result value.

DECLARE
n1 NUMBER(10) := 10;
n2 NUMBER(10) := 20;
BEGIN
IF (n1 * n2) > 100 THEN
DBMS_OUTPUT.PUT_LINE('High Product Value: ' || (n1 * n2));
ELSE
DBMS_OUTPUT.PUT_LINE('Low Product Value: ' || (n1 * n2));
END IF;
END;
/
Writing a basic program
Use Case : Calculate the total cost of items by multiplying quantity and price. If
the total cost exceeds a budget of 1000, display ‘Budget exceeded’. Otherwise,
show ‘Total cost is within budget’.

DECLARE
quantity NUMBER := 15;
price NUMBER := 70;
BEGIN
IF (quantity * price) > 1000 THEN
DBMS_OUTPUT.PUT_LINE('Budget exceeded');
ELSE
DBMS_OUTPUT.PUT_LINE('Total cost is within budget');
END IF;
END;
/
Writing a basic program: Usage of SELECT
-- Plain SQL:
SELECT name, marks
FROM students_125
WHERE student_id = 101;

-- Equivalent PL/SQL block:
DECLARE
    v_name  students_125.name%TYPE;
    v_marks students_125.marks%TYPE;
BEGIN
    SELECT name, marks
    INTO v_name, v_marks
    FROM students_125
    WHERE student_id = 101;
    DBMS_OUTPUT.PUT_LINE('Name: ' || v_name);
    DBMS_OUTPUT.PUT_LINE('Marks: ' || v_marks);
END;
/
Writing a basic program: Usage of SELECT
-- Plain SQL:
SELECT department
FROM students_125
WHERE student_id = 102;

-- Equivalent PL/SQL block:
DECLARE
    v_dept students_125.department%TYPE;
BEGIN
    SELECT department
    INTO v_dept
    FROM students_125
    WHERE student_id = 102;
    DBMS_OUTPUT.PUT_LINE('Department: ' || v_dept);
END;
/
Control Structures
Control structures allow you to control the flow of execution in PL/SQL programs. PL/SQL has three categories of control statements:
1. Conditional selection statements: run different statements for different data values. The conditional statements are the IF statement and the CASE statement.
2. Loop statements: run the same statements with a series of different data values. The loop statements are the basic LOOP, FOR LOOP, and WHILE LOOP.
3. Sequential control statements: GOTO, which goes to a specified statement, and NULL, which does nothing.

Source : - https://docs.oracle.com/en/database/oracle/oracle-database/19/lnpls/plsql-control-
statements.html#GUID-18777904-23F6-4F6D-8B41-46BABF00BA03
Control Structures
1. Conditional selection statements
a. IF Statement : The IF statement is used to execute a block of code
based on certain conditions.
The IF statement has these forms:
• IF THEN
• IF THEN ELSE
• IF THEN ELSIF

IF condition THEN
  -- statements
END IF;
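To illustrate the IF THEN ELSIF form, here is a short sketch; the sample marks value and the grade boundaries are assumed for illustration only:

```sql
DECLARE
  v_marks NUMBER := 72;  -- assumed sample value
BEGIN
  IF v_marks >= 80 THEN
    DBMS_OUTPUT.PUT_LINE('Grade: A');
  ELSIF v_marks >= 60 THEN
    DBMS_OUTPUT.PUT_LINE('Grade: B');   -- this branch runs for 72
  ELSE
    DBMS_OUTPUT.PUT_LINE('Grade: C');
  END IF;
END;
/
```

The ELSIF branches are tested in order, and only the first true branch runs.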
Control Structures
1. Conditional selection statements
b. CASE Statement
The CASE statement provides a way to handle multiple conditions more
efficiently than nested IF statements. It chooses from a sequence of
conditions and runs the corresponding statements.
The CASE statement has these forms:
• Simple, which evaluates a single expression and compares it to several
potential values.
• Searched, which evaluates multiple conditions and chooses the first one
that is true.
The CASE statement is appropriate when a different action is to be taken
for each alternative.

CASE
  WHEN condition1 THEN
    -- Statements
  WHEN condition2 THEN
    -- Statements
  ELSE
    -- Statements
END CASE;
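A minimal sketch of the searched form, with a sample value and pass/fail boundaries assumed for illustration:

```sql
DECLARE
  v_marks NUMBER := 45;  -- assumed sample value
BEGIN
  CASE
    WHEN v_marks >= 60 THEN
      DBMS_OUTPUT.PUT_LINE('Pass with distinction');
    WHEN v_marks >= 40 THEN
      DBMS_OUTPUT.PUT_LINE('Pass');   -- first true condition wins
    ELSE
      DBMS_OUTPUT.PUT_LINE('Fail');
  END CASE;
END;
/
```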
Control Structures
2. Loop statements
Loop statements run the same statements with a series of different values.
The loop statements are:
• Basic LOOP : Repeats a block of code indefinitely until a specific
condition is met, and you can exit it manually.
• FOR LOOP : Repeats a block of code a fixed number of times, often based
on a counter or range.
• WHILE LOOP : Repeats a block of code as long as a specified condition is
true.
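The three loop forms can be sketched side by side; each of the blocks below prints the numbers 1 to 3:

```sql
-- FOR LOOP: the counter is declared and advanced automatically
BEGIN
  FOR i IN 1..3 LOOP
    DBMS_OUTPUT.PUT_LINE(i);
  END LOOP;
END;
/

-- WHILE LOOP: the condition is checked before each iteration
DECLARE
  i NUMBER := 1;
BEGIN
  WHILE i <= 3 LOOP
    DBMS_OUTPUT.PUT_LINE(i);
    i := i + 1;
  END LOOP;
END;
/

-- Basic LOOP: exits manually with EXIT WHEN
DECLARE
  i NUMBER := 1;
BEGIN
  LOOP
    EXIT WHEN i > 3;
    DBMS_OUTPUT.PUT_LINE(i);
    i := i + 1;
  END LOOP;
END;
/
```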
Procedure
A procedure is a named block of code that performs a specific task, like
printing a message or updating a record.
• It's reusable and can take input parameters.
• It doesn't return a value directly.

Syntax:
CREATE OR REPLACE PROCEDURE procedure_name (parameters) IS
BEGIN
  -- statements
END;

Example:
CREATE OR REPLACE PROCEDURE greet_user(p_name IN VARCHAR2) IS
BEGIN
  DBMS_OUTPUT.PUT_LINE('Hello, ' || p_name);
END;
/

-----To call-----
BEGIN
  greet_user('Rahul');
END;
/
Functions
A FUNCTION in PL/SQL is a stored program that accepts parameters, performs
operations, and returns a single value.
Types of Functions
1. Pre-defined / Built-in functions
A. Scalar functions - Scalar functions operate on single values and return
a single value of a specified type (e.g., string, number, date).
• TO_CHAR() : Converts a date or number to a string.
• TO_NUMBER() : Converts a string or other value to a number.
• LENGTH() : Returns the length of a string.
• SUBSTR() : Extracts a substring from a string.
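A quick sketch of these scalar functions in a single query, run against the dummy table dual:

```sql
SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD') AS today,
       TO_NUMBER('42')                AS n,    -- 42
       LENGTH('PL/SQL')               AS len,  -- 6
       SUBSTR('PL/SQL', 1, 2)         AS sub   -- 'PL'
FROM dual;
```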
Functions
1. Pre-defined / Built-in functions
B. Aggregate functions - Aggregate functions operate on a group of rows and
return a single value, typically used with GROUP BY clauses.
• SUM() : Returns the sum of values in a numeric column.
• AVG() : Returns the average of values in a numeric column.
• COUNT() : Returns the number of rows (or non-NULL values) in a column.
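For example, against the students_125 table used in the earlier SELECT examples (assuming it has department and marks columns, as those examples suggest):

```sql
-- One result row per department, aggregating over its students
SELECT department,
       COUNT(*)   AS num_students,
       AVG(marks) AS avg_marks,
       SUM(marks) AS total_marks
FROM students_125
GROUP BY department;
```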

2. User-defined functions
These are custom functions that you can define in PL/SQL to perform specific
tasks. These functions can take parameters, perform operations, and return
a value.
Example : User-defined Function
First, the logic written as an anonymous block (runs once, not reusable):

DECLARE
  n1 NUMBER(10) := 10;
  n2 NUMBER(10) := 20;
BEGIN
  DBMS_OUTPUT.PUT_LINE('The multiplication of given numbers is ' || (n1 * n2));
END;
/

The same logic as a reusable user-defined function:

CREATE OR REPLACE FUNCTION calculate_product (n1 IN NUMBER, n2 IN NUMBER)
RETURN NUMBER IS
BEGIN
  RETURN n1 * n2;
END;
/

SELECT calculate_product(10, 20) AS product FROM dual;
Cursor
A cursor is a pointer to a memory area (created temporarily during SQL
execution) that stores the result of a query and allows row-by-row access.
The concept of a cursor in PL/SQL can be explained from two viewpoints.
1. Logical Concept (programmer’s view): A pointer that helps you fetch
multiple rows one-by-one.
2. Memory-Level Concept (system’s view): A temporary memory area in
RAM created to store the result set of a SQL query, so the database engine
can fetch one row at a time.
A cursor is used when:
• You need to handle multiple rows, one row at a time.
• You want to perform row-by-row operations (like checking if each student is above 20
years old).
Types of Cursor
1. Implicit Cursor
• Automatically created by Oracle whenever you run a SQL statement like INSERT,
UPDATE, DELETE, or a SELECT ... INTO (that returns only one row).
• You don't declare it.
• Oracle performs the open, fetch, and close operations internally.

2. Explicit Cursor
• Created and controlled manually using DECLARE, OPEN, FETCH, and CLOSE.
• Used when the SELECT statement returns multiple rows.
• Gives more control over row-by-row processing.
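Even though you never declare an implicit cursor, PL/SQL exposes its status through attributes such as SQL%ROWCOUNT and SQL%FOUND. A minimal sketch, assuming the Students_125 table used elsewhere in these slides:

```sql
BEGIN
  UPDATE Students_125
  SET Age = Age + 1
  WHERE Age > 20;
  -- SQL%ROWCOUNT reports how many rows the implicit cursor just processed
  DBMS_OUTPUT.PUT_LINE('Rows updated: ' || SQL%ROWCOUNT);
END;
/
```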
Demo Program
USE CASE: Print the names of all students from the Students_125 table who
are older than 20, using a row-by-row check with a cursor.

SET SERVEROUTPUT ON;
DECLARE
  CURSOR cur_students IS
    SELECT Name, Age FROM Students_125;
  v_name Students_125.Name%TYPE;
  v_age  Students_125.Age%TYPE;
BEGIN
  OPEN cur_students;
  LOOP
    FETCH cur_students INTO v_name, v_age;
    EXIT WHEN cur_students%NOTFOUND;
    IF v_age > 20 THEN
      DBMS_OUTPUT.PUT_LINE(v_name || ' is above 20');
    END IF;
  END LOOP;
  CLOSE cur_students;
END;
/
THANK YOU!
