
SILIGURI INSTITUTE OF TECHNOLOGY

Salbari, Sukna, Siliguri – 734009


Department of Computer Science & Engineering

Student Name Ratnojit Saha


Department CSE Roll No. 11900122167
Year 4th Semester 7th Section C
Subject Name Multimedia Systems
Subject Code OEC-CS701B
Faculty Name Anupam Mukherjee

DECLARATION BY STUDENT

I certify that this assignment is my own work in my own words.

Signature of Student: ----------------------------------


1. Answer the following: (5 marks — short answers)

a) What is MIDI?

Ans: MIDI (Musical Instrument Digital Interface) is a standardized protocol for communicating
musical performance data between electronic musical instruments, computers, and controllers.
It transmits event messages (note on/off, pitch, velocity, control changes) rather than audio,
enabling control and sequencing of instruments.
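Because MIDI carries events rather than audio, a complete Note On message is only three bytes. A minimal sketch in Python (the function name is illustrative; the status/data byte layout follows the MIDI 1.0 specification):

```python
def note_on(channel, note, velocity):
    """Build a 3-byte MIDI Note On message.

    Status byte: 0x90 (Note On) OR'ed with the 4-bit channel (0-15);
    data bytes: note number (0-127) and velocity (0-127).
    """
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

# Middle C (note 60) on channel 0, struck at velocity 100:
msg = note_on(0, 60, 100)   # three bytes of event data, no audio samples
```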

b) State the Nyquist sampling theorem.


Ans: The Nyquist sampling theorem states that a continuous-time bandlimited signal with maximum
frequency f_max can be completely reconstructed from its samples if it is sampled at a rate f_s
such that f_s > 2·f_max. The minimum sampling rate, 2·f_max, is called the Nyquist rate.
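A quick numerical illustration (the frequencies are chosen for the example): sampling a 7 kHz sine at only 10 kHz violates the theorem, and the samples become indistinguishable from those of a phase-inverted 3 kHz tone (the alias at f_s − f):

```python
import math

fs = 10_000                       # sampling rate (Hz), below the Nyquist rate for 7 kHz
f_high, f_alias = 7_000, 3_000    # 7 kHz aliases to fs - 7 kHz = 3 kHz

for n in range(20):               # compare the first 20 sample instants
    s_high = math.sin(2 * math.pi * f_high * n / fs)
    s_alias = -math.sin(2 * math.pi * f_alias * n / fs)  # phase-inverted alias
    assert abs(s_high - s_alias) < 1e-9  # identical sample values
```

Since the two tones produce the same samples, no reconstruction can tell them apart, which is exactly what the theorem's f_s > 2·f_max condition rules out.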

c) What is quantization error?


Ans: Quantization error (or quantization noise) is the difference between a continuous (or high-
precision) signal value and the nearest representable discrete level after quantization. It is
introduced when mapping analog/continuous amplitude values to a finite set of digital levels,
causing rounding or truncation error.
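For a uniform quantizer with step size Δ, rounding to the nearest level bounds the error by Δ/2. A small sketch (step size and sample values are illustrative):

```python
def quantize(x, step):
    """Round x to the nearest multiple of `step` (uniform quantizer)."""
    return step * round(x / step)

step = 0.25                      # illustrative quantization step
samples = [0.03, -0.41, 0.37, 0.125, -0.2]
errors = [abs(x - quantize(x, step)) for x in samples]

# Quantization error never exceeds half a step:
assert all(e <= step / 2 for e in errors)
```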

d) What is hypermedia?
Ans: Hypermedia is an extension of hypertext that integrates multiple types of media (text,
images, audio, video, animations) linked via hyperlinks. It allows non-linear navigation through
interconnected multimedia content (e.g., the web with multimedia pages and links).

2. What is data compression? Difference between lossy and lossless techniques.

Ans:

Data compression:
Compression is the process of encoding information using fewer bits than the original
representation by removing redundancy or irrelevancy. It reduces storage space and
transmission time.

Difference — Lossless vs Lossy

• Lossless compression

o Definition: The original data can be perfectly reconstructed from the compressed data
(no information loss).

o Examples: Huffman coding, LZW, PNG, FLAC (audio).

o Use cases: Text, executable files, source code, and medical images, where exact
recovery is required.

o Pros/Cons: Exact recovery; typically a lower compression ratio than lossy.

• Lossy compression

o Definition: Some information is discarded during compression; reconstruction is
approximate (irreversible).

o Examples: JPEG (images), MP3 (audio), MPEG (video).

o Use cases: Multimedia (images, audio, video), where perceptual quality is preserved
and higher compression is desired.

o Pros/Cons: Higher compression ratio; possible quality degradation/artifacts.
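The lossless case is easy to demonstrate with Python's built-in zlib module (DEFLATE, an LZ77 + Huffman scheme): redundant input shrinks, and decompression restores it bit-for-bit.

```python
import zlib

data = b"AAAABBBCCD" * 100       # highly redundant sample data
packed = zlib.compress(data)

# Lossless: decompression restores the input exactly.
assert zlib.decompress(packed) == data
# Redundancy removed: the compressed form is much smaller.
assert len(packed) < len(data)
```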

3. Huffman coding for given frequencies — construct tree & derive codes.

Frequencies: A: 0.25, B: 0.10, C: 0.20, D: 0.15, E: 0.26, F: 0.04

Ans:

Steps (brief):

1. List symbols with probabilities and build a min-heap by frequency.

2. Repeatedly remove two lowest-frequency nodes, combine them into a new node (sum
of frequencies), and insert back.

3. Continue until one node (root) remains — this forms the Huffman tree.

4. Assign binary digits by traversing: usually left = 0, right = 1. Code for each symbol is the
path from root to its leaf.

Combining order (example):

• Combine F(0.04) + B(0.10) → node(0.14)

• Combine node(0.14) + D(0.15) → node(0.29)

• Combine C(0.20) + A(0.25) → node(0.45)

• Combine E(0.26) + node(0.29) → node(0.55)


• Combine node(0.45) + node(0.55) → root(1.00)

Derived Huffman codes (one valid assignment):

• C : 00

• A : 01

• E : 10

• F : 1100

• B : 1101

• D : 111
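The construction above can be checked with a short Python sketch using a min-heap. The 0/1 labelling of branches is arbitrary, but the code lengths are determined by the tree, so the sketch verifies lengths and the expected average code length:

```python
import heapq

def huffman_code_lengths(freqs):
    """Return {symbol: code length} for a Huffman code over `freqs`.

    Each heap entry carries its subtree's symbols with their depth so far;
    merging two subtrees adds one bit to every code beneath them.
    """
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    tie = len(heap)                      # tie-breaker so dicts are never compared
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)  # two lowest-frequency nodes
        f2, _, d2 = heapq.heappop(heap)
        tie += 1
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, tie, merged))
    return heap[0][2]

freqs = {"A": 0.25, "B": 0.10, "C": 0.20, "D": 0.15, "E": 0.26, "F": 0.04}
lengths = huffman_code_lengths(freqs)

# Matches the codes above: A, C, E get 2 bits; D gets 3; B, F get 4.
assert lengths == {"A": 2, "B": 4, "C": 2, "D": 3, "E": 2, "F": 4}
avg = sum(freqs[s] * lengths[s] for s in freqs)   # expected bits per symbol
assert round(avg, 2) == 2.43
```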

4. Explain RGB and CMYK colour models and why they are complementary.

Ans:

RGB (Red, Green, Blue)

• Type: Additive colour model.

• Principle: Colours are produced by adding light of red, green, and blue primaries.
Combining all at full intensity yields white; absence yields black.

• Use cases: Displays and devices that emit light (monitors, TVs, phone screens,
projectors).

• Representation: Typically 8 bits per channel → 24-bit colour (0–255 per channel).

CMYK (Cyan, Magenta, Yellow, Key/Black)

• Type: Subtractive colour model.

• Principle: Colours are produced by subtracting (absorbing) portions of white light via
inks/paints: cyan absorbs red, magenta absorbs green, yellow absorbs blue. Combining
all ideally produces black (in practice a dark brown, so black K is added).

• Use cases: Print media (inkjet, offset printing).

• Components: C, M, Y for colour mixing; K (black) for depth and ink efficiency.

Why complementary:
• The primary colours in one model correspond to the complements in the other (R ↔ C,
G ↔ M, B ↔ Y). RGB specifies emitted-light colour while CMYK specifies pigment/ink
absorption. A colour displayed in RGB must be converted to CMYK for printing because
printers reproduce colour by subtractive mixing; the conversion is not exact due to
gamut differences (prints often have smaller gamut).
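The R ↔ C, G ↔ M, B ↔ Y complement relationship can be seen in the simplest RGB → CMYK conversion. This is only a naive sketch; real print pipelines use ICC colour profiles rather than this formula:

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB (0-255) -> CMYK (each 0-1) conversion.

    C/M/Y are the complements of R/G/B; K pulls out the shared grey
    component so less coloured ink is needed.
    """
    c, m, y = 1 - r / 255, 1 - g / 255, 1 - b / 255
    k = min(c, m, y)
    if k == 1.0:                       # pure black: avoid division by zero
        return 0.0, 0.0, 0.0, 1.0
    return tuple((x - k) / (1 - k) for x in (c, m, y)) + (k,)

# Pure red needs no cyan ink (cyan absorbs red), only magenta + yellow:
assert rgb_to_cmyk(255, 0, 0) == (0.0, 1.0, 1.0, 0.0)
# Black is printed with K alone:
assert rgb_to_cmyk(0, 0, 0) == (0.0, 0.0, 0.0, 1.0)
```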

5. Steps of JPEG compression technique.

Ans:

JPEG compression pipeline (baseline JPEG, stepwise):

1. Color space conversion: Convert input from RGB to YCbCr (luminance Y and
chrominance Cb, Cr). Human vision is more sensitive to luminance than chroma.

2. Downsampling (optional): Reduce chroma resolution (e.g., 4:2:0) because humans are
less sensitive to colour detail.

3. Block splitting: Divide image into 8×8 pixel blocks (for each channel).

4. Level shift: Subtract 128 from each pixel value to center values around zero (range
−128..127).

5. Discrete Cosine Transform (DCT): Apply 2D DCT to each 8×8 block to convert spatial
pixels to frequency-domain coefficients.

6. Quantization: Divide DCT coefficients by quantization matrix values (lossy step). Many
high-frequency coefficients become zero — main source of compression & loss.

7. Zig-zag scanning: Reorder 8×8 coefficients into a 1D sequence that groups low-frequency
first and high-frequency later to maximize runs of zeros.

8. Entropy coding: Apply lossless coding (e.g., Huffman coding or arithmetic coding) to the
serialized coefficients and run-lengths (RLE for zero runs).

9. File formatting: Store compressed data plus headers (quantization tables, Huffman
tables, image metadata) in JPEG file format.

Decompression reverses these steps (entropy decoding → inverse zig-zag → dequantize →
inverse DCT → add 128 → combine channels → convert back to RGB). The quality level is
controlled via the quantization tables.
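Steps 4–6 of the pipeline can be illustrated with a tiny sketch. A uniform divisor of 16 stands in for the real JPEG quantization tables, and the block value is chosen so the arithmetic is easy to follow:

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of an 8x8 block (orthonormal scaling)."""
    n = 8
    def c(u):
        return math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for x in range(n) for y in range(n))
            out[u][v] = c(u) * c(v) * s
    return out

# A flat 8x8 block of value 130: after the level shift (-128), every pixel is 2.
shifted = [[130 - 128] * 8 for _ in range(8)]
coeffs = dct2(shifted)
quant = [[round(coeffs[u][v] / 16) for v in range(8)] for u in range(8)]

# All energy lands in the DC coefficient (8 * 2 = 16, quantized to 1);
# every AC coefficient quantizes to 0, which is what entropy coding exploits.
assert quant[0][0] == 1
assert all(quant[u][v] == 0 for u in range(8) for v in range(8) if (u, v) != (0, 0))
```

For a flat block only the DC term survives, so the zig-zag scan yields one nonzero value followed by a long run of zeros, exactly the pattern run-length and Huffman coding compress well.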
