Fundamentals of a Computer
| DATA | INFORMATION |
| --- | --- |
| Raw facts and figures | Processed data |
| Similar to raw material | Similar to the finished product |
| Cannot be directly used | Adds to knowledge and helps in taking decisions |
| Does not give a precise and clear sense | Clear and meaningful |
Data processing
Definition of Data Processing
● Data processing refers to the operations or actions performed on raw data to
convert it into useful information. It involves capturing, storing, manipulating, and
presenting data in a form that supports decision-making and problem-solving.
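As a toy illustration of this cycle, the Python sketch below captures some raw figures, manipulates them, and presents the result as meaningful information (the names and marks are invented for the example):

```python
# A toy data-processing pipeline: raw facts in, meaningful information out.
# The student names and marks are invented for illustration.

raw_marks = [("Asha", 82), ("Ravi", 67), ("Meena", 91)]        # capture raw data

average = sum(mark for _, mark in raw_marks) / len(raw_marks)  # manipulate

# present: a clear summary that supports decision-making
print(f"Class average: {average:.1f}")
for name, mark in raw_marks:
    print(f"{name}: {'above' if mark >= average else 'below'} average")
```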
**Storage Unit**
● **Purpose**: Stores data and instructions before processing and holds the
processed information until it is sent to the output unit.
● **Types of Storage**:
● 1. **Primary Storage (Main Memory)**:
● **Random Access Memory (RAM)**: Temporary storage for data and instructions
being processed.
● **Read Only Memory (ROM)**: Permanent storage containing the start-up
instructions for the computer.
● 2. **Secondary Storage**: Used for large volumes of data that need permanent
storage. Examples include hard drives, CDs, DVDs, and USB drives.
**Output Unit**
● **Function**: The output unit translates processed data into a human-readable
form and delivers it to the outside world.
● **Examples**: Monitors and printers are common output devices.
● **Key Tasks**:
● - Receiving data from the CPU.
● - Converting data into formats understandable by humans.
● - Delivering this data to users or external systems.
Number system
Definition of Number System
● A number system is a structured way to represent numbers. It consists of
symbols and rules for representing values in a specific base, or radix.
● Four main number systems are used in computing: **Decimal, Binary,
Octal, and Hexadecimal**.
● - **Decimal (Base-10)**:
● The everyday system, using the digits 0-9, where each position represents a power of 10.
● - **Binary (Base-2)**:
● Used internally by computers; consists only of the digits 0 and 1.
● Each position in a binary number represents a power of 2. For example,
\(1101_2\) represents \(1 \times 2^3 + 1 \times 2^2 + 0 \times 2^1 + 1 \times 2^0 = 13_{10}\).
● Each binary digit is called a **bit**.
● - **Octal (Base-8)**:
● Uses eight symbols (0-7), where each position represents a power of 8.
● Octal numbers are sometimes used in computing as a shorthand for binary,
grouping binary digits into sets of three (e.g., \(111_2 = 7_8\)).
● - **Hexadecimal (Base-16)**:
● Uses sixteen symbols (0-9 and A-F, where A-F represent decimal values 10-15).
● Each position in a hexadecimal number represents a power of 16.
● Hexadecimal is frequently used in computing because it can represent binary
numbers more compactly by grouping bits in sets of four (e.g., \(1111_2 =
F_{16}\)).
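As a quick check of the positional expansions above, here is a minimal Python sketch using the built-in `int(text, base)`, which evaluates a string of digits in the given base:

```python
# Verify the worked examples from this section.
print(int("1101", 2))     # 13 = 1*2**3 + 1*2**2 + 0*2**1 + 1*2**0
print(int("111", 2))      # 7  -> one octal digit packs three bits
print(int("1111", 2))     # 15 -> the hexadecimal digit F packs four bits
print(0b1101, 0o17, 0xF)  # Python literals for base 2, 8, and 16: 13 15 15
```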
Number conversions
**Importance of Number Conversions**
● In computing, data needs to be represented in various number systems (binary,
decimal, octal, hexadecimal).
● Converting between these systems allows for easier processing, storage, and
readability.
**Conversion Methods**
● - **Decimal to Binary** (see the sketch after this list):
● Repeatedly divide the decimal number by 2 and record the remainders.
● Write the remainders in reverse order, with the last remainder being the most
significant bit (MSB).
● Example: \(25_{10} \to 11001_2\).
● - **Decimal to Octal**:
● Divide the decimal number by 8, recording remainders.
● Reversing the remainder sequence gives the octal result.
● Example: \(125_{10} \to 175_8\).
● - **Decimal to Hexadecimal**:
● Repeatedly divide by 16, noting remainders.
● Reversing remainders yields the hexadecimal representation.
● Example: \(155_{10} \to 9B_{16}\).
● - **Binary to Decimal**:
● Multiply each binary digit by its positional power of 2 and sum the results.
● Example: \(1101_2 \to 13_{10}\).
● - **Octal to Decimal**:
● Multiply each octal digit by its positional power of 8 and sum.
● Example: \(157_8 \to 111_{10}\).
● - **Hexadecimal to Decimal**:
● Multiply each hexadecimal digit by its positional power of 16 and add.
● Example: \(2D5_{16} \to 725_{10}\).
● - **Octal to Binary**:
● Convert each octal digit to a 3-bit binary equivalent.
● Example: \(437_8 \to 100011111_2\).
● - **Hexadecimal to Binary**:
● Convert each hexadecimal digit to a 4-bit binary equivalent.
● Example: \(AB_{16} \to 10101011_2\).
● - **Binary to Octal**:
● Group binary digits in sets of three from right to left and convert each group to its
octal equivalent.
● Example: \(101100111_2 \to 547_8\).
● - **Binary to Hexadecimal**:
● Group binary digits in sets of four from right to left and convert to hexadecimal.
● Example: \(101100111010_2 \to B3A_{16}\).
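The whole list boils down to two techniques: repeated division (decimal to any base) and bit grouping (binary to/from octal and hexadecimal). Here is a minimal Python sketch of both; the function names `to_base` and `bin_to_base` are my own:

```python
DIGITS = "0123456789ABCDEF"

def to_base(n: int, base: int) -> str:
    """Repeatedly divide by `base`; the remainders, read in reverse, are the digits."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, r = divmod(n, base)
        digits.append(DIGITS[r])
    return "".join(reversed(digits))  # last remainder is the most significant digit

def bin_to_base(bits: str, group: int) -> str:
    """Group bits from the right (3 for octal, 4 for hex); convert each group."""
    bits = bits.zfill(-(-len(bits) // group) * group)  # left-pad to a full group
    return "".join(DIGITS[int(bits[i:i + group], 2)]
                   for i in range(0, len(bits), group))

# Worked examples from the text:
print(to_base(25, 2))                  # 11001
print(to_base(125, 8))                 # 175
print(to_base(155, 16))                # 9B
print(bin_to_base("101100111", 3))     # 547
print(bin_to_base("101100111010", 4))  # B3A
print(int("2D5", 16))                  # 725 (hex to decimal via positional weights)
```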
**Two-Step Conversions**
● **Octal to Hexadecimal** and **Hexadecimal to Octal** conversions are done in
two steps by first converting to binary, then to the desired system.
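For example, a two-step octal-to-hexadecimal conversion, written out with Python built-ins so the intermediate binary step is visible:

```python
# Octal -> binary -> hexadecimal, step by step.
octal = "437"
binary = format(int(octal, 8), "b")  # '100011111' (each octal digit -> 3 bits)
hexa = format(int(binary, 2), "X")   # '11F'       (each hex digit <- 4 bits)
print(binary, hexa)                  # 100011111 11F
```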
Data representation
**Representation of Numbers**
● **Integer Representation**: Integers are represented using binary encoding. There
are three primary methods for encoding integers:
● **Sign and Magnitude**: Uses the most significant bit (MSB) as the sign bit (0 for
positive, 1 for negative).
● **1's Complement**: Negative numbers are represented by inverting all bits
(changing 1s to 0s and 0s to 1s).
● **2's Complement**: Negative numbers are represented by taking the 1's
complement of a number and adding 1 to it. This is the most common method because
it simplifies binary arithmetic (see the sketch after this list).
● **Floating-Point Representation**: Real numbers (with fractional parts) are
represented using a **mantissa** and an **exponent** in floating-point notation,
allowing the radix point to "float" based on the exponent's value. For instance,
25.45 can be represented as \(0.2545 \times 10^2\).
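A minimal sketch of the three integer encodings for −13, assuming an 8-bit word (the width is my choice for illustration):

```python
N = 8                                   # assumed word size, for illustration only
x = 13
plus = format(x, f"0{N}b")              # 00001101 = +13

sign_mag  = "1" + plus[1:]                                   # flip only the sign bit
ones_comp = "".join("1" if b == "0" else "0" for b in plus)  # invert every bit
twos_comp = format((1 << N) - x, f"0{N}b")                   # 1's complement + 1

print(sign_mag, ones_comp, twos_comp)   # 10001101 11110010 11110011

# Why 2's complement is preferred: plain binary addition gives 13 + (-13) = 0.
print(format((x + ((1 << N) - x)) % (1 << N), f"0{N}b"))     # 00000000
```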
**Representation of Characters**
● Characters are stored as binary values using specific encoding standards:
● **ASCII (American Standard Code for Information Interchange)**: Uses 7 or 8 bits
to represent characters, covering basic English letters, numbers, and symbols.
● **EBCDIC (Extended Binary Coded Decimal Interchange Code)**: An 8-bit code
primarily used in IBM systems.
● **ISCII (Indian Script Code for Information Interchange)**: An 8-bit encoding
scheme for Indian languages.
● **Unicode**: A universal character set that can represent characters from almost
all world languages. Originally 16-bit, Unicode now supports even larger sets to
cover a vast range of characters.
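A short Python sketch relating characters to their code points: the ASCII value of 'A', and a Unicode character beyond the 8-bit range (the Devanagari letter chosen here is just an example):

```python
# ASCII: 'A' has code point 65 (1000001 in 7 bits).
print(ord("A"), format(ord("A"), "07b"))     # 65 1000001

# Unicode reaches far beyond 8 bits; UTF-8 stores this Devanagari
# letter (U+0915) in three bytes.
ka = "\u0915"
print(ka, hex(ord(ka)), ka.encode("utf-8"))  # क 0x915 b'\xe0\xa4\x95'
```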