Semester 1 Computer Fundamentals
ASSIGNMENT
SEMESTER : I
3. Third Generation
Timeline: 1960s – 1970s
Technology: Integrated Circuits (ICs)
Key Characteristics: Smaller size, increased reliability, higher speed, lower cost, and greater
efficiency.
Example: IBM System/360 was notable for its ability to handle both commercial and scientific
applications, supporting high-level programming languages.
4. Fourth Generation
Timeline: 1970s - Present
Technology: Microprocessors
Key Characteristics: Significant reduction in size, increased processing power, very high-speed
operations, widespread use in personal computers.
Example: Apple Macintosh revolutionized personal computing with its graphical user interface
(GUI), making computers more accessible to the general public.
5. Fifth Generation
Timeline: Present and beyond
Technology: Artificial Intelligence and Quantum Computing (in development)
Key Characteristics: Focus on AI and machine learning, natural language processing, advanced
parallel processing, potential use of quantum computing.
Example: IBM Watson uses AI to analyze large datasets and provide insights in fields like
healthcare and finance, demonstrating the capabilities of modern AI-driven systems.
These generations highlight the rapid advancement in computer technology, driven by both
hardware innovations and software developments, making computers more powerful, efficient,
and accessible.
2.
Differentiating Positional and Non-Positional Number Systems:
Number systems can be categorized into positional and non-positional number systems. Here
is a detailed comparison:
1. Value: In a non-positional system, value is determined using objects or symbols without
positional significance; in a positional system, value is determined by both the digit and its
position within the number.
2. Example: Roman numerals (I, V, X, etc.) are non-positional; the decimal system (base 10) is
positional.
3. Zero: A non-positional system cannot represent zero; a positional system represents zero as a
digit.
4. Symbols: A non-positional system has a limited set of symbols; a positional system has a fixed
set of digits that can be combined in many different ways.
5. Place value: A non-positional system has no concept of place value; in a positional system,
each digit's value is determined by its place value.
6. Purpose: Non-positional systems were invented by ancient civilizations for simple counting
and record-keeping; positional systems were developed for modern, complex mathematics and
computing.
7. Arithmetic: It is not easy to perform arithmetic calculations in a non-positional system;
positional systems were designed to make arithmetic calculations easier.
Solution:
Converting 3456 (decimal) to binary by repeated division by 2:

2 | 3456 | Remainder
2 | 1728 | 0
2 |  864 | 0
2 |  432 | 0
2 |  216 | 0
2 |  108 | 0
2 |   54 | 0
2 |   27 | 0
2 |   13 | 1
2 |    6 | 1
2 |    3 | 0
2 |    1 | 1
  |    0 | 1

Reading the remainders from bottom to top: 3456 (decimal) = 110110000000 (binary).
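The same repeated-division procedure can be written as a short program. Here is a minimal sketch
in Python (the function name to_binary is my own, not part of the assignment):

def to_binary(n):
    # Convert a non-negative decimal integer to a binary string by repeatedly
    # dividing by 2 and collecting the remainders.
    remainders = []
    while n > 0:
        remainders.append(str(n % 2))   # record the remainder of this division
        n //= 2                         # integer-divide by 2 for the next step
    return "".join(reversed(remainders)) or "0"

print(to_binary(3456))   # prints 110110000000, matching the table above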
Operating System:
There are two types of computer software:
a. Application software, which performs services that the user wants.
b. System software, which manages all the operations of the computer.
An operating system (OS) is one of the most important pieces of system software; it controls all of
the computer's resources and acts as the intermediary between the user and the computer hardware.
Without an operating system, it would be rather difficult, if not impossible, for a layperson to run
any application software. In other words, the OS makes the user interface far easier to work with.
The OS is quite similar to a government. By itself, a government performs no useful work of its
own; however, when the other bodies around it carry out their work, the government makes sure
that they complete it accurately, properly, and in a timely manner.
SET – II
4.
Data Communication:
Data communication is the process of transferring data and information from a source (for
example, a hard disk or SSD) to a receiver (for example, a pen drive or optical disc). The source
transmits the data and the receiver collects and stores it. Data moves over both short and long
distances within a computer. Over short distances, data is transmitted along copper conductors as
two-level electrical signals.
Circuit designers usually do not worry much about the shape of the conductor or the analog
characteristics of signal transmission except in the fastest computers. Data communication also
involves transmitting digital messages to external devices, which are typically independently
powered circuitry existing beyond the computer’s chassis. The goal of any communication system
is to provide the highest possible transmission rate with the least noise and power.
Understanding the basic elements is essential for ensuring efficient and accurate data transfer in
any communication system. A data communication system comprises five basic elements, chiefly
concerned with data and signals: the message (the data being communicated), the sender, the
receiver, the transmission medium, and the protocol, the set of rules that governs the exchange.
The OSI model is short for the Open Systems Interconnection model. It is a layered model of data
communications and networks consisting of seven layers: Physical, Data Link, Network, Transport,
Session, Presentation, and Application. The OSI model was touted as the ultimate model of data
communications; however, the internet adopted the TCP/IP model, and the OSI model is now used
mainly as a reference rather than in practice.
When a message moves from one device to another, it passes through these layers like climbing
stairs. Sometimes, however, it only passes through the lower three layers, as happens in
intermediary devices such as routers.
The OSI Reference Model is a way of breaking down how computers communicate into
manageable pieces. Each layer plays a role in the transfer of data. Even though it didn't become
the main standard, it still helps us understand how networks function.
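As a rough illustration of layering, here is a toy Python sketch (not a real protocol implementation;
only the layer names come from the OSI model, the rest is invented for the example). Each layer
wraps the data handed down by the layer above with its own header, and the receiver unwraps the
headers in reverse order:

layers = ["Application", "Presentation", "Session", "Transport",
          "Network", "Data Link", "Physical"]

message = "HELLO"
for layer in layers:                       # sender side: wrap top-down
    message = f"[{layer}]" + message
print("On the wire:", message)

for layer in reversed(layers):             # receiver side: unwrap bottom-up
    message = message.removeprefix(f"[{layer}]")
print("Delivered to the application:", message)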
5.
TCP/IP Model:
The TCP/IP model is short for the Transmission Control Protocol/Internet Protocol model. It serves
as the universal language that computers use to communicate with each other, whether on the
internet or within private networks. A computer uses the TCP/IP model to exchange information
with other computers when it is connected to the internet; a computer with direct access to the
internet is provided with a copy of the TCP/IP software.
The model is structured in four layers, which makes the data easier to understand and work with.
The layers are not directly comparable in function, since each one has a different job to do. They
are arranged as a protocol stack, meaning that each layer within the TCP/IP model has its own
distinct role. The layers collaborate, with each one offering services to the layer above it while
using services from the layer below it.
The IP layer’s job is to transport the data from one computer to the other. It does not have the
responsibility of reliable delivery. Therefore, the TCP layer provides reliable data stream delivery.
Application Layer: The initial layer is the Application layer, where programs communicate
by utilizing the TCP/IP model. Examples of such programs include Telnet and FTP.
Transport Layer: Next is the Transport layer, which is responsible for transmitting data
from one application to another. The prevalent protocol employed here is TCP, which
ensures reliable data delivery. Another transport-layer protocol is the User Datagram
Protocol (UDP), which provides a simpler, connectionless, but unreliable service (see the
socket sketch after this list).
Network Layer: Following the Transport layer is the internetwork layer, also referred to as
the network layer; it is the core of the whole model. It establishes the virtual network image
and employs the Internet Protocol (IP) to route messages. This layer does not provide
reliable information flow or error recovery; instead, it relies on the higher layers for those.
It simply provides the highway that delivers information to its destination via the layer
beneath it.
Network Interface Layer: The final layer is the Network Interface layer, which connects to
the actual network hardware. This layer does not guarantee reliable delivery and may be
packet- or stream-oriented. TCP/IP is flexible because it can work with many types of
network interfaces: it does not specify a particular protocol for the Network Interface layer,
which allows it to adapt to whatever interface is available.
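To show how an application hands data to the transport layer, here is a minimal, self-contained
Python sketch using the standard socket module (the port number 50007 and the message are
arbitrary choices for the example). It uses UDP because no connection setup is needed; TCP would
add the reliability discussed above:

import socket

# "Receiver": bind a UDP socket on the loopback interface.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 50007))

# "Sender": hand a datagram to UDP (transport layer); IP (network layer)
# routes it, and the network interface layer puts it on the (loopback) link.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello over UDP", ("127.0.0.1", 50007))

data, addr = receiver.recvfrom(1024)    # read the datagram; no delivery guarantee
print("received", data, "from", addr)

sender.close()
receiver.close()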
6.
a. Object-Oriented Design:
1. Analysis: This is where developers understand what the users need and translate these
needs into system requirements.
2. Design: This stage involves creating a detailed plan for how the system will work. It starts
with a problem statement and ends with a blueprint for the system.
3. Implementation: Here, the detailed design is turned into a working system that meets the
users' needs.
An example of this process is the waterfall model. This model starts by identifying what
needs to be done, then figuring out how to do it, followed by actually doing it. After that,
the results are tested to make sure they meet the users' requirements. Finally, the completed
system is used. However, the waterfall model has limitations because, in the real world,
problems are not always clear-cut.
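To make the step from design to implementation concrete, here is a minimal Python sketch (the
BankAccount class and its requirement are invented for illustration, not taken from the
assignment): a requirement such as "users can deposit money into and withdraw money from an
account" becomes a class in the design stage and code in the implementation stage.

class BankAccount:
    # The design artifact turned into code: the data users care about (a balance)
    # together with the operations the requirements call for.
    def __init__(self, owner, balance=0):
        self.owner = owner
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

account = BankAccount("Alice")
account.deposit(100)
account.withdraw(30)
print(account.balance)   # prints 70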
b. Software Testing:
Software testing is a crucial process in software development in which the developer
examines a software program they have created in order to identify errors.
Successful testing reveals existing errors and provides insight into the software's
functionality, performance, and adherence to specifications. However, it cannot guarantee
error-free software; its purpose is to indicate the presence of errors rather than their
absence. The testing strategy follows a spiral model, starting from unit testing, where
individual components are examined, then progressing to integration testing, validation
testing, and finally system testing.
Each stage broadens the scope of testing and ensures that the software meets all functional,
behavioral, and performance requirements. Different testing techniques, such as white-box
and black-box testing, are employed at various stages to achieve comprehensive coverage
and thorough error detection.
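As a small illustration of the first stage, unit testing, here is a minimal sketch using Python's
built-in unittest module (the add function and the test cases are invented for the example):

import unittest

def add(a, b):
    # The unit under test.
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()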
c. Imperative Paradigms:
The imperative paradigm, rooted in the Latin verb imperare, meaning "to command,"
centers on commands that directly update variables stored in memory. In this paradigm,
programming languages offer statements, like assignment statements, explicitly altering
the computer's memory state. This model closely mirrors how computers execute
commands and typically boasts high execution efficiency. Many programmers find the
imperative paradigm intuitive for expressing instructions.
At its core, imperative programming emphasizes step-by-step instructions for the computer
to follow, akin to giving direct commands. Developers specify precisely how tasks should
be carried out, dictating the exact sequence of operations. Imperative languages often
feature variables, loops, and conditional statements, allowing programmers to manipulate
data directly and control the flow of execution.
Common examples of imperative languages include C, Java, and Python. These languages
enable developers to specify the exact sequence of operations, making them suitable for
tasks requiring precise control over memory and computation. Despite the prevalence of
other paradigms, the imperative paradigm remains widely used due to its straightforward
approach and efficiency in expressing computational tasks.
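As a short illustration in Python, one of the imperative languages named above (the example itself
is mine, not part of the assignment), the snippet below uses a variable, a loop, and a conditional to
dictate the exact sequence of operations:

# Sum only the even numbers in a list, step by step, by updating a variable.
numbers = [3, 8, 15, 22, 7, 10]
total = 0                   # program state stored in memory
for n in numbers:           # explicit control flow: a loop
    if n % 2 == 0:          # a conditional statement
        total += n          # an assignment statement updates the state
print(total)                # prints 40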