
BACHELOR OF BUSINESS ADMINISTRATION

ASSIGNMENT

NAME : SADIKSHYA MISHRA

ROLL NUMBER : 2414104439

SESSION : JAN-FEB 2024

PROGRAM : BACHELOR OF BUSINESS ADMINISTRATION (BBA)

SEMESTER : I

COURSE CODE : DBB1105

COURSE NAME : COMPUTER FUNDAMENTALS


SET – I
1.
Characteristics of Computers:
Here are the primary characteristics of computers:
A. Speed:
The processing speed of modern computers is measured in gigahertz (GHz), making them
significantly faster than older machines. Modern desktop processors reach clock speeds of up
to about 5.3 GHz, letting users edit 4K video, perform complex calculations, and live-stream
while playing a game.
B. Storage:
Users can store huge amounts of data on computers with the help of hard drives, SSDs, and
cloud storage. Computers make both immediate and long-term data retention easy. SSDs with
1 TB or more of capacity are now commonplace, and photos, documents, videos, and other
data can also be kept in cloud services.
C. Accuracy:
Computers are highly accurate, performing calculations and operations with almost no errors.
However, the output is heavily reliant on the quality of the input.
Financial software like QuickBooks relies on the computer's accuracy to perform complex
accounting calculations, producing the accurate financial reports and analyses that businesses
depend on.
D. Reliability:
Properly maintained computers can be highly reliable. For example, avoiding shady websites
greatly reduces your exposure to viruses, so you can depend on your computer to download
and stream videos or music from legitimate sites.
In healthcare, medical devices and systems like MRI machines and patient monitoring systems
must be highly reliable to ensure accurate diagnostics and patient safety.
E. Automation:
Home automation systems such as Amazon Alexa or Google Home can control smart home
devices like lights, thermostats, and security cameras, automating everyday tasks and
enhancing convenience.
F. Diligence:
Unlike humans, computers do not tire of monotonous tasks. They can perform repetitive work
with consistent efficiency and accuracy over long periods.
Data entry systems used in large organizations, such as CRM systems, handle millions of
records without errors, maintaining high productivity and accuracy in data management.
G. Versatility:
You can use your personal computer to run programs such as Photoshop, conduct online
classes through Zoom, prepare and edit assignments in Word, and access the vast internet.
There is little you cannot do with a computer, and with advancing technology, computers
become more versatile every day.
Generations of Computers:
Computers have evolved through several generations of technological advancement. Each
generation is defined by its technology, capabilities, and usage. Here are the key outlines of
each generation, from the beginning to the present:
1. First Generation
Timeline: 1940s – 1950s
Technology: Vacuum tubes
Key Characteristics: Large size, high power consumption, significant heat generation, limited
programming capabilities.
Example: ENIAC (Electronic Numerical Integrator and Computer) was used for artillery
trajectory calculations and was one of the first general-purpose electronic digital computers.
2. Second Generation
Timeline: 1950s – 1960s
Technology: Transistors
Key Characteristics: Smaller size, more reliable, less heat generation, faster, and more energy-
efficient.
Example: IBM 1401 was widely used in business and industry for processing commercial data
and scientific calculations.

3. Third Generation
Timeline: 1960s – 1970s
Technology: Integrated Circuits (ICs)
Key Characteristics: Smaller size, increased reliability, higher speed, lower cost, and greater
efficiency.
Example: IBM System/360 was notable for its ability to handle both commercial and scientific
applications, supporting high-level programming languages.

4. Fourth Generation
Timeline: 1970s - Present
Technology: Microprocessors
Key Characteristics: Significant reduction in size, increased processing power, very high-speed
operations, widespread use in personal computers.
Example: Apple Macintosh revolutionized personal computing with its graphical user interface
(GUI), making computers more accessible to the general public.

5. Fifth Generation
Timeline: Present and beyond
Technology: Artificial Intelligence and Quantum Computing (in development)
Key Characteristics: Focus on AI and machine learning, natural language processing, advanced
parallel processing, potential use of quantum computing.
Example: IBM Watson uses AI to analyze large datasets and provide insights in fields like
healthcare and finance, demonstrating the capabilities of modern AI-driven systems.
These generations highlight the rapid advancement in computer technology, driven by both
hardware innovations and software developments, making computers more powerful, efficient,
and accessible.
2.
Differentiating Positional and Non-Positional Number Systems:
Number systems can be categorized into positional and non-positional number systems. Here
is a detailed comparison:
Non-Positional Number System vs. Positional Number System:

- Value: In a non-positional system, value is determined by objects or symbols without positional significance; in a positional system, value is determined by both the digit and its position within the number.
- Example: Roman numerals such as I, V, X are non-positional; the decimal system (base 10) is positional.
- Zero: A non-positional system cannot represent zero; a positional system represents zero as a digit.
- Symbols: A non-positional system has a limited set of symbols; a positional system uses a fixed set of digits that can be combined in many different ways.
- Place value: There is no place-value concept in a non-positional system; in a positional system, each digit's value is determined by its place.
- Purpose: Non-positional systems were invented by ancient civilizations for simple counting and record-keeping; positional systems support modern, complex mathematics and computing.
- Arithmetic: Arithmetic calculations are difficult in a non-positional system; positional systems make them much easier.
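To make the place-value idea concrete, here is a minimal Python sketch (the function name is my own) that evaluates a sequence of digits under positional rules:

```python
# Evaluate a numeral in any positional base.
# Horner's method: equivalent to summing digit * base^position.
def positional_value(digits, base=10):
    value = 0
    for d in digits:               # most significant digit first
        value = value * base + d
    return value

print(positional_value([3, 4, 5], base=10))  # 345 = 3*100 + 4*10 + 5
print(positional_value([3, 4, 5], base=8))   # 229 = 3*64 + 4*8 + 5
```

The same digit sequence names different numbers in different bases precisely because each position carries a weight; Roman numerals have no such weighting, so no analogous procedure exists for them.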

Converting (3456)10 to the binary system:

Solution:
3456 ÷ 2 = 1728, remainder 0
1728 ÷ 2 = 864, remainder 0
864 ÷ 2 = 432, remainder 0
432 ÷ 2 = 216, remainder 0
216 ÷ 2 = 108, remainder 0
108 ÷ 2 = 54, remainder 0
54 ÷ 2 = 27, remainder 0
27 ÷ 2 = 13, remainder 1
13 ÷ 2 = 6, remainder 1
6 ÷ 2 = 3, remainder 0
3 ÷ 2 = 1, remainder 1
1 ÷ 2 = 0, remainder 1

Writing the remainders from bottom to top, we have

(3456)10 = (110110000000)2

Check: 110110000000 in binary is 2^11 + 2^10 + 2^8 + 2^7 = 2048 + 1024 + 256 + 128 = 3456.
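The repeated-division procedure translates directly into a short program. Here is a minimal sketch (the function name is my own), with Python's built-in bin() as a cross-check:

```python
# Convert a non-negative decimal integer to binary by repeated division by 2,
# collecting the remainders and reading them in reverse order.
def to_binary(n):
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        remainders.append(str(n % 2))  # remainder at each division step
        n //= 2                        # the quotient becomes the next dividend
    return "".join(reversed(remainders))

print(to_binary(3456))  # 110110000000
print(bin(3456))        # 0b110110000000 (built-in cross-check)
```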
3.
The differences between RAM and ROM are as follows:
- Volatility: RAM (Random Access Memory) is volatile; its data is lost when power is switched off. ROM (Read-Only Memory) is non-volatile; its data is retained without power.
- Capacity: RAM is usually larger; ROM is usually smaller.
- Use: RAM holds running applications and the operating system; ROM stores firmware and bootstrap programs.
- Access: RAM contents can be written, erased, read, and updated by memory users; ROM contents are strictly read-only and cannot be updated.
- Role: RAM is temporary storage for data and instructions; ROM is permanent storage for firmware and system boot information.

Operating System:
There are two types of computer software:
a. Application software, which performs services that the user wants.
b. System software, which manages all the operations of the computer.
An operating system (OS) is one of the most important pieces of system software; it controls all of
the computer's resources and acts as the medium between the user and the computer hardware.
Without an operating system, it would be difficult, if not impossible, for a layperson to run any
application software. In other words, the OS makes the user interface easier.
The OS is quite similar to a government. By itself, a government performs no useful work; but
when the other bodies around it set to work, the government ensures that they complete the work
accurately, properly, and in a timely manner.

Functions of Operating System:


An operating system is the core of a computer system with various functions. Here are some of the
key functions of an OS:
- Data Management
For a computer to function properly, the data fed into it must be stored and managed well. An OS
keeps track of permanent and temporary data, allocates storage locations, records how each file is
to be used, and retrieves data when the user needs it. The operating system exposes these
data-management services through a programming interface; a small sketch at the end of this
section illustrates the idea.
- Resource Management
The computer has multiple resources with which to perform its tasks, such as primary and
secondary memory and input and output devices, and the operating system manages them all.
These resources can go awry if not managed properly. For the user, the operating system presents
a friendly virtual machine; behind the scenes, it controls all the data and resources in real time.
- Task Management
The computer is synonymous with multitasking: it can play music while also letting the user edit
a document online. The operating system schedules and prioritizes all running applications and
programs, making sure everything runs smoothly.
- Device Management
The OS lets the computer control the peripherals you plug in, such as printers and keyboards.
Each device speaks its own protocol to the computer through the OS, and the small program that
translates this protocol is called a driver. When you add a new device, such as a printer, you
install its driver so the computer can use it. Drivers are therefore like translators that let the
computer understand and use all the different devices you connect to it.
- User Interface
The user interface is one of the most important parts of a computer. It turns the system's complex
data into usable information for the user. Whether the user needs online access or navigation
help, the OS makes it easier for everyone to find what they need.
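As a small illustration of how applications use these OS services, a program never touches the disk directly; it asks the OS through a programming interface. A minimal Python sketch (the file name is illustrative):

```python
import os

# Data management: the OS allocates storage for the file, records where it
# lives, and mediates every read and write on the program's behalf.
with open("notes.txt", "w") as f:
    f.write("The OS stores and retrieves this data for us.\n")

# Retrieval: the program asks for the data by name; the OS locates the file
# and delivers its bytes.
with open("notes.txt") as f:
    print(f.read())

# Metadata the OS keeps about the file (its size in bytes).
print(os.path.getsize("notes.txt"))
```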

SET – II
4.
Data Communication:
Data communication is the process of transferring data and information from a source (such as a
hard disk or SSD) to a receiver (such as a pen drive or CD): the source transmits the data and the
receiver collects and stores it. Data moves over both short and long distances. Within a computer,
short-distance data is transmitted over copper conductors as two-level electrical signals. Except in
the fastest computers, circuit designers usually do not worry much about the shape of the
conductor or the analog characteristics of signal transmission. Data communication also involves
transmitting digital messages to external devices, which are typically independently powered
circuits beyond the computer's chassis. The goal of any communication system is to provide the
highest possible transmission rate with the least noise and power.

Basic Elements of Communication System:

Understanding the basic elements is essential for ensuring efficient and accurate data transfer in
any communication system. A communication system comprises several basic elements, chiefly
involving data and signals. Here is a brief description of each:

- Data and Signals:


o A signal is an electrical current or electromagnetic field that carries data from
one location to another, within a computer or to an external device. Signals range
from the simplest forms to the most advanced: the simplest is a direct current that
switches on and off, while more complex signals use alternating current carrying
multiple data streams. The data is impressed onto these signals through
modulation.
- Analog and Digital Signals:
o Analog signals have an infinite number of intensity levels over time. As the wave
moves from one point to another, it passes through countless values. For example,
an analog clock provides continuous information through the movement of its
hands. Digital signals, in contrast, have a finite number of distinct values, typically
represented by 1 and 0. These signals are fundamental for data storage and
transmission in digital devices.
- Periodic and Non-Periodic Signals:
o Both analog and digital signals can be either periodic or non-periodic. A periodic
signal repeats a specific pattern over a measurable period, known as a cycle. This
repetition occurs at regular intervals. Non-periodic signals do not follow a repeating
pattern and change unpredictably over time.
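To make these distinctions concrete, the following sketch samples a periodic analog waveform (a sine wave, which repeats every cycle) and quantizes each sample to one of two digital levels; the frequency and sample count are illustrative:

```python
import math

FREQ = 1.0     # cycles per second; the period is T = 1/FREQ
SAMPLES = 8    # samples taken across one cycle

# The sine value varies continuously (analog); thresholding reduces each
# sample to one of two values (digital).
for i in range(SAMPLES):
    t = i / SAMPLES                               # time within one period
    analog = math.sin(2 * math.pi * FREQ * t)     # continuous-valued
    digital = 1 if analog >= 0 else 0             # two-level quantization
    print(f"t={t:.3f}  analog={analog:+.3f}  digital={digital}")
```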

OSI Reference Model:

OSI stands for Open Systems Interconnection. The OSI model is a layered model of data
communications and networks consisting of seven layers. It was touted to be the ultimate model
of data communications; however, the internet adopted the TCP/IP model instead, and the OSI
model survives mainly as a reference.

Here are the seven layers of OSI reference model:


1. Physical Layer
2. Data Link Layer
3. Network Layer
4. Transport Layer
5. Session Layer
6. Presentation Layer
7. Application Layer

When a message moves from one device to another, it goes through these layers like climbing
stairs. Sometimes, however, it only goes through the first three layers, especially when passing
through intermediary devices.

The OSI Reference Model is a way of breaking down how computers communicate into
manageable pieces. Each layer plays a role in the transfer of data. Even though it didn't become
the main standard, it still helps us understand how networks function.

5.

TCP/IP Model:

TCP/IP stands for Transmission Control Protocol/Internet Protocol. The model serves as the
universal language that computers use to communicate with each other, whether on the internet
or within private networks. A computer connected to the internet uses the TCP/IP model to
exchange information with other computers; any computer with direct internet access carries its
own copy of the TCP/IP software.

The model is structured in four layers, which makes data handling easier to understand and use.
The layers cannot be compared functionally, as each has a different job to do. They are arranged
in a protocol stack: each layer within the TCP/IP model has its own distinct role, and the layers
collaborate, each offering services to the layer above it while using services from the layer below
it.

The IP layer’s job is to transport the data from one computer to the other. It does not have the
responsibility of reliable delivery. Therefore, the TCP layer provides reliable data stream delivery.

Here is a brief explanation of the four layers of the TCP/IP model:

Fig. I: The TCP/IP Model

- Application Layer: The first layer is the Application layer, where programs communicate
by using the TCP/IP model. Examples of such programs include Telnet and FTP.
- Transport Layer: Next is the Transport layer, which is responsible for transmitting data
from one application to another. The prevalent protocol employed here is TCP, which
ensures reliable data delivery (see the sketch after this list). Another transport-layer
protocol is the User Datagram Protocol (UDP), which provides a simpler, albeit
unreliable, service.
- Network Layer: This is the most important layer in the model. Following the Transport
layer is the internetwork layer, also referred to as the network layer. It establishes the
virtual network image and employs the Internet Protocol (IP) to route messages. This
layer does not provide reliable information flow or error recovery; it relies on the higher
layers for those. It simply provides a path for delivering information to its destination via
the layer below.
- Network Interface Layer: The final layer is the Network Interface layer, which connects
to the actual network hardware. This layer does not guarantee reliable delivery and is
packet- or stream-oriented. TCP/IP is flexible in that it can function with various types of
network interfaces: it does not specify a particular protocol for this layer, which allows it
to adapt to whatever interface is available.
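The division of labor between layers shows up in ordinary network code: the application composes a message and hands it to TCP, while IP routing and the network hardware are handled below. A minimal sketch using Python's standard socket module (the host is a placeholder):

```python
import socket

# Application layer: we speak HTTP, an application protocol.
# Transport layer: create_connection opens a TCP connection, giving us a
# reliable byte stream; the network and interface layers below handle
# IP routing and the physical hardware for us.
with socket.create_connection(("example.com", 80)) as s:
    s.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    print(s.recv(1024).decode(errors="replace"))  # first chunk of the reply
```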

6.

Object-Oriented Design:

The characteristics of an object-oriented design (OOD) are listed below, with a short illustrative sketch after the list:

1. Each object independently manages its own state information.


2. Objects keep their data and actions separate from others, making the system more
organized and less dependent on other parts.
3. Objects have data and actions that describe their behavior and state.
4. Classes can inherit attributes and methods from other classes, promoting code reuse.
5. Objects interact by passing messages, invoking methods on other objects.
6. Objects can be changed or updated on their own, making the system easier to maintain.
7. It's easy to understand and maintain the system because objects in the system closely match
real-world things.
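Here is the promised sketch. It is a minimal, hypothetical example (the class and method names are my own) showing encapsulated state, inheritance, and method calls acting as messages:

```python
class Account:
    """Each object manages its own state; data and behavior live together."""

    def __init__(self, owner, balance=0):
        self.owner = owner
        self._balance = balance          # encapsulated state

    def deposit(self, amount):           # behavior acting on the object's data
        self._balance += amount

    def balance(self):
        return self._balance


class SavingsAccount(Account):           # inheritance: reuses Account's code
    def add_interest(self, rate):
        self.deposit(self._balance * rate)   # objects interact via messages


acct = SavingsAccount("Sadikshya", 100)
acct.add_interest(0.05)                  # invoking a method = sending a message
print(acct.balance())                    # 105.0
```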

Describe the following terms:

a. Software Development: Software development is the process of creating software
applications in several stages: understanding what the users need, designing the software,
building the application, testing it, and deploying the end result. Building a software
application involves a thorough engineering process.

Here are the main stages in brief:

1. Analysis: This is where developers understand what the users need and translate these
needs into system requirements.
2. Design: This stage involves creating a detailed plan for how the system will work. It starts
with a problem statement and ends with a blueprint for the system.
3. Implementation: Here, the detailed design is turned into a working system that meets the
users' needs.

An example of this process is the waterfall model. This model starts by identifying what
needs to be done, then figuring out how to do it, followed by actually doing it. After that,
the results are tested to make sure they meet the users' requirements. Finally, the completed
system is used. However, the waterfall model has limitations because, in the real world,
problems are not always clear-cut.

b. Software Testing:
Software testing is the crucial stage of software development in which the developer
examines the software they have created in order to identify errors.

Successful testing reveals existing errors and provides insight into the software's
functionality, performance, and adherence to specifications. However, it cannot guarantee
error-free software; its purpose is to indicate the presence of errors rather than their
absence. The testing strategy follows a spiral model, starting from unit testing, where
individual components are examined, then progressing to integration testing, validation
testing, and finally system testing.

Each stage broadens the scope of testing and ensures that the software meets all functional,
behavioral, and performance requirements. Different testing techniques, such as white-box
and black-box testing, are employed at various stages to achieve comprehensive coverage
and thorough error detection.
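As a small illustration of the unit-testing stage described above, here is a sketch using Python's built-in unittest module; the function under test is invented for the example:

```python
import unittest

def discount(price, percent):
    """Function under test: apply a percentage discount to a price."""
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(discount(200.0, 10), 180.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(discount(99.99, 0), 99.99)

if __name__ == "__main__":
    unittest.main()
```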

c. Imperative Paradigms:
The imperative paradigm, rooted in the Latin imperare, meaning "to command,"
centers on commands that directly update variables stored in memory. In this paradigm,
programming languages offer statements, like assignment statements, explicitly altering
the computer's memory state. This model closely mirrors how computers execute
commands and typically boasts high execution efficiency. Many programmers find the
imperative paradigm intuitive for expressing instructions.

At its core, imperative programming emphasizes step-by-step instructions for the computer
to follow, akin to giving direct commands. Developers specify precisely how tasks should
be carried out, dictating the exact sequence of operations. Imperative languages often
feature variables, loops, and conditional statements, allowing programmers to manipulate
data directly and control the flow of execution.

Common examples of imperative languages include C, Java, and Python. These languages
enable developers to specify the exact sequence of operations, making them suitable for
tasks requiring precise control over memory and computation. Despite the prevalence of
other paradigms, the imperative paradigm remains widely used due to its straightforward
approach and efficiency in expressing computational tasks.
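A small imperative sketch in Python: the program mutates a variable step by step, dictating the exact sequence of operations:

```python
# Imperative style: explicit state (total) updated by commands in sequence.
numbers = [3, 7, 2, 9]
total = 0
for n in numbers:        # loop: the programmer spells out the control flow
    if n % 2 == 1:       # conditional: a decision at each step
        total += n       # assignment: directly updates memory
print(total)             # 19, the sum of the odd numbers 3, 7, and 9
```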

d. Functional Programming Paradigms:


Functional programming paradigms center around expressing computations through the
evaluation of mathematical functions. In contrast to imperative paradigms, which rely on
mutable state and explicit commands to update variables, functional programming treats
values as immutable entities.

Instead of modifying values, computations in functional languages involve transforming
existing values into new ones by applying functions. This approach emphasizes the use of
pure functions, which produce consistent outputs for given inputs without side effects.
Functional programming languages, such as Haskell and Clojure, often feature higher-
order functions and function composition, enabling concise and expressive code. By
embracing immutability and mathematical functions, functional programming promotes
code that is easier to reason about, facilitating modularity, scalability, and maintainability.
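For comparison, here is the computation from the imperative sketch above rewritten in a functional style: nothing is mutated, and new values are produced by applying pure functions (Python is used for a side-by-side comparison, though Haskell or Clojure would be more natural homes):

```python
from functools import reduce

numbers = [3, 7, 2, 9]

# Pure functions: outputs depend only on inputs, with no side effects.
def is_odd(n):
    return n % 2 == 1

def add(a, b):
    return a + b

# Transformation instead of mutation: filter and reduce build new values.
total = reduce(add, filter(is_odd, numbers), 0)
print(total)  # 19, computed without updating any variable in place
```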

e. Artificial Intelligence Software:


Artificial intelligence (AI) software consists of programs whose algorithms rely on
non-numerical methods to solve complex problems that are not easily handled by
traditional computational or analytical approaches. For example, expert systems and
knowledge-based systems mimic human decision-making processes to offer intelligent
solutions.
Another application area is pattern recognition, where AI software can identify and
interpret patterns in various types of data or images. Additionally, AI software is adept at
playing games, demonstrating strategic thinking and adapting to different scenarios within
game environments.
AI software is gaining momentum because it addresses challenges across many different
fields by simulating intelligent behavior.
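As a toy sketch of the expert-system idea mentioned above, here is a miniature rule-based system; the rules and facts are invented purely for illustration:

```python
# A miniature rule-based "expert system": each rule pairs a condition on the
# known facts with a conclusion, mimicking a simple decision process.
rules = [
    (lambda f: f["temperature"] > 38.0 and f["cough"], "possible flu"),
    (lambda f: f["temperature"] > 38.0, "fever of unknown cause"),
    (lambda f: True, "no diagnosis suggested"),  # default rule
]

def diagnose(facts):
    for condition, conclusion in rules:          # fire the first matching rule
        if condition(facts):
            return conclusion

print(diagnose({"temperature": 38.6, "cough": True}))  # possible flu
```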
