CYB101: FUNDAMENTALS OF COMPUTING AND CYBERSECURITY

FULL LECTURE NOTE (MODULES AND UNITS)

MODULE 1: INTRODUCTION TO COMPUTING

Unit 1.1: Basic Concepts of Computing


1.1.1 Data
Data are raw facts and figures that have not yet been given meaning. Examples include numbers,
names, dates or measurements typed into a computer. A cybersecurity breach often begins with
attackers trying to access sensitive data such as marks, salaries or passwords.

1.1.2 Information
Information is data that have been processed, organised and given meaning. When the raw marks
of students are arranged in a result sheet with names and courses, they become information that
can be used for decisions. Protecting information, not just raw data, is a central goal of
cybersecurity.

1.1.3 Computer and Computer System


A computer is an electronic device that accepts data, processes them according to instructions and
produces information. A computer system is a complete set of related parts that work together,
including hardware, software, users, procedures and data. In cybersecurity, we are concerned with
how to protect the whole system, not just the machine.

Unit 1.2: Main Components of a Computer System


1.2.1 Hardware
Hardware refers to the physical parts of the computer that can be seen and touched, such as the
system unit, monitor, keyboard, mouse and storage devices. Hardware can be damaged, stolen or
tampered with, so physical security is an important part of cybersecurity.

1.2.2 Software
Software is the set of programs and instructions that tell the hardware what to do. It includes
system software such as the operating system and application software such as word processors
and browsers. Many cyber-attacks exploit weaknesses in software, so understanding that software
controls behaviour is important for security.
1.2.3 Users and Procedures
Users are the people who interact with the computer system, while procedures are the agreed steps
and rules for using the system correctly. Even with strong hardware and software, careless or
uninformed users can create serious security problems. Cybersecurity therefore focuses heavily on
user behaviour and clear procedures.

Unit 1.3: Types of Computers


1.3.1 Personal Computers and Mobile Devices
Personal computers, such as desktops and laptops, and mobile devices, such as smartphones and
tablets, are used daily by students, staff and organisations. They often hold personal files, login
details and access to online services, which makes them common targets for malware and phishing
attacks.

1.3.2 Servers
Servers are computers that provide services to many users over a network, such as web hosting,
email, file sharing or databases. If a server is attacked, many users can be affected at once, so
servers require stronger and more carefully managed security controls.

1.3.3 Embedded Systems and IoT Devices


Embedded systems and Internet of Things (IoT) devices are small computers built into everyday
objects such as cameras, smart TVs, routers and sensors. They often have limited security features
but are connected to networks, which means they can be used by attackers as entry points into
larger systems.

1.3.4 Standalone and Networked Computers


A standalone computer operates without being connected to any other system, while a networked
computer is connected to others through a local network or the internet. Networking brings great
benefits for communication and resource sharing but also greatly increases exposure to cyber
threats. Most modern cybersecurity work focuses on protecting networked systems.
Unit 1.4: Link between Computing and Cybersecurity
1.4.1 Dependence on Computer Systems
Modern education, business, banking, health care and government all depend heavily on computer
systems for storing and processing information. Any disruption or compromise of these systems
can have serious academic, financial or social consequences.

1.4.2 Why Cybersecurity is Necessary


Because computers store valuable data and are connected through networks, they attract attackers
who try to steal, alter or destroy information or deny access to services. A basic understanding of
computing concepts helps students appreciate the security principles that follow in later modules,
such as confidentiality, integrity, availability, authentication, access control, cryptography and the
evolution of cyber-attacks.

MODULE 2: INTRODUCTION TO CYBER SECURITY AND THE CIA TRIAD

Unit 2.1: Concept of Cyber Security


Cyber security is the field that deals with protecting computers, networks, software, and data from
harm, misuse, theft, or disruption. As individuals, organisations, and governments rely more on
computers and the internet for banking, health care, education, communication, and even national
security, the need to protect digital systems becomes critical.

The aim of cyber security is to ensure that systems and information remain safe and reliable.
Attackers may try to steal passwords, credit card details, exam questions, or personal messages.
They may also try to shut down systems or modify data. Cyber security uses a combination of
technologies, processes, and human awareness to defend against these threats. This includes
technical measures like firewalls, antivirus software, and encryption, but also non-technical
measures such as staff training, security policies, and physical protection of equipment.

In summary, cyber security is not just about “hacking” or coding. It is a broad discipline that
balances people, processes, and technology to protect information and keep digital services
running safely.
Unit 2.2: Confidentiality
Confidentiality refers to the protection of information from being seen or accessed by people who
are not authorized to see it. When confidentiality is maintained, only the right people, at the right
time, and for the right reasons, can gain access to certain data.

For example, the marks of students in an exam should not be visible to every staff member in a
school; only authorized lecturers, examination officers, and the student concerned should see
them. Similarly, a hospital must keep patient records confidential, so that a stranger cannot walk
in and read someone’s medical history.

Confidentiality is usually achieved through mechanisms like passwords, user accounts with proper
permissions, and encryption. When data is encrypted, it is converted into a form that looks
meaningless to anyone who does not have the correct decryption key. Confidentiality also requires
physical protection; for instance, servers might be kept in locked rooms, and printed documents
may be stored in locked cabinets. If confidentiality is broken, sensitive information can be leaked,
leading to reputation damage, financial loss, or even danger to human life.
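
As a small illustration, the Python sketch below (a minimal sketch using the third-party cryptography package; the sample message and variable names are ours) shows how encryption supports confidentiality: anyone who sees the ciphertext but lacks the key learns nothing.

    # A minimal sketch of symmetric encryption with the "cryptography" package.
    # Anyone who sees "token" but does not hold "key" cannot read the data.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # the secret key; must itself be protected
    cipher = Fernet(key)

    token = cipher.encrypt(b"CSC101: Aisha scored 78")   # ciphertext
    print(token)                        # unreadable bytes to outsiders
    print(cipher.decrypt(token))       # original data, for key holders only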

Unit 2.3: Integrity


Integrity means that data and systems are accurate, complete, and have not been tampered with in
an unauthorized way. When integrity is preserved, you can trust that the information you see is
exactly what was originally stored, sent, or intended, without hidden modifications or corruptions.

Imagine if a bank’s database were changed by an attacker so that a balance of ₦50,000 appears as
₦5,000 or ₦500,000. This would be a serious violation of integrity. In another example, if a medical
prescription is altered so that the dosage of a drug is changed, it could cause harm to the patient.
Even in academic settings, if someone changes grades in the school system without permission,
that is also an integrity violation.

To protect integrity, systems use techniques such as checksums, hash functions, and digital
signatures. A hash function takes data and produces a short, fixed-length value called a hash. If
the data changes even slightly, the hash will change significantly, revealing that the data has been
altered. Digital signatures combine hashing with cryptography to prove both who sent a message
and that the message has not been modified on the way. Backups and proper access control also
help restore and protect integrity when something goes wrong.
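
The "small change, very different hash" behaviour can be seen in a few lines of Python using the standard hashlib module (the sample strings are invented for illustration):

    # A minimal sketch: a one-character change produces a completely different hash.
    import hashlib

    original = b"Pay N50,000 to account 0123456789"
    tampered = b"Pay N500,000 to account 0123456789"

    print(hashlib.sha256(original).hexdigest())
    print(hashlib.sha256(tampered).hexdigest())  # bears no resemblance to the first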

Unit 2.4: Availability


Availability means that information and systems are accessible and usable whenever authorized
users need them. A secure system is not useful if nobody can access it when required. Availability
focuses on ensuring that services remain up and running, even in the face of attacks, failures, or
disasters.

Examples of availability include being able to log in to an online banking service at any time,
students accessing the school portal during registration, or doctors being able to view patient
records in an emergency. Threats to availability include power failures, hardware damage, software
bugs, network outages, and deliberate attacks such as Denial of Service (DoS) attacks, where
attackers overwhelm a server with fake traffic so that legitimate users cannot connect.

To maintain availability, organisations use reliable hardware, redundant systems (for example,
backup servers), regular maintenance, power backup systems, and effective monitoring. Good
backup and disaster recovery plans also help restore services quickly after a disruption. Availability
works hand-in-hand with confidentiality and integrity as part of the overall security goals.

MODULE 3: IDENTITY, AUTHENTICATION, ACCESS CONTROL, AND NON-REPUDIATION

Unit 3.1: Authentication


Authentication is the process of verifying the identity of a user, device, or system. In simple terms,
it answers the question: “Are you really who you claim to be?” Before a system grants access to
resources, it should make sure it is dealing with a legitimate and authorized entity.

The most common form of authentication is the use of a username and password. The username
identifies the person, and the password proves that the person really is the owner of that account.
However, passwords alone can be weak, especially if they are easy to guess or reused across many
sites. For stronger authentication, systems use additional factors such as something you have (like
a smart card, a security token, or a phone that receives a one-time code) and something you are
(biometric data such as fingerprints, facial recognition, or iris scans).

Modern security often uses multi-factor authentication, which means combining two or more of
these methods. For example, a bank may require both a password and a one-time code sent to
your phone. Good authentication prevents attackers from easily logging into someone else’s
account and is a key foundation for other security features like access control and non-repudiation.
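
To make the password factor concrete, here is a minimal Python sketch (standard library only; the function names are ours) of how a system can check passwords without ever storing them in plain text:

    # A minimal sketch: store a salted hash of the password, never the password itself.
    import hashlib, hmac, os

    def hash_password(password, salt=None):
        salt = salt or os.urandom(16)   # a random salt for each user
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return salt, digest

    def verify_password(password, salt, stored_digest):
        _, candidate = hash_password(password, salt)
        return hmac.compare_digest(candidate, stored_digest)  # constant-time compare

    salt, stored = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, stored))  # True
    print(verify_password("guess123", salt, stored))                      # False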

Unit 3.2: Access Control


Access control refers to the methods and mechanisms used to decide who is allowed to do what
on a system. While authentication confirms who you are, access control determines what you are
allowed to access or perform after you are identified. It answers questions such as: “Can this user
read this file?”, “Can this nurse edit this patient’s record?”, or “Can this staff member approve this
financial transaction?”

Access control can be implemented using different models. In a role-based access control system,
users are assigned roles such as “student”, “lecturer”, “administrator”, or “nurse”. Each role has
certain permissions, and users automatically get the permissions of their roles. In a mandatory
access control system, a central authority strictly defines access levels, as is common in military or
highly classified environments. In discretionary access control, the owner of a resource can decide
who has access to it.
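
A role-based scheme can be sketched in a few lines of Python (the roles and permissions shown are illustrative, not taken from any real system):

    # A minimal sketch of role-based access control:
    # users automatically get the permissions of their role.
    ROLE_PERMISSIONS = {
        "student":       {"read_own_results"},
        "lecturer":      {"read_own_results", "enter_marks"},
        "administrator": {"read_own_results", "enter_marks", "edit_records"},
    }

    def is_allowed(role, action):
        return action in ROLE_PERMISSIONS.get(role, set())

    print(is_allowed("student", "read_own_results"))  # True
    print(is_allowed("student", "enter_marks"))       # False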

Access control rules are usually implemented in operating systems, database systems, and
application software, often through permissions and privileges. If access control is poorly
configured, users may get more permissions than they need, which increases the risk of abuse or
accidental damage. Good access control follows the principle of least privilege, which means giving
each user only the minimum permissions required to perform their duties.

Unit 3.3: Non-repudiation


Non-repudiation is the security property that ensures that a person or system cannot deny having
performed a particular action, such as sending a message, making a transaction, or approving a
request. In other words, it provides proof that something happened and who was responsible for
it.
In the digital world, it is easy to send messages, emails, or transactions, and later claim “I never
sent that” or “That was not me.” Non-repudiation uses cryptographic techniques, especially digital
signatures, to link actions to specific users in a way that can be verified later. When a digital
signature is attached to a document or message, it proves that the message came from the holder
of a particular private key and that it has not been modified since it was signed.
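
The idea can be demonstrated in Python with the third-party cryptography package (a minimal sketch; Ed25519 is one of several signature schemes, and the sample message is ours): only the private-key holder can produce a signature that the public key verifies.

    # A minimal sketch of a digital signature supporting non-repudiation.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    private_key = Ed25519PrivateKey.generate()   # kept secret by the signer
    public_key = private_key.public_key()        # shared with anyone who must verify

    message = b"Transfer N250,000 to account 0123456789"
    signature = private_key.sign(message)

    try:
        public_key.verify(signature, message)         # passes: genuine and unmodified
        public_key.verify(signature, message + b"!")  # raises: message was altered
    except InvalidSignature:
        print("Check failed: message altered or not signed by the key holder")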

This is extremely important in e-commerce, online banking, legal documents, and electronic
contracts. For example, when a customer authorizes a large fund transfer, the bank needs to be
able to prove that the customer truly authorized it, in case there is a dispute later. Logging and
audit trails also support non-repudiation by recording who did what and when, so that actions can
be traced if necessary.

MODULE 4: BUILDING RELIABLE AND SECURE SYSTEMS

Unit 4.1: Fault-Tolerant Methodologies for Implementing Security


Fault tolerance is the ability of a system to continue operating properly even when some of its
components fail. When applied to security, fault-tolerant methodologies aim to maintain security
properties such as confidentiality, integrity, and availability even in the presence of hardware faults,
software bugs, or partial system failures.

One common approach is redundancy, in which critical components are duplicated. For example,
instead of having a single server, an organisation may have two or more servers that can take over
if one fails. Data can also be stored in multiple locations or disks so that if one is damaged, the
data is not lost. Another approach is graceful degradation, where the system does not completely
crash when something goes wrong, but instead continues in a reduced or limited mode until full
service can be restored.

Fault-tolerant security also involves regular backups, failover mechanisms, error detection and
correction, and continuous monitoring. The goal is to avoid a situation where a small failure
becomes a total security breakdown, such as when one crashed server makes an entire service
unavailable or exposes sensitive data.
Unit 4.2: Security Policies
A security policy is a formal, written document that describes how an organisation manages,
protects, and distributes its information and resources. It sets out rules, responsibilities, and
acceptable behaviours for staff, contractors, and sometimes customers. While technical tools are
important, they are guided and made effective by strong, clear policies.

A typical security policy might cover areas such as password rules, acceptable use of the internet
and email, data classification (for example, public, internal, confidential), incident reporting
procedures, physical access to buildings, and rules for using personal devices for work. By clearly
stating what is allowed and what is not, a security policy reduces confusion and helps enforce
consistent behaviour.

Security policies should also reflect legal and regulatory requirements. For example, a hospital
policy must respect laws concerning patient privacy. For a policy to be effective, it must be
communicated to all users, regularly reviewed, and updated as technology and threats evolve.
Policies are often supported by training and awareness programs, as well as disciplinary measures
for violations.

Unit 4.3: Best Current Practices


Best current practices are methods and approaches that, based on current knowledge and
experience, are widely accepted as effective for improving security. They are not rigid rules, but
recommended ways of doing things that reduce risk and strengthen defences.

Examples of best practices include regularly updating software and operating systems to patch
vulnerabilities, using strong and unique passwords, enabling multi-factor authentication, limiting
user privileges, performing regular backups, encrypting sensitive data, and conducting security
awareness training for users. In network environments, best practices might involve segmenting
the network so that an attacker who gains access to one part cannot easily move to others.

Best current practices evolve over time as new threats and technologies emerge. A practice that
was considered good enough ten years ago may be inadequate today. Therefore, organisations and
professionals must stay updated through standards, guidelines from security bodies, and ongoing
research in the field of cyber security.
Unit 4.4: Testing Security
Testing security involves checking systems, applications, and procedures to identify weaknesses
before attackers do. It is an essential part of maintaining a strong security posture. Simply installing
security software is not enough; you must verify that it is correctly configured and effective.
Security testing can take several forms. Vulnerability assessment involves scanning systems to
detect known weaknesses, such as missing patches or misconfigurations. Penetration testing,
sometimes called “ethical hacking,” goes further by simulating real attacks to see how far an
attacker could get into a system. There are also code reviews and application security tests that
look for programming errors which might lead to vulnerabilities such as SQL injection or buffer
overflows.
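
As a tiny taste of what a vulnerability scanner does, the Python sketch below (standard library only; only ever run such checks against systems you are authorized to test) reports whether a single TCP port accepts connections:

    # A minimal sketch: check whether one TCP port is open on a host.
    import socket

    def port_open(host, port, timeout=1.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(port_open("127.0.0.1", 80))  # True only if a web server listens locally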

Regular security testing helps organisations fix problems early, improve their defences, and comply
with regulations or standards that require evidence of security controls. It is important that such
testing is done carefully, with authorization, and often by trained professionals, so that it does not
accidentally cause damage.

Unit 4.5: Incident Response


Incident response is the organized approach an organisation takes when a security incident occurs.
A security incident is any event that threatens the confidentiality, integrity, or availability of
information or systems, such as a malware infection, a data breach, a denial-of-service attack, or
unauthorized access.

An incident response plan typically includes steps such as preparation, detection, containment,
eradication, recovery, and lessons learned. Preparation involves having tools, trained staff, and
clear procedures in place before anything happens. Detection means noticing that an incident is
occurring, often through monitoring and alerts. Containment aims to limit the spread or impact
of the incident, for example by isolating affected systems. Eradication involves removing the cause,
such as deleting malware or closing a vulnerability. Recovery focuses on restoring systems and
services to normal operation, often from clean backups. Finally, the lessons learned phase evaluates
what happened, what worked, what failed, and how to improve for the future.

Effective incident response reduces damage, shortens downtime, and helps prevent similar
incidents from happening again.
MODULE 5: RISK MANAGEMENT, DISASTER RECOVERY, AND ACCESS CONTROL IN DEPTH
Unit 5.1: Risk Management
Risk management in cyber security is the process of identifying, assessing, and prioritizing risks to
information and systems, and then taking steps to reduce those risks to an acceptable level. A risk
is typically described as the likelihood that a particular threat will exploit a vulnerability and cause
harm.

The risk management process usually begins with identifying assets, such as data, hardware,
software, and people, and determining their value to the organisation. Next, potential threats are
considered, such as hackers, insider misuse, malware, natural disasters, or human error.
Vulnerabilities in systems, processes, or physical security that could be exploited by these threats
are then identified. Once threats and vulnerabilities are known, the organisation evaluates the
likelihood and potential impact of different risk scenarios.
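
A simple way to make this evaluation concrete is a likelihood-times-impact score, sketched below in Python (the scenarios and ratings are invented for illustration):

    # A minimal sketch: rank risk scenarios by likelihood x impact (each rated 1-5).
    scenarios = [
        ("Phishing steals staff credentials",      4, 4),
        ("Flood damages the server room",          1, 5),
        ("Student guesses a weak portal password", 3, 3),
    ]

    ranked = sorted(scenarios, key=lambda s: s[1] * s[2], reverse=True)
    for name, likelihood, impact in ranked:
        print(f"score={likelihood * impact:2d}  {name}")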

After analysis, the organisation decides how to treat each risk. Options include reducing the risk
by implementing controls (for example, installing a firewall), transferring the risk (for example,
buying cyber insurance), avoiding the risk (for example, not offering a certain online service), or
accepting the risk if it is low and controls would be too expensive. Risk management is not a one-time activity; it must be reviewed regularly as technology, business operations, and the threat landscape change.

Unit 5.2: Disaster Recovery


Disaster recovery refers to the strategies and processes used to restore IT systems and data after a
major disruptive event. A disaster may be a natural event like a flood or fire, or a human-caused
event like a large-scale cyber attack, hardware failure, or accidental data deletion.

A disaster recovery plan typically specifies which systems and data are critical, how often backups
are made, where those backups are stored, and how they will be restored. It may include the use
of off-site backups, cloud storage, secondary data centres, or mirrored systems in different
locations. The plan also outlines responsibilities, contact lists, and step-by-step recovery
procedures.
Two important concepts in disaster recovery are the Recovery Time Objective (RTO), which is
how quickly a system must be restored after a disaster, and the Recovery Point Objective (RPO),
which is the maximum acceptable amount of data loss measured in time. For example, an RPO of
one hour means the organisation can tolerate losing at most one hour of data. Testing the disaster
recovery plan through drills and simulations is crucial so that, during a real crisis, staff know what
to do and systems can be restored as planned.
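
These two targets can be checked with simple arithmetic, as in this Python sketch (all numbers are hypothetical):

    # A minimal sketch: worst-case data loss is one backup interval;
    # compare schedule and drill results against the RPO and RTO targets.
    rpo_minutes = 60                 # maximum tolerable data loss
    rto_minutes = 240                # maximum tolerable downtime
    backup_interval_minutes = 30     # how often backups run
    measured_restore_minutes = 180   # restore time observed in a recovery drill

    print("RPO met:", backup_interval_minutes <= rpo_minutes)
    print("RTO met:", measured_restore_minutes <= rto_minutes)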

Unit 5.3: Access Control in Organisational Context


While access control was introduced earlier as a general concept, in the context of risk management
and disaster recovery it is important to emphasise how organisations design their access control
structures. Proper access control reduces the risk of both accidental and deliberate misuse of
systems.

Organisations define user roles and responsibilities, then assign permissions based on those roles.
For example, a data entry clerk may only enter information, while a supervisor may edit and
approve records. During a disaster or incident, special access control rules may apply. Some
accounts may be disabled to prevent further damage, while certain administrators may get
emergency privileges to repair systems.

Audit logs that record who accessed what and when are also part of access control in practice.
These logs are important both for detecting suspicious behaviour and for investigating incidents
after they occur. Overall, carefully planned and consistently enforced access control is one of the
most powerful tools for reducing security risk.
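
An audit trail can be as simple as an append-only record of who did what and when; a minimal Python sketch using the standard logging module (the field names and values are ours):

    # A minimal sketch of audit logging for later investigation.
    import logging

    logging.basicConfig(filename="audit.log", level=logging.INFO,
                        format="%(asctime)s %(message)s")
    audit = logging.getLogger("audit")

    audit.info("user=clerk01 action=ENTER_RECORD resource=patients/42 result=OK")
    audit.info("user=clerk01 action=APPROVE_RECORD resource=patients/42 result=DENIED")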

MODULE 6: BASIC CRYPTOGRAPHY AND SOFTWARE APPLICATION VULNERABILITIES

Unit 6.1: Basic Cryptography


Cryptography is the science and art of protecting information by transforming it into a form that
is unreadable to anyone who does not possess the correct key. It underpins many modern security
services, including confidentiality, integrity, authentication, and non-repudiation.

In traditional or symmetric cryptography, the same key is used for both encryption (turning
plaintext into ciphertext) and decryption (turning ciphertext back into readable form). Both sender
and receiver must share this secret key and keep it safe. A widely used symmetric algorithm is AES (the Advanced Encryption Standard). In public-key or asymmetric cryptography, there are two related keys: a public key that can
be shared with anyone and a private key that must remain secret. Data encrypted with the public
key can only be decrypted with the corresponding private key. This enables secure communication
without needing to share a secret key in advance and also allows the creation of digital signatures.
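
A minimal Python sketch of public-key encryption follows (third-party cryptography package; RSA with OAEP padding is one common choice, and the sample message is ours):

    # A minimal sketch: anyone may encrypt with the public key;
    # only the private-key holder can decrypt.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    ciphertext = public_key.encrypt(b"exam questions are in vault 3", oaep)
    print(private_key.decrypt(ciphertext, oaep))   # recovers the plaintext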

Hash functions are another important cryptographic tool. A hash function takes input data and
produces a fixed-size output called a hash or digest. A small change in the input produces a very
different hash, making hashes useful for detecting changes to data. Since hash functions are one-way, it is not feasible to recover the original data from the hash. Cryptography is used in many
applications such as securing websites (HTTPS), protecting Wi-Fi networks, securing emails,
encrypting files on a disk, and verifying software updates.

Unit 6.2: Software Application Vulnerabilities


Software application vulnerabilities are weaknesses or flaws in programs that can be exploited by
attackers to cause unintended behaviour. These vulnerabilities may arise from programming errors,
poor design, misconfigurations, or failure to validate user input.

Common types of software vulnerabilities include buffer overflows, where a program writes more
data into a memory buffer than it can hold, causing data corruption or allowing arbitrary code
execution; SQL injection, where an attacker inserts malicious commands into a database query
through user input; and cross-site scripting (XSS), where malicious scripts are injected into web
pages viewed by other users. Other issues involve weak authentication, poor session management,
or failure to encrypt sensitive data.
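
SQL injection and its standard fix can be shown in a few lines of Python with the built-in sqlite3 module (the table and data are invented for illustration):

    # A minimal sketch: string-built queries are injectable;
    # parameterized queries treat input purely as data.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('aisha', 's3cret')")

    user_input = "x' OR '1'='1"   # a classic injection payload

    # VULNERABLE: the payload rewrites the query and returns every row.
    unsafe = f"SELECT * FROM users WHERE name = '{user_input}'"
    print(conn.execute(unsafe).fetchall())    # leaks all secrets

    # SAFE: the placeholder keeps the input out of the SQL itself.
    safe = "SELECT * FROM users WHERE name = ?"
    print(conn.execute(safe, (user_input,)).fetchall())  # returns nothing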

Attackers scan for these vulnerabilities and use them to gain unauthorized access, escalate
privileges, steal data, or disrupt services. To reduce vulnerabilities, developers should follow secure
coding practices, validate all inputs, apply the principle of least privilege, and keep dependencies
and frameworks updated. Security testing, code reviews, and the use of automated scanning tools
also help detect and fix vulnerabilities before software is deployed or while it is in use.
MODULE 7: EVOLUTION OF CYBER-ATTACKS

Unit 7.1: Evolution of Cyber-Attacks


Cyber-attacks have evolved significantly over time, both in complexity and in motivation. In the
early days of computing and networking, many attacks were carried out by hobbyists or “hackers” motivated by curiosity, reputation, or the challenge itself. Early malware such as simple viruses and worms often aimed merely to replicate or cause mischief, like displaying messages or slowing down systems.

As the internet expanded and became central to business and everyday life, cyber-attacks became
more financially motivated. Attackers developed sophisticated methods to steal credit card details,
online banking credentials, and personal data that could be sold on underground markets. Phishing
attacks, where fake emails or websites trick users into revealing passwords or other sensitive
information, became widespread. Ransomware emerged, where attackers encrypt a victim’s files
and demand payment to restore access.

Over time, attacks have also become more targeted and professional. Organised cybercrime
groups, sometimes operating like businesses, conduct large-scale operations. Nation-states use
cyber-attacks for espionage, political influence, or even as part of military operations, targeting
critical infrastructure such as power grids, transportation systems, and communication networks.
Advanced Persistent Threats (APTs) describe long-term, stealthy campaigns in which attackers
gain and maintain access to a network, slowly stealing data without being detected.

At the same time, the attack surface has expanded with the rise of smartphones, cloud computing,
and the Internet of Things (IoT), where everyday objects like cameras, sensors, and home
appliances are connected to the internet. Each new technology brings new opportunities but also
new vulnerabilities. Understanding the evolution of cyber-attacks helps security professionals
anticipate emerging threats and design more robust defences.

7.1.1 Early Computer Era and Hobbyist Attacks


In the early days of computing and networking, cyber-attacks were relatively simple and were often
carried out by hobbyists or enthusiasts who were curious about how systems worked. Many of
these early attackers, commonly called “hackers,” were motivated more by curiosity, challenge and
the desire for recognition than by money. During this period, attacks usually targeted standalone
systems or small networks, and security was not a major concern because computers were rare and
not deeply integrated into everyday life.

Early malicious programs, later grouped under the general name “malware,” began to appear.
Malware is a broad term that refers to any software created to harm, misuse or disrupt a computer
system. Two important early types of malware were viruses and worms. A virus is a program that
attaches itself to legitimate files or programs and spreads when those files are copied or shared. A
worm is a self-contained program that can spread on its own, often over a network, without
attaching to other files. Many early viruses and worms were designed mainly to display messages,
play pranks or slow down systems rather than to cause serious damage, but they revealed how
easily software could be abused.

7.1.2 Growth of the Internet and Financially Motivated Attacks


As the internet became widely available and more organisations and individuals connected their
computers, the impact of cyber-attacks increased dramatically. Online banking, e-commerce and
electronic payment systems introduced large volumes of financial transactions that attracted
criminals. At this stage, the main motivation for many attackers shifted from curiosity to financial
gain.

Attackers developed more sophisticated malware and techniques to steal money and valuable data.
One major technique that emerged is phishing. Phishing is a form of social engineering in which
attackers send fake emails or create fake websites that look like legitimate ones, such as online
banking or social media sites. The goal is to trick users into entering their passwords, credit card
numbers or other sensitive information. Once the victim provides the information, the attacker
uses it to steal money or hijack accounts.

Another important development was the rise of ransomware. Ransomware is a type of malware
that encrypts a victim’s files so they can no longer be accessed. The attacker then demands a
ransom, usually in digital currency, in exchange for the decryption key. This kind of attack can
affect individuals, businesses, hospitals and even government agencies, causing serious disruption.
At the same time, stolen personal data and login credentials became commodities that could be
sold on underground markets on the dark web. Cybercrime moved from being an individual
activity to a large illegal business.
7.1.3 Organised Cybercrime, Botnets and Nation-State Attacks
Over time, cyber-attacks became more organised and professional. Groups of attackers formed
criminal organisations that operate like businesses, with clear roles such as developers, testers,
money launderers and brokers who sell stolen data. These groups may run large campaigns against
banks, online shops, cryptocurrency users or any organisation that holds valuable information.

One important tool used by organised cybercriminals is the botnet. A botnet is a network of many
infected computers, often called “bots” or “zombies,” that are remotely controlled by an attacker
without the owners’ knowledge. Botnets can be used to send large volumes of spam, distribute
more malware or carry out Distributed Denial of Service (DDoS) attacks. In a DDoS attack, the
attacker uses many infected machines at once to flood a server or network with traffic, making it
unavailable to legitimate users.

At the same time, nation-states began to develop their own cyber capabilities. Governments use
cyber-attacks for espionage, stealing sensitive information from other countries, companies or
organisations. They may also attempt to influence political processes, disrupt communication
networks or target critical infrastructure such as power grids, transport systems and water supply.
Such operations are often highly secret and technically advanced. The term Advanced Persistent
Threat (APT) is used to describe long-term, stealthy campaigns in which skilled attackers gain
access to a network and remain hidden for months or years, quietly stealing data or preparing the
ground for future disruption. APTs are often associated with well-resourced groups, sometimes
linked to states or powerful criminal organisations.

7.1.4 Expansion of the Attack Surface: Mobile, Cloud and IoT


As technology continued to advance, the number and variety of devices connected to the internet
increased. Smartphones and tablets became common, and many people began to use them for
banking, messaging, social media and work-related tasks. Mobile devices store personal data and
are often always connected, creating many new opportunities for attackers. Mobile malware,
malicious apps and attacks through messaging platforms appeared, targeting both users and
organisations.

Cloud computing also changed the way data and services are delivered. Instead of running all
applications on local servers, many organisations now store data and run services on servers in the
“cloud,” operated by third-party providers. While cloud platforms offer many benefits, such as
flexibility and cost savings, they also introduce new security challenges. Misconfigured cloud
storage, weak access controls or vulnerabilities in cloud-based applications can expose large
amounts of data. Attackers have adapted by looking for poorly secured cloud resources and by
trying to compromise accounts that control cloud services.

The Internet of Things (IoT) has further expanded the attack surface. IoT refers to everyday
objects that have computing and communication capabilities, such as smart cameras, home
assistants, industrial sensors, medical devices and even connected cars. Many of these devices have
limited processing power, run simple software and may lack strong security features. They are
often deployed in large numbers and sometimes left with default passwords or outdated software.
Attackers have exploited these weaknesses to build large botnets from IoT devices or to use them
as entry points into larger networks.

7.1.5 Current Trends and Future Directions


Modern cyber-attacks are increasingly automated, fast and difficult to detect. Attackers make use
of tools that scan the internet for vulnerable systems, exploit newly discovered flaws called zero-day vulnerabilities (flaws not yet known to or patched by the software vendor) and
hide their activities using encryption and other techniques. Social engineering remains a powerful
method, as attackers know that tricking people is often easier than breaking strong technical
controls.

There is also a growing use of artificial intelligence and machine learning on both sides. Security
professionals use these technologies to detect unusual patterns and respond quickly to threats,
while attackers experiment with them to generate more convincing phishing messages, evade
detection or automatically search for weaknesses. Critical sectors such as health care, finance,
education and government continue to face complex threats, and the consequences of successful
attacks can be severe, ranging from financial loss to threats to human life and national security.

Understanding how cyber-attacks have evolved from simple experiments by hobbyists to complex,
organised and state-sponsored operations helps students appreciate why cybersecurity must also
keep evolving. By studying past and current trends, security professionals can better predict future
attack methods, design stronger defences and develop policies, technologies and practices that
reduce risk in an increasingly connected world.