Computer_Networks__Virtualisation.docx
In the ever-evolving digital landscape, the need for efficient, secure, and scalable
communication across computer networks is paramount. At the heart of global connectivity lies
the Transmission Control Protocol/Internet Protocol (TCP/IP) model, a robust and widely
implemented suite of communication protocols that form the backbone of the Internet. It
provides a standardized framework that governs how data is transmitted and received across
interconnected networks.
The TCP/IP model supports end-to-end data communication and is designed to work reliably
over unreliable and heterogeneous networks. Each of its layers plays a critical role in ensuring
data integrity, proper routing, and efficient delivery. As network infrastructure becomes more
dynamic and complex—driven by trends such as cloud computing, virtualisation, and Internet
of Things (IoT)—the traditional static approach of TCP/IP-based networking is increasingly
being challenged.
To address the growing need for flexibility and control, Software-Defined Networking (SDN)
has emerged as a transformative paradigm. SDN decouples the control plane (which decides
how data is forwarded) from the data plane (which actually forwards the data), enabling
centralised control and programmability of network behavior. This report provides a detailed
analysis of the TCP/IP architecture, including the functions and protocols of each layer, a
comparison with the OSI model, and a critical discussion of SDN’s role and impact on
traditional network design and management.
The TCP/IP model, developed in the 1970s, was designed to provide a set of standards for
interconnecting different computer systems. It consists of four layers:
1. Application Layer
2. Transport Layer
3. Internet Layer
4. Link Layer
Each of these layers abstracts a certain set of responsibilities, forming a modular stack that
promotes interoperability, scalability, and layered troubleshooting.
Application Layer
The Application Layer provides network services to end-users and applications. Unlike the OSI
model, which separates the application, presentation, and session layers, TCP/IP consolidates
all three into this single layer. Common protocols at this layer include HTTP, FTP, SMTP, and DNS.
This layer is responsible for initiating communication, formatting data for transmission, and
handling the interface with end-user processes.
Transport Layer
This layer is responsible for the logical communication between application processes. It
provides services such as segmentation, error checking, flow control, and retransmission of lost
packets. The two most prominent transport layer protocols are TCP (Transmission Control Protocol) and UDP (User Datagram Protocol).
TCP ensures ordered data delivery through sequence numbers and acknowledgements, while
UDP is suitable for real-time applications where speed is prioritised over reliability (e.g.,
streaming, VoIP).
Internet Layer
The Internet Layer is crucial for routing and addressing packets across networks. It defines how
data is encapsulated into packets and routed from the source to the destination host. Protocols include:
IP (Internet Protocol): Provides logical addressing and best-effort delivery of packets between hosts.
ICMP (Internet Control Message Protocol): Provides error messages and operational information.
This layer is responsible for packet forwarding and routing decisions based on IP addresses.
Link Layer
The Link Layer governs communication between nodes on the same physical network. It
includes both the Data Link and Physical Layers of the OSI model. Protocols and technologies operating here include Ethernet, ARP (Address Resolution Protocol), and Wi-Fi (IEEE 802.11).
Responsibilities of this layer include framing, physical addressing (MAC), and managing
access to the transmission medium.
The Open Systems Interconnection (OSI) model is a conceptual framework developed by ISO
that standardises the functions of a communication system into seven layers. These include:
7. Application
6. Presentation
5. Session
4. Transport
3. Network
2. Data Link
1. Physical
While OSI offers finer granularity, TCP/IP is more practical and widely implemented. A comparison of both models is shown below:

OSI Layer(s)                           TCP/IP Layer
Application, Presentation, Session     Application
Transport                              Transport
Network                                Internet
Data Link, Physical                    Link
The OSI model is prescriptive, ideal for understanding, while the TCP/IP model is descriptive,
reflecting actual protocol implementation.
Each layer of the TCP/IP model comes with its own design considerations and challenges:
Flow and Congestion Control: TCP uses sliding-window flow control together with congestion-avoidance algorithms such as slow start and AIMD to avoid overwhelming receivers or the network.
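The sliding-window idea can be sketched in a few lines of Python. This is a deliberately simplified model: no loss or retransmission, one cumulative ACK per step, and the window size is chosen arbitrarily for illustration.

```python
WINDOW = 4                      # assumed window size for illustration
segments = list(range(10))      # sequence numbers to deliver
base = next_seq = 0             # oldest unacked / next to transmit
delivered = []

while base < len(segments):
    # the sender may transmit anything inside [base, base + WINDOW)
    while next_seq < min(base + WINDOW, len(segments)):
        next_seq += 1           # "put segment next_seq on the wire"
    # the receiver cumulatively acknowledges the oldest outstanding
    # segment, so the window slides forward by one
    delivered.append(segments[base])
    base += 1

print(delivered)  # segments arrive in order: [0, 1, ..., 9]
```

Real TCP additionally shrinks and grows the window in response to acknowledgement timing and loss, which is where slow start and AIMD come in.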
Traditional networks configure devices manually, leading to scalability and consistency issues.
SDN centralises control in a software-based controller that communicates with hardware
through southbound APIs like OpenFlow.
Application: SDN exposes network state and policy to applications through the controller, allowing forwarding behaviour to adapt to application demands.
Transport: SDN enables real-time traffic engineering, allowing the controller to optimise TCP and UDP performance. It can dynamically adjust routes to meet Quality of Service (QoS) requirements and reduce congestion.
Link: At the data link layer, SDN uses Open vSwitch, VLANs, and VXLANs to virtualise switching. This allows administrators to manage networks abstractly, without being limited by physical topology.
SDN introduces Northbound APIs for applications to communicate with the controller, and
Southbound APIs like OpenFlow to communicate with the physical devices.
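Conceptually, a southbound protocol such as OpenFlow installs match-action rules in switches. The controller's view can be sketched as a priority-ordered table; the field names, addresses, and actions below are illustrative, not the OpenFlow wire format.

```python
# Priority-ordered flow table: (match fields, action).
flow_table = [
    ({"dst_port": 80}, "forward:web-vlan"),
    ({"src_ip": "10.0.0.5"}, "drop"),       # e.g. a quarantined host
    ({}, "forward:default"),                # table-miss entry matches anything
]

def apply_flows(packet: dict) -> str:
    """Return the action of the first rule whose fields all match."""
    for match, action in flow_table:
        if all(packet.get(field) == value for field, value in match.items()):
            return action
    return "drop"  # no table-miss entry installed

print(apply_flows({"src_ip": "10.0.0.9", "dst_port": 80}))  # forward:web-vlan
print(apply_flows({"src_ip": "10.0.0.5", "dst_port": 22}))  # drop
```

The key point is that the decision logic lives in the controller, which pushes entries like these down to every switch, rather than each device computing forwarding state on its own.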
1. Simplified Management
A centralised controller lets administrators apply configuration and policy changes network-wide from one place, instead of configuring each device by hand.
2. Enhanced Scalability
With SDN, scaling a network becomes more seamless. Technologies like VLANs and
VXLANs allow virtualised networks to span large infrastructures, particularly in data
centres and cloud environments, without disrupting existing services or topology.
3. Security
SDN enables dynamic and granular security policies. Centralised controllers can detect
anomalies and deploy Access Control Lists (ACLs) instantly across the network. It
supports traffic isolation, segmentation, and rapid threat containment.
4. Optimised Performance
With a global view of traffic, the controller can steer flows along less congested paths and balance load across links.
5. Network Virtualisation
SDN decouples logical network topology from the physical hardware, allowing multiple isolated virtual networks to share the same infrastructure.
8. Conclusion
The TCP/IP model remains an essential framework in understanding and implementing modern
network communications. Its layered architecture, coupled with well-defined protocols,
supports robust and scalable data exchange across global networks. However, as digital
environments grow increasingly dynamic, the limitations of hardware-centric, distributed
control become evident.
Software-Defined Networking addresses these limitations by introducing centralised, software-driven management that enhances scalability, flexibility, and security. SDN aligns
well with and augments the TCP/IP model by abstracting complexity and introducing
programmability at every layer. As the networking landscape evolves, the integration of SDN
with traditional architectures will be pivotal in shaping future-proof, efficient, and intelligent
networks.
Part B
Task 1: Network Design and Subnetting
Network Diagram
Subnetting Calculation
Lab    PCs    VMs    Other devices    Required IPs
A      4      1      2                7
B      4      1      2                7
C      4      1      2                7
D      4      1      2                7
Each lab thus requires at least 7 usable IP addresses. For scalability, we assume space for 20–30
hosts per lab.
Using Variable Length Subnet Masking (VLSM) allows for efficient allocation: each lab receives a /27 block (32 total IPs, 30 usable), leaving the rest of the address space reserved for:
Inter-router connections
VM guest networks
Future expansion
Each lab will have its PCs assigned sequentially from lower to higher IPs in the subnet range.
The VM will occupy the upper IPs for consistency and easy identification.
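The /27 allocation can be checked with Python's `ipaddress` module. The lab-to-subnet ordering below is an assumption; only the arithmetic matters.

```python
import ipaddress

base = ipaddress.ip_network("192.168.10.0/24")
lab_subnets = list(base.subnets(new_prefix=27))  # eight /27 blocks of 32 IPs

for lab, net in zip("ABCD", lab_subnets):
    hosts = list(net.hosts())                    # 30 usable addresses each
    print(f"Lab {lab}: {net}  hosts {hosts[0]}-{hosts[-1]}")

# Labs A-D consume the first four /27s (128 addresses); the remaining
# four blocks stay free for routing links, VM networks, and expansion.
spare = lab_subnets[4:]
print(f"{len(spare) * 32} addresses unallocated")  # 128
```

Running this confirms the figures quoted later in the report: 30 usable hosts per lab and half of the /24 left untouched.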
a. Scalability
Using /27 subnetting provides room for future devices without reallocating the address plan.
With only 128 of 256 IPs used, we retain 128+ addresses for:
Routing links
b. Modularity
Each lab is self-contained with its own router and switch. This simplifies:
Independent upgrades
c. Performance
e. Security
f. Cost-Effectiveness
Each lab has only essential hardware, optimising budget and energy.
After finalising the design in Task 1, the network was physically constructed and configured
using Cisco Packet Tracer. Each lab was connected as a distinct subnet through its own switch
and router. Routers were interconnected to allow routing of packets between labs, enabling
complete internetworking.
1. Assign IP addresses to all PCs, routers, and switches as per the subnetting plan.
2. Configure each router's LAN and inter-router interfaces and bring them up.
3. Enable RIP on every router and advertise the directly connected networks.
4. Set the default gateway on each PC to its lab router's LAN interface.
5. Configure DNS (optional) and verify layer 3 connectivity using ping and simulation.
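The router-side portion of this setup might look like the following Cisco IOS fragment. The interface name and addresses are assumptions consistent with the /27 plan, not the exact Packet Tracer configuration used.

```
Router0> enable
Router0# configure terminal
Router0(config)# interface GigabitEthernet0/0
Router0(config-if)# ip address 192.168.10.1 255.255.255.224
Router0(config-if)# no shutdown
Router0(config-if)# exit
Router0(config)# router rip
Router0(config-router)# version 2
Router0(config-router)# network 192.168.10.0
Router0(config-router)# end
```

The same pattern is repeated on each lab router with its own /27 interface address, after which RIP exchanges routes automatically.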
Each lab's PCs were successfully able to ping their default gateway (router interface) and each
other.
Using Packet Tracer’s Simulation Mode, a PDU was sent from Lab D (PC12) to Lab A (PC2).
The packet travelled through Router3 → Router2 → Router0, confirming that packet
forwarding and path selection are correct.
Physical and logical connections, protocol configurations, and routing are functioning correctly
across the network.
The network designed and implemented for the Camden House labs was evaluated against core
user requirements, focusing on five key parameters: capacity, speed, fault tolerance, security,
and quality of service (QoS). The results confirm that the network meets foundational
requirements while allowing room for future upgrades.
a. Capacity
Each lab is configured with a /27 subnet mask, allowing up to 30 usable IP addresses per
subnet. This is sufficient for 20–25 physical and virtual devices, while still leaving space for
future additions such as IoT devices, printers, or departmental servers. Furthermore, with only
50% of the 192.168.10.0/24 address space used, additional subnets can be allocated without
requiring major reconfiguration.
b. Speed
Tests conducted in Cisco Packet Tracer indicate that intra-lab communication latency is
consistently below 1ms, mimicking real-world switch behavior in local environments. Inter-lab
routing via RIP also exhibited minimal delays. While simulation tools are limited in bandwidth
emulation, the overall topology supports high-throughput, low-latency communications for
most typical educational lab scenarios.
c. Fault Tolerance
The modular architecture—with each lab having its own switch and router—ensures that a fault
in one segment (e.g., a router failure) does not affect the operation of other labs. Each router
operates independently, and communication paths are isolated, which prevents cascading
failures and increases resilience.
d. Security
While advanced security measures are not implemented in this phase, the use of separate
routers and subnets for each lab inherently supports network segmentation. This allows for the
future integration of access control lists (ACLs) or firewall policies to restrict or monitor inter-
lab communication.
e. Quality of Service (QoS)
Although QoS mechanisms are not simulated in Packet Tracer, the current setup provides the
structural foundation to introduce router-level traffic prioritisation in future phases. For
example, bandwidth shaping or priority queuing could be configured for VM traffic or
administrative systems to ensure service reliability.
While the current network design at Camden House successfully delivers essential connectivity,
reliability, and modularity, there are several areas where the network can be improved to
enhance its efficiency, scalability, and manageability. Below are three targeted
recommendations:
The network currently uses RIP (Routing Information Protocol), a distance-vector routing
protocol known for its simplicity. However, RIP has significant limitations, including a
maximum hop count of 15, slow convergence, and susceptibility to routing loops. For a campus
environment expected to scale with additional labs and services, it is advisable to replace RIP
with OSPF (Open Shortest Path First). OSPF is a link-state protocol that provides faster
convergence, hierarchical network design through areas, and more efficient bandwidth usage. It
also supports route summarisation and can handle more complex topologies without
degradation in performance.
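Migrating a router from RIP to OSPF is a small configuration change. A minimal IOS sketch follows; the process ID, area number, and wildcard mask are illustrative assumptions for one lab's /27.

```
Router0(config)# no router rip
Router0(config)# router ospf 1
Router0(config-router)# network 192.168.10.0 0.0.0.31 area 0
Router0(config-router)# end
```

The wildcard 0.0.0.31 covers exactly one /27 block, and placing all routers in area 0 suits a network of this size; areas only need subdividing as the campus grows.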
Although the labs are physically segmented, logical segmentation using VLANs (Virtual
LANs) would allow for more refined traffic control. For example, PCs used by students,
lecturers, and administrative staff can be placed in separate VLANs, even if they share the same
physical switch. VLANs enhance network security, reduce broadcast traffic, and simplify
policy enforcement. VLAN-aware switches and a trunking protocol (e.g., IEEE 802.1Q) would
be required to support this upgrade.
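On a VLAN-aware Cisco switch, the segmentation described above might be sketched as follows. The VLAN IDs, names, and port assignments are assumptions for illustration.

```
Switch(config)# vlan 10
Switch(config-vlan)# name STUDENTS
Switch(config-vlan)# vlan 20
Switch(config-vlan)# name STAFF
Switch(config-vlan)# exit
Switch(config)# interface FastEthernet0/1
Switch(config-if)# switchport mode access
Switch(config-if)# switchport access vlan 10
Switch(config-if)# exit
Switch(config)# interface GigabitEthernet0/1
Switch(config-if)# switchport mode trunk
```

The trunk port carries both VLANs to the router using IEEE 802.1Q tagging, so student and staff traffic stays separated even on shared hardware.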
Currently, all routers are manually configured, which is time-intensive and difficult to scale.
Implementing a Software-Defined Networking (SDN) framework using a controller like
OpenDaylight or ONOS would allow centralised control and automation of routing, traffic
shaping, access control lists (ACLs), and quality of service (QoS) rules. This would improve
agility, reduce administrative overhead, and future-proof the network for cloud and
virtualisation needs.
In modern educational and corporate IT environments, virtual machines (VMs) are essential
tools for simulating complex, multi-operating system environments. For this task, a virtual lab
is created where two different operating systems—Ubuntu Linux and Windows 10—are
installed concurrently using a virtualisation platform such as VMware Workstation Player or
Oracle VirtualBox. This setup helps simulate cross-platform networking scenarios, Active
Directory testing, and security configurations without the need for multiple physical machines.
This guide outlines how to install and configure a virtual machine environment capable of
hosting both Windows and Linux systems with internet connectivity.
Virtualisation Platform: VMware Workstation Player (free for personal use) or Oracle
VirtualBox
Both ISO images must be downloaded in advance. A system with at least 8 GB RAM and 60
GB of free storage is recommended to run both VMs efficiently.
Begin by launching VMware Workstation Player or Oracle VirtualBox, both of which are free
virtualisation platforms. Select “Create a New Virtual Machine.” Follow the on-screen wizard
to allocate system resources:
Choose the number of CPU cores (typically 1–2 per VM for testing environments).
Select the ISO file for the operating system you intend to install.
This process creates a bootable virtual hardware instance capable of running an operating
system independently of the host.
For Ubuntu, select the language and keyboard layout, create a user account, and
proceed with the default desktop installation. Ubuntu is known for its ease of setup and
broad compatibility with VMware/VirtualBox.
For Windows 10, follow the setup prompts to partition the disk, enter license details (if
required), and configure initial preferences.
Users whose hardware supports virtualisation extensions (Intel VT-x/AMD-V) can run both OSes simultaneously within one hypervisor for parallel testing; nested virtualisation (running a hypervisor inside a guest) is only needed for more advanced scenarios.
NAT (Network Address Translation) allows the VM to share the host’s internet
connection.
Bridged Adapter lets the VM act as a standalone device on the same network as the
host.
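With VirtualBox, the adapter mode can also be set from the command line rather than the GUI. The VM name and host interface below are placeholders for your own setup.

```shell
# Attach the first adapter to NAT: the VM shares the host's connection.
VBoxManage modifyvm "Ubuntu-VM" --nic1 nat

# Or bridge it so the VM appears as its own device on the host's LAN.
VBoxManage modifyvm "Ubuntu-VM" --nic1 bridged --bridgeadapter1 "eth0"
```

NAT is the simplest way to get a guest online; bridged mode is preferable when the VM must be reachable from other machines, for example when testing DHCP or AD services between guests.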
To simulate concurrent VM usage, run both VMs side-by-side (resources permitting). This
allows:
Configuring a local network between VMs for services such as DNS, DHCP, or AD
domain joining.
Advanced users may also install VirtualBox inside Ubuntu to exercise nested virtualisation and run Windows within Linux.