Server and Systems Administration - 05
Server and Systems Administration

Code: CSC3504

Instructor : KWIZERA Ildephonse


Managing Folder and File Security in
Active Directory
• Managing folder and file security in Active Directory (AD) involves
controlling access to shared resources, ensuring only authorized users and
groups have access to sensitive data, and applying appropriate permissions.
1. Use of NTFS Permissions
• NTFS (New Technology File System) permissions allow you to control who
can access files and folders on your Windows file system. These permissions
can be set on folders and files and are essential for securing them.
Types of NTFS Permissions:
• Read: Users can view the contents of files and folders.
• Write: Users can modify files and folders.
• Modify: Users can read, write, and delete files and folders.
• Full Control: Users have complete control over files and folders, including changing permissions.
• Setting NTFS Permissions:
1. Right-click the folder or file and select Properties.
2. Go to the Security tab.
3. Click Edit to add or modify permissions for users or groups.
4. Select the user/group, then assign the appropriate permissions.
2. Implementing Active Directory Groups

• Active Directory groups are a powerful tool for managing access control.
Instead of assigning permissions to individual users, you can assign them to
groups, and then assign permissions to the group. This simplifies
management, especially in large environments.
Types of Active Directory Groups:
• Domain Local Groups: Best for assigning permissions to resources within the same domain.
• Global Groups: Typically used for organizing users based on common attributes (e.g.,
departments).
• Universal Groups: Used for assigning permissions across multiple domains within a forest.
• Group Memberships:
• Create groups based on roles or departments (e.g., “HR,” “Finance”).
• Assign users to these groups based on their roles.
• Apply folder and file permissions to these groups, rather than to individual users.
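The group-based model above can be sketched as a small lookup: permissions attach to groups, and a user's effective access is resolved through membership. The group names, share paths, and permission levels below are hypothetical examples, not real AD objects.

```python
# Illustrative sketch of group-based access control: permissions are
# assigned to groups, and users gain access only via group membership.
# All names below are hypothetical.
GROUP_PERMISSIONS = {
    "HR": {r"\\server\HRDocs": "Modify"},
    "Finance": {r"\\server\Finance": "Read"},
}

USER_GROUPS = {
    "alice": ["HR"],
    "bob": ["Finance"],
}

def effective_permission(user, share):
    """Resolve a user's permission on a share through group membership."""
    for group in USER_GROUPS.get(user, []):
        perm = GROUP_PERMISSIONS.get(group, {}).get(share)
        if perm is not None:
            return perm
    return None  # no group grants access

print(effective_permission("alice", r"\\server\HRDocs"))  # Modify
```

Adding a user to a group (rather than editing per-file ACLs) is the only step needed to grant or revoke access, which is the point of the group-based approach.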
3. Share Permissions for Network Shares

• If the folders are shared across the network, you'll also need to configure share
permissions. Share permissions apply to files accessed over the network, while
NTFS permissions apply to both local and network access.
• Types of Share Permissions:
• Read: Users can only view the files.
• Change: Users can modify files and folders.
• Full Control: Users can modify, delete, and change permissions on the shared folder.
4. Delegating Administrative Control
Active Directory allows delegation of specific administrative tasks without granting
full control over the entire AD infrastructure.
• Delegation of Folder and File Permissions:
• Use Active Directory Users and Computers (ADUC) to delegate permissions to manage
folders and file shares to specific users or groups.
• Right-click on an organizational unit (OU) or object, and select Delegate Control.
• Follow the wizard to assign specific permissions (such as managing group memberships,
creating/deleting objects, etc.).
5. Audit and Monitor File Access
To ensure security and compliance, you should enable auditing of folder and file access.
• Enabling Auditing:
1. Open Group Policy Management and create a GPO.
2. Go to Computer Configuration > Policies > Windows Settings > Security Settings >
Advanced Audit Policy Configuration.
3. Enable Object Access auditing.
4. Apply the policy to the target machine(s).
5. In the file or folder’s Security tab, click Advanced and go to the Auditing tab to configure
auditing for specific events (e.g., file access, file modification).
6. Using Group Policy for Security Settings
You can apply security settings at the domain level using Group Policy to enforce
restrictions on file access.
• Examples of Group Policy Settings:
• Folder Redirection: Redirect user folders (e.g., Documents, Desktop) to a central location.
• AppLocker: Restrict the types of applications that can run on machines.
• Security Options: Set policies related to file encryption, password complexity, etc.
7. Encrypting Files and Folders
• To add an extra layer of security, you can use Encrypting File System (EFS)
to encrypt files and folders on NTFS volumes.
• Steps for Encrypting Files:
1. Right-click the file or folder and select Properties.
2. Go to the General tab, click Advanced.
3. Check the option Encrypt contents to secure data.
4. Once encrypted, only authorized users will be able to decrypt and access the data.
8. Managing Permissions Using PowerShell
• You can automate and manage security configurations using PowerShell. Below is an example of setting NTFS permissions using PowerShell:
# Grant read access to a group
$folderPath = "C:\SharedFolder"
$group = "Domain\GroupName"
$acl = Get-Acl $folderPath
$permission = "$group", "Read", "Allow"
$accessRule = New-Object System.Security.AccessControl.FileSystemAccessRule $permission
$acl.SetAccessRule($accessRule)
Set-Acl -Path $folderPath -AclObject $acl
Server monitoring
• Server monitoring is essential to ensure that your server infrastructure is
running smoothly, securely, and efficiently. It involves tracking the
performance, health, and availability of servers and services to prevent
downtime, identify issues early, and optimize performance.
Components that can be monitored on a server
1. Hardware Monitoring

Monitoring hardware helps ensure the physical components of the server are functioning properly.
• CPU Usage: Track CPU utilization to ensure it’s not constantly running at high percentages,
which could indicate resource exhaustion or an issue with the system.
• Memory (RAM): Monitor RAM usage to identify memory leaks or heavy memory
consumption by processes that could affect server performance.
• Disk Space: Ensure there is adequate disk space and track disk performance (read/write speed,
I/O) to avoid server crashes.
• Network Traffic: Monitor network throughput and latency to ensure network connectivity and
performance.
• Temperature and Power: Check server temperatures (important for physical hardware) and
power supply health.
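The hardware checks above can be sketched with the Python standard library on a Unix-like host. The threshold values are illustrative assumptions; a production setup would use a monitoring agent rather than an ad-hoc script.

```python
import os
import shutil

def sample_metrics(path="/"):
    """Collect two basic hardware metrics on a Unix-like system."""
    load1, _, _ = os.getloadavg()       # 1-minute CPU load average
    disk = shutil.disk_usage(path)      # total/used/free bytes
    return {
        "cpu_load_1m": load1,
        "disk_used_pct": disk.used / disk.total * 100,
    }

def breaches(metrics, limits):
    """Return the names of metrics that exceed their configured limits."""
    return [name for name, limit in limits.items() if metrics[name] > limit]

# Illustrative thresholds; tune them to the server's actual capacity.
limits = {"cpu_load_1m": 8.0, "disk_used_pct": 90.0}
print(breaches(sample_metrics(), limits))
```

A real deployment would sample these metrics on a schedule and raise an alert when `breaches` returns a non-empty list.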
2. Operating System Monitoring
Monitoring the health of the operating system itself is essential to maintaining
performance.
• System Logs: Monitor event logs (Windows Event Viewer, Linux syslog, etc.)
for errors, warnings, and potential security threats.
• System Load: Check load averages (Linux) or performance counters
(Windows) to determine how well the system is handling the workloads.
• Processes and Services: Ensure that critical processes and services are
running as expected. Set alerts for service crashes or unexpected shutdowns.
3. Application Monitoring
Many servers are running specific applications or services that need constant monitoring.
• Web Servers (Apache, Nginx, IIS): Monitor web server availability, response times,
and error rates (e.g., 404 or 500 errors).
• Database Servers (SQL, MySQL, Oracle): Track database performance (queries per
second, slow queries, connection pool utilization), resource utilization, and replication
status.
• File Servers: Ensure file shares are accessible and monitor file system performance
and usage.
• Email Servers (Exchange, Postfix): Monitor email traffic, server availability, and
error rates (failed sends/receives).
4. Security Monitoring
Security monitoring ensures the server is protected from unauthorized access and threats.
• Firewall Status: Ensure firewall rules are intact and logs are being reviewed for
suspicious activity.
• Intrusion Detection: Monitor for unusual network traffic patterns, failed login
attempts, and other signs of a security breach (using IDS/IPS systems like Snort,
Suricata, or integrated solutions).
• Antivirus/Antimalware Status: Verify that antivirus software is running and up to
date, scanning for threats.
• User Account and Access Auditing: Track login attempts, failed logins, privilege
escalations, and access to sensitive areas of the system.
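Failed-login tracking can be sketched as a simple log scan that flags source addresses with repeated failures. The log lines and threshold below are invented for illustration; real input would come from the Windows Security log or /var/log/auth.log.

```python
from collections import Counter

# Hypothetical log lines for illustration only.
LOG_LINES = [
    "Failed password for root from 203.0.113.7",
    "Failed password for admin from 203.0.113.7",
    "Accepted password for alice from 198.51.100.2",
    "Failed password for root from 203.0.113.7",
]

def suspicious_sources(lines, threshold=3):
    """Flag source IPs with `threshold` or more failed login attempts."""
    failures = Counter(
        line.rsplit(" ", 1)[-1]         # last token is the source IP
        for line in lines
        if line.startswith("Failed")
    )
    return [ip for ip, count in failures.items() if count >= threshold]

print(suspicious_sources(LOG_LINES))  # ['203.0.113.7']
```

Dedicated IDS tools do far more than this, but the core idea of thresholding failure counts per source is the same.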
5. Service and Process Monitoring
Critical background services and processes must be continuously monitored.
• Availability: Monitor essential services like DNS, DHCP, and other network
services for availability.
• Resource Utilization: Track memory and CPU usage of critical background
processes, such as database engines, web servers, or application services.
• Alerting and Recovery: Configure alerts for when services are down or
nearing their resource limits (e.g., high CPU usage, low disk space).
6. Backup and Disaster Recovery Monitoring
Ensure backups are working and complete regularly.
• Backup Status: Monitor whether backups run successfully and are
completed on schedule.
• Backup Integrity: Ensure that backup files are not corrupted and can be
restored when needed.
• Disaster Recovery: Test and monitor the disaster recovery plan to ensure
that data can be restored in case of failure.
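Backup integrity checks often come down to comparing checksums of the original data and the backup copy. A minimal sketch using Python's hashlib:

```python
import hashlib

def file_checksum(path):
    """Compute the SHA-256 digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backup(original, backup):
    """A backup is only useful if it matches the original bit-for-bit."""
    return file_checksum(original) == file_checksum(backup)
```

Commercial backup tools record checksums at backup time so later verification does not need the original file; this sketch simply compares two live copies.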
Tools for Server Monitoring
1. Native Tools
• Windows Performance Monitor: Built into Windows, it allows you to track
CPU, memory, disk, and network usage, as well as specific application-level
performance counters.
• Task Manager: Quick overview of running processes, CPU, and memory
usage.
• Event Viewer: Provides logs related to system, security, and application
events.
2. Third-Party Monitoring Tools
Third-party tools provide more advanced features like remote monitoring, alerting,
and historical data analysis.
• Nagios: A popular open-source monitoring tool that provides monitoring for
servers, applications, and network devices.
• Zabbix: An open-source solution that monitors network performance, server
health, and application availability.
• PRTG Network Monitor: A tool for monitoring the availability and
performance of IT infrastructure, including servers, network devices, and
services.
• SolarWinds Server & Application Monitor: A commercial tool that provides deep
visibility into server performance, including applications, system resources, and
uptime.
• Datadog: A cloud-based monitoring solution that integrates with cloud
environments, servers, and applications.
• New Relic: Provides monitoring for servers, applications, and cloud services,
offering deep performance insights.
• Checkmk: Another open-source monitoring tool that provides detailed monitoring
and alerts for servers, databases, and services.
3. Cloud-Based Monitoring
Cloud providers (like AWS, Azure, and Google Cloud) often provide their own
server monitoring tools:
• AWS CloudWatch: Monitors AWS resources like EC2 instances, disk
performance, and network traffic.
• Azure Monitor: Provides monitoring for Azure-based servers, including
application insights and metrics.
• Google Cloud Operations Suite (formerly Stackdriver): Monitors Google
Cloud resources with integrated logging and alerting.
Server protection
• Server protection, also known as server security, is the process of
safeguarding a server from malicious activity and unauthorized access. It
involves using tools and methods to ensure data privacy, accuracy, and
availability.
Server protection methods
• Encryption: Transforms data into a form that can't be read without the decryption key, even if it's intercepted
• Firewalls: Inspect and filter data packets to block unwanted traffic
• Intrusion detection systems (IDS): Monitor network traffic for signs of unauthorized
activity
• Patch management: Keep systems up to date with the latest security patches
• Privileged access management (PAM): Control and monitor privileged users and accounts
• Strong passwords: Use strong passwords and multi-factor authentication
• Regular backups: Implement regular backups of the server
• Security audits: Conduct regular security audits and vulnerability assessments
Other server protection methods include:
• Using VPNs and private networks
• Limiting superuser/root access
• Disabling unnecessary services
• Using dedicated servers
• Monitoring server logs
• Securing the filesystem
• Implementing ongoing security training
Server data backup
• Server data backup is the process of copying data from a server to a secure
location for recovery if the original data is lost or corrupted. Backups are
important for preserving data, minimizing downtime, and protecting against
cyberthreats.
Why is server data backup important?
• Data preservation: Backups protect important data like business documents,
customer records, and server configurations.
• Data recovery: Backups allow for quick data restoration in the event of a
system failure or data loss.
• Data security: Backups protect against cyberthreats like ransomware and
malware by keeping isolated copies of data off the primary network.
How to perform a server data backup?
1. Create a baseline by running initial backups for both on-premise and cloud
environments.
2. Monitor the backup process to ensure it completes successfully.
3. Store the backup on a different media and in a different location.
What tools can be used for server data backup?
• Acronis VE: Backup software that offers multitenant self-service backup and recovery
• FileCloud: A cloud server backup service that automatically backs up data at
regular intervals
• SQL Backup Master: A free SQL backup software that allows users to
store backups in the cloud
1. Backup Types
• Different types of backups are used depending on the specific needs of the
organization or environment. Each type has its benefits and limitations:
• Full Backup: A complete copy of all data being backed up. While it is the
most comprehensive, it also takes the longest to perform and consumes the
most storage space.
• Pros: Easy to restore; contains all data.
• Cons: Time-consuming; requires more storage.
• Incremental Backup: Only backs up data that has changed since the last backup (whether
that was a full or incremental backup). This is efficient in terms of storage space and time.
• Pros: Faster to complete; uses less storage.
• Cons: Restoration is slower because you need to restore the full backup first, followed by each
incremental backup.
• Differential Backup: Similar to incremental backup, but it backs up all data that has
changed since the last full backup, regardless of any incremental backups.
• Pros: Faster restore than incremental backups.
• Cons: Requires more storage than incremental backups; still slower restore times than a full backup.
• Mirror Backup: A real-time backup that mirrors the original data exactly. Any changes
made to the original data (including deletions) are reflected in the backup.
• Pros: Fast access to backup.
• Cons: If data is deleted or corrupted, the backup will reflect these changes, potentially making it an
incomplete recovery option.
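The restore-time trade-off between incremental and differential backups can be illustrated with a small sketch: given a chronological list of backups since (and including) the last full backup, it returns which ones a restore needs.

```python
def restore_chain(backups, strategy):
    """Return the backups needed to restore the newest state.

    `backups` is chronological, starting with the last full backup.
    Incremental: the full backup plus every backup taken since.
    Differential: the full backup plus only the most recent differential,
    because each differential already contains all changes since the full.
    """
    full, *subsequent = backups
    if strategy == "incremental":
        return [full] + subsequent
    if strategy == "differential":
        return [full] + subsequent[-1:]
    raise ValueError("unknown strategy")

print(restore_chain(["full", "mon", "tue", "wed"], "incremental"))
# ['full', 'mon', 'tue', 'wed']
print(restore_chain(["full", "mon", "tue", "wed"], "differential"))
# ['full', 'wed']
```

This is why differential backups restore faster but consume more storage per run: each one repeats everything since the full backup.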
2. Backup Locations
Choosing where to store your backups is crucial for security, availability, and disaster
recovery.
• Onsite Backups: Backups stored locally within your physical environment (e.g., external
hard drives, NAS devices, or tape storage).
• Pros: Fast access and restore times; easy to manage.
• Cons: Vulnerable to physical disasters like fire, theft, or flooding; limited scalability.
• Offsite Backups: Backups stored at a different physical location (e.g., a remote data
center or an offsite server).
• Pros: Protection from local disasters; can be more secure.
• Cons: May incur additional costs; longer restore times due to location.
Nagios Core installation and configuration

• Nagios is an open-source monitoring system that helps organizations monitor their IT infrastructure, including servers, applications, network devices, and services. It is widely used for network monitoring, server health checks, and overall system performance monitoring.
Core Functionality:
• Monitoring: Nagios provides comprehensive monitoring capabilities for various types of IT
systems such as:
• Hosts (servers, routers, etc.)
• Services (HTTP, DNS, SMTP, etc.)
• Network devices (switches, firewalls, etc.)
• Alerting: When an issue is detected, Nagios can trigger alerts via email, SMS, or other
notification methods to inform system administrators.
• Plugins: Nagios uses plugins (custom or predefined) to monitor different types of services and
devices. These plugins can be written in various scripting languages (Bash, Python, Perl, etc.).
• Thresholds: Nagios allows you to define thresholds for various performance metrics (e.g.,
CPU usage, disk space, response time) and triggers alarms if the thresholds are breached.
Architecture of Nagios
• The architecture of Nagios is based on a server-client model.
• The Nagios server usually runs on one host, while the plugins run on the
remote hosts that are to be monitored.
• The plugins collect useful data and send it to the process scheduler,
which displays the information in the graphical user interface (GUI).
• Following are the three main components in the architecture of Nagios
application:
1. Scheduler
2. GUI
3. Plugin.
• Scheduler: The scheduler is the server part of the Nagios system. It runs the
plugins at regular intervals and performs actions according to their results.
• GUI: The user interface of the Nagios system, displayed on web pages
generated by CGI. It can show status indicators (green or red buttons), graphs,
sounds, etc.
• A green button turns red on the GUI when a plugin returns an error or
warning.
• Plugins: Plugins are user-configurable components of the Nagios system.
They check services and return the results to the Nagios server.
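A plugin is simply a program that prints one line of status and exits with a conventional code (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN). A minimal disk-space plugin might look like this Python sketch; the path and thresholds are illustrative.

```python
import shutil

# Nagios plugin exit-code convention.
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

def check_disk(path="/", warn=80.0, crit=90.0):
    """Return (exit_code, message) following the Nagios plugin convention."""
    try:
        usage = shutil.disk_usage(path)
    except OSError as exc:
        return UNKNOWN, f"DISK UNKNOWN - {exc}"
    used_pct = usage.used / usage.total * 100
    if used_pct >= crit:
        return CRITICAL, f"DISK CRITICAL - {used_pct:.1f}% used"
    if used_pct >= warn:
        return WARNING, f"DISK WARNING - {used_pct:.1f}% used"
    return OK, f"DISK OK - {used_pct:.1f}% used"

# A real plugin would print `message` and call sys.exit(code) so that
# Nagios can read the first output line and the exit status.
code, message = check_disk()
print(message)
```

Nagios itself only sees the exit code and the printed line, which is why plugins can be written in any language.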
Procedure
• Install the prerequisites:
• [user@nagios]# yum install -y httpd php php-cli gcc glibc glibc-common gd gd-devel net-snmp
openssl openssl-devel wget unzip
• Open port 80 for httpd:
• [user@nagios]# firewall-cmd --zone=public --add-port=80/tcp
• [user@nagios]# firewall-cmd --zone=public --add-port=80/tcp --permanent
• Create a user and group for Nagios Core:
▪ [user@nagios]# useradd nagios
▪ [user@nagios]# passwd nagios
▪ [user@nagios]# groupadd nagcmd
▪ [user@nagios]# usermod -a -G nagcmd nagios
▪ [user@nagios]# usermod -a -G nagcmd apache
• Download the latest versions of Nagios Core and the Nagios Plug-ins (see the Nagios downloads page for the current URLs):
• [user@nagios]# wget --inet4-only [Link]
• [user@nagios]# wget --inet4-only [Link]
• [user@nagios]# tar zxf nagios-4.3.1.tar.gz
• [user@nagios]# tar zxf nagios-plugins-2.2.1.tar.gz
• [user@nagios]# cd nagios-4.3.1
• Run ./configure:
• [user@nagios]# ./configure --with-command-group=nagcmd
• Compile the Nagios Core source code:
• [user@nagios]# make all
• Install Nagios source code:
• [user@nagios]# make install
• [user@nagios]# make install-init
• [user@nagios]# make install-config
• [user@nagios]# make install-commandmode
• [user@nagios]# make install-webconf
• Copy the event handlers and change their ownership:
• [user@nagios]# cp -R contrib/eventhandlers/ /usr/local/nagios/libexec/
• [user@nagios]# chown -R nagios:nagios /usr/local/nagios/libexec/eventhandlers
• Run the pre-flight check:
• [user@nagios]# /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
• Make and install the Nagios Core plug-ins:
• [user@nagios]# cd ../nagios-plugins-2.2.1
• [user@nagios]# ./configure --with-nagios-user=nagios --with-nagios-group=nagios
• [user@nagios]# make
• [user@nagios]# make install
• Create a user for the Nagios Core user interface:
• [user@nagios]$ sudo htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin
• Important
• If adding a user other than nagiosadmin, ensure the /usr/local/nagios/etc/cgi.cfg
file gets updated with the username too.
• Also modify the /usr/local/nagios/etc/objects/contacts.cfg file with the user
name, full name and email address as needed.
Starting the Nagios Core service
• Add Nagios Core as a service and enable it:
• [user@nagios]# chkconfig --add nagios
• [user@nagios]# chkconfig --level 35 nagios on
• Start the Nagios Core daemon and Apache:
• [user@nagios]# systemctl start nagios
• [user@nagios]# systemctl enable httpd
• [user@nagios]# systemctl start httpd
Logging into the Nagios Core server
• With Nagios up and running, log in to the web user interface:
• http://<server IP address>/nagios/
• Nagios Core will prompt for a user name and password.
• Input the login and password of the default Nagios Core user.
NTOP monitoring system
• ntop (short for network top) is an open-source network monitoring and traffic
analysis tool. It provides real-time visibility into network traffic, allowing users
to monitor the flow of data and analyze the behavior of network devices and
applications. ntop is well-suited for network administrators who need to
identify network bottlenecks, traffic patterns, or security issues.
• ntop comes in two main variants:
• ntopng: The next-generation version of ntop, providing a modern web-based user
interface and enhanced features.
• ntop: The older, command-line-based version.
Features of ntopng:
• Real-Time Network Traffic Monitoring: ntopng provides an overview of network traffic
in real-time, showing the sources, destinations, protocols, and types of traffic flowing
through your network. It displays this data in an intuitive web interface.
• Protocol and Flow Analysis: ntopng can identify traffic flows by application or protocol
(HTTP, FTP, DNS, etc.), and it can analyze traffic patterns over time. It supports multiple
flow protocols such as NetFlow, sFlow, IPFIX, and JFlow.
• Traffic Visualization: ntopng presents the data in a graphical and interactive manner. It
generates charts and graphs to display network traffic statistics, including traffic by protocol,
top talkers, and traffic flows. Historical data can be visualized to understand trends and
patterns.
• Traffic Classification: ntopng can classify traffic into categories (e.g., video streaming,
VoIP, web browsing) to help administrators understand what kind of applications are using
the network.
• Alerting and Notifications: ntopng can send alerts based on customizable thresholds,
helping network administrators quickly identify potential problems, like bandwidth
congestion or suspicious traffic patterns.
• Host and Device Monitoring: ntopng tracks the behavior of individual hosts and devices
on the network. It gives detailed information about each host's IP address, interface, traffic
volume, and application usage.
• Security and Anomaly Detection: ntopng includes basic security features such as
anomaly detection, identifying unusual traffic behavior that may indicate attacks (e.g.,
DDoS, port scanning). It also offers integration with tools like Suricata and Zeek for more
advanced network security monitoring.
• Export and Integration: ntopng allows data export to various formats such as CSV or
JSON, and it integrates with external monitoring and management tools. You can also
export traffic flows in NetFlow, sFlow, or IPFIX formats to forward data to other
systems for deeper analysis.
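The "top talkers" view that ntopng provides boils down to aggregating bytes per source across flow records. A toy sketch (the flow records below are invented for illustration, similar in spirit to what NetFlow/sFlow exports contain):

```python
from collections import defaultdict

# Hypothetical (source IP, bytes transferred) flow records.
FLOWS = [
    ("10.0.0.5", 4_000_000),
    ("10.0.0.9", 1_500_000),
    ("10.0.0.5", 2_000_000),
    ("10.0.0.7", 500_000),
]

def top_talkers(flows, n=2):
    """Aggregate bytes per source and return the n heaviest senders."""
    totals = defaultdict(int)
    for src, nbytes in flows:
        totals[src] += nbytes
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

print(top_talkers(FLOWS))
# [('10.0.0.5', 6000000), ('10.0.0.9', 1500000)]
```

ntopng performs this kind of aggregation continuously and per protocol, but the underlying idea is the same grouping-and-ranking step.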
Concept of Virtualization
• Virtualization refers to the creation of virtual versions of physical resources
such as servers, storage devices, or networks. It enables the abstraction of
hardware, allowing multiple virtual systems (VMs) to run on a single physical
machine. This helps in resource optimization, isolation, and management of
various workloads without the need for separate physical infrastructure.
Types of Virtualization
1. Hardware Virtualization (Server Virtualization):

• Involves creating virtual machines (VMs) on a physical server (host machine) that
each run their own operating system (guest OS). The hypervisor manages the
distribution of hardware resources to each VM.
• Example: Running multiple operating systems on a single physical server, such as
running Windows and Linux on the same server.
• Hypervisors: Can be Type 1 (bare-metal, runs directly on hardware) or Type 2
(hosted, runs on top of a host OS).
• Examples: VMware ESXi, Microsoft Hyper-V, KVM (Kernel-based Virtual
Machine), Xen.
2. Network Virtualization:
• This type of virtualization combines hardware and software network
resources into a virtualized network. It abstracts physical networking
hardware, creating virtual networks for different traffic types and
applications, which can improve performance and flexibility.
• Example: Creating virtual LANs (VLANs) or virtual networks for isolated
environments.
3. Storage Virtualization
• Combines multiple physical storage devices into a single virtualized storage
pool, making it easier to manage storage resources. Virtual storage allows
data to be pooled from multiple physical locations and accessed as a unified
resource.
• Example: Using software-defined storage (SDS) solutions to manage
distributed storage resources like SAN (Storage Area Network) or NAS
(Network-Attached Storage)
4. Desktop Virtualization:
• Provides users with virtual desktops that can be accessed remotely. It
involves running desktop operating systems on a server or in the cloud,
allowing users to access their desktop environment from any device.
• Virtual Desktop Infrastructure (VDI), where individual user desktop
environments are hosted on centralized servers.
• Examples: VMware Horizon, Citrix Virtual Apps and Desktops, Microsoft
RDS (Remote Desktop Services).
5. Application Virtualization:
• Separates applications from the underlying operating system so that the
application runs in a virtualized environment and can be accessed from
various devices without needing to install the software directly on each
device.
• Running a software application on a virtual machine while allowing users to
access it remotely.
• Examples: VMware ThinApp, Microsoft App-V, Citrix XenApp.
6. Operating System Virtualization
(Containerization):
• Involves creating lightweight virtual environments (containers) that share the
host OS kernel but are isolated from each other. This is ideal for running
applications in isolated, portable environments without the overhead of full
virtual machines.
• Docker containers that package applications and their dependencies into a
portable container.
• Examples: Docker, Kubernetes, LXC (Linux Containers).
Virtualization Vendors and Platforms
• Several vendors and platforms provide solutions for different types of
virtualization.
1. VMware:
• Platform: VMware offers a variety of virtualization solutions for both
enterprise and individual use.
• Products:
• VMware vSphere (for server virtualization)
• VMware Workstation (for desktop virtualization)
• VMware vSAN (for storage virtualization)
• VMware NSX (for network virtualization)
• VMware Horizon (for desktop and application virtualization)
2. Microsoft:
• Platform: Microsoft provides virtualization solutions primarily for businesses
and enterprises.
• Products:
• Hyper-V (for server virtualization)
• Windows Virtual Desktop (for desktop virtualization)
• Microsoft Azure (cloud platform with extensive virtualized resources like VMs, storage, and
networking)
• Remote Desktop Services (RDS) (for application and desktop virtualization)
3. Citrix:
• Platform: Citrix is a major player in virtualization solutions, with a focus on
providing virtual desktops and applications.
• Products:
• Citrix Hypervisor (formerly XenServer) (for server virtualization)
• Citrix Virtual Apps and Desktops (for desktop and application virtualization)
• Citrix ADC (for application delivery and load balancing)
• Citrix Workspace (for unified endpoint management and virtual environments)
4. Red Hat (and Linux-based solutions):
• Platform: Red Hat is a key provider of open-source virtualization
technologies, especially in the Linux space.
• Products:
• Red Hat Virtualization (for server and desktop virtualization)
• KVM (Kernel-based Virtual Machine) – an open-source virtualization solution
integrated with Linux
• OpenShift (for containerization and Kubernetes orchestration)
5. Oracle:
• Platform: Oracle provides comprehensive virtualization solutions, often for
large enterprises.
• Products:
• Oracle VM (for server virtualization)
• Oracle VirtualBox (for desktop virtualization)
• Oracle Cloud Infrastructure (for virtualized computing resources in the cloud)
6. Nutanix:
• Platform: Nutanix is a provider of hyper-converged infrastructure solutions,
combining virtualization, storage, and computing into a single platform.
• Products:
• Nutanix Acropolis (for server and storage virtualization)
7. Docker and Kubernetes (for Containers and Container Orchestration):
• Platform: These tools are specialized in containerization and container
orchestration.
• Products:
• Docker (for containerization)
• Kubernetes (for orchestrating containers in large-scale environments)
What is VMkernel?
• VMkernel is the operating system kernel used by VMware ESXi, which is a
bare-metal hypervisor. It's responsible for managing system resources such as
CPU, memory, and I/O, and it interacts directly with the hardware of the
physical server. VMkernel also provides the platform for running virtual
machines (VMs), managing VM resources, and handling critical functions like
networking and storage for virtual environments.
The functionalities of VMkernel
• Resource Management: VMkernel manages the physical server’s hardware resources, such
as CPU, memory, and I/O devices, and allocates them to virtual machines (VMs). It ensures
that these resources are used efficiently and isolates VMs from each other so they don't
interfere with each other's operations.
• Virtual Machine Execution: It enables the creation and execution of virtual machines
(VMs). When you run a VM, the VMkernel manages its lifecycle, including starting, running,
pausing, and stopping the VM.
• Hardware Abstraction: VMkernel provides an abstraction layer between the physical
hardware and the VMs running on it. This makes the virtual machines independent of the
underlying hardware, allowing you to move VMs between physical hosts seamlessly (via
features like vMotion).
• Device Drivers: VMkernel includes device drivers to communicate with
hardware components like network adapters, storage controllers, and other I/O
devices. It allows for direct access to hardware resources for the virtual machines,
while also maintaining isolation and security.
• Network and Storage: VMkernel provides network and storage services
required for virtual machines to communicate and access storage. It supports key
protocols like NFS, iSCSI, and Fibre Channel for storage, and it manages
networking components, including virtual switches.
• Security and Isolation: VMkernel plays a role in ensuring that VMs are securely
isolated from one another. It enforces resource policies and helps ensure that one
VM cannot affect the performance or integrity of others on the same physical
server.
V-switching and routing
• vSwitching and routing in VMware ESXi are critical components for
managing networking in virtual environments
1. VMware vSwitch (Virtual Switch)
• A vSwitch is a virtual network switch in VMware ESXi that enables
communication between virtual machines (VMs) and between VMs and the
external network (e.g., the internet, physical network). It functions like a
traditional physical switch, but it operates entirely in the virtualized
environment, managed by ESXi.
Types of vSwitches
• Standard vSwitch: The most common type, available in all versions of
ESXi, allowing VMs to communicate with each other and the outside world.
You can configure NICs, port groups, and VLANs.
• Distributed vSwitch: Available in vSphere environments with vCenter
Server, it provides centralized management across multiple ESXi hosts. It is
used for more advanced networking setups and improves scalability.
1. vSphere Standard Switch (vSS)
• The vSphere Standard Switch (vSS) is the simpler of the two virtual switches.
It is configured on a per-host basis and manages networking for
virtual machines (VMs) on individual ESXi hosts.
Features of vSphere Standard Switch (vSS)
• Host-specific configuration: vSS is configured and managed locally on each ESXi host.
The network configuration is isolated to that particular host, and there is no centralized
management across hosts.
• Port Groups: You create port groups for VMs to connect to the vSwitch. Each port group
can have its own VLAN and other network configurations.
• NIC Teaming: vSS supports NIC teaming, combining multiple physical NICs to provide
redundancy and load balancing.
• Security & Traffic Filtering: You can configure security policies for virtual machines (e.g.,
promiscuous mode, MAC address changes, forged transmits).
• Basic Monitoring: Provides basic network monitoring like packet counters and link status.
• Use Case: Suitable for smaller or single-host environments where
centralized management of the network configuration across multiple
hosts is not required.
• Limitations:
• Does not support centralized management for multiple ESXi hosts.
• Lacks advanced features available in vDS, like network I/O control
and centralized monitoring.
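The NIC-teaming and security-policy features listed above can be applied to a standard vSwitch with `esxcli`. A sketch, assuming a switch named `vSwitch0` with uplinks `vmnic0` and `vmnic1` (all placeholder names):

```shell
# NIC teaming: set two active uplinks and route based on originating port ID
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 \
    --active-uplinks=vmnic0,vmnic1 \
    --load-balancing=portid

# Harden the security policy: reject promiscuous mode,
# MAC address changes, and forged transmits
esxcli network vswitch standard policy security set \
    --vswitch-name=vSwitch0 \
    --allow-promiscuous=false \
    --allow-mac-change=false \
    --allow-forged-transmits=false
```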
2. vSphere Distributed Switch (vDS)
• The vSphere Distributed Switch (vDS) is a more advanced virtual switch
designed for larger, more complex environments. It allows for centralized
management of networking across multiple ESXi hosts, simplifying network
configuration and monitoring in a datacenter.
Features of vSphere Distributed Switch (vDS)

• Centralized Management: vDS is configured and managed from a central location (vCenter
Server). Changes to the vDS are automatically applied across all associated hosts, making it easier
to manage a large number of hosts.
• Port Groups: Similar to vSS, but port groups on a vDS are not tied to a single host and can span
multiple hosts.
• Advanced NIC Teaming and Load Balancing: vDS supports more sophisticated NIC
teaming configurations, with greater flexibility and additional load balancing algorithms.
• Network I/O Control (NIOC): Allows for prioritization of different types of traffic (e.g.,
vMotion, storage, etc.) across the network.
• Traffic Shaping: Provides more granular control over bandwidth allocation for specific traffic
types (e.g., VM traffic, vMotion traffic).
• Flow Monitoring & Advanced Traffic Management: Provides enhanced
network monitoring capabilities and the ability to track network traffic flows across
the entire virtual infrastructure.
• VLAN Tagging & QoS: Supports advanced VLAN tagging and Quality of
Service (QoS) for more fine-grained control of network traffic.
• Use Case: Ideal for large datacenters or environments with multiple hosts, where
centralized management and advanced features are necessary for network
scalability and control.
• Limitations:
• Requires a vCenter Server to manage and configure.
• More complex to set up compared to vSS.
• Requires a higher-tier vSphere license (Enterprise Plus); vDS is not included in the standard vSphere license.
Security Architecture
• In network security architecture, it’s crucial to understand the various
attacks, services, security mechanisms, and how they can be implemented
to protect systems and networks
1. Types of Attacks
▪ Denial of Service (DoS): An attack designed to overwhelm a network or service, making it
unavailable to legitimate users.
▪ Man-in-the-Middle (MitM): An attacker intercepts and potentially alters communications
between two parties without their knowledge.
▪ SQL Injection: Malicious SQL queries are injected into an application’s input fields to execute
arbitrary commands on the backend database.
▪ Cross-Site Scripting (XSS): Attacks targeting web applications where an attacker injects
malicious scripts into the pages viewed by others.
▪ Phishing: Fraudulent attempts to obtain sensitive information (like login credentials) by
pretending to be a trustworthy entity.
▪ Privilege Escalation: An attacker gains higher privileges than those initially granted to exploit
systems or data.
▪ Spoofing: The act of pretending to be another device or user, usually to gain unauthorized access
or confuse systems.
2. Services & Security Mechanisms
• To protect networks from the aforementioned attacks, various services and
security mechanisms are implemented. Some of the key ones include:
• Encryption:
• SSL/TLS (for HTTPS): Encrypts data in transit between a client and server.
• IPsec: A protocol suite for securing Internet Protocol (IP) communications through
encryption and authentication.
• Authentication:
• Multi-Factor Authentication (MFA): Requires more than one form of verification (e.g., password
and fingerprint).
• OAuth/OpenID: Authorization protocols used for third-party services and APIs.
• Access Control:
• Role-Based Access Control (RBAC): Access rights are assigned based on roles (e.g., admin, user).
• Mandatory Access Control (MAC): Uses labels and policies to restrict how resources can be
accessed.
• Intrusion Detection/Prevention Systems (IDS/IPS):
• IDS: Monitors network traffic for suspicious activities.
• IPS: Actively prevents detected malicious activities.
• Firewalls:
• Firewalls are used to monitor and filter network traffic based on predefined security rules. They are
typically deployed at the boundary between internal networks and the internet.
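As a quick illustration of encryption in transit, OpenSSL's `s_client` can be used to inspect the TLS handshake and the certificate a server presents. The hostname `example.com` is a placeholder, not a server referenced in this course:

```shell
# Open a TLS connection and dump the handshake details and certificate chain
openssl s_client -connect example.com:443 -servername example.com </dev/null

# Extract only the certificate subject and validity dates
openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -dates
```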
3. Port Forwarding and NAT
• Network Address Translation (NAT):
• NAT is a technique used in routing to translate private IP addresses to public
IP addresses, typically used for allowing internal devices (on a private
network) to communicate with external resources (the internet) using a
shared public IP address.
Types of NAT:
• Static NAT: A single private IP is mapped to a single public IP.
• Dynamic NAT: A private IP is dynamically mapped to a pool of public IPs.
• PAT (Port Address Translation): Multiple private IP addresses are mapped
to a single public IP address, but with different port numbers.
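On a Linux router, these NAT types can be sketched with `iptables` rules. The outbound interface `eth0`, internal network `10.0.0.0/24`, and public address `203.0.113.5` are illustrative assumptions (203.0.113.0/24 is a documentation range):

```shell
# Static NAT (one-to-one): map internal host 10.0.0.5 to public IP 203.0.113.5
iptables -t nat -A POSTROUTING -s 10.0.0.5 -o eth0 -j SNAT --to-source 203.0.113.5

# PAT (masquerading): share the router's single public IP among all
# internal hosts, distinguishing connections by port number
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE
```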
Port Forwarding:
• Port forwarding is a technique that directs incoming traffic on specific ports to a particular internal
device or service behind a router/firewall.
• Use Case: It’s often used to allow remote access to services such as a web server (HTTP), FTP, or
game server hosted within an internal network, which would otherwise be blocked by NAT or firewall
policies.
• Example: To host a web server (HTTP) on an internal machine behind a router that holds a
single public IP, you would configure the router to forward inbound HTTP traffic on
port 80 to the internal server's private IP address.
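A sketch of this setup on a Linux router using `iptables`, assuming the WAN interface is `eth0` and the internal web server sits at `10.0.0.10` (both illustrative values):

```shell
# Rewrite the destination of inbound HTTP traffic to the internal web server
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
    -j DNAT --to-destination 10.0.0.10:80

# Permit the forwarded traffic through the router's FORWARD chain
iptables -A FORWARD -p tcp -d 10.0.0.10 --dport 80 \
    -m conntrack --ctstate NEW,ESTABLISHED,RELATED -j ACCEPT
```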
4. Firewalls and Their Configurations
• Types of Firewalls:
• Packet Filtering Firewalls:
o How it Works: Inspects packets at the network layer and allows or blocks traffic based on
predefined rules (source IP, destination IP, port, etc.).
o Pros: Simple, efficient, and easy to configure.
o Cons: Limited functionality; doesn’t inspect traffic beyond the network layer.
• Stateful Inspection Firewalls:
o How it Works: Tracks the state of active connections and makes decisions based on the context of the traffic.
This provides more granular control compared to packet filtering.
o Pros: More secure than simple packet filtering.
o Cons: Higher resource consumption compared to packet filtering firewalls.
• Proxy Firewalls:
o How it Works: Acts as an intermediary between the client and the server. It processes traffic and then sends
requests on behalf of clients, offering better control and inspection.
o Pros: High level of security and can provide traffic inspection at the application layer.
o Cons: Performance overhead and potential latency issues.
• Next-Generation Firewalls (NGFW):
o How it Works: Combines traditional firewall features with advanced capabilities such as deep packet
inspection (DPI), intrusion prevention, application awareness, and integrated threat intelligence.
o Pros: Offers robust security, capable of blocking sophisticated threats.
o Cons: Expensive and complex to configure.
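A minimal stateful-inspection ruleset can be sketched with `iptables` and its connection-tracking module. The default-deny INPUT policy with only SSH (port 22) allowed inbound is an illustrative assumption:

```shell
# Default-deny all inbound traffic
iptables -P INPUT DROP

# Always allow loopback traffic
iptables -A INPUT -i lo -j ACCEPT

# Stateful inspection: allow packets belonging to connections
# the host itself initiated (or related to them)
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Allow new inbound SSH connections only
iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -j ACCEPT
```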
Thanks