Cloud Computing Module 3 Notes

The document discusses data center design, cloud deployment models, and cloud security strategies. It covers various types of data centers, including warehouse-scale and modular data centers, and outlines public, private, hybrid, and community cloud models. Additionally, it highlights the importance of data center reliability, cooling systems, and security measures in cloud computing environments.

Module – 3

Data Center design and inter-connection networks: Warehouse-scale data center, modular
data centers
Cloud Deployment models: Public, private and hybrid clouds, examples.
Cloud security: Cloud security defense strategies, Distributed intrusion/anomaly detection

Data Center design (Warehouse):


● In cloud computing, data centers serve as the physical infrastructure that houses the
computing resources, storage, and networking equipment necessary to deliver cloud
services.
● They are the foundation upon which cloud platforms are built. Data centers serve as the
foundation of cloud computing, enabling the delivery of a vast array of services to users
worldwide.
● Data centers provide the hardware and systems that enable on-demand access to
computing power, storage, and applications over the internet.
● A data center is often built with a large number of servers through a huge
interconnection network.
● Recently, there has been significant advancement in data centers. They have grown from
mere server rooms located within an organization to sophisticated computer
installations that house a wide range of computing resources.
● A small data center could have 1,000 servers.
● The larger the data center, the lower the operational cost.
● Microsoft has over 300 data centers, Amazon has over 120 data centers while Google
has over 35 data centers.
● Over 150 data centers exist in India, positioning the country 14th globally.
● With a scale of thousands of servers, concurrent failure, either hardware failure or
software failure, of 1 percent of nodes is common in a data center setup. Many failures
can happen in hardware; for example, CPU failure, disk I/O failure, and network failure.
It is even quite possible that the whole data center does not work in the case of a power
crash. Also, some failures are brought on by software. The service and data should not
be lost in a failure situation. Reliability can be achieved by redundant hardware. The
software must keep multiple copies of data in different locations and keep the data
accessible while facing hardware or software errors.
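As a rough sketch of this software-level redundancy (all node and key names below are hypothetical; real systems use GFS/HDFS-style replication), each object is written to several nodes and reads fall back to any surviving replica:

```python
import random

class ReplicatedStore:
    """Toy replicated store: keep N copies of each object on distinct nodes."""

    def __init__(self, nodes, replicas=3):
        self.nodes = nodes              # plain dicts stand in for storage servers
        self.replicas = replicas

    def put(self, key, value):
        # Write the same object to `replicas` distinct nodes (locations).
        for node in random.sample(self.nodes, self.replicas):
            node[key] = value

    def get(self, key):
        # Any surviving replica can serve the read, tolerating failed nodes.
        for node in self.nodes:
            if key in node:
                return node[key]
        raise KeyError(f"all replicas of {key!r} lost")

store = ReplicatedStore([dict() for _ in range(5)])
store.put("user:42", b"profile-data")
store.nodes[0].clear()                  # simulate one node failing
assert store.get("user:42") == b"profile-data"   # data survives the failure
```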
● Data centers can be classified into several types based on their ownership, size, and
function. Common types include Enterprise (Owned by an enterprise), Colocation
(rented space to host data center), Cloud (Virtual Assets on demand), Managed
Services (Third party Provider), and Edge data centers (Close to users offering
minimal latency). Each type serves a different purpose and offers varying levels of
control, scalability, and cost-effectiveness.
● Data center cooling is essential for maintaining optimal operating conditions for IT
equipment and preventing failures due to overheating. It involves managing
temperature and humidity to ensure servers and other components operate efficiently
and reliably, preventing downtime and data loss. Efficient cooling also contributes to
energy savings and operational cost reduction.
● The key components of a typical data center architecture include:
o Servers: Classified into different types based on their physical structure and size,
including rack servers, blade servers, and tower servers.
o Storage Systems: Data centers use various storage technologies such as Storage
Area Networks (SANs), Network Attached Storage (NAS), and Direct Attached
Storage (DAS) to store and manage data
o Networking Equipment: Switches, routers, firewalls, and load balancers provide
efficient data communication and security within the data center and to external
networks
o Power Infrastructure: Uninterruptible Power Supply (UPS) systems, backup
generators, and power distribution units (PDUs) deliver a stable and reliable
power supply to the data center equipment
o Cooling Systems: Computer Room Air Conditioning (CRAC) units, liquid cooling
systems, and hot/cold aisle containment maintain optimal temperature and
humidity levels for the hardware to function properly
o Enclosures: Racks and cabinets used in data centers include open frame racks
(two- and four-post racks), enclosed racks, wall-mounted racks, and network
cabinets
o Cabling: Structured cabling systems, including twisted pair cables (for Ethernet,
such as Cat5e, Cat6), fiber optic cables (single-mode and multi-mode), and coaxial
cables
o Security Systems: Physical security measures like biometric access control,
surveillance cameras, and security personnel, as well as cybersecurity solutions
like firewalls, intrusion detection/prevention systems (IDS/IPS), and encryption
protect the data center from unauthorized access and threats
o Management Software: Data Center Infrastructure Management (DCIM)
software helps monitor, manage, and optimize the performance and energy
efficiency of the data center components.
● Data centers are measured for their reliability and availability and are categorized
accordingly into four tiers, with Tier 4 being the highest level of availability and Tier 1
the lowest.
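Since the tiers correspond to availability targets, a quick back-of-envelope calculation shows what each target means in annual downtime. The percentages below are the commonly cited Uptime Institute figures, used here only as illustrative assumptions:

```python
# Convert tier availability percentages into allowed downtime per year.
HOURS_PER_YEAR = 365 * 24

tiers = {"Tier 1": 99.671, "Tier 2": 99.741, "Tier 3": 99.982, "Tier 4": 99.995}
for tier, availability in tiers.items():
    downtime_h = HOURS_PER_YEAR * (1 - availability / 100)
    print(f"{tier}: {availability}% available -> about {downtime_h:.1f} h downtime/year")
# Tier 1: ~28.8 h/year ... Tier 4: ~0.4 h/year
```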

● The three-tier data center network architecture (access, aggregation, and core layers)
is a traditional network topology that has been widely adopted in many older data
centers. Redundancy is a key part of this design: multiple paths from the access layer to
the core help the network achieve high availability and efficient resource allocation.
Modular Data Center in Shipping Containers
● A modern data center is structured as a shipyard of server clusters housed in truck-
towed containers.
● Inside the container, hundreds of blade servers are housed in racks surrounding the
container walls.
● An array of fans forces the heated air generated by the server racks to go through a heat
exchanger, which cools the air for the next rack on a continuous loop.
● Large-scale data centers built with modular containers appear as a big shipping yard of
container trucks. This container-based data center design was motivated by demand for
lower power consumption, higher computer density, and the mobility to relocate data
centers to better locations with lower electricity costs, better cooling water supplies,
and cheaper housing for maintenance engineers.
● Sophisticated cooling technology enables up to 80% reduction in cooling costs
compared with traditional warehouse data centers. Both chilled air circulation and cold
water flow through the heat-exchange pipes to keep the server racks cool and easy to
repair.
● The modular container design includes the network, computer, storage, and cooling gear.
● The container must be designed to be weatherproof and easy to transport.
● The modular data-center approach supports many cloud service applications.
For example, the health care industry will benefit by installing a data center at all clinic
sites.
Cloud Deployment Models
● Clouds constitute the primary outcome of cloud computing.
● Clouds build the infrastructure on top of which services are implemented and
delivered to customers. Such infrastructures can be of different types and
provide useful information about the nature and the services offered by the
cloud.
● A more useful classification is given according to the administrative domain of a
cloud: It identifies the boundaries within which cloud computing services are
implemented, provides hints on the underlying infrastructure adopted to support
such services, and qualifies them.
● Following are the different types of cloud deployment models:
o Public clouds. The cloud is open to the wider public.
o Private clouds. The cloud is implemented within the private premises of
an institution and generally made accessible to the members of the
institution or a subset of them.
o Hybrid clouds. The cloud is a combination of the two previous solutions
and most likely identifies a private cloud that has been augmented with
resources or services hosted in a public cloud.
o Community clouds. The cloud is characterized by a multi-administrative
domain involving different deployment models (public, private, and
hybrid), and it is specifically designed to address the needs of a specific
industry/criteria/context.
Public clouds
● Public clouds constitute the first expression of cloud computing.
● They are a realization of the actual view of cloud computing in which the services
offered are made available to anyone, from anywhere, and at any time through the
Internet.
● From a structural point of view they are a distributed system, most likely composed
of one or more datacenters connected together, on top of which the specific services
offered by the cloud are implemented.
● Any customer can easily sign in with the cloud provider, enter credentials and billing
details, and use the services offered.
● Historically, public clouds were the first class of cloud that were implemented and
offered.
● They offer solutions for minimizing IT infrastructure costs and serve as a viable
option for handling peak loads on the local infrastructure.
● They have become an interesting option for small enterprises, which are able to start
their businesses without large up-front investments by completely relying on public
infrastructure for their IT needs.

● The ability to grow or shrink according to the needs of the related business has made
public cloud attractive. By renting the infrastructure or subscribing to application
services, customers were able to dynamically upsize or downsize their IT according
to the demands of their business.
● Currently, public clouds are used both to completely replace the IT infrastructure of
enterprises and to extend it when it is required.
● A fundamental characteristic of public clouds is multitenancy. A public cloud is
meant to serve a multitude of users, not a single customer.
● A public cloud can offer any kind of service: infrastructure, platform, or applications.
● For example, Amazon EC2 is a public cloud that provides infrastructure-as-a-service;
Google App Engine is a public cloud that provides an application development
platform-as-a-service; and SalesForce.com is a public cloud that provides
software-as-a-service.
● What makes public clouds special is the way they are consumed: They are available
to everyone and are generally architected to support a large quantity of users.
● What characterizes public clouds is their natural ability to scale on demand and
sustain peak loads.
● Public clouds can be composed of geographically dispersed data centers to share the
load of users and better serve them according to their locations. For example,
Amazon Web Services has data centers installed in the United States, Europe,
Singapore, Australia, etc., and allows its customers to choose among regions such as
us-west-1, us-east-1, or eu-west-1.
● Each region is priced differently and is further divided into availability zones,
which map to specific data centers. According to the specific class of services
delivered by the cloud, a different software stack is installed to manage the
infrastructure: virtual machine managers, distributed middleware, or distributed
applications.
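A minimal boto3 sketch of this region/zone model (assuming AWS credentials are already configured; the region names match those in the text):

```python
import boto3

# Each client is pinned to one region (priced and located differently).
s3_us = boto3.client("s3", region_name="us-east-1")
s3_eu = boto3.client("s3", region_name="eu-west-1")

# List the availability zones of a region; AZs map to specific data centers.
ec2 = boto3.client("ec2", region_name="us-east-1")
zones = ec2.describe_availability_zones()["AvailabilityZones"]
print([z["ZoneName"] for z in zones])   # e.g. ['us-east-1a', 'us-east-1b', ...]
```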
Private clouds
● Public clouds are appealing and provide a viable option to cut IT costs and reduce
capital expenses, but they are not applicable in all scenarios. For example, a very
common critique to the use of cloud computing in its canonical implementation is the
loss of control.
● In the case of public clouds, the provider is in control of the infrastructure and,
eventually, of the customers’ core logic and sensitive data. Even though there could
be regulatory procedures in place that guarantee fair management and respect of
the customer’s privacy, this condition can still be perceived as a threat or as an
unacceptable risk that some organizations are not willing to take.
● In particular, institutions such as government and military agencies will not consider
public clouds as an option for processing or storing their sensitive data.
● The risk of a breach in the security infrastructure of the provider could expose such
information to others; this could simply be considered unacceptable.
● In other cases, the loss of control of where your virtual IT infrastructure resides
could open the way to other problematic situations. More precisely, the geographical
location of a datacenter generally determines the regulations that are applied to
management of digital information. As a result, according to the specific location of
data, some sensitive information can be made accessible to government agencies or
even considered outside the law if processed with specific cryptographic techniques.
● Thus, all these aspects make the use of a public computing infrastructure not always
possible.
● The solution lies in private clouds, which are similar to public clouds, but their
resource provisioning model is limited within the boundaries of an organization.
● Private clouds are virtual distributed systems that rely on a private infrastructure
and provide internal users with dynamic provisioning of computing resources.
● Instead of a pay-as-you-go model as in public clouds, there could be other schemes in
place, taking into account the usage of the cloud and proportionally billing the
different departments or sections of an enterprise.
● Private clouds have the advantage of keeping the core business operations in-house
by relying on the existing IT infrastructure and reducing the burden of maintaining it
once the cloud has been set up.
● In this scenario, security concerns are less critical, since sensitive information does
not flow out of the private infrastructure.
● From an architectural point of view, private clouds can be implemented on more
heterogeneous hardware: They generally rely on the existing IT infrastructure
already deployed on the private premises.
● Private clouds can provide in-house solutions for cloud computing, but if compared
to public clouds they exhibit more limited capability to scale elastically on demand.

Hybrid clouds
● Public clouds are large software and hardware infrastructures that have a capability
that is huge enough to serve the needs of multiple users, but they suffer from
security threats and administrative pitfalls.
● Although the option of completely relying on a public virtual infrastructure is
appealing for companies that did not incur IT capital costs and have just started
considering their IT needs (i.e., start-ups), in most cases the private cloud option
prevails because of the existing IT infrastructure.
● Private clouds are the perfect solution when it is necessary to keep the processing of
information within an enterprise’s premises or it is necessary to use the existing
hardware and software infrastructure.
● One of the major drawbacks of private deployments is the inability to scale on
demand and to efficiently address peak loads.
● In this case, it is important to leverage capabilities of public clouds as needed. Hence,
a hybrid solution could be an interesting opportunity for taking advantage of the
best of the private and public worlds. This led to the development and diffusion of
hybrid clouds.
● Hybrid clouds allow enterprises to exploit existing IT infrastructures, maintain
sensitive information within the premises, and naturally grow and shrink by
provisioning external resources and releasing them when they are no longer needed.
● Security concerns are then only limited to the public portion of the cloud that can be
used to perform operations with less stringent constraints but that are still part of
the system workload.
● Hybrid cloud is a heterogeneous distributed system resulting from a private cloud
that integrates additional services or resources from one or more public clouds.
● Whereas the concept of hybrid cloud is general, it mostly applies to IT infrastructure
rather than software services.
Community Clouds
● Community clouds are distributed systems created by integrating the services of
different clouds to address the specific needs of an industry, a community, or a
business sector.
● In community cloud, the infrastructure is shared by several organizations and
supports a specific community that has shared concerns (e.g., mission, security
requirements, policy, and compliance considerations).
● The following diagram describes the concept of community cloud.
● The users of a specific community could fall into a well-identified community,
sharing the same concerns or needs; they can be government bodies, industries, or
even simple users, but all of them focus on the same issues for their interaction with
the cloud.
● Community clouds differ from public clouds, which serve a multitude of users with
different needs. Community clouds are also different from private clouds, where the
services are generally delivered within the institution that owns the cloud.
● From an architectural point of view, a community cloud is most likely implemented
over multiple administrative domains. This means that different organizations such
as government bodies, private enterprises, research organizations, and even public
virtual infrastructure providers contribute with their resources to build the cloud
infrastructure.
● Candidate sectors for community clouds include Media industry, Healthcare
industry, Public Services, Scientific research etc.
PUBLIC CLOUD PLATFORMS: AWS, AZURE and GAE
Five Major Cloud Platforms and Their Service Offerings
Model | IBM | Amazon | Google | Microsoft | Salesforce
IaaS | - | AWS | Google Cloud Platform (GCP) | - | -
PaaS | BlueCloud, WCA, RC2 | - | Google App Engine (GAE) | Windows Azure | Force.com
SaaS | Lotus Live | - | Gmail, Google Docs | .NET service, Dynamic CRM | Online CRM, Gifttag
Virtualization | - | Hardware, OS and Xen | OS level (Application Container) | OS level / Hyper-V | -
Service Offerings | SOA, B2, TSAM, RAD, Web 2.0 | EC2, S3, SQS, SimpleDB | GFS, Chubby, BigTable, MapReduce | Live, SQL, Hotmail | Apex, visualforce, record security
Programming Support | - | AMI | Python | .NET Framework | Apex

Amazon Web Services (AWS):


● Amazon has been a leader in providing public cloud services (http://aws.amazon.com)
since 2006.
● Amazon applies the IaaS model in providing its services.
● Prominent users of AWS include Netflix, Meta, LinkedIn, Disney, BBC, Coursera etc.
● The following figure shows the AWS architecture.
● The following table summarizes the service offerings by AWS in 12 application tracks.

Table - AWS Offerings


Service Area Service Modules and Abbreviated Names

Compute Elastic Compute Cloud (EC2), Lambda, Elastic MapReduce, Auto Scaling
Messaging Simple Queue Service (SQS), Simple Notification Service (SNS)
Storage Simple Storage Service (S3), Elastic Block Storage (EBS), AWS Import/Export
Content Delivery Amazon CloudFront
Monitoring Amazon CloudWatch
Support AWS Premium Support
Database Amazon SimpleDB, Relational Database Service (RDS), DynamoDB
Networking Virtual Private Cloud (VPC), Elastic Load Balancing
Web Traffic Alexa Web Information Service, Alexa Web Sites
E-Commerce Fulfillment Web Service (FWS)
Payments and Billing Flexible Payments Service (FPS), Amazon DevPay
Workforce Amazon Mechanical Turk

● Common AWS Services:


1. Compute:
o EC2 (Elastic Compute Cloud) – Virtual servers for running applications.
o Lambda – Serverless computing for event-driven applications.
2. Storage:
o S3 (Simple Storage Service) – Scalable object storage.
o EBS (Elastic Block Store) – Persistent block storage for EC2.
3. Databases:
o RDS (Relational Database Service) – Managed SQL databases.
o DynamoDB – NoSQL database for key-value storage.
4. Networking, Content Delivery & Security:
o VPC (Virtual Private Cloud) – Isolated cloud environment.
o Route 53 – Scalable domain name system (DNS).
o CloudFront – Content delivery network (CDN) for faster content distribution.
o Elastic Load Balancer (ELB) – Distributes traffic across multiple servers.
5. Analytics & AI/ML:
o Athena – Serverless SQL analytics for S3 data.
o SageMaker – Managed service for building ML models.

● EC2 provides the virtualized platforms that host the VMs where cloud applications can
run. VMs can be used to share computing resources both flexibly and safely.
● S3 (Simple Storage Service) provides the object-oriented storage service for users.
● EBS (Elastic Block Store) provides a block storage interface that can be used to
support traditional applications.
● SQS (Simple Queue Service) ensures a reliable message service between two processes.
The message can be kept reliably even when the receiver processes are not running.
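A short boto3 sketch of this behavior (the queue name is hypothetical; credentials are assumed configured): the producer can enqueue even while the consumer is down, and the message waits reliably in the queue.

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = sqs.create_queue(QueueName="demo-queue")["QueueUrl"]

# Producer side: the message is stored durably in the queue.
sqs.send_message(QueueUrl=queue_url, MessageBody="order:1001")

# Consumer side (possibly started much later): receive, process, delete.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=5)
for msg in resp.get("Messages", []):
    print(msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```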
● ELB (Elastic Load Balancing) automatically distributes incoming application traffic
across multiple Amazon EC2 instances, allowing users to avoid non-operating nodes
and to equalize load on functioning images.
● CloudWatch is a web service that provides monitoring for AWS cloud resources, starting
with Amazon EC2. It provides customers with visibility into resource utilization,
operational performance, and overall demand patterns, including metrics such as CPU
utilization, disk reads and writes, and network traffic.
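For illustration, a hedged boto3 sketch that reads one of the metrics mentioned above (average CPU utilization over the last hour; the instance ID is a placeholder):

```python
import boto3
from datetime import datetime, timedelta, timezone

cw = boto3.client("cloudwatch", region_name="us-east-1")
now = datetime.now(timezone.utc)
stats = cw.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,                      # one datapoint per 5 minutes
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f"{point['Average']:.1f}%")
```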
● Lambda – AWS Lambda is a serverless compute service that allows you to run code
without provisioning or managing servers. It executes code in response to events and
automatically manages the computing resources required.
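The shape of such an event-driven function in Python might look like the following minimal sketch (the event fields are hypothetical; Lambda simply invokes the handler whenever a trigger fires):

```python
import json

def lambda_handler(event, context):
    # `event` carries the trigger payload (e.g. an API Gateway request).
    name = event.get("name", "world")
    # No servers to manage: the platform allocates compute per invocation.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```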
● CloudFront – Amazon CloudFront is a content delivery web service. It acts as a Content
Delivery Network (CDN) that accelerates the delivery of web content to users worldwide
by caching content at edge locations near those users. This results in faster loading
times and improved performance for websites and applications.
● Users can access their objects through SOAP with browsers or other client
programs that support the SOAP standard.
● Amazon provides a more flexible cloud computing platform for developers to build
cloud applications. Small and medium-size companies can put their business on the
Amazon cloud platform.
● Using the AWS platform, they can service large numbers of Internet users and make
profits through those paid services.
● Both auto-scaling and ELB are enabled by CloudWatch which monitors running instances.
● Amazon offers a Relational Database Service (RDS) with a messaging interface. RDS
brings the familiarity of SQL engines like MySQL, PostgreSQL, or SQL Server to the cloud,
offering ACID compliance and complex querying for structured data.
● The Elastic MapReduce capability is equivalent to Hadoop running on the basic EC2
offering.
● AWS Import/Export allows one to ship large volumes of data to and from EC2 by
shipping physical disks; it is well known that this is often the highest bandwidth
connection between geographically distant systems.
● Amazon CloudFront implements a content distribution network.
Microsoft Windows Azure
● In 2010, Microsoft launched the Windows Azure platform to meet the challenges in cloud
computing.
● This platform is built over Microsoft data centers.
● The Azure platform is divided into three major component platforms:
o IaaS (Infrastructure as a Service)
o PaaS (Platform as a Service)
o SaaS (Software as a Service)
● Windows Azure offers a cloud platform built on Windows OS and based on Microsoft
virtualization technology.

● Applications are installed on VMs deployed on the data-center servers.

● Azure manages all servers, storage, and network resources of the data center.

● On top of the infrastructure are the various services for building different cloud
applications. The following figure shows the overall architecture of Microsoft’s cloud platform.

● Cloud-level services provided by the Azure platform are introduced below.


o Live service - Users can visit Microsoft Live applications and apply the data
involved across multiple machines concurrently.
o .NET service - This package supports application development on local hosts and
execution on cloud machines.
o SQL Azure - This function makes it easier for users to access and use the relational
database associated with SQL Server in the cloud.
o SharePoint service - This provides a scalable and manageable platform for users
to develop their special business applications in upgraded web services.
o Dynamic CRM service - This provides software developers a business platform in
managing CRM applications in financing, marketing, and sales and promotions.
● All these cloud services in Azure can interact with traditional Microsoft software
applications, such as Windows Live, Office Live, Exchange Online, SharePoint Online,
and Dynamic CRM Online.
● The Azure platform applies the standard web communication protocols SOAP and REST.
● The Azure service applications allow users to integrate the cloud application with other
platforms or third-party clouds.
● The powerful SDK allows Azure applications to be developed and debugged on the
Windows hosts.
● Prominent users of Azure include Microsoft, LinkedIn, Adobe, BMW etc.
● Popular Azure Services:
1. Compute:
o Azure Virtual Machines (VMs) – Cloud-based virtual servers.
o Azure Kubernetes Service (AKS) – Managed Kubernetes service.
o Azure Functions – Serverless computing platform.
2. Storage:
o Azure Blob Storage – Scalable object storage (see the sketch after this list).
o Azure Files – Managed file shares in the cloud.
3. Databases:
o Azure SQL Database – Fully managed relational database.
o Cosmos DB – NoSQL database with global distribution.
4. Networking & Security:
o Azure Virtual Network (VNet) – Secure network environment.
o Azure Firewall – Cloud-native firewall for security.
5. AI & Machine Learning:
o Azure Machine Learning – End-to-end ML development.
o Azure Cognitive Services – AI-powered APIs for vision, speech, and language.
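As referenced in the Blob Storage entry above, a minimal upload sketch using the azure-storage-blob Python SDK (the connection string, container, and blob names are placeholders):

```python
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client(container="demo-container", blob="notes.txt")

# Upload a small object; overwrite=True replaces any existing blob of that name.
blob.upload_blob(b"hello from azure", overwrite=True)
print(blob.url)
```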

Google App Engine (GAE):


● The Google platform is based on its search engine expertise, but as discussed earlier
with MapReduce, this infrastructure is applicable to many other areas.
● Google has 30+ data centers across the globe and has installed more than 460,000
servers worldwide.
● Data items are stored in text, images, and video and are replicated to tolerate faults or
failures.
● Google offers the Google App Engine (GAE), a PaaS platform supporting various
cloud and web applications.
● GAE enables users to run their applications on a large number of data centers associated
with Google’s search engine operations.
● Prominent users of GAE include Snapchat, Spotify, Udemy etc.
● The following figure shows the major building blocks of the Google cloud platform which
has been used to deliver the cloud services highlighted earlier.

● GFS (Google File System) is used for storing large amounts of data.
● MapReduce is used for application program development.
● Chubby is used for distributed application lock services.
● BigTable offers a storage service for accessing NoSQL Data.
● Third-party application providers can use GAE to build cloud applications for providing
services.
● The applications all run in data centers under tight management by Google engineers.
Inside each data center, there are thousands of servers forming different clusters.
● The building blocks of Google’s cloud computing application include the Google File
System for storing large amounts of data, the MapReduce programming framework for
application developers, Chubby for distributed application lock services, and BigTable as
a storage service for accessing structural or semi-structural data.
● GAE runs the user program on Google’s infrastructure. As it is a platform running
third-party programs, application developers do not need to worry about the
maintenance of servers.
● Functional Modules of GAE include the datastore (data storage services with BigTable),
the application runtime environment (scalable web programming – Python, Java), the
software development kit (SDK) (local application development), the administration
console (easy management of user applications), and the GAE web service
infrastructure (special interfaces).
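A minimal sketch of the application runtime environment idea: on the modern App Engine standard runtime, a developer writes only a plain WSGI app (commonly Flask) while Google manages the servers. All names below are illustrative:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # App Engine routes incoming HTTP requests to this handler.
    return "Hello from App Engine!"

if __name__ == "__main__":
    # Local development only; in production App Engine serves `app` itself.
    app.run(host="127.0.0.1", port=8080)
```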
● Well-known GAE applications include the Google Search Engine, Google Docs, Google
Earth, and Gmail. These applications can support large numbers of users
simultaneously.

Difference between AWS vs Azure vs GCP

Subject | Amazon Web Services | Microsoft Azure | Google Cloud Platform
Launched | 2006 | 2010 | 2008
Storage Domain | S3 | Blob Storage | Cloud Storage
Monitoring | CloudWatch | Azure Application Insights | Stackdriver monitoring services
Block Storage | EBS | Page Blobs | Persistent Disk
Market Share | 33% | 22% | 12%
Firewall | Web Application Firewall | Application Gateway Firewall | Fortigate Next Generation Firewall
Cloud Services (Protection) | DDoS Shield | - | Cloud Armor
Location | 26 Regions | 60+ Regions | 22 Regions
DNS Service | Amazon Route 53 | Azure Traffic Manager | Cloud DNS
Automation | AWS OpsWorks | Azure Automation | Compute Engine Management
Pricing | Per-second pricing with a 60-second minimum | Per-minute basis | Per-minute basis
Security | AWS Security Hub | Azure Security Centre | Cloud Security Command Centre

CLOUD SECURITY AND TRUST MANAGEMENT:


● Lack of trust between service providers and cloud users has hindered the universal
acceptance of cloud computing as a service on demand.
● For cloud services, trust and security become more demanding, because leaving user
applications completely to the cloud providers has faced strong resistance from most
users.
● Cloud platforms become sources of worry to some users for lack of privacy protection,
security assurance, etc.
● Technology can enhance trust, justice, reputation, credit, and assurance in Internet
applications.
● As a virtual environment, the cloud poses new security threats that are more difficult to
contain than traditional client and server configurations.
● Trust is a social problem rather than a pure technical issue. However, the social problem
can be solved with a technical approach.

Cloud Security Defense Strategies:

● A healthy cloud ecosystem is desired, free of cheating, hacking, viruses, spam,
privacy violations, etc.
● The security demands of three cloud service models, IaaS, PaaS, and SaaS vary from each
other.
● The security models / strategies are based on various SLAs between providers and users.
● Basically, three cloud security enforcements or demands are expected:
o On-site security of data centers (Biometric readers, CCTV, motion detection,
and man traps)
o Network security and fault tolerance (external firewalls, intrusion detection
systems (IDSes) and third-party vulnerability assessment)
o Platform security (SSL and data decryption, strict password policies and system
trust certification)
● The following figure shows the mapping of cloud models, where special security
measures are deployed at various cloud service levels.

● A security-aware cloud architecture demands security enforcement. Malware-based
attacks such as network worms, viruses, and DDoS attacks exploit system
vulnerabilities. These attacks compromise system functionality or provide intruders
unauthorized access to critical information.
● To counter the attacks/threats, security defenses are needed to protect all cluster
servers and data centers.
● Following are some of the cloud components that demand special security protection:
o Protection of servers from malicious software attacks such as worms, viruses,
and malware.
o Protection of hypervisors from software-based attacks and vulnerabilities.
o Protection of VMs and VMMs from service disruption and DoS attacks.
o Protection of data and information from theft, corruption, and natural disasters.
o Providing authenticated and authorized access to critical data and services.
● In a cloud environment, newer attacks may result from hypervisor malware, guest
hopping and hijacking, or VM rootkits. Another type of attack is the man-in-the-middle
attack for VM migrations.
● In general, passive attacks steal sensitive data or passwords. Active attacks may
manipulate kernel data structures which will cause major damage to cloud servers.
● In general, virtualization enhances cloud security.
● With virtualization, a single physical machine can be divided or partitioned into multiple
VMs.
● This provides each VM with better security isolation and each partition is protected
from DoS attacks by other partitions.
● VM failures do not propagate to other VMs.
● The entire VM can be represented as a software component and can be regarded as
binary or digital data.
● VMs enable high availability and faster disaster recovery.
● The hypervisor (VMM) provides visibility of the guest OS, with complete guest isolation.
● Live migration of VMs was suggested by many researchers for building distributed
intrusion detection systems (DIDSes).
● Following are some of the protection schemes to secure public clouds/data centers:
o Secure data centers and computer buildings – Choose a hazard-free location,
enforce building safety, avoid windows, keep a buffer zone around the site,
bomb detection, CCTV, earthquake-proof construction, etc.
o Use redundant utilities at multiple sites – Multiple power supplies, alternate
network connections, data consistency, data watermarking, user
authentication, etc.
o Trust delegation and negotiation – Certificates to delegate trust across PKI
domains, and policies to resolve conflicts.
o Worm containment and DDoS defense – Internet worm containment and
distributed defense against DDoS attacks to secure all data centers and cloud
platforms.
o Reputation system for data centers – A reputation system could be built with
P2P technology; one can build a hierarchy of reputation systems from data
centers to distributed file systems.
o Fine-grained file access control – Fine-grained access control at the file or
object level to add security beyond firewalls and IDSes.
o Copyright protection and piracy detection – Piracy prevention through peer
collusion prevention, filtering of poisoned content, etc.
o Privacy protection – Uses double authentication, biometric identification,
disaster recovery, data watermarking, etc.
Distributed intrusion/anomaly detection:
● Data security is the weakest link in all cloud models.
● We need new cloud security standards to apply common API tools to cope with the
network attacks or abuses.
● The IaaS model represented by Amazon is most sensitive to external attacks.
● Security threats may be aimed at VMs, guest OSes, and software running on top of the
cloud.
● Role-based access control (RBAC) implementation helps regulate access to secured
entities.
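A toy RBAC sketch (role and permission names are hypothetical) showing the core idea that permissions attach to roles, and users gain them only through role membership:

```python
ROLE_PERMS = {
    "admin":   {"read", "write", "delete"},
    "analyst": {"read"},
}
USER_ROLES = {"asha": {"admin"}, "kiran": {"analyst"}}

def allowed(user: str, perm: str) -> bool:
    # A user is allowed an action if any of their roles grants it.
    return any(perm in ROLE_PERMS[role] for role in USER_ROLES.get(user, ()))

print(allowed("kiran", "read"))    # True
print(allowed("kiran", "delete"))  # False: analysts cannot delete
```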
● Intrusion Detection Systems (IDSes) attempt to stop these attacks before they take effect.
● Both signature matching and anomaly detection can be implemented on VMs dedicated
to building IDSes.
● Signature-matching IDS technology is more mature, but it requires frequent updates
of the signature databases.
● Signature matching relies on predefined patterns of known threats, while anomaly
detection identifies deviations from established normal behaviour patterns. Network
anomaly detection reveals abnormal traffic patterns, such as unauthorized episodes of
TCP connection sequences, against normal traffic patterns.
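The contrast can be made concrete with a toy sketch: signature matching scans payloads for known-bad byte patterns, while anomaly detection scores current traffic against a learned baseline. The signatures and thresholds below are purely illustrative:

```python
from statistics import mean, stdev

SIGNATURES = [b"/etc/passwd", b"<script>", b"\x90\x90\x90\x90"]  # known-bad patterns

def signature_match(payload: bytes) -> bool:
    # Signature IDS: flag payloads containing any known attack pattern.
    return any(sig in payload for sig in SIGNATURES)

def anomaly_score(history, current):
    # Anomaly IDS: standard score of the current rate vs. normal behaviour.
    return (current - mean(history)) / (stdev(history) or 1.0)

normal_rates = [98, 103, 99, 101, 97, 105, 100]   # TCP connections/s baseline
print(signature_match(b"GET /etc/passwd HTTP/1.1"))   # True: known signature
print(anomaly_score(normal_rates, 950) > 3.0)         # True: abnormal surge
```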
● Distributed IDSes are needed to combat both intrusion and network-anomaly types of
attacks.
Distributed Defense against DDoS Flooding Attacks:
● A DDoS defense system must be designed to cover multiple network domains spanned
by a given cloud platform.
● These network domains cover the edge networks where cloud resources are connected.
● DDoS attacks come with widespread worms.
● The flooding traffic is large enough to crash the victim server by buffer overflow, disk
exhaustion, or connection saturation.
● The following figure shows a flooding attack pattern. Here, the hidden attacker launches
the attack from many zombies toward a victim server behind the bottom router R0. The
flooding traffic essentially flows in the tree pattern shown.
● Successive attack-transit routers along the tree reveal the abnormal surge in traffic. This
DDoS defense system is based on change-point detection by all routers. Based on the
anomaly pattern detected in covered network domains, the scheme detects a DDoS
attack before the victim is overwhelmed.
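A simplified, CUSUM-style sketch of the change-point idea at a single router (the parameters are illustrative, not taken from the original scheme): the router accumulates how far traffic exceeds its normal baseline and raises an alarm once the cumulative surge crosses a threshold.

```python
def cusum_alarm(samples, baseline, slack=5.0, threshold=50.0):
    """Return the index at which a sustained traffic surge is declared."""
    score = 0.0
    for t, x in enumerate(samples):
        # Accumulate only the excess over baseline+slack; never go negative.
        score = max(0.0, score + (x - baseline - slack))
        if score > threshold:
            return t
    return None

traffic = [100, 102, 99, 101, 300, 420, 500, 650]   # packets/s at one router
print(cusum_alarm(traffic, baseline=100))            # alarms at the surge (index 4)
```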
Data and Software Protection Techniques:

● Users desire to have a software environment that provides many useful tools to build
cloud applications over large data sets.
● In addition, users also desire to have security and privacy protection software for using
the cloud.
● The software that provides security and privacy protection should offer the following
features:
o Special APIs for authenticating users.
o Fine-grained access control to protect data integrity and deter hackers.
o Shared data sets protected from malicious alteration, deletion, or copyright
violation.
o Ability to secure the cloud service provider from invading users’ privacy.
o Personal firewalls at user ends.
Data Coloring and Cloud Watermarking:
● With shared files and data sets, privacy, security, and copyright information could be
compromised in a cloud computing environment.
● Users desire to work in a trusted software environment that provides useful tools to
build cloud applications over protected data sets.

● The above diagram illustrates how the system generates special colors for each data object.
● Key Concepts of Data Coloring

1. Data Classification
o Sensitive Data (e.g., PII, financial records) → Red
o Confidential Data (e.g., internal documents) → Yellow
o Public Data (e.g., marketing materials) → Green
2. Metadata Tagging
o Each data object is assigned a tag or label indicating its classification.
o Helps enforce security policies when data moves between cloud services.
3. Policy Enforcement
o Access Control: Prevents unauthorized users from accessing classified data.
o Data Loss Prevention (DLP): Detects and restricts sharing of
sensitive information.
o Audit & Compliance: Ensures adherence to GDPR, HIPAA, and other
regulations.
4. Data Tracking & Monitoring
o Helps detect anomalies and prevent data breaches.
o Supports real-time monitoring of sensitive data movement in cloud
environments.

● Data coloring assigns unique "colors" (identifying information) to data fragments, while
cloud watermarking embeds invisible identifiers to prove ownership and integrity.
● Data coloring technique is used to preserve data integrity and user privacy in cloud.
● Watermarking is mainly used for digital copyright management.
● Data coloring means labeling each data object with a unique color, so differently
colored data objects are distinguishable.
● Cloud storage provides a process for the generation, embedding, and extraction of
watermarks in colored objects.
● This color-matching process can be applied to implement different trust management
events.
● The user identification can also be colored to be matched with the data colors.
● The data coloring technique takes a minimal number of calculations to color or decolor
data objects compared to encryption/decryption techniques.
● Cryptography and coloring can be jointly used in a cloud environment.
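As a toy approximation of color generation and matching (the actual scheme derives watermark-like characteristics from the data; the HMAC tag here is only an illustrative stand-in), each object carries a color derived from its owner, and access is granted only when the user's color matches:

```python
import hashlib
import hmac

def color(owner_key: bytes, object_id: str) -> str:
    # Derive a short, cheap-to-compute "color" tag for the object.
    return hmac.new(owner_key, object_id.encode(), hashlib.sha256).hexdigest()[:8]

owner_key = b"alice-secret"
obj = {"id": "report.pdf", "color": color(owner_key, "report.pdf")}

def may_access(user_key: bytes, obj: dict) -> bool:
    # Color matching: the color generated from the user's key must match.
    return hmac.compare_digest(color(user_key, obj["id"]), obj["color"])

print(may_access(b"alice-secret", obj))  # True: colors match
print(may_access(b"mallory-key", obj))   # False: mismatch blocks access
```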
