UNIT IV
RESOURCE MANAGEMENT AND SECURITY IN CLOUD
Inter Cloud Resource Management – Resource Provisioning Methods – Security Overview – Cloud Security Challenges – Data Security – Application Security – Virtual Machine Security.
Inter Cloud Resource Management

• Resource management is the process of allocating computing, storage, networking and, subsequently, energy resources to a set of applications, in a context that aims to collectively meet the performance goals of infrastructure providers, cloud users and applications.
• Cloud users prefer to concentrate on application performance, while the conceptual framework offers a high-level view of the functional aspects of cloud resource management systems and all their interactions.
• Cloud resource management is challenging because of the scale of modern data centers, the heterogeneity of resource types, the interdependence between those resources, the variability and unpredictability of loads, and the variety of objectives of the different players in the cloud ecosystem.
• Whenever a service is deployed on the cloud, it uses resources aggregated into a common resource pool that is drawn from different federated physical servers.
   - Sometimes, cloud service brokers deploy cloud services for their customers on shared servers that lie on different cloud platforms.
• In that situation, the interconnection between the different servers needs to be maintained. There may also be a loss of control if any particular cloud server faces downtime, which can cause significant business loss.
• It is therefore important to look at inter-cloud resource management to address the limitations of resource provisioning.
• We have already seen the NIST architecture for cloud computing, which has three layers, namely infrastructure, platform and application. These three layers are offered as three services: Infrastructure as a Service, Platform as a Service and Software as a Service, respectively.


• Infrastructure as a Service is the foundation layer; it provides compute, storage and network services to the other two layers, Platform as a Service and Software as a Service.
• Although the three basic services differ in use, they are built on top of each other. In practice, five layers are required to run cloud applications. The functional layers of cloud computing services are shown in Fig. 4.1.1.
• The consequence is that one cannot directly launch SaaS applications on a cloud platform. The cloud platform for SaaS cannot be built unless the compute, storage and network infrastructure is established.
   - In this architecture, the lower three layers are more closely tied to physical specifications.
   - Hardware as a Service (HaaS) is the lowermost layer; it provides the various hardware resources needed to run cloud services.
   - The next layer is Infrastructure as a Service, which interconnects all hardware elements through compute, storage and network services.
   - The next layer offers two services: Network as a Service (NaaS), to bind and provision cloud services over the network, and Location as a Service (LaaS), to provide a collocation service that controls and protects all physical hardware and network resources.


   - The next layer is Platform as a Service, for web application deployment and delivery, while the topmost layer is used for on-demand application delivery.
• In any cloud platform, cloud infrastructure performance is the primary concern of every cloud service provider, while quality of service, service delivery and security are the concerns of cloud users.
• Every SaaS application is subdivided into different application areas for business use; for example, CRM is used for sales, promotion and marketing services.
• The infrastructure for operating cloud computing services may consist of either physical servers or virtual servers.
• By using VMs, the platform becomes flexible: running services are not tied to specific hardware platforms. This adds flexibility to cloud computing platforms.
• The software layer at the top of the platform stores huge amounts of data. As in a cluster environment, some runtime support services are available in the cloud computing environment.
• Cluster monitoring is used to obtain the running state of the cluster as a whole.
• The scheduler queues the tasks submitted to the cluster and assigns tasks to processing nodes according to node availability.
• The runtime support system helps keep the cloud cluster working with high efficiency.
   - Runtime support is the software needed by the browser-initiated applications used by thousands of cloud customers.
• The SaaS model offers software solutions as a service rather than requiring users to buy software. As a result, there is no initial investment in servers or software licenses on the customer side.
• On the provider side, the cost is low compared with conventional hosting of user applications.
• Customer data is stored in a cloud that is either private or publicly hosted by PaaS and IaaS providers.


Resource Provisioning and Resource Provisioning Methods

• The rise of cloud computing reflects major improvements in the design of both software and hardware.
• Cloud architecture places greater emphasis on the number of VM instances or CPU cores, and parallelism is exploited at the cluster node level.
• This section broadly covers the concept of resource provisioning and its methods.
• Resource Provisioning
   - Provisioning of Compute Resources
   - Provisioning of Storage Resources
   - Provisioning in Dynamic Resource Deployment
• Methods of Resource Provisioning
   - Demand-Driven Resource Provisioning
   - Event-Driven Resource Provisioning
   - Popularity-Driven Resource Provisioning

1. Provisioning of Compute Resources

• Cloud service providers offer cloud services by signing SLAs with end users.
   - The SLAs must commit appropriate resources, such as CPU, memory and bandwidth, that the user can use for a preset period.
• A lack of services and under-provisioning of resources lead to SLA violations and penalties.
   - Over-provisioning of resources leads to under-use of services and, as a consequence, to a decrease in revenue for the provider.
• Designing an automated system that provisions resources and services effectively is a difficult task.
   - The difficulties arise from the unpredictability of consumer demand, software and hardware failures, power management, and disputes over the SLAs signed between customers and service providers.

• Cloud architecture and the management of cloud infrastructure rely on effective VM provisioning.
• Resource provisioning schemes are also used for the rapid discovery of cloud computing services and data in the cloud.
• Virtualized clusters of servers require efficient VM deployment, live VM migration and fast failure recovery.
   - To deploy VMs, users treat them as physical hosts with customized operating systems for different applications.
• For example, Amazon's EC2 uses Xen as the Virtual Machine Monitor (VMM); Xen is also used in IBM's Blue Cloud.
• Public or private clouds promise to streamline the on-demand provisioning of software, hardware and data as a service, achieving economies of scale in IT deployment and operations. A hedged provisioning sketch follows this list.
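As a concrete illustration of compute provisioning, the following sketch requests a single VM instance from Amazon EC2 through the boto3 SDK. The region, AMI ID and instance type are placeholder assumptions; a production system would add error handling, tagging and SLA-aware sizing.

    # Minimal sketch of on-demand VM provisioning with boto3 (values below are assumptions).
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")   # region chosen for illustration

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # hypothetical AMI ID
        InstanceType="t3.micro",           # sized to match the SLA's CPU/memory commitment
        MinCount=1,
        MaxCount=1,
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print("Provisioned VM:", instance_id)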

2. Provisioning of Storage Resources

• Because cloud storage systems also offer resources to customers, data is typically stored in the clusters of the cloud provider.
• The provisioning of storage resources in the cloud is often associated with terms such as distributed file systems, storage technologies and databases.
• Several cloud computing providers have developed large-scale data storage services to hold the vast volumes of data collected every day.
• A distributed file system is essential for storing large data sets, which traditional file systems cannot handle.
• For cloud computing, it is also important to build databases as large-scale systems on top of data storage or distributed file systems.
• One example of a distributed file system is Google's GFS, which stores huge amounts of data generated on the web, including images, text files, PDFs and spatial data for Google Earth.
• The Hadoop Distributed File System (HDFS), developed by Apache, is another framework for distributed data storage from the open-source community.
• Hadoop is an open-source implementation of Google's cloud computing technology.
• The main aim is to store data in structured or semi-structured form so that application developers can use it easily and build their applications quickly.
• Traditional databases can hit a performance bottleneck when the system is extended to a larger scale; however, many real applications do not need such strong consistency.
• The size of these databases can grow very large.
• Typical cloud databases include (a hedged usage sketch follows this list):
   - Google's BigTable,
   - Amazon's SimpleDB and DynamoDB, and
   - the Azure SQL service from Microsoft Azure.
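To illustrate how an application provisions and uses such a cloud data store, the sketch below writes and reads one item in an Amazon DynamoDB table via boto3. The table name, key and attributes are hypothetical, and the table is assumed to already exist with "user_id" as its partition key.

    # Minimal sketch of storing a record in a cloud NoSQL database (DynamoDB via boto3).
    import boto3

    dynamodb = boto3.resource("dynamodb", region_name="us-east-1")   # region is an assumption
    table = dynamodb.Table("UserProfiles")   # hypothetical table with partition key "user_id"

    # Write a semi-structured item.
    table.put_item(Item={"user_id": "u-1001", "name": "Alice", "plan": "standard"})

    # Read it back to confirm the write.
    item = table.get_item(Key={"user_id": "u-1001"}).get("Item")
    print(item)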

3. Provisioning in Dynamic Resource Deployment

• Cloud computing uses virtual machines as the basic building blocks for constructing an execution environment across multiple resource sites.
• Resource provisioning in a dynamic environment can be carried out to achieve scalable performance.
• The Inter-Grid is a Java-implemented programming model that allows users to build cloud-based execution environments on top of all active grid resources.
• Peering arrangements established between gateways enable the allocation of resources from multiple grids to establish the execution environment.
• Figure 4.6 illustrates a scenario in which an inter-grid gateway (IGG) allocates resources from a local cluster to deploy applications in three steps:
   - (1) requesting the VMs,
   - (2) enacting the leases, and
   - (3) deploying the VMs as requested.
• Under peak demand, this IGG interacts with another IGG that can allocate resources from a cloud computing provider.
• A grid has predefined peering arrangements with other grids, which the IGG manages.
• Through multiple IGGs, the system coordinates the use of Inter-Grid resources.


Figure 4.6

• An IGG is aware of the peering terms with other grids, selects suitable grids that can provide the required resources, and replies to requests from other IGGs.
• Request redirection policies determine which peering grid the Inter-Grid selects to process a request and the price at which that grid will perform the task.
• An IGG can also allocate resources from a cloud provider.
• The cloud system creates a virtual environment to help users deploy their applications. These applications use the distributed grid resources.
• The Inter-Grid allocates and provides a distributed virtual environment (DVE). This is a virtual cluster of VMs that runs isolated from other virtual clusters.
• A component called the DVE manager performs resource allocation and management on behalf of specific user applications.
• The core component of the IGG is a scheduler that implements provisioning policies and peers with other gateways.
• The communication component provides an asynchronous message-passing mechanism; received messages are handled in parallel by a thread pool. A simplified sketch of the three-step allocation flow follows.
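To make the three-step flow concrete, the sketch below models a much-simplified gateway that requests VMs, enacts a lease and then deploys the VMs. The class and method names are illustrative inventions, not the actual Inter-Grid API.

    # Illustrative sketch of the three-step IGG allocation flow; all names are hypothetical.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Lease:
        vm_count: int
        duration_hours: int
        granted: bool = False

    @dataclass
    class InterGridGateway:
        local_capacity: int                          # VMs available in the local cluster
        deployed: List[str] = field(default_factory=list)

        def request_vms(self, vm_count: int, duration_hours: int) -> Lease:
            # Step 1: request VMs from the local cluster (or a peered IGG under peak load).
            return Lease(vm_count, duration_hours)

        def enact_lease(self, lease: Lease) -> Lease:
            # Step 2: enact the lease if the local cluster can satisfy it.
            lease.granted = lease.vm_count <= self.local_capacity
            return lease

        def deploy(self, lease: Lease) -> List[str]:
            # Step 3: deploy the requested VMs once the lease is granted.
            if lease.granted:
                self.deployed = [f"vm-{i}" for i in range(lease.vm_count)]
                self.local_capacity -= lease.vm_count
            return self.deployed

    igg = InterGridGateway(local_capacity=10)
    lease = igg.enact_lease(igg.request_vms(vm_count=3, duration_hours=2))
    print(igg.deploy(lease))   # ['vm-0', 'vm-1', 'vm-2']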

Resource Provisioning Methods

• If resources are under-provisioned, the user may give up the service by cancelling the demand, resulting in reduced revenue for the provider.
• Without elasticity in resource provisioning, both the user and the provider may lose.
• Three resource-provisioning methods are presented in the following sections:
   - (1) The demand-driven method provides static resources and has been used in grid computing for many years.
   - (2) The event-driven method is based on workload predicted by time.
   - (3) The popularity-driven method is based on monitored Internet traffic.

1. Demand-Driven Resource Provisioning

• This method adds or removes computing instances based on the current utilization level of the allocated resources.
• For example, the demand-driven method automatically allocates two Xeon processors to a user application when the user has been using one Xeon processor more than 60 percent of the time for an extended period.
• In general, when a resource has exceeded a threshold for a certain amount of time, the scheme increases that resource based on demand.
• When a resource stays below a threshold for a certain amount of time, that resource can be decreased accordingly.
• Amazon implements such an auto-scale feature in its EC2 platform. This method is easy to implement, but the scheme does not work out well if the workload changes abruptly. A minimal sketch of this threshold rule is given at the end of this subsection.
• The x-axis in Figure 4.2 is the time scale in milliseconds. In the beginning, heavy fluctuations of CPU load are encountered.


Figure 4.2

• All three methods demand only a few VM instances initially.
• Gradually, the utilization rate stabilizes, with a maximum of 20 VMs (100 percent utilization) provided by demand-driven provisioning in Figure 4.2(a).
• The event-driven method reaches a stable peak of 17 VMs toward the end of the event and drops quickly in Figure 4.2(b).
• Popularity-driven provisioning, shown in Figure 4.2(c), leads to a similar fluctuation, with peak VM utilization in the middle of the plot.
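The following sketch captures the threshold rule behind demand-driven scaling. The thresholds, window length and instance limits are illustrative assumptions, not EC2 auto-scaling defaults.

    # Minimal sketch of demand-driven (threshold-based) scaling; all numbers are assumptions.
    def scale_demand_driven(instances: int,
                            recent_utilization: list[float],
                            high: float = 0.60,     # scale up above 60% average utilization
                            low: float = 0.20,      # scale down below 20% average utilization
                            max_instances: int = 20,
                            min_instances: int = 1) -> int:
        """Return the new instance count given a window of recent utilization samples."""
        avg = sum(recent_utilization) / len(recent_utilization)
        if avg > high and instances < max_instances:
            return instances + 1      # sustained high load: add an instance
        if avg < low and instances > min_instances:
            return instances - 1      # sustained low load: remove an instance
        return instances              # otherwise keep the current allocation

    print(scale_demand_driven(1, [0.72, 0.68, 0.81]))   # -> 2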


2. Event-Driven Resource Provisioning

• This scheme adds or removes machine instances based on a specific time event.
• The scheme works well for seasonal or predictable events such as Christmastime in the West and the Lunar New Year in the East.
• During these events, the number of users grows before the event period and then decreases during the event period.
• This scheme anticipates peak traffic before it happens, and it results in a minimal loss of QoS if the event is predicted correctly. A small sketch of such calendar-driven scaling follows.
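The sketch below expresses the idea of scaling from a calendar of predicted events. The event dates, lead time and instance counts are invented for illustration.

    # Minimal sketch of event-driven scaling from a calendar of predicted peaks (assumed data).
    from datetime import date, timedelta

    # Hypothetical event calendar: event date -> instances needed during the peak.
    EVENT_CALENDAR = {date(2024, 12, 25): 17, date(2025, 1, 29): 15}
    LEAD_TIME = timedelta(days=3)    # provision capacity a few days before the event
    BASELINE = 2                     # normal, off-peak instance count

    def planned_instances(today: date) -> int:
        """Return the instance count scheduled for 'today' based on upcoming events."""
        for event_day, peak in EVENT_CALENDAR.items():
            if event_day - LEAD_TIME <= today <= event_day:
                return peak          # ramp up ahead of the predicted peak
        return BASELINE              # otherwise stay at the baseline allocation

    print(planned_instances(date(2024, 12, 23)))   # -> 17 (inside the lead window)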

3. Popularity-Driven Resource Provisioning

• In popularity-driven resource provisioning, resources are allocated based on the popularity of certain applications and their demands.
• In this method, the Internet is searched for the popularity of certain applications, and instances are created according to popularity demand.
• The scheme anticipates increased traffic as popularity rises; if the traffic does not materialize as expected, resources are wasted.
• Again, the scheme suffers a minimal loss of QoS if the predicted popularity is correct. A short sketch of popularity-based sizing follows.
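The following sketch estimates an instance count from a popularity signal such as recent request or search volumes. The smoothing factor and requests-per-instance figure are assumptions chosen for illustration.

    # Minimal sketch of popularity-driven sizing with exponential smoothing (assumed parameters).
    import math

    def popularity_instances(request_counts: list[int],
                             requests_per_instance: int = 1000,
                             alpha: float = 0.5) -> int:
        """Smooth recent request counts and size the fleet to the predicted popularity."""
        trend = request_counts[0]
        for count in request_counts[1:]:
            trend = alpha * count + (1 - alpha) * trend   # exponentially weighted popularity
        return max(1, math.ceil(trend / requests_per_instance))

    # A rising trend leads to a larger allocation (and wasted VMs if the traffic never arrives).
    print(popularity_instances([800, 1600, 3200]))   # -> 3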

Cloud Security

• A healthy cloud ecosystem is desired to free users from abuse, violence, cheating, hacking, viruses, spam, and privacy and copyright violations.
• The security demands of the three cloud service models, IaaS, PaaS and SaaS, are described in this section.
• These security models are based on various SLAs between providers and users.

Basic Cloud Security:

• Three basic cloud security enforcements are expected.


• First, facility security in data centers demands on-site security year round. Biometric readers, CCTV (closed-circuit TV), motion detection and man traps are often deployed.
• Second, network security demands fault-tolerant external firewalls, intrusion detection systems (IDSes) and third-party vulnerability assessment.
• Finally, platform security demands SSL, data encryption, strict password policies and system trust certification.
• A security-aware cloud architecture demands security enforcement. Malware-based attacks such as network worms, viruses and DoS attacks exploit system vulnerabilities.
• Thus, security defences are needed to protect all cluster servers and data centers. Some cloud components that demand special security protection are:
   - Protection of servers from malicious software attacks such as worms, viruses and malware
   - Protection of hypervisors or VM monitors from software-based attacks and vulnerabilities
   - Protection of VMs and monitors from service disruption and DoS attacks
   - Protection of data and information from theft, corruption and natural disasters
   - Providing authenticated and authorized access to critical data and services

Security Challenges in VMs

• Traditional network attacks include buffer overflows, DoS attacks, spyware, malware, rootkits, Trojan horses and worms.
• In a cloud environment, newer attacks may result from hypervisor malware, guest hopping and hijacking, or VM rootkits.
• Another type of attack is the man-in-the-middle attack during VM migration.


• In general, passive attacks steal sensitive data or passwords, while active attacks may manipulate kernel data structures, causing major damage to cloud servers.

Cloud Defence Methods

• Virtualization enhances cloud security, but VMs add an additional layer of software that could become a single point of failure.
• With virtualization, a single physical machine can be divided or partitioned into multiple VMs (e.g., server consolidation). This gives each VM better security isolation, and each partition is protected from DoS attacks by the other partitions.
• Security attacks in one VM are isolated and contained so that they do not affect the other VMs. Table 4.9 lists eight protection schemes for securing public clouds and data centers.
• VM failures do not propagate to other VMs. The hypervisor provides visibility into the guest OS, with complete guest isolation.
• Fault containment and failure isolation of VMs provide a more secure and robust environment.


• Malicious intrusions may destroy valuable hosts, networks and storage resources.
• Internet anomalies found in routers, gateways and distributed hosts may stop cloud services.
• Trust negotiation is often done at the SLA level.
• Public Key Infrastructure (PKI) services could be augmented with data-center reputation systems.
• It is harder to establish security in the cloud because all data and software are shared by default.

Defence with Virtualization

• The VM is decoupled from the physical hardware.
• The entire VM can be represented as a software component and regarded as binary or digital data.
• The VM can be saved, cloned, encrypted, moved or restored with ease.
• VMs enable faster disaster recovery.
• Multiple IDS VMs can be deployed at various resource sites, including data centers.
• Security policy conflicts must be resolved at design time and updated periodically.

Privacy and Copyright Protection

• Amazon EC2 applies HMAC and X.509 certificates in securing resources. It is necessary to protect browser-initiated application software in the cloud environment.
• Several security features are desired in a secure cloud:
   - Dynamic web services with full support from secure web technologies
   - Established trust between users and providers through SLAs and reputation systems
   - Effective user identity management and data-access management
   - Single sign-on and single sign-off to reduce security enforcement overhead


   - Auditing and copyright compliance through proactive enforcement
   - Protection of sensitive and regulated information in a shared environment

Cloud Security Challenges

• Although virtualization and cloud computing can help companies accomplish more by breaking the physical bonds between an IT infrastructure and its users, serious security threats must be overcome in order to benefit fully from this new computing paradigm. This is particularly true for the SaaS provider.
• Some security concerns are worth more discussion. For example, in the cloud:
   - You lose control over assets in some respects, so your security model must be reassessed.
   - Enterprise security is only as good as the least reliable partner, department or vendor. With the cloud model, you lose control over physical security, and in a public cloud you share computing resources with other companies.
   - In a shared pool outside the enterprise, you have no knowledge or control of where the resources run. Exposing your data in an environment shared with other companies could give the government "reasonable cause" to seize your assets because another company has violated the law; simply sharing the environment in the cloud may put your data at risk of seizure.
   - Storage services provided by one cloud vendor may be incompatible with another vendor's services should you decide to move from one to the other.
   - Vendors are known for creating what the hosting world calls "sticky services": services that an end user may have difficulty transporting from one cloud vendor to another.
   - If information is encrypted while passing through the cloud, who controls the encryption/decryption keys: the customer or the cloud vendor?


   - Most customers want their data encrypted both ways across the Internet using SSL (Secure Sockets Layer).
   - They also most likely want their data encrypted while it is at rest in the cloud vendor's storage pool.
   - Be sure that you, the customer, control the encryption/decryption keys, just as if the data were still resident on your own servers.
   - Data integrity means ensuring that data is identically maintained during any operation (such as transfer, storage or retrieval). Put simply, data integrity is assurance that the data is consistent and correct.
   - Ensuring the integrity of the data means that it changes only in response to authorized transactions. A hedged integrity-check sketch follows this list.
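To illustrate an integrity check at the data level, the sketch below computes a SHA-256 digest before data is stored and verifies it after retrieval. The byte strings are placeholders, and in practice the digests would be stored separately from the data they protect.

    # Minimal sketch of verifying data integrity with a SHA-256 digest (sample data is hypothetical).
    import hashlib

    def sha256_digest(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    original = b"customer record v1"
    stored_digest = sha256_digest(original)    # computed before the data leaves your control

    # ... the bytes are uploaded to, and later retrieved from, the cloud store ...
    retrieved = b"customer record v1"

    if sha256_digest(retrieved) == stored_digest:
        print("Integrity verified: data unchanged in transit and at rest")
    else:
        print("Integrity check failed: data was altered")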

Software-as-a-Service Security

• Cloud computing models of the future will likely combine the use of SaaS (and other XaaS offerings as appropriate), utility computing, and Web 2.0 collaboration technologies to leverage the Internet to satisfy their customers' needs.
• The new business models being developed as a result of the move to cloud computing are creating not only new technologies and business operational processes but also new security requirements and challenges, as described previously.
• As the most recent evolutionary step in the cloud service model (see Figure 4.5), SaaS will likely remain the dominant cloud service model for the foreseeable future and the area where the most critical need for security practices and oversight will reside.


Fig 4.5
The technology analyst and consulting firm Gartner lists seven security issues that one should discuss with a cloud computing vendor:

1. Privileged user access
   - Inquire about who has specialized access to data, and about the hiring and management of such administrators.
2. Regulatory compliance
   - Make sure that the vendor is willing to undergo external audits and/or security certifications.
3. Data location
   - Does the provider allow for any control over the location of data?
4. Data segregation
   - Make sure that encryption is available at all stages, and that these encryption schemes were designed and tested by experienced professionals.
5. Recovery
   - Find out what will happen to data in the case of a disaster. Do they offer complete restoration? If so, how long would that take?
6. Investigative support
   - Does the vendor have the ability to investigate any inappropriate or illegal activity?
7. Long-term viability
   - What will happen to data if the company goes out of business? How will data be returned, and in what format?

Data Security

• Physical security defines how you control physical access to the servers that support your infrastructure. The cloud still has physical security constraints: after all, there are actual servers running somewhere.
• When selecting a cloud provider, you should understand its physical security protocols and the things you need to do on your end to secure your systems against physical vulnerabilities.
• The ultimate challenge in cloud computing is data-level security, and sensitive data is the domain of the enterprise, not the cloud computing provider.
• Security will need to move to the data level so that enterprises can be sure their data is protected wherever it goes. For example, with data-level security, the enterprise can specify that particular data is not allowed to go outside the United States.
• It can also force encryption of certain types of data and permit only specified users to access the data.
• It can provide compliance with the Payment Card Industry Data Security Standard (PCI DSS).
• True unified end-to-end security in the cloud will likely require an ecosystem of partners. A small data-level policy sketch follows.
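As a sketch of what a data-level policy check might look like, the function below rejects transfers that would move tagged data outside an allowed region or to an unauthorized user. The regions, users and policy fields are invented for illustration and do not correspond to any specific product.

    # Illustrative data-level policy check; regions, users and fields are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class DataPolicy:
        allowed_regions: set      # the data must stay inside these regions
        authorized_users: set     # only these users may access the data
        require_encryption: bool  # force encryption for this class of data

    def transfer_allowed(policy: DataPolicy, region: str, user: str, encrypted: bool) -> bool:
        """Return True only if the transfer satisfies every data-level rule."""
        return (region in policy.allowed_regions
                and user in policy.authorized_users
                and (encrypted or not policy.require_encryption))

    pci_policy = DataPolicy({"us-east-1", "us-west-2"}, {"alice", "billing-svc"}, True)
    print(transfer_allowed(pci_policy, "eu-west-1", "alice", True))    # False: data leaves the allowed regions
    print(transfer_allowed(pci_policy, "us-east-1", "alice", True))    # True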

Data Control

• The big gap between traditional data centers and the cloud is the location of your data on someone else's servers.
• Companies that have outsourced their data centers to a managed services provider already face a similar situation.
• The main practical problem is that factors that have nothing to do with your business can compromise your operations and your data.
• For example, any of the following events could create trouble for your infrastructure:
   - The cloud provider declares economic failure and its servers are seized, or it ceases operations.

   - A third party with no relationship to you (or, worse, a competitor) sues your cloud provider and is granted access to all servers owned by the cloud provider.
   - Your cloud provider fails to properly secure portions of its infrastructure, especially the maintenance of physical access controls, resulting in the compromise of your systems.
• The solution is to do two things you should be doing anyway (a hedged encryption sketch follows this list):
   - Encrypt everything and keep off-site backups.
      • Encrypt sensitive data in your database and in memory; decrypt it only in memory, and only for as long as the data is needed.
   - Encrypt your backups and encrypt all network communications.
      • Choose a second provider and use automated, regular backups (for which many open-source and commercial solutions exist) to make sure any current and historical data can be recovered even if your cloud provider were to disappear from the face of the earth.
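To make the "encrypt everything" advice concrete, the sketch below encrypts a record with a symmetric key using the cryptography package's Fernet recipe. Key storage and rotation are out of scope here; in practice the key must be kept under your control, outside the cloud provider's reach.

    # Minimal sketch of encrypting data before it is stored in the cloud (Fernet symmetric encryption).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()     # keep this key under your control, not the provider's
    cipher = Fernet(key)

    plaintext = b"sensitive customer record"
    ciphertext = cipher.encrypt(plaintext)     # store and back up only the ciphertext

    # Decrypt in memory only when the data is actually needed.
    assert cipher.decrypt(ciphertext) == plaintext
    print("Round trip OK; ciphertext length:", len(ciphertext))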

Application Security

• Application security is one of the critical success factors for a world-class SaaS company.
• This is where the security features and requirements are defined and application security test results are reviewed.
• Application security processes, secure coding guidelines, training, and testing scripts and tools are typically a collaborative effort between the security and development teams.

• Although product engineering will likely focus on the application layer, the security design of the application itself, and the infrastructure layers that interact with the application, the security team should provide the security requirements for the product development engineers to implement.
• This should be a collaborative effort between the security and product development teams.
• External penetration testers are used for application source code reviews, and attack and penetration tests provide an objective review of the application's security as well as assurance to customers that such tests are performed regularly.
• Fragmented and undefined collaboration on application security can result in lower-quality design, coding efforts and testing results.
• Since many connections between companies and their SaaS providers are made through the web, providers should secure their web applications by following Open Web Application Security Project (OWASP) guidelines for secure application development and by locking down ports.
• LAMP is an open-source web development platform, also called a web stack, that uses Linux as the operating system, Apache as the web server, MySQL as the relational database management system (RDBMS), and PHP as the object-oriented scripting language.
   - Perl or Python is often substituted for PHP.

Virtual Machine Security

• In traditional networks, several security attacks arise, such as buffer overflows, DoS attacks, spyware, malware, rootkits, Trojan horses and worms. Newer attacks may arise in a cloud environment, such as hypervisor malware, guest hopping, hijacking, or VM rootkits.
• The man-in-the-middle attack during VM migration is another type of attack on virtual machines.


• Passive attacks on VMs usually steal sensitive information or passwords, while active attacks manipulate kernel data structures and can cause significant damage to cloud servers.
• To counter security attacks on VMs, network-level or hardware-level IDS can be used for protection, shepherding programs can be applied for code-execution control and verification, and additional security technologies can be used.
• The additional security technologies include the RIO dynamic optimization infrastructure, VMware's vSafe and vShield tools, hypervisor security enforcement and Intel vPro technology, as well as the use of a hardened OS environment or isolated execution and sandboxing.
• In the cloud environment, physical servers are consolidated onto virtualized servers hosting several virtual machine instances.
• Firewalls, intrusion detection and prevention, integrity monitoring and log inspection can all be deployed as software on virtual machines to enhance the integrity of servers, increase protection and maintain compliance.
• This allows applications to move from on-site to public cloud environments as virtual resources.
• The security software loaded on a virtual machine should include a two-way stateful firewall that enables virtual machine isolation and location awareness, allowing tighter policy and the flexibility to move the virtual machine from on-premises to cloud resources, which eases centralized management of the server firewall policy.
• Integrity monitoring and log inspection should also be applied at the virtual machine level.
