Unit 1 Cloud Computing
Evolution of Cloud Computing: The ideas behind cloud computing date back to the 1950s, and the field evolved from time-sharing and distributed computing into the modern technology known as cloud computing. Well-known cloud services include those provided by Amazon, Google, and Microsoft. Cloud computing allows users to access a wide range of services stored in the cloud, that is, on the Internet. These services include compute resources, data storage, applications, servers, development tools, and networking.
The evolution of cloud computing is marked by significant milestones that have transformed
the way technology is delivered and consumed. Here's an overview of its progression:
Before the term "cloud computing" existed, foundational concepts were already in motion.
1950s–1960s: Mainframe Computing & Time-Sharing
o Large mainframe computers were expensive and used by multiple users
simultaneously through time-sharing. This allowed businesses and institutions to
maximize computing power efficiently, which planted early seeds for shared
computing.
1970s–1980s: Virtualization & Networking
o Virtualization emerged with companies like IBM introducing virtual machines,
allowing multiple operating systems on a single physical machine.
o The development of computer networks (like ARPANET, the precursor to the
internet) laid the groundwork for distributed computing.
1990s: Rise of the Internet & SaaS Concepts
o With the explosion of the internet, businesses started offering web-based
applications. Salesforce launched in 1999 as one of the first Software as a
Service (SaaS) platforms, providing CRM tools over the web.
Cloud computing continues to evolve with advanced technologies and widespread adoption.
Serverless Computing
o Technologies like AWS Lambda enable developers to run code without
managing servers, paying only for the compute time used.
Edge Computing & IoT Integration
o Edge computing brings computation closer to data sources (like IoT devices),
reducing latency and improving performance for real-time applications.
Artificial Intelligence & Quantum Computing
o Major cloud providers offer AI and machine learning tools (e.g., TensorFlow on
Google Cloud, Azure AI), democratizing access to advanced analytics.
o Quantum computing is being explored within cloud platforms, offering
experimental tools for complex problem-solving.
Focus on Security & Compliance
o With growing concerns over data privacy, cloud providers now offer advanced
security features and compliance tools to meet global regulations (like GDPR).
Sustainability & Green Cloud
o Cloud providers are focusing on sustainable data centers and energy-efficient
operations to reduce the carbon footprint of cloud infrastructure.
Mainframe Computing: Mainframes, which first came into existence in 1951, are highly powerful and reliable computing machines. They are responsible for handling large volumes of data and massive input-output operations. Even today they are used for bulk processing tasks such as online transaction processing. These systems have almost no downtime and very high fault tolerance. Coming after distributed computing, they increased the processing capability of systems, but they were very expensive. To reduce this cost, cluster computing emerged as an alternative to mainframe technology.
Grid Computing: In the 1990s, the concept of grid computing was introduced. Different systems placed at entirely different geographical locations were all connected via the internet. These systems belonged to different organizations, and thus the grid consisted of heterogeneous nodes. Although it solved some problems, new problems emerged as the distance between the nodes increased. The main problem encountered was the low availability of high-bandwidth connectivity, along with other network-related issues. Thus, cloud computing is often referred to as the "successor of grid computing".
Virtualization: Virtualization was introduced nearly 40 years ago. It refers to the process of creating a virtual layer over the hardware that allows the user to run multiple instances simultaneously on that hardware. It is a key technology used in cloud computing and the base on which major cloud computing services such as Amazon EC2 and VMware vCloud run. Hardware virtualization is still one of the most common types of virtualization.
Web 2.0
Web 2.0: Web 2.0 is the interface through which cloud computing services interact with clients. It is because of Web 2.0 that we have interactive and dynamic web pages, and it makes web pages far more flexible. Popular examples of Web 2.0 include Google Maps, Facebook, Twitter, etc. Social media, in particular, is possible only because of this technology. It gained major popularity in 2004.
Service Orientation
Service Orientation: Service orientation acts as a reference model for cloud computing. It supports low-cost, flexible, and evolvable applications. Two important concepts were introduced in this computing model: Quality of Service (QoS), which also includes the Service Level Agreement (SLA), and Software as a Service (SaaS).
Utility Computing
Utility Computing: Utility computing is the model of offering computing resources as a metered service, much like electricity or water, so that users pay only for what they consume. This pay-per-use idea is a direct forerunner of cloud pricing models.
Cloud Computing: Cloud computing means storing and accessing data and programs on remote servers hosted on the internet instead of on a computer's hard drive or a local server. Cloud computing is also referred to as Internet-based computing; it is a technology in which resources are provided as a service to the user over the Internet. The stored data can be files, images, documents, or any other form of data.
Advantages of Parallel Computing
1. It saves time and money because many resources working together cut down on time and cost.
2. It can solve larger problems that are difficult to handle with serial computing.
3. You can do many things at once using many computing resources.
4. Parallel computing is much better than serial computing for modeling, simulating, and comprehending complicated real-world events.
Advantages of Distributed Computing
There are various benefits of using distributed computing. It enables scalability and makes it simpler to share resources. It also aids in the efficiency of computation processes.
Disadvantages of Distributed Computing
1. Data security and sharing are the main issues in distributed systems due to the features of open systems.
2. Because of the distribution across multiple servers, troubleshooting and diagnostics are more challenging.
3. The main disadvantage of distributed computer systems is the lack of software support.
Characteristics of Cloud Computing
1. On-Demand Self-Service
o Users can automatically provision computing resources (like storage, processing
power, or applications) without needing human interaction with the service
provider.
o Example: You can spin up a virtual machine on AWS or Google Cloud anytime with just a few clicks (see the provisioning sketch after this list).
2. Broad Network Access
o Cloud services are accessible over the internet from a wide range of devices (e.g.,
laptops, smartphones, tablets).
o This ensures users can access resources from anywhere with an internet
connection.
3. Resource Pooling
o Cloud providers use multi-tenancy models, where computing resources (such as
storage, processing, and bandwidth) are pooled together to serve multiple
customers.
o Resources are dynamically allocated and reassigned according to user demand,
often without the user knowing the exact physical location of their data.
4. Rapid Elasticity and Scalability
o Cloud resources can be quickly scaled up or down based on demand. This
elasticity allows businesses to handle varying workloads efficiently.
o Example: E-commerce platforms can automatically scale up resources during
high-traffic events like Black Friday and scale them down afterward.
5. Measured Service (Pay-As-You-Go)
o Cloud systems automatically monitor and report resource usage, enabling a pay-
per-use or subscription-based billing model.
o Users are billed based on actual consumption, such as CPU hours used, storage capacity, or network bandwidth (a billing calculation sketch follows this list).
6. High Availability and Reliability
o Cloud providers offer robust infrastructure with redundancy and failover
mechanisms to ensure 99.9% uptime or higher.
o Data is often replicated across multiple data centers for disaster recovery (see the availability calculation after this list).
7. Multi-Tenancy and Shared Resources
o Multiple customers share the same infrastructure while maintaining data
isolation and security.
o This model improves efficiency and reduces costs for both providers and users.
8. Security and Compliance
o Leading cloud providers offer robust security measures, including encryption,
firewalls, identity management, and compliance with global standards (like
GDPR, HIPAA).
o Security is a shared responsibility between the provider and the customer.
9. Automation and Orchestration
o Cloud environments support automation tools and orchestration for tasks like
resource provisioning, monitoring, and scaling.
o Technologies like Infrastructure as Code (IaC) allow users to manage
infrastructure through code (e.g., using tools like Terraform).
10. Location Independence
o Users can access services and data from virtually anywhere, as long as they have internet connectivity.
o The physical location of data centers is abstracted, offering flexibility to users.
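As an illustration of on-demand self-service (characteristic 1 above), the short Python sketch below uses the AWS SDK for Python (boto3) to launch a virtual machine programmatically. The region, AMI ID, and instance type are placeholder values, and configured AWS credentials are assumed; treat it as a sketch rather than a production script.

    import boto3

    # Create an EC2 client; assumes AWS credentials are already configured locally.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Provision one small virtual machine on demand.
    # "ami-12345678" is a placeholder image ID, not a real AMI.
    response = ec2.run_instances(
        ImageId="ami-12345678",
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )

    # Print the ID of the newly created instance.
    print(response["Instances"][0]["InstanceId"])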
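Pay-as-you-go billing (characteristic 5) comes down to multiplying measured usage by a unit rate. The Python sketch below uses made-up example prices, not any provider's actual rates.

    # Hypothetical unit prices (illustrative only, not real provider pricing).
    PRICE_PER_CPU_HOUR = 0.05   # dollars per vCPU-hour
    PRICE_PER_GB_MONTH = 0.02   # dollars per GB-month of storage

    def monthly_bill(cpu_hours, storage_gb):
        """Bill = measured usage multiplied by the unit rate, summed per resource."""
        return cpu_hours * PRICE_PER_CPU_HOUR + storage_gb * PRICE_PER_GB_MONTH

    # 720 vCPU-hours (one vCPU running for a month) plus 500 GB of storage.
    print(monthly_bill(cpu_hours=720, storage_gb=500))  # -> 46.0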
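The reliability gain from replication (characteristic 6) can be checked with a one-line calculation: if each independent data center is up with probability a, then at least one of n replicas is up with probability 1 - (1 - a)^n. The numbers below are illustrative.

    def combined_availability(a, n):
        """Availability of n independent replicas, each with availability a."""
        return 1 - (1 - a) ** n

    # A single site at 99% uptime versus the same site replicated across 3 data centers.
    print(combined_availability(0.99, 1))  # 0.99
    print(combined_availability(0.99, 3))  # 0.999999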
Emerging Trends in Cloud Computing
1. Serverless Computing
o With serverless architectures, developers can deploy code without managing the
underlying infrastructure. The cloud provider handles the server management.
o Example: AWS Lambda lets you run code in response to events without provisioning servers (a minimal handler sketch follows this list).
2. Edge Computing
o Cloud resources are increasingly being deployed closer to end-users (at the "edge"
of the network) to reduce latency and improve performance.
o This is especially important for IoT devices and real-time applications like
autonomous vehicles.
3. Sustainability and Energy Efficiency
o Modern cloud providers are focusing on green energy and sustainable data center
operations, optimizing resource utilization to reduce the carbon footprint.
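For context on the serverless model described in point 1 above, an AWS Lambda function written in Python is just a handler that the platform invokes with the triggering event; the developer never provisions or manages a server. The body below is a minimal placeholder.

    import json

    def lambda_handler(event, context):
        """Entry point that AWS Lambda calls for each event (e.g., an API request)."""
        # 'event' carries the trigger payload; 'context' carries runtime metadata.
        return {
            "statusCode": 200,
            "body": json.dumps({"message": "handled without managing any servers"}),
        }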
Cloud computing has evolved over several decades, from basic time-sharing concepts to today’s
sophisticated cloud platforms. Below is a timeline of key developments in cloud computing
history:
Salesforce (1999): One of the first Software as a Service (SaaS) companies, offering CRM solutions via the internet.
Application Service Providers (ASPs): Companies started renting out software
applications over the internet.
Amazon Web Services (AWS) launched in 2006, offering Elastic Compute Cloud
(EC2) and Simple Storage Service (S3), marking the beginning of Infrastructure as a
Service (IaaS).
Google launched Google Docs (2006) and Google App Engine (2008), enabling cloud-based collaboration and application hosting.
Microsoft introduced Azure (2010), expanding the cloud market further.
Hybrid & Multi-Cloud Adoption: Enterprises combined private and public clouds.
Containerization (Docker, Kubernetes): Improved cloud efficiency.
Edge Computing & AI Integration: Services like Google AI, AWS Lambda (2014),
and Azure AI emerged.
IBM, Oracle, and Alibaba Cloud expanded their cloud offerings.
Serverless Computing & FaaS: Developers use cloud platforms without managing
infrastructure.
Quantum Computing in the Cloud: Google, IBM, and AWS invest in quantum cloud
services.
Sustainability & Green Cloud: Cloud providers focus on carbon-neutral data centers.
Cloud Computing Architecture
The architecture of cloud computing is divided into two parts:
1. Front end
2. Back end
Front end
The client uses the front end. It includes the applications and client-side interfaces needed to access cloud computing platforms. Web browsers (such as Chrome, Firefox, and Internet Explorer), thin and fat clients, tablets, and mobile devices are all part of the front end.
Back end
The service provider uses the back end. It manages all the resources needed to deliver cloud computing services, including vast amounts of data storage, servers, virtual machines, deployment models, traffic management mechanisms, and security measures.
Note: Both the front end and the back end are connected to each other through a network, generally using an internet connection.
Components of Cloud Computing Architecture
1. Client Infrastructure
Client infrastructure is a front-end component. It provides the graphical user interface (GUI) through which clients interact with the cloud.
2. Application
The application may be any software or platform that a client wants to access.
3. Service
A cloud service manages which type of service you access, according to the client's requirements.
4. Runtime Cloud
Runtime Cloud provides the execution and runtime environment to the virtual machines.
5. Storage
Storage is one of the most important components of cloud computing. It provides a huge amount
of storage capacity in the cloud to store and manage data.
6. Infrastructure
It provides services on the host level, application level, and network level. Cloud infrastructure
includes hardware and software components such as servers, storage, network devices,
virtualization software, and other storage resources that are needed to support the cloud
computing model.
7. Management
Management is used to manage components such as application, service, runtime cloud, storage,
infrastructure, and other security issues in the backend and establish coordination between them.
8. Security
Security is a built-in back-end component of cloud computing. It implements the security mechanisms that protect cloud resources, systems, and data.
9. Internet
The Internet is the medium through which the front end and back end interact and communicate with each other, as illustrated by the small example below.
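From the front end's point of view, this interaction is usually just an HTTPS request to a back-end service endpoint. The Python sketch below uses the requests library and a hypothetical URL and token to show the shape of such a call.

    import requests

    # Hypothetical back-end endpoint exposed by a cloud service (not a real URL).
    BACKEND_URL = "https://api.example-cloud.com/v1/files"

    # The front end talks to the back end over the internet, authenticating with a token.
    response = requests.get(BACKEND_URL, headers={"Authorization": "Bearer <token>"})
    print(response.status_code)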
Types of Cloud
In cloud computing, there are four primary types of cloud deployment models, each tailored to
different organizational needs and how resources are accessed. These models are:
1. Public Cloud
Description: In a public cloud, the cloud services (compute, storage, etc.) are provided
over the internet by third-party vendors and are available to anyone who wants to use or
purchase them.
Ownership: Managed by external cloud service providers.
Examples: Amazon Web Services (AWS), Microsoft Azure, Google Cloud.
Advantages:
o Low cost due to shared resources.
o Easy scalability.
o No need for maintenance or infrastructure management.
Use Cases: Startups, small businesses, or developers needing quick access to scalable
resources.
2. Private Cloud
Description: In a private cloud, the cloud infrastructure is operated solely for a single organization, either on-premises or hosted by a third party. It offers greater control, customization, and security than a public cloud.
Use Cases: Organizations with strict security, regulatory, or performance requirements, such as banks and government agencies.
3. Hybrid Cloud
Description: A hybrid cloud combines private and public clouds, allowing data and applications to be shared between them. This enables more flexibility and deployment options.
Ownership: Combination of both public and private clouds managed together.
Examples: AWS Outposts, Microsoft Azure Stack, Google Anthos.
Advantages:
o Flexibility to scale workloads between public and private clouds.
o Optimizes existing infrastructure while leveraging the benefits of the public
cloud.
o Ideal for businesses that require both security and scalability.
Use Cases: Organizations that need to keep critical workloads in private clouds but wish
to take advantage of public cloud for non-sensitive workloads.
4. Community Cloud
Description: A community cloud is shared by several organizations with common concerns, such as security, compliance, or jurisdiction (for example, government agencies or research institutions). Costs and management are shared among the participating organizations.
Cloud computing enables various business models, helping companies generate revenue through
cloud-based services. Here are the primary business models in cloud computing:
1. Subscription Model
Customers pay a fixed recurring fee (monthly or annually) for access to cloud services.
Examples:
o Microsoft 365
o Salesforce CRM
2. Pay-Per-Use Model
Customers pay only for the resources they use, such as computing power, storage, or
bandwidth.
Ideal for businesses with variable workloads.
Examples:
o Amazon Web Services (AWS EC2, S3)
o Google Cloud Compute Engine
o Microsoft Azure Virtual Machines
3. Freemium Model
Basic services are offered free of charge, while advanced features or larger usage tiers require payment.
Examples:
o Free storage tiers of services such as Dropbox and Google Drive
4. Reseller (White-Label) Model
Businesses buy cloud services from providers and resell them under their own brand.
Common in web hosting and cloud storage solutions.
Examples:
o GoDaddy reselling AWS services
o Managed Service Providers (MSPs) reselling Microsoft Azure
5. Marketplace Model
Cloud providers offer a marketplace where third-party vendors sell their services.
Acts as a mediator between customers and service providers.
Examples:
o AWS Marketplace
o Google Cloud Marketplace
o Microsoft Azure Marketplace
6. Hybrid Model
Combines several of the above models, for example a base subscription plus pay-per-use charges for additional consumption.
Issues in Cloud:
Cloud computing offers many benefits, but it also comes with several challenges. Here are some
of the major issues businesses and users face:
3. Vendor Lock-in
Migrating applications and data away from a provider is difficult because of proprietary services, APIs, and data formats.
Managing multiple cloud services (AWS, Azure, Google Cloud) to reduce lock-in increases complexity.
Difficulties in integration and monitoring across different platforms.
Eucalyptus
Eucalyptus is open-source software for building AWS-compatible private and hybrid clouds. As an Infrastructure as a Service (IaaS) product, Eucalyptus allows users to provision compute and storage resources on demand.
Eucalyptus, in the context of cloud computing, is an acronym for Elastic Utility Computing Architecture for Linking Your Programs To Useful Systems. This architecture allows developers to build and manage cloud computing environments using scalable and flexible resources, enabling organisations to meet varying computational needs with ease.
Elastic: This refers to the ability of the system to scale resources dynamically.
Eucalyptus offers elasticity by adjusting computing power according to demand, ensuring
that users only pay for what they use. This feature is handy for businesses with
fluctuating workloads.
Utility Computing: Utility computing means that computational resources (such as
servers and storage) are provided as a service. Users access these resources without
maintaining physical infrastructure, making it more cost-effective and efficient.
Architecture: Eucalyptus provides a structured framework for building cloud
environments. It includes various components like the Cloud Controller and Node
Controller, which help manage virtual machines and resources.
Linking Your Programs: The architecture seamlessly integrates different applications
and services. It links various programs through a unified platform, enhancing their
communication and coordination in a cloud-based environment.
Eucalyptus offers several key features that make it a popular choice for cloud computing
solutions. Its architecture is designed to be both scalable and flexible, allowing users to manage
computing resources efficiently.
Eucalyptus allows users to dynamically scale resources based on demand, ensuring optimal
performance during peak loads. Its flexibility lets users choose the best configurations for their
workloads, adapting to the changing needs of businesses.
Eucalyptus seamlessly integrates with existing cloud services like Amazon Web
Services (AWS), providing a hybrid environment for enhanced functionality. This compatibility
supports many use cases, enhancing its versatility in diverse business environments.
Components of Eucalyptus
Cloud Controller (CLC): The front-end entry point of the cloud; it manages the overall system and exposes the user-facing API.
Walrus: The storage service, compatible with Amazon S3, used for buckets and objects.
Cluster Controller (CC): Manages a cluster (availability zone) and schedules virtual machine execution on its nodes.
Storage Controller (SC): Provides block storage volumes, similar to Amazon EBS.
Node Controller (NC): Runs on each physical host and controls the life cycle of the virtual machine instances running there.
Nimbus: Nimbus is an open-source toolkit for turning computer clusters into Infrastructure-as-a-Service clouds, aimed primarily at the scientific community. Nimbus supports both the Xen and KVM hypervisors and the virtual machine schedulers Portable Batch System and Oracle Grid Engine. It allows deployment of self-configured virtual clusters via contextualization and is configurable with respect to scheduling, networking leases, and usage accounting.
Key Features of Nimbus
Components of Nimbus
OpenNebula
OpenNebula is an open-source cloud computing platform for managing heterogeneous data center, public cloud, and edge computing infrastructure resources. OpenNebula manages on-premises and remote virtual infrastructure to build private, public, or hybrid implementations of Infrastructure as a Service (IaaS) and multi-tenant Kubernetes deployments. The two primary uses of the OpenNebula platform are data center virtualization and cloud deployments based on the KVM hypervisor, LXD/LXC system containers, and AWS Firecracker microVMs. The platform can also provide the cloud infrastructure needed to operate a cloud on top of existing VMware infrastructure.
In early June 2020, OpenNebula announced the release of a new Enterprise Edition (EE) for corporate users, along with a Community Edition (CE). OpenNebula CE is free and open-source software released under the Apache License version 2. OpenNebula CE comes with free access to patch releases containing critical bug fixes, but with no access to the regular EE maintenance releases. Upgrades to the latest minor/major version are only available to CE users with non-commercial deployments or with significant open-source contributions to the OpenNebula community. OpenNebula EE is distributed under a closed-source license and requires a commercial subscription.
Cloud Simulators
A cloud simulator is a software tool used for modeling, simulating, and analyzing cloud
computing environments. These simulators help researchers, developers, and businesses test
cloud strategies without deploying real infrastructure.
1. CloudSim
3. iCanCloud
4. GreenCloud
6. EMUSIM
EdgeCloudSim: used for IoT and edge computing scenarios, with a focus on latency and edge device simulation.