
Topics Covered

Cloud Architectures

• Dynamic Scalability Architecture

• Elastic Resource Capacity

• Service Load Balancing

• Cloud Bursting

• Elastic Disk Provisioning Architecture

Cloud Architectures (continued)

• Redundant Storage

• Hypervisor Clustering

• Load Balanced Virtual Server

• Non-Disruptive Service Relocation

Cloud Advanced Architectures

• Zero Downtime

• Cloud Balancing

• Resource Reservation

• Dynamic Failure Detection and Recovery

• Distributed Data Sovereignty Architecture

• Virtual Private Cloud Architecture

Cloud Architectures
Introduction to Cloud Architectures
Cloud architecture refers to the components and subcomponents of a cloud computing system,
including the front-end platform, back-end platform, cloud-based delivery, and network. It ensures
scalability, elasticity, and optimal resource management.

1. Dynamic Scalability Architecture

Definition:

Dynamic scalability enables cloud systems to increase or decrease resources dynamically based
on real-time demands, ensuring performance and resource optimization.

Key Features:

• On-Demand Scaling: Resources are adjusted automatically.

• Horizontal and Vertical Scaling:

o Horizontal Scaling: Adding/removing servers (e.g., multiple VMs).

o Vertical Scaling: Increasing server capacity (e.g., CPU/RAM).

• Load Monitoring: Real-time monitoring triggers scaling operations.


How It Works:

1. Resource Monitor: Tracks CPU, memory usage, or response time.

2. Auto-Scaling Rules: Predefined thresholds activate scaling.

3. Resource Allocation: Virtual machines or storage resources are allocated dynamically.
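The monitor/threshold/allocate loop above can be sketched in a few lines of Python. This is an illustrative simplification: the thresholds and instance limits are hypothetical, not tied to any provider's API.

```python
def scaling_decision(cpu_percent, current_instances,
                     scale_out_at=75, scale_in_at=25,
                     min_instances=1, max_instances=10):
    """Threshold-based auto-scaling rule: return the new instance count."""
    if cpu_percent > scale_out_at and current_instances < max_instances:
        return current_instances + 1   # scale out: add a server (horizontal scaling)
    if cpu_percent < scale_in_at and current_instances > min_instances:
        return current_instances - 1   # scale in: remove a server
    return current_instances           # load is within bounds; do nothing

print(scaling_decision(90, 3))   # high load  -> 4
print(scaling_decision(10, 3))   # low load   -> 2
print(scaling_decision(50, 3))   # normal     -> 3
print(scaling_decision(90, 10))  # at the cap -> 10
```

Real auto-scaling policies add cooldown periods and evaluate the metric over a window (e.g. CPU averaged over 5 minutes) so a brief spike does not trigger churn.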

Real-World Example:

• Amazon EC2 Auto Scaling: Automatically scales instances to match traffic demands.

Benefits:

• Cost-efficiency by avoiding over-provisioning.

• Improved user experience with consistent performance.

2. Elastic Resource Capacity

Definition:

Elastic resource capacity refers to the ability to allocate or deallocate cloud resources dynamically
to meet changing workloads.

How It Works:

• Elasticity ensures that resources scale in real-time as the demand fluctuates.

• Cloud providers provision resources dynamically, allowing systems to "stretch" during peak
times and "shrink" during low activity.

Components:

• Resource Monitoring Tools: Track usage patterns.

• Resource Management Policies: Based on CPU, storage, or bandwidth metrics.

• Dynamic Resource Provisioning: Using hypervisors or containerization.

Example:

• Netflix: Adjusts its cloud resources based on global streaming demands.

Benefits:

• Enhances agility and responsiveness.

• Reduces costs through efficient utilization.

3. Service Load Balancing


Definition:

Load balancing refers to distributing workloads evenly across multiple servers or resources to
optimize performance, ensure availability, and prevent system overload.

How It Works:

• Load Balancer acts as an intermediary between users and backend servers.

• Incoming requests are analyzed and routed to servers based on load, health, or response
time.

Load Balancing Algorithms:

1. Round Robin: Distributes requests sequentially to servers.

2. Least Connections: Sends requests to the least busy server.

3. Dynamic Load Balancing: Adjusts based on real-time server health and resource load.
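The first two algorithms are simple enough to show directly. A minimal sketch, with a hypothetical three-server pool:

```python
import itertools

servers = ["web-1", "web-2", "web-3"]   # hypothetical backend pool

# 1. Round Robin: cycle through the servers in fixed order.
rr = itertools.cycle(servers)
print([next(rr) for _ in range(5)])     # ['web-1', 'web-2', 'web-3', 'web-1', 'web-2']

# 2. Least Connections: route to the server with the fewest active connections.
active = {"web-1": 12, "web-2": 3, "web-3": 7}
print(min(active, key=active.get))      # 'web-2'
```

Dynamic load balancing generalizes the second idea: instead of a connection count, the balancer minimizes a live score built from health checks, CPU load, and response time.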

Example Services:

• AWS Elastic Load Balancing (ELB): Routes traffic to healthy instances.

• Google Cloud Load Balancer.

Benefits:

• Prevents resource bottlenecks and failures.

• Ensures high availability and reliability.

4. Cloud Bursting

Definition:

Cloud bursting is a hybrid cloud deployment where an application primarily runs in a private cloud
but bursts into a public cloud when demand spikes.

How It Works:

1. Applications are hosted in a private cloud.

2. When the private cloud hits its resource limits, excess load is shifted to the public cloud.

3. Once the spike ends, resources are de-provisioned from the public cloud.
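The placement decision in step 2 can be sketched as a capacity split. The "units" here are an illustrative abstraction for CPU/RAM; real systems burst whole VMs or containers:

```python
def place_workload(requested_units, private_free_units):
    """Cloud-bursting placement: fill the private cloud first,
    send only the overflow to the public cloud."""
    on_private = min(requested_units, private_free_units)
    burst = requested_units - on_private
    return on_private, burst

print(place_workload(80, 100))   # fits privately        -> (80, 0)
print(place_workload(150, 100))  # spike: 50 units burst -> (100, 50)
```

When the spike ends (step 3), the public-cloud share is de-provisioned first, since it is billed on demand while the private capacity is already paid for.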

Key Components:

• Monitoring Tools: Detect workload thresholds.

• Cloud Connector: Facilitates seamless transition between clouds.

• Load Balancer: Redirects workloads to public cloud resources.


Example:

• A retailer uses a private cloud during normal traffic but leverages AWS or Azure during
seasonal sales to handle surges.

Benefits:

• Cost savings: No need to over-provision private resources.

• Flexibility and scalability during demand peaks.

5. Elastic Disk Provisioning Architecture

Definition:

Elastic disk provisioning allows cloud systems to dynamically allocate or expand storage volumes
without downtime as the data grows.

Key Concepts:

• Thin Provisioning: Allocates only the storage needed initially and expands dynamically as
the system demands more storage.

• Thick Provisioning: Pre-allocates a fixed amount of storage regardless of current use.
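The difference between thin and thick provisioning can be sketched with a toy volume that allocates physical extents only on first write (the extent granularity and sizes are illustrative):

```python
class ThinVolume:
    """Thin-provisioned volume: a logical size is promised up front,
    but physical extents are allocated only when first written."""
    def __init__(self, logical_gb):
        self.logical_gb = logical_gb
        self.allocated = set()            # 1 GB extents actually backed by storage

    def write(self, extent_index):
        if extent_index >= self.logical_gb:
            raise ValueError("write beyond logical size")
        self.allocated.add(extent_index)  # allocate on first write only

    @property
    def physical_gb(self):
        return len(self.allocated)

vol = ThinVolume(logical_gb=1000)   # promise 1 TB...
for extent in range(20):            # ...but only 20 GB is ever written
    vol.write(extent)
print(vol.logical_gb, vol.physical_gb)  # 1000 20
```

A thick-provisioned volume would instead back all 1000 extents at creation time, trading wasted capacity for predictable performance.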

How It Works:

1. Monitoring Tools: Track storage usage growth.

2. Elastic Volume Service: Automatically expands storage size (e.g., Amazon EBS).

3. Data Migration: For performance optimization, storage may be migrated across tiers.

Example:

• Amazon Elastic Block Store (EBS): Provides scalable storage volumes that can expand
dynamically.

• Azure Managed Disks.

Benefits:

• Ensures seamless storage scalability.

• Eliminates downtime caused by manual resizing.

• Reduces cost by allocating only necessary storage.

Summary Table
Topic Purpose Example

Dynamic Scalability Automatically adjusts resources Amazon EC2 Auto Scaling

Elastic Resource Capacity Allocates/deallocates resources as needed Netflix Resource Scaling

Service Load Balancing Distributes workloads across servers AWS Elastic Load Balancer

Cloud Bursting Extends private cloud to public cloud Retailer Cloud Bursting

Elastic Disk Provisioning Dynamically scales storage volumes Amazon EBS

Lecture Notes: Cloud Architectures

Introduction to Cloud Architectures

Cloud architectures are designed to provide high availability, scalability, and fault tolerance.
Modern cloud systems ensure seamless performance through features like redundant storage,
hypervisor clustering, load balancing, and non-disruptive service relocation.

1. Redundant Storage

Definition:

Redundant storage ensures data availability and reliability by storing copies of data across multiple
storage devices or locations. If one system fails, a backup copy of the data can be accessed
seamlessly.

Key Concepts:

1. Data Replication: Creating identical copies of data across multiple servers or storage
systems.

o Synchronous replication: Real-time replication.

o Asynchronous replication: Slight delay in data replication.

2. RAID (Redundant Array of Independent Disks): A method to distribute and duplicate data
across multiple drives.

o Common levels: RAID 1 (mirroring), RAID 5 (striping with parity), RAID 10 (mirroring +
striping).

3. Multi-region Redundancy: Storing data in geographically dispersed regions for disaster recovery.

How It Works:

• Data is written simultaneously to primary and redundant storage.

• If the primary storage fails, data is retrieved from the backup storage without downtime.

Real-World Examples:

• AWS S3 Redundancy: Data is replicated across Availability Zones.

• Google Cloud Storage: Automatically maintains redundancy across zones.
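The synchronous write path described above can be sketched as a store that acknowledges a write only after every replica holds the value, so any surviving replica can serve reads (a toy model; real systems use quorums and durable logs):

```python
class ReplicatedStore:
    """Synchronous replication: a write goes to all replicas before it is
    acknowledged, so any single replica can answer a read."""
    def __init__(self, n_replicas=3):
        self.replicas = [dict() for _ in range(n_replicas)]

    def put(self, key, value):
        for replica in self.replicas:   # write to primary and every copy
            replica[key] = value        # "ack" only after all copies are written

    def get(self, key, failed=()):
        for i, replica in enumerate(self.replicas):
            if i not in failed:         # skip replicas that are down
                return replica[key]
        raise RuntimeError("all replicas failed")

store = ReplicatedStore()
store.put("order-42", "shipped")
print(store.get("order-42", failed={0}))  # primary down; a replica answers: shipped
```

Asynchronous replication would return from `put` after writing the first copy and propagate the rest in the background, which is faster but can lose the latest writes on failure.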

Benefits:

• Prevents data loss in case of hardware failure.

• Ensures high availability and fault tolerance.

• Supports disaster recovery and business continuity.

2. Hypervisor Clustering

Definition:

Hypervisor clustering involves grouping multiple hypervisors (virtualization managers) to create a fault-tolerant and high-availability environment for virtual machines (VMs).

Key Concepts:

• Hypervisor: Software that creates and manages virtual machines (e.g., VMware ESXi,
Microsoft Hyper-V, KVM).

• Clustering: Combining hypervisors into a group that can share workloads and resources.

• Failover: If one hypervisor fails, its virtual machines automatically migrate to a healthy hypervisor.

How It Works:

1. Cluster Configuration: Multiple hypervisors are connected into a cluster.

2. Resource Sharing: Hypervisors share CPU, memory, and storage resources.

3. VM Migration: Using technologies like VMware vMotion or Microsoft Live Migration, VMs
are moved seamlessly if a hypervisor fails.
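The failover step can be sketched as reassigning orphaned VMs to the least-loaded surviving host. This is a toy model of vSphere-HA-style behavior; real clusters also check admission-control capacity before restarting a VM:

```python
def failover(cluster, failed_host):
    """Move every VM from a failed hypervisor onto the surviving host
    that currently runs the fewest VMs."""
    orphaned = cluster.pop(failed_host)          # VMs stranded by the failure
    for vm in orphaned:
        target = min(cluster, key=lambda h: len(cluster[h]))  # least-loaded host
        cluster[target].append(vm)
    return cluster

cluster = {"esxi-1": ["vm-a", "vm-b"], "esxi-2": ["vm-c"], "esxi-3": []}
print(failover(cluster, "esxi-1"))
# vm-a and vm-b restart on the least-loaded hosts; no VM is lost
```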

Real-World Example:

• VMware vSphere HA (High Availability): Automatically restarts VMs on another hypervisor if one fails.

• Microsoft Hyper-V Clusters: Provides failover clustering for virtual machines.

Benefits:

• Ensures high availability and uptime.

• Supports automatic recovery during hypervisor failures.


• Optimizes resource utilization across hypervisors.

3. Load Balanced Virtual Server

Definition:

Load balanced virtual servers use load balancers to distribute workloads evenly across multiple
virtual servers to optimize performance, reliability, and resource usage.

How It Works:

1. Load Balancer Deployment: A load balancer acts as an intermediary between clients and
virtual servers.

2. Traffic Distribution: Requests are routed to virtual servers based on:

o Server health checks

o Resource load (CPU, memory usage)

o Load balancing algorithms (e.g., Round Robin, Least Connections).

3. Auto-Scaling Integration: Load balancers work with auto-scaling to launch or terminate virtual servers dynamically.

Load Balancing Algorithms:

• Round Robin: Sequentially distributes requests.

• Least Connections: Routes requests to the server with the fewest active connections.

• IP Hash: Maps client IPs to specific virtual servers.
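IP Hash deserves a short sketch because its key property, session affinity, follows directly from the math: the same client IP always hashes to the same server, so session state stays local. The pool names are hypothetical:

```python
import hashlib

servers = ["vs-1", "vs-2", "vs-3"]   # hypothetical virtual server pool

def ip_hash(client_ip, pool):
    """Map a client IP to a fixed server: same IP, same server, every time."""
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    return pool[int(digest, 16) % len(pool)]

# Sticky: repeated requests from one client land on the same server.
assert ip_hash("203.0.113.7", servers) == ip_hash("203.0.113.7", servers)
print(ip_hash("203.0.113.7", servers), ip_hash("198.51.100.2", servers))
```

The trade-off: if the pool size changes, most IPs remap, which is why production balancers often use consistent hashing instead of a plain modulus.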

Real-World Examples:

• AWS Elastic Load Balancer (ELB): Distributes traffic to EC2 instances.

• Azure Load Balancer: Routes incoming traffic across virtual machines.

• HAProxy: Open-source software for virtual server load balancing.

Benefits:

• Prevents overloading of servers.

• Ensures high availability and reliability.

• Improves performance and reduces response times.

4. Non-Disruptive Service Relocation

Definition:

Non-disruptive service relocation is the ability to migrate applications, services, or virtual machines from one server or environment to another without downtime or user interruption.

Key Concepts:

• Live Migration: Moving a running virtual machine from one host to another while
maintaining its state.

• State Preservation: Active sessions, memory, and CPU states are transferred seamlessly.

• Storage Migration: Relocating data volumes dynamically across storage systems.

How It Works:

1. Pre-Migration: Service states (memory, CPU usage) are captured.

2. Live Migration: Data and resources are transferred to the target environment without
halting the service.

3. Post-Migration: Services continue running seamlessly on the new host.
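The reason step 2 can run "without halting the service" is the pre-copy technique used by vMotion-style live migration: memory pages are copied in rounds while the VM keeps running, and only pages dirtied during a round need re-copying. A simplified numeric model (all page counts are illustrative):

```python
def live_migrate(total_pages, copy_rate, dirty_rate, max_rounds=30):
    """Pre-copy rounds: copy dirty pages while the VM runs; the VM re-dirties
    dirty_rate pages per round. Stop when only the steady-state remainder is
    left -- that remainder moves during a brief final stop-and-copy pause."""
    dirty = total_pages
    rounds = 0
    while dirty > dirty_rate and rounds < max_rounds:
        dirty = max(dirty - copy_rate, 0) + dirty_rate  # copied, then re-dirtied
        rounds += 1
    return rounds, dirty

rounds, remaining = live_migrate(total_pages=1000, copy_rate=300, dirty_rate=50)
print(rounds, remaining)  # 4 rounds, then a tiny pause to move the last 50 pages
```

The pause is short because the final copy moves only the small steady-state dirty set, not the full memory image; if the VM dirties pages faster than they can be copied, the migration cannot converge and must fall back or abort.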

Technologies Involved:

• VMware vMotion: Allows live migration of virtual machines.

• Microsoft Hyper-V Live Migration: Transfers VMs without downtime.

• KVM Live Migration: Enables non-disruptive relocation in Linux environments.

Example Use Cases:

• System Maintenance: Relocating VMs for hardware upgrades or repairs.

• Load Balancing: Moving services to less busy servers during peak traffic.

• Disaster Recovery: Migrating services to a safe location during failures.

Benefits:

• Eliminates downtime and service interruptions.

• Enhances system performance and availability.

• Supports seamless maintenance and upgrades.

Summary Table

Topic Purpose Example

Redundant Storage Ensures data reliability and availability AWS S3 Multi-AZ Replication

Hypervisor Clustering Fault tolerance and failover for VMs VMware vSphere HA

Load Balanced Virtual Server Distributes workloads for performance optimization AWS Elastic Load Balancer

Non-Disruptive Service Relocation Migrates services without downtime VMware vMotion

Lecture Notes: Cloud Advanced Architectures

Introduction to Cloud Advanced Architectures

Advanced cloud architectures ensure seamless operations, high availability, and resource
optimization even under challenging circumstances. They focus on providing zero downtime,
effective cloud balancing, resource reservation, and robust dynamic failure detection and
recovery mechanisms.

1. Zero Downtime

Definition:

Zero downtime refers to a system or service's ability to remain operational without any interruptions
during maintenance, updates, or failure scenarios.

Key Concepts:

1. High Availability (HA): Ensures continuous uptime through redundant systems and failover
mechanisms.

2. Live Migration: Moving active services or workloads from one host to another without
disruption.

3. Blue-Green Deployment: Deploying a new version alongside the existing one to ensure a
seamless transition.

4. Rolling Updates: Updating parts of a system gradually while keeping other parts active.
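A rolling update is easy to see in miniature: upgrade a batch at a time so the rest of the fleet keeps serving traffic. A sketch (the drain/health-check steps are collapsed into a comment):

```python
def rolling_update(fleet, new_version, batch_size=1):
    """Replace batch_size servers at a time; the rest of the fleet keeps
    serving traffic, so the service as a whole never goes down."""
    fleet = list(fleet)
    for start in range(0, len(fleet), batch_size):
        for i in range(start, min(start + batch_size, len(fleet))):
            fleet[i] = new_version   # drain traffic, upgrade, health-check, re-admit
        yield list(fleet)            # fleet state after each batch

for step in rolling_update(["v1"] * 4, "v2", batch_size=2):
    print(step)
# ['v2', 'v2', 'v1', 'v1']
# ['v2', 'v2', 'v2', 'v2']
```

Blue-green deployment is the limiting case in the other direction: build a full second fleet on the new version, then switch all traffic at once and keep the old fleet as an instant rollback target.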

Technologies Involved:

• Load Balancers: Divert traffic to healthy nodes during updates.

• Containerization (Kubernetes): Supports rolling updates for containers.

• Virtual Machine Migration: Live migration using tools like VMware vMotion.

How It Works:

• Redundant components and live migration ensure no single point of failure.


• Maintenance, updates, or failure recovery happens in the background without affecting
end-users.

Real-World Example:

• Netflix: Uses cloud infrastructure to achieve near-zero downtime globally.

• AWS Auto-Scaling with ELB: Ensures servers are replaced or upgraded seamlessly.

Benefits:

• Enhanced user experience with uninterrupted service.

• Smooth deployments and upgrades.

• Increased reliability and fault tolerance.

2. Cloud Balancing

Definition:

Cloud balancing involves distributing workloads across multiple cloud resources, servers, or
regions to optimize performance, availability, and cost.

Key Concepts:

1. Workload Distribution: Distributing tasks evenly among servers to prevent overloading.

2. Multi-Cloud Balancing: Distributing workloads across multiple cloud providers (e.g., AWS,
Azure, GCP).

3. Geographic Load Balancing: Redirecting traffic based on the user's location to reduce
latency.

4. Resource Optimization: Ensures cloud resources are utilized efficiently.

How It Works:

• A cloud load balancer monitors resource health and distributes requests based on
algorithms:

o Round Robin

o Least Connections

o Latency-Based Routing

• Traffic is redirected to servers or regions with higher availability and lower load.
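Latency-based routing combines two of the ideas above, a health check and a minimum-latency pick. A sketch with illustrative region names and measurements:

```python
# Measured round-trip latencies (ms) from a client to each region (illustrative).
latencies = {"us-east-1": 85, "eu-west-1": 20, "ap-south-1": 190}
healthy = {"us-east-1", "eu-west-1", "ap-south-1"}

def route(latencies, healthy):
    """Send the request to the healthy region with the lowest latency."""
    candidates = {region: ms for region, ms in latencies.items() if region in healthy}
    return min(candidates, key=candidates.get)

print(route(latencies, healthy))                  # eu-west-1 (lowest latency)
print(route(latencies, healthy - {"eu-west-1"}))  # us-east-1 (next-best failover)
```

This is the same logic a global DNS balancer applies per resolver: when the closest region fails its health checks, traffic automatically falls through to the next-best one.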

Technologies Involved:

• Global DNS Load Balancing (Route 53, Cloudflare): Distributes traffic globally.

• Kubernetes Load Balancer: Manages containerized workloads.

• AWS Elastic Load Balancer (ELB): Balances traffic across EC2 instances.

Real-World Example:

• Google Cloud Load Balancer: Balances workloads across Google’s global infrastructure.

• Azure Traffic Manager: Routes traffic efficiently across cloud regions.

Benefits:

• Prevents server overloading.

• Reduces latency and improves response time.

• Ensures high availability by redirecting traffic during failures.

3. Resource Reservation

Definition:

Resource reservation involves allocating specific cloud resources (CPU, memory, storage) in
advance to ensure they are available when needed.

Key Concepts:

1. Guaranteed Resources: Resources are reserved for critical workloads to prevent contention.

2. Capacity Planning: Ensuring future resource availability based on usage predictions.

3. Reserved Instances (RIs): Cloud providers allow users to reserve resources at discounted
rates for long-term needs.

How It Works:

• Users can pre-allocate resources for specific workloads to avoid competition with other
users.

• Resource reservations are supported in virtual machines, storage, and network bandwidth.
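The guarantee that reservation provides can be modeled as a pool with a carved-out share that best-effort workloads cannot touch, mirroring how Kubernetes quotas or reserved instances behave (the CPU numbers are illustrative):

```python
class ResourcePool:
    """Capacity set aside for critical workloads is invisible to
    best-effort ones, so the reservation always holds."""
    def __init__(self, total_cpus, reserved_cpus):
        self.free = total_cpus - reserved_cpus   # shared, first-come pool
        self.reserved_free = reserved_cpus       # held for critical workloads

    def allocate(self, cpus, critical=False):
        if critical and self.reserved_free >= cpus:
            self.reserved_free -= cpus
            return True
        if self.free >= cpus:
            self.free -= cpus
            return True
        return False                             # contention: request denied

pool = ResourcePool(total_cpus=16, reserved_cpus=4)
print(pool.allocate(12))                # best-effort job drains the shared pool
print(pool.allocate(2))                 # shared pool exhausted -> denied
print(pool.allocate(2, critical=True))  # the reservation still guarantees this
```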

Technologies Involved:

• Amazon EC2 Reserved Instances: Reserve virtual machines with discounts for 1 or 3
years.

• Kubernetes Resource Quotas: Reserves CPU and memory for pods.

• Azure Reserved VM Instances: Pre-purchased VMs for consistent workloads.

Real-World Example:

• AWS Reserved Instances: Businesses reserve computing power for long-term projects.

• Google Kubernetes Engine (GKE): Allocates resources based on workload requirements.

Benefits:

• Guaranteed resource availability for mission-critical tasks.

• Improved cost management through reservations.

• Predictable performance and resource usage.

4. Dynamic Failure Detection and Recovery

Definition:

Dynamic failure detection and recovery refers to the cloud's ability to identify failures in real time
and recover automatically to ensure minimal impact on services.

Key Concepts:

1. Failure Detection: Using health checks and monitoring tools to detect failures in servers,
networks, or applications.

2. Automatic Failover: Switching workloads to backup systems or regions upon detecting a failure.

3. Self-Healing Systems: Automatically repair or replace failed components without human intervention.

How It Works:

1. Monitoring and Health Checks: Systems like CloudWatch or Prometheus constantly monitor resource health.

2. Failure Detection: When a failure is detected, alarms are triggered.

3. Automated Recovery: Using orchestration tools like Kubernetes, workloads are re-routed
or restarted on healthy resources.
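The three steps above are exactly a reconciliation loop in the spirit of a Kubernetes controller: compare desired state with observed state, evict what failed its health check, and start replacements. A sketch (pod names and the probe result field are hypothetical):

```python
def reconcile(desired_replicas, pods):
    """Self-healing loop: keep only healthy pods, then schedule
    replacements until the desired replica count is restored."""
    alive = [p for p in pods if p["healthy"]]       # liveness-probe result
    missing = desired_replicas - len(alive)
    for i in range(missing):
        alive.append({"name": f"pod-new-{i}", "healthy": True})  # reschedule
    return alive

pods = [{"name": "pod-a", "healthy": True},
        {"name": "pod-b", "healthy": False},        # failed its health check
        {"name": "pod-c", "healthy": True}]
print([p["name"] for p in reconcile(3, pods)])      # ['pod-a', 'pod-c', 'pod-new-0']
```

Running this loop continuously, rather than once per alarm, is what makes the system self-healing: every pass converges the observed state back toward the desired one.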

Technologies Involved:

• AWS CloudWatch: Detects failures and triggers automated actions.

• Kubernetes Health Probes: Liveness and readiness probes detect failed containers.

• Auto-Healing VMs (Azure): Automatically restarts unhealthy virtual machines.

Real-World Example:

• Netflix Chaos Monkey: Simulates random failures to test system resilience.

• AWS Auto-Scaling: Detects unhealthy EC2 instances and replaces them.


• Kubernetes Self-Healing: Reschedules failed pods to healthy nodes.

Benefits:

• Reduces manual intervention and downtime.

• Ensures business continuity with rapid recovery.

• Enhances reliability and fault tolerance of cloud systems.

Summary Table

Topic Purpose Example

Zero Downtime Ensures uninterrupted service during updates AWS Auto-Scaling, Kubernetes Rolling Updates

Cloud Balancing Distributes workloads for performance optimization Google Cloud Load Balancer, Azure Traffic Manager

Resource Reservation Reserves resources to ensure availability AWS Reserved Instances, Kubernetes Quotas

Dynamic Failure Detection & Recovery Detects failures and recovers automatically AWS CloudWatch, Kubernetes Self-Healing
