AWS
UNIT IV
Title: AWS CLOUD PLATFORM – IAAS (9)
Objective 4: To explore the roster of AWS services and illustrate how to build applications on AWS
Amazon Web Services: AWS Infrastructure - AWS API - AWS Management Console - Setting up AWS Storage
- Stretching out with Elastic Compute Cloud - Elastic Container Service for Kubernetes - AWS Developer
Tools: AWS CodeCommit, AWS CodeBuild, AWS CodeDeploy, AWS CodePipeline, AWS CodeStar - AWS
Management Tools: CloudWatch, AWS Auto Scaling, AWS Control Tower, CloudFormation, CloudTrail,
AWS License Manager.
AWS operates its own network infrastructure, with data centers established in regions around the world.
This global infrastructure acts as the backbone for the operations and services AWS provides. It lets users create
secure environments using Amazon VPCs (Virtual Private Clouds). Essential services such as Amazon EC2 (Elastic
Compute Cloud) and Amazon S3 (Simple Storage Service) provide compute and storage with elastic scaling. AWS
supports dynamic scaling of applications through services such as Auto Scaling and Elastic Load Balancing (AWS
ELB), and it offers a user-friendly AWS Management Console for seamless configuration and management of AWS
services. Its architecture ensures high availability and fault tolerance, making AWS a versatile and powerful cloud
computing platform.
On this platform, understanding key concepts such as Regions, Availability Zones, and the global network
infrastructure is crucial. These fundamentals keep applications reliable and scalable worldwide and inform the
strategic deployment of resources for optimal performance and resilience. The following are some of the main
fundamentals of AWS:
Regions: AWS divides its services among Regions. Each Region is a separate geographic area in which AWS
establishes data centers. The scale of these data centers depends on user demand and traffic, so that users can
be served with low latency.
Availability Zones (AZ): Availability Zones are multiple, isolated locations within each Region. They exist to
protect data centers from natural calamities and other disasters: the data centers are partitioned into isolated
locations to enhance fault tolerance and disaster recovery.
Local Zones: Local Zones provide the ability to place resources, such as compute and storage, in multiple
locations closer to your end users.
With the rapid evolution of cloud computing, AWS offers a wide variety of services for different fields and
needs. The following are among the most widely used AWS services:
Amazon EC2 (Elastic Compute Cloud): It provides scalable computing power in the cloud, allowing users to run
applications and manage workloads remotely.
Amazon S3 (Simple Storage Service): It offers scalable object storage as a service, with high durability, for storing
and retrieving any amount of data.
AWS Lambda: It is a serverless, Function-as-a-Service offering: code runs in response to events while AWS
automatically manages the underlying servers, letting developers focus entirely on the logic of their code.
Amazon RDS (Relational Database Service): This service simplifies database management by providing highly
available relational databases in the cloud.
Amazon VPC (Virtual Private Cloud): It enables users to create isolated networks, with public and private subnets,
within the AWS cloud, providing safe and adaptable configuration of their resources.
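To make the Lambda description above concrete, the sketch below uses the standard Python Lambda handler signature, `lambda_handler(event, context)`. The event shape is a simplified, illustrative S3 put notification, not a full real payload.

```python
# Sketch of an AWS Lambda handler in Python. The handler signature is the
# standard Lambda convention; AWS invokes it once per event and manages the
# servers automatically. The event shape here is a simplified stand-in.

def lambda_handler(event, context):
    # Collect the object keys referenced by the (illustrative) S3 records.
    keys = [record["s3"]["object"]["key"] for record in event.get("Records", [])]
    return {"statusCode": 200, "processed": len(keys), "keys": keys}

# Local invocation with a sample event (context is unused here, so None is fine):
sample_event = {"Records": [{"s3": {"object": {"key": "uploads/report.csv"}}}]}
print(lambda_handler(sample_event, None))
```

Locally this is just a function call; in AWS, the same function would be triggered by the event source itself.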
1.1 AWS INFRASTRUCTURE
The AWS architecture consists of Web, App, and Database tiers. An AWS Region bounds all the components
in the architecture. Inside the Region are the Availability Zones AZ1 and AZ2, which contain a VPC (Virtual Private
Cloud). Within the VPC, three subnets host the Web, App, and Database tiers.
Database Tier: It contains database services such as Amazon RDS (Relational Database Service).
Load Balancing
Load balancing means distributing hardware or software load over web servers, which improves the efficiency of the
servers as well as the application. A hardware load balancer is a common network appliance in traditional web
application architectures. AWS provides the Elastic Load Balancing service, which distributes traffic to EC2
instances across multiple Availability Zones and supports dynamic addition and removal of Amazon EC2 hosts from
the load-balancing rotation. Elastic Load Balancing can dynamically grow and shrink the load-balancing capacity to
adjust to traffic demands, and it also supports sticky sessions to address more advanced routing needs. Spreading
traffic across web servers in this way improves performance.
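The core idea of a load balancer can be sketched in a few lines. This toy round-robin balancer illustrates only the rotation and the dynamic addition/removal of hosts described above; real Elastic Load Balancing also performs health checks, sticky sessions, and capacity scaling. Host names are invented for illustration.

```python
# Toy round-robin load balancer: spread requests across a pool of hosts,
# with hosts added to or removed from the rotation dynamically.
from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, hosts):
        self.hosts = list(hosts)

    def add_host(self, host):
        # Dynamic addition of a host to the rotation.
        self.hosts.append(host)

    def remove_host(self, host):
        # Dynamic removal of a host from the rotation.
        self.hosts.remove(host)

    def route(self, n_requests):
        # Assign each incoming request to the next host in rotation.
        pool = cycle(self.hosts)
        return [next(pool) for _ in range(n_requests)]

lb = RoundRobinBalancer(["ec2-a", "ec2-b"])
print(lb.route(4))   # ['ec2-a', 'ec2-b', 'ec2-a', 'ec2-b']
```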
Amazon CloudFront
It is responsible for content delivery, i.e., it is used to deliver websites, which may contain dynamic, static, and
streaming content, using a global network of edge locations. Requests for content are automatically routed to the
edge location nearest the user, which improves performance. Amazon CloudFront is optimized to work with other
Amazon Web Services, like Amazon S3 and Amazon EC2, and it also works well with any non-AWS origin server,
storing the original files in a similar manner. There are no contracts or monthly commitments; we pay only for as
much or as little content as we deliver through the service.
Security Management
Amazon’s Elastic Compute Cloud (EC2) provides a feature called security groups, which act like an inbound
network firewall: we specify the protocols, ports, and source IP ranges that are allowed to reach our EC2 instances.
Each EC2 instance can be assigned one or more security groups, each of which admits the appropriate traffic to the
instance. Security groups can be configured for specific subnets or IP addresses, which limits access to EC2
instances.
Amazon ElastiCache
Amazon ElastiCache is a web service that manages in-memory caches in the cloud. Caching plays a very important
role in memory management: it reduces the load on services and improves the performance and scalability of the
database tier by caching frequently used information.
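The caching pattern described above (often called cache-aside) can be sketched as follows: check the cache first and touch the slow database tier only on a miss. The "database" here is a plain dict standing in for a real data store.

```python
# Cache-aside sketch: serve repeated reads from an in-memory cache so that
# the database tier is only hit on the first access to each key.
def make_cached_reader(db):
    cache = {}
    stats = {"hits": 0, "misses": 0}

    def read(key):
        if key in cache:
            stats["hits"] += 1
        else:
            stats["misses"] += 1
            cache[key] = db[key]   # populate the cache on a miss
        return cache[key]

    return read, stats

read, stats = make_cached_reader({"user:1": "alice"})
read("user:1")
read("user:1")
print(stats)   # {'hits': 1, 'misses': 1}
```

The second read never touches the database, which is exactly the load reduction the paragraph above describes.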
Amazon RDS
Amazon RDS (Relational Database Service) provides access similar to that of a MySQL, Oracle, or Microsoft SQL
Server database engine; the same queries, applications, and tools can be used with Amazon RDS. It automatically
patches the database software and manages backups per the user’s instructions, and it also supports point-in-time
recovery. There are no up-front investments required, and we pay only for the resources we use.
Hosting an RDBMS on EC2 Instances
Alternatively, users can install the RDBMS (Relational Database Management System) of their choice, such as
MySQL, Oracle, SQL Server, or DB2, on an EC2 instance and manage it themselves. Amazon EC2 uses Amazon
EBS (Elastic Block Store), which is similar to network-attached storage. All data and logs for databases running on
EC2 instances should be placed on Amazon EBS volumes, which remain available even if the database host fails.
Amazon EBS volumes automatically provide redundancy within their Availability Zone, giving better availability
than simple disks. If one volume is not sufficient for the database's needs, further volumes can be added to increase
performance. With Amazon RDS, by contrast, the service provider manages the storage and we focus only on
managing the data.
Storage & Backups
AWS cloud provides various options for storing, accessing, and backing up web application data and assets. The
Amazon S3 (Simple Storage Service) provides a simple web-services interface that can be used to store and retrieve
any amount of data, at any time, from anywhere on the web.
Amazon S3 stores data as objects within resources called buckets. The user can store as many objects as per
requirement within the bucket, and can read, write and delete objects from the bucket. Amazon EBS is effective for
data that needs to be accessed as block storage and requires persistence beyond the life of the running instance, such
as database partitions and application logs. Amazon EBS volumes can be up to 1 TB in size, and these volumes
can be striped for larger volumes and increased performance. Provisioned IOPS volumes are designed to meet the
needs of database workloads that are sensitive to storage performance and consistency. Amazon EBS currently
supports up to 1,000 IOPS per volume. We can stripe multiple volumes together to deliver thousands of IOPS per
instance to an application.
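The S3 object model described above (objects stored by key inside named buckets, with read, write, and delete operations) can be modeled in a few lines. This is an in-memory stand-in for illustration, not an S3 client; bucket and key names are invented.

```python
# Minimal model of S3's object semantics: named buckets holding objects that
# can be written, read, listed, and deleted by key.
class Bucket:
    def __init__(self, name):
        self.name = name
        self._objects = {}

    def put_object(self, key, data):
        self._objects[key] = data

    def get_object(self, key):
        return self._objects[key]

    def delete_object(self, key):
        del self._objects[key]

    def list_objects(self):
        return sorted(self._objects)

b = Bucket("my-assets")
b.put_object("logs/app.log", b"line 1\n")
print(b.list_objects())        # ['logs/app.log']
b.delete_object("logs/app.log")
print(b.list_objects())        # []
```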
Auto Scaling
The difference between AWS cloud architecture and the traditional hosting model is that AWS can dynamically scale
the web application fleet on demand to handle changes in traffic. In the traditional hosting model, traffic forecasting
models are generally used to provision hosts ahead of projected traffic. In AWS, instances can be provisioned on the
fly according to a set of triggers for scaling the fleet out and back in. Amazon Auto Scaling can create capacity groups
of servers that can grow or shrink on demand.
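The arithmetic behind this on-demand growing and shrinking can be sketched with a target-tracking rule: resize the fleet in proportion to how far a metric (e.g., average CPU) is from its target. Real AWS Auto Scaling adds cooldowns, instance warm-up, and other safeguards; only the min/max clamp is modeled here, and the numbers are illustrative.

```python
# Target-tracking sketch: scale capacity proportionally so the observed
# metric moves back toward the target, clamped to group min/max size.
import math

def desired_capacity(current_capacity, current_metric, target_metric,
                     min_size=1, max_size=10):
    raw = current_capacity * current_metric / target_metric
    return max(min_size, min(max_size, math.ceil(raw)))

# 4 instances at 80% average CPU with a 50% target -> scale out to 7.
print(desired_capacity(4, 80, 50))   # 7
# 4 instances at 20% average CPU -> scale in to 2.
print(desired_capacity(4, 20, 50))   # 2
```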
RESTful APIs in AWS refer to APIs that follow the principles of REST (Representational State Transfer) and
are implemented using AWS services. RESTful APIs are designed to be stateless and use standard HTTP methods
(GET, POST, PUT, DELETE, etc.) to interact with resources. In AWS, several services facilitate the creation,
deployment, and management of RESTful APIs.
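The REST principle described above (standard HTTP methods mapped onto stateless operations over resources) can be sketched as a dispatch function. The resource store is an in-memory dict and the status codes follow HTTP conventions; this is an illustration of the method-to-operation mapping, not an AWS API Gateway API.

```python
# REST sketch: GET/POST/PUT/DELETE map onto read/create/update/delete
# operations over a collection of resources.
items = {}

def handle(method, resource_id=None, body=None):
    if method == "GET":
        return 200, items.get(resource_id)
    if method == "POST":
        new_id = str(len(items) + 1)   # toy ID scheme, for illustration only
        items[new_id] = body
        return 201, new_id
    if method == "PUT":
        items[resource_id] = body
        return 200, body
    if method == "DELETE":
        items.pop(resource_id, None)
        return 204, None
    return 405, None   # method not allowed

status, item_id = handle("POST", body={"name": "widget"})
print(status, handle("GET", item_id))   # 201 (200, {'name': 'widget'})
```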
Using the AWS Management Console
Opening a service: From the console home, select the service of your choice and the console of that service will
open.
Selecting a Region: Many of the services are region-specific, and we need to select a region so that resources can be
managed. Some services, such as AWS Identity and Access Management (IAM), do not require a region to be
selected. To select a region, first select a service, then click the region menu in the console (labelled with the current
region, for example Oregon) and choose a region.
Changing the password: Choose Security Credentials and a new page will open with various options. Select the
password option to change the password and follow the instructions. After signing in, a page opens again with
options to change the password; follow the instructions there. When successful, we will receive a confirmation
message.
Billing: The billing page shows all the information related to payments. Using this service, we can pay AWS bills
and monitor our usage and budget estimates.
Setting up an Amazon EC2 instance involves the following steps:
Launch Instance: Click on "Launch Instance" and choose an Amazon Machine Image (AMI) based on your
application needs.
Choose Instance Type: Select an instance type that fits your performance requirements.
Configure Instance: Set details like the number of instances, VPC settings, IAM roles, and monitoring options.
Add Storage: Choose the type and size of storage volumes you need.
Configure Security Group: Set inbound and outbound rules to control access to your instance.
Review and Launch: Review your settings and launch the instance.
To scale automatically, define scaling policies based on CloudWatch metrics (e.g., CPU utilization, memory usage).
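The same launch can be expressed as a single AWS CLI call. Every ID and name below is a placeholder for your own AMI, subnet, security group, and key pair; the command will not run without valid values and AWS credentials.

```shell
# Launch one EC2 instance (all IDs/names are illustrative placeholders).
aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type t3.micro \
    --count 1 \
    --subnet-id subnet-0123456789abcdef0 \
    --security-group-ids sg-0123456789abcdef0 \
    --key-name my-key-pair
```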
Developer tools are technologies that make software development faster and more efficient. Software
development is a complex process of translating real-world objects into mathematical and electronic representations
that machines can understand and manipulate. Developer tools act as an interface between the physical reality and
computing processes. They include programming languages, frameworks, and platforms that abstract different levels
of complexity. This means you can interact with computers more easily and solve more complex problems. Instead of
working with hardware components and low-level coding languages, you can work with libraries, APIs, and other
abstractions that prioritize business use cases. Developer tools also include software applications, components, and
services that simplify the process of coding. Software teams use developer tools to overcome challenges when writing
code, testing programs, deploying applications, and monitoring production releases. With the right development tools,
you can reduce time to market, resolve bugs, optimize development workflows, and more.
AWS CodeBuild offers the following benefits:
Fully managed – CodeBuild eliminates the need to set up, patch, update, and manage your own build
servers.
On demand – CodeBuild scales on demand to meet your build needs. You pay only for the number of build
minutes you consume.
Out of the box – CodeBuild provides preconfigured build environments for the most popular programming
languages. All you need to do is point to your build script to start your first build.
How to run CodeBuild
You can use the AWS CodeBuild console or the AWS CodePipeline console to run CodeBuild. You can also
automate the running of CodeBuild by using the AWS Command Line Interface (AWS CLI) or the AWS SDKs.
You can add CodeBuild as a build or test action to the build or test stage of a pipeline in AWS CodePipeline. AWS
CodePipeline is a continuous delivery service that you can use to model, visualize, and automate the steps required to
release your code, including building it. A pipeline is a workflow construct that describes how code changes go
through a release process. To use CodePipeline to create a pipeline and then add a CodeBuild build or test action, see
Use CodeBuild with CodePipeline; for more information about CodePipeline, see the AWS CodePipeline User
Guide. The CodeBuild console also provides a way to quickly search for your resources, such as repositories, build
projects, deployment applications, and pipelines. Choose Go to resource or press the / key, and then enter the name of
the resource; any matches appear in the list. Searches are case insensitive, and you only see resources that you have
permission to view.
Blue/green deployments on an EC2/On-Premises compute platform can be used to deploy applications with nearly
zero downtime and the ability to roll back. The process involves running two identical environments, one with the
current application version and the other with a new version, and then redirecting traffic from the current environment
to the new one. Here are some benefits of blue/green deployments on an EC2/On-Premises compute platform:
Faster and more reliable switching: Traffic can be routed back to the original instances if they haven't been
terminated.
Avoids problems with long-running instances: New instances are provisioned for the deployment and have
the latest server configurations.
Near zero downtime: Releases can be deployed with almost no downtime.
Here are some things to consider when using blue/green deployments on an EC2/On-Premises compute
platform:
Blue/green deployments only work with Amazon EC2 instances.
When adding instances to the replacement environment, you can use existing instances or create new ones
manually.
You can use settings from an Amazon EC2 Auto Scaling group to define and create instances in a new
Amazon EC2 Auto Scaling group.
Blue/green on an AWS Lambda or Amazon ECS compute platform: Traffic is shifted in increments according to
a canary, linear, or all-at-once deployment configuration.
Canary: Traffic is shifted in two increments. You can choose from predefined canary options that specify the
percentage of traffic shifted to your updated Lambda function or ECS task set in the first increment and the
interval, in minutes, before the remaining traffic is shifted in the second increment.
Linear: Traffic is shifted in equal increments with an equal number of minutes between each increment. You
can choose from predefined linear options that specify the percentage of traffic shifted in each increment and
the number of minutes between each increment.
All-at-once: All traffic is shifted from the original Lambda function or ECS task set to the updated function
or task set all at once.
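The three traffic-shifting styles above can be summarized as schedules of cumulative traffic on the new (green) version. The concrete percentages are illustrative stand-ins for the predefined CodeDeploy options, not the actual option values.

```python
# Sketch of CodeDeploy traffic-shifting styles: each function returns the
# cumulative percentage of traffic on the updated version at each step.

def canary(first_pct=10):
    # Two increments: a small canary slice first, then the remainder.
    return [first_pct, 100]

def linear(step_pct=10):
    # Equal increments until all traffic has shifted.
    return list(range(step_pct, 101, step_pct))

def all_at_once():
    # Everything moves in a single step.
    return [100]

print(canary())       # [10, 100]
print(linear(25))     # [25, 50, 75, 100]
print(all_at_once())  # [100]
```

In the real service, an interval in minutes separates the increments and traffic can be rolled back if alarms fire.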
Blue/green deployments through AWS CloudFormation: Traffic is shifted from your current resources to your
updated resources as part of an AWS CloudFormation stack update. Currently, only ECS blue/green deployments are
supported. To update an application running on Amazon Elastic Container Service (Amazon ECS), you can use a
CodeDeploy blue/green deployment strategy. This strategy helps minimize interruptions caused by changing
application versions. In a blue/green deployment, you create a new application environment (referred to as green)
alongside your current, live environment (referred to as blue). This allows you to monitor and test the green
environment before routing live traffic from the blue environment to the green environment. After the green
environment is serving live traffic, you can safely terminate the blue environment. To perform CodeDeploy
blue/green deployments on ECS using CloudFormation, you include the following information in your stack template:
A Hooks section that describes an AWS::CodeDeploy::BlueGreen hook.
A Transform section that specifies the AWS::CodeDeployBlueGreen transform.
AWS Management Tools help the user manage the components of the cloud and their account. They allow the
user to provision, monitor, and automate all of the components programmatically, giving the user control over every
part of the cloud infrastructure.
Benefits of AWS Auto Scaling
Setup scaling quickly: AWS Auto Scaling lets you set target utilization levels for multiple resources in a
single, intuitive interface. You can quickly see the average utilization of all of your scalable resources
without having to navigate to other consoles. For example, if your application uses Amazon EC2 and
Amazon DynamoDB, you can use AWS Auto Scaling to manage resource provisioning for all of the EC2
Auto Scaling groups and database tables in your application.
Make smart scaling decisions: AWS Auto Scaling lets you build scaling plans that automate how groups
of different resources respond to changes in demand. You can optimize availability, costs, or a balance of
both. AWS Auto Scaling automatically creates all of the scaling policies and sets targets for you based on
your preference. AWS Auto Scaling monitors your application and automatically adds or removes capacity
from your resource groups in real-time as demands change.
Automatically maintain performance: Using AWS Auto Scaling, you maintain optimal application
performance and availability, even when workloads are periodic, unpredictable, or continuously changing.
AWS Auto Scaling continually monitors your applications to make sure that they are operating at your
desired performance levels. When demand spikes, AWS Auto Scaling automatically increases the capacity
of constrained resources so you maintain a high quality of service.
Pay only for what you need: AWS Auto Scaling can help you optimize your utilization and cost
efficiencies when consuming AWS services so you only pay for the resources you actually need. When
demand drops, AWS Auto Scaling will automatically remove any excess resource capacity so you avoid
overspending. AWS Auto Scaling is free to use, and allows you to optimize the costs of your AWS
environment.
AWS CloudFormation is a powerful service that allows you to define and provision AWS infrastructure as
code. It uses templates written in JSON or YAML to describe the desired state of your cloud resources. Here’s
a breakdown of key concepts and features:
Key Concepts
1. Templates: These are JSON or YAML files that describe the AWS resources you want to create. Templates
can include parameters, resources, outputs, and more.
2. Stacks: A stack is a collection of AWS resources that are created and managed as a single unit. You create a
stack by submitting a CloudFormation template.
3. Resources: The actual AWS services that you want to provision (e.g., EC2 instances, S3 buckets, RDS
databases) are defined in the resources section of your template.
4. Parameters: These allow you to pass in values to your template at runtime, making it more dynamic and
reusable.
5. Outputs: You can specify values that you want to be returned after the stack is created. This can be useful for
referencing resource attributes.
6. Mappings: These provide a way to create simple lookup tables, allowing you to define variables based on
conditions, such as regions.
7. Conditions: Conditions enable you to specify whether certain resources are created or certain properties are
assigned, based on the values of parameters or mappings.
Features
Version Control: Since templates are code, they can be stored in version control systems, making it easier to
track changes.
Change Sets: Before applying changes to a stack, you can create a change set to review how the proposed
changes will affect your stack.
Rollback: If a stack creation or update fails, CloudFormation can automatically roll back the changes to the
last stable state.
Nested Stacks: You can organize your infrastructure by breaking it into smaller templates, known as nested
stacks.
Example
Here’s a simple YAML example of a CloudFormation template that creates an S3 bucket:
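A minimal template of this shape is shown below; the logical ID, parameter, and output names are illustrative choices, not required values.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal template that creates a single S3 bucket.

Parameters:
  BucketName:
    Type: String
    Description: Name for the bucket (must be globally unique).

Resources:
  MyBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref BucketName

Outputs:
  BucketArn:
    Description: ARN of the created bucket.
    Value: !GetAtt MyBucket.Arn
```

Note how the template touches four of the key concepts above: a Parameter supplies the name at stack-creation time, the Resources section declares the bucket, and an Output returns its ARN.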
Best Practices
Use Parameters: To make your templates reusable and flexible.
Organize with Nested Stacks: For better maintainability, especially in large deployments.
Getting Started
To get started with AWS CloudFormation:
1. Create a Template: Define your infrastructure in JSON or YAML.
2. Upload to CloudFormation: Use the AWS Management Console, AWS CLI, or SDKs to create a stack
with your template.
3. Monitor and Manage: Use the console to monitor the stack’s creation, update, and deletion processes.
AWS CloudFormation is an excellent tool for automating infrastructure management, making deployments
consistent and repeatable.
AWS CloudTrail is a service that enables you to monitor and log activity within your AWS account. It provides a
comprehensive record of actions taken by users, roles, or AWS services, helping you maintain security, compliance,
and operational governance. Here’s a closer look at its key features and functionalities:
Key Features
1. Event Logging: CloudTrail records API calls made within your AWS account, including calls from the AWS
Management Console, AWS CLI, SDKs, and other AWS services. This includes details like who made the
request, the time it was made, the source IP address, and what actions were performed.
2. Management Events: These include operations that are performed on AWS resources, such as creating,
modifying, or deleting resources. They are useful for tracking changes to your infrastructure.
3. Data Events: These capture operations on specific resources, such as S3 object-level API calls or DynamoDB
table-level actions. Data events provide granular logging for specific resource operations.
4. Insight Events: This feature helps identify unusual API activity in your account, providing insights into
potentially unauthorized or unexpected actions.
5. S3 Bucket Logging: You can configure CloudTrail to store logs in an S3 bucket for long-term storage and
analysis.
6. Integration with AWS Services: CloudTrail integrates with other AWS services like AWS Lambda,
Amazon CloudWatch, and AWS Config, enabling automated responses and monitoring.
7. Multi-Region and Multi-Account Support: You can configure CloudTrail to log events from multiple AWS
accounts and regions, centralizing your logging and monitoring efforts.
Use Cases
1. Security and Compliance: CloudTrail helps track user activity, aiding in compliance with regulations and
internal policies by providing a clear audit trail.
2. Operational Troubleshooting: By analyzing CloudTrail logs, you can troubleshoot issues and identify the
root cause of operational problems.
3. Resource Management: Monitoring changes in resources helps ensure that your AWS environment aligns
with operational standards and best practices.
4. Cost Management: Understanding API usage can help identify unused or underutilized resources, leading to
cost-saving opportunities.
Example
To enable CloudTrail and create a trail, you can use the AWS Management Console or AWS CLI. Here’s how to
create a trail using the CLI:
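The commands below show one way to do this; the trail and bucket names are placeholders, and the S3 bucket must already exist with a bucket policy that allows CloudTrail to write to it.

```shell
# Create a trail that logs events from all regions to an existing S3 bucket
# (names are illustrative placeholders).
aws cloudtrail create-trail \
    --name my-account-trail \
    --s3-bucket-name my-cloudtrail-logs-bucket \
    --is-multi-region-trail

# Logging does not begin until the trail is started:
aws cloudtrail start-logging --name my-account-trail
```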
Best Practices
Enable Logging for All Regions: This ensures that you capture all relevant activity across your AWS
infrastructure.
Use S3 Bucket Policies: Implement strict policies on your S3 bucket where CloudTrail logs are stored to
enhance security.
Integrate with CloudWatch: Set up CloudWatch Alarms to monitor for specific API calls or unusual
activity.
Review Logs Regularly: Schedule regular reviews of your CloudTrail logs to stay informed about account
activity.
Getting Started
To get started with AWS CloudTrail:
1. Enable CloudTrail: Use the AWS Management Console, CLI, or SDK to create a trail and specify your S3
bucket for log storage.
2. Monitor Events: Use the AWS CloudTrail console to view event history and logs.
3. Analyze Logs: You can analyze the logs in S3 using tools like Amazon Athena, or integrate with SIEM
systems for further analysis.
AWS CloudTrail is essential for maintaining transparency in your AWS account, ensuring that you can effectively
monitor, audit, and respond to activities within your environment.
AWS License Manager is a service that helps you manage software licenses across AWS and on-premises
environments. It simplifies the licensing process, ensures compliance, and provides visibility into license usage.
Key Features
1. Centralized Management: License Manager provides a single interface to manage software licenses from
various vendors, including Microsoft, SAP, Oracle, and more. This centralized view helps streamline
compliance and reporting.
2. License Tracking: The service allows you to track license usage, helping you to ensure that you are
compliant with licensing agreements. You can monitor both AWS and on-premises environments.
3. Automated License Consumption: AWS License Manager can automatically manage license consumption
for eligible AWS services, reducing the manual effort required to track license usage.
4. Integration with AWS Services: The service integrates with other AWS services, such as AWS Systems
Manager and Amazon EC2, to help manage licenses associated with those resources.
5. Custom License Models: You can create custom licensing rules that match your organization’s requirements,
allowing for more flexibility in managing licenses.
6. Compliance and Reporting: License Manager provides detailed reports on license usage and compliance
status, making it easier to demonstrate adherence to licensing agreements during audits.
Benefits
Cost Management: By effectively managing licenses, you can avoid over-provisioning and reduce costs
associated with unused or underutilized licenses.
Risk Reduction: Ensures compliance with licensing agreements, reducing the risk of costly penalties or legal
issues.
Operational Efficiency: Automating the license management process allows your IT teams to focus on other
strategic initiatives rather than manual tracking.
Scalability: As your organization grows and uses more software, AWS License Manager scales to meet your
needs.
Use Cases
1. Microsoft License Management: Manage Microsoft licenses for Windows Server, SQL Server, and other
Microsoft products used across AWS and on-premises environments.
2. Oracle Database Licensing: Track and manage Oracle licenses effectively, ensuring compliance with
Oracle’s licensing policies.
3. SaaS and Subscription Software: Keep track of SaaS licenses and subscriptions, ensuring that you only pay
for what you use.
Getting Started
To get started with AWS License Manager:
1. Access the Console: Go to the AWS Management Console and navigate to License Manager.
2. Create License Configurations: Define your licensing models and rules based on your software
requirements.
3. Track License Usage: Use the License Manager dashboard to monitor license consumption and compliance.
4. Generate Reports: Utilize the reporting features to generate insights into your license usage and compliance
status.
Example
Here’s a simple workflow to manage Microsoft Windows Server licenses:
1. Create a License Configuration: Define how many Windows Server licenses you need and the rules for
usage.
2. Provision Resources: When launching new EC2 instances, specify the license configuration to automatically
track the licenses consumed.
3. Monitor Usage: Regularly check the License Manager dashboard for reports on your Windows Server license
consumption.
AWS License Manager is an essential tool for organizations that want to maintain compliance and optimize costs
associated with software licenses.