
21CS1901 CLOUD TOOLS AND TECHNIQUES

VERTICAL II: CLOUD COMPUTING AND DATA CENTRE TECHNOLOGIES

UNIT IV
Title: AWS CLOUD PLATFORM – IAAS (9)

Objective 4: To explore the roster of AWS services and illustrate how to build applications on AWS

CO4: Develop cloud applications on the AWS platform

Amazon Web Services: AWS Infrastructure - AWS API - AWS Management Console - Setting up AWS Storage
- Stretching out with Elastic Compute Cloud - Elastic Container Service for Kubernetes - AWS Developer
Tools: AWS CodeCommit, AWS CodeBuild, AWS CodeDeploy, AWS CodePipeline, AWS CodeStar - AWS
Management Tools: CloudWatch, AWS Auto Scaling, AWS Control Tower, CloudFormation, CloudTrail,
AWS License Manager.

1 AMAZON WEB SERVICES


Amazon Web Services (AWS) is a leading cloud platform providing web services across many domains. It
is an expansive cloud computing platform offered by Amazon. AWS provides a wide range of services with a
pay-as-you-go pricing model over the Internet, including storage, computing power, databases, machine
learning services, and much more. AWS enables both businesses and individual users to host applications
effectively, store data securely, and draw on a wide variety of tools and services that improve the flexibility
of IT resource management. AWS tracks the trends of digital IT and delivers the services customers need with
optimized performance, spanning everything from compute to storage, and serves customers across many
domains as they expand their business operations.

AWS runs on its own network infrastructure, with data centers established in regions around the world.
This global infrastructure acts as the backbone for the operations and services AWS provides. It lets users
create secure environments using Amazon VPCs (Virtual Private Clouds). Essential services such as Amazon
EC2 (Elastic Compute Cloud) and Amazon S3 (Simple Storage Service) deliver compute and storage with
elastic scaling. AWS supports dynamic scaling of applications through services such as Auto Scaling and
Elastic Load Balancing (AWS ELB), and it provides a user-friendly AWS Management Console for seamless
configuration and management of AWS services. This architecture ensures high availability and fault
tolerance, making AWS a versatile and powerful cloud computing platform.

On this cloud platform, understanding key concepts such as Regions, Availability Zones, and the global
network infrastructure is crucial. These fundamentals keep applications reliable and scalable worldwide and
inform the strategic deployment of resources for optimal performance and resilience. The following are some
of the main fundamentals of AWS:
 Regions: AWS delivers its services from separate geographic areas called Regions. Regions are divided
along geographical lines, and data centers are established within them; the scale of each Region's data
centers depends on user demand and traffic, so that users are served with low latency.

 Availability Zones (AZ): Availability Zones are multiple, isolated locations within each Region. They are
created so that a natural calamity or other disaster cannot take out all data centers at once; the data centers
are established as isolated subsections to enhance fault tolerance and disaster recovery management.

 Local Zones: Local Zones provide the ability to place resources, such as compute and storage, in
multiple locations closer to your end users.

In the rapid evolution of cloud computing, AWS offers a wide variety of services for different fields
and needs. The following are the top AWS services in wide usage:

Amazon EC2 (Elastic Compute Cloud): Provides scalable computing power via the cloud, allowing users to
run applications and manage workloads remotely.

Amazon S3 (Simple Storage Service): Offers scalable object storage as a service, with high durability, for
storing and retrieving any amount of data.

AWS Lambda: A Function-as-a-Service offering in the serverless architecture: code runs in response to
events, while AWS automatically manages the background server environment. This lets developers focus
entirely on the logic of the code they build.

Amazon RDS (Relational Database Service): An AWS service that simplifies database management by
providing highly available relational databases in the cloud.

Amazon VPC (Virtual Private Cloud): Enables users to create isolated networks within the AWS cloud, with
the option of public or private exposure, providing safe and adaptable configuration of their resources.
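As a quick, hedged illustration, these core services can be driven from the AWS CLI; the bucket name, AMI
ID, and other values below are placeholders, not values from this material:

    # Create an S3 bucket (bucket names must be globally unique)
    aws s3 mb s3://my-example-bucket-12345

    # Launch one EC2 instance from an AMI (the AMI ID is illustrative)
    aws ec2 run-instances --image-id ami-0abcdef1234567890 \
        --instance-type t2.micro --count 1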
1.1 AWS INFRASTRUCTURE
The AWS architecture consists of Web, App, and Database tiers. An AWS Region bounds all the components
in the architecture. Inside the Region are the Availability Zones AZ1 and AZ2, and within them sits a VPC
(Virtual Private Cloud). In the VPC, three subnets host the Web, App, and Database tiers, from top to bottom.

Web Tier: It contains web servers like Apache.

App Tier: It contains app servers.

Database Tier: It contains database services like Amazon RDS (Relational Database Service).

Load Balancing
Load balancing means distributing hardware or software load across web servers, which improves the
efficiency of both the server and the application. A hardware load balancer is a common network appliance in
traditional web application architectures. AWS provides the Elastic Load Balancing service, which distributes
traffic to EC2 instances across multiple Availability Zones and supports dynamic addition and removal of
Amazon EC2 hosts from the load-balancing rotation. Elastic Load Balancing can dynamically grow and shrink
the load-balancing capacity to adjust to traffic demands, and it also supports sticky sessions to address more
advanced routing needs. Spreading traffic across web servers in this way improves performance.
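The Elastic Load Balancing workflow above can be sketched with the AWS CLI as follows; the subnet, VPC,
instance, and target-group identifiers are illustrative placeholders:

    # Create an Application Load Balancer spanning two Availability Zones
    aws elbv2 create-load-balancer --name web-lb \
        --subnets subnet-aaaa1111 subnet-bbbb2222

    # Create a target group and register EC2 instances into the rotation
    aws elbv2 create-target-group --name web-targets \
        --protocol HTTP --port 80 --vpc-id vpc-1234567890abcdef0
    aws elbv2 register-targets --target-group-arn <target-group-arn> \
        --targets Id=i-0123456789abcdef0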
Amazon CloudFront
Amazon CloudFront is responsible for content delivery, i.e., it is used to deliver websites. It can serve
dynamic, static, and streaming content using a global network of edge locations. Requests for content at the
user's end are automatically routed to the nearest edge location, which improves performance. Amazon
CloudFront is optimized to work with other Amazon Web Services, such as Amazon S3 and Amazon EC2,
and it also works with any non-AWS origin server, storing the original files in a similar manner. There are no
contracts or monthly commitments; we pay only for as much or as little content as we deliver through the
service.
Security Management
Amazon's Elastic Compute Cloud (EC2) provides a feature called security groups, which behave like an
inbound network firewall: we specify the protocols, ports, and source IP ranges that are allowed to reach our
EC2 instances. Each EC2 instance can be assigned one or more security groups, each of which controls the
traffic allowed to reach that instance. Security groups can be configured to reference specific subnets or IP
addresses, limiting access to EC2 instances.
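For example, the security-group behavior described above can be configured from the AWS CLI along these
lines (the group, VPC, and CIDR values are illustrative):

    # Create a security group in a VPC
    aws ec2 create-security-group --group-name web-sg \
        --description "Web tier" --vpc-id vpc-1234567890abcdef0

    # Allow inbound HTTP from anywhere, but SSH from one address range only
    aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 80 --cidr 0.0.0.0/0
    aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 22 --cidr 203.0.113.0/24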
Amazon ElastiCache
Amazon ElastiCache is a web service that manages in-memory caches in the cloud. Caching plays a very
important role in memory management: it reduces the load on services and improves the performance and
scalability of the database tier by caching frequently used information.
Amazon RDS
Amazon RDS (Relational Database Service) provides access to the familiar MySQL, Oracle, or Microsoft SQL
Server database engines, so the same queries, applications, and tools can be used with Amazon RDS. It
automatically patches the database software and manages backups per the user's instructions, and it supports
point-in-time recovery. There are no up-front investments required; we pay only for the resources we use.
Hosting an RDBMS on EC2 Instances
As an alternative to Amazon RDS, users can install an RDBMS (Relational Database Management System) of
their choice, such as MySQL, Oracle, SQL Server, or DB2, on an EC2 instance and manage it as required.
Amazon EC2 uses Amazon EBS (Elastic Block Store), which is similar to network-attached storage. All data
and logs for databases running on EC2 instances should be placed on Amazon EBS volumes, which remain
available even if the database host fails. Amazon EBS volumes automatically provide redundancy within their
Availability Zone, which increases availability over simple disks. If a single volume is not sufficient for the
database's needs, more volumes can be added to increase performance. With Amazon RDS, by contrast, the
service provider manages the storage and we focus only on managing the data.
Storage & Backups
AWS cloud provides various options for storing, accessing, and backing up web application data and assets. The
Amazon S3 (Simple Storage Service) provides a simple web-services interface that can be used to store and retrieve
any amount of data, at any time, from anywhere on the web.

Amazon S3 stores data as objects within resources called buckets. Users can store as many objects as
required within a bucket, and can read, write, and delete objects from it. Amazon EBS is effective for data
that needs to be accessed as block storage and must persist beyond the life of the running instance, such as
database partitions and application logs. Amazon EBS volumes can be created at up to 1 TB each, and multiple
volumes can be striped together for larger capacity and increased performance. Provisioned IOPS volumes are
designed to meet the needs of database workloads that are sensitive to storage performance and consistency;
Amazon EBS currently supports up to 1,000 IOPS per volume, and striping multiple volumes together can
deliver thousands of IOPS per instance to an application.
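As a hedged sketch of these storage operations from the AWS CLI (the bucket, file, and zone names are
illustrative):

    # Store and list objects in an S3 bucket
    aws s3 cp app-logs.tar.gz s3://my-example-bucket-12345/backups/
    aws s3 ls s3://my-example-bucket-12345/backups/

    # Create a 100 GB Provisioned IOPS volume for a storage-sensitive workload
    aws ec2 create-volume --availability-zone us-east-1a \
        --size 100 --volume-type io1 --iops 1000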
Auto Scaling
The difference between the AWS cloud architecture and the traditional hosting model is that AWS can
dynamically scale the web application fleet on demand to handle changes in traffic. In the traditional hosting
model, traffic forecasting models are generally used to provision hosts ahead of projected traffic. In AWS,
instances can be provisioned on the fly according to a set of triggers that scale the fleet out and back in.
Amazon Auto Scaling can create capacity groups of servers that grow or shrink on demand.

1.2 AWS API


Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain,
monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or
functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs
that enable real-time two-way communication applications. API Gateway supports containerized and serverless
workloads, as well as web applications. API Gateway handles all the tasks involved in accepting and processing up to
hundreds of thousands of concurrent API calls, including traffic management, CORS support, authorization and
access control, throttling, monitoring, and API version management. API Gateway has no minimum fees or startup
costs. You pay for the API calls you receive and the amount of data transferred out and, with the API Gateway tiered
pricing model, you can reduce your cost as your API usage scales.

RESTful APIs in AWS refer to APIs that follow the principles of REST (Representational State Transfer) and
are implemented using AWS services. RESTful APIs are designed to be stateless and use standard HTTP methods
(GET, POST, PUT, DELETE, etc.) to interact with resources. In AWS, several services facilitate the creation,
deployment, and management of RESTful APIs.
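As a hedged example, a REST API skeleton can be created through the API Gateway CLI as follows (the API
name is illustrative):

    # Create a new REST API; the returned API id is used in later calls
    aws apigateway create-rest-api --name my-rest-api

    # List the API's root resource, under which methods and child resources are added
    aws apigateway get-resources --rest-api-id <api-id>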

1.3 AWS MANAGEMENT CONSOLE


The AWS Management Console is a web application for managing Amazon Web Services. The console
presents a list of the various services to choose from, and also provides information related to our account,
such as billing. It offers a built-in user interface for performing AWS tasks such as working with Amazon S3
buckets, launching and connecting to Amazon EC2 instances, and setting Amazon CloudWatch alarms.

1.3.1 How to Access AWS


Step 1 − Click Services. We get a list of the available services.
Step 2 − Select a category from the list and we get its services; for example, the Compute and Database
categories expand to show their respective services.

Step 3 − Select the service of your choice, and the console for that service will open.

1.3.2 Customizing the Dashboard

1.3.3 Creating Service Shortcuts


Click the Edit menu on the navigation bar and a list of services appears. We can create shortcuts by simply
dragging services from the menu bar to the navigation bar.
1.3.4 Adding Service Shortcuts
When we drag a service from the menu bar to the navigation bar, a shortcut is created and added. We can
also arrange the shortcuts in any order; for example, shortcuts can be created for the S3, EMR, and DynamoDB
services.

1.3.5 Deleting Service Shortcuts


To delete a shortcut, click the Edit menu and drag the shortcut from the navigation bar back to the service
menu; the shortcut is removed. For example, the shortcut for the EMR service can be removed this way.

1.3.6 Selecting a Region
Many of the services are region-specific, and we need to select a region so that resources can be managed in
it. Some services, such as AWS Identity and Access Management (IAM), do not require a region to be selected.
To select a region, first select a service, then click the region menu in the navigation bar (it shows the current
region, e.g. Oregon) and select a region.

1.3.7 Changing the Password


We can change the password of our AWS account. The steps are as follows.
Step 1 − Click the account name in the navigation bar.

Step 2 − Choose Security Credentials and a new page opens with various options. Select the password option
to change the password and follow the instructions.
Step 3 − After signing in, a page opens again with options to change the password; follow the
instructions.
When successful, we will receive a confirmation message.

1.3.8 Know Your Billing Information


Click the account name in the navigation bar and select the 'Billing & Cost Management' option.

A new page opens with all billing-related information. Using this service, we can pay AWS bills, monitor our
usage, and estimate our budget.

1.4 SETTING UP AWS STORAGE

1.5 STRETCHING OUT WITH ELASTIC COMPUTE CLOUD


Stretching out with Amazon Elastic Compute Cloud (EC2) involves leveraging its capabilities to scale your
computing resources based on demand. Here’s how to effectively utilize EC2 for various scenarios, from simple
setups to complex, scalable architectures.
Key Features of Amazon EC2
1. On-Demand Instances: Pay for compute capacity by the hour or second, depending on which instances you
run.
2. Auto Scaling: Automatically adjust the number of EC2 instances based on demand. This ensures you have
enough resources during peak times and can reduce costs during low usage.
3. Elastic Load Balancing (ELB): Distributes incoming application traffic across multiple instances to ensure
high availability and reliability.
4. Wide Selection of Instance Types: Choose from various instance types optimized for compute, memory,
storage, or GPU-based tasks.
5. Security: Use security groups, network ACLs, and IAM roles to control access and enhance security.
Getting Started with EC2
1. Launch an EC2 Instance:
o Access the Console: Log in to the AWS Management Console and navigate to EC2.

o Launch Instance: Click on "Launch Instance" and choose an Amazon Machine Image (AMI) based
on your application needs.
o Choose Instance Type: Select an instance type that fits your performance requirements.

o Configure Instance: Set details like the number of instances, VPC settings, IAM roles, and
monitoring options.
o Add Storage: Choose the type and size of storage volumes you need.

o Configure Security Group: Set inbound and outbound rules to control access to your instance.

o Review and Launch: Review your settings and launch the instance.

2. Connect to Your Instance:


o Use SSH (for Linux) or Remote Desktop (for Windows) to connect to your instance.

o Use the public IP address or DNS name provided by AWS.
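The console walkthrough above maps onto a few AWS CLI calls; in this hedged sketch the AMI, key pair,
security group, and instance IDs are illustrative placeholders:

    # Launch an instance with a key pair and security group
    aws ec2 run-instances --image-id ami-0abcdef1234567890 \
        --instance-type t2.micro --key-name my-key \
        --security-group-ids sg-0123456789abcdef0

    # Look up the instance's public IP address, then connect over SSH (Linux)
    aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
        --query 'Reservations[0].Instances[0].PublicIpAddress'
    ssh -i my-key.pem ec2-user@<public-ip>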

Scaling with Auto Scaling Groups


1. Create a Launch Configuration:
o Define the instance type, AMI, and other configurations for instances in your Auto Scaling group.

2. Create an Auto Scaling Group:


o Set the minimum, maximum, and desired number of instances.

o Define scaling policies based on CloudWatch metrics (e.g., CPU utilization, memory usage).

3. Integrate with ELB:


o Attach your Auto Scaling group to an Elastic Load Balancer to distribute traffic among the instances.
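A hedged CLI sketch of the three steps above (all names, the AMI, the subnets, and the target group ARN are
illustrative):

    # 1. Launch configuration defining the instances in the group
    aws autoscaling create-launch-configuration --launch-configuration-name web-lc \
        --image-id ami-0abcdef1234567890 --instance-type t2.micro

    # 2. Auto Scaling group with minimum, maximum, and desired capacity,
    # 3. ...attached to a load balancer's target group
    aws autoscaling create-auto-scaling-group --auto-scaling-group-name web-asg \
        --launch-configuration-name web-lc \
        --min-size 2 --max-size 6 --desired-capacity 2 \
        --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222" \
        --target-group-arns <target-group-arn>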

Monitoring and Management


 Amazon CloudWatch: Use CloudWatch to monitor your instances’ performance and set alarms for various
metrics (CPU usage, disk I/O, etc.).
 AWS Systems Manager: Use Systems Manager for operational data management, patching, and automation
across your EC2 instances.
Cost Management
 Spot Instances: Consider using Spot Instances for non-critical workloads to take advantage of unused EC2
capacity at reduced prices.
 Reserved Instances: For predictable workloads, purchase Reserved Instances to save on costs over time.
 Instance Scheduler: Automate starting and stopping instances based on a schedule to reduce costs during
off-peak hours.
Security Best Practices
 Use IAM Roles: Assign IAM roles to instances to grant permissions securely without embedding credentials.
 Implement Security Groups: Regularly review and update security group rules to enforce the principle of
least privilege.
 Regularly Patch and Update: Use AWS Systems Manager to automate patching of your instances.
Use Cases
 Web Hosting: Deploy web applications with EC2 and scale based on user traffic using Auto Scaling and
ELB.
 Data Processing: Use EC2 for data analysis, machine learning, or batch processing, taking advantage of
different instance types as needed.
 Development and Testing: Quickly spin up instances for development environments and tear them down
when no longer needed.
Amazon EC2 provides a flexible and scalable computing platform that can adapt to your application's needs. By
leveraging features like Auto Scaling, ELB, and the various instance types, you can ensure that your application
remains performant and cost-effective.

1.6 ELASTIC CONTAINER SERVICE FOR KUBERNETES


Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service for running
Kubernetes in the AWS cloud and in on-premises data centers. It takes care of the heavy lifting involved in
managing the Kubernetes control plane, allowing you to focus on deploying and managing your applications.
In the cloud, Amazon EKS automatically manages the availability and scalability of the Kubernetes control
plane nodes responsible for scheduling containers, managing application availability, storing cluster data, and
other key tasks. With Amazon EKS, you can take advantage of all the performance, scale, reliability, and
availability of AWS infrastructure, as well as integrations with AWS networking and security services.
On-premises, EKS provides a consistent, fully supported Kubernetes solution with integrated tooling and
simple deployment to AWS Outposts, virtual machines, or bare metal servers.
Key Features of Amazon EKS
1. Fully Managed Control Plane: AWS manages the Kubernetes control plane for you, ensuring high
availability, security, and performance.
2. Integration with AWS Services: EKS integrates seamlessly with other AWS services like Amazon EC2,
IAM, VPC, and CloudWatch, allowing for easier management and enhanced security.
3. Security: EKS provides built-in security features, including IAM roles for service accounts, enabling fine-
grained permissions for your Kubernetes applications.
4. Auto Scaling: You can use the Kubernetes Cluster Autoscaler and Amazon EC2 Auto Scaling to
automatically adjust the number of nodes in your cluster based on demand.
5. Compatibility: EKS is fully compliant with upstream Kubernetes, meaning you can use existing Kubernetes
tools and plugins without modification.
6. Networking: EKS supports AWS App Mesh, which provides application-level networking for microservices.
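As a hedged illustration, an EKS cluster can be created and connected to from the CLI roughly as follows; the
role ARN and subnet IDs are placeholders, and the IAM role must already exist with the EKS cluster policy
attached:

    # Create the managed control plane
    aws eks create-cluster --name demo-cluster \
        --role-arn arn:aws:iam::111122223333:role/eksClusterRole \
        --resources-vpc-config subnetIds=subnet-aaaa1111,subnet-bbbb2222

    # Point kubectl at the new cluster and verify access
    aws eks update-kubeconfig --name demo-cluster
    kubectl get nodes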

2 AWS DEVELOPER TOOLS

Developer tools are technologies that make software development faster and more efficient. Software
development is a complex process of translating real-world objects into mathematical and electronic representations
that machines can understand and manipulate. Developer tools act as an interface between the physical reality and
computing processes. They include programming languages, frameworks, and platforms that abstract different levels
of complexity. This means you can interact with computers more easily and solve more complex problems. Instead of
working with hardware components and low-level coding languages, you can work with libraries, APIs, and other
abstractions that prioritize business use cases. Developer tools also include software applications, components, and
services that simplify the process of coding. Software teams use developer tools to overcome challenges when writing
code, testing programs, deploying applications, and monitoring production releases. With the right development tools,
you can reduce time to market, resolve bugs, optimize development workflows, and more.

2.1 AWS CODECOMMIT


AWS CodeCommit is a fully managed source control service that makes it easy for companies to host secure and
highly scalable private Git repositories. AWS CodeCommit eliminates the need to operate your own source control
system or worry about scaling its infrastructure. You can use AWS CodeCommit to securely store anything from
source code to binaries, and it works seamlessly with your existing Git tools. The major benefits of CodeCommit are
as follows:
 Benefit from a fully managed service hosted by AWS: CodeCommit provides high service availability and
durability and eliminates the administrative overhead of managing your own hardware and software. There is
no hardware to provision and scale and no server software to install, configure, and update.
 Store your code securely: CodeCommit repositories are encrypted at rest as well as in transit.
 Work collaboratively on code: CodeCommit repositories support pull requests, where users can review and
comment on each other's code changes before merging them to branches; notifications that automatically send
emails to users about pull requests and comments; and more.
 Easily scale your version control projects: CodeCommit repositories can scale up to meet your
development needs. The service can handle repositories with large numbers of files or branches, large file
sizes, and lengthy revision histories.
 Store anything, anytime: CodeCommit has no limit on the size of your repositories or on the file types you
can store.
 Integrate with other AWS and third-party services. CodeCommit keeps your repositories close to your
other production resources in the AWS Cloud, which helps increase the speed and frequency of your
development lifecycle.
 Easily migrate files from other remote repositories. You can migrate to CodeCommit from any Git-based
repository.
 Use the Git tools you already know. CodeCommit supports Git commands as well as its own AWS CLI
commands and APIs.

How does CodeCommit work?


 CodeCommit is familiar to users of Git-based repositories, but even those unfamiliar should find the transition
to CodeCommit relatively simple. CodeCommit provides a console for the easy creation of repositories and
the listing of existing repositories and branches. In a few simple steps, users can find information about a
repository and clone it to their computer, creating a local repo where they can make changes and then push
them to the CodeCommit repository. Users can work from the command line on their local machines or use a
GUI-based editor.
 A typical workflow using your development machine, the AWS CLI or CodeCommit console, and the
CodeCommit service to create and manage repositories is as follows (a command-line sketch follows the list):
 Use the AWS CLI or the CodeCommit console to create a CodeCommit repository.
 From your development machine, use Git to run git clone, specifying the name of the CodeCommit
repository. This creates a local repo that connects to the CodeCommit repository.
 Use the local repo on your development machine to modify (add, edit, and delete) files, and then run git
add to stage the modified files locally. Run git commit to commit the files locally, and then run git push to
send the files to the CodeCommit repository.
 Download changes from other users. Run git pull to synchronize the files in the CodeCommit repository with
your local repo. This ensures you're working with the latest version of the files.
 You can use the AWS CLI or the CodeCommit console to track and manage your repositories.
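The workflow above, expressed as commands (a hedged sketch; the repository name and region are
illustrative):

    # Create the repository, then clone it (HTTPS clone URL for us-east-1)
    aws codecommit create-repository --repository-name demo-repo
    git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/demo-repo

    # Modify files locally, stage and commit them, and push to CodeCommit
    git add .
    git commit -m "Describe the change"
    git push

    # Download changes from other users
    git pull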

2.2 AWS CODEBUILD


AWS CodeBuild is a fully managed build service in the cloud. CodeBuild compiles your source code, runs unit
tests, and produces artifacts that are ready to deploy. CodeBuild eliminates the need to provision, manage, and scale
your own build servers. It provides prepackaged build environments for popular programming languages and build
tools such as Apache Maven, Gradle, and more. You can also customize build environments in CodeBuild to use your
own build tools. CodeBuild scales automatically to meet peak build requests.
CodeBuild provides these benefits:

 Fully managed – CodeBuild eliminates the need to set up, patch, update, and manage your own build
servers.
 On demand – CodeBuild scales on demand to meet your build needs. You pay only for the number of build
minutes you consume.
 Out of the box – CodeBuild provides preconfigured build environments for the most popular programming
languages. All you need to do is point to your build script to start your first build.
How to run CodeBuild
You can use the AWS CodeBuild or AWS CodePipeline console to run CodeBuild. You can also automate the
running of CodeBuild by using the AWS Command Line Interface (AWS CLI) or the AWS SDKs.

CodeBuild can be driven directly through the AWS CLI or AWS SDKs, and you can also add CodeBuild as a
build or test action to the build or test stage of a pipeline in AWS CodePipeline. AWS CodePipeline is a
continuous delivery service that you can use to model, visualize, and automate the steps required to release
your code, including building it. A pipeline is a workflow construct that describes how code changes go
through a release process.

To use CodePipeline to create a pipeline and then add a CodeBuild build or test action, see Use CodeBuild
with CodePipeline. For more information about CodePipeline, see the AWS CodePipeline User Guide. The
CodeBuild console also provides a way to quickly search for your resources, such as repositories, build
projects, deployment applications, and pipelines. Choose Go to resource or press the / key, and then enter the
name of the resource. Any matches appear in the list. Searches are case insensitive. You only see resources
that you have permissions to view.
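As a hedged example, once a build project exists, builds can be started and inspected from the CLI (the project
name is illustrative):

    # Start a build of an existing CodeBuild project
    aws codebuild start-build --project-name my-build-project

    # Check build status using the id returned by start-build
    aws codebuild batch-get-builds --ids <build-id>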

2.3 AWS CODEDEPLOY


CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises
instances, serverless Lambda functions, or Amazon ECS services. We can deploy a nearly unlimited variety of
application content, including:
 Code
 Serverless AWS Lambda functions
 Web and configuration files
 Executables
 Packages
 Scripts
 Multimedia files
CodeDeploy can deploy application content that runs on a server and is stored in Amazon S3 buckets, GitHub
repositories, or Bitbucket repositories. CodeDeploy can also deploy a serverless Lambda function. You do not need to
make changes to your existing code before you can use CodeDeploy. CodeDeploy makes it easier for you to:
 Rapidly release new features.
 Update AWS Lambda function versions.
 Avoid downtime during application deployment.
 Handle the complexity of updating your applications, without many of the risks associated with error-prone
manual deployments.
The service scales with your infrastructure so you can easily deploy to one instance or thousands. CodeDeploy works
with various systems for configuration management, source control, continuous integration, continuous delivery, and
continuous deployment. CodeDeploy offers these benefits:
 Server, serverless, and container applications. CodeDeploy lets you deploy both traditional applications on
servers and applications that deploy a serverless AWS Lambda function version and an Amazon ECS
application.
 Automated deployments. CodeDeploy fully automates your application deployments across your
development, test, and production environments. CodeDeploy scales with your infrastructure so that you can
deploy to one instance or thousands.
 Minimize downtime. If your application uses the EC2/On-Premises compute platform, CodeDeploy helps
maximize your application availability. During an in-place deployment, CodeDeploy performs a rolling
update across Amazon EC2 instances. You can specify the number of instances to be taken offline at a time
for updates. During a blue/green deployment, the latest application revision is installed on replacement
instances. Traffic is rerouted to these instances when you choose, either immediately or as soon as you are
done testing the new environment. For both deployment types, CodeDeploy tracks application health
according to rules you configure.
 Stop and roll back. You can automatically or manually stop and roll back deployments if there are errors.
 Centralized control. You can launch and track the status of your deployments through the CodeDeploy
console or the AWS CLI. You receive a report that lists when each application revision was deployed and to
which Amazon EC2 instances.
 Easy to adopt. CodeDeploy is platform-agnostic and works with any application. You can easily reuse your
setup code. CodeDeploy can also integrate with your software release process or continuous delivery
toolchain.
 Concurrent deployments. If you have more than one application that uses the EC2/On-Premises compute
platform, CodeDeploy can deploy them concurrently to the same set of instances.
Overview of CodeDeploy compute platforms
CodeDeploy is able to deploy applications to three compute platforms:
EC2/On-Premises: Describes instances of physical servers that can be Amazon EC2 cloud instances, on-premises
servers, or both. Applications created using the EC2/On-Premises compute platform can be composed of executable
files, configuration files, images, and more. Deployments that use the EC2/On-Premises compute platform manage
the way in which traffic is directed to instances by using an in-place or blue/green deployment type.
AWS Lambda: Used to deploy applications that consist of an updated version of a Lambda function. AWS Lambda
manages the Lambda function in a serverless compute environment made up of a high-availability compute structure.
All administration of the compute resources is performed by AWS Lambda. For more information, see Serverless
Computing and Applications. For more information about AWS Lambda and Lambda functions, see AWS Lambda.
You can manage the way in which traffic is shifted to the updated Lambda function versions during a deployment by
choosing a canary, linear, or all-at-once configuration.
Amazon ECS: Used to deploy an Amazon ECS containerized application as a task set. CodeDeploy performs a
blue/green deployment by installing an updated version of the application as a new replacement task set. CodeDeploy
reroutes production traffic from the original application task set to the replacement task set. The original task set is
terminated after a successful deployment. You can manage the way in which traffic is shifted to the updated task set
during a deployment by choosing a canary, linear, or all-at-once configuration.
Overview of CodeDeploy deployment types
CodeDeploy provides two deployment type options:
In-place deployment: The application on each instance in the deployment group is stopped, the latest application
revision is installed, and the new version of the application is started and validated. We can use a load balancer so that
each instance is deregistered during its deployment and then restored to service after the deployment is complete.
Only deployments that use the EC2/On-Premises compute platform can use in-place deployments.

1. Create deployable content


2. Add an application specification file (AppSpec file) which is unique to AWS CodeDeploy.
3. Bundle the deployable content and the AppSpec file into an archive file.
4. Upload it to an Amazon S3 bucket or a GitHub repository.
5. Provide AWS CodeDeploy with information about the deployment so that it can deploy the contents (a CLI sketch follows these steps).
6. Next, the AWS CodeDeploy agent on each instance polls AWS CodeDeploy.
7. AWS CodeDeploy agent on each instance pulls the target revision from the specified Amazon S3 bucket or
GitHub repository.
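A hedged CLI sketch of steps 3-5 (the application, deployment group, and bucket names are illustrative, and
the application must already be registered with CodeDeploy):

    # Bundle the current directory plus its AppSpec file and upload it to S3
    aws deploy push --application-name my-app \
        --s3-location s3://my-deploy-bucket/my-app.zip --source .

    # Ask CodeDeploy to deploy that revision to a deployment group
    aws deploy create-deployment --application-name my-app \
        --deployment-group-name my-app-group \
        --s3-location bucket=my-deploy-bucket,key=my-app.zip,bundleType=zip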
Blue/green deployment: The behaviour of deployment depends on which compute platform you use. Consider the
following cases.
Blue/green on an EC2/On-Premises compute platform: The instances in a deployment group (the original
environment-Blue) are replaced by a different set of instances (the replacement environment-Green) using these steps:
 Instances are provisioned for the replacement environment.
 The latest application revision is installed on the replacement instances.
 An optional wait time occurs for activities such as application testing and system verification.
 Instances in the replacement environment are registered with one or more Elastic Load Balancing load
balancers, causing traffic to be rerouted to them. Instances in the original environment are deregistered and
can be terminated or kept running for other uses.

Blue/green deployments on an EC2/On-Premises compute platform can be used to deploy applications with nearly
zero downtime and the ability to roll back. The process involves running two identical environments, one with the
current application version and the other with a new version, and then redirecting traffic from the current environment
to the new one. Here are some benefits of blue/green deployments on an EC2/On-Premises compute platform:
 Faster and more reliable switching: Traffic can be routed back to the original instances if they haven't been
terminated.
 Avoids problems with long-running instances: New instances are provisioned for the deployment and have
the latest server configurations.
 Near zero downtime: Releases can be deployed with almost no downtime.
Here are some things to consider when using blue/green deployments on an EC2/On-Premises compute
platform:
 Blue/green deployments only work with Amazon EC2 instances.
 When adding instances to the replacement environment, you can use existing instances or create new ones
manually.
 You can use settings from an Amazon EC2 Auto Scaling group to define and create instances in a new
Amazon EC2 Auto Scaling group.

 Blue/green on an AWS Lambda or Amazon ECS compute platform: Traffic is shifted in increments according to
a canary, linear, or all-at-once deployment configuration.
Canary: Traffic is shifted in two increments. You can choose from predefined canary options that specify the
percentage of traffic shifted to your updated Lambda function or ECS task set in the first increment and the
interval, in minutes, before the remaining traffic is shifted in the second increment.

Linear: Traffic is shifted in equal increments with an equal number of minutes between each increment. You
can choose from predefined linear options that specify the percentage of traffic shifted in each increment and
the number of minutes between each increment.
All-at-once: All traffic is shifted from the original Lambda function or ECS task set to the updated function
or task set all at once.

Blue/green deployments through AWS CloudFormation: Traffic is shifted from your current resources to your
updated resources as part of an AWS CloudFormation stack update. Currently, only ECS blue/green deployments are
supported. To update an application running on Amazon Elastic Container Service (Amazon ECS), you can use a
CodeDeploy blue/green deployment strategy. This strategy helps minimize interruptions caused by changing
application versions. In a blue/green deployment, you create a new application environment (referred to as green)
alongside your current, live environment (referred to as blue). This allows you to monitor and test the green
environment before routing live traffic from the blue environment to the green environment. After the green
environment is serving live traffic, you can safely terminate the blue environment. To perform CodeDeploy
blue/green deployments on ECS using CloudFormation, you include the following information in your stack template:
 A Hooks section that describes an AWS::CodeDeploy::BlueGreen hook.
 A Transform section that specifies the AWS::CodeDeployBlueGreen transform.

2.4 AWS CODEPIPELINE


AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release
pipelines for fast and reliable application and infrastructure updates. It allows users to build, test and deploy code into
a test or production environment using either the AWS CLI or a clean UI configuration process within the Amazon
Console. CodePipeline automates the steps required to release your software changes continuously.
For example, when developers commit changes to a source repository, CodePipeline automatically detects
the changes. Those changes are built, and if any tests are configured, those tests are run. After the tests are complete,
the built code is deployed to staging servers for testing. From the staging server, CodePipeline runs more tests, such
as integration or load tests. Upon the successful completion of those tests, and after a manual approval action that was
added to the pipeline is approved, CodePipeline deploys the tested and approved code to production instances.
CodePipeline can deploy applications to EC2 instances by using CodeDeploy, AWS Elastic Beanstalk, or AWS
OpsWorks Stacks. CodePipeline can also deploy container-based applications to services by using Amazon ECS.
Developers can also use the integration points provided with CodePipeline to plug in other tools or services, including
build services, test providers, or other deployment targets or systems. A pipeline can be as simple or as complex as
your release process requires.
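As a hedged example, a pipeline can be triggered and inspected from the CLI (the pipeline name is illustrative):

    # Manually start a pipeline execution
    aws codepipeline start-pipeline-execution --name my-pipeline

    # Inspect the state of each stage and action
    aws codepipeline get-pipeline-state --name my-pipeline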

2.5 AWS CODESTAR


AWS CodeStar is a cloud service designed to make it easier to develop, build, and deploy applications on AWS
by simplifying the setup of your entire development project. AWS CodeStar includes project templates for common
development platforms to enable provisioning of projects and resources for coding, building, testing, deploying, and
running your software project. The key benefits of the AWS CodeStar service are:
 Easily create new projects using templates for Amazon EC2, AWS Elastic Beanstalk, or AWS Lambda
using five different programming languages: JavaScript, Java, Python, Ruby, and PHP. By selecting a
template, the service will provision the underlying AWS services needed for your project and application.
 Unified experience for access and security policies management for your entire software team. Projects are
automatically configured with appropriate IAM access policies to ensure a secure application environment.
 Pre-configured project management dashboard for tracking various activities, such as code commits, build
results, deployment activity and more.
 Running sample code to help you get up and running quickly, while letting you use your favorite IDEs, like
Visual Studio, Eclipse, or any code editor that supports Git.
 Automated configuration of a continuous delivery pipeline for each project using AWS CodeCommit, AWS
CodeBuild, AWS CodePipeline, and AWS CodeDeploy.
 Integration with Atlassian JIRA Software for issue management and tracking directly from the AWS
CodeStar console
 With AWS CodeStar, development teams can build an agile software development workflow that not only
increases the speed in which teams can deploy software and bug fixes, but also enables developers to build
software that is more inline with customers’ requests and needs.
3 AWS MANAGEMENT TOOLS

AWS Management Tools help the user manage the components of the cloud and their account. They
allow the user to provision, monitor, and automate all of these components programmatically, giving control
over every part of the cloud infrastructure.

3.1 CLOUDWATCH


Amazon CloudWatch is a service that monitors applications, responds to performance changes, optimizes
resource use, and provides insights into operational health. By collecting data across AWS resources, CloudWatch
gives visibility into system-wide performance and allows users to set alarms, automatically react to changes, and gain
a unified view of operational health. Amazon CloudWatch can monitor Amazon Web Services resources such as
Amazon EC2 instances, Amazon DynamoDB tables, and Amazon RDS DB instances, as well as custom metrics
generated by your applications and services, and any log files your applications generate. You can use Amazon
CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health.
You can use these insights to react and keep your application running smoothly.
CloudWatch collects metrics from a variety of resources. These metrics are available to the user, as statistics,
through the Console and the CLI. CloudWatch allows the creation of alarms with defined rules:
 to perform actions such as auto scaling, or stopping, starting, or terminating instances
 to send notifications on your behalf using Simple Notification Service (SNS) actions
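For example, an alarm of the kind described above can be created from the CLI; the alarm name and SNS
topic ARN below are illustrative:

    # Alarm when average EC2 CPU utilization exceeds 70% for two 5-minute periods,
    # notifying an SNS topic
    aws cloudwatch put-metric-alarm --alarm-name high-cpu \
        --namespace AWS/EC2 --metric-name CPUUtilization --statistic Average \
        --period 300 --evaluation-periods 2 --threshold 70 \
        --comparison-operator GreaterThanThreshold \
        --alarm-actions arn:aws:sns:us-east-1:111122223333:ops-alerts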

3.2 AWS AUTO SCALING


AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady,
predictable performance at the lowest possible cost. Using AWS Auto Scaling, it’s easy to set up application scaling
for multiple resources across multiple services in minutes. The service provides a simple, powerful user interface that
lets you build scaling plans for resources including Amazon EC2 instances and Spot Fleets, Amazon ECS tasks,
Amazon DynamoDB tables and indexes, and Amazon Aurora Replicas. AWS Auto Scaling makes scaling simple
with recommendations that allow you to optimize performance, costs, or balance between them. If you’re already
using Amazon EC2 Auto Scaling to dynamically scale your Amazon EC2 instances, you can now combine it with
AWS Auto Scaling to scale additional resources for other AWS services. With AWS Auto Scaling, your applications
always have the right resources at the right time.
For example, the following Auto Scaling group has a minimum size of four instances, a desired capacity of six
instances, and a maximum size of twelve instances. The scaling policies that you define adjust the number of
instances, within your minimum and maximum number of instances, based on the criteria that you specify.
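A hedged CLI sketch of such a group and one scaling policy (the names and subnets are illustrative); the
target-tracking policy keeps average CPU utilization near 50%:

    # Group with a minimum of four, desired capacity of six, and maximum of twelve
    aws autoscaling create-auto-scaling-group --auto-scaling-group-name web-asg \
        --launch-configuration-name web-lc \
        --min-size 4 --desired-capacity 6 --max-size 12 \
        --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222"

    # Scaling policy that adjusts capacity within those bounds
    aws autoscaling put-scaling-policy --auto-scaling-group-name web-asg \
        --policy-name cpu-at-50 --policy-type TargetTrackingScaling \
        --target-tracking-configuration '{"PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"}, "TargetValue": 50.0}'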

Benefits of AutoScaling
 Setup scaling quickly: AWS Auto Scaling lets you set target utilization levels for multiple resources in a
single, intuitive interface. You can quickly see the average utilization of all of your scalable resources
without having to navigate to other consoles. For example, if your application uses Amazon EC2 and
Amazon DynamoDB, you can use AWS Auto Scaling to manage resource provisioning for all of the EC2
Auto Scaling groups and database tables in your application.
 Make smart scaling decisions: AWS Auto Scaling lets you build scaling plans that automate how groups
of different resources respond to changes in demand. You can optimize availability, costs, or a balance of
both. AWS Auto Scaling automatically creates all of the scaling policies and sets targets for you based on
your preference. AWS Auto Scaling monitors your application and automatically adds or removes capacity
from your resource groups in real-time as demands change.
 Automatically maintain performance: Using AWS Auto Scaling, you maintain optimal application
performance and availability, even when workloads are periodic, unpredictable, or continuously changing.
AWS Auto Scaling continually monitors your applications to make sure that they are operating at your
desired performance levels. When demand spikes, AWS Auto Scaling automatically increases the capacity
of constrained resources so you maintain a high quality of service.
 Pay only for what you need: AWS Auto Scaling can help you optimize your utilization and cost
efficiencies when consuming AWS services so you only pay for the resources you actually need. When
demand drops, AWS Auto Scaling will automatically remove any excess resource capacity so you avoid
overspending. AWS Auto Scaling is free to use, and allows you to optimize the costs of your AWS
environment.

3.3 AWS CONTROL TOWER


AWS Control Tower offers a straightforward way to set up and manage a multi-account AWS environment
while following prescribed best practices. It combines the abilities of a number of other AWS services,
including AWS Organizations, AWS Service Catalog, and IAM Identity Center (the successor to AWS Single
Sign-On), to rapidly create a landing zone. It uses preventive and detective controls (guardrails) to help keep
your organizations and accounts from straying from recommended practices. For example, guardrails can be
used to make sure that security logs and the required cross-account access rights are created and kept in place.

3.4 CLOUD FORMATION

AWS CloudFormation is a powerful service that allows you to define and provision AWS infrastructure as
code. It uses templates written in JSON or YAML to describe the desired state of your cloud resources. Here’s
a breakdown of key concepts and features:
Key Concepts
1. Templates: These are JSON or YAML files that describe the AWS resources you want to create. Templates
can include parameters, resources, outputs, and more.
2. Stacks: A stack is a collection of AWS resources that are created and managed as a single unit. You create a
stack by submitting a CloudFormation template.
3. Resources: The actual AWS services that you want to provision (e.g., EC2 instances, S3 buckets, RDS
databases) are defined in the resources section of your template.
4. Parameters: These allow you to pass in values to your template at runtime, making it more dynamic and
reusable.
5. Outputs: You can specify values that you want to be returned after the stack is created. This can be useful for
referencing resource attributes.
6. Mappings: These provide a way to create simple lookup tables, allowing you to define variables based on
conditions, such as regions.
7. Conditions: Conditions enable you to specify whether certain resources are created or certain properties are
assigned, based on the values of parameters or mappings.
Features
 Version Control: Since templates are code, they can be stored in version control systems, making it easier to
track changes.
 Change Sets: Before applying changes to a stack, you can create a change set to review how the proposed
changes will affect your stack.
 Rollback: If a stack creation or update fails, CloudFormation can automatically roll back the changes to the
last stable state.
 Nested Stacks: You can organize your infrastructure by breaking it into smaller templates, known as nested
stacks.
Example
Here’s a simple YAML example of a CloudFormation template that creates an S3 bucket (a minimal sketch; the bucket name is illustrative):
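    AWSTemplateFormatVersion: '2010-09-09'
    Description: Minimal template that creates a single S3 bucket
    Resources:
      ExampleBucket:
        Type: AWS::S3::Bucket
        Properties:
          BucketName: my-example-bucket-12345   # optional; must be globally unique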

Best Practices
 Use Parameters: To make your templates reusable and flexible.

 Organize with Nested Stacks: For better maintainability, especially in large deployments.

 Utilize Outputs: To expose necessary values for other stacks or services.


 Test Changes: Use change sets to understand the impact of changes before applying them.

Getting Started
To get started with AWS CloudFormation:
1. Create a Template: Define your infrastructure in JSON or YAML.
2. Upload to CloudFormation: Use the AWS Management Console, AWS CLI, or SDKs to create a stack
with your template.
3. Monitor and Manage: Use the console to monitor the stack’s creation, update, and deletion processes.
AWS CloudFormation is an excellent tool for automating infrastructure management, making deployments
consistent and repeatable.
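As a hedged CLI example of steps 2 and 3 (the stack and template file names are illustrative):

    # Create a stack from a local template file and watch its progress
    aws cloudformation create-stack --stack-name demo-bucket-stack \
        --template-body file://bucket.yaml
    aws cloudformation describe-stacks --stack-name demo-bucket-stack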

3.5 CLOUD TRAIL

AWS CloudTrail is a service that enables you to monitor and log activity within your AWS account. It provides a
comprehensive record of actions taken by users, roles, or AWS services, helping you maintain security, compliance,
and operational governance. Here’s a closer look at its key features and functionalities:
Key Features
1. Event Logging: CloudTrail records API calls made within your AWS account, including calls from the AWS
Management Console, AWS CLI, SDKs, and other AWS services. This includes details like who made the
request, the time it was made, the source IP address, and what actions were performed.
2. Management Events: These include operations that are performed on AWS resources, such as creating,
modifying, or deleting resources. They are useful for tracking changes to your infrastructure.
3. Data Events: These capture operations on specific resources, such as S3 object-level API calls or DynamoDB
table-level actions. Data events provide granular logging for specific resource operations.
4. Insight Events: This feature helps identify unusual API activity in your account, providing insights into
potentially unauthorized or unexpected actions.
5. S3 Bucket Logging: You can configure CloudTrail to store logs in an S3 bucket for long-term storage and
analysis.
6. Integration with AWS Services: CloudTrail integrates with other AWS services like AWS Lambda,
Amazon CloudWatch, and AWS Config, enabling automated responses and monitoring.
7. Multi-Region and Multi-Account Support: You can configure CloudTrail to log events from multiple AWS
accounts and regions, centralizing your logging and monitoring efforts.
Use Cases
1. Security and Compliance: CloudTrail helps track user activity, aiding in compliance with regulations and
internal policies by providing a clear audit trail.
2. Operational Troubleshooting: By analyzing CloudTrail logs, you can troubleshoot issues and identify the
root cause of operational problems.
3. Resource Management: Monitoring changes in resources helps ensure that your AWS environment aligns
with operational standards and best practices.
4. Cost Management: Understanding API usage can help identify unused or underutilized resources, leading to
cost-saving opportunities.
Example
To enable CloudTrail and create a trail, you can use the AWS Management Console or AWS CLI. Here’s how to
create a trail using the CLI (a minimal sketch; the names are illustrative, and the S3 bucket must already exist
with a CloudTrail bucket policy):
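    # Create a trail that logs events from all regions into an existing S3 bucket
    aws cloudtrail create-trail --name my-trail \
        --s3-bucket-name my-cloudtrail-logs --is-multi-region-trail

    # Trails do not record events until logging is started
    aws cloudtrail start-logging --name my-trail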

Best Practices
 Enable Logging for All Regions: This ensures that you capture all relevant activity across your AWS
infrastructure.
 Use S3 Bucket Policies: Implement strict policies on your S3 bucket where CloudTrail logs are stored to
enhance security.
 Integrate with CloudWatch: Set up CloudWatch Alarms to monitor for specific API calls or unusual
activity.
 Review Logs Regularly: Schedule regular reviews of your CloudTrail logs to stay informed about account
activity.
Getting Started
To get started with AWS CloudTrail:
1. Enable CloudTrail: Use the AWS Management Console, CLI, or SDK to create a trail and specify your S3
bucket for log storage.
2. Monitor Events: Use the AWS CloudTrail console to view event history and logs.
3. Analyze Logs: You can analyze the logs in S3 using tools like Amazon Athena, or integrate with SIEM
systems for further analysis.
AWS CloudTrail is essential for maintaining transparency in your AWS account, ensuring that you can effectively
monitor, audit, and respond to activities within your environment.

3.6 AWS LICENSE MANAGER

AWS License Manager is a service that helps you manage software licenses across AWS and on-premises
environments. It simplifies the licensing process, ensures compliance, and provides visibility into license usage.
Key Features
1. Centralized Management: License Manager provides a single interface to manage software licenses from
various vendors, including Microsoft, SAP, Oracle, and more. This centralized view helps streamline
compliance and reporting.
2. License Tracking: The service allows you to track license usage, helping you to ensure that you are
compliant with licensing agreements. You can monitor both AWS and on-premises environments.
3. Automated License Consumption: AWS License Manager can automatically manage license consumption
for eligible AWS services, reducing the manual effort required to track license usage.
4. Integration with AWS Services: The service integrates with other AWS services, such as AWS Systems
Manager and Amazon EC2, to help manage licenses associated with those resources.
5. Custom License Models: You can create custom licensing rules that match your organization’s requirements,
allowing for more flexibility in managing licenses.
6. Compliance and Reporting: License Manager provides detailed reports on license usage and compliance
status, making it easier to demonstrate adherence to licensing agreements during audits.
Benefits
 Cost Management: By effectively managing licenses, you can avoid over-provisioning and reduce costs
associated with unused or underutilized licenses.
 Risk Reduction: Ensures compliance with licensing agreements, reducing the risk of costly penalties or legal
issues.
 Operational Efficiency: Automating the license management process allows your IT teams to focus on other
strategic initiatives rather than manual tracking.
 Scalability: As your organization grows and uses more software, AWS License Manager scales to meet your
needs.
Use Cases
1. Microsoft License Management: Manage Microsoft licenses for Windows Server, SQL Server, and other
Microsoft products used across AWS and on-premises environments.
2. Oracle Database Licensing: Track and manage Oracle licenses effectively, ensuring compliance with
Oracle’s licensing policies.
3. SaaS and Subscription Software: Keep track of SaaS licenses and subscriptions, ensuring that you only pay
for what you use.
Getting Started
To get started with AWS License Manager:
1. Access the Console: Go to the AWS Management Console and navigate to License Manager.
2. Create License Configurations: Define your licensing models and rules based on your software
requirements.
3. Track License Usage: Use the License Manager dashboard to monitor license consumption and compliance.
4. Generate Reports: Utilize the reporting features to generate insights into your license usage and compliance
status.
Example
Here’s a simple workflow to manage Microsoft Windows Server licenses:
1. Create a License Configuration: Define how many Windows Server licenses you need and the rules for
usage.
2. Provision Resources: When launching new EC2 instances, specify the license configuration to automatically
track the licenses consumed.
3. Monitor Usage: Regularly check the License Manager dashboard for reports on your Windows Server license
consumption.
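A hedged CLI sketch of step 1 of this workflow (the configuration name and counts are illustrative):

    # License configuration that counts vCPUs and enforces a hard limit of 64
    aws license-manager create-license-configuration --name WindowsServerDatacenter \
        --license-counting-type vCPU --license-count 64 --license-count-hard-limit

    # Review existing configurations and their consumption
    aws license-manager list-license-configurations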
AWS License Manager is an essential tool for organizations that want to maintain compliance and optimize costs
associated with software licenses.
