
Week 2 Afternoon Session

AWS Identity and Access Management (IAM) Services


2.1 What is IAM?
AWS Identity and Access Management (IAM) is a web service that helps you securely control access to
AWS resources. With IAM, you can centrally manage permissions that control which AWS resources users
can access. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use
resources.

When you create an AWS account, you begin with one sign-in identity that has complete access to all
AWS services and resources in the account. This identity is called the AWS account root user and is
accessed by signing in with the email address and password that you used to create the account. We strongly
recommend that you don't use the root user for your everyday tasks. Safeguard your root user credentials
and use them to perform the tasks that only the root user can perform.

2.2 What is IAM used for?


AWS Identity and Access Management (IAM) is a web service that enables Amazon Web Services (AWS)
customers to manage users and user permissions in AWS. IAM enables you to securely control access to
AWS services and resources for your users. With IAM, organizations can centrally manage users, security
credentials such as access keys, and permissions that control which AWS resources users can access. IAM
roles can have defined permissions and be assigned to users, applications and services. AWS IAM helps us
to:
 Manage users and their access
 Manage roles and their permissions
 Manage federated users and their permissions
 Securely provide credentials for applications that run on EC2 instances
 Add two-factor authentication to your account and to individual users for extra security.

IAM's primary capability is managing access and permissions. It provides two essential functions that work
together to establish basic security for enterprise resources:

 Authentication. Authentication validates the identity of a user. It is typically handled by checking
credentials -- such as usernames and passwords -- against an established database of credentials within
the AWS IAM service. Advanced authentication might include multifactor authentication (MFA), which
couples traditional credentials with a second form of authentication, such as sending a unique code to a
user's smartphone.

 Authorization. Once a user is authenticated, authorization defines the access rights for that user and
limits access to only the resources permitted for that specific user. Not every user will have access to
every application, data set or service across the organization. Authorization typically follows the concept
of least privilege, where users receive the minimum access rights that are necessary for their jobs.
2.3 Principle of least privilege

The “Principle of Least Privilege” (POLP) states that a given user account should have exactly the access
rights necessary to execute its role’s responsibilities—no more, no less. POLP is a fundamental concept
within identity and access management (IAM).

Least privilege is critical for preventing the gradual accumulation of unchecked access rights over a user
account’s lifecycle. The “user account lifecycle” defines the collective management stages for every user
account over time: creation, review/update, and deactivation.

2.4 IAM Introduction: Users, Groups, Roles, Policies

IAM deals with four principal entities: users, groups, roles and policies. These entities detail who a user is
and what that user is allowed to do within the environment:

 Users. A user is one of the most basic entities in IAM. A user is typically a person or a service, such as
an application or platform, which interacts with the environment. An IT team assigns users
authorization credentials, such as a username and password, which validate the user's identity. Users can
then access resources that are assigned through permissions or policies.
 Groups. A group is a collection of users that share common permissions and policies. Any permissions
associated with a group are automatically assigned to all users in the group. For example, placing a user into
an Administrator group will automatically assign the user any permissions given to the Administrator
group. IT teams can move users between groups and automatically shift permissions as groups change.
 Roles. A role is a generic identity that is not associated with any specific user. Roles do not use
passwords and can be assumed by authorized users. Roles enable different users to temporarily assume
different permissions for different tasks.
 Policies. Policies are AWS objects that are attached to users, groups, roles or resources that define the
permissions granted to those identities. When a user tries to access a resource, the request is checked
against the associated policies. If the request is permitted, then it is granted. If not, it is denied. AWS
policies are based on six different criteria: identity, resources, permission boundaries, service control
policies, access control lists and session policies. IT teams can attach multiple policies to each identity
for more granular control of permissions.

2.5 IAM Policies

A policy is an object in AWS that, when associated with an identity or resource, defines their permissions.
AWS evaluates these policies when an IAM principal (user or role) makes a request. Permissions in the
policies determine whether the request is allowed or denied. You manage access in AWS by creating
policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources.

IAM policies define permissions for an action regardless of the method that you use to perform the
operation. For example, if a policy allows the GetUser action, then a user with that policy can get user
information from the AWS Management Console, the AWS CLI, or the AWS API. When you create an
IAM user, you can choose to allow console or programmatic access. If console access is allowed, the IAM
user can sign in to the console using their sign-in credentials. If programmatic access is allowed, the user can
use access keys to work with the CLI or API.
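To make this concrete, the snippet below builds the kind of identity-based policy document described above, allowing only the GetUser action mentioned in the example. This is an illustrative sketch: the account ID in the ARN is a placeholder, not a real resource.

```python
import json

# A minimal identity-based policy allowing only the iam:GetUser action.
# The account ID in the ARN below is a documentation placeholder.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:GetUser",
            "Resource": "arn:aws:iam::123456789012:user/*",
        }
    ],
}

policy_json = json.dumps(policy, indent=2)
print(policy_json)
```

A user with this policy attached could call GetUser from the console, CLI, or API, but every other action would be implicitly denied.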
Policy types

The following policy types, listed in order from most frequently used to least frequently used, are available
for use in AWS.

1. Identity-based policies – Attach managed and inline policies to IAM identities (users, groups to which
users belong, or roles). Identity-based policies grant permissions to an identity.
2. Resource-based policies – Attach inline policies to resources. The most common examples of resource-
based policies are Amazon S3 bucket policies and IAM role trust policies. Resource-based policies grant
permissions to the principal that is specified in the policy. Principals can be in the same account as the
resource or in other accounts.
3. Permissions boundaries – Use a managed policy as the permissions boundary for an IAM entity (user
or role). That policy defines the maximum permissions that the identity-based policies can grant to an
entity, but does not grant permissions. Permissions boundaries do not define the maximum permissions
that a resource-based policy can grant to an entity.
4. Organizations SCPs – Use an AWS Organizations service control policy (SCP) to define the maximum
permissions for account members of an organization or organizational unit (OU). SCPs limit permissions
that identity-based policies or resource-based policies grant to entities (users or roles) within the account,
but do not grant permissions.
5. Access control lists (ACLs) – Use ACLs to control which principals in other accounts can access the
resource to which the ACL is attached. ACLs are similar to resource-based policies, although they are
the only policy type that does not use the JSON policy document structure. ACLs are cross-account
permissions policies that grant permissions to the specified principal. ACLs cannot grant permissions to
entities within the same account.
6. Session policies – Pass advanced session policies when you use the AWS CLI or AWS API to assume a
role or a federated user. Session policies limit the permissions that the role or user's identity-based
policies grant to the session. Session policies limit permissions for a created session, but do not grant
permissions. For more information, see Session Policies.

2.6 Creating IAM policies


You can create a customer managed policy in the AWS Management Console using one of the following
methods:
 JSON — Paste and customize a published example identity-based policy.
 Visual editor — Construct a new policy from scratch in the visual editor. If you use the visual editor,
you do not have to understand JSON syntax.
 Import — Import and customize a managed policy from within your account. You can import an AWS
managed policy or a customer managed policy that you previously created.
The number and size of IAM resources in an AWS account are limited. For more information, see IAM and
AWS STS quotas.

Creating policies using the JSON editor


You can type or paste policies in JSON by choosing the JSON option. This method is useful for copying
an example policy to use in your account. Or, you can type your own JSON policy document in the JSON
editor. You can also use the JSON option to toggle between the visual editor and JSON to compare the
views.
When you create or edit a policy in the JSON editor, IAM performs policy validation to help you create an
effective policy. IAM identifies JSON syntax errors, while IAM Access Analyzer provides additional policy
checks with actionable recommendations to help you further refine the policy.
To use the JSON policy editor to create a policy
1. Sign in to the AWS Management Console and open the IAM console
at https://console.aws.amazon.com/iam/.
2. In the navigation pane on the left, choose Policies.
3. Choose Create policy.
4. In the Policy editor section, choose the JSON option.
5. Type or paste a JSON policy document. For details about the IAM policy language, see IAM JSON
policy reference.
6. Resolve any security warnings, errors, or general warnings generated during policy validation, and then
choose Next.
7. (Optional) When you create or edit a policy in the AWS Management Console, you can generate a
JSON or YAML policy template that you can use in AWS CloudFormation templates.
To do this, in the Policy editor choose Actions, and then choose Generate CloudFormation template.
To learn more about AWS CloudFormation see AWS Identity and Access Management resource type
reference in the AWS CloudFormation User Guide.
8. When you are finished adding permissions to the policy, choose Next.
9. On the Review and create page, type a Policy Name and a Description (optional) for the policy that
you are creating. Review Permissions defined in this policy to see the permissions that are granted by
your policy.
10. Choose Create policy to save your new policy.

Creating policies with the visual editor


The visual editor in the IAM console guides you through creating a policy without having to write JSON
syntax. To view an example of using the visual editor to create a policy, see Controlling access to identities.
To use the visual editor to create a policy
1. Sign in to the AWS Management Console and open the IAM console
at https://console.aws.amazon.com/iam/.
2. In the navigation pane on the left, choose Policies.
3. Choose Create policy.
4. In the Policy editor section, find the Select a service section, and then choose an AWS service. You
can use the search box at the top to limit the results in the list of services. You can choose only one
service within a visual editor permission block. To grant access to more than one service, add multiple
permission blocks by choosing Add more permissions.
5. In Actions allowed, choose the actions to add to the policy. You can choose actions in the following
ways:
 Select the check box for all actions.
 Choose add actions to type the name of a specific action. You can use wildcards (*) to specify
multiple actions.
 Select one of the Access level groups to choose all actions for the access level (for
example, Read, Write, or List).
 Expand each of the Access level groups to choose individual actions.

6. For Resources, if the service and actions that you selected in the previous steps do not support
choosing specific resources, all resources are allowed and you cannot edit this section.
If you chose one or more actions that support resource-level permissions, then the visual editor lists
those resources. You can then expand Resources to specify resources for your policy.
7. To add more permission blocks, choose Add more permissions. For each block, repeat steps 4 through 6.
8. When you are finished adding permissions to the policy, choose Next.
9. On the Review and create page, type a Policy Name and a Description (optional) for the policy that
you are creating. Review the Permissions defined in this policy to make sure that you have granted the
intended permissions.
10. Choose Create policy to save your new policy.
After you create a policy, you can attach it to your groups, users, or roles.

2.7 IAM roles

An IAM role is an IAM identity that you can create in your account that has specific
permissions. An IAM role is similar to an IAM user, in that it is an AWS identity with permission
policies that determine what the identity can and cannot do in AWS. However, instead of being uniquely
associated with one person, a role is intended to be assumable by anyone who needs it. Also, a role does not
have standard long-term credentials such as a password or access keys associated with it. Instead, when you
assume a role, it provides you with temporary security credentials for your role session.

You can use roles to delegate access to users, applications, or services that don't normally have access to
your AWS resources. For example, you might want to grant users in your AWS account access to resources
they don't usually have, or grant users in one AWS account access to resources in another account. Or you
might want to allow a mobile app to use AWS resources, but not want to embed AWS keys within the app
(where they can be difficult to rotate and where users can potentially extract them). Sometimes you want to
give AWS access to users who already have identities defined outside of AWS, such as in your corporate
directory. Or, you might want to grant access to your account to third parties so that they can perform an
audit on your resources.
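Every role pairs a permissions policy with a trust policy that says who may assume it. As a hedged illustration of the delegation idea above, the trust policy below lets the EC2 service assume a role, so an application running on an instance can obtain temporary credentials instead of embedded keys:

```python
import json

# Sketch of a role trust policy: the EC2 service is the trusted
# principal allowed to call sts:AssumeRole for this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}
print(json.dumps(trust_policy, indent=2))
```

Swapping the Principal element for another AWS account ID, or a federated identity provider, covers the cross-account and corporate-directory scenarios described above.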

2.8 Creating a role for an AWS service (console): IAM Roles Hands-On

You can use the AWS Management Console to create a role for a service. Because some services support
more than one service role, see the AWS documentation for your service to see which use case to choose.
You can learn how to assign the necessary trust and permissions policies to the role so that the service can
assume the role on your behalf. The steps that you can use to control the permissions for your role can vary,
depending on how the service defines the use cases, and whether or not you create a service-linked role.

To create a role for an AWS service (console)


1. Sign in to the AWS Management Console and open the IAM console
at https://console.aws.amazon.com/iam/.
2. In the navigation pane of the IAM console, choose Roles, and then choose Create role.

3. For Select trusted entity, choose AWS service.


4. Choose the use case for your service. Use cases are defined by the service to include the trust policy
required by the service. Then, choose Next.
5. If possible, select the policy to use for the permissions policy or choose Create policy to open a new
browser tab and create a new policy from scratch. For more information, see Creating IAM policies. After
you create the policy, close that tab and return to your original tab. Select the check box next to the
permissions policies that you want the service to have.
Depending on the use case that you selected, the service might allow you to do any of the following:
 Nothing, because the service defines the permissions for the role.
 Choose from a limited set of permissions.
 Choose from any permissions.
 Select no policies at this time, create the policies later, and then attach them to the role.
6. Choose Next.
7. For Role name, the degree of role name customization is defined by the service. If the service defines
the role's name, this option is not editable. In other cases, the service might define a prefix for the role and
allow you to enter an optional suffix. Some services allow you to specify the entire name of your role.
If possible, enter a role name or role name suffix to help you identify the purpose of this role. Role names
must be unique within your AWS account. They are not distinguished by case. For example, you cannot
create roles named both PRODROLE and prodrole. Because other AWS resources might reference the
role, you cannot edit the name of the role after it has been created.
8. Choose Edit in the Step 1: Select trusted entities or Step 2: Add permissions sections to edit the use
cases and permissions for the role.
9. Review the role and then choose Create role.

2.9 IAM Security Tools

The following tools are crucial to upholding IAM security. They include but are not limited to:

a. Single Sign-on (SSO)


Single sign-on is a type of IAM control that enables users to authenticate their identity across numerous
resources via one set of credentials. The first time a user signs on, the username and password are directed to
the identity provider for verification. The authentication server checks the credentials against the directory
where user data is stored and initiates an SSO session on the user’s browser. When the user requests access
to an application within the trusted group, instead of requesting a password, the service provider requests
that the identity provider authenticates the user’s identity.
Advantages of SSO include:
 Attack surface reduced from many credentials down to one
 A streamlined user experience and minimized password fatigue
 Lowered security risks involving partners, customers and other entities associated with the
organization

b. Multi-factor Authentication (MFA)


When a hacker finds an account supported by only one password and one username, they know they've hit
pay dirt. Cybercriminals have access to software purchased on the Dark Web that can send hundreds of
thousands of passwords and usernames to this account in less than a minute. Once the account recognizes
the right combination of letters, numbers and symbols, the hacker can access the account and potentially get
ahold of sensitive company information.

Multi-factor authentication ensures that digital users are who they say they are by requiring that they
provide at least two pieces of evidence to prove their identity. Each piece of evidence must come from a
different category: something they know, something they have or something they are. If one of the factors
has been compromised, the chances of another factor also being compromised are low, so requiring multiple
authentication factors thereby provides a higher level of assurance about the user’s identity. These additional
factors might take the form of numerical codes sent to a mobile phone, key fobs, smart cards, location
checks, biometric information or other factors.
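The numerical codes mentioned above are typically generated with TOTP (RFC 6238), the algorithm behind virtual MFA devices: the device and the server share a secret and each independently derives a short code from the current 30-second time window. A minimal sketch, using the RFC's published test secret:

```python
import hashlib
import hmac
import struct

# Sketch of TOTP (RFC 6238): HMAC-SHA1 over the 30-second time-step
# counter, followed by the dynamic truncation from RFC 4226.
def totp(secret: bytes, unix_time: int, digits: int = 6) -> str:
    counter = unix_time // 30                      # 30-second time step
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 6238 reference secret; at T=59 the SHA-1 test value is 94287082,
# whose 6-digit form is 287082.
print(totp(b"12345678901234567890", 59))  # 287082
```

Because both sides compute the same code from the shared secret and the clock, no code ever travels over the network ahead of time, which is what makes the factor "something you have".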

(Figure: how an access request works with MFA.)

c. Directory
User identity data is a prime target for attackers, especially when it’s housed across decentralized data
stores with inconsistent security policies. IAM security can help keep employee, partner and customer data
safe via a directory that centralizes and encrypts identity data, protecting it from attacks. A solid directory
solution can also help protect against insider attacks by allowing enterprises to limit admin access and by
sending active and passive alerts when suspicious activity occurs.

d. Self-service Password Resets


One important but often overlooked feature of an IAM security solution is the ability to implement self-
service password resets instead of requiring users to send requests to IT department help desks. By enabling
employees to use MFA to authenticate their identity and reset passwords, not only do you reduce the number
of costly password resets, but the security risk of password hijacking by hackers monitoring system "chatter"
is significantly reduced.

2.10 IAM Security Tools Hands On

Let us generate a credential report. On the bottom left, I am going to choose Credential report, and I can
click Download Report to download it as a CSV file.
Now this CSV, because I am using a training account, is not fascinating, but as we can see it has two rows:
my root account and my user named sandy. We can see when each user was created, whether the password
was enabled, and when the password was last used and last changed.
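Since the report is plain CSV, it is easy to analyze programmatically. The sketch below parses a made-up two-row sample (real reports contain many more columns) and flags users without MFA:

```python
import csv
import io

# Fabricated two-row sample of an IAM credential report; real reports
# include many more columns (access key ages, cert status, and so on).
sample = """\
user,password_enabled,password_last_used,mfa_active
<root_account>,not_supported,2024-01-10T09:00:00+00:00,true
sandy,true,2023-06-01T12:00:00+00:00,false
"""

rows = list(csv.DictReader(io.StringIO(sample)))
no_mfa = [r["user"] for r in rows if r["mfa_active"] == "false"]
print(no_mfa)  # users worth a second look: ['sandy']
```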

This report is extremely helpful if you want to find users that have not been changing their password, or
not using their password or their account at all. It gives you a great way to find which users deserve your
attention from a security standpoint. Next, I want to look at IAM Access Advisor: I am going to click on my
user sandy, and on the right-hand side there is an Access Advisor tab.
This shows me when services were last used. Recent activity usually appears within four hours, so if you
don’t see all the data, that’s why. We can see, for example, that Identity and Access Management was last
accessed today, thanks to this policy right here. The Health APIs and Notifications were also accessed
today: that is the little bell in the console that is automatically queried to check whether there are any
notifications for your account.

We will see later that this is the Personal Health Dashboard. But for the other services, for example AWS
Accounts or Certificate Manager, I have not been using them, so maybe it makes sense for me to remove
these permissions from this user, because it seems this user is not using these services. This is the whole
power of Access Advisor. And as you can see, there are lots of services in AWS: about 23 pages just like
this, roughly 230 services at the time of recording. We have now seen all the security tools available in IAM.

2.11 IAM Best Practices

With AWS Identity and Access Management (IAM), you can specify who can access which AWS services
and resources, and under which conditions. To help secure your AWS resources, follow these IAM best
practices.

1. Use temporary credentials

Require human users to use federation with an identity provider to access AWS by using temporary
credentials. You can use an identity provider for your human users to provide federated access to AWS
accounts by assuming IAM roles, which provide temporary credentials. For centralized access management,
we recommend that you use AWS IAM Identity Center to manage access to your accounts and permissions
within those accounts.
Require workloads to use temporary credentials with IAM roles to access AWS
A workload is a collection of resources and code, such as an application or backend process, that requires
an identity to make requests to AWS services. IAM roles have specific permissions and provide a way for
workloads to access AWS by relying on temporary security credentials through an IAM role. For more
information, see IAM roles.

2. Require multi-factor authentication (MFA)

We recommend using IAM roles for human users and workloads accessing your AWS resources so that
they rely on temporary credentials. However, for scenarios in which you need IAM users or root users in
your account, require MFA for additional security. Each user's credentials and device-generated response to
an authentication challenge are required to complete the sign-in process.

3. Rotate access keys regularly for use cases that require long-term credentials

Where possible, we recommend relying on temporary credentials instead of creating long-term credentials
such as access keys. However, for scenarios in which you need IAM users with programmatic access and
long-term credentials, use access key last used information to rotate and remove access keys regularly. For
more information, see Rotating access keys.
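The "last used" data lends itself to a simple age check. As a sketch (the key records and the 90-day threshold below are illustrative assumptions, not an AWS default), a rotation audit might look like:

```python
from datetime import datetime, timedelta, timezone

# Sketch of a rotation check: flag access keys older than a maximum age.
# The key records below are fabricated examples.
def keys_due_for_rotation(keys, now, max_age_days=90):
    cutoff = now - timedelta(days=max_age_days)
    return [k["id"] for k in keys if k["created"] < cutoff]

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
keys = [
    {"id": "AKIAOLD", "created": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"id": "AKIANEW", "created": datetime(2024, 5, 15, tzinfo=timezone.utc)},
]
print(keys_due_for_rotation(keys, now))  # ['AKIAOLD']
```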
4. Safeguard your root user credentials and don't use them for everyday tasks

When you create an AWS account, you establish a root user name and password to sign in to the AWS
Management Console. Configure MFA to safeguard these credentials the same way you would protect other
sensitive personal information. Also, use your root user to complete the tasks that can be performed only by
the root user—and not for everyday tasks. For more information, see Best practices to protect your account's
root user.

5. Grant least privilege

a. Apply least-privilege permissions


When you set permissions with IAM policies, grant only the specific permissions required to perform
specific tasks, also known as least-privilege permissions. You might start with broad permissions while you
explore the permissions that are required for your workloads or use cases. You then can reduce permissions
to work toward least privilege. For more information, see Access management for AWS resources.

b. Get started with AWS managed policies and move toward least-privilege permissions
To get started granting permissions to your users and workloads, use the AWS managed policies that grant
permissions for many common use cases and are available in your AWS account. Keep in mind that AWS
managed policies might not grant least-privilege permissions for your specific use cases because they are
available for use by all AWS customers. As a result, we recommend that you reduce permissions further by
defining customer managed policies that are specific to your use cases. For more information, see AWS
managed policies. For information about AWS managed policies that are designed for specific job functions,
see AWS managed policies for job functions.

c. Regularly review and remove unused users, roles, and permissions


We recommend that you reduce permissions and remove unused users and roles with the goal of achieving
least-privilege permissions. IAM provides last accessed information to help you identify the permissions that
you no longer require, and you can use this information to refine your IAM policies to better adhere to least-
privilege permissions. For more information, see Refining permissions in AWS using last accessed
information.

d. Use IAM conditions in policies to further restrict access


You can specify conditions under which a permission is in effect. For example, you can write a policy
condition to specify that all requests must be sent by using SSL. You can also use conditions to grant access
to service actions, but only if they are used through a specific AWS service, such as AWS CloudFormation.
For more information, see IAM JSON policy elements: Condition.
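The SSL example above translates into a Condition element on a Deny statement. The sketch below uses the global aws:SecureTransport condition key to deny any S3 request not sent over SSL:

```python
import json

# Sketch of a policy statement that denies any S3 request made without
# SSL, using the aws:SecureTransport global condition key.
statement = {
    "Effect": "Deny",
    "Action": "s3:*",
    "Resource": "*",
    "Condition": {"Bool": {"aws:SecureTransport": "false"}},
}
print(json.dumps(statement, indent=2))
```

Because the statement is a Deny, it overrides any Allow elsewhere whenever the condition matches, which is the standard pattern for enforcing guardrails with conditions.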

6. Use IAM Access Analyzer

a. Generate least-privilege policies based on access activity


To grant only the permissions required to perform tasks, you can generate policies based on the access
activity that is found in AWS CloudTrail. IAM Access Analyzer analyzes the services and actions that your
IAM roles use, and then generates a least-privilege policy that you can use.
b. Verify public and cross-account access to resources
You can use IAM Access Analyzer to help you preview and analyze public and cross-account access for
supported resource types by reviewing the findings that IAM Access Analyzer generates. These findings
help you verify that your access controls grant the access that you expect. Additionally, as you update public
and cross-account permissions, you can verify the effect of your changes before deploying new access
controls to your resources.
c. Validate your IAM policies to help ensure secure and functional permissions
Use IAM Access Analyzer to validate the policies you create to ensure that they adhere to the IAM policy
language (JSON) and IAM best practices. IAM Access Analyzer provides more than 100 policy checks and
actionable recommendations to help you author secure and functional policies. As you author new policies
or edit existing policies in the console, IAM Access Analyzer provides recommendations to help you refine
and validate your policies before you save them. Additionally, we recommend that you review and validate
all of your existing policies.

7. Set permissions guardrails across multiple accounts

As you scale your workloads, separate them by using multiple accounts that are managed with AWS
Organizations. We recommend that you use Organizations service control policies (SCPs) to
establish permissions guardrails to control access for all IAM users and roles across your accounts. SCPs are
a type of organization policy that you can use to manage permissions at the organization, OU, or account
level. SCPs limit permissions but do not grant them; to grant permissions, your administrator must still
attach identity-based or resource-based policies to IAM users, IAM roles, or the resources in your accounts.

8. Delegate permissions management within an account by using permissions boundaries

In some scenarios, you might want to delegate permissions management within an account to others. For
example, you might want to allow developers to create and manage roles for their workloads. When you
delegate permissions to others, use permissions boundaries, which use a managed policy to set the maximum
permissions that an identity-based policy can grant to an IAM role. A permissions boundary does not grant
permissions on its own. For more information, see Permissions boundaries for IAM entities.

2.12 AWS IAM Multi-factor authentication (MFA) Overview


What is MFA (multi-factor authentication)?
Multi-factor authentication (MFA) is a multi-step account login process that requires users to enter more
information than just a password. For example, along with the password, users might be asked to enter a
code sent to their email, answer a secret question, or scan a fingerprint. A second form of authentication can
help prevent unauthorized account access if a system password has been compromised.
Why is multi-factor authentication necessary?
Digital security is critical in today's world because both businesses and users store sensitive information
online. Everyone interacts with applications, services, and data that are stored on the internet using online
accounts. A breach, or misuse, of this online information could have serious real-world consequences, such
as financial theft, business disruption, and loss of privacy.
While passwords protect digital assets, they are simply not enough. Cybercriminals actively try to
discover passwords, and a single compromised password can potentially open every account for which it
was reused. Multi-factor authentication acts as an additional layer of
security to prevent unauthorized users from accessing these accounts, even when the password has been
stolen. Businesses use multi-factor authentication to validate user identities and provide quick and
convenient access to authorized users.

2.13 AWS Access Keys, CLI & SDK

Introduction

AWS Access Keys are credentials used to authenticate with AWS services, including the AWS
Management Console, Command Line Interface (CLI), and Software Development Kits (SDKs). There are
two types of access keys: Access Key ID and Secret Access Key.

The Access Key ID is a unique identifier that names the credential, while the Secret Access Key is a secret
value used to cryptographically sign requests to AWS services. Access keys are used to make
programmatic requests to AWS services and are often used by developers and system administrators.

The AWS Command Line Interface (CLI) is a tool that allows users to interact with AWS services from a
command prompt or shell script. The CLI uses Access Keys to authenticate requests to AWS services and
provides a simple, command-line interface for managing AWS resources.

The AWS SDKs are software development kits that provide libraries and APIs for developers to build
applications that interact with AWS services. The SDKs use Access Keys to authenticate requests to AWS
services and provide a range of programming language-specific libraries and APIs that make it easier to
build AWS applications.

Using Access Keys, CLI, and SDKs can help automate tasks, manage AWS resources, and build custom
applications that interact with AWS services. However, it’s important to secure Access Keys and follow best
practices for managing and rotating them regularly to prevent unauthorized access to AWS resources.
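As a concrete sketch of what these credentials look like on disk: running `aws configure` (described below) writes them to plain-text files under `~/.aws/`. The snippet below writes the same layout into a temporary directory so the format can be inspected safely; the key values are AWS's documented placeholder examples, not real credentials.

```shell
# Illustrate the on-disk layout the AWS CLI and SDKs read (placeholder keys only).
AWS_DIR="$(mktemp -d)"

cat > "$AWS_DIR/credentials" <<'EOF'
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
EOF

cat > "$AWS_DIR/config" <<'EOF'
[default]
region = us-east-1
output = json
EOF

# The CLI and SDKs look in ~/.aws/ by default; these paths can be overridden
# with the AWS_SHARED_CREDENTIALS_FILE and AWS_CONFIG_FILE environment variables.
ls "$AWS_DIR"
```

Because these are plain-text files, anyone who can read them holds the keys, which is why rotating access keys and restricting file permissions matter.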

AWS CLI Setup on Windows

Here are the steps to set up AWS CLI on Windows:

1. Download the AWS CLI installer for Windows from the AWS website. The installer is available in
both MSI and EXE formats.
2. Run the installer and follow the prompts to install AWS CLI. By default, AWS CLI will be installed
to C:\Program Files\Amazon\AWSCLI.
3. Once the installation is complete, open a Command Prompt window.
4. To verify that AWS CLI is installed correctly, type the following command:

aws --version

This should display the version number of AWS CLI installed on your system.

5. Next, you’ll need to configure AWS CLI with your AWS access keys. You can do this by typing the
following command:

aws configure
This will prompt you for your AWS Access Key ID, Secret Access Key, default region name, and default
output format. You can obtain your Access Key ID and Secret Access Key from the AWS Management
Console.

6. Once you’ve entered your AWS access keys and configured AWS CLI, you’re ready to start using it to
interact with AWS services. For example, you can use the following command to list all of your EC2
instances:

aws ec2 describe-instances

That’s it! You’ve now set up AWS CLI on your Windows system and can start using it to interact with
AWS services from the command line.

2.14 AWS CloudShell

AWS CloudShell is a browser-based, pre-authenticated shell that you can launch directly from the AWS
Management Console. You can navigate to CloudShell from the AWS Management Console a few different
ways. For more information, see How to get started with AWS CloudShell?

How to get started with AWS CloudShell?

To start working with the shell, sign in to the AWS Management Console and choose
one of the following options:
 On the navigation bar, choose the CloudShell icon.

 In the Search box, type “CloudShell”, and then choose CloudShell.


This step opens your CloudShell session to a full screen.

 In the Recently visited widget, choose CloudShell.


This step opens your CloudShell session to a full screen.
 Choose CloudShell on the Console Toolbar, on the lower left of the console. You
can adjust the height of your CloudShell session by dragging the divider.

You can also switch your CloudShell session to a full screen by clicking Open in
new browser tab.
For instructions on signing in to the AWS Management Console and performing
key tasks with AWS CloudShell, see Getting started with AWS CloudShell.

You can run AWS CLI commands using your preferred shell, such as Bash, PowerShell, or Z shell. And
you can do this without downloading or installing command line tools.

When you launch AWS CloudShell, a compute environment that's based on Amazon Linux 2 is created.
Within this environment, you can access an extensive range of pre-installed development tools, options
for uploading and downloading files, and file storage that persists between sessions.

Week 2 Day 2 Afternoon


Cloud Computing instances in AWS
Cloud computing instances are server resources provided by third-party cloud services. While you can
manage and maintain physical server resources on premises, it is costly and inefficient to do so. Cloud
providers maintain hardware in their data centers and give you virtual access to compute resources in the
form of an instance. You can use the cloud instance for running compute-intensive workloads like
containers, databases, microservices, and virtual machines.

2.15 Virtualization in Cloud Computing


The word virtual means that it is a representation of something physically present elsewhere.
Similarly, Virtualization in Cloud Computing is a technology that allows us to create virtual resources such
as servers, networks, and storage in the cloud. All these resources are allocated from a physical machine that
runs somewhere in the world, and we'll get the software to provision and manage these virtual resources.
These physical machines are operated by cloud providers, who take care of maintenance, and hardware
supplies.
Virtualization in Cloud Computing also enables us to set up access control over the resources to secure
them. It also enables resource sharing among multiple applications.
Virtualization also enables efficient resource utilization, since it only provisions the requested amount of
resources and not more. And provisioning extra resources such as extra memory, storage, or processors is as
simple as clicking a few buttons on the cloud software.
Some of virtualization in cloud computing examples are as follows:
 EC2 service from Amazon Web Service
 Compute engine from Google Cloud
 Azure Virtual Machines from Microsoft Azure

2.16 What is the concept behind the Virtualization?


The main concept behind virtualization is the hypervisor. A hypervisor is software that partitions the
hardware resources of the physical machine and runs virtual machines. It is typically installed on the server's
hardware and divides the resources among the virtual machines (VMs).
The server running the hypervisor is called the host, and the VMs using its resources are called guest
operating systems. The VMs function like digital files inside the physical device, and they can be moved
from one system to another, thereby increasing portability.
Hypervisor partitions the resources as per the requirement of the physical machine. This enables cloud
providers to provision virtual machines to the users, who then can run their applications on them.
If extra resources are requested, the hypervisor caches the current state of the virtual machine and transfers
the request to the physical system (hardware) to provide more resources. By doing so, Hypervisor can make
sure the previous state of the VMs is not modified after processing the extra resource request.
There are many open-source and paid Hypervisors available. Cloud providers use them based on their
requirements and business needs.
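You can see hardware-assisted virtualization in practice on a Linux host (the snippet assumes `/proc/cpuinfo` is available, i.e. you are on Linux): the CPU advertises Intel VT-x as the `vmx` flag and AMD-V as `svm`, and hypervisors such as KVM rely on these extensions.

```shell
# Count CPU entries in /proc/cpuinfo that advertise hardware virtualization
# extensions (vmx = Intel VT-x, svm = AMD-V). Inside a VM or container the
# count is often 0 unless nested virtualization is enabled.
if [ -r /proc/cpuinfo ]; then
  VIRT_LINES=$(grep -Ec 'vmx|svm' /proc/cpuinfo)
  echo "CPU lines advertising vmx/svm: $VIRT_LINES"
else
  VIRT_LINES=""
  echo "/proc/cpuinfo not available (not a Linux host)"
fi
```

A count of zero means the hypervisor would have to fall back to slower, software-only virtualization on that machine.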

2.17 Architecture of Virtualization

Virtualization follows a very simple architecture. Let's first look at the left side of the figure, this is the
traditional machine. Here we have the hardware at the base layer and the host operating system, such as
Linux, Windows, Mac, etc. Above, we have the application running directly on the host machine.
Since only one application runs on the host machine, many of the computer's resources sit unused. To avoid
this, we can run multiple applications that share the machine's resources. Sharing improves resource
utilization, but it introduces problems of its own: the risk of a data breach is higher, and no application gets
a dedicated, isolated environment.
To address these issues and enable efficient resource utilization, virtualization was introduced that follows
the same architecture pattern as the traditional machine but with a slight change.
Virtualization architecture starts with the base hardware, as the traditional machine, but it replaces the
operating system with the hypervisor. The hypervisor creates virtual machines for these applications and
allots resources to them, and these VMs will have their OS, storage, computing power, etc., allowing the
application to run in an isolated environment with dedicated resources.
This allows efficient resource utilization as well as provides an isolated or dedicated environment for the
application inside the machine.

2.18 Types of Virtualizations


Types of Virtualization
1. Application Virtualization: Application virtualization helps a user to have remote access to an
application from a server. The server stores all personal information and other characteristics of the
application but can still run on a local workstation through the internet. An example of this would be a user
who needs to run two different versions of the same software. Technologies that use application
virtualization are hosted applications and packaged applications.
2. Network Virtualization: The ability to run multiple virtual networks, each with a separate control plane
and data plane, coexisting on top of one physical network. Each virtual network can be managed by a
different party, with its traffic kept confidential from the others. Network virtualization makes it possible to
create and provision virtual networks, logical switches, routers, firewalls, load balancers, Virtual Private
Networks (VPNs), and workload security within days or even weeks.

Network Virtualization
3. Desktop Virtualization: Desktop virtualization allows the users’ OS to be remotely stored on a server in
the data center. It allows the user to access their desktop virtually, from any location by a different machine.
Users who want specific operating systems other than Windows Server will need to have a virtual desktop.
The main benefits of desktop virtualization are user mobility, portability, and easy management of software
installation, updates, and patches.
4. Storage Virtualization: Storage virtualization presents an array of servers as a single virtual storage
system. The servers aren’t aware of exactly where their data is stored and instead function more like worker
bees in a hive. It allows storage from multiple sources to be managed and utilized as a single repository.
Storage virtualization software maintains smooth operations, consistent performance, and a continuous suite
of advanced functions despite changes, breakdowns, and differences in the underlying equipment.
5. Server Virtualization: This is a kind of virtualization in which server resources are masked. The central
(physical) server is divided into multiple virtual servers, each with its own identity number and share of the
processors, so each virtual server can run its operating system in isolation while still knowing the identity
of the central server. It increases performance and reduces operating cost by splitting the main server’s
resources into sub-server resources. It is beneficial for virtual migration, reducing energy consumption,
reducing infrastructure costs, etc.

Server Virtualization
6. Data Virtualization: This is the kind of virtualization in which data is collected from various sources and
managed in a single place, without interested users, stakeholders, or applications needing to know technical
details such as how the data is collected, stored, and formatted. The data is arranged logically so that a
virtual view of it can be accessed remotely through various cloud services. Many large companies provide
data virtualization services, such as Oracle, IBM, AtScale, and CData.

Week 2 day 3 Morning Session


Amazon Elastic Compute Cloud (Amazon EC2) Instance
2.18 EC2 Basics
Amazon Elastic Compute Cloud (Amazon EC2) provides on-demand, scalable computing capacity in the
Amazon Web Services (AWS) Cloud. Using Amazon EC2 reduces hardware costs so you can develop and
deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you
need, configure security and networking, and manage storage. You can add capacity (scale up) to handle
compute-heavy tasks, such as monthly or yearly processes, or spikes in website traffic. When usage
decreases, you can reduce capacity (scale down) again.
The following diagram shows a basic architecture of an Amazon EC2 instance deployed within an
Amazon Virtual Private Cloud (VPC). In this example, the EC2 instance is within an Availability Zone in
the Region. The EC2 instance is secured with a security group, which is a virtual firewall that controls
incoming and outgoing traffic. A private key is stored on the local computer and a public key is stored on
the instance. Both keys are specified as a key pair to prove the identity of the user. In this scenario, the
instance is backed by an Amazon EBS volume. The VPC communicates with the internet using an internet
gateway.

2.19 Creating EC2 Instances With EC2 user data


First, log in to the AWS console, search for EC2, and follow the images below for the process.
 Select the Instances option on the EC2 dashboard page.
 Launch an instance by clicking the "Launch instance" button.
 Give a name to the EC2 instance.
 Add an EC2 base image; here we selected the default option.
 AMI and Architecture are the default options provided by AWS.
 Under Key pair (login), select the key pair option and create a new key pair.
 In Create key pair, add the key pair name, type, and file format.
 Here we selected the default type and file format.
 Click the Create key pair button.
o The key file is downloaded and selected automatically.
o In Network settings, the instance is assigned a public IP through which we connect to the instance.
o Select Allow HTTP traffic from the internet.
o Leave the other options at their defaults.

2.19 EC2 instance types basics


Amazon EC2 – Instance Types
Different Amazon EC2 instance types are designed for certain activities. Consider the unique requirements
of your workloads and applications when choosing an instance type. This might include needs
for computing, memory, or storage.
The AWS EC2 Instance Types are as follows:
1. General Purpose Instances
2. Compute Optimized Instances
3. Memory-Optimized Instances
4. Storage Optimized Instances
5. Accelerated Computing Instances
1. General-Purpose Instances
The computation, memory, and networking resources in general-purpose instances are balanced.
Scenarios, where you can use General Purpose Instances, are gaming servers, small databases, personal
projects, etc. Assume you have an application with a kind of equal computing, memory, and networking
resource requirements. Because the program does not require optimization in any particular resource area,
you can use a general-purpose instance to execute it.
Examples:
 Applications that need a balance of computing, storage, networking, and server performance, or want
a bit of everything, can utilize general-purpose instances.
 If high-performance CPUs are not required for your applications, you can go for general-purpose
instances.
EC2 General-Purpose Instance Types
Here are several general-purpose examples from which we can pick:
t2.micro: The most well-known instance in AWS is t2.micro, which gives 1 vCPU and 1 GB of memory
with low to moderate network performance. It is also Free Tier eligible and highly helpful for individuals
first starting with AWS.
M6a instances: The third-generation AMD EPYC processors used in the M6a instances are perfect for
general-purpose tasks. M6a comes in different sizes such as m6a.large, m6a.2xlarge, m6a.4xlarge, and so on;
m6a.large offers 2 vCPUs, 8 GiB of memory, and network performance up to 12.5 Gigabit.
M5 instances: The M5 generation of general-purpose instances is powered by Intel’s Xeon Platinum 8175
processors. Its sizes include m5.large, m5.12xlarge, and m5.24xlarge, and the size we select will depend on
the memory, vCPUs, storage, and network speed required.
Features
 Powered by specifically designed AWS Graviton3 processors.
 Default optimized with EBS.
 It consists of dedicated hardware and a lightweight hypervisor.
 The bandwidth is higher when compared to other types.
Applications
1. Web Servers: The web servers can be hosted in General-purpose instances.EC2 instances provide a
flexible and scalable platform for web applications.
2. Development and Test Environment: The developers can use these General-purpose instances to
build, test and deploy the applications. It is a cost-effective solution for running this environment.
3. Content delivery: The hosting of content delivery networks (CDNs) that distribute content to users all
over the world is possible using general-purpose instances. EC2 instances can be set up to provide
content with low latency and great performance.
A popular option for many businesses, AWS EC2 general-purpose instances offer a versatile and scalable
platform for a variety of applications.
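The vCPU and memory figures quoted for instance types can be checked from the CLI with `aws ec2 describe-instance-types`. A hedged sketch (it assumes an installed and configured AWS CLI, and skips the live call otherwise):

```shell
# Query vCPU and memory for a couple of instance types; skipped gracefully
# when the AWS CLI is missing or has no working credentials.
if command -v aws >/dev/null 2>&1 && aws sts get-caller-identity >/dev/null 2>&1; then
  aws ec2 describe-instance-types \
    --instance-types t2.micro m6a.large \
    --query 'InstanceTypes[].[InstanceType,VCpuInfo.DefaultVCpus,MemoryInfo.SizeInMiB]' \
    --output table
  ITYPE_RESULT="queried"
else
  ITYPE_RESULT="skipped: AWS CLI not configured"
fi
echo "$ITYPE_RESULT"
```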
2. Compute-Optimized Instances
Compute-optimized instances are appropriate for applications that require a lot of computation and help
from high-performance CPUs. You may employ compute-optimized instances for workloads including web,
application, and gaming servers just like general-purpose instances. This instance type is best suited for
high-performance applications like web servers, Gaming servers.
Examples
 Applications that require high server performance or that employ a machine-learning model will
benefit from compute-optimized instances.
 If you have some batch processing workloads or high-performance computing.
Compute-Optimized Some Instance Types
1. c5d.24xlarge: The c5d.24xlarge instance, which has 96 vCPUs, 192 GiB of RAM, 3,600 GB of SSD
storage, and 12 Gigabit of network performance, was selected primarily for its excellent web server
performance. There are other sizes as well, such as c5d.large and c5d.xlarge; we choose a size depending
on our needs.
Features
 Powered by specifically designed AWS Graviton3 processors.
 It uses DDR5 memory, which provides 50% more bandwidth than DDR4.
 EBS-optimized by default.
Applications
1. Machine learning: Machine learning operations can be performed on Compute-optimized instances
because it will manage heavy workloads. The processing capacity required to swiftly and effectively
train massive machine learning models can be provided by compute-optimized instances.
2. Gaming: Compute-optimized instances are well suited to heavy workloads, so they can easily manage
gaming operations. They decrease latency and can deliver a high-quality gaming
experience.

3. Memory-Optimized Instances
Memory-optimized instances are geared for workloads that need huge datasets to be processed in
memory. Memory here defines RAM which allows us to do multiple tasks at a time. Data stored is used to
perform the central processing unit (CPU) tasks it loads from storage to memory to run. This process of
preloading gives the CPU direct access to the computer program. Assume you have a workload that
necessitates the preloading of significant volumes of data prior to executing an application. A high-
performance database or a task that requires real-time processing of a significant volume of unstructured
data might be involved in this scenario. In this case, consider using a memory-optimized instance. It is used
to run applications that require a lot of memory with high performance.
Examples:
 Helpful for databases that need to process large datasets quickly.
 Processes that do not need a large quantity of data yet require speedy, real-time processing.
Memory-Optimized Some Instance Types
The R and X categories belong to memory-optimized. let’s discuss any one-off them.
R7g.medium: It runs on AWS Graviton processors with ARM architecture, with 1 vCPU, 8 GiB of
memory, EBS-only storage, and up to 12.5 Gigabit of network bandwidth.
X1: The X1 type is mainly suited for enterprise-grade in-memory database applications and comes with 64
vCPUs, 976 GiB of memory, 1 x 1,920 GB of SSD storage, 7,000 Mbps of dedicated EBS bandwidth, and
10 Gbps of network performance.
Features
 Elastic Fabric Adapter (EFA) is supported on the r7g.16xlarge and r7g.metal instances.
 Includes the newest DDR5 memory, which provides 50% more bandwidth than DDR4.
 Compared to R6g instances, improved networking bandwidth is 20% more.
Applications
1. In-Memory Databases: Memory-optimized instances are best suited for in-memory databases that
demand high memory capacity and bandwidth.
2. Big Data Processing: For big data processing workloads like Apache Spark and Apache Hadoop
that demand high memory capacity and bandwidth, memory-optimized instances can be deployed.
Instances that have been optimized for memory can offer the memory space and bandwidth required to
process huge amounts of data fast and effectively.

4. Storage Optimized Instances


Storage-optimized instances are made for workloads that demand fast, sequential read and write access to
huge datasets. Distributed file systems, data warehousing applications, and high-frequency online
transaction processing (OLTP) systems are examples of workloads that are suited for storage-optimized
instances. Storage-optimized instances are built to provide applications with the lowest latency while
accessing the data.
Examples:
 Applications that require high-throughput database processing can utilize storage-optimized instances.
 Data warehousing applications and distributed file systems can use them.
Storage Optimized Instance Types
1. Im4gn: Because Im4gn is powered by AWS Graviton processors, it offers the best price
performance for storage-intensive workloads in Amazon EC2. Im4gn.large’s base
configuration has 2 vCPUs, 8 GiB of memory, and EBS storage with a network bandwidth of up to 25
Gbps. Other storage-optimized families include Is4gen, I4i, D, and H.
Features
 Using AWS Graviton2 processors, which provide the best price/performance for workloads in
Amazon EC2.
 Geared at tasks that correspond to 4 GB of RAM per vCPU.
 Up to 100 Gbps of network bandwidth via Elastic Network Adapter (ENA)-based enhanced
networking.
Applications
1. Amazon EC2 C5d Instances: Suitable for applications with very I/O-intensive workloads; they
deliver high input/output performance with low latency.
2. Amazon EC2 I3 instance: The storage-optimized instance is well-suited for applications with high
storage needs. It also provides local NVMe storage.
5. Accelerated Computing Instances
Coprocessors are used in accelerated computing instances to execute specific operations more effectively
than software running on CPUs. Floating-point numeric computations, graphics processing, and data pattern
matching are examples of these functions. A Hardware-Accelerator/ Co-processor is a component in
computing that may speed up data processing. Graphics applications, game streaming, and application
streaming are all good candidates for accelerated computing instances.
Examples:
 If the application utilizes floating-point calculations or graphics processing, accelerated computing
instances will be the best among all.
 Also, data pattern matching can be done more efficiently with this instance type.
Accelerated Computing Instance Types
1. Accelerated computing consists mainly of the P, Inf2, G5, G5g, G4dn, G4ad, G3, F1, and VT1 families.
2. P4: It offers 3.0 GHz 2nd-generation Intel Xeon processors, 8 GPUs, 96 vCPUs, and 1,152 GiB of
memory, with 400 Gbps of networking (ENA and EFA).
Features
 2nd Generation Intel Xeon Scalable processors, 3.0 GHz (Cascade Lake P-8275CL).
 8 NVIDIA A100 Tensor Core GPUs maximum.
 400 Gbps instance networking with support for Elastic Fabric Adapter (EFA) and NVIDIA
GPUDirect RDMA (remote direct memory access).
Applications
1. Amazon EC2 P3 Instances: High-performance computing, rendering, and machine learning
workloads are all well-suited to these instances. Its NVIDIA V100 GPUs enable them to deliver up to 1
petaflop of mixed-precision performance per instance, which makes them perfect for simulations of
computational fluid dynamics, molecular dynamics, and complicated deep learning models.
2. Amazon EC2 G4 Instances: These instances are designed for graphically demanding tasks like
video transcoding, virtual desktops, and gaming. They are driven by NVIDIA T4 GPUs and provide up
to 65 teraflops of mixed-precision performance per instance.

2.20 Security groups and classic ports


A security group acts as a virtual firewall for your EC2 instances, controlling incoming and outgoing
traffic. Each security group has two sets of rules. Inbound rules define the incoming traffic the security
group allows to reach your instance. Outbound rules define the traffic permitted to leave the compute
resource associated with the security group.
Each inbound rule consists of three key elements:
 Protocol. Network protocols the rule will allow, such as TCP and User Datagram Protocol.
 Port range. A specific port or a port range to allow traffic on.
 Source. A specific IP, IP range or other security groups that will be allowed access.
Each outbound rule consists of:
 Protocol. Same as inbound rules.
 Port Range. Same as inbound rules.
 Destination. Similar to the inbound rules for source, except it refers to a destination the security
group allows traffic to go to.
2.21 How to create a security group in AWS
You can create a security group from the EC2 console and the CLI.
Step1. In the console, click on the "Security Groups" link in the left navigation bar and click on the Create
security group button. It's important to note that security groups are assigned to a specific VPC.

When creating a security group, add in basic details.


Step 2. The next step is to configure the inbound rules. The example below allows traffic on port 22 from
IPs in the 10.1.0.0/24 CIDR range. The source can also be configured as another
security group, which allows traffic from resources that have the security group assigned to them.

When creating a security group, specify inbound rules.


Step 3. You also need to configure outbound rules, which typically allow outgoing traffic without
restrictions. Outbound rules can be set up to only allow traffic to specific IP ranges or security groups.

When creating a security group, specify outbound rules.


You can also create a security group via the CLI with these commands:
 AWS CLI. create-security-group
 AWS Tools for Windows PowerShell. New-EC2SecurityGroup
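As a sketch of the same flow from the CLI (the VPC id below is a placeholder you would substitute, and the live calls are skipped when no configured CLI is available): `create-security-group` makes the group, and `authorize-security-group-ingress` adds the port-22 inbound rule from Step 2.

```shell
# CLI equivalent of the console steps above; vpc-0123456789abcdef0 is a
# placeholder id. Skipped when the AWS CLI is missing or unconfigured.
if command -v aws >/dev/null 2>&1 && aws sts get-caller-identity >/dev/null 2>&1; then
  SG_ID=$(aws ec2 create-security-group \
    --group-name demo-web-sg \
    --description "Demo security group" \
    --vpc-id vpc-0123456789abcdef0 \
    --query 'GroupId' --output text)
  # Inbound rule: allow SSH (TCP port 22) only from the 10.1.0.0/24 range.
  aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" --protocol tcp --port 22 --cidr 10.1.0.0/24
  SG_RESULT="created $SG_ID"
else
  SG_RESULT="skipped: AWS CLI not configured"
fi
echo "$SG_RESULT"
```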
2.21 How to SSH to EC2 Instance

SSH, also known as Secure Shell or Secure Socket Shell, is a cryptographic network protocol that gives
users, particularly system administrators, a secure way to access a computer over an unsecured network. It
allows secure remote login from one computer to another, providing a secure channel that ensures your data
is protected during transmission.

Prerequisites

Before we dive into the steps, ensure you have the following:

1. An AWS account
2. A running Windows EC2 instance
3. PuTTY installed on your local machine

Step 1: Download the Key Pair

When you create an EC2 instance, AWS provides a key pair for that instance. This key pair consists of a
public key that AWS stores, and a private key file that you store (.pem file).

If you already have the .pem file, proceed to the next step. If not, follow these steps to create a new key
pair and download the .pem file:

1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.


2. In the navigation pane, under NETWORK & SECURITY, choose Key Pairs.
3. Choose Create key pair.
4. For Name, type a descriptive name for the key pair.
5. For File format, choose pem.
6. Choose Create key pair.
7. The private key file will automatically download.

Step 2: Convert the .pem File to .ppk Format

PuTTY does not natively support the .pem format that AWS uses for key pairs. Therefore, you need to
convert your .pem file to the .ppk format. Here’s how:

1. Open PuTTYgen (part of the PuTTY download).


2. Choose Load.
3. Change the file type to All Files (*.*).
4. Select your .pem file and choose Open.
5. Choose OK to dismiss the alert dialog box.
6. Choose Save private key.
7. Choose Yes to dismiss the alert dialog box.
8. Specify the same name for the key as the .pem file, but with the .ppk extension, and then choose
Save.

Step 3: Connect to Your Windows EC2 Instance

Now that you have your .ppk file, you can connect to your Windows EC2 instance. Here’s how:

1. Open the Amazon EC2 console.


2. In the navigation pane, choose Instances.
3. Select your instance and choose Connect.
4. In the Connect To Your Instance dialog box, choose A standalone SSH client.
5. Note the public DNS (IPv4) of your instance, you will need it for the next step.

Step 4: SSH into Your Windows EC2 Instance

Finally, you can SSH into your Windows EC2 instance. Here’s how:

1. Open PuTTY.
2. In the Category pane, choose Session and complete the following fields:
o In the Host Name box, enter the public DNS (IPv4) of your instance.
o In the Port box, type 22.
3. In the Category pane, expand SSH, and then choose Auth.
4. Choose Browse, and then select your .ppk file.
5. Choose Open, and then choose Yes to dismiss the alert dialog box.
6. In the PuTTY console, log in as the appropriate user.

Congratulations! You have successfully SSHed into your Windows EC2 instance.
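On macOS or Linux clients you can skip PuTTY and the .ppk conversion entirely and use OpenSSH with the .pem file directly. A sketch (the key path and hostname are placeholders; the actual `ssh` line is left commented out):

```shell
# Prepare a key file the way OpenSSH expects; the file here is an empty
# stand-in for the .pem you downloaded from the EC2 console.
KEY="$(mktemp -d)/my-key.pem"
touch "$KEY"
chmod 400 "$KEY"   # SSH refuses private keys that are readable by other users
ls -l "$KEY"

# The actual connection (not run here). The default login user depends on the
# AMI: ec2-user for Amazon Linux, ubuntu for Ubuntu.
# ssh -i "$KEY" ec2-user@ec2-203-0-113-25.compute-1.amazonaws.com
```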

2.22 EC2 Instance Connect

Amazon EC2 Instance Connect provides a simple and secure way to connect to your instances using
Secure Shell (SSH). With EC2 Instance Connect, you use AWS Identity and Access Management (IAM)
policies and principals to control SSH access to your instances, removing the need to share and manage SSH
keys.
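For example, the CLI's `send-ssh-public-key` operation pushes a short-lived public key to an instance; the instance id, key file, and Availability Zone below are placeholders, and the live call is skipped when no configured CLI is available:

```shell
# EC2 Instance Connect: push a temporary SSH public key, authorized by IAM
# rather than a long-lived key pair. All ids/paths below are placeholders.
if command -v aws >/dev/null 2>&1 && aws sts get-caller-identity >/dev/null 2>&1; then
  aws ec2-instance-connect send-ssh-public-key \
    --instance-id i-0123456789abcdef0 \
    --instance-os-user ec2-user \
    --ssh-public-key file://my_key.pub \
    --availability-zone us-east-1a
  EIC_RESULT="key pushed; connect with the matching private key within 60 seconds"
else
  EIC_RESULT="skipped: AWS CLI not configured"
fi
echo "$EIC_RESULT"
```

The pushed key is only valid for about a minute, so access control reduces to who holds the `ec2-instance-connect:SendSSHPublicKey` IAM permission.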

2.23 EC2 Instance Roles Demo

After you've created an IAM role, you can launch an instance, and associate that role with the instance
during launch.
To launch an instance with an IAM role (console)
1. Follow the procedure to launch an instance.
2. Expand Advanced details, and for IAM instance profile, select the IAM role that you created.
3. Configure any other details that you require for your instance or accept the defaults, and select a key
pair. For information about the fields in the launch instance wizard, see Launch an instance using
defined parameters.
4. In the Summary panel, review your instance configuration, and then choose Launch instance.
5. If you are using the Amazon EC2 API actions in your application, retrieve the AWS security credentials
made available on the instance and use them to sign the requests. The AWS SDK does this for you.

IMDSv2

[ec2-user ~]$ TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"` \
  && curl -H "X-aws-ec2-metadata-token: $TOKEN" -v \
  http://169.254.169.254/latest/meta-data/iam/security-credentials/role_name

IMDSv1

[ec2-user ~]$ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/role_name

Attach an IAM role to an instance


You can attach an IAM role to an instance that has no role while the instance is in the stopped or running state.
To attach an IAM role to an instance
1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
2. In the navigation pane, choose Instances.
3. Select the instance, choose Actions, Security, Modify IAM role.
4. Select the IAM role to attach to your instance, and choose Save.
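The same attachment can be done from the AWS CLI; the steps above correspond roughly to the following sketch (the instance ID and instance profile name are placeholders):

```shell
# Attach an IAM instance profile to a running or stopped instance
# that does not already have a role
aws ec2 associate-iam-instance-profile \
    --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=MyInstanceRole
```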

2.24 Private vs Public vs Elastic IP

Private IP: This is like your nickname. You are recognized by it within your private circles, i.e. your family
and friend circle. Similarly, a private IP is used to identify network resources within a private network. Using
the private IP of a network resource, you can't identify it over the internet. Some IP ranges are dedicated for
use as private IPs; these are called RFC 1918 addresses. It is best practice to use private IPs from these
ranges, although it is not a strict requirement. In AWS, this is the most used of the three IP address types and
is required when creating a VPC, subnet, etc. A private IP is also assigned by default to every instance on
creation.

Public IP: This is like your mobile number. People all over the world can identify and reach you using it.
Though it has a wider reach than a nickname, it is temporary, and there is no guarantee you will have the
same number tomorrow. A public IP is a routable address over the internet. AWS EC2 instances can be
assigned a public address if you choose the option, but it is dynamic: AWS assigns an arbitrary public IP,
which makes your instance accessible over the internet. Every time you stop and start the instance, it gets a
different public IP. It's like starting every day with a new mobile number. On the other hand, if you reboot
your instance, you keep the same public IP. A reboot is more like a power nap during the day, where you get
to keep the same mobile number.

Elastic IP: This is like an Aadhaar number or Social Security Number (SSN). It is meant
to uniquely and permanently identify you during your lifetime and does not change. As mentioned earlier, a
public IP is dynamic, which is a problem. Imagine how difficult life would be if you changed your mobile
number every day. To solve this problem, AWS allows you to allocate an Elastic IP, which is a static public
IP address. Once you attach it to an EC2 instance, it will not change when the instance is stopped and started.

2.25 EC2 Placement groups


You can use placement groups to influence the placement of a group of interdependent instances to meet
the needs of your workload. Depending on the type of workload, you can create a placement group using
one of the following placement strategies:

 Cluster – packs instances close together inside an Availability Zone. This strategy enables workloads to
achieve the low-latency network performance necessary for tightly-coupled node-to-node communication
that is typical of high-performance computing (HPC) applications.
 Partition – spreads your instances across logical partitions such that groups of instances in one partition
do not share the underlying hardware with groups of instances in different partitions. This strategy is
typically used by large distributed and replicated workloads, such as Hadoop, Cassandra, and Kafka.
 Spread – strictly places a small group of instances across distinct underlying hardware to reduce
correlated failures.

There is no charge for creating a placement group.

2.26 Lab - EC2 Placement groups

Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.


1. In the navigation pane, choose Placement Groups, Create placement group.
2. Specify a name for the group.
3. Choose the placement strategy for the group.
4. To tag the placement group, choose Add tag, and then enter a key and value.
5. Choose Create group.
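The same lab can be done from the AWS CLI; a minimal sketch, assuming a cluster strategy (group name, AMI ID, and instance type below are placeholders):

```shell
# Create a cluster placement group
aws ec2 create-placement-group \
    --group-name my-cluster-pg \
    --strategy cluster

# Launch an instance into the placement group
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type c5.large \
    --placement GroupName=my-cluster-pg
```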

2.27 Elastic Network Interface (ENI) Overview

AWS Network Interfaces or Elastic Network Interfaces (AWS ENIs) are virtual network cards attached to
EC2 instances that facilitate network connectivity for those instances. Attaching two or more AWS Network
Interfaces to an instance permits it to communicate on two separate subnets.
An AWS Network Interface has the following characteristics:
 A primary private IPv4 address of a user’s VPC
 One Elastic IP address (IPv4) for every private IPv4 address
 One or more secondary private IPv4 addresses of a user’s VPC
 A description
 A destination/source check flag
 One public IPv4 address
 One or more security groups
 A MAC address
 One or more IPv6 addresses
AWS Network Interfaces can be created, configured, and attached to instances within the same Availability
Zone. An AWS ENI can be detached from one instance and then attached to another instance in the same
Availability Zone.
2.28 Lab - ENI
To create an AWS ENI, consider the following steps:
1. Open the Amazon EC2 console
2. Click on Network Interfaces from the navigation pane
3. Click on Create network interface
4. Optionally add a descriptive name for Description
5. Choose a subnet (IPv4-only, IPv6-only, or dual-stack (IPv4 and IPv6)). The next option will change
according to the type of subnet you select
6. Do either of the following for Private IPv4 address:
 Permit Amazon EC2 to choose an IPv4 address from the subnet by clicking on Auto-assign
 Enter an IPv4 address selected by you from the subnet by clicking on Custom
7. Do either of the following for IPv6 address:
 Select None if you do not wish to allocate an IPv6 address to the network interface
 If you want Amazon EC2 to choose an IPv6 address from the subnet, then select Auto-assign
 Choose Custom to select and enter an IPv6 address from the subnet
8. Optionally, select Elastic Fabric Adapter and Enable to create an Elastic Fabric Adapter
9. Select one or more security groups
10. Optionally, select Add New Tag for each tag and enter a tag key and an optional tag value
11. Click on Create network interface
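The console steps above can be sketched with the AWS CLI as follows (all subnet, security group, ENI, and instance IDs are placeholders):

```shell
# Create a network interface in a subnet with a security group
aws ec2 create-network-interface \
    --subnet-id subnet-0123456789abcdef0 \
    --groups sg-0123456789abcdef0 \
    --description "secondary interface"

# Attach it to an instance as secondary device index 1
aws ec2 attach-network-interface \
    --network-interface-id eni-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 \
    --device-index 1
```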

2.29 EC2 Hibernate

When you hibernate an instance, Amazon EC2 signals the operating system to perform hibernation
(suspend-to-disk). Hibernation saves the contents from the instance memory (RAM) to your Amazon Elastic
Block Store (Amazon EBS) root volume. Amazon EC2 persists the instance's EBS root volume and any
attached EBS data volumes.

2.30 To hibernate an Amazon EBS-backed instance [Lab EC2 Hibernate]


1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
2. In the navigation pane, choose Instances.
3. Select an instance, and choose Instance state, Hibernate instance. If Hibernate instance is disabled,
the instance is already hibernated or stopped, or it can't be hibernated.
4. When prompted for confirmation, choose Hibernate. It can take a few minutes for the instance to
hibernate. The instance state first changes to Stopping, and then changes to Stopped when the instance
has hibernated.
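The same action is available from the AWS CLI, assuming the instance was launched with hibernation enabled (the instance ID below is a placeholder):

```shell
# Hibernate a running, hibernation-enabled instance
aws ec2 stop-instances \
    --instance-ids i-0123456789abcdef0 \
    --hibernate
```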
2.31 EC2 Advanced Concepts (Nitro, vCPU, Capacity Reservations)

The AWS Nitro System is the underlying platform for our next generation of EC2 instances that enables
AWS to innovate faster, further reduce cost for our customers, and deliver added benefits like increased
security and new instance types. AWS has completely re-imagined our virtualization infrastructure.
Amazon EC2 instances support multithreading, which enables multiple threads to run concurrently on a
single CPU core. Each thread is represented as a virtual CPU (vCPU) on the instance. An instance has a
default number of CPU cores, which varies according to instance type. For example, an m5.xlarge instance
type has two CPU cores and two threads per core by default—four vCPUs in total.
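The vCPU arithmetic above can be sketched as a quick calculation, using the m5.xlarge figures from the text:

```python
def vcpu_count(cpu_cores: int, threads_per_core: int) -> int:
    """Each thread on each core is exposed as one vCPU."""
    return cpu_cores * threads_per_core

# m5.xlarge defaults: 2 CPU cores x 2 threads per core = 4 vCPUs
print(vcpu_count(2, 2))  # -> 4
```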
Capacity Reservations enable you to reserve capacity for your Amazon EC2 instances in a specific
Availability Zone for any duration. This gives you the flexibility to selectively add capacity reservations and
still get the Regional RI discounts for that usage.

2.32 Amazon Elastic Block Store EBS


Amazon Elastic Block Store (EBS) is an easy to use, high performance block storage service designed for
use with Amazon Elastic Compute Cloud (EC2) for both throughput and transaction intensive workloads at
any scale. A broad range of workloads, such as relational and non-relational databases, enterprise
applications, containerized applications, big data analytics engines, file systems, and media workflows are
widely deployed on Amazon EBS.

You can choose from five different volume types to balance optimal price and performance. You can
achieve single digit-millisecond latency for high performance database workloads such as SAP HANA or
gigabyte per second throughput for large, sequential workloads such as Hadoop. You can change volume
types, tune performance, or increase volume size without disrupting your critical applications, so you have
cost-effective storage when you need it.

Designed for mission-critical systems, EBS volumes are replicated within an Availability Zone (AZ) and
can easily scale to petabytes of data. Also, you can use EBS Snapshots with automated lifecycle policies to
back up your volumes in Amazon S3, while ensuring geographic protection of your data and business
continuity.
Features of EBS:
 Scalability: EBS volume sizes and features can be scaled as per the needs of the system.
 Backup: Users can create snapshots of EBS volumes that act as backups.
 Encryption: Encryption can be a basic requirement when it comes to storage, often due to government
or regulatory compliance. EBS offers an AWS-managed encryption feature.
 Charges: AWS charges you for the storage you provision, not the storage you use. For example, if you
use 1 GB of storage in a 5 GB volume, you are still charged for the full 5 GB EBS volume. EBS charges
vary from region to region.
 The data in an EBS volume persists independently of the instance: it remains unchanged when the
instance is rebooted, and it survives instance termination unless the volume's delete-on-termination flag
is set (the default for root volumes).
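The provisioned-capacity billing rule can be sketched as a small calculation; note that the $0.10/GB-month rate below is a made-up illustration, not a real AWS price:

```python
def monthly_ebs_cost(provisioned_gb: float, used_gb: float,
                     rate_per_gb_month: float) -> float:
    """EBS bills on provisioned size; used_gb does not affect the charge."""
    assert used_gb <= provisioned_gb
    return provisioned_gb * rate_per_gb_month

# A 5 GB volume with only 1 GB used is still billed for all 5 GB
print(monthly_ebs_cost(5, 1, 0.10))  # -> 0.5
```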

2.33 Lab EBS -To create an empty EBS volume using the console
1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
2. In the navigation pane, choose Volumes.
3. Choose Create volume.
4. For Volume type, choose the type of volume to create.
5. For Size, enter the size of the volume, in GiB. For more information, see Constraints on the size and
configuration of an EBS volume.
6. (io1, io2, and gp3 only) For IOPS, enter the maximum number of input/output operations per second
(IOPS) that the volume should provide.
7. (gp3 only) For Throughput, enter the throughput that the volume should provide, in MiB/s.
8. For Availability Zone, choose the Availability Zone in which to create the volume. A volume can be
attached only to an instance that is in the same Availability Zone.
9. For Snapshot ID, keep the default value (Don't create volume from a snapshot).
10. (io1 and io2 only) To enable the volume for Amazon EBS Multi-Attach, select Enable Multi-Attach.
11. Set the encryption status for the volume.
If your account is enabled for encryption by default, then encryption is automatically enabled and you can't
disable it. You can choose the KMS key to use to encrypt the volume.
If your account is not enabled for encryption by default, encryption is optional. To encrypt the volume,
for Encryption, choose Encrypt this volume and then select the KMS key to use to encrypt the volume.
12. (Optional) To assign custom tags to the volume, in the Tags section, choose Add tag, and then enter
a tag key and value pair.
13. Choose Create volume.
14. To use the volume, attach it to an instance.
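The console steps above can be approximated from the AWS CLI; a minimal sketch for a gp3 volume (the Availability Zone, volume ID, and instance ID are placeholders):

```shell
# Create a 10 GiB gp3 volume in a specific Availability Zone
aws ec2 create-volume \
    --availability-zone us-east-1a \
    --volume-type gp3 \
    --size 10

# Attach it to an instance in the same Availability Zone
aws ec2 attach-volume \
    --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 \
    --device /dev/sdf
```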

2.34 EBS Snapshots

You can back up the data on your Amazon EBS volumes to Amazon S3 by taking point-in-time snapshots.
Snapshots are incremental backups, which means that only the blocks on the device that have changed after
your most recent snapshot are saved. This minimizes the time required to create the snapshot and saves on
storage costs by not duplicating data.

2.34 Lab – Snapshot: To create a snapshot using the console


1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
2. In the navigation pane, choose Snapshots, Create snapshot.
3. For Resource type, choose Volume.
4. For Volume ID, select the volume from which to create the snapshot.
The Encryption field indicates the selected volume's encryption status. If the
selected volume is encrypted, the snapshot is automatically encrypted using the same
KMS key. If the selected volume is unencrypted, the snapshot is not encrypted.
5. (Optional) For Description, enter a brief description for the snapshot.
6. (Optional) To assign custom tags to the snapshot, in the Tags section, choose Add
tag, and then enter the key-value pair. You can add up to 50 tags.
7. Choose Create snapshot
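The same snapshot can be created from the AWS CLI (the volume ID below is a placeholder):

```shell
# Create a point-in-time, incremental snapshot of an EBS volume
aws ec2 create-snapshot \
    --volume-id vol-0123456789abcdef0 \
    --description "Nightly backup of data volume"
```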

2.35 Amazon Machine Image (AMI) Overview


An Amazon Machine Image (AMI) is a supported and maintained image, provided by
AWS, that contains the information required to launch an instance. You must specify
an AMI when you launch an instance. You can launch multiple instances from a single
AMI when you require multiple instances with the same configuration. You can use
different AMIs to launch instances when you require instances with different
configurations.

An AMI includes the following:

 One or more Amazon Elastic Block Store (Amazon EBS) snapshots, or, for instance-
store-backed AMIs, a template for the root volume of the instance (for example, an
operating system, an application server, and applications).
 Launch permissions that control which AWS accounts can use the AMI to launch
instances.
 A block device mapping that specifies the volumes to attach to the instance when
it's launched.

2.36 Lab – AMI


To create an AMI from an instance using the console
1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
2. In the navigation pane, choose Instances.
3. Select the instance from which to create the AMI, and then choose Actions, Image
and
templates, Create image.
Tip : If this option is disabled, your instance isn't an Amazon EBS-backed
instance.
4. On the Create image page, specify the following information:
a. For Image name, enter a unique name for the image, up to 127 characters.
b. For Image description, enter an optional description of the image, up to 255
characters.
c. For No reboot, either keep the Enable check box cleared (the default), or select
it.
 If Enable is cleared, when Amazon EC2 creates the new AMI, it
reboots the instance so that it can take snapshots of the attached volumes while
data is at rest, in order to ensure a consistent state.
 If Enable is selected, when Amazon EC2 creates the new AMI, it
does not shut down and reboot the instance.
Warning : If you choose to enable No reboot, we can't guarantee the file system
integrity of the created
image.
d. Instance volumes – You can modify the root volume, and add additional Amazon
EBS and instance store volumes, as follows:
i. The root volume is defined in the first row.
 To change the size of the root volume, for Size, enter the required value.
ii. If you select Delete on termination, when you terminate the instance created
from this AMI, the EBS volume is deleted. If you clear Delete on termination,
when you terminate the instance, the EBS volume is not deleted.
iii. To add an EBS volume, choose Add volume (which adds a new row). For Storage
type, choose EBS, and fill in the fields in the row. When you launch an instance
from your new AMI, additional volumes are automatically attached to the instance.
Empty volumes must be formatted and mounted. Volumes based on a snapshot
must be mounted.
iv. To add an instance store volume, see Add instance store volumes to an AMI. When
you launch an instance from your new AMI, additional volumes are automatically
initialized and mounted. These volumes do not contain data from the instance store
volumes of the running instance on which you based your AMI.
e. Tags – You can tag the AMI and the snapshots with the same tags, or you can tag
them with different tags.
 To tag the AMI and the snapshots with the same tags, choose Tag image and
snapshots together. The same tags are applied to the AMI and every snapshot
that is created.
 To tag the AMI and the snapshots with different tags, choose Tag image and
snapshots separately. Different tags are applied to the AMI and the snapshots
that are created. However, all the snapshots get the same tags; you can't tag each
snapshot with a different tag.
To add a tag, choose Add tag, and enter the key and value for the tag. Repeat for
each tag.
f. When you're ready to create your AMI, choose Create image.
5. To view the status of your AMI while it is being created:
a. In the navigation pane, choose AMIs.
b. Set the filter to Owned by me, and find your AMI in the list.
Initially, the status is pending but should change to available after a few minutes.
6. (Optional) To view the snapshot that was created for the new AMI:
a. Note the ID of your AMI that you located in the previous step.
b. In the navigation pane, choose Snapshots.
c. Set the filter to Owned by me, and then find the snapshot with the new AMI ID
in
the Description column.
When you launch an instance from this AMI, Amazon EC2 uses this snapshot to
create its root
device volume.
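The image creation above can also be done from the AWS CLI (the instance ID and image name are placeholders; omit --no-reboot if you want the consistency reboot described above):

```shell
# Create an AMI from an existing instance without rebooting it
aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "my-web-server-ami" \
    --description "AMI of configured web server" \
    --no-reboot
```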

2.37 EC2 Instance Store

An instance store provides temporary block-level storage for your instance. This storage is located on
disks that are physically attached to the host computer. Instance store is ideal for temporary storage of
information that changes frequently, such as buffers, caches, scratch data, and other temporary content. It
can also be used to store temporary data that you replicate across a fleet of instances, such as a load-
balanced pool of web servers.

An instance store consists of one or more instance store volumes exposed as block devices. The size of an
instance store as well as the number of devices available varies by instance type and instance size.

2.38 EBS Volume Types

EBS Volume Types

Amazon EBS provides volume types that differ in performance characteristics and price. EBS
volume types fall into two categories:
o SSD-backed volumes
o HDD-backed volumes
SSD
o SSD stands for solid-state Drives.
o In June 2014, SSD storage was introduced.
o It is a general-purpose storage type.
o It supports up to 4,000 IOPS, which is quite high.
o SSD storage is very high performing, but it is quite expensive as compared to HDD (Hard Disk
Drive) storage.
o SSD volume types are optimized for transactional workloads such as frequent read/write operations
with small I/O size, where the performance attribute is IOPS.
SSD is further classified into two parts:
o General Purpose SSD
o Provisioned IOPS SSD
General Purpose SSD
o General Purpose SSD is also sometimes referred to as a GP2.
o It is a General purpose SSD volume that balances both price and performance.
o You get a baseline of 3 IOPS per GB, with a maximum of 10,000 IOPS and the ability to burst up to
3,000 IOPS for extended periods; volumes of 3,334 GiB and above reach the 10,000 IOPS maximum.
If you need fewer than 10,000 IOPS, GP2 is usually preferable, as it gives you the best balance of
performance and price.
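The GP2 baseline rule can be sketched as a small calculation, using the 3 IOPS/GB ratio and 10,000 IOPS ceiling from the text (the 100 IOPS floor for very small volumes comes from the gp2 specification, not from the text above):

```python
def gp2_baseline_iops(size_gib: int) -> int:
    """Baseline = 3 IOPS per GiB, floored at 100 and capped at 10,000."""
    return max(100, min(3 * size_gib, 10_000))

print(gp2_baseline_iops(1000))  # -> 3000
print(gp2_baseline_iops(3334))  # -> 10000 (hits the ceiling)
```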
Provisioned IOPS SSD
o It is also referred to as IO1.
o It is mainly used for high-performance applications such as large relational databases.
o It is designed for I/O-intensive applications such as large relational or NoSQL databases.
o It is used when you require more than 10,000 IOPS.

HDD
o It stands for Hard Disk Drive.
o HDD-based storage was introduced in 2008.
o The size of HDD-based storage can be between 1 GB and 1 TB.
o It can support up to 100 IOPS which is very low.

Throughput Optimized HDD (st1)


o It is also referred to as ST1.
o Throughput Optimized HDD is a low-cost HDD designed for those applications that require higher
throughput up to 500 MB/s.
o It is useful for those applications that require the data to be frequently accessed.
o It is used for Big data, Data warehouses, Log processing, etc.
o It cannot be used as a boot volume; it can only be attached as an additional data volume. For example,
if Windows Server is installed on the C: drive, the C: drive cannot be a Throughput Optimized hard
disk, but the D: drive or another data drive can be.
o The size of the Throughput Hard disk can be 500 GiB to 16 TiB.
o It supports up to 500 IOPS.
Cold HDD (sc1)
o It is also known as SC1.
o It is the lowest cost storage designed for the applications where the workloads are infrequently
accessed.
o It is useful when data is rarely accessed.
o It is mainly used for a File server.
o It cannot be a boot volume.
o The size of the Cold Hard disk can be 500 GiB to 16 TiB.
o It supports up to 250 IOPS.
Magnetic Volume
o It is the lowest cost storage per gigabyte of all EBS volume types.
o It is ideal for the applications where the data is accessed infrequently
o It is useful for applications where the lowest storage cost is important.
o Among the HDD volume types, the magnetic volume is the only one that is bootable. Therefore, it can
be used as a boot volume.

2.39 EBS Multi-Attach

Amazon Elastic Block Store (EBS) is a durable block-based storage device that can be attached to your
EC2 instances. AWS released a feature called Multi-Attach, which allows a single Provisioned IOPS
(io1/io2) EBS volume to be attached to up to 16 Nitro-based EC2 instances at once, providing higher
availability for your Linux workloads. Each instance to which the volume is attached has full read and
write permission to the volume.

2.40 EBS Encryption


AWS has made encryption of EBS volumes as easy as possible. When a volume is encrypted, all data
stored on it is encrypted, including the boot and data volumes. When you attach an encrypted EBS to an
EC2 instance, encryption extends beyond the stored data to all data transfers to and from the volume.

AWS supports a default encryption process that you can configure by region within your account. You can
also choose between using AWS-created keys or a customer-managed key (CMK) to encrypt your volumes.
In both cases, Amazon encrypts the data with industry-standard AES-256 encryption and stores the
encryption key in the AWS Key Management Service (KMS).

2.41 EFS
Amazon Elastic File System (Amazon EFS) is a straightforward, serverless, set-and-forget file system.
There is no setup or minimum fee. You only pay for the storage you use, for reading and writing access to
data kept in Infrequent Access storage classes, and any allocated throughput. It is a scalable, cloud-based file
system supporting Linux-based applications and workloads that can work in tandem with AWS cloud
services and on-premises resources.
Depending on your needs, EFS offers two storage classes: Infrequent Access and Standard Access.
Standard access storage is meant for regularly accessed data, whereas Infrequent Access storage is intended
for long-term but less frequently used information at a cheaper cost.
The file systems can scale automatically from gigabytes to petabytes of data without the requirement for
storage provisioning. An AWS EFS file system can be accessed by tens, hundreds, or even thousands of
compute instances at the same time, and Amazon EFS ensures consistent performance for each compute
instance.
It is built to be both long-lasting and readily available. There is no minimum price or setup cost with
Amazon EFS, and you just pay for what you use.

2.42 Lab – EFS


To create your Amazon EFS file system
1. Sign in to the AWS Management Console and open the Amazon EFS console
at https://console.aws.amazon.com/efs/.
2. Choose Create file system to open the Create file system dialog box.

3. (Optional) Enter a Name for your file system.


4. For Virtual Private Cloud (VPC), choose your VPC, or keep it set to your default
VPC.
5. Choose Create to create a file system that uses the following service
recommended settings:
 Automatic backups enabled.
 Mount targets configured with the following settings:
o Created in each Availability Zone in the AWS Region in which the file system
is created.
o Located in the default subnets of the VPC you selected.
o Using the VPC's default security group – You can manage security groups
after the file system is created.
 Standard storage class.
 General Purpose performance mode.
 Elastic throughput mode.
 Encryption of data at rest enabled using your default key for Amazon EFS
(aws/elasticfilesystem).
 Lifecycle Management – Amazon EFS creates the file system with the
following lifecycle policies:
o Transition into IA set to 30 days since last access.
o Transition out of IA set to None.
6. After you create the file system, you can customize the file system's settings with
the exception of availability and durability, encryption, and performance mode.
7. The File systems page appears with a banner across the top showing the status
of the file system you created. A link to access the file system details page appears in
the banner when the file system becomes available.
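The lab above can be sketched from the AWS CLI as well (the creation token, file system ID, subnet ID, and security group ID are placeholders):

```shell
# Create an encrypted, General Purpose EFS file system
aws efs create-file-system \
    --creation-token my-efs-demo \
    --performance-mode generalPurpose \
    --encrypted

# Add a mount target in one subnet so instances there can mount it
aws efs create-mount-target \
    --file-system-id fs-0123456789abcdef0 \
    --subnet-id subnet-0123456789abcdef0 \
    --security-groups sg-0123456789abcdef0
```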

2.43 AWS EFS vs EBS


Elastic File Storage (EFS), Elastic Block Storage (EBS), and Simple Storage Service (S3) are 3 different
types of storage offered by AWS, and they all have their advantages.
Let’s compare them to get a better idea of what kind of storage suffices your needs.
Performance
 AWS EFS: 3 GB/s baseline performance; up to 10 GB/s throughput; up to 500K IOPS
 AWS EBS: HDD volumes: 250-500 IOPS per volume depending on volume type; SSD volumes:
16K-64K IOPS per volume

Availability and Accessibility
 AWS EFS: No publicly available SLA; up to 1,000 concurrent instances; accessible from any AZ or
region
 AWS EBS: 99.9% available; accessible via a single EC2 instance

Access Control
 AWS EFS: IAM user-based authentication; security groups
 AWS EBS: Security groups; user-based authentication (IAM)

Storage and File Size Limits
 AWS EFS: 52 TB maximum for individual files
 AWS EBS: Maximum storage size of 16 TB per volume; no file size limit on disk

Cost
 AWS EFS: Standard storage: $0.30-$0.39 per GB-month depending on region; Infrequent Access
storage: $0.025-$0.03 per GB-month; provisioned throughput: $6 per MB/s-month
 AWS EBS: Free tier: 30 GB; General Purpose: $0.045 per GB-month; Provisioned IOPS SSD: $0.125
per GB-month plus $0.065 per provisioned IOPS-month
