Week 2
When you create an AWS account, you begin with one sign-in identity that has complete access to all
AWS services and resources in the account. This identity is called the AWS account root user and is
accessed by signing in with the email address and password that you used to create the account. We strongly
recommend that you don't use the root user for your everyday tasks. Safeguard your root user credentials
and use them to perform the tasks that only the root user can perform.
IAM's primary capability is managing access and permissions. It provides two essential functions that work together to establish basic security for enterprise resources:
Authentication. Verifies that a user is who they claim to be, typically by checking credentials such as a password, access key, or multi-factor authentication token.
Authorization. Once a user is authenticated, authorization defines the access rights for that user and limits access to only the resources permitted for that specific user. Not every user will have access to every application, data set, or service across the organization. Authorization typically follows the concept of least privilege, where users receive the minimum access rights necessary for their jobs.
2.3 Principle of least privilege
The “Principle of Least Privilege” (POLP) states that a given user account should have exactly the access rights necessary to execute its role’s responsibilities, no more and no less. POLP is a fundamental concept within identity and access management (IAM).
Least privilege is critical for preventing the continual accumulation of unchecked access rights over a user account’s lifecycle. The “user account lifecycle” defines the collective management stages for every user account over time: creation, review/update, and deactivation.
IAM deals with four principal entities: users, groups, roles, and policies. These entities detail who a user is and what that user is allowed to do within the environment:
Users. A user is one of the most basic entities in IAM. A user is typically a person or a service, such as an application or platform, that interacts with the environment. An IT team assigns users authentication credentials, such as a username and password, which validate the user's identity. Users can then access resources that are assigned through permissions or policies.
Groups. A group is a collection of users that share common permissions and policies. Any permissions associated with a group are automatically assigned to all users in the group. For example, placing a user into
an Administrator group will automatically assign the user any permissions given to the Administrator
group. IT teams can move users between groups and automatically shift permissions as groups change.
Roles. A role is a generic identity that is not associated with any specific user. Roles do not use
passwords and can be assumed by authorized users. Roles enable varied users to temporarily assume
different permissions for different tasks.
Policies. Policies are AWS objects that are attached to users, groups, roles or resources that define the
permissions granted to those identities. When a user tries to access a resource, the request is checked
against the associated policies. If the request is permitted, then it is granted. If not, it is denied. AWS
policies are based on six different criteria: identity, resources, permission boundaries, service control
policies, access control lists and session policies. IT teams can attach multiple policies to each identity
for more granular control of permissions.
A policy is an object in AWS that, when associated with an identity or resource, defines their permissions.
AWS evaluates these policies when an IAM principal (user or role) makes a request. Permissions in the
policies determine whether the request is allowed or denied. You manage access in AWS by creating
policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources.
IAM policies define permissions for an action regardless of the method that you use to perform the
operation. For example, if a policy allows the GetUser action, then a user with that policy can get user
information from the AWS Management Console, the AWS CLI, or the AWS API. When you create an
IAM user, you can choose to allow console or programmatic access. If console access is allowed, the IAM
user can sign in to the console using their sign-in credentials. If programmatic access is allowed, the user can
use access keys to work with the CLI or API.
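To make the shape of such a policy concrete, here is a minimal sketch of an identity-based policy document that allows only the GetUser action discussed above. This is illustrative only; real policies usually scope Resource to specific ARNs rather than "*".

```python
import json

# A minimal identity-based policy allowing only the iam:GetUser action.
# Illustrative sketch: real policies typically restrict Resource to
# specific ARNs instead of "*".
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:GetUser",
            "Resource": "*",
        }
    ],
}

# Serialize to the JSON form you would paste into the console's JSON editor.
policy_json = json.dumps(policy, indent=2)
print(policy_json)
```

A user with this policy attached could call GetUser from the console, the CLI, or the API, because the policy constrains the action, not the access method.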
Policy types
The following policy types, listed in order from most frequently used to least frequently used, are available for use in AWS.
1. Identity-based policies – Attach managed and inline policies to IAM identities (users, groups to which
users belong, or roles). Identity-based policies grant permissions to an identity.
2. Resource-based policies – Attach inline policies to resources. The most common examples of resource-
based policies are Amazon S3 bucket policies and IAM role trust policies. Resource-based policies grant
permissions to the principal that is specified in the policy. Principals can be in the same account as the
resource or in other accounts.
3. Permissions boundaries – Use a managed policy as the permissions boundary for an IAM entity (user
or role). That policy defines the maximum permissions that the identity-based policies can grant to an
entity, but does not grant permissions. Permissions boundaries do not define the maximum permissions
that a resource-based policy can grant to an entity.
4. Organizations SCPs – Use an AWS Organizations service control policy (SCP) to define the maximum
permissions for account members of an organization or organizational unit (OU). SCPs limit permissions
that identity-based policies or resource-based policies grant to entities (users or roles) within the account,
but do not grant permissions.
5. Access control lists (ACLs) – Use ACLs to control which principals in other accounts can access the
resource to which the ACL is attached. ACLs are similar to resource-based policies, although they are
the only policy type that does not use the JSON policy document structure. ACLs are cross-account
permissions policies that grant permissions to the specified principal. ACLs cannot grant permissions to
entities within the same account.
6. Session policies – Pass advanced session policies when you use the AWS CLI or AWS API to assume a role or create a federated user session. Session policies limit the permissions that the role or user's identity-based policies grant to the session. Session policies limit permissions for a created session, but do not grant permissions. For more information, see Session Policies.
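The key idea across these policy types is that identity-based policies grant permissions, while boundaries and SCPs only limit them, and an explicit deny always wins. The following is a greatly simplified model of that evaluation logic (real AWS evaluation also handles resource-based policies, conditions, and wildcards, which are omitted here):

```python
def is_allowed(action, identity_allows, boundary=None, scp=None, explicit_denies=()):
    """Greatly simplified sketch of AWS policy evaluation:
    an explicit deny always wins; otherwise the action must be
    granted by an identity-based policy AND permitted by the
    permissions boundary and SCP (which limit but never grant)."""
    if action in explicit_denies:
        return False  # explicit deny overrides everything
    if action not in identity_allows:
        return False  # nothing grants it: implicit deny
    if boundary is not None and action not in boundary:
        return False  # boundary caps what identity policies can grant
    if scp is not None and action not in scp:
        return False  # SCP caps permissions for the whole account
    return True

# The boundary limits what identity policies can grant:
print(is_allowed("s3:GetObject", {"s3:GetObject"}, boundary={"s3:GetObject"}))   # True
print(is_allowed("iam:CreateUser", {"iam:CreateUser"}, boundary={"s3:GetObject"}))  # False
```

Note how the second call is denied even though an identity policy allows iam:CreateUser, because the boundary does not include it.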
The console's visual editor walks you through creating a policy; the remaining steps are:
6. For Resources, if the service and actions that you selected in the previous steps do not support choosing specific resources, all resources are allowed and you cannot edit this section.
If you chose one or more actions that support resource-level permissions, then the visual editor lists
those resources. You can then expand Resources to specify resources for your policy.
7. To add more permission blocks, choose Add more permissions. For each block, repeat steps 2 to 5.
8. When you are finished adding permissions to the policy, choose Next.
9. On the Review and create page, type a Policy Name and a Description (optional) for the policy that
you are creating. Review the Permissions defined in this policy to make sure that you have granted the
intended permissions.
10. Choose Create policy to save your new policy.
After you create a policy, you can attach it to your groups, users, or roles.
An IAM role is an IAM identity that you can create in your account that has specific
permissions. An IAM role is similar to an IAM user, in that it is an AWS identity with permission
policies that determine what the identity can and cannot do in AWS. However, instead of being uniquely
associated with one person, a role is intended to be assumable by anyone who needs it. Also, a role does not
have standard long-term credentials such as a password or access keys associated with it. Instead, when you
assume a role, it provides you with temporary security credentials for your role session.
You can use roles to delegate access to users, applications, or services that don't normally have access to
your AWS resources. For example, you might want to grant users in your AWS account access to resources
they don't usually have, or grant users in one AWS account access to resources in another account. Or you
might want to allow a mobile app to use AWS resources, but not want to embed AWS keys within the app
(where they can be difficult to rotate and where users can potentially extract them). Sometimes you want to
give AWS access to users who already have identities defined outside of AWS, such as in your corporate
directory. Or, you might want to grant access to your account to third parties so that they can perform an
audit on your resources.
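Who may assume a role is controlled by the role's trust policy. As a sketch, here is a hypothetical trust policy (the classic one that lets EC2 instances assume a role) and how you might read off the allowed principals from it:

```python
import json

# Hypothetical trust policy for a role that EC2 instances may assume.
# The service principal and structure follow the standard trust-policy shape.
trust_policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"Service": "ec2.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }
  ]
}
""")

# Collect every principal that is allowed to assume the role.
principals = [
    stmt["Principal"]
    for stmt in trust_policy["Statement"]
    if stmt["Effect"] == "Allow" and stmt["Action"] == "sts:AssumeRole"
]
print(principals)
```

When the principal assumes the role, STS returns temporary credentials for the role session rather than long-term keys.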
2.8. Creating a role for an AWS service (console) or IAM Roles Hands On
You can use the AWS Management Console to create a role for a service. Because some services support
more than one service role, see the AWS documentation for your service to see which use case to choose.
You can learn how to assign the necessary trust and permissions policies to the role so that the service can
assume the role on your behalf. The steps that you can use to control the permissions for your role can vary,
depending on how the service defines the use cases, and whether or not you create a service-linked role.
The following tools are crucial to upholding IAM security. They include but are not limited to:
Multi-factor authentication ensures that digital users are who they say they are by requiring that they
provide at least two pieces of evidence to prove their identity. Each piece of evidence must come from a
different category: something they know, something they have or something they are. If one of the factors
has been compromised, the chances of another factor also being compromised are low, so requiring multiple
authentication factors thereby provides a higher level of assurance about the user’s identity. These additional
factors might take the form of numerical codes sent to a mobile phone, key fobs, smart cards, location
checks, biometric information or other factors.
Let us generate a credential report. On the bottom left, I am going to create a credential report. I can click on Download Report to download it, and it will be a CSV file.
Now this CSV, because I am using a training account, is not fascinating, but as we can see we have two rows in it: my root account and my account named sandy. We can see when the user was created, whether the password was enabled, when the password was last used, and when it was last changed.
This report is extremely helpful if you want to look at users that have not been changing their password, or not using it or their account. It gives you a great way to find which users deserve your attention from a security standpoint. Next, I want to look at IAM Access Advisor. I am going to click on my user, sandy, and on the right-hand side it says Access Advisor.
This is going to show me when some services were last used. The recent activity usually appears within four hours; if you don’t see all the data, that’s why. We can see that, for example, Identity and Access Management was last accessed today, thanks to this policy right here. Also, the Health APIs and Notifications were accessed today. This is the little bell right here that is automatically accessed to see if there are any notifications for your account.
We will see what this is: it is the Personal Health Dashboard. But the other services, for example Business, AWS Accounts, or Certificate Manager, I have not been using. So maybe it makes sense for me to remove these permissions from this user, because it seems this user is not using these services. This is the whole power of Access Advisor. And as you can see, there are lots of services in AWS: about 23 pages just like this, about 230 services at the time of recording. We have just seen all the security tools we have on IAM.
With AWS Identity and Access Management (IAM), you can specify who can access which AWS services
and resources, and under which conditions. To help secure your AWS resources, follow these IAM best
practices.
1. Require human users to use federation with an identity provider to access AWS by using temporary credentials
You can use an identity provider for your human users to provide federated access to AWS
accounts by assuming IAM roles, which provide temporary credentials. For centralized access management,
we recommend that you use AWS IAM Identity Center to manage access to your accounts and permissions
within those accounts.
2. Require workloads to use temporary credentials with IAM roles to access AWS
A workload is a collection of resources and code, such as an application or backend process, that requires
an identity to make requests to AWS services. IAM roles have specific permissions and provide a way for
workloads to access AWS by relying on temporary security credentials through an IAM role. For more
information, see IAM roles.
We recommend using IAM roles for human users and workloads accessing your AWS resources so that
they rely on temporary credentials. However, for scenarios in which you need IAM users or root users in
your account, require MFA for additional security. Each user's credentials and device-generated response to
an authentication challenge are required to complete the sign-in process.
3. Rotate access keys regularly for use cases that require long-term credentials
Where possible, we recommend relying on temporary credentials instead of creating long-term credentials
such as access keys. However, for scenarios in which you need IAM users with programmatic access and
long-term credentials, use access key last used information to rotate and remove access keys regularly. For
more information, see Rotating access keys.
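The "access key last used" and creation-date information lets you script this check. Below is a sketch that flags keys older than a chosen rotation window; the 90-day threshold is an assumption (a common policy), not an AWS requirement, and the key IDs are placeholders:

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=90)  # assumed rotation policy; pick what fits your org

def keys_needing_rotation(keys, now=None):
    """Return the access key IDs older than MAX_AGE.
    `keys` maps a key ID to its creation datetime (as reported,
    for example, by listing a user's access keys)."""
    now = now or datetime.now(timezone.utc)
    return [kid for kid, created in keys.items() if now - created > MAX_AGE]

# Placeholder data standing in for real key metadata.
now = datetime(2024, 6, 1, tzinfo=timezone.utc)
keys = {
    "AKIAOLD": datetime(2024, 1, 1, tzinfo=timezone.utc),   # ~5 months old
    "AKIANEW": datetime(2024, 5, 15, tzinfo=timezone.utc),  # ~2 weeks old
}
print(keys_needing_rotation(keys, now=now))  # ['AKIAOLD']
```

In practice you would feed this from the credential report or the access-key listing for each IAM user.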
4. Safeguard your root user credentials and don't use them for everyday tasks
When you create an AWS account, you establish a root user name and password to sign in to the AWS
Management Console. Configure MFA to safeguard these credentials the same way you would protect other
sensitive personal information. Also, use your root user to complete the tasks that can be performed only by
the root user—and not for everyday tasks. For more information, see Best practices to protect your account's
root user.
5. Get started with AWS managed policies and move toward least-privilege permissions
To get started granting permissions to your users and workloads, use the AWS managed policies that grant
permissions for many common use cases and are available in your AWS account. Keep in mind that AWS
managed policies might not grant least-privilege permissions for your specific use cases because they are
available for use by all AWS customers. As a result, we recommend that you reduce permissions further by
defining customer managed policies that are specific to your use cases. For more information, see AWS
managed policies. For information about AWS managed policies that are designed for specific job functions,
see AWS managed policies for job functions.
As you scale your workloads, separate them by using multiple accounts that are managed with AWS
Organizations. We recommend that you use Organizations service control policies (SCPs) to
establish permissions guardrails to control access for all IAM users and roles across your accounts. SCPs are a type of organization policy that you can use to manage permissions at the organization, organizational unit, or account level. SCPs limit permissions but do not grant them; to grant permissions, your administrator must still attach identity-based or resource-based policies to IAM users, IAM roles, or the resources in your accounts.
In some scenarios, you might want to delegate permissions management within an account to others. For
example, you might want to allow developers to create and manage roles for their workloads. When you
delegate permissions to others, use permissions boundaries, which use a managed policy to set the maximum
permissions that an identity-based policy can grant to an IAM role. A permissions boundary does not grant
permissions on its own. For more information, see Permissions boundaries for IAM entities.
Introduction
AWS Access Keys are credentials used to authenticate programmatic requests to AWS services, for example through the Command Line Interface (CLI) and Software Development Kits (SDKs). An access key consists of two parts: an access key ID and a secret access key.
The access key ID is a unique identifier that accompanies requests to AWS services, while the secret access key is a secret value used to cryptographically sign those requests. Access keys are often used by developers and system administrators.
The AWS Command Line Interface (CLI) is a tool that allows users to interact with AWS services from a
command prompt or shell script. The CLI uses Access Keys to authenticate requests to AWS services and
provides a simple, command-line interface for managing AWS resources.
The AWS SDKs are software development kits that provide libraries and APIs for developers to build
applications that interact with AWS services. The SDKs use Access Keys to authenticate requests to AWS
services and provide a range of programming language-specific libraries and APIs that make it easier to
build AWS applications.
Using Access Keys, CLI, and SDKs can help automate tasks, manage AWS resources, and build custom
applications that interact with AWS services. However, it’s important to secure Access Keys and follow best
practices for managing and rotating them regularly to prevent unauthorized access to AWS resources.
1. Download the AWS CLI installer for Windows from the AWS website. The installer is available in
both MSI and EXE formats.
2. Run the installer and follow the prompts to install AWS CLI. By default, AWS CLI will be installed
to C:\Program Files\Amazon\AWSCLI.
3. Once the installation is complete, open a Command Prompt window.
4. To verify that AWS CLI is installed correctly, type the following command:
aws --version
This should display the version number of AWS CLI installed on your system.
5. Next, you’ll need to configure AWS CLI with your AWS access keys. You can do this by typing the
following command:
aws configure
This will prompt you for your AWS Access Key ID, Secret Access Key, default region name, and default
output format. You can obtain your Access Key ID and Secret Access Key from the AWS Management
Console.
6. Once you’ve entered your AWS access keys and configured AWS CLI, you’re ready to start using it to interact with AWS services. For example, you can use the following command to list all of your EC2 instances:
aws ec2 describe-instances
That’s it! You’ve now set up AWS CLI on your Windows system and can start using it to interact with
AWS services from the command line.
AWS CloudShell is a browser-based, pre-authenticated shell that you can launch directly from the AWS
Management Console. You can navigate to CloudShell from the AWS Management Console a few different
ways. For more information, see How to get started with AWS CloudShell?
To start working with the shell, sign in to the AWS Management Console and choose one of the following options:
On the navigation bar, choose the CloudShell icon.
You can also switch your CloudShell session to a full screen by clicking Open in new browser tab.
For instructions on how to sign in to the AWS Management Console and perform key tasks with AWS CloudShell, see Getting started with AWS CloudShell.
You can run AWS CLI commands using your preferred shell, such as Bash, PowerShell, or Z shell. And
you can do this without downloading or installing command line tools.
When you launch AWS CloudShell, a compute environment that's based on Amazon Linux 2 is created.
Within this environment, you can access an extensive range of pre-installed development tools, options
for uploading and downloading files, and file storage that persists between sessions.
Virtualization follows a very simple architecture. Let's first look at the left side of the figure, this is the
traditional machine. Here we have the hardware at the base layer and the host operating system, such as
Linux, Windows, Mac, etc. Above, we have the application running directly on the host machine.
Here, since we run one application on the host machine, a lot of computer resources go unused. To avoid this, we can run multiple applications that share the resources. This might increase the efficiency of resource utilization, but there are a few issues: since the resources are shared, the risk of a data breach is higher, and the applications cannot operate in dedicated environments.
To address these issues and enable efficient resource utilization, virtualization was introduced. It follows the same architecture pattern as the traditional machine, but with a slight change.
Virtualization architecture starts with the base hardware, as the traditional machine, but it replaces the
operating system with the hypervisor. The hypervisor creates virtual machines for these applications and
allots resources to them, and these VMs will have their OS, storage, computing power, etc., allowing the
application to run in an isolated environment with dedicated resources.
This allows efficient resource utilization as well as provides an isolated or dedicated environment for the
application inside the machine.
3. Desktop Virtualization: Desktop virtualization allows the users’ OS to be remotely stored on a server in
the data center. It allows the user to access their desktop virtually, from any location by a different machine.
Users who want specific operating systems other than Windows Server will need to have a virtual desktop.
The main benefits of desktop virtualization are user mobility, portability, and easy management of software
installation, updates, and patches.
4. Storage Virtualization: Storage virtualization presents an array of servers managed by a virtual storage system. The servers aren’t aware of exactly where their data is stored and instead function more like worker bees in a hive. It allows storage from multiple sources to be managed and utilized as a single repository. Storage virtualization software maintains smooth operations, consistent performance, and a continuous suite of advanced functions despite changes, breakdowns, and differences in the underlying equipment.
5. Server Virtualization: This is a kind of virtualization in which server resources are masked. Here, the central (physical) server is divided into multiple virtual servers by changing the identity numbers and processors, so each system can run its own operating system in an isolated manner, while each sub-server knows the identity of the central server. This increases performance and reduces operating costs by dividing the main server's resources into sub-server resources. It is beneficial for virtual migration, reducing energy consumption, reducing infrastructure costs, and more.
6. Data Virtualization: This is the kind of virtualization in which data is collected from various sources and managed in a single place, without users needing technical details such as how the data is collected, stored, and formatted. The data is arranged logically so that interested people, stakeholders, and users can access its virtual view remotely through various cloud services. Many large companies provide such services, including Oracle, IBM, AtScale, and CData.
o In network settings
o The instance is going to get a public IP address, which we use to connect to it.
o Select Allow HTTP traffic from the internet
o Leave the other options at their defaults
3. Memory-Optimized Instances
Memory-optimized instances are geared for workloads that need huge datasets to be processed in
memory. Memory here defines RAM which allows us to do multiple tasks at a time. Data stored is used to
perform the central processing unit (CPU) tasks it loads from storage to memory to run. This process of
preloading gives the CPU direct access to the computer program. Assume you have a workload that
necessitates the preloading of significant volumes of data prior to executing an application. A high-
performance database or a task that requires real-time processing of a significant volume of unstructured
data might be involved in this scenario. In this case, consider using a memory-optimized instance. It is used
to run applications that require a lot of memory with high performance.
Examples:
Helpful for databases that need to process large datasets quickly.
Processes that do not need a large quantity of data yet require speedy, real-time processing.
Some Memory-Optimized Instance Types
The R and X categories belong to memory-optimized; let’s discuss one of them.
R7g.medium: It runs on AWS Graviton processors with ARM architecture, with 1 vCPU, 8 GiB of memory, EBS-only storage, and up to 12.5 Gbps of network bandwidth.
X1: The X1 type is mainly suited for enterprise-grade in-memory database applications and comes with 64 vCPUs, 976 GiB of memory, 1 x 1,920 GB of SSD storage, 7,000 Mbps of dedicated EBS bandwidth, and 10 Gbps of network performance.
Features
Elastic Fabric Adapter (EFA) is supported on the r7g.16xlarge and r7g.metal instances.
Includes the newest DDR5 memory, which provides 50% more bandwidth than DDR4.
Offers up to 20% higher networking bandwidth than R6g instances.
Applications
1. In-Memory Databases: Memory-optimized instances are well suited for in-memory databases that require high memory capacity and bandwidth.
2. Big Data Processing: For big data processing workloads like Apache Spark and Apache Hadoop
that demand high memory capacity and bandwidth, memory-optimized instances can be deployed.
Instances that have been optimized for memory can offer the memory space and bandwidth required to
process huge amounts of data fast and effectively.
SSH, also known as Secure Shell or Secure Socket Shell, is a cryptographic network protocol that gives users, particularly system administrators, a secure way to access a computer over an unsecured network. It allows secure remote login from one computer to another, providing a secure channel over an unsecured network and ensuring your data is protected during transmission.
Prerequisites
Before we dive into the steps, ensure you have the following:
1. An AWS account
2. A running Windows EC2 instance
3. PuTTY installed on your local machine
When you create an EC2 instance, AWS provides a key pair for that instance. This key pair consists of a
public key that AWS stores, and a private key file that you store (.pem file).
If you already have the .pem file, proceed to the next step. If not, follow these steps to create a new key
pair and download the .pem file:
PuTTY does not natively support the .pem format that AWS uses for key pairs. Therefore, you need to
convert your .pem file to the .ppk format. Here’s how:
Now that you have your .ppk file, you can connect to your Windows EC2 instance. Here’s how:
Finally, you can SSH into your Windows EC2 instance. Here’s how:
1. Open PuTTY.
2. In the Category pane, choose Session and complete the following fields:
o In the Host Name box, enter the public DNS (IPv4) of your instance.
o In the Port box, type 22.
3. In the Category pane, expand SSH, and then choose Auth.
4. Choose Browse, and then select your .ppk file.
5. Choose Open, and then choose Yes to dismiss the alert dialog box.
6. In the PuTTY console, log in as the appropriate user.
Congratulations! You have successfully SSHed into your Windows EC2 instance.
Amazon EC2 Instance Connect provides a simple and secure way to connect to your instances using
Secure Shell (SSH). With EC2 Instance Connect, you use AWS Identity and Access Management (IAM)
policies and principals to control SSH access to your instances, removing the need to share and manage SSH
keys.
After you've created an IAM role, you can launch an instance, and associate that role with the instance
during launch.
To launch an instance with an IAM role (console)
1. Follow the procedure to launch an instance.
2. Expand Advanced details, and for IAM instance profile, select the IAM role that you created.
3. Configure any other details that you require for your instance or accept the defaults, and select a key
pair. For information about the fields in the launch instance wizard, see Launch an instance using
defined parameters.
4. In the Summary panel, review your instance configuration, and then choose Launch instance.
5. If you are using the Amazon EC2 API actions in your application, retrieve the AWS security credentials
made available on the instance and use them to sign the requests. The AWS SDK does this for you.
You can retrieve the role's temporary security credentials from the instance metadata service (IMDS) at:
http://169.254.169.254/latest/meta-data/iam/security-credentials/role_name
With IMDSv1 this is a plain GET request; with IMDSv2 you must first obtain a session token and include it with the request.
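The metadata service answers that request with a JSON document of temporary credentials. As a sketch, here is that documented response shape with placeholder values, and how an application might parse out the key fields and the expiration time:

```python
import json
from datetime import datetime, timezone

# A sample of the JSON document IMDS returns for
# .../iam/security-credentials/role_name (all values are placeholders).
response = json.loads("""
{
  "Code": "Success",
  "Type": "AWS-HMAC",
  "AccessKeyId": "ASIAEXAMPLE",
  "SecretAccessKey": "EXAMPLESECRET",
  "Token": "EXAMPLETOKEN",
  "Expiration": "2024-06-01T12:00:00Z"
}
""")

# These temporary credentials expire; callers should refresh before then.
expiry = datetime.strptime(
    response["Expiration"], "%Y-%m-%dT%H:%M:%SZ"
).replace(tzinfo=timezone.utc)
print(response["AccessKeyId"], expiry.isoformat())
```

The AWS SDKs perform this retrieval and refresh automatically, which is why code running on an instance with a role attached needs no hard-coded keys.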
Private IP: This is like your nickname. You are recognized by it within your private circles, i.e., your family and friends. Similarly, a private IP is used to identify network resources within a private network. Using the private IP of a network resource, you can’t reach it over the internet. There are some IP ranges dedicated for use as private IPs, called RFC 1918 addresses. It is best to use private IPs from these ranges, though it’s not a strict requirement. In AWS, this is the most used of the three IP address types and is required when creating a VPC, subnet, etc. It is also assigned by default to every instance on creation.
Public IP: This is like your mobile number. People all over the world can identify and reach you using it. Though it has a wider reach, unlike a nickname it is temporary, and there is no guarantee you will have the same number tomorrow. A public IP is a routable address over the internet. AWS EC2 instances can be assigned a public address if you choose the option, but it is dynamic: AWS assigns an arbitrary public IP that makes your instance accessible over the internet. Every time there is a stop and start, your instance gets a different public IP. It’s like starting every day with a new mobile number. On the other hand, if you restart your instance, you keep the same public IP. A restart is more like a power nap during the day, where you keep the same mobile number.
Elastic IP: This is like Aadhaar number or Social Security Number (SSN). It is meant
to uniquely and permanently identify you during your lifetime and does not change. As mentioned earlier,
Public IP is dynamic which is a problem. Imagine how difficult life would be if you change your mobile
number every day. To solve this problem, AWS allows you to select an Elastic IP which is actually a static
Public IP address. Once you attach it to an EC2 instance, it will not change on Instance stop and start.
Cluster – packs instances close together inside an Availability Zone. This strategy enables workloads to
achieve the low-latency network performance necessary for tightly-coupled node-to-node communication
that is typical of high-performance computing (HPC) applications.
Partition – spreads your instances across logical partitions such that groups of instances in one partition
do not share the underlying hardware with groups of instances in different partitions. This strategy is
typically used by large distributed and replicated workloads, such as Hadoop, Cassandra, and Kafka.
Spread – strictly places a small group of instances across distinct underlying hardware to reduce
correlated failures.
AWS Network Interfaces, or Elastic Network Interfaces (AWS ENIs), are virtual network cards attached to EC2 instances that facilitate network connectivity for those instances. Having two or more network interfaces attached to an instance permits it to communicate on two separate subnets.
AWS Network Interface has the following characteristics:
A primary private IPv4 address of a user’s VPC
One Elastic IP address (IPv4) for every private IPv4 address
One or more secondary private IPv4 addresses of a user’s VPC
A description
A destination/source check flag
One public IPv4 address
One or more security groups
A MAC address
One or more IPv6 addresses
AWS Network Interfaces can be created, configured, and attached to instances within the same Availability Zone. After creation, an AWS ENI can be detached from one instance and then attached to another.
2.28 Lab - ENI
To create an AWS ENI, follow these steps:
1. Open the Amazon EC2 console
2. Click on Network Interfaces from the navigation pane
3. Click on Create network interface
4. Optionally add a descriptive name for Description
5. Choose a subnet (IPv4-only, IPv6-only, or dual-stack (IPv4 and IPv6)). The next option will change
according to the type of subnet you select
6. Do either of the following for Private IPv4 address:
Permit Amazon EC2 to choose an IPv4 address from the subnet by clicking on Auto-assign
Enter an IPv4 address from the subnet yourself by clicking on Custom
7. Do one of the following for IPv6 address:
Select None if you do not wish to allocate an IPv6 address to the network interface
If you want Amazon EC2 to choose an IPv6 address from the subnet, then select Auto-assign
Choose Custom to select and enter an IPv6 address from the subnet
8. Optionally, select Elastic Fabric Adapter and Enable to create an Elastic Fabric Adapter
9. Select one or more security groups
10. Optionally, select Add New Tag for each tag and enter a tag key and an optional tag value
11. Click on Create network interface
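The console steps above correspond to the EC2 CreateNetworkInterface API. As a sketch, the helper below (a hypothetical function, not part of any AWS SDK) assembles the keyword arguments you could pass to boto3's create_network_interface; the subnet, security group, and tag values are placeholders.

```python
def build_eni_request(subnet_id, description="", private_ip=None,
                      security_group_ids=(), tags=None):
    """Assemble kwargs for boto3 ec2_client.create_network_interface."""
    request = {"SubnetId": subnet_id}  # step 5: the chosen subnet
    if description:                    # step 4: optional description
        request["Description"] = description
    if private_ip:                     # step 6: "Custom"; omit for "Auto-assign"
        request["PrivateIpAddress"] = private_ip
    if security_group_ids:             # step 9: one or more security groups
        request["Groups"] = list(security_group_ids)
    if tags:                           # step 10: optional tags
        request["TagSpecifications"] = [{
            "ResourceType": "network-interface",
            "Tags": [{"Key": k, "Value": v} for k, v in tags.items()],
        }]
    return request

req = build_eni_request("subnet-0abc", description="app ENI",
                        security_group_ids=["sg-0123"], tags={"env": "dev"})
# ec2_client.create_network_interface(**req)  # requires boto3 and AWS credentials
```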
When you hibernate an instance, Amazon EC2 signals the operating system to perform hibernation
(suspend-to-disk). Hibernation saves the contents from the instance memory (RAM) to your Amazon Elastic
Block Store (Amazon EBS) root volume. Amazon EC2 persists the instance's EBS root volume and any
attached EBS data volumes.
The AWS Nitro System is the underlying platform for AWS's current generation of EC2 instances. By offloading virtualization functions to dedicated hardware and software, it enables AWS to innovate faster, reduce costs for customers, and deliver added benefits such as increased security and new instance types.
Amazon EC2 instances support multithreading, which enables multiple threads to run concurrently on a
single CPU core. Each thread is represented as a virtual CPU (vCPU) on the instance. An instance has a
default number of CPU cores, which varies according to instance type. For example, an m5.xlarge instance
type has two CPU cores and two threads per core by default—four vCPUs in total.
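The vCPU arithmetic above can be expressed directly:

```python
def vcpu_count(cpu_cores, threads_per_core):
    """Total vCPUs = CPU cores x threads per core."""
    return cpu_cores * threads_per_core

# m5.xlarge defaults from the text: 2 cores, 2 threads per core
assert vcpu_count(2, 2) == 4
# Disabling multithreading (1 thread per core) halves the vCPU count
assert vcpu_count(2, 1) == 2
```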
Capacity Reservations enable you to reserve capacity for your Amazon EC2 instances in a specific
Availability Zone for any duration. This gives you the flexibility to selectively add capacity reservations and
still get the Regional RI discounts for that usage.
You can choose from several volume types to balance price and performance. You can achieve single-digit-millisecond latency for high-performance database workloads such as SAP HANA, or gigabytes-per-second throughput for large, sequential workloads such as Hadoop. You can change volume types, tune performance, or increase volume size without disrupting your critical applications, so you have cost-effective storage when you need it.
Designed for mission-critical systems, EBS volumes are replicated within an Availability Zone (AZ) and
can easily scale to petabytes of data. Also, you can use EBS Snapshots with automated lifecycle policies to
back up your volumes in Amazon S3, while ensuring geographic protection of your data and business
continuity.
Features of EBS:
Scalability: EBS volume sizes and features can be scaled as per the needs of the system.
Backup: Users can create snapshots of EBS volumes that act as backups.
Encryption: Encryption can be a basic requirement for storage, for example due to government or regulatory compliance. EBS offers an AWS-managed encryption feature.
Charges: AWS charges for the storage you provision, not the storage you use. For example, if you store 1 GB of data on a 5 GB volume, you are still charged for the full 5 GB. EBS charges also vary from region to region.
The data in an EBS volume persists across instance reboots and stops. It also survives instance termination unless the volume's Delete on termination flag is set, which is the default for root volumes.
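The billing rule above can be sketched as a small calculation; the per-GiB rate here is a made-up placeholder, since actual prices vary by region and volume type.

```python
def ebs_monthly_cost(provisioned_gib, used_gib, rate_per_gib=0.10):
    """EBS bills for provisioned capacity, not the data actually stored.

    rate_per_gib is an illustrative placeholder price, not a real AWS rate.
    """
    del used_gib  # how much data you stored does not affect the bill
    return provisioned_gib * rate_per_gib

# Using 1 GiB of a 5 GiB volume still bills all 5 GiB of provisioned capacity
cost = ebs_monthly_cost(provisioned_gib=5, used_gib=1)
```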
2.33 Lab EBS -To create an empty EBS volume using the console
1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
2. In the navigation pane, choose Volumes.
3. Choose Create volume.
4. For Volume type, choose the type of volume to create.
5. For Size, enter the size of the volume, in GiB. For more information, see Constraints on the size and
configuration of an EBS volume.
6. (io1, io2, and gp3 only) For IOPS, enter the maximum number of input/output operations per second
(IOPS) that the volume should provide.
7. (gp3 only) For Throughput, enter the throughput that the volume should provide, in MiB/s.
8. For Availability Zone, choose the Availability Zone in which to create the volume. A volume can be
attached only to an instance that is in the same Availability Zone.
9. For Snapshot ID, keep the default value (Don't create volume from a snapshot).
10. (io1 and io2 only) To enable the volume for Amazon EBS Multi-Attach, select Enable Multi-Attach.
11. Set the encryption status for the volume.
If your account is enabled for encryption by default, then encryption is automatically enabled and you can't
disable it. You can choose the KMS key to use to encrypt the volume.
If your account is not enabled for encryption by default, encryption is optional. To encrypt the volume,
for Encryption, choose Encrypt this volume and then select the KMS key to use to encrypt the volume.
12. (Optional) To assign custom tags to the volume, in the Tags section, choose Add tag, and then enter a tag key and value pair.
13. Choose Create volume.
14. To use the volume, attach it to an instance.
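These console steps map onto the EC2 CreateVolume API. The helper below is a hypothetical sketch that assembles the keyword arguments for boto3's create_volume; the Availability Zone, sizes, and key ID are placeholders.

```python
def build_volume_request(az, volume_type="gp3", size_gib=100,
                         iops=None, throughput=None, encrypted=False,
                         kms_key_id=None):
    """Assemble kwargs for boto3 ec2_client.create_volume."""
    request = {"AvailabilityZone": az,    # step 8: AZ must match the instance
               "VolumeType": volume_type, # step 4
               "Size": size_gib}          # step 5, in GiB
    if iops is not None:                  # step 6: io1, io2, and gp3 only
        request["Iops"] = iops
    if throughput is not None:            # step 7: gp3 only, in MiB/s
        request["Throughput"] = throughput
    if encrypted:                         # step 11
        request["Encrypted"] = True
        if kms_key_id:
            request["KmsKeyId"] = kms_key_id
    return request

req = build_volume_request("us-east-1a", volume_type="gp3",
                           size_gib=200, iops=4000, throughput=250)
# ec2_client.create_volume(**req)  # requires boto3 and AWS credentials
```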
You can back up the data on your Amazon EBS volumes to Amazon S3 by taking point-in-time snapshots.
Snapshots are incremental backups, which means that only the blocks on the device that have changed after
your most recent snapshot are saved. This minimizes the time required to create the snapshot and saves on
storage costs by not duplicating data.
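A simplified model of incremental snapshots: the first snapshot stores every block, and each later snapshot stores only the blocks whose content changed since the previous one.

```python
def incremental_snapshot_sizes(snapshots):
    """Blocks stored per snapshot under a simplified incremental model.

    Each snapshot maps block id -> content hash. The first snapshot
    stores all blocks; later snapshots store only changed blocks.
    """
    stored = []
    previous = {}
    for snap in snapshots:
        changed = {b for b, h in snap.items() if previous.get(b) != h}
        stored.append(len(changed))
        previous = snap
    return stored

vol_v1 = {0: "a", 1: "b", 2: "c"}
vol_v2 = {0: "a", 1: "B", 2: "c"}   # only block 1 changed
sizes = incremental_snapshot_sizes([vol_v1, vol_v2])
assert sizes == [3, 1]  # full copy first, then just the one changed block
```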
One or more Amazon Elastic Block Store (Amazon EBS) snapshots, or, for instance-
store-backed AMIs, a template for the root volume of the instance (for example, an
operating system, an application server, and applications).
Launch permissions that control which AWS accounts can use the AMI to launch
instances.
A block device mapping that specifies the volumes to attach to the instance when
it's launched.
An instance store provides temporary block-level storage for your instance. This storage is located on
disks that are physically attached to the host computer. Instance store is ideal for temporary storage of
information that changes frequently, such as buffers, caches, scratch data, and other temporary content. It
can also be used to store temporary data that you replicate across a fleet of instances, such as a load-
balanced pool of web servers.
An instance store consists of one or more instance store volumes exposed as block devices. The size of an
instance store as well as the number of devices available varies by instance type and instance size.
Amazon EBS provides volume types that differ in performance characteristics and price. They fall into two categories:
o SSD-backed volumes
o HDD-backed volumes
SSD
o SSD stands for Solid-State Drive.
o SSD storage was introduced in June 2014.
o It serves as general-purpose storage.
o It supports up to 4,000 IOPS, which is quite high.
o SSD storage is high-performing but more expensive than HDD (Hard Disk
Drive) storage.
o SSD volume types are optimized for transactional workloads with frequent read/write operations
of small I/O size, where the key performance attribute is IOPS.
SSD is further classified into two types:
o General Purpose SSD
o Provisioned IOPS SSD
General Purpose SSD
o General Purpose SSD is also sometimes referred to as a GP2.
o It is a General purpose SSD volume that balances both price and performance.
o It provides a baseline of 3 IOPS per GiB, up to a maximum of 10,000 IOPS (reached at volumes of
3,334 GiB and above), with the ability to burst up to 3,000 IOPS for extended periods on smaller
volumes. If you need fewer than 10,000 IOPS, GP2 is generally preferable because it offers the best
balance of performance and price.
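The GP2 baseline rule quoted above (3 IOPS per GiB, capped at 10,000) can be written as a one-line formula. Note that gp2 also has a 100 IOPS floor, and AWS later raised the cap to 16,000; the figures below use the numbers in this text.

```python
def gp2_baseline_iops(size_gib):
    """GP2 baseline: 3 IOPS per GiB, floored at 100, capped at 10,000.

    10,000 is the cap quoted in the text; AWS later raised it to 16,000.
    """
    return min(max(100, 3 * size_gib), 10_000)

assert gp2_baseline_iops(10) == 100        # small volumes get the 100 IOPS floor
assert gp2_baseline_iops(500) == 1_500     # 3 IOPS/GiB in the middle of the range
assert gp2_baseline_iops(3_334) == 10_000  # cap reached at 3,334 GiB and above
```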
Provisioned IOPS SSD
o It is also referred to as IO1.
o It is designed for high-performance, I/O-intensive applications such as large relational or NoSQL
databases.
o It is used when you require more than 10,000 IOPS.
HDD
o HDD stands for Hard Disk Drive.
o HDD-based storage was introduced in 2008.
o An HDD-based volume can range from 1 GB to 1 TB in size.
o It supports up to 100 IOPS, which is comparatively low.
Amazon Elastic Block Store (EBS) is a durable, block-based storage device that can be attached to your EC2 instances. AWS offers a feature called Multi-Attach, which allows a single EBS volume to be shared by up to 16 instances, providing higher availability for Linux workloads. Each instance to which the volume is attached has full read and write access to the volume.
AWS supports a default encryption process that you can configure by region within your account. You can
also choose between using AWS-created keys or a customer-managed key (CMK) to encrypt your volumes.
In both cases, Amazon encrypts the data with industry-standard AES-256 encryption and stores the
encryption key in the AWS Key Management Service (KMS).
2.41 EFS
Amazon Elastic File System (Amazon EFS) is a straightforward, serverless, set-and-forget file system. There is no setup or minimum fee: you pay only for the storage you use, for read and write access to data kept in the Infrequent Access storage class, and for any provisioned throughput. It is a scalable, cloud-based file system supporting Linux-based applications and workloads that can work in tandem with AWS cloud services and on-premises resources.
Depending on your needs, EFS offers two storage classes: Standard and Infrequent Access. Standard storage is meant for regularly accessed data, whereas Infrequent Access storage holds long-lived but less frequently used data at a lower cost.
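As a rough sketch of the pricing model described above, the estimator below charges for stored data per class plus per-GiB access charges on Infrequent Access. All rates are made-up placeholders; real EFS prices vary by region.

```python
def efs_monthly_cost(standard_gib, ia_gib, ia_gib_accessed,
                     standard_rate=0.30, ia_rate=0.025, ia_access_rate=0.01):
    """Toy EFS bill: storage per class plus Infrequent Access read/write fees.

    All per-GiB rates are illustrative placeholders, not real AWS prices.
    """
    return (standard_gib * standard_rate      # regularly accessed data
            + ia_gib * ia_rate                # cheaper long-lived data
            + ia_gib_accessed * ia_access_rate)  # fee per GiB read/written in IA

# 10 GiB Standard, 100 GiB IA, 20 GiB of IA reads/writes this month
cost = efs_monthly_cost(standard_gib=10, ia_gib=100, ia_gib_accessed=20)
```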
The file systems can scale automatically from gigabytes to petabytes of data without the requirement for
storage provisioning. An AWS EFS file system can be accessed by tens, hundreds, or even thousands of
compute instances at the same time, and Amazon EFS ensures consistent performance for each compute
instance.
It is built to be both durable and highly available. There is no minimum fee or setup cost with Amazon EFS; you pay only for what you use.