AWS

The document outlines key AWS services including IAM for identity and access management, VPC for virtual networking, EC2 for cloud computing, and various messaging services like SNS and SQS. It details features, use cases, best practices, and pricing models for these services, emphasizing scalability, security, and efficient resource management. Additionally, it introduces serverless computing with AWS Lambda and container management with ECS and EKS, highlighting the importance of infrastructure in cloud deployments.


IAM (IDENTITY AND ACCESS MANAGEMENT)

Features:

●​ Global service (not tied to a region)
●​ The root account is the first account created
●​ A new user has no permissions by default
●​ A user can access AWS via the console or programmatically via the API (access key ID + secret access key)
●​ Console access: login and password
●​ Programmatic access: access key ID + secret access key
●​ Programmatic access does not grant console access

Use cases:

●​ Management of:
●​ Users
●​ Groups
●​ Roles
●​ Policies
●​ Password policies
●​ Identity providers (web federation)
●​ Access via programming or via the console

Best practices:

●​ Delete your root access keys
●​ Enable MFA
●​ Create IAM users
●​ Use groups to assign permissions
●​ Apply an IAM password policy
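IAM permissions are expressed as JSON policy documents. The sketch below builds a minimal, hypothetical policy granting read-only access to one S3 bucket; the bucket name and the exact pair of actions are illustrative, not taken from a real account.

```python
import json

# A minimal, hypothetical IAM policy document: read-only access to a
# single S3 bucket. The bucket ARN is a placeholder.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}

policy_json = json.dumps(policy, indent=2)
print(policy_json)
```

A policy like this would typically be attached to a group, so every user in the group inherits the permissions (the "use groups to assign permissions" best practice above).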

VPC (VIRTUAL PRIVATE CLOUD)

Creating a VPC involves:
subnets
the route table
the network access control list
the internet gateway
the availability zones

- A virtual data center per region
- Hosts AWS resources (EC2, RDS)
- Full control over your network configuration
- Subnets, route tables, network access control lists, security groups
- A subnet is tied to a single Availability Zone
- Created by default (when you create a VPC): a route table, a network ACL, and a security group
- Not created by default: subnets and the internet gateway
- Subnets reach the internet through the Internet Gateway (IGW)
- AWS reserves 5 IP addresses per subnet (the first four addresses and the last one)
- A public subnet is reachable from the internet
- A private subnet is not reachable from the internet
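The 5 reserved addresses per subnet can be checked with a short sketch using Python's standard `ipaddress` module; the CIDR blocks are arbitrary examples.

```python
import ipaddress

# AWS reserves 5 addresses in every subnet: the network address, the
# VPC router, the DNS server, one reserved for future use, and the
# broadcast address.
AWS_RESERVED = 5

def usable_hosts(cidr: str) -> int:
    """Number of addresses you can actually assign in an AWS subnet."""
    subnet = ipaddress.ip_network(cidr)
    return subnet.num_addresses - AWS_RESERVED

print(usable_hosts("10.0.1.0/24"))  # 256 - 5 = 251
print(usable_hosts("10.0.0.0/28"))  # 16 - 5 = 11
```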

EC2 (ELASTIC COMPUTE CLOUD)


Provisioned in minutes

Resizable

Pay-as-you-go

On-Demand, Reserved, Spot, and Dedicated Host instances

Free choice of geographic location

Windows or Linux operating systems

Instance types for every use case

Use cases:

Web / app servers

Databases

Gaming

Email

Files and documents

CPU/GPU-intensive computing

etc.

INSTANCES TYPES:

1- GENERAL PURPOSE INSTANCE

The General Purpose Instance balances computing, memory, and networking resources.

It fits many purposes, such as:


●​ Application servers
●​ Gaming servers
●​ Backend servers for companies
●​ Small and medium databases

The General Purpose Instances are best when there is a balance between the resources.

2- Compute Optimized Instances


The Compute Optimized Instances are best when there is a need for high compute.

This type is also good for application servers, gaming servers, and web applications.

The main difference is that this type is ideal for high-performance and compute-intensive
needs.

3 - Memory Optimized Instances


This type can deliver large dataset workloads fast.

Memory is a temporary storage area: data is preloaded from storage into memory, giving the CPU direct access to it while the program runs.

The Memory Optimized Instances are best when huge amounts of data need to be
preloaded before running the app.

4 - Accelerated Computing Instances


This type uses hardware accelerators.

The accelerators boost the data processing.

The Accelerated Computing Instances are best for graphics applications and streaming.

5 - Storage Optimized Instances


This type is best when you have large datasets on local storage.

Some examples:

●​ Large file systems


●​ Data warehouses
●​ Online transaction systems

The Storage Optimized Instances are designed to deliver a high number of input/output operations per second (IOPS).

ELASTIC COMPUTE CLOUD PRICING

On-Demand Instances: pay for the compute time you use.

Savings Plans: a 1- or 3-year commitment to a consistent amount of usage, in exchange for a discounted price.

Usage beyond the commitment is charged at the normal On-Demand rate.

Reserved Instances:

the term options are 1 or 3 years:

The 3-year term gives the highest discount.

Spot Instances

Spot is best suited for workloads with flexible start and end times.

Spot can give up to a 90 percent discount compared to On-Demand.

The low price is possible because AWS sells spare capacity, optimizing its utilization while giving you better prices.

Dedicated Hosts

Dedicated Hosts are physical servers fully dedicated to you.

You can use your existing VM software licenses.

The Dedicated Host is the most expensive model.
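The cost difference between the models can be made concrete with a quick calculation. The hourly rate and the discount percentages below are illustrative assumptions, not real AWS prices; Spot's "up to 90%" is taken as the best case.

```python
# Hypothetical hourly rate and discounts (illustrative, not real AWS prices)
ON_DEMAND_RATE = 0.10      # $/hour
RESERVED_DISCOUNT = 0.40   # assumed 40% off for a term commitment
SPOT_DISCOUNT = 0.90       # best case: up to 90% off

def monthly_cost(hours: float, rate: float) -> float:
    return round(hours * rate, 2)

hours = 730  # roughly one month, running 24/7
on_demand = monthly_cost(hours, ON_DEMAND_RATE)
reserved = monthly_cost(hours, ON_DEMAND_RATE * (1 - RESERVED_DISCOUNT))
spot = monthly_cost(hours, ON_DEMAND_RATE * (1 - SPOT_DISCOUNT))

print(on_demand, reserved, spot)  # 73.0 43.8 7.3
```

The ordering is the point: for an always-on workload the commitment models beat On-Demand, and Spot is cheapest of all, provided the workload tolerates interruption.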

SCALING

It’s about only using the resources that you need.

To take advantage of it, however, your architecture must be designed to be scalable.

Autoscaling

AWS EC2 Auto Scaling allows you to add or remove EC2 instances automatically.

It automates the capacity to the demand.

There are two approaches:

●​ Dynamic scaling: responds to changing demand
●​ Predictive scaling: schedules the number of instances based on predicted demand
●​ Dynamic and Predictive scaling can be combined to scale faster

In the end, this kind of scalable architecture is not achievable with fixed on-premises capacity.

EC2 Auto Scaling can be added as a buffer on top of your instances.

It can add new instances to the application when necessary and terminate them when no
longer needed.

You can set up a group of instances.

Here you can set a minimum capacity of instances that will always be running. The rest will
operate when necessary.

You can set the desired number of AWS EC2 instances in the scaling group.

However, the desired capacity defaults to your minimum capacity if not specified.

The last configuration is Maximum capacity.

Here you set the maximum capacity of instances to be used.


The Auto Scaling groups allow you to have a dynamic environment.

You set the minimum capacity, the desired number, and the maximum capacity.

The group will operate within the config and give you a predictable and cost-effective
architecture.
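The min/desired/max relationship described above can be sketched in a few lines. This is a hypothetical simplification: a real Auto Scaling group applies scaling policies and cooldowns, but the clamping of the desired count between the configured bounds works as shown.

```python
def desired_capacity(load_based_target: int, minimum: int, maximum: int) -> int:
    """Clamp the target instance count to the Auto Scaling group's bounds."""
    return max(minimum, min(load_based_target, maximum))

# Example group: minimum 2 instances always running, never more than 10
print(desired_capacity(1, 2, 10))   # 2  (never below the minimum)
print(desired_capacity(7, 2, 10))   # 7  (demand within bounds)
print(desired_capacity(25, 2, 10))  # 10 (capped at the maximum)
```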

ELASTIC LOAD BALANCING

This service distributes incoming application traffic across resources.

The Load Balancer is a single point of contact for incoming web traffic.

The single point of contact means that the traffic hits the Load Balancer first, spreading out
the load between the resources.

The Load Balancer accepts requests and directs them to the appropriate instances.

It ensures that one resource won't get overloaded, and that the traffic is spread out.

AWS EC2 and Elastic Load Balancing are two different services that work well together.

AWS ELB is built to support increased traffic without increasing the hourly cost.

AWS ELB scales automatically.

Load Allocation
The service allocates incoming traffic between the available resources.

The principle is the same with both high and low demand periods.

It will allocate between what is available at any time.

ELB can also be used to scale back-end instances, letting the front end
communicate with multiple back ends through a single endpoint.
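The simplest allocation strategy a load balancer can use is round-robin: each request goes to the next available resource in turn. The instance IDs below are made up; this is a sketch of the idea, not how ELB is implemented internally.

```python
from itertools import cycle

# Hypothetical target instances behind the load balancer
instances = ["i-app-1", "i-app-2", "i-app-3"]
targets = cycle(instances)

def route(request_id: int) -> str:
    """Send each incoming request to the next instance in turn."""
    return next(targets)

assignments = [route(r) for r in range(6)]
print(assignments)
# ['i-app-1', 'i-app-2', 'i-app-3', 'i-app-1', 'i-app-2', 'i-app-3']
```

Note how each instance ends up with the same share of requests, which is exactly the "no single resource gets overloaded" property described above.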

AWS Cloud Messaging and Queuing


Monolithic Applications and Microservices
Applications are made of multiple components.

The components communicate with each other.


The communication can transmit data, fulfill requests, and keep the application running.

Monolithic Application
An architecture with tightly coupled components can be called a monolithic application.

Components can be databases, servers, interfaces, and much more.

A monolithic application can be vulnerable if one of the components fails.

In the worst case, this can cause the whole service to go down.

Instead, your application can be designed with an approach called microservices.

Microservices can help to keep your service available if one component fails.

Microservices
Microservices can help to maintain the service if one component fails.

The services can be maintained because they communicate with each other and the
components are not tightly coupled.

AWS has two services that can make this integration:

●​ AWS Simple Notification Service (AWS SNS)


●​ AWS Simple Queue Service (AWS SQS)

You will learn more about them in the next chapter.

The difference between the Monolithic and Microservices approach is tightly coupled vs.
loosely coupled.

AWS SNS - Simple Notification Service


What is AWS SNS?
SNS is a cloud service for the mass delivery of messages.

It is a fully managed publish-subscribe messaging and mobile communication service.

It can be event-driven, with automated services responding to triggers.


Distributed systems and micro services can be decoupled with messaging between them
through AWS SNS.

Application-to-person messaging to users is possible with SMS, mobile push, and email.

Message Endpoints
AWS SNS can publish messages to many different endpoints:

●​ HTTP and HTTPS


●​ Email and Email-JSON
●​ AWS SQS
●​ Applications
●​ AWS Lambda
●​ SMS (depending on region)

The Difference between SQS and SNS


SNS is a notification system, which pushes messages to its subscribers.

SQS is a queuing system, and the receivers have to pull the messages to be processed and
deleted from the queue.

SNS and SQS can work well together.

AWS SQS - Simple Queue Service


SQS is a kind of buffer for messages.

Each message can be up to 256 KB.

Synchronous = tightly coupled system: if a component fails, the entire system fails.

Asynchronous processing = loosely coupled system:

the sender doesn’t wait for the result to continue, because messages are placed in a queue.
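The pull-vs-push distinction between SQS and SNS can be sketched with plain Python data structures. This is a conceptual model only; the subscriber names ("email", "sqs") are illustrative.

```python
from collections import deque

# --- SQS-style queue: consumers PULL, each message is handled once ---
queue = deque()
queue.append("order-created")   # producer enqueues and moves on
message = queue.popleft()       # a worker pulls it later, at its own pace
print(message)                  # order-created

# --- SNS-style topic: the topic PUSHES a copy to every subscriber ---
received = []
subscribers = [
    lambda m: received.append(("email", m)),
    lambda m: received.append(("sqs", m)),
]

def publish(message: str) -> None:
    for deliver in subscribers:  # fan-out: every subscriber gets a copy
        deliver(message)

publish("order-created")
print(received)  # [('email', 'order-created'), ('sqs', 'order-created')]
```

The queue decouples producer and consumer in time (the sender never waits); the topic fans one message out to many endpoints at once.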

AWS Cloud Serverless


Serverless is a service where you do not have to think about servers.

With serverless, you only have to think about code.

The cloud provider handles all infrastructure behind it.


AWS LAMBDA

AWS Lambda is a serverless compute service.

This service lets you run code without needing to think about servers.

It lets you focus on what's most important, such as making a great application.

You only pay for the compute time that you use.

Pay for what you use means that you only pay when your code is running.
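A Python Lambda function is just a handler: a function that receives an event and a context and returns a response. The event fields below are hypothetical; a real event's shape depends on what triggered the function.

```python
# The shape of a Python Lambda handler. Event contents are illustrative.
def lambda_handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Locally it can be invoked like any function (context is unused here)
response = lambda_handler({"name": "AWS"}, None)
print(response)  # {'statusCode': 200, 'body': 'Hello, AWS!'}
```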

AWS Cloud Containers


Containers are popular for deploying and managing applications in the cloud.

Containers let you package code in a single object.

The container isolates the code and removes the dependencies to other components.

It runs in isolation.

Containers are an essential concept in microservice architectures.

Containerized Approach
Having the application in a container makes debugging easier.
It makes it easier because the application is inside of an isolated container.

The container remains consistent regardless of deployment.

AWS ECS (ELASTIC CONTAINER SERVICE)

ECS helps you run containerized applications.

ECS supports Docker.

What is a container? An isolated environment that lets you package an application with its dependencies.

Problem: managing multiple containers.

ECS is organized around three concepts: Cluster, Task, and Service.

A Cluster groups the underlying infrastructure.

A Task runs one or more containers.

A Service keeps the desired number of Tasks running (and can register them with an ELB, for example).

With the EC2 launch type, containers run inside EC2 instances.

AWS EKS (ELASTIC KUBERNETES SERVICE)

Kubernetes is a container orchestrator.

AWS Elastic Kubernetes Service is also called AWS EKS

EKS is a managed service that lets you run Kubernetes on AWS.

It is built for scaling with Kubernetes.

What is Kubernetes?

Kubernetes is open-source software.

It helps you deploy and manage containerized applications.

Kubernetes has a large community.

AWS continuously keeps the AWS EKS service updated to the latest
Kubernetes features.

How AWS EKS works


AWS EKS is used to run and scale Kubernetes applications in the cloud and
on-premises.

Deploy applications in different ways:

●​ Cloud Deployment
●​ Deployment on your infrastructure
●​ Deployment with your tools
AWS Cloud Fargate
Serverless Compute for Containers - AWS Fargate
It helps to deploy and manage applications.

Fargate manages the infrastructure for you.

You do not have to think about the provision of servers and infrastructure
management when using Fargate.

Introduction to AWS Infrastructure


AWS has global infrastructure with Data Centers all over the world.

Deploy apps across the globe or to a specific location.

Build and deploy where you want.

AWS REGIONS

There are different reasons to choose a specific region.

Those reasons could be:

●​ Data regulations (compliance)


●​ Customer proximity
●​ Service availability
●​ Pricing

Some countries do not allow sensitive data to be processed and stored abroad.

Your company might require that all company data reside in the country.

Customer Proximity
Selecting a region near your customers can help to make the services faster.
AWS Cloud Availability Zones
Availability Zone is a single Data Center or a group of Data Centers in a
region.

The Availability Zones in a Region are located many miles apart from
each other.

Having them apart reduces the risk of them all going down if a disaster
happens in the region.

Simultaneously, have the Data Center(s) close enough to have low latency.

Finally, a Region is a group of Availability Zones.

It is very important to use at least 2 Availability Zones (to scale horizontally and stay available).

AWS Cloud Edge Locations


Edge Location is the Data Center used to deliver content fast to your users.

It is the site that is nearest to your users.

The AWS Edge Locations use a service called CloudFront.

CloudFront is used to store cached copies of your content.


Resulting in fast delivery of your content.

The content is delivered faster because the data is no longer requested from
the primary location.

It is delivered from the Edge Location (CloudFront), the location nearest to the
user.

The cache saves subsets of the data, making it available.

Once someone requests the data, it is copied and stored at the Edge
Location.

When the next person requests the same data, it will be delivered faster from
the nearest Edge Location.

AWS Cloud Resource Provisioning


AWS Management Console
The AWS Management Console is a web-based interface.

It is used to access and manage AWS services.

The AWS Management Console has a mobile application.

The mobile view is best used for monitoring and accessing billing information.

AWS Command Line Interface


AWS Command Line Interface is also called "AWS CLI".

CLI saves you time when making API requests.

It allows you to control multiple AWS services with one tool.

CLI allows you to automate actions on services with scripts.

It is available on Windows, macOS, and Linux.


Software Development Kits
Software development kits are also called "SDKs".

SDKs is another option to access and manage AWS services.

It eases the use of AWS services through an API.

The API is fitted to the platform or programming language that you use.

SDKs can be used on existing applications or new ones built on AWS.

AWS SDKs support programming languages such as C++, Java, .NET, and
more.

In AWS, everything is an API call 😀


AWS Cloud Provision Services
AWS offers two managed tools: AWS Elastic Beanstalk and AWS
CloudFormation.

AWS Elastic Beanstalk


With AWS Elastic Beanstalk, you provide code and configuration settings.

Elastic Beanstalk deploys the resources necessary to perform the following tasks:

●​ Adjust capacity
●​ Load balancing
●​ Automatic scaling
●​ Application health monitoring

AWS CloudFormation
With AWS CloudFormation, you can treat your infrastructure as code.
Using this service you can build an environment by writing lines of code,
instead of using the AWS Management Console to provision resources individually.

Introduction to AWS Networking

AWS Virtual Private Cloud


AWS Virtual Private Cloud is also called AWS VPC.

VPC is a service that lets you provision your AWS resources in an isolated
network.

The boundaries created around the resources let AWS restrict the network
traffic.

In addition, it allows you to include the sections of the AWS Cloud that you
want in the isolated network.

Resources can be organized in subnets.

A subnet is a section in the VPC that can contain specific resources.

Internet Gateway
Public traffic can be allowed to your VPC.

The traffic is allowed by an Internet Gateway.


Virtual Private Gateway
A Virtual Private Gateway is used to access private resources in the VPC.

It has extra layers of protection.

The Virtual Private Gateway encrypts the internet traffic, keeping it protected.

It is a component that allows the encrypted traffic to enter the VPC.

AWS Direct Connect


AWS Direct Connect lets you make a dedicated private connection between
the Data Center and a VPC.

A dedicated connection means you have the link to yourself.

The link is not shared with others.

Only you and your data can travel through the connection.
Subnets and Network Access Control Lists
Subnets control access to the gateways.

Subnets
A Subnet is a section of a VPC.

The Subnet allows you to group resources.

The groupings can have different security or operations needs.

You can have both public and private Subnets.

Network Traffic in a VPC


Requested data is sent as a Packet.

A Packet is a package of data sent over a network or the internet.

It enters the VPC through an Internet Gateway.

Before entering a Subnet, the Packet is checked for permissions.

Checking permissions such as:

1.​ Who sent the Packet?
2.​ How will the Packet communicate with the resources in the Subnet?

Network Access Control Lists
Network Access Control Lists are called ACLs.

ACL is a firewall that controls the traffic, both inbound and outbound.

It controls the traffic at the subnet level.

The ACL checks and controls the Packets.

If the Packet is on the approved list, it will pass through.

However, if they are not on the list, they will be denied access.

Stateless Packet Filtering


Everything that comes in is verified again when it goes out.

The ACLs do Stateless Packet filtering.

They have no memory and will forget the request once checked.

Their job is to check the Packets that go in and out.

It uses the set rules to approve or deny access.

Security Groups
A Security Group is a firewall that controls inbound and outbound traffic.

This feature is specific for an AWS EC2 instance.

The default config denies all inbound traffic and allows all outbound.

You have to add new rules to change this config.


Stateful Packet Filtering
Traffic that was allowed in is automatically allowed back out.
Security Groups do stateful Packet filtering.

They remember the actions that they have done with Packets in the past.

Configuration
ACLs and Security groups can be configured.

Configuration means adding custom rules for the traffic.

In short: an ACL applies to an entire subnet, while a Security Group applies to each
instance.
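The stateless/stateful difference can be sketched with two toy filters. This is a hypothetical simplification: one allow rule (HTTPS, port 443), an ACL-style check with no memory, and a Security-Group-style check that remembers connections so replies pass automatically.

```python
ALLOWED_PORTS = {443}  # one inbound rule: allow HTTPS

# Stateless (ACL-style): every packet is checked against the rules in
# both directions; nothing is remembered between packets.
def acl_allows(port: int) -> bool:
    return port in ALLOWED_PORTS

# Stateful (Security-Group-style): inbound packets are checked, and
# the connection is remembered so the reply is allowed automatically.
connections: set = set()

def sg_allows_inbound(port: int) -> bool:
    if port in ALLOWED_PORTS:
        connections.add(port)  # remember the connection
        return True
    return False

def sg_allows_reply(port: int) -> bool:
    return port in connections  # replies to known connections pass

print(acl_allows(443), acl_allows(22))  # True False
sg_allows_inbound(443)
print(sg_allows_reply(443))             # True
```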

Domain Name System


Domain Name System is also called DNS.
DNS is the service that lets someone access your website from their browser.

The DNS is like a phone book.

It connects the IP address to the domain name.

AWS Route 53
Route 53 is a DNS web service.

It routes end users to internet apps hosted in AWS.

Route 53 connects users and their requests to AWS resources and external
resources.
An example request flow:

The company has 3 EC2 Instances in an Auto Scaling group.

The group is attached to an Application Load Balancer.

1.​ User requests data from the website application.
2.​ Route 53 uses DNS resolution to identify the IP address.
3.​ The user request is sent to the nearest Edge Location through
CloudFront.
4.​ CloudFront connects to the Application Load Balancer.
5.​ The Load Balancer sends the packet to the EC2 instance.
6.​ The data is sent back to the user.

AWS Storage and Databases

AWS Instance Stores


Instance Store is a storage volume that acts as a physical hard drive.

It provides temporary storage for Amazon EC2 instance.

The data in an instance store persists during the lifetime of its instance.
If an instance reboots, data in the instance store will persist.

When the instance hibernates or terminates, you lose any data in the instance store.

If an instance starts from a stopped state, it might start on another host where the used
instance store does not exist.

It is recommended to avoid storing valuable data in the instance store.

Instance Stores are good for temporary files, and data that can be easily recreated.

AWS EBS - Elastic Block Store


AWS EBS is also called AWS Elastic Block Store.

EBS is a service that provides storage volumes.

You can use provided storage volumes in Amazon EC2 instances.

EBS volumes are used for data that needs to persist.

It is important to back up the data with AWS EBS snapshots.

After creating an EBS volume, you can attach it to an AWS EC2 instance.

If the EC2 instance stops or is terminated, all the data on the attached EBS
volume remains.

To attach an EBS volume, the volume and the EC2 instance must be in the same Availability Zone.

What are AWS EBS Snapshots?


An EBS snapshot is an incremental data backup.

The first backup of a volume backs up all the data.

Every subsequent backup copies only the blocks of data that have changed since the last
snapshot.

It saves on storage costs by not duplicating data.

Only the data unique to that snapshot is removed when you delete a
snapshot.

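The incremental idea behind snapshots can be sketched by modeling a volume as a dict of block-to-contents. This is a hypothetical simplification; real snapshots work at the storage layer, but the "first backup copies everything, later backups copy only changed blocks" logic is the same.

```python
from typing import Optional

def snapshot(volume: dict, previous: Optional[dict]) -> dict:
    """First snapshot copies every block; later ones only changed blocks."""
    if previous is None:
        return dict(volume)  # first backup: copy all the data
    return {
        block: data
        for block, data in volume.items()
        if previous.get(block) != data  # only blocks that changed
    }

volume = {"b1": "AAAA", "b2": "BBBB", "b3": "CCCC"}
snap1 = snapshot(volume, None)   # full backup: 3 blocks

volume["b2"] = "BBB2"            # one block changes
snap2 = snapshot(volume, snap1)  # incremental: 1 block

print(len(snap1), len(snap2))  # 3 1
```

Only storing the changed block is what saves on storage costs, as described above.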

AWS S3 - Simple Storage Service


AWS S3 is also called AWS Simple Storage Service.

S3 is a storage service.

It allows uploading any type of file.

In S3 you can set access permissions to a file.


It is object-level storage.

It offers unlimited space in the storage.

The maximum file size is 5 TB.

What is Object-Level Storage?


Object-level storage contains objects.

Each object is made of:

●​ Data - any type of file


●​ Metadata - information about what the data is
●​ Key - unique identifier
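The three parts of an S3 object can be sketched as a plain data structure. The key, contents, and metadata fields below are illustrative, not a real bucket layout.

```python
# An S3 object sketched as a Python dict: data, metadata, and a key.
s3_object = {
    "key": "reports/2024/summary.pdf",  # unique identifier in the bucket
    "data": b"%PDF-1.7 ...",            # any type of file, up to 5 TB
    "metadata": {                       # information about the data
        "content-type": "application/pdf",
    },
}

print(s3_object["key"])  # reports/2024/summary.pdf
```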

AWS S3 Storage Classes


There are many AWS S3 storage classes.

They differ in data availability, how frequently the data is retrieved, and cost.

S3 Standard

S3 Standard is ideal for data that is accessed often.

Provides high availability for stored objects.

It stores data in at least three Availability Zones.

It is the most expensive class.


S3 Standard-Infrequent Access

S3 Standard-Infrequent Access is also called S3 Standard-IA

S3 Standard-IA is ideal for data that is infrequently accessed.

It has the same level of data availability as S3 Standard.

It stores data in at least three Availability Zones.

Lower storage price, but a higher data retrieval price than S3 Standard.

S3 One Zone-IA (S3 One Zone-Infrequent Access)

It stores data in one Availability Zone.

It is cheaper than S3 Standard and S3 Standard-IA classes.

S3 Intelligent-Tiering

S3 Intelligent-Tiering charges a small monthly fee per object for monitoring and automation.

It is recommended for data with unknown or frequently changing access.

It moves an object to the S3 Standard-IA class if it is not accessed for 30 days.

It moves an object back to S3 Standard when it is accessed in the S3 Standard-IA or
S3 One Zone-IA class.

S3 Glacier

S3 Glacier is recommended for archiving data.

It can retrieve objects within a few minutes.

S3 Glacier is a cheaper and slower class.

S3 Glacier Deep Archive

S3 Glacier Deep Archive has the lowest cost.

Like S3 Glacier, it is best for archives.


Compared to S3 Glacier, S3 Glacier Deep Archive retrieves objects within
12 hours instead of minutes.

Comparison of AWS EBS and AWS S3

AWS EBS:
- Data is stored as blocks
- A volume can be up to 16 tebibytes (about 17.6 terabytes)
- Faster performance than AWS S3
- Data can be modified in place

AWS S3:
- Data is stored as objects
- An individual object can be up to 5,000 gigabytes (5 terabytes)
- Data does not suffer loss, degradation, or corruption for a very long time (99.999999999% durability)
- Data can not be modified, unless reuploaded

Cloud File System - AWS EFS (Elastic File System)


EFS is a file system.

Data is accessed via file paths.

Compared to AWS EBS, AWS EFS saves the data across multiple Availability Zones.

Scaling EFS does not disrupt applications.

It is a regional resource, so EC2 instances anywhere in the Region can attach the
file system.

Cloud Relational Database - Amazon RDS

RDS is also called AWS Relational Database Service.

It supports:
. Amazon Aurora
. PostgreSQL
. MySQL
. Oracle Database
. Microsoft SQL Server

AWS RDS database engines offer data encryption while data is stored, sent,
and received.

What is Amazon Aurora?


Amazon Aurora is a relational database ideal for large organizations and
enterprises.

It offers high availability of data.

It is excellent for managing large amounts of data.

It is five times faster than a MySQL database.

It is three times faster than a PostgreSQL database.

Amazon Aurora creates six copies of data across three Availability
Zones and a data backup on Amazon S3.

It ensures the data is available at all times.

Non-relational Cloud Database - AWS DynamoDB


AWS DynamoDB is a non-relational, NoSQL database.

It is a serverless database.

DynamoDB is a high performance service.


As AWS DynamoDB is a serverless database, you do not have to manage
servers or an operational system to use it.

Big Data Analytics - AWS Redshift (data warehouse as a service)

AWS Redshift is a big data analytics service.

It can gather information from many sources.

It helps you find relationships across your data.

Database Migration Service - AWS DMS


It helps you move data between databases.

There is a source database and a target database.

A source database is the database from which data is migrated; it is sometimes an
on-premises database.
A target database is the database where data is migrated to.

Your source database will remain operational during the migration process.

AWS DMS is simple to use, reduces application downtime, supports a wide
range of databases, has low cost, and is reliable.

When to Use AWS DMS


You can use AWS DMS to:

●​ Enable testing applications against production data, or other
environmental data, without affecting it
●​ Combine multiple databases into a single one
●​ Send your data to other data sources

Monitoring: observing systems, collecting metrics, and then using the data
to make decisions.

CLOUDWATCH

(Monitoring and management of resources and applications)

. Used to monitor performance
. CloudWatch can monitor servers (virtual or physical) as well as applications
. CloudWatch supports custom metrics that you have developed
. By default, CloudWatch monitors EC2 instances at 5-minute intervals
. You can enable detailed monitoring at 1-minute intervals
. You can create alarms that trigger other services, such as notifications
. CloudWatch has a free tier available to everyone
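The core of a CloudWatch alarm is a threshold check over recent metric datapoints. The sketch below is a hypothetical simplification (the threshold, metric, and "3 consecutive breaching periods" rule are assumed values), but it shows the kind of evaluation that would fire a notification.

```python
THRESHOLD = 80.0      # e.g. CPU utilization percent
PERIODS_TO_ALARM = 3  # consecutive breaching datapoints required

def alarm_state(datapoints: list) -> str:
    """Return ALARM if the last N datapoints all breach the threshold."""
    recent = datapoints[-PERIODS_TO_ALARM:]
    if len(recent) == PERIODS_TO_ALARM and all(d > THRESHOLD for d in recent):
        return "ALARM"  # this is where a notification would be triggered
    return "OK"

print(alarm_state([70.0, 85.0, 90.0, 95.0]))  # ALARM
print(alarm_state([70.0, 85.0, 60.0, 95.0]))  # OK
```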

Cloud Action Logging Service - AWS CloudTrail


CloudTrail logs actions on your account as a trail.

Example of data logs:

●​ Identity
●​ Time
●​ IP address
●​ and much more.

CloudTrail gives a complete history of user activity and API calls on your resources.

Cloud Inspection Service - AWS TrustedAdvisor


TrustedAdvisor checks your account, evaluates, and recommends.

Its recommendations help you follow AWS best practices.

AWS Pricing and Support

AWS Free tier


The AWS Free Tier lets you try services for free for the specified period.
It has three different offerings:

●​ Always Free
●​ 12 Months Free
●​ Trials

Always Free
The offers in Always Free do not expire.

Always Free is available to everyone.

You need to have an account to get started.

12 Months Free
This offer is free for the first 12 months.

It starts to count when you sign up with an account.

With 12 Months Free, you get more data to play with.

Trials
Trials are short-term offers.

It is for specific services.

The trial period starts when you activate the service.

The period length differs from service to service.

Examples: 30 days, 90 days, or 150 free hours of consumption.

AWS Pricing Models


AWS has many different pay-as-you-go pricing options.

Pay for What You Use


Pay only for the resources that you use.

No need for long-term contracts.

No need for licensing agreements.

Pay Less When You Reserve


Requires a commitment for future consumption.

You pay whether you use the services or not.

Reserving resources gives you a discount.

This option is for those who know that they need the resources in the future.

Pay Less with Volume-Based Discounts When You Use More

The service gets cheaper the more you use.

Pricing per unit gets lower when you cross a threshold.

More use, pay less.
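Tiered, volume-based pricing works like a tax bracket: each tier's rate applies only to the usage that falls inside it. The tier sizes and per-GB rates below are illustrative assumptions, not real AWS prices.

```python
# Hypothetical volume tiers: per-GB price drops past each threshold
TIERS = [
    (50, 0.023),           # first 50 GB at $0.023/GB
    (450, 0.022),          # next 450 GB at $0.022/GB
    (float("inf"), 0.021), # everything beyond at $0.021/GB
]

def storage_cost(gb: float) -> float:
    """Sum the cost of usage across the tiers it spans."""
    cost, remaining = 0.0, gb
    for tier_size, rate in TIERS:
        used = min(remaining, tier_size)
        cost += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return round(cost, 2)

print(storage_cost(30))   # 0.69  (all in the first tier)
print(storage_cost(600))  # 13.15 (50*0.023 + 450*0.022 + 100*0.021)
```

Crossing a threshold lowers the unit price only for the usage above it, which is why the average cost per GB keeps falling as usage grows.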

AWS Pricing Calculator


The Pricing Calculator lets you create a cost estimate for the use of AWS resources.

Organize the estimates into groups.

Use the groups to simulate how your business is organized. For example, by cost centers.

The estimates can be shared with others by links.

AWS Billing Dashboard

The Billing Dashboard lets you pay your AWS bill, monitor usage, and analyze costs.

●​ Compare billing periods


●​ View spending, for example: daily, monthly, or year-to-date
●​ Find out how much use you have left on the Free Tier
●​ Savings Plans
●​ Create cost and usage reports

Consolidated Billing Cloud Services


AWS lets you manage several accounts from a central location.

The central location allows you to have one bill across all the accounts.

Many accounts, one single bill.

Merging bills is the core of Consolidated Billing.

If you want to have multiple accounts, use AWS Organizations.

The default maximum number of accounts per organization is 4.

Contact AWS Support to increase the limit.

AWS Cloud Support Plans


AWS offers four different support plans.

●​ Basic
●​ Developer
●​ Business
●​ Enterprise

Basic Support
Basic is the default support option.

Basic support is free.

It grants access to whitepapers, documentation, and support communities.

There are limitations for what you can contact AWS for.

Developer Support
Access to everything in Basic plus:
●​ Best practice guidance
●​ Client-side diagnostic tools
●​ Building-block architecture support on how to use AWS services together

Business Support
Everything in Basic and Developer plus:

●​ Use-case guidance
●​ All TrustedAdvisor checks
●​ Limited support for third-party software

Enterprise Support
Everything in Basic, Developer, and Business plus:

●​ Application architecture guidance


●​ A short project to assess and guide your company on architecture and scale
●​ Technical account manager

Technical Account Manager (TAM)


The Enterprise Plan includes access to a Technical Account Manager.

The TAM is the primary point of contact.

The TAM helps you with design, architecture, and how to grow with AWS.

The TAM has access to expertise in all AWS services.

AWS Marketplace
AWS Marketplace lets you list and sell software.

Marketplace is a digital catalog where vendors can list and sell their software.

Here you can explore, test, and purchase software that runs on AWS.

It gives detailed product information on listings such as:

●​ Pricing
●​ Support options
●​ Customer reviews

Cloud Migration and Innovation

AWS CAF - Cloud Adoption Framework


AWS CAF is a framework that walks you through migration of applications to the cloud.

It provides suggestions that assist you throughout the migration process.

CAF has six focus areas (also called perspectives):

1.​ Business
2.​ People
3.​ Governance
4.​ Platform
5.​ Security
6.​ Operations

Business Perspective
The Business Perspective is about justifying the investment.

The Business Perspective ensures that the investment aligns with business and IT objectives.

Roles in the Business Perspectives are:

●​ Budget owners
●​ Business managers
●​ Finance managers
●​ Strategy stakeholders

People Perspective
The People Perspective evaluates skills, requirements, and roles in your organization.

It is about making sure that you have the right skills, competence, and processes in place to
move to the cloud.
The evaluation process helps you implement necessary changes or improvements.

Roles in the People Perspectives are:

●​ People managers
●​ Human resources (HR)
●​ Staffing

Governance Perspective
The Governance Perspective is about minimizing risk while simultaneously maximizing business value.

It helps you understand the gaps in your skills and processes.

This gives you an understanding of how to ensure the right processes and staff skills are in place.

Roles in the Governance Perspectives are:

●​ Chief Information Officer (CIO)


●​ Enterprise architects
●​ Business analysts
●​ Program managers
●​ Portfolio managers

Platform Perspective
The Platform Perspective helps you deploy new cloud solutions.

It also helps you migrate on-premises workload to the cloud.

Roles in the Platform Perspectives are:

●​ Chief Technology Officer (CTO)


●​ Solutions architects
●​ IT managers

Security Perspective
The Security Perspective ensures that the organization's security objectives are met.

The Security Perspective includes objectives for:


●​ Agility
●​ Visibility
●​ Auditability
●​ Control

Roles in the Security Perspectives are:

●​ Chief Information Security Officer (CISO)


●​ IT security analysts
●​ IT security managers

Operations Perspective
The Operations Perspective is about running the business.

Ensuring that the business operations meet the expectations.

It covers year-to-year, quarter-to-quarter, and day-to-day business operations.

It helps define the necessary changes needed for successful cloud adoption.

Roles in the Operations Perspectives are:

●​ IT operations managers
●​ IT support managers

Cloud Migration Strategies (6R’s)


Migration Strategies are plans that help you move your applications into the cloud.

There are six common strategies (the 6 R's) you can implement for your application migration:

1.​ Rehosting
2.​ Replatforming
3.​ Refactoring
4.​ Repurchasing
5.​ Retaining
6.​ Retiring

Rehosting
Rehosting is also called lift-and-shift.

It is a process of moving applications without making any changes to them.


Replatforming
Replatforming is also called lift, tinker, and shift.

It is a process of moving applications with cloud optimizations.

Refactoring
Refactoring is also called re-architecting.

It is a process of changing the application foundation/core and/or environment.

It helps with application scaling, performance, and further development.

Repurchasing
Repurchasing is the process of replacing your current application with a different product.

Typically, it moves your application from a traditional license to a software-as-a-service (SaaS) model.

Retaining
Retaining involves keeping crucial business applications in your source environment for now.

It could include applications that require refactoring before they can be migrated.

Retiring
It is a process of removing unnecessary applications.

AWS Snow Family


AWS Snow Family is a group of devices that transport data in and out of AWS.

AWS Snow Family devices are physical devices.

They can transfer up to exabytes of data.

One exabyte is 1 000 000 000 000 megabytes.
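The unit arithmetic above can be checked directly, using decimal (SI) storage units where 1 MB = 10^6 bytes:

```python
# Decimal (SI) storage units: 1 MB = 10**6 bytes, 1 PB = 10**15, 1 EB = 10**18.
MEGABYTE = 10**6
PETABYTE = 10**15
EXABYTE = 10**18

print(EXABYTE // MEGABYTE)   # → 1000000000000 (one exabyte in megabytes)
print(PETABYTE // MEGABYTE)  # → 1000000000 (one petabyte in megabytes)
```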


The AWS Snow Family includes three device types:

●​ AWS Snowcone
●​ AWS Snowball
●​ AWS Snowmobile

AWS Snowcone
AWS Snowcone is a small, secure device that transfers data.

It offers 8 TB of storage space, 4 GB of memory, and 2 CPUs.

AWS Snowball
AWS Snowball offers 2 device types, described below.

Snowball Edge Storage Optimized devices:

●​ Great for large-scale data migrations
●​ Have 80 TB of HDD storage space for object storage
●​ Have 1 TB of SSD storage for block volumes

Snowball Edge Compute Optimized devices:

●​ Great for services that require a large amount of computing resources
●​ Have 42 TB of HDD storage for object storage, and 7.68 TB of NVMe SSD storage space for AWS EBS block volumes
●​ Work with 208 GiB of memory and 52 vCPUs
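Shipping a device can beat the network for large migrations. Here is a rough back-of-the-envelope estimate; the 1 Gbit/s link speed is an assumption for illustration, and real sustained throughput is usually lower:

```python
# Rough estimate: days needed to push 80 TB over a 1 Gbit/s link,
# versus shipping a Snowball device. The link speed is a hypothetical
# assumption for illustration only.

data_tb = 80                      # Snowball Edge Storage Optimized capacity
data_bits = data_tb * 10**12 * 8  # terabytes -> bits (decimal units)
link_bps = 10**9                  # assumed 1 Gbit/s uplink

seconds = data_bits / link_bps
days = seconds / 86400
print(f"{days:.1f} days over the network")  # → 7.4 days over the network
```

At roughly a week of saturated bandwidth just for 80 TB, physically shipping a device is often the faster and cheaper option.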

AWS Snowmobile
AWS Snowmobile moves large amounts of data to AWS.

It can transfer up to 100 petabytes of data.

One petabyte is 1 000 000 000 megabytes.


Innovate with AWS Cloud
Innovation with AWS can improve your business in the cloud.

AWS Services help you:

●​ Evaluate your current business state


●​ Determine the state you want to be at
●​ Deal with the problems you need to solve

Some options for solving your problems that AWS offers you are:

●​ Machine Learning (ML)


●​ Artificial Intelligence (AI)
●​ Serverless applications

AWS Well-Architected Framework


AWS Well-Architected Framework is a tool that uses best practices to find improvements for
your applications in the cloud.

It helps you in five areas:

1.​ Operational excellence


2.​ Security
3.​ Reliability
4.​ Performance efficiency
5.​ Cost optimization

Those areas are also called the five pillars of AWS Well-Architected Framework.

Operational Excellence Pillar


The operational excellence pillar is the ability to run and monitor systems.

It improves supporting processes and procedures.

It includes:

●​ Making small and reversible changes


●​ Prediction of system disruptions
●​ Performing operations as code
●​ Annotating documentation
Security Pillar
The security pillar consists of protecting systems and data.

Well-Architected Framework applies security at all levels.

It protects both stored and in-transit data.

When possible, best security practices are automatically applied.

Reliability Pillar
The reliability pillar is the ability to minimize disruptions of the system.

It obtains computing resources as needed.

It entails boosting system availability.

It automatically recovers the system from disruptions.

Performance Efficiency Pillar


The performance efficiency pillar is the ability to use computing resources efficiently.

It meets demand efficiently as requirements change.

Cost Optimization Pillar


The cost optimization pillar helps you run your cloud services at the lowest price points.

Cost optimization performs operations such as:

●​ Analyzing your costs


●​ Using managed services
●​ Making sure you only pay for what you use

What Are the Benefits of the AWS Cloud?


There are six crucial benefits of the AWS Cloud:

●​ Trade upfront expense for variable expense


●​ Benefit from massive economies of scale
●​ Stop guessing capacity
●​ Increase speed and agility
●​ Stop spending money running and maintaining data centers
●​ Go global in minutes
Trade Upfront Expense for Variable Expense
AWS Cloud makes sure you pay only for what you use.

It helps you avoid unnecessary investments in infrastructure like servers or data centers.

Benefit from Massive Economies of Scale


By utilizing cloud computing, you might receive a cheaper variable cost.

Because of the high number of clients in the cloud, you can achieve lower pay-as-you-go
rates.

Stop Guessing Capacity


AWS Cloud helps you lower your capacity cost.

You only pay for what you use.

Increase Speed and Agility


AWS Cloud makes application deployment fast and easy.

Stop Spending Money Running and Maintaining Data Centers
AWS Cloud gives you more time to focus on your customers and applications.

It does so by managing servers for you.

Go Global in Minutes
AWS Cloud allows you to deploy apps quickly and with little latency.
S3

Key points

●​ Cross-cutting (global) service
●​ Object storage service
●​ Accessible from other AWS services
●​ Buckets, with universal (globally unique) naming
●​ Storage classes (Standard, Infrequent Access, Glacier), with or without a defined object lifetime
●​ Versioning, lifecycle rules, expiration
●​ Permissions; encryption with S3-managed or KMS keys, in transit and at rest
●​ 99.999999999% (eleven 9s) durability
●​ Metrics can be sent to CloudWatch

Use cases
●​ Data storage
●​ Backup storage (EC2 snapshots) => disaster recovery
●​ Static website hosting
●​ Low-cost archiving (Glacier)
●​ Storage area for "Big Data" workloads
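A lifecycle rule transitions objects between storage classes as they age. Here is a minimal sketch of that idea; the 30-day and 90-day thresholds are illustrative assumptions, not AWS defaults:

```python
# Sketch of an S3-style lifecycle policy: pick a storage class by object age.
# The 30/90-day transition thresholds are hypothetical, for illustration.

def storage_class(age_days):
    if age_days >= 90:
        return "GLACIER"       # low-cost archive
    if age_days >= 30:
        return "STANDARD_IA"   # Infrequent Access
    return "STANDARD"

for age in (5, 45, 200):
    print(age, "->", storage_class(age))
# → 5 -> STANDARD
# → 45 -> STANDARD_IA
# → 200 -> GLACIER
```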

ROUTE 53
Global Domain Name System (DNS) service
Highly available service
Health monitoring of AWS and non-AWS resources

A directory service: it uses port 53 (TCP/UDP), hence the name Route 53.

Features
Resolver
Traffic flow
Latency-based routing
Geo DNS
Private DNS for Amazon VPC
DNS failover
Endpoint health checks and monitoring with CloudWatch
Integrated with Amazon ELB

SUMMARY
DNS management
Zone hosting
Covers the main DNS record types:
​ SOA (Start of Authority: zone-level metadata) - NS (states which name server is authoritative for a given domain) - A (maps a hostname to an IP address) - CNAME (lets a machine be reached through one or more alias hostnames) - MX (mail routing) - PTR (reverse DNS)
Conditional routing
Simple, weighted, latency-based, failover, and geolocation routing
Health checks (except for simple routing)
Linked to other services: CloudWatch, ELB
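Weighted routing can be sketched as picking an endpoint in proportion to its weight. The record names and weights below are hypothetical:

```python
import random

# Sketch of weighted DNS routing: each record receives traffic in
# proportion to its weight. Names and weights are hypothetical.

records = [("eu-west-1.example.com", 3), ("us-east-1.example.com", 1)]

def pick(records, rng=random):
    """Return one record name, chosen with probability weight/total."""
    names = [name for name, _ in records]
    weights = [w for _, w in records]
    return rng.choices(names, weights=weights, k=1)[0]

hits = [pick(records) for _ in range(10000)]
share = hits.count("eu-west-1.example.com") / len(hits)
print(f"eu-west-1 share: {share:.0%}")  # close to 75% (weight 3 of 4)
```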

RDS
Highly scalable
High performance
Easy to administer
Available and durable
Secure and compliant (licensing)

SNS
Publish-subscribe model
Controlled via the API or the console
Multiple message delivery types
Low cost
Regional service
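The publish-subscribe model can be sketched in a few lines: subscribers register with a topic, and a single publish fans the message out to all of them. The topic name and subscriber labels are hypothetical:

```python
# Minimal publish-subscribe sketch: one publish, every subscriber notified.
# Topic and subscriber names are hypothetical, for illustration.

class Topic:
    def __init__(self, name):
        self.name = name
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, message):
        # Fan the message out to every registered subscriber.
        for callback in self.subscribers:
            callback(message)

received = []
topic = Topic("order-events")
topic.subscribe(lambda m: received.append(("email", m)))
topic.subscribe(lambda m: received.append(("sqs", m)))
topic.publish("order #42 shipped")
print(received)
# → [('email', 'order #42 shipped'), ('sqs', 'order #42 shipped')]
```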

ELB (integrated with EC2)


●​ Application Load Balancer
●​ Network Load Balancer
●​ Classic Load Balancer
Linked to the Route 53, CloudWatch, and CloudTrail services
Monitors the instances in its group:
​ - in service
​ - out of service
Use cases:
●​ High availability
●​ Instance health checks
●​ Security features
●​ SSL offloading
●​ Sticky sessions
●​ IPv6 support
●​ Load balancing at layers 4 and 7 (network or application)
●​ Operational monitoring
●​ Logging => CloudTrail (audits)
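Round-robin balancing over only the in-service instances can be sketched as follows (the instance names and health states are hypothetical):

```python
import itertools

# Sketch of a load balancer: route requests round-robin, but only to
# instances whose health check currently passes. Names are hypothetical.

instances = {"i-paris-01": True, "i-paris-02": False, "i-paris-03": True}

def healthy(instances):
    """Keep only the instances that are in service."""
    return [name for name, in_service in instances.items() if in_service]

def route(instances, n_requests):
    """Distribute n_requests round-robin over the healthy instances."""
    pool = itertools.cycle(healthy(instances))
    return [next(pool) for _ in range(n_requests)]

print(route(instances, 4))
# → ['i-paris-01', 'i-paris-03', 'i-paris-01', 'i-paris-03']
```

Note that i-paris-02, being out of service, receives no traffic until its health check passes again.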

Auto Scaling

Adjusts capacity to match demand.
Automate as much as possible.
"An automated system is a set of components that performs actions without user intervention."
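A scaling policy can be sketched as a simple threshold rule: add instances under high load, remove them under low load, within min/max bounds. The 70%/30% CPU thresholds and the 1..6 instance bounds are hypothetical:

```python
# Sketch of a threshold-based scaling rule. The 70%/30% CPU thresholds
# and the 1..6 instance bounds are illustrative assumptions.

MIN_INSTANCES, MAX_INSTANCES = 1, 6

def desired_capacity(current, avg_cpu):
    """Return the new instance count for the given average CPU load."""
    if avg_cpu > 70:
        current += 1   # scale out
    elif avg_cpu < 30:
        current -= 1   # scale in
    return max(MIN_INSTANCES, min(MAX_INSTANCES, current))

print(desired_capacity(2, 85))  # → 3 (scale out)
print(desired_capacity(2, 20))  # → 1 (scale in)
print(desired_capacity(6, 95))  # → 6 (capped at max)
```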

Bootstrap EC2
A user-data script run at first boot (Amazon Linux with SysVinit):

#!/bin/bash
# Update installed packages
yum update -y
# Install the Apache web server
yum install httpd -y
# Start Apache at boot, then start it now
chkconfig httpd on
service httpd start
# Publish a minimal home page
cd /var/www/html
echo "<html><h1>instance PARIS 01</h1></html>" > index.html
