
Microsoft Azure Administrator (Associate) and

AWS Cloud Practitioner (Foundational)


Certification
AN INDUSTRIAL INTERNSHIP TRAINING REPORT

Submitted by

Enakshi Kapoor (21BEC1194)

ECE1902 – INDUSTRIAL INTERNSHIP

in partial fulfillment for the award of the degree of


BACHELOR OF TECHNOLOGY
in
ELECTRONICS AND COMMUNICATION ENGINEERING

April 2024
School of Electronics Engineering
DECLARATION BY THE CANDIDATE

I hereby declare that the Industrial Internship Report entitled “Microsoft Azure
Administrator (Associate) and AWS Cloud Practitioner (Foundational)
Certification” submitted by me to VIT University, Chennai in partial fulfillment
of the requirement for the award of the degree of Bachelor of Technology in
Electronics and Communication Engineering is a record of bonafide industrial
training undertaken by me. I further declare that the work reported in this report
has not been submitted and will not be submitted, either in part or in full, for the
award of any other degree or diploma in this institute or any other institute or
university.

Location: Uttar Pradesh
Date: 04.11.2023

Signature of the Candidate
School of Electronics Engineering
BONAFIDE CERTIFICATE

This is to certify that the Industrial Internship Report entitled “Microsoft Azure
Administrator (Associate) and AWS Cloud Practitioner (Foundational)
Certification” submitted by Enakshi Kapoor (21BEC1194) to VIT, Chennai in
partial fulfillment of the requirement for the award of the degree of Bachelor of
Technology in Electronics and Communication Engineering is a record of
bonafide industrial internship undertaken by her. It fulfills the requirements as
per the regulations of this institute and, in my opinion, meets the necessary
standards for submission. The contents of this report have not been submitted and
will not be submitted, either in part or in full, for the award of any other degree or
diploma in this institute or any other institute or university.

Signature of the Examiner Signature of the Examiner


Date: Date:

Head of the Department (B.Tech ECE)


ACKNOWLEDGEMENT

I sincerely thank the individuals who made this internship experience both
educational and enriching. I want to express my heartfelt thanks to Satya Nadella,
CEO of Microsoft, and Maureen Lonergan, Director of Training and Certification
at AWS, for providing me with this incredible opportunity to pursue the Microsoft
Azure Administrator (Associate) and AWS Cloud Practitioner (Foundational)
certifications. I would also like to extend my thanks to Mr. Jayakumar Sadhasivam,
AWS Academy Accredited Educator from the Software Systems Department, for
his guidance and support, and for creating a conducive learning environment
throughout my internship. His mentorship and insights have been instrumental in
shaping my understanding of the cloud and its operations.
I am immensely grateful to Priyadaarshini M and Padmavathy T V, Associate
Senior Professors, whose dedication to their work and patience in answering my
queries have broadened my perspective and helped me develop practical skills.
Furthermore, I would like to express my deep appreciation to my parents, my
sister, and my friends, whose trust in my abilities and belief in my potential have
given me the platform to apply my theoretical knowledge in a real-world setting. I
am grateful for their unwavering support and encouragement throughout this
journey.
Lastly, I would like to acknowledge that as the sole intern, I received undivided
attention and guidance from the entire team at Pearson VUE. Their openness,
collaboration, and willingness to share their experiences have contributed
significantly to my personal and professional growth.

Enakshi Kapoor
(21BEC1194)
TABLE OF CONTENTS

Chapter Title Pg. No.


Declaration by Candidate 2
Certificate 3
Bonafide Certificate 4
Acknowledgment 5
Table of Contents 6
List of Symbols and Abbreviations 7
Abstract 6
1. Introduction
1.1 Certificates
2. Microsoft Azure Administrator Associate 14
2.1 Configuring Microsoft Entra ID
2.2 Configuring user and group accounts
2.3 Configuring subscriptions
2.4 Configuring Azure Policy
2.5 Configuring role-based access control
2.6 Configuring Azure resources with tools
2.7 Using Azure Resource Manager
2.8 Configure resources with Azure Resources Manager templates
2.9 Configuring virtual networks
2.10 Configuring network security groups
2.11 Configuring Azure DNS
2.12 Configuring Azure Virtual Network Peering
2.13 Configuring network routing and endpoints
2.14 Configuring Azure Load Balancer
2.15 Configuring Azure Application Gateway
2.16 Configuring storage accounts
2.17 Configuring Azure Blob Storage
2.18 Configuring Azure Storage security
2.19 Configuring Azure Files and Azure File Sync
2.20 Configuring virtual machines
2.21 Configuring virtual machine availability
2.22 Configuring Azure App Service plans
2.23 Configuring Azure App Service
2.24 Configuring Azure Container Instances
2.25 Configuring files and folder backups
2.26 Configuring virtual machine backups
2.27 Configuring Azure Monitor
2.28 Improve incident response with alerting on Azure
2.29 Configure Log Analytics
2.30 Configure Network Watcher
3. AWS Cloud Foundations 57
3.1 Cloud Concepts Overview
3.2 Computing EC2
3.3 Relational database
3.4 AWS Cloud Security
3.5 Monitoring and analytics
3.6 Pricing and support
3.7 Migration and innovation
3.8 AWS RDS, DynamoDB, Redshift and Aurora
3.9 Cloud Architecture
3.10 Cloud journey
4. Participations 79
4.1 Microsoft Azure AI Challenge
4.2 Azure data fundamental challenge
4.3 Power Platform Challenge
4.4 Cloud foundations certificate and badge

5. Conclusion 83
LIST OF SYMBOLS AND ABBREVIATIONS
AWS Amazon Web Services
Amazon ES Amazon Elasticsearch Service
AMI Amazon Machine Image
API Application Programming Interface
AI Artificial Intelligence
ACL Access Control List
ALB Application Load Balancer
ARN Amazon Resource Name
AZ Availability Zone
ACM AWS Certificate Manager
ASG Auto Scaling Group
AES Advanced Encryption Standard
ADFS Active Directory Federation Services
AVX Advanced Vector Extensions

CDN Content Delivery Network


CRC Cyclic Redundancy Check
CLI Command Line Interface
CIDR Classless Inter-Domain Routing
CORS Cross Origin Resource Sharing
CRR Cross Region Replication
CI/CD Continuous Integration/Continuous Deployment
DMS Database Migration Service
DNS Domain Name System
DDoS Distributed Denial of Service
DoS Denial of Service
DaaS Desktop as-a-Service

EC2 Elastic Compute Cloud


ECS Elastic Container Service
ECR Elastic Container Registry
EFS Elastic File System
EI Elastic Inference
ENA Elastic Network Adapter
EKS Elastic Kubernetes Service
EBS Elastic Block Store
EMR Elastic MapReduce
ELB Elastic Load Balancing
EFA Elastic Fabric Adapter
EIP Elastic IP
EDA Electronic Design Automation
ENI Elastic Network Interface
ECU EC2 Compute Unit

FIFO First In First Out


FaaS Function as-a-Service
HPC High-Performance Computing
HVM Hardware Virtual Machine
HTTP Hypertext Transfer Protocol
HTTPS HTTP Secure
HDK Hardware Development Kit

IAM Identity & Access Management


IoT Internet of Things
S3 IA S3 Infrequent Access
iSCSI Internet Small Computer System Interface
IOPS Input/Output Operations Per Second
IGW Internet Gateway
ICMP Internet Control Message Protocol
IP Internet Protocol
IPSec Internet Protocol Security
IaaS Infrastructure-as-a-Service

JSON JavaScript Object Notation

KMS Key Management Service


KVM Kernel-based Virtual Machine

LB Load Balancer
LCU Load Balancer Capacity Unit

MFA Multi-Factor Authentication


MSTSC Microsoft Terminal Service Client
MPP Massively Parallel Processing
MITM Man-in-the-Middle Attack
MPLS Multi Protocol Label Switching

OLTP Online Transaction Processing


OLAP Online Analytical Processing
OCI Open Container Initiative

PCI DSS Payment Card Industry Data Security Standard


PVM Para Virtual Machine
PV ParaVirtual
PaaS Platform as a Service

RDS Relational Database Service


RRS Reduced Redundancy Storage
RI Reserved Instance
RAM Random-access Memory
RIE Runtime Interface Emulator
S3 Simple Storage Service
S3 RTC S3 Replication Time Control
SRR Same Region Replication
SMS Server Migration Service
SWF Simple Workflow Service
SES Simple Email Service
SNS Simple Notification Service
SQS Simple Queue Service
SLA Service Level Agreement
SSL Secure Sockets Layer
SOA Start of Authority
SDK Software Development Kit
SSH Secure Shell
SAR Serverless Application Repository
SRD Scalable Reliable Datagrams
SSO Single Sign-On
SAML Security Assertion Markup Language
SaaS Software-as-a-Service
SECaaS Security-as-a-Service
SCP Service Control Policies
SCA Storage Class Analysis
STS Security Token Service
SNI Server Name Indication

TTL Time To Live


TLS Transport Layer Security
TPM Trusted Platform Module
TME Total Memory Encryption
TPM Technical Program Manager
TPS Transactions Per Second
TCP Transmission Control Protocol

VPC Virtual Private Cloud


VM Virtual Machine
VTL Virtual Tape Library
VPN Virtual Private Network
VLAN Virtual Local Area Network
VDI Virtual Desktop Infrastructure
VPG Virtual Private Gateway
ABSTRACT
The Microsoft Azure Administrator Associate Certification is a comprehensive validation of
proficiency in designing, implementing, and managing solutions on the Microsoft Azure
platform. This abstract provides a detailed overview of the certification's scope, objectives, and
significance in the realm of cloud computing.
The certification encompasses a broad range of topics essential for Azure professionals, including
Azure architecture, cloud services, storage solutions, security, and monitoring. Candidates are
required to demonstrate their understanding of Azure's core services and their ability to design
and implement solutions that meet specific business requirements.
Key objectives of the certification include proficiency in Azure infrastructure deployment and
management, virtual networking, identity and access management, data storage and management,
and security and compliance. Additionally, candidates are evaluated on their ability to monitor,
troubleshoot, and optimize Azure solutions to ensure high availability, reliability, and
performance.
By obtaining the Microsoft Azure Administrator Associate Certification, individuals showcase
their expertise in leveraging Azure services to build scalable, secure, and cost-effective cloud
solutions. This certification not only validates technical skills but also demonstrates an
understanding of best practices and industry standards in cloud computing.
In today's rapidly evolving digital landscape, where organizations are increasingly adopting cloud
technologies, the Microsoft Azure Administrator Associate Certification serves as a valuable asset
for professionals seeking to advance their careers in cloud computing. It signifies a commitment
to continuous learning and staying abreast of the latest advancements in Azure technology,
making certified individuals highly sought after by employers across industries.
The AWS Cloud Practitioner Certification is a foundational credential that validates essential
knowledge of the Amazon Web Services (AWS) Cloud and its basic architectural principles. This
abstract provides a comprehensive overview of the certification's objectives, coverage areas, and
significance in the context of cloud computing.
Additionally, the certification emphasizes the importance of implementing security best practices
and adhering to AWS compliance standards.
By earning the AWS Cloud Practitioner Certification, individuals establish a strong foundation in
cloud computing and AWS fundamentals, positioning themselves for success in roles such as
cloud solutions architect, cloud consultant, or cloud administrator. This certification serves as a
gateway to further AWS certifications and validates the skills and knowledge necessary to excel
in the dynamic and rapidly expanding field of cloud technology.
CHAPTER 1
INTRODUCTION

1.1 INTRODUCTION
The world of cloud computing has revolutionized the way organizations deploy, manage, and
scale their IT infrastructure. With the increasing adoption of cloud services, the demand for
skilled professionals who can design, implement, and manage cloud solutions has never been
higher. In response to this growing need, certifications such as the Microsoft Azure
Administrator Associate Certification and the AWS Cloud Practitioner Certification have
emerged as valuable credentials that validate expertise in leading cloud platforms.
This introduction provides an overview of the significance of these certifications in the context of
cloud computing and outlines the objectives of this document. It sets the stage for a detailed
exploration of the Microsoft Azure Administrator Associate Certification and the AWS Cloud
Practitioner Certification, highlighting their key components, benefits, and relevance in today's
technology landscape.
As organizations transition their infrastructure to the cloud, they seek professionals who possess
the knowledge and skills to leverage cloud platforms effectively. The Microsoft Azure
Administrator Associate Certification and the AWS Cloud Practitioner Certification serve as
benchmarks of proficiency in Microsoft Azure and Amazon Web Services (AWS), two of the
leading cloud providers in the industry. These certifications validate not only technical expertise
but also the ability to design scalable, secure, and cost-effective cloud solutions that meet the
needs of modern businesses.
Throughout this document, we will delve into the core components of each certification,
including the topics covered, exam objectives, and recommended study resources. We will
explore the benefits of obtaining these certifications, both for individuals seeking to advance their
careers in cloud computing and for organizations looking to build a skilled workforce capable of
harnessing the full potential of cloud technology.
Whether you are an aspiring cloud professional looking to kickstart your career or an experienced
IT professional seeking to validate your expertise in cloud computing, this document aims to
provide valuable insights into the Microsoft Azure Administrator Associate Certification and the
AWS Cloud Practitioner Certification. By understanding the requirements and benefits of these
certifications, you can embark on a journey to enhance your skills, advance your career, and
become a trusted cloud expert in today's dynamic and competitive job market.
1.1 CERTIFICATES:

Microsoft Azure Administrator Associate

AWS Cloud Practitioner


CHAPTER 2
MICROSOFT AZURE ADMINISTRATOR (Associate)

CONFIGURING MICROSOFT ENTRA ID


1. Introduction: Microsoft Entra ID is a cloud-based directory and identity
management service that supports user access to various resources and applications.
2. Features:
 Microsoft Entra ID provides secure single sign-on (SSO) to web apps on the cloud and
to on-premises apps. Users can sign in with the same set of credentials to access all their
apps.
 Microsoft Entra ID works with iOS, macOS, Android, and Windows devices, and offers
a common experience across the devices. Users can launch apps from a personalized
web-based access panel, mobile app, Microsoft 365, or custom company portals by
using their existing work credentials.
 On-premises directories can extend to the cloud through Microsoft Entra ID, helping
you manage a consistent set of users, groups, passwords, and devices across
environments.
Implementation of Microsoft Entra ID

3. Concepts:
 An identity is an object that can be authenticated. The identity can be a user with a
username and password. Identities can also be applications or other servers that
require authentication by using secret keys or certificates. Microsoft Entra ID is the
underlying product that provides the identity service.
 An account is an identity that has data associated with it. To have an account, you
must first have a valid identity. You can't have an account without an identity.
 An Azure tenant is a single dedicated and trusted instance of Microsoft Entra ID. Each
tenant (also called a directory) represents a single organization. When your organization
signs up for a Microsoft cloud service subscription, a new tenant is automatically
created. Because each tenant is a dedicated and trusted instance of Microsoft Entra ID,
you can create multiple tenants or instances.

4. Implement Microsoft Entra self-service password reset: Enabling SSPR for selected
groups is useful for testing or proof of concept before applying the feature to a larger
group. When you're ready to deploy SSPR to all user accounts in your Microsoft Entra
tenant, you can change the setting.

5. Things to consider when using SSPR:


 Your system must require at least one authentication method to reset a password.
 A strong SSPR plan offers multiple authentication methods for the user. Options
include email notification, text message, or a security code sent to the user's mobile or
office phone. You can also offer the user a set of security questions.
 You can require security questions to be registered for the users in your Microsoft
Entra tenant.
 You can configure how many correctly answered security questions are required for
a successful password reset.
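The SSPR considerations above can be sketched as a simple validation check. This is an illustrative model only, not an Azure SDK call; the names `SsprPolicy` and `validate` are invented for this example.

```python
# Illustrative sketch: models the SSPR planning decisions above as a local
# validation check. SsprPolicy and validate are hypothetical names, not part
# of any Azure SDK or API.
from dataclasses import dataclass, field

@dataclass
class SsprPolicy:
    # Authentication methods offered to users (email, SMS, security questions, ...)
    methods: list = field(default_factory=list)
    # How many methods a user must complete for a successful reset
    methods_required: int = 1

def validate(policy: SsprPolicy) -> list:
    """Return a list of problems with the SSPR plan."""
    problems = []
    if not policy.methods:
        problems.append("SSPR requires at least one authentication method.")
    if policy.methods_required > len(policy.methods):
        problems.append("Cannot require more methods than are offered.")
    if len(policy.methods) < 2:
        problems.append("A strong plan offers multiple methods; consider adding one.")
    return problems

print(validate(SsprPolicy(methods=["email", "sms"], methods_required=2)))  # []
```

A policy offering only one method, or requiring more methods than it offers, would come back with the corresponding warnings.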
CONFIGURING USER AND GROUP ACCOUNTS
Every user who wants access to Azure resources needs an Azure user account. A user account has
all the information required to authenticate the user during the sign-in process. Microsoft Entra ID
supports three types of user accounts. The types indicate where the user is defined (in the cloud or
on-premises), and whether the user is internal or external to your Microsoft Entra organization.
1. Things to consider when choosing user accounts
 Consider where users are defined. Determine where your users are defined. Are all
your users defined within your Microsoft Entra organization, or are some users defined in
external Microsoft Entra instances? Do you have users who are external to your
organization? It's common for businesses to support two or more account types in their
infrastructure.
 Consider support for external contributors. Allow external contributors to access
Azure resources in your organization by supporting the Guest user account type. When
the external contributor no longer requires access, you can remove the user account and
their access privileges.
 Consider a combination of user accounts. Implement the user account types that
enable your organization to satisfy their business requirements. Support directory-
synchronized identity user accounts for users defined in Windows Server Active
Directory. Support cloud identities for users defined in your internal Microsoft Entra
structure or for users defined in an external Microsoft Entra instance.
The administrator can Create a user within the organization or Invite a guest user to provide
access to organization resources:
A new user account must have a display name and an associated user account name. An example
display name is Aran Sawyer-Miller and the associated user account name could
be [email protected].
Information and settings that describe a user are stored in the user account profile.
The profile can have other settings like a user's job title, and their contact email address.
A user with Global administrator or User administrator privileges can preset profile data in user
accounts, such as the main phone number for the company.
Non-admin users can set some of their own profile data, but they can't change their display
name or account name.
2. Things to consider when managing cloud identity accounts
There are several points to consider about managing user accounts. As you review this list,
consider how you can add cloud identity user accounts for your organization.
1. Consider user profile data: Allow users to set their profile information for their
accounts, as needed. User profile data, including the user's picture, job, and contact
information is optional. You can also supply certain profile settings for each user based
on your organization's requirements.
2. Consider restore options for deleted accounts: Include restore scenarios in your
account management plan. Restore operations for a deleted account are available up to 30
days after an account is removed. After 30 days, a deleted user account can't be restored.
3. Consider gathered account data: Collect sign-in and audit log information for user
accounts. Microsoft Entra ID lets you gather this data to help you analyze and
improve your infrastructure.
Microsoft Entra ID supports several bulk operations, including bulk create and delete for user
accounts. The most common approach for these operations is to use the Azure portal. Azure
PowerShell can be used for bulk upload of user accounts.
3. Things to know about bulk account operations

Let's examine some characteristics of bulk operations in the Azure portal, using the Bulk
create user option for new user accounts in Microsoft Entra ID as an example.
Only Global administrators or User administrators have privileges to create and delete user
accounts in the Azure portal.
To complete bulk create or delete operations, the admin fills out a comma-separated values (CSV)
template of the data for the user accounts.
Bulk operation templates can be downloaded from the Microsoft Entra admin center.
Bulk lists of user accounts can be downloaded.
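A bulk-create CSV like the one described above can be prepared programmatically. The column headers below approximate the downloadable template; in practice you should start from the template in the Microsoft Entra admin center, since the exact headers must match it. The sample user and password are invented for illustration.

```python
# Sketch of preparing a bulk-create CSV similar to the Azure portal template.
# Header names approximate the real template; always download the current
# template from the Microsoft Entra admin center before a real bulk operation.
import csv, io

headers = [
    "Name [displayName] Required",
    "User name [userPrincipalName] Required",
    "Initial password [passwordProfile] Required",
    "Block sign in (Yes/No) [accountEnabled] Required",
]
# Hypothetical user for illustration only.
users = [
    ("Aran Sawyer-Miller", "asawmill@contoso.com", "P@ssw0rd!123", "No"),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(headers)
writer.writerows(users)
print(buf.getvalue())
```

The resulting CSV is what an admin with Global administrator or User administrator privileges would upload through the Bulk create user option.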
4. Things to know about creating group accounts

Review the following characteristics of group accounts in Microsoft Entra ID:

Use security groups to set permissions for all group members at the same time, rather than adding
permissions to each member individually.
Add Microsoft 365 groups to enable group access for guest users outside your Microsoft Entra
organization.
Security groups can be implemented only by a Microsoft Entra administrator.
Normal users and Microsoft Entra admins can both use Microsoft 365 groups.
Consider the management tasks for a large university that's composed of several different schools
like Business, Engineering, and Medicine. The university has administrative offices, academic
buildings, social buildings, and student dormitories. For security purposes, each business office
has its own internal network for resources like servers, printers, and fax machines. Each academic
building is connected to the university network, so both instructors and students can access their
accounts. The network is also available to students and deans in the dormitories and social
buildings. Across the university, guest users require access to the internet via the university
network.

5. Things to know about administrative units

 Consider how a central admin role can use administrative units to support the
Engineering department in our scenario:
 Create a role that has administrative permissions for only Microsoft Entra users.
 Create an administrative unit for the Engineering department.
 Populate the administrative unit with only the Engineering department students, staff,
and resources.
 Add the Engineering department IT team to the role, along with its scope.
Things to consider when working with administrative units
Think about how you can implement administrative units in your organization. Here are some
considerations:

 Consider management tools. Review your options for managing AUs. You can use
the Azure portal, PowerShell cmdlets and scripts, or Microsoft Graph.
 Consider role requirements in the Azure portal. Plan your strategy for
administrative units according to role privileges. In the Azure portal, only the Global
Administrator or Privileged Role Administrator users can manage AUs.
 Consider scope of administrative units. Recognize that the scope of an administrative
unit applies only to management permissions. Members and admins of an administrative
unit can exercise their default user permissions to browse other users, groups, or
resources outside of their administrative unit.
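The scope rule above, that AU management permissions apply only inside the unit, can be sketched with a small model. The data structure and function names here are invented for illustration and do not reflect any Microsoft Graph API.

```python
# Hypothetical model of administrative-unit scope: management permissions
# apply only to members of the unit, while default user permissions
# (browsing other users and groups) are unaffected.
engineering_au = {"students": {"alice"}, "staff": {"prof_kumar"}}

def can_manage(admin_au: dict, user: str) -> bool:
    """An AU-scoped admin can manage only users inside their unit."""
    return any(user in members for members in admin_au.values())

assert can_manage(engineering_au, "alice")      # inside the Engineering AU
assert not can_manage(engineering_au, "bob")    # outside the unit: browse-only
```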
CONFIGURING SUBSCRIPTIONS
Types of Azure subscriptions would work for your organization, consider these scenarios:
Consider trying Azure for free. An Azure free subscription includes a monetary credit to spend
on any service for the first 30 days. You get free access to the most popular Azure products for 12
months, and access to more than 25 products that are always free. An Azure free subscription is
an excellent way for new users to get started.
To set up a free subscription, you need a phone number, a credit card, and a Microsoft account.
The credit card information is used for identity verification only. You aren't charged for any
services until you upgrade to a paid subscription.
Consider paying monthly for used services. A Pay-As-You-Go (PAYG) subscription charges
you monthly for the services you used in that billing period. This subscription type is appropriate
for a wide range of users, from individuals to small businesses, and many large organizations as
well.
Consider using an Azure Enterprise Agreement. An Enterprise Agreement provides flexibility
to buy cloud services and software licenses under one agreement. The agreement comes with
discounts for new licenses and Software Assurance. This type of subscription targets enterprise-
scale organizations.
Consider supporting Azure for students. An Azure for Students subscription includes a
monetary credit that can be used within the first 12 months.
Students can select free services without providing a credit card during the sign-up
process. You must verify your student status through your organizational email address.

CONFIGURING AZURE POLICY


Azure Policy is a service in Azure that enables you to create, assign, and manage policies to control or
audit your resources. These policies enforce different rules over your resource configurations so the
configurations stay compliant with corporate standards.
Your company is subject to many regulations and compliance rules. Your company wants to ensure
each department implements and deploys resources correctly. You're responsible for investigating how
to use Azure Policy and management groups to implement compliance measures.
Azure management groups provide a level of scope and control above your subscriptions. You can use
management groups as containers to manage access, policy, and compliance across your subscriptions.
Things to know about management groups
Consider the following characteristics of Azure management groups:
By default, all new subscriptions are placed under the top-level management group, or root group.
All subscriptions within a management group automatically inherit the conditions applied to that
management group.
A management group tree can support up to six levels of depth. Azure role-based access
control authorization for management group operations isn't enabled by default.
Things to consider when using Azure Policy
Consider deployable resources. Specify the resource types that your organization can deploy by
using Azure Policy. You can specify the set of virtual machine SKUs that your organization can
deploy.
Consider location restrictions. Restrict the locations your users can specify when deploying
resources. You can choose the geographic locations or regions that are available to your
organization.
Consider rules enforcement. Enforce compliance rules and configuration options to help
manage your resources and user options. You can enforce a required tag on resources and define
the allowed values.
Consider inventory audits. Use Azure Policy with Azure Backup service on your VMs and run
inventory audits.
Azure Administrators use Azure Policy to create policies that define conventions for resources. A policy
definition describes the compliance conditions for a resource, and the actions to complete when the
conditions are met. One or more policy definitions are grouped into an initiative definition, to control the
scope of your policies and evaluate the compliance of your resources.
After you create your initiative definition, the next step is to assign the initiative to establish the
scope for the policies. The scope determines what resources or grouping of resources are affected
by the conditions of the policies.
To establish the scope of an assignment in the Azure portal, you select the affected
subscriptions. As an option, you can also choose the affected resource groups.
CONFIGURING ROLE-BASED ACCESS CONTROL
Azure Administrators need to secure access to their Azure resources like virtual machines (VMs),
websites, networks, and storage. Administrators need mechanisms to help them manage who can
access their resources, and what actions are allowed. Organizations that do business in the cloud
recognize that securing their resources is a critical function of their infrastructure.
Secure access management for cloud resources is critical for businesses that operate in the cloud.
Role-based access control (RBAC) is a mechanism that can help you manage who can access
your Azure resources. RBAC lets you determine what operations specific users can do on
specific resources, and control what areas of a resource each user can access.
Azure RBAC is an authorization system built on Azure Resource Manager. Azure RBAC
provides fine-grained access management of resources in Azure.
Things to know about Azure RBAC
Here are some examples of what you can do with Azure RBAC:
 Allow an application to access all resources in a resource group.
 Allow one user to manage VMs in a subscription, and allow another user to manage
virtual networks.
 Allow a database administrator (DBA) group to manage SQL databases in a subscription.
 Allow a user to manage all resources in a resource group, such as VMs, websites,
and subnets.
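Each of the scenarios above is a role assignment: a security principal, a role definition, and a scope. The sketch below models them as plain tuples; it is an illustration of the concept, not an SDK API, and the principal names and scope IDs are invented.

```python
# Hedged sketch: an Azure RBAC role assignment combines a security principal,
# a role definition, and a scope. These tuples model the bulleted scenarios;
# principals and scope IDs are invented for illustration.
assignments = [
    # (principal,      role,                          scope)
    ("app-inventory",  "Reader",                      "/subscriptions/S1/resourceGroups/rg-app"),
    ("user-vm-admin",  "Virtual Machine Contributor", "/subscriptions/S1"),
    ("group-dba",      "SQL DB Contributor",          "/subscriptions/S1"),
    ("user-rg-owner",  "Owner",                       "/subscriptions/S1/resourceGroups/rg-web"),
]

def roles_at(principal: str) -> list:
    """All (role, scope) pairs granted to a principal."""
    return [(role, scope) for p, role, scope in assignments if p == principal]

print(roles_at("group-dba"))  # [('SQL DB Contributor', '/subscriptions/S1')]
```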
Things to consider when using Azure RBAC
Consider your requestors. Plan your strategy to accommodate for all types of access to
your resources. Security principals are created for anything that requests access to your
resources. Determine who are the requestors in your organization. Requestors can be internal
or external users, groups of users, applications and services, resources, and so on.
Consider your roles. Examine the types of job responsibilities and work scenarios in your
organization. Roles are commonly built around the requirements to fulfill job tasks or complete
work goals. Certain users like administrators, corporate controllers, and engineers can require a
level of access beyond what most users need. Some roles can be defined to provide the same
access for all members of a team or department for specific resources or applications.
Consider scope of permissions. Think about how you can ensure security by controlling the
scope of permissions for role assignments. Outline the types of permissions and levels of scope
that you need to support. You can apply different scope levels for a single role to support
requestors in different scenarios.
Consider built-in or custom definitions. Review the built-in role definitions in Azure
RBAC. Built-in roles can be used as-is, or adjusted to meet the specific requirements for your
organization. You can also create custom role definitions from scratch.
Comparison of Azure RBAC roles and Microsoft Entra ID admin roles:
 Access management: Azure RBAC roles manage access to Azure resources;
Microsoft Entra ID admin roles manage access to Microsoft Entra resources.
 Scope assignment: Azure RBAC scope can be specified at multiple levels,
including management groups, subscriptions, resource groups, and resources;
Microsoft Entra ID admin role scope is specified at the tenant level.
 Role definitions: Azure RBAC roles can be defined via the Azure portal, the
Azure CLI, Azure PowerShell, Azure Resource Manager templates, and the
REST API; Microsoft Entra ID admin roles can be defined via the Azure admin
portal, the Microsoft 365 admin portal, and Microsoft Graph PowerShell.
Built-in role definitions are defined for several categories of services, tasks, and users. You can
assign built-in roles at different scopes to support various scenarios, and build custom roles from
the base definitions.
Microsoft Entra ID also provides built-in roles to manage resources in Microsoft Entra ID,
including users, groups, and domains. Microsoft Entra ID offers administrator roles that you can
implement for your organization, such as Global admin, Application admin, and Application
developer.
The following diagram illustrates how you can apply Microsoft Entra administrator roles and
Azure roles in your organization.

Microsoft Entra admin roles are used to manage resources in Microsoft Entra ID, such as users,
groups, and domains. These roles are defined for the Microsoft Entra tenant at the root level of
the configuration.
Azure RBAC roles provide more granular access management for Azure resources. These roles
are defined for a requestor or resource and can be applied at multiple levels: the root,
management groups, subscriptions, resource groups, or resources.

CONFIGURING AZURE RESOURCES WITH TOOLS


The Azure portal lets you build, manage, and monitor everything from simple web apps to complex cloud
applications in a single, unified console.
Azure Cloud Shell is an interactive, browser-accessible shell for managing Azure resources. It
provides the flexibility of choosing the shell experience that best suits the way you work. Linux
users can opt for a Bash experience, while Windows users can opt for PowerShell.
Cloud Shell enables access to a browser-based command-line experience built with Azure
management tasks in mind. You can use Cloud Shell to work untethered from a local machine in
a way only the cloud can provide.

Azure PowerShell is a module that you add to Windows PowerShell or PowerShell Core to
enable you to connect to your Azure subscription and manage resources. Azure PowerShell
requires PowerShell to function. PowerShell provides services such as the shell window and
command parsing. Azure PowerShell adds the Azure-specific commands.
For example, Azure PowerShell provides the New-AzVm command that creates a virtual
machine inside your Azure subscription. To use it, you would launch the PowerShell application
and then issue a command such as the following:
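A minimal sketch of such a command follows; the resource group, VM name, location, and image are illustrative placeholders:

```powershell
# Sign in, then create a VM with the simplified New-AzVm parameter set
# (resource names below are placeholders)
Connect-AzAccount
New-AzVm `
  -ResourceGroupName "myResourceGroup" `
  -Name "myVM" `
  -Location "eastus" `
  -Image "Ubuntu2204"
```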
Azure CLI is a command-line program to connect to Azure and execute administrative commands
on Azure resources. It runs on Linux, macOS, and Windows, and allows administrators and
developers to execute their commands through a terminal, command-line prompt, or script
instead of a web browser. For example, to restart a VM, you would use a command such as the
following:
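As a sketch, assuming a VM named myVM in a resource group named myResourceGroup (both placeholders):

```shell
# Restart an existing VM
az vm restart --resource-group myResourceGroup --name myVM
```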

USING AZURE RESOURCE MANAGER


 Creating resource groups
There are some important factors to consider when defining your resource group:
All the resources in your group should share the same lifecycle. You deploy, update, and delete
them together. If one resource, such as a database server, needs to exist on a different deployment
cycle, it should be in another resource group.
I. Each resource can only exist in one resource group.
II. You can add or remove a resource to a resource group at any time.
III. You can move a resource from one resource group to another group. Limitations do
apply to moving resources.
IV. A resource group can contain resources that reside in different regions.
V. A resource group can be used to scope access control for administrative actions.
VI. A resource can interact with resources in other resource groups. This interaction
is common when the two resources are related but don't share the same lifecycle
(for example, web apps connecting to a database)
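The points above can be sketched with the Azure CLI; the group names, resource IDs, and locations below are illustrative placeholders:

```shell
# Create a resource group in a region
az group create --name app-rg --location eastus

# Move a resource into a different resource group (limitations apply)
az resource move --destination-group shared-rg \
  --ids "/subscriptions/<subscription-id>/resourceGroups/app-rg/providers/Microsoft.Storage/storageAccounts/mystorageacct"
```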
 Create Azure Resource Manager locks
A common concern with resources provisioned in Azure is the ease with which they can be
deleted. An over-zealous or careless administrator can accidentally erase months of work with a
few steps. Resource Manager locks allow organizations to put a structure in place that prevents
the accidental deletion of resources in Azure.
You can associate the lock with a subscription, resource group, or resource.
Locks are inherited by child resources.
Lock types
There are two types of resource locks:
 Read-Only locks, which prevent any changes to the resource.
 Delete locks, which prevent deletion.
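Both lock types can be created from the CLI; the lock, group, and resource names below are placeholders:

```shell
# Prevent deletion of everything in a resource group
az lock create --name NoDelete --lock-type CanNotDelete --resource-group app-rg

# Prevent any changes to a single virtual machine
az lock create --name NoChanges --lock-type ReadOnly --resource-group app-rg \
  --resource myVM --resource-type Microsoft.Compute/virtualMachines
```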
 Determine resource limits
Azure lets you view resource usage against limits. This is helpful to track current usage, and plan
for future use.

 The limits shown are the limits for your subscription.


 When you need to increase a default limit, there is a Request Increase link.
 All resources have a maximum limit listed in Azure limits.
 If you are at the maximum limit, the limit can't be increased.

CONFIGURING RESOURCES WITH AZURE RESOURCE


MANAGER TEMPLATES
Template benefits
Templates improve consistency. Resource Manager templates provide a common language for
you and others to describe your deployments. Regardless of the tool or SDK that you use to
deploy the template, the structure, format, and expressions inside the template remain the same.
Templates help express complex deployments. Templates enable you to deploy multiple
resources in the correct order. For example, you wouldn't want to deploy a virtual machine prior
to creating an operating system (OS) disk or network interface. Resource Manager maps out each
resource and its dependent resources, and creates dependent resources first. Dependency mapping
helps ensure that the deployment is carried out in the correct order.
Templates reduce manual, error-prone tasks. Manually creating and connecting resources can
be time consuming, and it's easy to make mistakes. Resource Manager ensures that the
deployment happens the same way every time.
Templates are code. Templates express your requirements through code. Think of a template as
a type of Infrastructure as Code that can be shared, tested, and versioned similar to any other
piece of software. Also, because templates are code, you can create a "paper trail" that you can
follow. The template code documents the deployment. Most users maintain their templates
under some kind of revision control, such as Git. When you change the template, its revision
history also documents how the template (and your deployment) has evolved over time.
Templates promote reuse. Your template can contain parameters that are filled in when the
template runs. A parameter can define a username or password, a domain name, and so on.
Template parameters enable you to create multiple versions of your infrastructure, such as staging
and production, while still using the exact same template.
Templates are linkable. You can link Resource Manager templates together to make the templates
themselves modular. You can write small templates that each define a piece of a solution, and
then combine them to create a complete system.
Templates simplify orchestration. You only need to deploy the template to deploy all of your
resources. Normally this would take multiple operations.
Explore the Azure Resource Manager template parameters
In the parameters section of the template, you specify which values you can input when
deploying the resources. The available properties for a parameter include type, defaultValue,
allowedValues, minValue, maxValue, minLength, maxLength, and metadata.
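A hedged sketch of a parameters section follows; the parameter names and values are illustrative, not from a specific template:

```json
"parameters": {
  "adminUsername": {
    "type": "string",
    "minLength": 3,
    "maxLength": 20,
    "metadata": { "description": "Administrator account name for the VM." }
  },
  "environment": {
    "type": "string",
    "defaultValue": "staging",
    "allowedValues": [ "staging", "production" ]
  }
}
```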
Azure Quickstart Templates are Azure Resource Manager templates provided by the Azure
community.

Some templates provide everything you need to deploy your solution, while others might serve
as a starting point for your template. Either way, you can study these templates to learn how to
best author and structure your own templates.
The README.md file provides an overview of what the template does.
The azuredeploy.json file defines the resources that will be deployed.
The azuredeploy.parameters.json file provides the values the template needs.

CONFIGURING VIRTUAL NETWORKS


Azure virtual networks are an essential component for creating private networks in Azure.
They allow different Azure resources to securely communicate with each other, the internet,
and on- premises networks.
You can implement Azure Virtual Network to create a virtual representation of your network
in the cloud. Let's examine some characteristics of virtual networks in Azure.
An Azure virtual network is a logical isolation of the Azure cloud resources.
You can use virtual networks to provision and manage virtual private networks (VPNs) in Azure.
Each virtual network has its own Classless Inter-Domain Routing (CIDR) block and can be
linked to other virtual networks and on-premises networks.
You can link virtual networks with an on-premises IT infrastructure to create hybrid or
cross-premises solutions, when the CIDR blocks of the connecting networks don't overlap.
You control the DNS server settings for virtual networks, and segmentation of the virtual network
into subnets.
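Creating a virtual network with a CIDR block and a subnet can be sketched as follows; the names and address ranges are placeholders:

```shell
# Create a virtual network with one subnet
az network vnet create --resource-group app-rg --name app-vnet \
  --address-prefix 10.0.0.0/16 \
  --subnet-name frontend --subnet-prefix 10.0.1.0/24
```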
The following illustration depicts a virtual network that has a subnet containing two virtual
machines. The virtual network has connections to an on-premises infrastructure and a separate
virtual network.

Private IP addresses enable communication within an Azure virtual network and your on-premises
network. You create a private IP address for your resource when you use a VPN gateway or
Azure ExpressRoute circuit to extend your network to Azure.
Public IP addresses allow your resource to communicate with the internet. You can create a
public IP address to connect with Azure public-facing services.
The following illustration shows a virtual machine resource that has a private IP address and a
public IP address.

Let's take a closer look at the characteristics of IP addresses.


 IP addresses can be statically assigned or dynamically assigned.
 You can separate dynamically and statically assigned IP resources into different subnets.
 Static IP addresses don't change and are best for certain situations, such as:
 DNS name resolution, where a change in the IP address requires updating host records.
 IP address-based security models that require apps or services to have a static IP address.
 TLS/SSL certificates linked to an IP address.
 Firewall rules that allow or deny traffic by using IP address ranges.
 Role-based virtual machines such as Domain Controllers and DNS servers.
CONFIGURE NETWORK SECURITY GROUPS
Network security groups are a way to limit network traffic to resources in your virtual
network. Review the following characteristics of network security groups:
 A network security group contains a list of security rules that allow or deny inbound
or outbound network traffic.
 A network security group can be associated to a subnet or a network interface.
 A network security group can be associated multiple times.
 You create a network security group and define security rules in the Azure portal.

Consider the traffic rules you need to create and which services can fulfill your network requirements.

Source: Identifies how the security rule controls inbound traffic. The value specifies a specific
source IP address range that's allowed or denied. The source filter can be any resource, an IP
address range, an application security group, or a default tag.
Destination: Identifies how the security rule controls outbound traffic. The value specifies a
specific destination IP address range that's allowed or denied. The destination filter value is
similar to the source filter. The value can be any resource, an IP address range, an application
security group, or a default tag.
Service: Specifies the destination protocol and port range for the security rule. You can choose a
predefined service like RDP or SSH or provide a custom port range. There are a large number
of services to select from.
Priority: Assigns the priority order value for the security rule. Rules are processed according to
the priority order of all rules for a network security group, including a subnet and network
interface. The lower the priority value, the higher priority for the rule.
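An inbound rule combining these settings can be sketched as follows; the group, NSG, and rule names are placeholders:

```shell
# Create an NSG, then allow inbound SSH;
# the low priority value (300) gives the rule high precedence
az network nsg create --resource-group app-rg --name app-nsg
az network nsg rule create --resource-group app-rg --nsg-name app-nsg \
  --name AllowSSH --priority 300 --direction Inbound --access Allow \
  --protocol Tcp --destination-port-ranges 22
```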

CONFIGURING AZURE DNS


Azure DNS enables you to host your DNS domains in Azure and access name resolution for your
domains by using Microsoft Azure infrastructure. You can configure and manage your custom
domains with Azure DNS in the Azure portal. By accessing your domains in Azure, you can use
your same credentials, support agreements, and billing preferences as for your other Azure
services.
You can add a DNS zone in the Azure portal, as shown in the following image. Several
configuration settings are required to create a DNS zone. In the portal, you specify the DNS
zone name, number of records, resource group, zone location, associated subscription, and DNS
name servers.
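Creating a zone and a record can be sketched with the CLI; the zone name, record name, and IP address are placeholders:

```shell
# Create a public DNS zone, then add an A record to it
az network dns zone create --resource-group app-rg --name contoso.com
az network dns record-set a add-record --resource-group app-rg \
  --zone-name contoso.com --record-set-name www --ipv4-address 203.0.113.10
```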

Take a moment to review some important characteristics about DNS zones.


Within a resource group, the name of a DNS zone must be unique.
By providing a unique name when you create a new DNS zone, Azure ensures that the DNS zone
doesn't already exist in the resource group.
Multiple DNS zones can have the same name, but the DNS zones must exist in different resource
groups or in different Azure subscriptions.
When multiple DNS zones share the same name, each DNS zone instance is assigned to a
different DNS name server address.
The Root/Parent domain is registered at the registrar and then pointed to Azure DNS.
Child domains are registered directly in Azure DNS.
The delegation process for your domain involves several steps:

 Identify your DNS name servers


 Update your parent domain
 Delegate subdomains (optional)
Consider a scenario where resources need to be resolved from within the virtual network by
using a specific domain name (or DNS zone). The name resolution needs to be private and not
accessible from the internet. The scenario requires that Azure automatically registers the virtual
machines within the virtual network into the DNS zone.

Virtual network 1 contains two virtual machines: VM1 and VM2. VM1 and VM2 each have a
private IP address.
When an Azure Private DNS zone address is created (such as contoso.lab) and linked to
Virtual network 1, Azure DNS automatically creates two A records in the DNS zone if Auto
registration is enabled in the link configuration.
In this scenario, Azure DNS uses only Virtual network 1 to resolve domain name (or DNS zone)
queries.
Azure DNS queries from VM1 in Virtual network 1 to resolve the VM2.contoso.lab address
receive an Azure DNS response that contains the private IP address of VM2 (10.0.0.5).
A reverse DNS query (PTR) for the private IP address of VM1 (10.0.0.4), issued from VM2,
receives an Azure DNS response that contains the FQDN of VM1, as expected.
The second scenario involves name resolution across multiple virtual networks, which is probably
the most common usage for Azure Private DNS zones. This scenario consists of two virtual
networks. One network is focused on registration for Azure Private DNS zone records and the
other supports name resolution.
Virtual network 1 is designated for registration. Virtual network 2 is designated for name
resolution.
The design strategy is for both virtual networks to share the common DNS zone
address, contoso.lab.
The resolution and registration virtual networks are linked to the common DNS zone.
Azure Private DNS zone records for virtual machines in Virtual network 1 (registration) are
created automatically.
For virtual machines in Virtual network 2 (resolution), Azure Private DNS zone records can be
created manually.
In this scenario, Azure DNS uses both virtual networks to resolve domain name queries.
An Azure DNS query from a virtual machine in Virtual network 2 (resolution) for a virtual
machine in Virtual network 1 (registration) receives an Azure DNS response containing
the private IP address of the virtual machine.
Reverse DNS queries are scoped to the same virtual network.
A reverse DNS (PTR) query from a virtual machine in Virtual network 2 (resolution) for a virtual
machine in Virtual network 1 (registration) receives an Azure DNS response containing
the NXDOMAIN of the virtual machine. NXDOMAIN is an error message that indicates the
queried domain doesn't exist.
A reverse DNS (PTR) query from a virtual machine in Virtual network 1 (registration) for
a virtual machine also in Virtual network 1 receives the FQDN for the virtual machine.

CONFIGURING AZURE VIRTUAL NETWORK PEERING


Azure Virtual Network peering lets you connect virtual networks in the same or different regions.
Azure Virtual Network peering provides secure communication between resources in the peered
networks.
Prominent characteristics of Azure Virtual Network peering:
There are two types of Azure Virtual Network peering: regional and global.

Regional virtual network peering connects Azure virtual networks that exist in the same region.
Global virtual network peering connects Azure virtual networks that exist in different regions.
You can create a regional peering of virtual networks in the same Azure public cloud region, or in
the same China cloud region, or in the same Microsoft Azure Government cloud region.
You can create a global peering of virtual networks in any Azure public cloud region, or in any
China cloud region.
Global peering of virtual networks in different Azure Government cloud regions isn't permitted.
After you create a peering between virtual networks, the individual virtual networks are still
managed as separate resources.
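A peering is created in each direction; one side can be sketched as follows (the network and group names are placeholders):

```shell
# Peer vnetA to the hub network
# (a reciprocal peering from the hub back to vnetA is also required)
az network vnet peering create --resource-group app-rg \
  --name vnetA-to-hub --vnet-name vnetA \
  --remote-vnet hub-vnet --allow-vnet-access
```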
Virtual network A and virtual network B are each peered with a hub virtual network. The hub
virtual network contains several resources, including a gateway subnet and an Azure VPN
gateway. The VPN gateway is configured to allow VPN gateway transit. Virtual network B
accesses resources in the hub, including the gateway subnet, by using a remote VPN gateway.

Virtual network peering is nontransitive. The communication capabilities in peering are available
to only the virtual networks and resources in the peering. Other mechanisms have to be used to
enable traffic to and from resources and networks outside the private peering network.
The following diagram shows a hub and spoke virtual network with an NVA and VPN gateway.
The hub and spoke network is accessible to other virtual networks via user-defined routes and
service chaining.

CONFIGURE NETWORK ROUTING AND ENDPOINTS


Administrators use network routes to control the flow of traffic through a network. Azure virtual
networking provides capabilities to help you customize your network routes, establish service
endpoints, and access private links.
Review the following points and think about how you can implement service endpoints in your
configuration.
 Consider improved security for resources. Implement service endpoints to improve the
security of your Azure service resources. When service endpoints are enabled in your
virtual network, you secure Azure service resources to your virtual network with
virtual network rules. The rule improves security by fully removing public internet
access to resources, and allowing traffic only from your virtual network.
 Consider optimal routing for service traffic. Routes in your virtual network that force
internet traffic to your on-premises or network virtual appliances also typically force
Azure service traffic to take the same route as the internet traffic. This traffic control
process is known as forced-tunneling. Service endpoints provide optimal routing for
Azure service traffic to allow you to circumvent forced tunneling.
 Consider direct traffic to the Microsoft network. Use service endpoints to keep traffic on
the Azure backbone network. This approach allows you to continue auditing and
monitoring outbound internet traffic from your virtual networks, through
forced-tunneling, without impacting service traffic. Learn more about user-defined routes
and forced-tunneling.
 Consider easy configuration and maintenance. Configure service endpoints in your
subnets for simple setup and low maintenance. You no longer need reserved public IP
addresses in your virtual networks to secure Azure resources through an IP firewall. There
are no NAT or gateway devices required to set up the service endpoints.

Review the characteristics of Azure Private Link and its network routing configurations.


Azure Private Link keeps all traffic on the Microsoft global network. There's no public internet
access.
Private Link is global and there are no regional restrictions. You can connect privately to services
running in other Azure regions.
Services delivered on Azure can be brought into your private virtual network by mapping your
network to a private endpoint.
Private Link can privately deliver your own services in your customer's virtual networks.
All traffic to the service can be routed through the private endpoint. No gateways, NAT devices,
Azure ExpressRoute or VPN connections, or public IP addresses are required.
The following illustration demonstrates a network routing configuration with Azure Private Link.
The service (for example, Azure SQL Database) is reached through a private endpoint that's
protected by a network security group (NSG), so no direct public connection is exposed.

Things to consider when using Azure Private Link


There are many benefits to working with Azure Private Link. Review the following points and
consider how you can implement the service for your scenarios.

 Consider private connectivity to services on Azure. Connect privately to services


running in other Azure regions. Traffic remains on the Microsoft network with no public
internet access.
 Consider integration with on-premises and peered networks. Access private endpoints
over private peering or VPN tunnels from on-premises or peered virtual networks.
Microsoft hosts the traffic, so you don't need to set up public peering or use the internet to
migrate your workloads to the cloud.
 Consider protection against data exfiltration for Azure resources. Map private endpoints
to Azure PaaS resources. When there's a security incident within your network, only the
mapped resources are accessible. This implementation eliminates the threat of data
exfiltration.
CONFIGURING AZURE LOAD BALANCER
Azure Load Balancer delivers high availability and network performance to your applications.
Administrators use load balancing to efficiently distribute incoming network traffic across back-
end servers and resources. A load balancer is implemented by using load-balancing rules and
health probes.
Take a closer look at how Azure Load Balancer operates.

 Azure Load Balancer can be used for inbound and outbound scenarios.
 You can implement a public or internal load balancer, or use both types in a
combination configuration.
 To implement a load balancer, you configure four components:
 Front-end IP configuration
 Back-end pools
 Health probes
 Load-balancing rules
 The front-end configuration specifies the public IP or internal IP that your load
balancer responds to.
 The back-end pools are your services and resources, including Azure Virtual Machines
or instances in Azure Virtual Machine Scale Sets.
 Load-balancing rules determine how traffic is distributed to back-end resources.
 Health probes ensure the resources in the backend are healthy.
 Load Balancer scales up to millions of TCP and UDP application flows.

Each load balancer has one or more back-end pools that are used for distributing traffic. The
back-end pools contain the IP addresses of the virtual NICs that are connected to your load
balancer.
You configure these pool settings in the Azure portal.

Things to know about back-end pools


The SKU type that you select determines which endpoint configurations are supported for the
pool along with the number of pool instances allowed.
The Basic SKU allows up to 300 pool instances, and the Standard SKU allows up to 1,000 pool instances.
When you configure the back-end pools, you can connect to availability sets, virtual machines, or
Azure Virtual Machine Scale Sets.
For the Basic SKU, you can select virtual machines in a single availability set or virtual machines
in an instance of Azure Virtual Machine Scale Sets.
For the Standard SKU, you can select virtual machines or Virtual Machine Scale Sets in a
single virtual network. Your configuration can include a combination of virtual machines,
availability sets, and Virtual Machine Scale Sets.
Take a closer look at how to configure load-balancing rules for your back-end pools.
 To configure a load-balancing rule, you need a frontend, a backend, and a health probe
for your load balancer.
 To define a rule in the Azure portal, you configure several settings:
 IP version (IPv4 or IPv6)
 Front-end IP address, Port, and Protocol (TCP or UDP)
 Back-end pool and Back-end port
 Health probe
 Session persistence
By default, Azure Load Balancer distributes network traffic equally among multiple virtual machines.
 Azure Load Balancer uses a five-tuple hash to map traffic to available servers. The tuple
consists of the source IP address, source port, destination IP address, destination port, and
protocol type. The load balancer provides stickiness only within a transport session.
 Session persistence specifies how to handle traffic from a client. By default, successive
requests from a client go to any virtual machine in your pool.
 You can modify the session persistence behavior as follows:
 None (default): Any virtual machine can handle the request.
 Client IP: Successive requests from the same client IP address go to the same virtual machine.
 Client IP and protocol: Successive requests from the same client IP address and protocol
combination go to the same virtual machine.
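A rule combining these settings can be sketched as follows; the load balancer and component names are placeholders, and `--load-distribution SourceIP` corresponds to the Client IP persistence mode:

```shell
# Map front-end port 80 to back-end port 80 with Client IP session persistence
az network lb rule create --resource-group app-rg --lb-name app-lb \
  --name HTTPRule --protocol Tcp --frontend-port 80 --backend-port 80 \
  --frontend-ip-name myFrontEnd --backend-pool-name myBackEndPool \
  --probe-name myHealthProbe --load-distribution SourceIP
```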

CONFIGURING AZURE APPLICATION GATEWAY


Administrators use Azure Application Gateway to manage requests from client applications to
their web apps. An application gateway listens for incoming traffic to web apps and checks for
messages sent via protocols like HTTP. Gateway rules direct the traffic to resources in a
back-end pool.
Routing options for Azure Application Gateway.
 Azure Application Gateway offers two primary methods for routing traffic:
 Path-based routing sends requests with different URL paths to different pools of
back-end servers.
 Multi-site routing configures more than one web application on the same
application gateway instance.
 You can configure your application gateway to redirect traffic.
Application Gateway can redirect traffic received at one listener to another listener, or to an external site. This
approach is commonly used by web apps to automatically redirect HTTP requests to communicate via HTTPS. The
redirection ensures all communication between your web app and clients occurs over an encrypted path.

You can implement Application Gateway to rewrite HTTP headers.

HTTP headers allow the client and server to pass parameter information with the request or the response. In this
scenario, you can translate URLs or query string parameters, and modify request and response headers. Add
conditions to ensure URLs or headers are rewritten only for certain conditions.

Application Gateway allows you to create custom error pages instead of displaying default error pages. You can use
your own branding and layout by using a custom error page.

Review how the components of an application gateway work together.

The front-end IP address receives the client requests.

An optional Web Application Firewall checks incoming traffic for common threats before the requests reach the
listeners.
One or more listeners receive the traffic and route the requests to the back-end pool.

Routing rules define how to analyze the request to direct the request to the appropriate back-end pool.

A back-end pool contains web servers for resources like virtual machines or Virtual Machine Scale Sets. Each pool
has a load balancer to distribute the workload across the resources.

Health probes determine which back-end pool servers are available for load-balancing.

The following flowchart demonstrates how the Application Gateway components work together to direct traffic
requests between the frontend and back-end pools in your configuration.

CONFIGURING STORAGE ACCOUNTS


Azure Storage is Microsoft's cloud storage solution for modern data storage scenarios. Azure
Storage offers a massively scalable object store for data objects. It provides a file system service
for the cloud, a messaging store for reliable messaging, and a NoSQL store.
Azure Storage is a service that you can use to store files, messages, tables, and other types of
information. You use Azure Storage for applications like file shares. Developers use Azure
Storage for working data. Working data includes websites, mobile apps, and desktop applications.
Azure Storage is also used by IaaS virtual machines, and PaaS cloud services.
 Azure Storage offers four data services that can be accessed by using an Azure
storage account:
 Azure Blob Storage (containers): A massively scalable object store for text and
binary data.
 Azure Files: Managed file shares for cloud or on-premises deployments.
 Azure Queue Storage: A messaging store for reliable messaging between
application components.
 Azure Table Storage: A service that stores nonrelational structured data (also known
as structured NoSQL data).
If your storage account name is mystorageaccount, default endpoints for your storage account are
formed for the Azure services as shown in the following table:
Service Default endpoint
Container service https://mystorageaccount.blob.core.windows.net
Table service https://mystorageaccount.table.core.windows.net
Queue service https://mystorageaccount.queue.core.windows.net
File service https://mystorageaccount.file.core.windows.net

CONFIGURING AZURE BLOB STORAGE


Azure Blob Storage is a service for storing large amounts of unstructured object data.
Unstructured data is data that doesn't adhere to a particular data model or definition, such as text
or binary data.
Review the configuration characteristics of containers and blobs.
 All blobs must be in a container.
 A container can store an unlimited number of blobs.
 An Azure storage account can contain an unlimited number of containers.
 You can create the container in the Azure portal.
 You upload blobs into a container.
You can use Azure Blob Storage lifecycle management policy rules to accomplish several tasks:
Transition blobs to a cooler storage tier (Hot to Cool, Hot to Archive, Cool to Archive) to
optimize for performance and cost.
Delete blobs at the end of their lifecycles.
Define rule-based conditions to run once per day at the Azure storage account level.
Apply rule-based conditions to containers or a subset of blobs.
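A lifecycle management policy implementing these tasks can be sketched in JSON; the rule name, prefix, and day thresholds are illustrative:

```json
{
  "rules": [
    {
      "name": "age-out-logs",
      "enabled": true,
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "logs/" ]
        },
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 30 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 90 },
            "delete": { "daysAfterModificationGreaterThan": 365 }
          }
        }
      }
    }
  ]
}
```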

Review the characteristics of blob types.


 Block blobs. A block blob consists of blocks of data that are assembled to make a
blob. Most Blob Storage scenarios use block blobs. Block blobs are ideal for storing
text and binary data in the cloud, like files, images, and videos.
 Append blobs. An append blob is similar to a block blob because the append blob
also consists of blocks of data. The blocks of data in an append blob are optimized
for append operations. Append blobs are useful for logging scenarios, where the amount
of data can increase as the logging operation continues.
 Page blobs. A page blob can be up to 8 TB in size. Page blobs are more efficient for
frequent read/write operations. Azure Virtual Machines uses page blobs for
operating system disks and data disks.
The block blob type is the default type for a new blob. When you're creating a new blob, if you
don't choose a specific type, the new blob is created as a block blob.
After you create a blob, you can't change its type.

CONFIGURING AZURE STORAGE SECURITY


Azure Storage provides a comprehensive set of security capabilities that work together to enable
developers to build secure applications.
A shared access signature (SAS) is a uniform resource identifier (URI) that grants restricted access rights
to Azure Storage resources. SAS is a secure way to share your storage resources without compromising
your account keys.
Configure a shared access signature: In the Azure portal, you configure several settings to create a
SAS. As you review these details, consider how you might implement shared access signatures in your
storage security solution.
 Signing method: Choose the signing method: Account key or User delegation key.
 Signing key: Select the signing key from your list of keys.
 Permissions: Select the permissions granted by the SAS, such as read or write.
 Start and Expiry date/time: Specify the time interval for which the SAS is valid. Set the start
time and the expiry time.
 Allowed IP addresses: (Optional) Identify an IP address or range of IP addresses from which
Azure Storage accepts the SAS.
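Under the hood, a SAS token's signature is an HMAC-SHA256 over a newline-delimited "string-to-sign", computed with the account key. A simplified Python sketch of how an *account* SAS signature is derived (the field order follows the documented account SAS format but should be verified against the current REST reference; the key and version string here are made up, and a real token also carries the fields themselves as query parameters):

```python
import base64
import hashlib
import hmac

def account_sas_signature(account_name: str, account_key_b64: str,
                          permissions: str, services: str, resource_types: str,
                          start: str, expiry: str, ip: str = "",
                          protocol: str = "https",
                          version: str = "2021-08-06") -> str:
    # Newline-delimited string-to-sign, ending with a trailing newline
    # (field order per the documented account SAS format -- an assumption
    # to double-check before relying on this).
    string_to_sign = "\n".join([
        account_name, permissions, services, resource_types,
        start, expiry, ip, protocol, version, ""
    ])
    key = base64.b64decode(account_key_b64)  # account keys are Base64-encoded
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")

# Demo with a made-up key (NOT a real account key):
demo_key = base64.b64encode(b"not-a-real-key").decode()
sig = account_sas_signature("mystorageaccount", demo_key, "rl", "b", "co",
                            "2024-01-01T00:00:00Z", "2024-01-02T00:00:00Z")
print(sig)
```

In practice you would let the portal, the SDK, or the CLI generate the token rather than signing by hand; the sketch only shows why a leaked account key compromises every SAS signed with it.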

Data is encrypted automatically before it's persisted to Azure Managed Disks, Azure Blob
Storage, Azure Queue Storage, Azure Cosmos DB, Azure Table Storage, or Azure Files.
Data is automatically decrypted before it's retrieved.
Azure Storage encryption, encryption at rest, decryption, and key management are transparent to
users.
All data written to Azure Storage is encrypted through 256-bit advanced encryption standard
(AES) encryption. AES is one of the strongest block ciphers available.
Azure Storage encryption is enabled for all new and existing storage accounts and can't be
disabled.
In the Azure portal, you configure Azure Storage encryption by specifying the encryption
type. You can manage the keys yourself, or you can have the keys managed by Microsoft.
Consider how you might implement Azure Storage encryption for your storage security.
It's important to understand that when you use SAS in your application, there can be
potential risks.
 If a SAS is compromised, it can be used by anyone who obtains it, including a
malicious user.
 If a SAS provided to a client application expires and the application is unable to retrieve
a new SAS from your service, the application functionality might be hindered.

CONFIGURING AZURE FILES AND AZURE FILE SYNC


Azure Files offers fully managed file shares in the cloud that are accessible via industry
standard protocols. Azure File Sync is a service that allows you to cache several Azure Files
shares on an on-premises Windows Server or cloud virtual machine.
Azure Files offers shared storage for applications by using the industry-standard Server Message
Block (SMB) and Network File System (NFS) protocols. Azure virtual machines (VMs) and cloud
services can share file data across application components by using mounted shares. On-premises
applications can also access file data in the share.
 Consider replacement and supplement options. Replace or supplement traditional
on-premises file servers or NAS devices by using Azure Files.
 Consider global access. Directly access Azure file shares by using most operating
systems, such as Windows, macOS, and Linux, from anywhere in the world.
 Consider lift and shift support. Lift and shift applications to the cloud with Azure Files
for apps that expect a file share to store file application or user data.
 Consider using Azure File Sync. Replicate Azure file shares to Windows Servers by
using Azure File Sync. You can replicate on-premises or in the cloud for performance
and distributed caching of the data where it's being used. We'll take a closer look at
Azure File Sync in a later unit.
 Consider shared applications. Store shared application settings such as configuration
files in Azure Files.
 Consider diagnostic data. Use Azure Files to store diagnostic data such as logs,
metrics, and crash dumps in a shared location.
 Consider tools and utilities. Azure Files is a good option for storing tools and utilities
that are needed for developing or administering Azure VMs or cloud services.
 Azure Storage Explorer is a standalone application that makes it easy to work with
Azure Storage data on Windows, macOS, and Linux. With Azure Storage Explorer, you
can access multiple accounts and subscriptions, and manage all your Storage content.
Things to know about Azure Storage Explorer
 Azure Storage Explorer has the following characteristics.
 Azure Storage Explorer requires both management (Azure Resource Manager) and data
layer permissions to allow full access to your resources. You need Azure Active
Directory (Azure AD) permissions to access your storage account, the containers in your
account, and the data in the containers.
 Azure Storage Explorer lets you connect to different storage accounts.
 Connect to storage accounts associated with your Azure subscriptions.
 Connect to storage accounts and services that are shared from other Azure subscriptions.
 Connect to and manage local storage by using the Azure Storage Emulator.

Azure File Sync enables you to cache several Azure Files shares on an on-premises Windows
Server or cloud virtual machine. You can use Azure File Sync to centralize your organization's
file shares in Azure Files, while keeping the flexibility, performance, and compatibility of an
on-premises file server.
Cloud tiering
Cloud tiering is an optional feature of Azure File Sync. Frequently accessed files are cached locally on the
server while all other files are tiered to Azure Files based on policy settings.
When a file is tiered, Azure File Sync replaces the file locally with a pointer. A pointer is commonly
referred to as a reparse point. The reparse point represents a URL to the file in Azure Files.
When a user opens a tiered file, Azure File Sync seamlessly recalls the file data from Azure Files
without the user needing to know that the file is stored in Azure.
Tiered files have greyed icons with the offline (O) file attribute to let the user know when the file is
only in Azure.
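The tiering flow above can be modeled in a few lines: the local file data is replaced by a pointer (reparse point), and opening the file transparently recalls the data. A toy Python model (in-memory stand-ins for the local disk and the Azure file share, with a hypothetical URL; this is not how File Sync is actually implemented):

```python
local_disk = {}   # path -> file data, or a pointer dict after tiering
azure_files = {}  # URL -> file data (stand-in for the Azure file share)

def tier(path: str) -> None:
    """Move a file's data to the cloud, leaving a reparse-point-style pointer."""
    url = f"https://example.file.core.windows.net/{path}"  # hypothetical URL
    azure_files[url] = local_disk[path]
    local_disk[path] = {"reparse_point": url}  # pointer replaces the data

def open_file(path: str) -> bytes:
    """Open a file; seamlessly recall tiered data from the cloud."""
    entry = local_disk[path]
    if isinstance(entry, dict) and "reparse_point" in entry:
        return azure_files[entry["reparse_point"]]  # recall on access
    return entry

local_disk["report.docx"] = b"quarterly numbers"
tier("report.docx")                      # data now lives only in Azure
print(open_file("report.docx"))          # b'quarterly numbers' (recalled)
```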

CONFIGURING VIRTUAL MACHINES


Azure Virtual Machines enables you to create on-demand, scalable computing resources. Azure
Architects commonly use virtual machines to gain greater control over the computing
environment.
Rather than specify processing power, memory, and storage capacity independently, Azure
provides different virtual machine sizes that offer variations of these elements in different size
configurations. Azure provides a wide range of virtual machine size options that allow you to
select the appropriate mix of compute, memory, and storage for your needs.
All Azure virtual machines have at least two disks: an operating system disk and a temporary disk.
Virtual machines can also have one or more data disks. All disks are stored as virtual hard disks
(VHDs). A VHD is like a physical disk in an on-premises server but, virtualized.
The Azure portal supports options for connecting your Windows and Linux machines, and
making connections by using Azure Bastion. The following diagram shows how you can connect
Azure virtual machines with the SSH and RDP protocols, Cloud Shell, and Azure Bastion.
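The portal workflow for creating a VM maps onto a couple of CLI calls. A hedged sketch (the resource group, VM name, and image alias are illustrative, and the commands require an authenticated `az` session):

```shell
# Create a resource group, then a Linux VM with SSH keys generated on the fly.
az group create --name demoRG --location eastus
az vm create \
  --resource-group demoRG \
  --name demoVM \
  --image Ubuntu2204 \
  --admin-username azureuser \
  --generate-ssh-keys
```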

CONFIGURING VIRTUAL MACHINE AVAILABILITY


Managing virtual machines at scale can be challenging, especially when usage patterns vary and
demands on applications fluctuate. Azure Administrators need to be able to adjust their virtual
machine resources to match changing demands.
You can create a virtual machine and an availability set at the same time.
A virtual machine can only be added to an availability set when the virtual machine is created. To
change the availability set for a virtual machine, you need to delete and then recreate the virtual
machine.
You can build availability sets by using the Azure portal, Azure Resource Manager (ARM)
templates, scripting, or API tools.
Microsoft provides robust Service Level Agreements (SLAs) for Azure virtual machines and
availability sets.
Azure Virtual Machine Scale Sets are an Azure Compute resource that you can use to deploy and
manage a set of identical virtual machines. When you implement Virtual Machine Scale Sets and
configure all your virtual machines in the same way, you gain true autoscaling. Virtual Machine
Scale Sets automatically increases the number of your virtual machine instances as application
demand increases, and reduces the number of machine instances as demand decreases.
In the Azure portal, there are several settings to configure to create an Azure Virtual Machine
Scale Sets implementation.

 Orchestration mode: Choose how virtual machines are managed by the scale set. In
flexible orchestration mode, you manually create and add a virtual machine of any
configuration to the scale set. In uniform orchestration mode, you define a virtual
machine model and Azure will generate identical instances based on that model.
 Image: Choose the base operating system or application for the VM.
 VM Architecture: Azure provides a choice of x64 or Arm64-based virtual machines to
run your applications.
 Run with Azure Spot discount: Azure Spot offers unused Azure capacity at a
discounted rate versus pay as you go prices. Workloads should be tolerant to
infrastructure loss as Azure may recall capacity.
 Size: Select a VM size to support the workload that you want to run. The size that you
choose then determines factors such as processing power, memory, and storage
capacity. Azure offers a wide variety of sizes to support many types of uses. Azure
charges an hourly price based on the VM's size and operating system.
An Azure Virtual Machine Scale Sets implementation can automatically increase or decrease the
number of virtual machine instances that run your application. This process is known
as autoscaling. Autoscaling allows you to dynamically scale your configuration to meet changing
workload demands.
CONFIGURING AZURE APP SERVICE PLANS
Azure Administrators need to be able to scale a web application. Scaling enables an application to
remain responsive during periods of high demand. Scaling also helps to save money by reducing
the resources required when demand drops.
To implement and use an App Service plan, consider the following points.

 When you create an App Service plan in a region, a set of compute resources is created
for the plan in the specified region. Any applications that you place into the plan run on
the compute resources defined by the plan.
 Each App Service plan defines three settings:
 Region: The region for the App Service plan, such as West US, Central India,
North Europe, and so on.
 Number of VM instances: The number of virtual machine instances to allocate for the plan.
 Size of VM instances: The size of the virtual machine instances in the plan,
including Small, Medium, or Large.
 You can continue to add new applications to an existing plan as long as the plan
has enough resources to handle the increasing load.
How to use autoscale for your Azure App Service plan and applications:

 To use autoscale, you specify the minimum, and maximum number of instances to run
by using a set of rules and conditions.
 When your application runs under autoscale conditions, the number of virtual machine
instances are automatically adjusted based on your rules. When rule conditions are
met, one or more autoscale actions are triggered.
 An autoscale setting is read by the autoscale engine to determine whether to scale out or
in. Autoscale settings are grouped into profiles.
 Autoscale rules include a trigger and a scale action (in or out). The trigger can be
metric- based or time-based.
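The trigger-and-action model above can be sketched as a simple rule evaluator (the thresholds and bounds are illustrative; real autoscale profiles are evaluated by the Azure autoscale engine, not by your code):

```python
def autoscale_decision(cpu_percent: float, instances: int,
                       minimum: int = 2, maximum: int = 10) -> int:
    """Return the new instance count after applying two metric-based rules:
    scale out when CPU > 70%, scale in when CPU < 25% (illustrative
    thresholds). The result is clamped to the [minimum, maximum] bounds."""
    if cpu_percent > 70:
        instances += 1   # scale-out action triggered
    elif cpu_percent < 25:
        instances -= 1   # scale-in action triggered
    return max(minimum, min(maximum, instances))

print(autoscale_decision(85.0, 3))   # 4: high CPU triggers a scale-out
print(autoscale_decision(10.0, 2))   # 2: already at the minimum bound
print(autoscale_decision(50.0, 5))   # 5: neither rule fires
```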

CONFIGURING AZURE APP SERVICE


Azure App Service brings together everything you need to create websites, mobile backends,
and web APIs for any platform or device. Applications run and scale with ease in both Windows
and Linux-based environments.
App Service provides Quickstarts for several products to help you easily create and deploy
your Windows and Linux apps:

With Azure DevOps, you can also define your own build and release process. Compile your
source code, run tests, and build and deploy the release into your web app every time you commit
the code. All of these operations happen automatically, without any need for human administration.
Characteristics of deployment slots.
Deployment slots are live apps that have their own hostnames.
Deployment slots are available in the Standard, Premium, and Isolated App Service pricing tiers.
Your app needs to be running in one of these tiers to use deployment slots.
The Standard, Premium, and Isolated tiers offer different numbers of deployment slots.
App content and configuration elements can be swapped between two deployment slots,
including the production slot.

The Backup and Restore feature in Azure App Service lets you easily create backups manually or
on a schedule. You can configure the backups to be retained for a specific or indefinite amount of
time. You can restore your app or site to a snapshot of a previous state by overwriting the
existing content or restoring to another app or site.

Characteristics of Application Insights for Azure Monitor.


Application Insights works on various platforms including .NET, Node.js and Java EE.
The feature can be used for configurations that are hosted on-premises, in a hybrid environment,
or in any public cloud.
Application Insights integrates with your Azure DevOps process, and has connection points to
many development tools.
You can monitor and analyze data from mobile apps by integrating with Visual Studio App
Center.
CONFIGURE AZURE CONTAINER INSTANCES
Containers and virtual machines are both forms of virtualization, but there are some key
differences between them. To provide context, let's consider a scenario: you're an Azure
Administrator responsible for deploying and managing applications in a cloud environment.
Containers are becoming the preferred way to package, deploy, and manage cloud applications.
Azure Container Instances offers the fastest and simplest way to run a container in Azure,
without having to manage any virtual machines and without having to adopt a higher-level
service. Azure Container Instances is a great solution for any scenario that can operate in isolated
containers, including simple applications, task automation, and build jobs.

Docker Hub provides a large global repository of container images from developers, open source
projects, and independent software vendors. You can access Docker Hub to find and share
container images for your app and containers. Docker Hosts are machines that run Docker and
allow you to run your apps as containers.
CONFIGURING FILE AND FOLDER BACKUPS
Azure Backup is the Azure-based service you can use to back up (or protect) and restore your
data in the Microsoft cloud. Azure Backup replaces your existing on-premises or off-site backup
solution with a cloud-based solution that's reliable, secure, and cost-competitive
In the Azure portal, search for Backup Center and browse to the Backup Center dashboard:

Azure Backup uses the Microsoft Azure Recovery Services (MARS) agent to back up files,
folders, and system data from your on-premises machines and Azure virtual machines. The
MARS agent is a full-featured agent that offers many benefits for both backing up and restoring
your data.

You can use the MARS agent and Azure Backup to complete backups of your on-premises files and folders.
The following diagram shows the high-level steps to use the MARS agent for Azure Backup.
CONFIGURE VIRTUAL MACHINE BACKUPS
Azure Backup provides independent and isolated backups to guard against unintended destruction
of the data on your virtual machines. Administrators can implement Azure services to support
their backup requirements, including the Microsoft Azure Recovery Services (MARS) agent for
Azure Backup, the Microsoft Azure Backup Server (MABS), Azure managed disks snapshots,
and Azure Site Recovery.
Things to consider when creating images versus snapshots
It's important to understand the differences and benefits of creating an image and a snapshot
backup of an Azure managed disk.
 Consider images. With Azure managed disks, you can take an image of a generalized
virtual machine that's been deallocated. The image includes all of the disks attached to
the virtual machine. You can use the image to create a virtual machine that includes all
of the disks.
 Consider snapshots. A snapshot is a copy of a disk at the point in time the snapshot is
taken. The snapshot applies to one disk only, and doesn't have awareness of any disk
other than the one it contains. Snapshot backups are problematic for configurations that
require the coordination of multiple disks, such as striping. In this case, the snapshots
need to coordinate with each other, but this functionality isn't currently supported.
 Consider operating disk backups. If you have a virtual machine with only one disk (the
operating system disk), you can take a snapshot or an image of the disk. You can create
a virtual machine from either a snapshot or an image.

In the Azure portal, you can use an Azure Recovery Services vault to back up your Azure virtual machines:
A Recovery Services vault can be used to back up your on-premises virtual machines, such
as Hyper-V, VMware, System State, and Bare Metal Recovery:

CONFIGURING AZURE MONITOR


Azure Monitor is a comprehensive solution that collects, analyzes, and responds to telemetry data
from both on-premises and cloud environments.
Azure Monitor provides you with a comprehensive solution for collecting, analyzing, and
responding to telemetry data from your on-premises and cloud environments. The service features
help you understand how your applications are performing.
Azure components that support Azure Monitor capabilities. The following diagram provides
a high-level view of how Azure and Azure Monitor work together to provide you with a
robust monitoring and diagnostics solution.

Examine some details about working with activity logs in Azure Monitor.
 You can use the information in activity logs to understand the status of resource
operations and other relevant properties.
 Activity logs can help you determine the "what, who, and when" for any write
operation (PUT, POST, DELETE) performed on resources in your subscription.
 Activity logs are kept for 90 days.
 You can query for any range of dates in an activity log, as long as the starting date
isn't more than 90 days in the past.
 You can retrieve events from your activity logs by using the Azure portal, the Azure
CLI, PowerShell cmdlets, and the Azure Monitor REST API.

In the Azure portal, you can filter your Azure Monitor activity logs so you can view specific
information. The filters enable you to review only the activity log data that meets your criteria.
You might set filters to review monitoring data about critical events for your primary subscription
and production virtual machine during peak business hours.

IMPROVING INCIDENT RESPONSE WITH ALERTING ON AZURE


Microsoft Azure provides a robust alerting and monitoring solution called Azure Monitor. You
can use Azure Monitor to configure notifications and alerts for your key systems and
applications. These alerts ensure that the correct team knows when a problem arises.
Azure Monitor receives data from target resources like applications, operating systems, Azure
resources, Azure subscriptions, and Azure tenants. The nature of the resource defines which data
types are available. A data type can be a metric, a log, or both a metric and a log:
The focus for metric-based data types is the numerical time-sensitive values that represent some
aspect of the target resource.
The focus for log-based data types is the querying of content data held in structured, record-based
log files that are relevant to the target resource.
You'll learn about the three signal types that you can use to monitor your environment:
Metric alerts provide an alert trigger when a specified threshold is exceeded. For example, a
metric alert can notify you when CPU usage is greater than 95 percent.
Activity log alerts notify you when Azure resources change state. For example, an activity log
alert can notify you when a resource is deleted.
Log alerts use log queries to evaluate resource logs at a predefined frequency. For example, a log
alert can notify you when a certain number of error events are recorded within an hour.

CONFIGURING LOG ANALYTICS


Azure Monitor collects log data and stores it in tables. Administrators use Log Analytics in
the Azure portal to configure their input data sources and conduct queries for their Azure
Monitor logs.
Queries provide insights into system infrastructure, such as assessing system updates
and troubleshooting operational incidents. To retrieve and consolidate data in the
repository, administrators can create Kusto Query Language (KQL) queries.
When you capture logs and data in Azure Monitor, Azure stores the collected information in a
Log Analytics workspace. Your Log Analytics workspace is the basic management environment
for Azure Monitor Logs.

Log Analytics in Azure Monitor supports the Kusto Query Language (KQL). The KQL
syntax helps you quickly and easily create simple or complex queries to retrieve and
consolidate your monitoring data in the repository.
KQL queries use the dedicated table data for your monitored services and resources.
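For example, a short KQL query against the standard `Heartbeat` table (which tables are populated depends on the agents and data sources connected to your workspace):

```kusto
// Count heartbeats per computer over the last hour,
// sorted so the quietest machines surface first.
Heartbeat
| where TimeGenerated > ago(1h)
| summarize beats = count() by Computer
| order by beats asc
```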
CONFIGURING NETWORK WATCHER
Azure Network Watcher is a powerful tool that allows you to monitor, diagnose, view metrics, and
enable or disable logs for resources in an Azure virtual network.
Network Watcher is a regional service that enables you to monitor and diagnose conditions at a
network scenario level.

The configuration details and functionality of the IP flow verify feature in Azure Network
Watcher.
 You configure the IP flow verify feature with the following properties in the Azure portal:
 Virtual machine and network interface
 Local (source) port number
 Remote (destination) IP address, and remote port number
 Communication protocol (TCP or UDP)
 Traffic direction (Inbound or Outbound)
 The feature tests communication for a target virtual machine with associated network
security group (NSG) rules by running inbound and outbound packets to and from
the machine.
 After the test runs complete, the feature informs you whether communication with
the machine succeeds (allows access) or fails (denies access).
If the target machine denies the packet because of an NSG, the feature returns the name of the
controlling security rule.
Azure Network Watcher provides a network monitoring topology tool to help administrators
visualize and understand infrastructure. The following image shows an example topology
diagram for a virtual network in Network Watcher.

Characteristics of the network topology capability in Azure Network Watcher:


 The Network Watcher Topology tool generates a visual diagram of the resources in
a virtual network.
 The graphical display shows the resources in the network, their interconnections, and
their relationships with each other.
 You can view subnets, virtual machines, network interfaces, public IP addresses,
network security groups, route tables, and more.
 To generate a topology, you need an Azure Network Watcher instance in the same
region as the virtual network.
CHAPTER 3
AWS CLOUD PRACTITIONER

CLOUD CONCEPTS AND OVERVIEW


In computing, a client can be a web browser or desktop application that a person interacts
with to make requests to computer servers. A server can be services such as Amazon Elastic
Compute Cloud (Amazon EC2), a type of virtual server.

Deployment models for cloud computing


When selecting a cloud strategy, a company must consider factors such as required cloud
application components, preferred resource management tools, and any legacy IT
infrastructure requirements.

COMPUTE: AMAZON EC2
Amazon Elastic Compute Cloud (Amazon EC2) provides secure, resizable compute capacity
in the cloud as Amazon EC2 instances.
By comparison, with an Amazon EC2 instance you can use a virtual server to run
applications in the AWS Cloud.

 You can provision and launch an Amazon EC2 instance within minutes.
 You can stop using it when you have finished running a workload.
 You pay only for the compute time you use when an instance is running, not when
it is stopped or terminated.
EC2 runs on top of physical host machines managed by AWS using virtualization
technology. When you spin up an EC2 instance, you aren't necessarily taking an entire host to
yourself. Instead, you are sharing the host with multiple other instances, otherwise known as
virtual machines. A hypervisor running on the host machine is responsible for sharing
the underlying physical resources between the virtual machines.
This idea of sharing underlying hardware is called multi-tenancy. The hypervisor, which is
managed by AWS, coordinates this multi-tenancy and isolates the virtual machines from each
other as they share resources from the host. This means EC2 instances are secure: even though
they may be sharing resources, one EC2 instance is not aware of any other EC2 instances also on
that host. They are secure and separate from each other.
Amazon EC2 pricing
With Amazon EC2, you pay only for the compute time that you use. Amazon EC2 offers a
variety of pricing options for different use cases. For example, if your use case can withstand
interruptions, you can save with Spot Instances. You can also save by committing early and
locking in a minimum level of use with Reserved Instances.
On-Demand Instances are ideal for short-term, irregular workloads that cannot be interrupted. No
upfront costs or minimum contracts apply. The instances run continuously until you stop them,
and you pay for only the compute time you use.
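The pay-for-what-you-use model is simple arithmetic. A sketch with a hypothetical hourly rate (real On-Demand rates vary by instance type and Region):

```python
def on_demand_cost(hours_running: float, hourly_rate: float) -> float:
    """On-Demand billing: pay only for the hours the instance is running;
    stopped or terminated time incurs no compute charge."""
    return round(hours_running * hourly_rate, 2)

# Hypothetical: an instance at $0.10/hour, run 8 hours/day for 20 days.
print(on_demand_cost(8 * 20, 0.10))  # 16.0
```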
Scalability
Scalability involves beginning with only the resources you need and designing your architecture
to automatically respond to changing demand by scaling out or in. As a result, you pay for only
the resources you use. You don't have to worry about a lack of computing capacity to meet your
customers' needs.
Amazon EC2 Auto Scaling
If you’ve tried to access a website that wouldn’t load and frequently timed out, the website might
have received more requests than it was able to handle. This situation is similar to waiting in a
long line at a coffee shop, when there is only one barista present to take orders from customers.

Amazon EC2 Auto Scaling enables you to automatically add or remove Amazon EC2 instances
in response to changing application demand. By automatically scaling your instances in and out
as needed, you are able to maintain a greater sense of application availability.
Within Amazon EC2 Auto Scaling, you can use two approaches: dynamic scaling and predictive
scaling.
Dynamic scaling responds to changing demand.
Predictive scaling automatically schedules the right number of Amazon EC2 instances based on
predicted demand.
Elastic Load Balancing
Elastic Load Balancing (ELB) is the AWS service that automatically distributes incoming
application traffic across multiple resources, such as Amazon EC2 instances.
It is engineered to address the undifferentiated heavy lifting of load balancing. Elastic Load
Balancing is a Regional construct. The key value for you is that because it runs at the Region
level rather than on individual EC2 instances, the service is automatically highly available with
no additional effort on your part.
ELB is automatically scalable. As your traffic grows, ELB is designed to handle the additional
throughput with no change to the hourly cost. When your EC2 fleet auto-scales out, as each
instance comes online, the auto-scaling service lets the Elastic Load Balancing service know that
it's ready to handle the traffic. When the fleet scales in, ELB first stops all new traffic and waits
for the existing requests to complete, to drain out. Once they do, the auto-scaling engine can
terminate the instances without disruption to existing customers.
Because ELB is regional, it's a single URL that each front-end instance uses. The ELB then
directs traffic to the back end that has the least outstanding requests. If the back end scales, once
the new instance is ready, it simply tells the ELB that it can take traffic, and it gets to work. The
front end doesn't know and doesn't care how many back-end instances are running. This is true
decoupled architecture.
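The routing behavior described — send each new request to the back end with the fewest outstanding requests — can be sketched in a few lines (a toy model with made-up instance ids, not ELB's actual internals):

```python
def pick_backend(outstanding: dict) -> str:
    """Return the backend instance id with the fewest outstanding requests.

    `outstanding` maps instance id -> in-flight request count. A new
    instance registers by simply appearing in the dict ("it just tells
    the ELB that it can take traffic").
    """
    return min(outstanding, key=outstanding.get)

fleet = {"i-backend-1": 7, "i-backend-2": 2, "i-backend-3": 4}
target = pick_backend(fleet)
print(target)       # i-backend-2 (least outstanding requests)
fleet[target] += 1  # the chosen backend now carries one more request
```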

Low-demand period
Here’s an example of how Elastic Load Balancing works. Suppose that a few customers have
come to the coffee shop and are ready to place their orders.
If only a few registers are open, this matches the demand of customers who need service. The
coffee shop is less likely to have open registers with no customers. In this example, you can think
of the registers as Amazon EC2 instances.

High-demand period
Throughout the day, as the number of customers increases, the coffee shop opens more registers to
accommodate them. In the diagram, the Auto Scaling group represents this.
Additionally, a coffee shop employee directs customers to the most appropriate register so that
the number of requests can evenly distribute across the open registers. You can think of this
coffee shop employee as a load balancer.
In file storage, multiple clients (such as users, applications, servers, and so on) can access data
that is stored in shared file folders. In this approach, a storage server uses block storage with a
local file system to organize files. Clients access data through file paths.
Compared to block storage and object storage, file storage is ideal for use cases in which a large
number of services and resources need to access the same data at the same time.
Amazon Elastic File System (Amazon EFS) is a scalable file system used with AWS Cloud
services and on-premises resources. As you add and remove files, Amazon EFS grows and
shrinks automatically. It can scale on demand to petabytes without disrupting applications.
EFS allows you to have multiple instances accessing the data in EFS at the same time. It scales up
and down as needed without you needing to do anything to make that scaling happen.
Relational databases
In a relational database, data is stored in a way that relates it to other pieces of data.
Relational databases use structured query language (SQL) to store and query data. This approach
allows data to be stored in an easily understandable, consistent, and scalable way.

Amazon Relational Database Service

Amazon Relational Database Service (Amazon RDS) is a service that enables you to run
relational databases in the AWS Cloud.
Amazon RDS is a managed service that automates tasks such as hardware provisioning, database
setup, patching, and backups. With these capabilities, you can spend less time completing
administrative tasks and more time using data to innovate your applications. You can integrate
Amazon RDS with other services to fulfil your business and operational needs, such as using AWS
Lambda to query your database from a serverless application.
Amazon RDS provides a number of different security options. Many Amazon RDS database
engines offer encryption at rest (protecting data while it is stored) and encryption in transit
(protecting data while it is being sent and received).
Amazon RDS database engines
Amazon RDS is available on six database engines, which optimize for memory, performance, or
input/output (I/O). Supported database engines include:
 Amazon Aurora
 PostgreSQL
 MySQL
 MariaDB
 Oracle Database
 Microsoft SQL Server
The benefits of Amazon RDS include automated patching, backups, redundancy, failover, and
disaster recovery, all of which you normally have to manage for yourself. This makes it an
extremely attractive option for AWS customers, as it allows you to focus on business problems
rather than maintaining databases.
Amazon Aurora
Amazon Aurora is an enterprise-class relational database. It is compatible with MySQL and PostgreSQL relational databases. It is up to five times faster than standard MySQL databases and up to three times faster than standard PostgreSQL databases.
Amazon Aurora is AWS's most fully managed relational database option. It comes in two forms, MySQL-compatible and PostgreSQL-compatible, and is priced at about one-tenth the cost of commercial-grade databases, making it a very cost-effective option.
Amazon Aurora helps to reduce your database costs by reducing unnecessary input/output (I/O)
operations, while ensuring that your database resources remain reliable and available.

Nonrelational databases
In a nonrelational database, you create tables. A table is a place where you can store and query
data.
Relational databases work well for many use cases and have historically been the standard type of database. However, these rigid SQL databases can have performance and scaling issues when under stress. The rigid schema also means you cannot have any variation in the types of data that you store in a table, so it might not be the best fit for a dataset that is less rigid and is being accessed at a very high rate. This is where non-relational, or NoSQL, databases come in.
Nonrelational databases are sometimes referred to as “NoSQL databases” because they use structures other than rows and columns to organize data. Non-relational databases tend to have simple, flexible schemas rather than the complex, rigid schemas of multiple interrelated tables.
One type of structural approach for nonrelational databases is key-value pairs. With key-value
pairs, data is organized into items (keys), and items have attributes (values). You can think
of attributes as being different features of your data.
In a key-value database, you can add or remove attributes from items in the table at any time.
Additionally, not every item in the table has to have the same attributes.
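The key-value structure described above can be sketched with plain Python dictionaries standing in for a table; the keys and attributes here are hypothetical, not a real DynamoDB API:

```python
# A key-value table modeled as a dict: key -> item (a dict of attributes).
# Items need not share the same attributes, unlike rows in a rigid SQL schema.
table = {}

def put_item(key, **attributes):
    table[key] = attributes

put_item("customer#1", name="Akshita", drink="latte")
put_item("customer#2", name="Ravi", drink="espresso", loyalty_points=42)  # extra attribute

# Attributes can be added to or removed from any item at any time.
table["customer#1"]["favourite_store"] = "Chennai"
del table["customer#2"]["loyalty_points"]

print(table["customer#1"])
# {'name': 'Akshita', 'drink': 'latte', 'favourite_store': 'Chennai'}
```

Notice that no schema had to be declared up front, and the two items ended up with different attribute sets, which is exactly the flexibility the text describes.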
Amazon DynamoDB
Amazon DynamoDB is a key-value database service. It delivers single-digit millisecond
performance at any scale. It is a non-relational, NoSQL database.
With DynamoDB, you create tables. A DynamoDB table is just a place where you can store and
query data. Data is organized into items, and items have attributes. Attributes are just different
features of your data. If you have one item in your table, or 2 million items in your table,
DynamoDB manages the underlying storage for you.
DynamoDB is serverless, which means that you do not have to provision, patch, or manage
servers. You also do not have to install, maintain, or operate software.
DynamoDB, beyond being massively scalable, is also highly performant. DynamoDB has a
millisecond response time. And when you have applications with potentially millions of users,
having scalability and reliable lightning-fast response times is important. As the size of your
database shrinks or grows, DynamoDB automatically scales to adjust for changes in capacity
while maintaining consistent performance. This makes it a suitable choice for use cases that
require high performance while scaling.
DynamoDB stores this data redundantly across Availability Zones and mirrors the data across multiple drives under the hood for you. This makes the burden of operating a highly available database much lower.
Amazon Redshift
Amazon Redshift is a data warehousing service that you can use for big data analytics. It offers the ability to collect data from many sources and helps you to understand relationships and trends across your data. This is data warehousing as a service, and it is massively scalable: Redshift deployments of multiple petabytes are common. In fact, in cooperation with Amazon Redshift Spectrum, you can run a single SQL query directly against exabytes of unstructured data in data lakes.

AWS Database Migration Service (AWS DMS)


AWS Database Migration Service (AWS DMS) enables you to migrate relational databases,
nonrelational databases, and other types of data stores.
DMS helps customers migrate existing databases onto AWS in a secure and easy fashion. With
AWS DMS, you move data between a source database and a target database. The source and target databases can be of the same type (homogeneous migration) or different types (heterogeneous migration). During the migration, your source database remains operational, reducing downtime for any applications that rely on the database.
For heterogeneous migrations, it's a two-step process. Since the schema structures, data types, and database code differ between source and target, we first need to convert them using the AWS Schema Conversion Tool. This converts the source schema and code to match those of the target database. The next step is then to use DMS to migrate data from the source database to the target database.
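The two-step heterogeneous flow can be sketched as follows; the type mapping is a tiny invented subset for illustration only, not the actual output of the AWS Schema Conversion Tool:

```python
# Step 1: convert the source schema (here, a few SQL Server-style types) to
# target (PostgreSQL-style) types -- an invented subset of what the AWS
# Schema Conversion Tool handles.
TYPE_MAP = {"NVARCHAR": "VARCHAR", "DATETIME": "TIMESTAMP", "BIT": "BOOLEAN"}

def convert_schema(columns):
    return {name: TYPE_MAP.get(sql_type, sql_type)
            for name, sql_type in columns.items()}

source_schema = {"name": "NVARCHAR", "created": "DATETIME", "active": "BIT"}
target_schema = convert_schema(source_schema)

# Step 2: migrate the rows (here, a simple copy; DMS streams them while the
# source database stays online).
source_rows = [{"name": "latte", "created": "2024-01-01", "active": 1}]
target_rows = [dict(row) for row in source_rows]

print(target_schema)
# {'name': 'VARCHAR', 'created': 'TIMESTAMP', 'active': 'BOOLEAN'}
```

The point of the sketch is the ordering: the schema must exist in the target's dialect before any data can land in it.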
Other use cases for AWS DMS
Development and test database migrations: enabling developers to test applications against production data without affecting production users. In this case, you use DMS to migrate a copy of your production database to your development or test environments, either as a one-off or continuously.
Database consolidation: combining several databases into a single central database.
Continuous replication: sending ongoing copies of your data to other targets instead of doing a one-time migration. This could be for disaster recovery or because of geographic separation.
Additional database services
Amazon DocumentDB is a document database service that supports MongoDB workloads.
(MongoDB is a document database program). This is great for content management, catalogs, and
user profiles.
Amazon Neptune is a graph database service. You can use Amazon Neptune to build and run applications that work with highly connected datasets, such as social networking and recommendation engines, fraud detection, and knowledge graphs.
Amazon Quantum Ledger Database (Amazon QLDB) is an immutable ledger database service.
You can use Amazon QLDB to review a complete history of all the changes that have been
made to your application data.
Amazon Managed Blockchain is a service that you can use to create and manage blockchain networks with open-source frameworks. Blockchain is a distributed ledger system that lets multiple parties run transactions and share data without a central authority.
Amazon ElastiCache is a service that adds caching layers on top of your databases to help improve the read times of common requests. It supports two types of data stores: Redis and Memcached.
Amazon DynamoDB Accelerator (DAX) is an in-memory cache for DynamoDB. It helps to
dramatically improve response (read) times for your nonrelational data from single-digit
milliseconds to microseconds.
SECURITY
The AWS shared responsibility model
Throughout this course, you have learned about a variety of resources that you can create in the
AWS Cloud. These resources include Amazon EC2 instances, Amazon S3 buckets, and Amazon
RDS databases.
The shared responsibility model divides into customer responsibilities (commonly referred to as
“security in the cloud”) and AWS responsibilities (commonly referred to as “security of the
cloud”).

AWS is responsible for security of the cloud.


AWS operates, manages, and controls the components at all layers of infrastructure. This
includes areas such as the host operating system, the virtualization layer, and even the physical
security of the data centres from which services operate.
AWS is responsible for protecting the global infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure includes AWS Regions, Availability Zones, and edge locations.
AWS manages the security of the cloud, specifically the physical infrastructure that hosts your resources, which includes:
- Physical security of data centres
- Hardware and software infrastructure
- Network infrastructure
- Virtualization infrastructure
Although you cannot visit AWS data centres to see this protection first-hand, AWS
provides several reports from third-party auditors. These auditors have verified its
compliance with a variety of computer security standards and regulations.
AWS Identity and Access Management (IAM)
AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely.
IAM gives you the flexibility to configure access based on your company’s specific operational and security needs. You do this by using a combination of IAM features, which are explored in detail in this lesson:
- IAM users, groups, and roles
- IAM policies
- Multi-factor authentication
AWS account root user
The root user is accessed by signing in with the email address and password that you used to create your AWS account. You can think of the root user as being similar to the owner of the coffee shop. It has complete access to all AWS services and resources in the account.

IAM users
An IAM user is an identity that you create in AWS. It represents the person or application that
interacts with AWS services and resources. It consists of a name and credentials.
IAM policies
An IAM policy is a JSON document that allows or denies permissions to AWS services
and resources.
IAM policies enable you to customize users’ levels of access to resources. For example, you
can allow users to access all of the Amazon S3 buckets within your AWS account, or only a
specific bucket.
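As an illustration of the JSON document format, here is a minimal policy that allows read access to a single S3 bucket; the bucket name is hypothetical:

```python
import json

# Minimal IAM policy allowing read access to one (hypothetical) S3 bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-coffee-shop-bucket",
                "arn:aws:s3:::example-coffee-shop-bucket/*",
            ],
        }
    ],
}

document = json.dumps(policy, indent=2)
print(document)
```

Swapping "Allow" for "Deny", or narrowing the Resource list, is how the policy customizes a user's level of access as described above.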
IAM groups
One way to make it easier to manage your users and their permissions is to organize them into IAM groups. An IAM group is a collection of IAM users. When you assign an IAM policy to a group, all users in the group are granted the permissions specified by the policy.

Assigning IAM policies at the group level also makes it easier to adjust permissions when an
employee transfers to a different job. For example, if a cashier becomes an inventory
specialist, the coffee shop owner removes them from the “Cashiers” IAM group and adds them
into the “Inventory Specialists” IAM group. This ensures that employees have only the
permissions that are required for their current role.
What if a coffee shop employee hasn’t switched jobs permanently, but instead, rotates to different
workstations throughout the day? This employee can get the access they need through IAM roles.

IAM roles
An IAM role is an identity that you can assume to gain temporary access to permissions.
Roles have associated permissions that allow or deny specific actions. And these roles can be
assumed for temporary amounts of time. It is similar to a user, but has no username and
password. Instead, it is an identity that you can assume to gain access to temporary permissions.
You use roles to temporarily grant access to AWS resources, to users, external identities,
applications, and even other AWS services. When an identity assumes a role, it abandons all of
the previous permissions that it has and it assumes the permissions of that role.
Best practice:

IAM roles are ideal for situations in which access to services or resources needs to be
granted temporarily, instead of long-term.

Multi-factor authentication
Have you ever signed in to a website that required you to provide multiple pieces of information
to verify your identity? You might have needed to provide your password and then a second form
of authentication, such as a random code sent to your phone. This is an example of multi-factor
authentication. In IAM, multi-factor authentication (MFA) provides an extra layer of security for your AWS account.
How multi-factor authentication works
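Conceptually, one common second factor works like a time-based one-time password (TOTP): both the user's device and the verifier derive the same short-lived code from a shared secret. The sketch below is a simplified RFC 6238-style calculation for illustration only, not production authentication code:

```python
import hmac, struct, hashlib

def totp(secret: bytes, time_step: int, digits: int = 6) -> str:
    """Simplified RFC 6238-style one-time code: HMAC-SHA1 over the time step."""
    msg = struct.pack(">Q", time_step)            # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"shared-secret"
step = 123456  # in practice: int(time.time() // 30)
code_device = totp(secret, step)   # generated on the user's phone
code_server = totp(secret, step)   # recomputed by the verifier
print(code_device == code_server)  # True: both sides derive the same code
```

Because the code depends on the current time step, a stolen password alone is not enough to sign in, which is the "extra layer" MFA adds.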
Denial-of-service attacks
Customers can call the coffee shop to place their orders. After answering each call, a cashier
takes the order and gives it to the barista.
However, suppose that a prankster is calling in multiple times to place orders but is never picking
up their drinks. This causes the cashier to be unavailable to take other customers’ calls. The
coffee shop can attempt to stop the false requests by blocking the phone number that the
prankster is using.
In this scenario, the prankster’s actions are similar to a denial-of-service attack.
A denial-of-service (DoS) attack is a deliberate attempt to make a website or application
unavailable to users.
Distributed denial-of-service attacks
A single machine attacking your application has no hope of providing enough of an attack by
itself, so the distributed part is that the attack leverages other machines around the internet to
unknowingly attack your infrastructure. Now, suppose that the prankster has enlisted the help of friends.
The prankster and their friends repeatedly call the coffee shop with requests to place orders, even though they do not intend to pick them up. These requests are coming in from different phone numbers, and it’s impossible for the coffee shop to block them all.

Additionally, the influx of calls has made it increasingly difficult for customers to get their calls through. This is similar to a distributed denial-of-service attack.
In a distributed denial-of-service (DDoS) attack, multiple sources are used to start an attack that
aims to make a website or application unavailable. This can come from a group of attackers, or
even a single attacker. The single attacker can use multiple infected computers (also known as
“bots”) to send excessive traffic to a website or application.
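A toy per-source rate limiter illustrates why a DoS from a single source is easy to throttle while a DDoS is not; the limit and the source names below are invented:

```python
from collections import Counter

LIMIT = 3  # max requests allowed per source in a window (illustrative)

def filter_requests(requests):
    """Drop requests from any source exceeding LIMIT, like blocking one phone number."""
    seen = Counter()
    allowed = []
    for source in requests:
        seen[source] += 1
        if seen[source] <= LIMIT:
            allowed.append(source)
    return allowed

# One noisy source (a DoS) is easy to throttle...
print(len(filter_requests(["prankster"] * 10 + ["customer"])))  # 4

# ...but a DDoS spreads the same load over many sources,
# so each source stays under the per-source limit.
print(len(filter_requests([f"bot{i}" for i in range(10)] + ["customer"])))  # 11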

AWS Shield
AWS Shield is a service that protects applications against DDoS attacks. AWS Shield provides two levels of protection: Standard and Advanced.
AWS Shield Standard automatically protects all AWS customers at no cost. It protects your AWS resources from the most common, frequently occurring types of DDoS attacks.
As network traffic comes into your applications, AWS Shield Standard uses a variety of analysis techniques (e.g. security groups) to detect malicious traffic in real time and automatically mitigates it.
AWS Shield Advanced is a paid service that provides detailed attack diagnostics and the ability to detect and mitigate sophisticated DDoS attacks.
It also integrates with other services such as Amazon CloudFront, Amazon Route 53, and Elastic
Load Balancing. Additionally, you can integrate AWS Shield with AWS WAF by writing
custom rules to mitigate complex DDoS attacks.
AWS Key Management Service (AWS KMS)
The coffee shop has many items, such as coffee machines, pastries, money in the cash registers,
and so on. You can think of these items as data. The coffee shop owners want to ensure that all of
these items are secure, whether they’re sitting in the storage room or being transported between
shop locations.
In the same way, you must ensure that your applications’ data is secure while in storage
(encryption at rest) and while it is transmitted, known as encryption in transit.
Encryption is the securing of a message or data in a way that only authorized parties can access it.
AWS Key Management Service (AWS KMS) enables you to perform encryption operations
through the use of cryptographic keys. A cryptographic key is a random string of digits used for locking (encrypting) and unlocking (decrypting) data. You can use AWS KMS to create, manage,
and use cryptographic keys. You can also control the use of keys across a wide range of services
and in your applications.
With AWS KMS, you can choose the specific levels of access control that you need for your keys. For example, you can specify which IAM users and roles are able to manage keys.
Alternatively, you can temporarily disable keys so that they are no longer in use by anyone. Your
keys never leave AWS KMS, and you are always in control of them.

AWS WAF
AWS WAF is a web application firewall that lets you monitor network requests that come into your web applications.
AWS WAF works together with Amazon CloudFront and an Application Load Balancer. Recall the network access control lists that you learned about in an earlier module. AWS WAF works in a similar way to block or allow traffic. However, it does this by using a web access control list (ACL) to protect your AWS resources.
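A web ACL can be pictured as an ordered list of rules with a default action. The sketch below is a toy model of that idea, not the real AWS WAF API; the rule names are made up and the IP addresses come from the reserved documentation ranges:

```python
# A toy web ACL: ordered rules, first match wins, plus a default action.
rules = [
    {"name": "block-bad-ip",
     "matches": lambda req: req["ip"] == "203.0.113.7",
     "action": "BLOCK"},
    {"name": "allow-admin-path",
     "matches": lambda req: req["path"].startswith("/admin"),
     "action": "ALLOW"},
]

def evaluate(request, default_action="ALLOW"):
    for rule in rules:                      # evaluate rules in order
        if rule["matches"](request):
            return rule["action"]
    return default_action                   # no rule matched

print(evaluate({"ip": "203.0.113.7", "path": "/"}))       # BLOCK
print(evaluate({"ip": "198.51.100.2", "path": "/menu"}))  # ALLOW
```

The ordering matters: a request is handled by the first rule it matches, so a broad block rule placed first can override a narrower allow rule below it.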
Amazon Inspector
Suppose that the developers at the coffee shop are developing and testing a new ordering
application. They want to make sure that they are designing the application in accordance with
security best practices. However, they have several other applications to develop, so they cannot
spend much time conducting manual assessments. To perform automated security assessments,
they decide to use Amazon Inspector.
Amazon Inspector helps to improve the security and compliance of applications by running
automated security assessments. It checks applications for security vulnerabilities and deviations
from security best practices, such as open access to Amazon EC2 instances and installations of vulnerable software versions.
Amazon GuardDuty
Amazon GuardDuty is a service that provides intelligent threat detection for your AWS
infrastructure and resources. It identifies threats by continuously monitoring the network activity
and account behaviour within your AWS environment.
After you have enabled GuardDuty for your AWS account, GuardDuty begins monitoring your
network and account activity. You do not have to deploy or manage any additional security
software. GuardDuty then continuously analyzes data from multiple AWS sources, including
VPC Flow Logs and DNS logs.
It uses integrated threat intelligence such as known malicious IP addresses, anomaly detection, and machine learning to identify threats more accurately. The best part is that it runs independently from your other AWS services, so it won't affect the performance or availability of your existing infrastructure and workloads.
MONITORING AND ANALYTICS
Amazon CloudWatch
Amazon CloudWatch is a web service that enables you to monitor and manage various metrics
and configure alarm actions based on data from those metrics.
CloudWatch uses metrics to represent the data points for your resources. AWS services send
metrics to CloudWatch. CloudWatch then uses these metrics to create graphs automatically that
show how performance has changed over time.
CloudWatch alarms
With CloudWatch, you can create alarms that automatically perform actions if the value of your
metric has gone above or below a predefined threshold.
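The alarm concept can be sketched as a simple threshold check over recent metric datapoints; the metric values, threshold, and evaluation-period count below are illustrative, not the real CloudWatch service:

```python
# Toy alarm: fire when the average of the most recent metric datapoints
# crosses a threshold -- a sketch of the CloudWatch alarm concept.
def alarm_state(datapoints, threshold, periods=3):
    recent = datapoints[-periods:]            # evaluate the last N periods
    avg = sum(recent) / len(recent)
    return "ALARM" if avg > threshold else "OK"

cpu_utilization = [35, 40, 38, 90, 95, 92]    # percent, one datapoint per period
print(alarm_state(cpu_utilization, threshold=80))  # ALARM
print(alarm_state([35, 40, 38], threshold=80))     # OK
```

In the real service, entering the ALARM state can trigger an action such as a notification or an Auto Scaling event.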

CloudWatch dashboard

The CloudWatch dashboard feature enables you to access all the metrics for your resources from a single location. This enables you to collect metrics and logs from all your AWS resources, applications, and services that run on AWS and on-premises servers, helping you break down silos so that you can easily gain system-wide visibility. For example, you can use a CloudWatch dashboard to monitor the CPU utilization of an Amazon EC2 instance, the total number of requests made to an Amazon S3 bucket, and more. You can even customize separate dashboards for different business purposes, applications, or resources.
You can get visibility across your applications, infrastructure, and services, which means you gain insights across your distributed stack so you can correlate and visualize metrics and logs to quickly pinpoint and resolve issues. This in turn means you can reduce mean time to resolution (MTTR) and improve total cost of ownership (TCO). So in our coffee shop, if the MTTR for cleaning the coffee machines is shorter, then we can save on their TCO. This means freeing up important resources, like developers, to focus on adding business value.

AWS CloudTrail
AWS CloudTrail is a comprehensive API auditing tool that records API calls for your account. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, and more. You can think of CloudTrail as a “trail” of breadcrumbs (or a log of actions) that someone has left behind them.

Recall that you can use API calls to provision, manage, and configure your AWS resources. With CloudTrail, you can view a complete history of user activity and API calls for your applications and resources.
CloudTrail can save those logs indefinitely in secure S3 buckets. In addition, with tamper-proof methods like Vault Lock, you can then show absolute provenance of all of these critical security audit logs.
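A CloudTrail-style log can be pictured as a list of event records carrying the caller identity, time, source IP, and action. The records below are hypothetical, and the filter shows the kind of auditing question such a trail answers:

```python
# Hypothetical CloudTrail-style event records; the fields mirror what the
# text describes (caller identity, time, source IP, action).
events = [
    {"user": "akshita", "time": "2024-04-01T10:00:00Z",
     "ip": "198.51.100.2", "action": "ec2:RunInstances"},
    {"user": "root",    "time": "2024-04-01T11:30:00Z",
     "ip": "203.0.113.7", "action": "iam:CreateUser"},
    {"user": "akshita", "time": "2024-04-02T09:15:00Z",
     "ip": "198.51.100.2", "action": "s3:PutObject"},
]

# Auditing question: which API calls did the root user make, and from where?
root_activity = [(e["time"], e["ip"], e["action"])
                 for e in events if e["user"] == "root"]
print(root_activity)
# [('2024-04-01T11:30:00Z', '203.0.113.7', 'iam:CreateUser')]
```

Unexpected root-user activity, as surfaced here, is exactly the kind of anomaly CloudTrail Insights is designed to flag automatically.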

CloudTrail Insights
Within CloudTrail, you can also enable CloudTrail Insights. This optional feature allows CloudTrail to automatically detect unusual API activities in your AWS account.
For example, CloudTrail Insights might detect that a higher number of Amazon EC2 instances
than usual have recently launched in your account. You can then review the full event details
to determine which actions you need to take next.

AWS Trusted Advisor


AWS Trusted Advisor is an automated web service that inspects your AWS environment and provides real-time recommendations in accordance with AWS best practices.
Trusted Advisor compares its findings to AWS best practices in five categories: cost optimization, performance, security, fault tolerance, and service limits. For the checks in each category, Trusted Advisor offers a list of recommended actions and additional resources to learn more about AWS best practices.
The guidance provided by AWS Trusted Advisor can benefit your company at all stages of deployment. For example, you can use AWS Trusted Advisor to assist you while you are creating new workflows and developing new applications, or while you are making ongoing improvements to existing applications and resources.

AWS Trusted Advisor dashboard


When you access the Trusted Advisor dashboard on the AWS Management Console, you can
review completed checks for cost optimization, performance, security, fault tolerance, and service
limits.
For each category:
- The green check indicates the number of items for which it detected no problems.
- The orange triangle represents the number of recommended investigations.
- The red circle represents the number of recommended actions.

Pricing and Support

AWS Free Tier


The AWS Free Tier enables you to begin using certain services without having to worry about incurring costs for the specified period.
Three types of offers are available:
- Always Free
- 12 Months Free
- Trials
For each free tier offer, make sure to review the specific details about exactly which resource
types are included.
Always Free
These offers do not expire and are available to all AWS customers.
For example, AWS Lambda allows 1 million free requests and up to 3.2 million seconds of compute time per month. Amazon DynamoDB allows 25 GB of free storage per month.
12 Months Free
These offers are free for 12 months following your initial sign-up date to AWS.
Examples include specific amounts of Amazon S3 Standard Storage, thresholds for monthly
hours of Amazon EC2 compute time, and amounts of Amazon CloudFront data transfer out.
Trials
Short-term free trial offers start from the date you activate a particular service. The length of each
trial might vary by number of days or the amount of usage in the service. For example, Amazon
Inspector offers a 90-day free trial. Amazon Lightsail (a service that enables you to run virtual
private servers) offers 750 free hours of usage over a 30-day period.
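The arithmetic of a free-tier allowance can be sketched as follows; the 1 million free Lambda requests figure comes from the text above, while the per-million-requests price is illustrative only:

```python
FREE_REQUESTS = 1_000_000   # AWS Lambda Always Free allowance (from the text)
PRICE_PER_MILLION = 0.20    # illustrative price per million requests beyond the free tier

def monthly_lambda_request_cost(requests):
    # Only usage beyond the free allowance is billable.
    billable = max(0, requests - FREE_REQUESTS)
    return billable / 1_000_000 * PRICE_PER_MILLION

print(monthly_lambda_request_cost(800_000))    # fully inside the free tier
print(monthly_lambda_request_cost(3_000_000))  # only 2M requests are billable
```

The same "subtract the allowance, then price the remainder" pattern applies to the other free-tier offers, such as DynamoDB storage or EC2 compute hours.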

AWS Pricing Calculator


The AWS Pricing Calculator lets you explore AWS services and create an estimate for the cost of
your use cases on AWS. You can organize your AWS estimates by groups that you define. A group can reflect how your company is organized, such as providing estimates by cost centre.
When you have created an estimate, you can save it and generate a link to share it with others.

Module 9 – Migration and Innovation

Six core perspectives of the AWS Cloud Adoption Framework


Migrating to the cloud is a process. You don't just snap your fingers and have everything magically hosted in AWS. It takes a lot of effort to get applications migrated to AWS, and having a successful cloud migration requires expertise.

At the highest level, the AWS Cloud Adoption Framework (AWS CAF) organizes guidance into
six areas of focus, called Perspectives. Each Perspective addresses distinct responsibilities. The
planning process helps the right people across the organization prepare for the changes ahead.
People Perspective
The People Perspective supports development of an organization-wide change management
strategy for successful cloud adoption.

Use the People Perspective to evaluate organizational structures and roles, new skill and process
requirements, and identify gaps. This helps prioritize training, staffing, and organizational
changes.
Common roles in the People Perspective include:
- Human resources
- Staffing
- People managers

Governance Perspective
The Governance Perspective focuses on the skills and processes to align IT strategy with business strategy. This ensures that you maximize the business value and minimize risks.
Use the Governance Perspective to understand how to update the staff skills and processes
necessary to ensure business governance in the cloud. Manage and measure cloud investments to
evaluate business outcomes.

Common roles in the Governance Perspective include:
- Chief Information Officer (CIO)
- Program managers
- Enterprise architects
- Business analysts
- Portfolio managers

Platform Perspective
The Platform Perspective includes principles and patterns for implementing new solutions on the cloud and migrating on-premises workloads to the cloud.

Use a variety of architectural models to understand and communicate the structure of IT systems
and their relationships. Describe the architecture of the target state environment in detail.

Common roles in the Platform Perspective include:
- Chief Technology Officer (CTO)
- IT managers
- Solutions architects

Security Perspective
The Security Perspective ensures that the organization meets security objectives for visibility,
auditability, control, and agility.
Use the AWS CAF to structure the selection and implementation of security controls and permissions that meet the organization’s needs.
Common roles in the Security Perspective include:
- Chief Information Security Officer (CISO)
- IT security managers
- IT security analysts

Operations Perspective
The Operations Perspective helps you to enable, run, use, operate, and recover IT workloads to the level agreed upon with your business stakeholders.

Define how day-to-day, quarter-to-quarter, and year-to-year business is conducted. Align with
and support the operations of the business. The AWS CAF helps these stakeholders define current
operating procedures and identify the process changes and training needed to implement
successful cloud adoption.
Common roles in the Operations Perspective include:
- IT operations managers
- IT support managers
3.10 – The Cloud Journey

The AWS Well-Architected Framework


The AWS Well-Architected Framework helps you understand how to design and operate reliable, secure, efficient, and cost-effective systems in the AWS Cloud. It provides a way for you to consistently measure your architecture against best practices and design principles and identify areas for improvement.

The Well-Architected Framework is designed to enable architects, developers, and users of AWS to build secure, high-performing, resilient, and efficient infrastructure for their applications. It is based on five pillars:
- Operational excellence
- Security
- Reliability
- Performance efficiency
- Cost optimization
Operational excellence is the ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures.
Design principles for operational excellence in the cloud include performing operations as code,
annotating documentation, anticipating failure, and frequently making small, reversible changes.
The Security pillar is the ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies.
When considering the security of your architecture, apply these best practices:
- Automate security best practices when possible.
- Apply security at all layers.
- Protect data in transit and at rest.
Reliability is the ability of a system to do the following:
- Recover from infrastructure or service disruptions (e.g. an Amazon DynamoDB disruption or an EC2 node failure)
- Dynamically acquire computing resources to meet demand
- Mitigate disruptions such as misconfigurations or transient network issues
Reliability includes testing recovery procedures, scaling horizontally to increase aggregate system availability, and automatically recovering from failure.

Performance efficiency is the ability to use computing resources efficiently to meet system requirements and to maintain that efficiency as demand changes and technologies evolve.
Evaluating the performance efficiency of your architecture includes experimenting more often, using serverless architectures, and designing systems to be able to go global in minutes.

Cost optimization is the ability to run systems to deliver business value at the lowest price point. Cost optimization includes adopting a consumption model, analysing and attributing expenditure, and using managed services to reduce the cost of ownership.
In the past, you would need to evaluate these pillars against your AWS infrastructure with the help of a Solutions Architect. You can, and are still encouraged to, do that, but AWS listened to customer feedback and released the Framework as a self-service tool: the AWS Well-Architected Tool. You can access it through the AWS Management Console, create a workload, and run it against your AWS account.
Advantages of cloud computing
Operating in the AWS Cloud offers many benefits over computing in on-premises or hybrid
environments.
In this section, you will learn about six advantages of cloud computing:
- Trade upfront expense for variable expense.
- Benefit from massive economies of scale.
- Stop guessing capacity.
- Increase speed and agility.
- Stop spending money running and maintaining data centres.
- Go global in minutes.
Trade upfront expense for variable expense.
Upfront expenses include data centres, physical servers, and other resources that you would need
to invest in before using computing resources. These on-premises data centre costs include
things like physical space, hardware, staff for racking and stacking, and overhead for running the
data centre.

Instead of investing heavily in data centres and servers before you know how you’re going to use
them, you can pay only when you consume computing resources.

Benefit from massive economies of scale.


By using cloud computing, you can achieve a lower variable cost than you can get on your own. Because usage from hundreds of thousands of customers aggregates in the cloud, providers such as AWS can achieve higher economies of scale. Economies of scale translate into lower pay-as-you-go prices.

AWS is also an expert at building efficient data centres. We can buy the hardware at a lower price
because of the massive volume, and then we install it and run it efficiently. Because of these
factors, you can achieve a lower variable cost than you could running a data centre on your own.

Stop guessing capacity.


With cloud computing, you don’t have to predict how much infrastructure capacity you will need
before deploying an application. You provision the resources you need for the now, and you
scale up and down accordingly.

For example, you can launch Amazon Elastic Compute Cloud (Amazon EC2) instances when
needed and pay only for the compute time you use. Instead of paying for resources that are
unused or dealing with limited capacity, you can access only the capacity that you need, and scale
in or out in response to demand.
Increase speed and agility.
The flexibility of cloud computing makes it easier for you to develop and deploy
applications. This flexibility also provides your development teams with more time to
experiment and innovate.
With AWS, it's easy to try new things. You can spin up test environments and run
experiments with new approaches to a problem. If an approach
doesn't work, you can simply delete those resources and stop incurring cost. Traditional data
centres don't offer the same flexibility.

Stop spending money running and maintaining data centres.


Computing in on-premises data centres often requires you to spend significant money and time
managing infrastructure and servers. A benefit of cloud computing is the ability to focus
less on these tasks and more on your applications and customers.
If you aren't a data centre company, why spend so much money and time running data
centres? Let AWS take the undifferentiated heavy lifting off your hands and instead
focus on what makes your business valuable.

Go global in minutes.
The AWS Cloud global footprint enables you to quickly deploy applications to customers
around the world, while providing them with low latency. Traditionally, you would need
to have staff overseas running and operating a data centre for you. With AWS, you can
simply replicate your architecture to an AWS Region in that country.

This architecture has now been replicated across Availability Zones (AZs). This is important for
reliability: if one AZ is having issues, your application will still be up and running in the second AZ.
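The failover behaviour described above can be sketched in a few lines. The AZ names and health states here are hypothetical stand-ins, not live AWS data; real routing is handled by services such as Elastic Load Balancing.

```python
# Minimal sketch of multi-AZ failover: traffic falls back to a healthy
# Availability Zone when the preferred one has issues.
# AZ names and health flags are hypothetical, not live AWS data.

def route_request(az_health: dict, preferred: str) -> str:
    """Pick the preferred AZ if healthy, otherwise any healthy replica."""
    if az_health.get(preferred):
        return preferred
    for az, healthy in az_health.items():
        if healthy:
            return az
    raise RuntimeError("no healthy Availability Zone")

zones = {"ap-south-1a": True, "ap-south-1b": True}
print(route_request(zones, "ap-south-1a"))  # ap-south-1a

zones["ap-south-1a"] = False                # simulate an AZ outage
print(route_request(zones, "ap-south-1a"))  # ap-south-1b
```

Because the application runs in at least two AZs, the outage of one zone degrades capacity rather than causing a total outage.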
CHAPTER 4
PARTICIPATION AND BADGES
CHAPTER 5
CONCLUSION

5.1 CONCLUSION
In conclusion, the Microsoft Azure Administrator Associate certification equips
professionals with a robust skill set to effectively administer Azure cloud services. Through a
comprehensive curriculum encompassing modules ranging from managing identities and
governance to implementing and managing Azure resources and containers, candidates gain
proficiency in critical areas of cloud administration.
By completing this certification, individuals demonstrate their ability to deploy, monitor,
secure, and optimize Azure environments, thereby contributing to the operational excellence
and success of organizations leveraging Azure cloud solutions. Moreover, the certification
serves as a testament to their expertise in implementing Azure best practices, ensuring
compliance, and driving innovation in cloud computing.
As organizations continue to embrace the cloud as a fundamental pillar of their IT
infrastructure, the demand for skilled Azure administrators remains high. The Azure
Administrator Associate certification not only validates the technical capabilities of
professionals but also opens up diverse career opportunities in cloud administration and
architecture roles.
In an era defined by digital transformation and rapid technological advancements, the Azure
Administrator Associate certification empowers individuals to stay ahead of the curve and
become trusted experts in cloud administration. It is a testament to their dedication to
continuous learning and professional growth, positioning them as invaluable assets in today's
competitive job market.
In summary, the AWS Cloud Practitioner certification serves as an essential foundation for
individuals entering the realm of cloud computing. Covering fundamental concepts of the
Amazon Web Services (AWS) Cloud, this certification equips candidates with the knowledge
and skills necessary to understand core AWS services, security best practices, and basic
architectural principles.
Through modules such as AWS global infrastructure, core services, security, compliance,
and billing models, candidates gain a comprehensive understanding of the AWS Cloud's
building blocks. By mastering these foundational concepts, individuals establish a solid
groundwork for further specialization in AWS and cloud-related roles.
The AWS Cloud Practitioner certification not only validates technical expertise but also demonstrates
a commitment to cloud fluency and AWS best practices. Whether embarking on a career in cloud
computing or seeking to enhance existing skills, this certification provides a valuable credential
recognized by employers worldwide.
As organizations increasingly migrate to the cloud, the demand for professionals with AWS expertise
continues to grow. The AWS Cloud Practitioner certification not only opens doors to diverse career
opportunities but also lays the groundwork for pursuing advanced AWS certifications and
specialization tracks.
In a rapidly evolving technological landscape, the AWS Cloud Practitioner certification empowers
individuals to navigate the complexities of cloud computing with confidence. By obtaining this
certification, candidates establish themselves as competent cloud practitioners capable of driving
innovation and success in the digital age.

Sincerely,
Enakshi Kapoor
