Report Removed
Submitted by
21BEC1194
April 2024
School of Electronics Engineering
DECLARATION BY THE CANDIDATE
I hereby declare that the Industrial Internship Report entitled “Microsoft Azure
Administrator (Associate) and AWS Cloud Practitioner (Foundational)
Certification” submitted by me to VIT University, Chennai in partial fulfillment
of the requirement for the award of the degree of Bachelor of Technology in
Electronics and Communication Engineering is a record of bonafide industrial
training undertaken by me. I further declare that the work reported in this report
has not been submitted and will not be submitted, either in part or in full, for the
award of any other degree or diploma in this institute or any other institute or
university.
CERTIFICATE
This is to certify that the Industrial Internship Report entitled “Microsoft Azure
Administrator (Associate) and AWS Cloud Practitioner (Foundational)
Certification” submitted by Enakshi Kapoor (21BEC1194) to VIT, Chennai in
partial fulfillment of the requirement for the award of the degree of Bachelor of
Technology in Electronics and Communication Engineering is a record of
bonafide industrial internship undertaken by her and fulfills the requirements as
per the regulations of this institute and in my opinion meets the necessary
standards for submission. The contents of this report have not been submitted and
will not be submitted either in part or in full, for the award of any other degree or
diploma in this institute or any other institute or university.
ACKNOWLEDGEMENT
I sincerely thank the individuals who made this internship experience both
educational and enriching. I want to express my heartfelt thanks to Satya Nadella,
CEO of Microsoft, and Maureen Lonergan, Director of Training and Certification
at AWS Academy, for providing me with this incredible opportunity to pursue the
Microsoft Azure Administrator (Associate) and AWS Cloud Practitioner
(Foundational) certifications. I would also like to extend my thanks to
Mr. Jayakumar Sadhasivam, AWS Academy Accredited Educator from the
Software Systems Department, for his guidance and support, and for creating a
conducive learning environment throughout my internship. His mentorship and
insights have been instrumental in shaping my understanding of the cloud and its
operations.
I am immensely grateful to Priyadaarshini M and Padmavathy T V, Associate Senior
Professors, whose dedication to their work and patience in answering my queries have
broadened my perspective and helped me develop practical skills.
Furthermore, I would like to express my deep appreciation to my parents, my sister,
and my friends, whose trust in my abilities and belief in my potential have given me
the platform to apply my theoretical knowledge in a real-world setting. I am grateful for
their unwavering support and encouragement throughout this journey.
Lastly, I would like to acknowledge that as the sole intern, I have received
undivided attention and guidance from the entire team at Pearson Vue. Their
openness, collaboration, and willingness to share their experiences have contributed
significantly to my personal and professional growth.
Enakshi Kapoor
(21BEC1194)
LIST OF SYMBOLS AND ABBREVIATIONS
AWS Amazon Web Services
Amazon ES Amazon Elasticsearch Service
AMI Amazon Machine Image
API Application Programming Interface
AI Artificial Intelligence
ACL Access Control List
ALB Application Load Balancer
ARN Amazon Resource Name
AZ Availability Zone
ACM AWS Certificate Manager
ASG Auto Scaling Group
AES Advanced Encryption Standard
ADFS Active Directory Federation Services
AVX Advanced Vector Extensions
LB Load Balancer
LCU Load Balancer Capacity Unit
1.1 INTRODUCTION
The world of cloud computing has revolutionized the way organizations deploy, manage, and
scale their IT infrastructure. With the increasing adoption of cloud services, the demand for
skilled professionals who can design, implement, and manage cloud solutions has never been
higher. In response to this growing need, certifications such as the Microsoft Azure
Administrator Associate Certification and the AWS Cloud Practitioner Certification
have emerged as
valuable credentials that validate expertise in leading cloud platforms.
This introduction provides an overview of the significance of these certifications in the context of
cloud computing and outlines the objectives of this document. It sets the stage for a detailed
exploration of the Microsoft Azure Administrator Associate Certification and the AWS Cloud
Practitioner Certification, highlighting their key components, benefits, and relevance in today's
technology landscape.
As organizations transition their infrastructure to the cloud, they seek professionals who possess
the knowledge and skills to leverage cloud platforms effectively. The Microsoft Azure
Administrator Associate Certification and the AWS Cloud Practitioner Certification serve as
benchmarks of proficiency in Microsoft Azure and Amazon Web Services (AWS), two of the
leading cloud providers in the industry. These certifications validate not only technical expertise
but also the ability to design scalable, secure, and cost-effective cloud solutions that meet the
needs of modern businesses.
Throughout this document, we will delve into the core components of each certification,
including the topics covered, exam objectives, and recommended study resources. We will
explore the benefits of obtaining these certifications, both for individuals seeking to advance their
careers in cloud computing and for organizations looking to build a skilled workforce capable of
harnessing the full potential of cloud technology.
Whether you are an aspiring cloud professional looking to kickstart your career or an experienced
IT professional seeking to validate your expertise in cloud computing, this document aims to
provide valuable insights into the Microsoft Azure Administrator Associate Certification and the
AWS Cloud Practitioner Certification. By understanding the requirements and benefits of these
certifications, you can embark on a journey to enhance your skills, advance your career, and
become a trusted cloud expert in today's dynamic and competitive job market.
1.2 CERTIFICATES:
3. Concepts:
An identity is an object that can be authenticated. The identity can be a user with a
username and password. Identities can also be applications or other servers that
require authentication by using secret keys or certificates. Microsoft Entra ID is the
underlying product that provides the identity service.
An account is an identity that has data associated with it. To have an account, you
must first have a valid identity. You can't have an account without an identity.
An Azure tenant is a single dedicated and trusted instance of Microsoft Entra ID. Each
tenant (also called a directory) represents a single organization. When your organization
signs up for a Microsoft cloud service subscription, a new tenant is automatically
created. Because each tenant is a dedicated and trusted instance of Microsoft Entra ID,
you can create multiple tenants or instances.
4. Implement Microsoft Entra self-service password reset: The selected option is useful for
creating specific groups that have SSPR enabled. You can create groups for testing or proof
of concept before applying the feature to a larger group. When you're ready to deploy SSPR
to all user accounts in your Microsoft Entra tenant, you can change the setting.
Let's examine some characteristics of bulk operations in the Azure portal. Here's an example that
shows the Bulk create user option for new user accounts in Microsoft Entra ID:
Only Global administrators or User administrators have privileges to create and delete user
accounts in the Azure portal.
To complete bulk create or delete operations, the admin fills out a comma-separated values (CSV)
template of the data for the user accounts.
Bulk operation templates can be downloaded from the Microsoft Entra admin center.
Bulk lists of user accounts can be downloaded.
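As an illustration, a bulk-create CSV follows the layout sketched below. The column headers come from the downloadable template, and the user values here are hypothetical; the exact header text can change between template versions, so always start from the template downloaded from the Microsoft Entra admin center:

```
version:v1.0
Name [displayName] Required,User name [userPrincipalName] Required,Initial password [passwordProfile] Required,Block sign in (Yes/No) [accountEnabled] Required
Chris Green,chris@contoso.com,TempPass123!,No
Ben Andrews,ben@contoso.com,TempPass123!,No
```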
4. Things to know about creating group accounts
Review the following characteristics of group accounts in Microsoft Entra ID. The following
screenshot shows a list of groups in the Azure portal:
Use security groups to set permissions for all group members at the same time, rather than adding
permissions to each member individually.
Add Microsoft 365 groups to enable group access for guest users outside your Microsoft Entra
organization.
Security groups can be implemented only by a Microsoft Entra administrator.
Normal users and Microsoft Entra admins can both use Microsoft 365
groups.
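As a sketch of the same task outside the portal, a security group can also be created with the Azure CLI from an authenticated session; the group name here is a placeholder:

```shell
# Create a security group in Microsoft Entra ID
az ad group create --display-name "Marketing" --mail-nickname "Marketing"
```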
Consider the management tasks for a large university that's composed of several different schools
like Business, Engineering, and Medicine. The university has administrative offices, academic
buildings, social buildings, and student dormitories. For security purposes, each business office
has its own internal network for resources like servers, printers, and fax machines. Each academic
building is connected to the university network, so both instructors and students can access their
accounts. The network is also available to students and deans in the dormitories and social
buildings. Across the university, guest users require access to the internet via the university
network.
Consider how a central admin role can use administrative units to support the
Engineering department in our scenario:
Create a role that has administrative permissions for only Microsoft Entra users
Create an administrative unit for the Engineering department.
Populate the administrative unit with only the Engineering department students, staff,
and resources.
Add the Engineering department IT team to the role, along with its scope.
Things to consider when working with administrative units
Think about how you can implement administrative units in your organization. Here are some
considerations:
Consider management tools. Review your options for managing AUs. You can use
the Azure portal, PowerShell cmdlets and scripts, or Microsoft Graph.
Consider role requirements in the Azure portal. Plan your strategy for
administrative units according to role privileges. In the Azure portal, only the Global
Administrator or Privileged Role Administrator users can manage AUs.
Consider scope of administrative units. Recognize that the scope of an administrative
unit applies only to management permissions. Members and admins of an administrative
unit can exercise their default user permissions to browse other users, groups, or
resources outside of their administrative unit.
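The first two steps of the Engineering scenario can be sketched with Microsoft Graph PowerShell; the display name is hypothetical and the session must already be consented to the required scope:

```powershell
# Sign in with permission to manage administrative units
Connect-MgGraph -Scopes "AdministrativeUnit.ReadWrite.All"

# Create an administrative unit for the Engineering department
New-MgDirectoryAdministrativeUnit -DisplayName "Engineering"
```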
CONFIGURING SUBSCRIPTIONS
Types of Azure subscriptions would work for your organization, consider these scenarios:
Consider trying Azure for free. An Azure free subscription includes a monetary credit to spend
on any service for the first 30 days. You get free access to the most popular Azure products for 12
months, and access to more than 25 products that are always free. An Azure free subscription is
an excellent way for new users to get started.
To set up a free subscription, you need a phone number, a credit card, and a Microsoft account.
The credit card information is used for identity verification only. You aren't charged for any
services until you upgrade to a paid subscription.
Consider paying monthly for used services. A Pay-As-You-Go (PAYG) subscription charges
you monthly for the services you used in that billing period. This subscription type is appropriate
for a wide range of users, from individuals to small businesses, and many large organizations as
well.
Consider using an Azure Enterprise Agreement. An Enterprise Agreement provides flexibility
to buy cloud services and software licenses under one agreement. The agreement comes with
discounts for new licenses and Software Assurance. This type of subscription targets enterprise-
scale organizations.
Consider supporting Azure for students. An Azure for Students subscription includes a
monetary credit that can be used within the first 12 months.
Students can select free services without providing a credit card during the sign-up process.You
must verify your student status through your organizational email address.
To establish the scope, you select the affected subscriptions. As an option, you can also choose the
affected resource groups.
The following example shows how to apply the scope:
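Although the original screenshot is not reproduced here, a scope is expressed as an Azure Resource Manager resource ID string. For example, a subscription-level scope and a resource-group-level scope look like the following (the GUID and names are placeholders):

```
/subscriptions/00000000-0000-0000-0000-000000000000
/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group
```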
CONFIGURING ROLE-BASED ACCESS CONTROL
Azure Administrators need to secure access to their Azure resources like virtual machines (VMs),
websites, networks, and storage. Administrators need mechanisms to help them manage who can
access their resources, and what actions are allowed. Organizations that do business in the cloud
recognize that securing their resources is a critical function of their infrastructure.
Secure access management for cloud resources is critical for businesses that operate in the cloud.
Role-based access control (RBAC) is a mechanism that can help you manage who can access
your Azure resources. RBAC lets you determine what operations specific users can do on
specific resources, and control what areas of a resource each user can access.
Azure RBAC is an authorization system built on Azure Resource Manager. Azure RBAC
provides fine-grained access management of resources in Azure.
Things to know about Azure RBAC
Here are a few examples of what you can do with Azure RBAC:
Allow an application to access all resources in a resource group.
Allow one user to manage VMs in a subscription, and allow another user to manage
virtual networks.
Allow a database administrator (DBA) group to manage SQL databases in a subscription.
Allow a user to manage all resources in a resource group, such as VMs, websites,
and subnets.
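For instance, the scenario of letting one user manage VMs in a subscription can be set up at resource-group scope with the Azure CLI; the user and resource names are placeholders:

```shell
# Assign the built-in Virtual Machine Contributor role at resource-group scope
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Virtual Machine Contributor" \
  --resource-group "my-resource-group"
```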
Things to consider when using Azure RBAC
Consider your requestors. Plan your strategy to accommodate for all types of access to
your resources. Security principals are created for anything that requests access to your
resources. Determine who are the requestors in your organization. Requestors can be internal
or external users, groups of users, applications and services, resources, and so on.
Consider your roles. Examine the types of job responsibilities and work scenarios in your
organization. Roles are commonly built around the requirements to fulfill job tasks or complete
work goals. Certain users like administrators, corporate controllers, and engineers can require a
level of access beyond what most users need. Some roles can be defined to provide the same
access for all members of a team or department for specific resources or applications.
Consider scope of permissions. Think about how you can ensure security by controlling the
scope of permissions for role assignments. Outline the types of permissions and levels of scope
that you need to support. You can apply different scope levels for a single role to support
requestors in different scenarios.
Consider built-in or custom definitions. Review the built-in role definitions in Azure
RBAC. Built-in roles can be used as-is, or adjusted to meet the specific requirements for your
organization. You can also create custom role definitions from scratch.
The following comparison summarizes Azure RBAC roles and Microsoft Entra ID admin roles:
Access management: Azure RBAC roles manage access to Azure resources. Microsoft Entra ID
admin roles manage access to Microsoft Entra resources.
Scope assignment: For Azure RBAC roles, scope can be specified at multiple levels, including
management groups, subscriptions, resource groups, and resources. For Microsoft Entra ID
admin roles, the scope is specified at the tenant level.
Role definitions: Azure RBAC roles can be defined via the Azure portal, the Azure CLI, Azure
PowerShell, Azure Resource Manager templates, and the REST API. Microsoft Entra ID admin
roles can be defined via the Azure admin portal, the Microsoft 365 admin center, and Microsoft
Graph PowerShell.
Built-in role definitions are defined for several categories of services, tasks, and users. You can
assign built-in roles at different scopes to support various scenarios, and build custom roles from
the base definitions.
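A custom role definition is a JSON document. A minimal sketch, with a hypothetical role name and a placeholder subscription ID, might look like this:

```json
{
  "Name": "Virtual Machine Operator",
  "Description": "Can start and restart virtual machines.",
  "Actions": [
    "Microsoft.Compute/virtualMachines/start/action",
    "Microsoft.Compute/virtualMachines/restart/action"
  ],
  "NotActions": [],
  "AssignableScopes": ["/subscriptions/00000000-0000-0000-0000-000000000000"]
}
```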
Microsoft Entra ID also provides built-in roles to manage resources in Microsoft Entra ID,
including users, groups, and domains. Microsoft Entra ID offers administrator roles that you can
implement for your organization, such as Global admin, Application admin, and Application
developer.
The following diagram illustrates how you can apply Microsoft Entra administrator roles and
Azure roles in your organization.
Microsoft Entra admin roles are used to manage resources in Microsoft Entra ID, such as users,
groups, and domains. These roles are defined for the Microsoft Entra tenant at the root level of
the configuration.
Azure RBAC roles provide more granular access management for Azure resources. These roles
are defined for a requestor or resource and can be applied at multiple levels: the root,
management groups, subscriptions, resource groups, or resources.
Azure PowerShell is a module that you add to Windows PowerShell or PowerShell Core to
enable you to connect to your Azure subscription and manage resources. Azure PowerShell
requires PowerShell to function. PowerShell provides services such as the shell window and
command parsing. Azure PowerShell adds the Azure-specific commands.
For example, Azure PowerShell provides the New-AzVm command that creates a virtual
machine inside your Azure subscription. To use it, you would launch the PowerShell application
and then issue a command such as the following command:
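A minimal invocation might look like the following; the resource group, VM name, and image are placeholders, and you're prompted for the administrator credentials:

```powershell
New-AzVm `
  -ResourceGroupName "MyResourceGroup" `
  -Name "TestVm" `
  -Image "Ubuntu2204" `
  -Credential (Get-Credential)
```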
Azure CLI is a command-line program to connect to Azure and execute administrative commands
on Azure resources. It runs on Linux, macOS, and Windows, and allows administrators and
developers to execute their commands through a terminal, command-line prompt, or script
instead of a web browser. For example, to restart a VM, you would use a command such as the
following:
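A typical restart command looks like this; the resource group and VM name are placeholders:

```shell
az vm restart --resource-group "MyResourceGroup" --name "MyVm"
```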
Some templates provide everything you need to deploy your solution, while others might serve
as a starting point for your template. Either way, you can study these templates to learn how to
best author and structure your own templates.
The README.md file provides an overview of what the template does.
The azuredeploy.json file defines the resources that will be deployed.
The azuredeploy.parameters.json file provides the values the template needs.
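Before any resources are added, a minimal azuredeploy.json skeleton has the following shape:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "variables": {},
  "resources": [],
  "outputs": {}
}
```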
Private IP addresses enable communication within an Azure virtual network and your on-premises
network. You create a private IP address for your resource when you use a VPN gateway or
Azure ExpressRoute circuit to extend your network to Azure.
Public IP addresses allow your resource to communicate with the internet. You can create a
public IP address to connect with Azure public-facing services.
The following illustration shows a virtual machine resource that has a private IP address and a
public IP address.
Determine the traffic rules you need to create and which services can fulfill your network requirements.
Source: Identifies how the security rule controls inbound traffic. The value specifies a specific
source IP address range that's allowed or denied. The source filter can be any resource, an IP
address range, an application security group, or a default tag.
Destination: Identifies how the security rule controls outbound traffic. The value specifies a
specific destination IP address range that's allowed or denied. The destination filter value is
similar to the source filter. The value can be any resource, an IP address range, an application
security group, or a default tag.
Service: Specifies the destination protocol and port range for the security rule. You can choose a
predefined service like RDP or SSH, or provide a custom port range. There are many services
to select from.
Priority: Assigns the priority order value for the security rule. Rules are processed according to
the priority order of all rules for a network security group, including a subnet and network
interface. The lower the priority value, the higher the priority of the rule.
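The fields above map directly onto an Azure CLI rule definition; the resource names and the address range are placeholders:

```shell
# Allow inbound SSH from a specific address range;
# a lower priority number means the rule is evaluated earlier
az network nsg rule create \
  --resource-group "my-resource-group" \
  --nsg-name "my-nsg" \
  --name "AllowSsh" \
  --priority 300 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes "203.0.113.0/24" \
  --destination-port-ranges 22
```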
Virtual network 1 contains two virtual machines: VM1 and VM2. VM1 and VM2 each have a
private IP address.
When an Azure Private DNS zone address is created (such as contoso.lab) and linked to
Virtual network 1, Azure DNS automatically creates two A records in the DNS zone if Auto
registration is enabled in the link configuration.
In this scenario, Azure DNS uses only Virtual network 2 to resolve domain name (or DNS zone)
queries.
Azure DNS queries from VM1 in Virtual network 1 to resolve the VM2.contoso.lab address
receive an Azure DNS response that contains the private IP address of VM2 (10.0.0.5).
A reverse DNS query (PTR) for the private IP address of VM1 (10.0.0.4) issued from VM2
receives an Azure DNS response that contains the FQDN of VM1, as expected.
The second scenario involves name resolution across multiple virtual networks, which is probably
the most common usage for Azure Private DNS zones. This scenario consists of two virtual
networks. One network is focused on registration for Azure Private DNS zone records and the
other supports name resolution.
Virtual network 1 is designated for registration. Virtual network 2 is designated for name
resolution.
The design strategy is for both virtual networks to share the common DNS zone
address, contoso.lab.
The resolution and registration virtual networks are linked to the common DNS zone.
Azure Private DNS zone records for virtual machines in Virtual network 1 (registration) are
created automatically.
For virtual machines in Virtual network 2 (resolution), Azure Private DNS zone records can be
created manually.
In this scenario, Azure DNS uses both virtual networks to resolve domain name queries.
An Azure DNS query from a virtual machine in Virtual network 2 (resolution) for a virtual
machine in Virtual network 1 (registration) receives an Azure DNS response containing
the private IP address of the virtual machine.
Reverse DNS queries are scoped to the same virtual network.
A reverse DNS (PTR) query from a virtual machine in Virtual network 2 (resolution) for a virtual
machine in Virtual network 1 (registration) receives an NXDOMAIN response. NXDOMAIN is
an error message that indicates the queried domain doesn't exist.
A reverse DNS (PTR) query from a virtual machine in Virtual network 1 (registration) for
a virtual machine also in Virtual network 1 receives the FQDN for the virtual machine.
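The registration side of this scenario can be sketched with the Azure CLI; the resource group, link, and network names are placeholders:

```shell
# Create the private zone and link it to the registration virtual network
az network private-dns zone create --resource-group "my-rg" --name "contoso.lab"
az network private-dns link vnet create --resource-group "my-rg" \
  --zone-name "contoso.lab" --name "vnet1-link" \
  --virtual-network "VNet1" --registration-enabled true
```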
Regional virtual network peering connects Azure virtual networks that exist in the same region.
Global virtual network peering connects Azure virtual networks that exist in different regions.
You can create a regional peering of virtual networks in the same Azure public cloud region, or in
the same China cloud region, or in the same Microsoft Azure Government cloud region.
You can create a global peering of virtual networks in any Azure public cloud region, or in any
China cloud region.
Global peering of virtual networks in different Azure Government cloud regions isn't permitted.
After you create a peering between virtual networks, the individual virtual networks are still
managed as separate resources.
Virtual network A and virtual network B are each peered with a hub virtual network. The hub
virtual network contains several resources, including a gateway subnet and an Azure VPN
gateway. The VPN gateway is configured to allow VPN gateway transit. Virtual network B
accesses resources in the hub, including the gateway subnet, by using a remote VPN gateway.
Virtual network peering is nontransitive. The communication capabilities in peering are available
to only the virtual networks and resources in the peering. Other mechanisms have to be used to
enable traffic to and from resources and networks outside the private peering network.
The following diagram shows a hub and spoke virtual network with an NVA and VPN gateway.
The hub and spoke network is accessible to other virtual networks via user-defined routes and
service chaining.
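A basic peering between two virtual networks can be created from the CLI as sketched below; the names are placeholders, and a matching peering must also be created in the opposite direction for two-way traffic:

```shell
az network vnet peering create --resource-group "my-rg" \
  --name "VNetA-to-VNetB" --vnet-name "VNetA" \
  --remote-vnet "VNetB" --allow-vnet-access
```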
Azure Load Balancer can be used for inbound and outbound scenarios.
You can implement a public or internal load balancer, or use both types in a
combination configuration.
To implement a load balancer, you configure four components:
Front-end IP configuration
Back-end pools
Health probes
Load-balancing rules
The front-end configuration specifies the public IP or internal IP that your load
balancer responds to.
The back-end pools are your services and resources, including Azure Virtual Machines
or instances in Azure Virtual Machine Scale Sets.
Load-balancing rules determine how traffic is distributed to back-end resources.
Health probes ensure the resources in the backend are healthy.
Load Balancer scales up to millions of TCP and UDP application flows.
Each load balancer has one or more back-end pools that are used for distributing traffic. The
back-end pools contain the IP addresses of the virtual NICs that are connected to your load
balancer.
You configure these pool settings in the Azure portal.
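The four components can also be created together with the CLI. The following is a sketch of a public Standard load balancer with placeholder names:

```shell
az network lb create --resource-group "my-rg" --name "my-lb" \
  --sku Standard --public-ip-address "my-public-ip" \
  --frontend-ip-name "myFrontEnd" --backend-pool-name "myBackEndPool"
```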
Routing options for Azure Application Gateway.
Azure Application Gateway offers two primary methods for routing traffic:
Path-based routing sends requests with different URL paths to different pools of back-
end servers.
Multi-site routing configures more than one web application on the same
application gateway instance.
You can configure your application gateway to redirect traffic.
Application Gateway can redirect traffic received at one listener to another listener, or to an external site. This
approach is commonly used by web apps to automatically redirect HTTP requests to communicate via HTTPS. The
redirection ensures all communication between your web app and clients occurs over an encrypted path.
HTTP headers allow the client and server to pass parameter information with the request or the response. In this
scenario, you can translate URLs or query string parameters, and modify request and response headers. Add
conditions to ensure URLs or headers are rewritten only for certain conditions.
Application Gateway allows you to create custom error pages instead of displaying default error pages. You can use
your own branding and layout by using a custom error page.
An optional Web Application Firewall checks incoming traffic for common threats before the requests reach the
listeners.
One or more listeners receive the traffic and route the requests to the back-end pool.
Routing rules define how to analyze the request to direct the request to the appropriate back-end pool.
A back-end pool contains web servers for resources like virtual machines or Virtual Machine Scale Sets. Each pool
has a load balancer to distribute the workload across the resources.
Health probes determine which back-end pool servers are available for load-balancing.
The following flowchart demonstrates how the Application Gateway components work together to direct traffic
requests between the frontend and back-end pools in your configuration.
Data is encrypted automatically before it's persisted to Azure Managed Disks, Azure Blob
Storage, Azure Queue Storage, Azure Cosmos DB, Azure Table Storage, or Azure Files.
Data is automatically decrypted before it's retrieved.
Azure Storage encryption, encryption at rest, decryption, and key management are transparent to
users.
All data written to Azure Storage is encrypted through 256-bit advanced encryption standard
(AES) encryption. AES is one of the strongest block ciphers available.
Azure Storage encryption is enabled for all new and existing storage accounts and can't be
disabled.
In the Azure portal, you configure Azure Storage encryption by specifying the encryption
type. You can manage the keys yourself, or you can have the keys managed by Microsoft.
Consider how you might implement Azure Storage encryption for your storage security.
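You can confirm the encryption settings of an existing storage account from the CLI; the account and resource group names are placeholders:

```shell
# Show which services are encrypted and whether keys are
# Microsoft-managed or customer-managed
az storage account show --name "mystorageaccount" \
  --resource-group "my-rg" --query "encryption"
```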
It's important to understand that when you use a shared access signature (SAS) in your
application, there can be potential risks.
If a SAS is compromised, it can be used by anyone who obtains it, including a
malicious user.
If a SAS provided to a client application expires and the application is unable to retrieve
a new SAS from your service, the application functionality might be hindered.
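To limit those risks, keep SAS lifetimes short. For example, a read-only container SAS with an explicit expiry can be generated as follows; the account, container, and date are placeholders:

```shell
az storage container generate-sas --account-name "mystorageaccount" \
  --name "mycontainer" --permissions r \
  --expiry "2025-01-01T00:00Z" --auth-mode key
```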
Azure File Sync enables you to cache several Azure Files shares on an on-premises Windows
Server or cloud virtual machine. You can use Azure File Sync to centralize your organization's
file shares in Azure Files, while keeping the flexibility, performance, and compatibility of an on-
premises file server.
Cloud tiering
Cloud tiering is an optional feature of Azure File Sync. Frequently accessed files are cached locally on the
server while all other files are tiered to Azure Files based on policy settings.
When a file is tiered, Azure File Sync replaces the file locally with a pointer. A pointer is commonly
referred to as a reparse point. The reparse point represents a URL to the file in Azure Files.
When a user opens a tiered file, Azure File Sync seamlessly recalls the file data from Azure Files
without the user needing to know that the file is stored in Azure.
Tiered files have greyed icons with an offline O file attribute to let the user know when the file is
only in Azure.
Orchestration mode: Choose how virtual machines are managed by the scale set. In
flexible orchestration mode, you manually create and add a virtual machine of any
configuration to the scale set. In uniform orchestration mode, you define a virtual
machine model and Azure will generate identical instances based on that model.
Image: Choose the base operating system or application for the VM.
VM Architecture: Azure provides a choice of x64 or Arm64-based virtual machines to
run your applications.
Run with Azure Spot discount: Azure Spot offers unused Azure capacity at a
discounted rate versus pay as you go prices. Workloads should be tolerant to
infrastructure loss as Azure may recall capacity.
Size: Select a VM size to support the workload that you want to run. The size that you
choose then determines factors such as processing power, memory, and storage
capacity. Azure offers a wide variety of sizes to support many types of uses. Azure
charges an hourly price based on the VM's size and operating system.
An Azure Virtual Machine Scale Sets implementation can automatically increase or decrease the
number of virtual machine instances that run your application. This process is known
as autoscaling. Autoscaling allows you to dynamically scale your configuration to meet changing
workload demands.
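An autoscale profile for a scale set can be attached from the CLI; the names are placeholders and the instance counts are illustrative:

```shell
az monitor autoscale create --resource-group "my-rg" \
  --resource "my-vmss" \
  --resource-type Microsoft.Compute/virtualMachineScaleSets \
  --name "my-autoscale" --min-count 2 --max-count 10 --count 2
```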
CONFIGURING AZURE APP SERVICE PLANS
Azure Administrators need to be able to scale a web application. Scaling enables an application to
remain responsive during periods of high demand. Scaling also helps to save money by reducing
the resources required when demand drops.
Consider how to implement and use an App Service plan for your applications.
When you create an App Service plan in a region, a set of compute resources is created
for the plan in the specified region. Any applications that you place into the plan run on
the compute resources defined by the plan.
Each App Service plan defines three settings:
Region: The region for the App Service plan, such as West US, Central India,
North Europe, and so on.
Number of VM instances: The number of virtual machine instances to allocate for the plan.
Size of VM instances: The size of the virtual machine instances in the plan,
including Small, Medium, or Large.
You can continue to add new applications to an existing plan as long as the plan
has enough resources to handle the increasing load.
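Creating a plan with an explicit region, tier, and worker count can be sketched with the CLI; the names are placeholders:

```shell
az appservice plan create --resource-group "my-rg" --name "my-plan" \
  --location "westus" --sku S1 --number-of-workers 2
```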
Consider how to use autoscale for your Azure App Service plan and applications:
To use autoscale, you specify the minimum and maximum number of instances to run
by using a set of rules and conditions.
When your application runs under autoscale conditions, the number of virtual machine
instances is automatically adjusted based on your rules. When rule conditions are
met, one or more autoscale actions are triggered.
An autoscale setting is read by the autoscale engine to determine whether to scale out or
in. Autoscale settings are grouped into profiles.
Autoscale rules include a trigger and a scale action (in or out). The trigger can be
metric-based or time-based.
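The metric-based scale-out/scale-in behavior described above can be sketched as a toy rule evaluator. The thresholds, instance limits, and function name below are illustrative assumptions, not the actual Azure autoscale engine or its defaults:

```python
# Toy sketch of a metric-based autoscale rule (illustrative only; the real
# Azure autoscale engine evaluates profiles of rules over time windows).

def evaluate_autoscale(current_instances, cpu_percent, *,
                       min_instances=1, max_instances=10,
                       scale_out_above=70, scale_in_below=25):
    """Return the new instance count after applying simple scale rules."""
    if cpu_percent > scale_out_above and current_instances < max_instances:
        return current_instances + 1          # trigger met: scale out
    if cpu_percent < scale_in_below and current_instances > min_instances:
        return current_instances - 1          # trigger met: scale in
    return current_instances                  # no action

print(evaluate_autoscale(2, 85))   # 3 (scaled out)
print(evaluate_autoscale(2, 10))   # 1 (scaled in)
print(evaluate_autoscale(10, 95))  # 10 (already at maximum)
```

Note how the minimum and maximum bounds always win over the metric trigger, which is the safety property the rules-and-conditions model provides.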
With Azure DevOps, you can also define your own build and release process. Compile your
source code, run tests, and build and deploy the release into your web app every time you commit
the code. All of the operations happen automatically without any need for human administration.
Characteristics of deployment slots.
Deployment slots are live apps that have their own hostnames.
Deployment slots are available in the Standard, Premium, and Isolated App Service pricing tiers.
Your app needs to be running in one of these tiers to use deployment slots.
The Standard, Premium, and Isolated tiers offer different numbers of deployment slots.
App content and configuration elements can be swapped between two deployment slots,
including the production slot.
The Backup and Restore feature in Azure App Service lets you easily create backups manually or
on a schedule. You can configure the backups to be retained for a specific or indefinite amount of
time. You can restore your app or site to a snapshot of a previous state by overwriting the
existing content or restoring to another app or site.
Docker Hub provides a large global repository of container images from developers, open source
projects, and independent software vendors. You can access Docker Hub to find and share
container images for your app and containers. Docker Hosts are machines that run Docker and
allow you to run your apps as containers.
CONFIGURING FILE AND FOLDER BACKUPS
Azure Backup is the Azure-based service you can use to back up (or protect) and restore your
data in the Microsoft cloud. Azure Backup replaces your existing on-premises or off-site backup
solution with a cloud-based solution that's reliable, secure, and cost-competitive.
In the Azure portal, search for Backup Center and browse to the Backup Center dashboard:
Azure Backup uses the Microsoft Azure Recovery Services (MARS) agent to back up files,
folders, and system data from your on-premises machines and Azure virtual machines. The
MARS agent is a full-featured agent that offers many benefits for both backing up and restoring
your data.
You can use the MARS agent and Azure Backup to complete backups of your on-premises files and folders.
The following diagram shows the high-level steps to use the MARS agent for Azure Backup.
CONFIGURE VIRTUAL MACHINE BACKUPS
Azure Backup provides independent and isolated backups to guard against unintended destruction
of the data on your virtual machines. Administrators can implement Azure services to support
their backup requirements, including the Microsoft Azure Recovery Services (MARS) agent for
Azure Backup, the Microsoft Azure Backup Server (MABS), Azure managed disks snapshots,
and Azure Site Recovery.
Things to consider when creating images versus snapshots
It's important to understand the differences and benefits of creating an image and a snapshot
backup of an Azure managed disk.
Consider images. With Azure managed disks, you can take an image of a generalized
virtual machine that's been deallocated. The image includes all of the disks attached to
the virtual machine. You can use the image to create a virtual machine that includes all
of the disks.
Consider snapshots. A snapshot is a copy of a disk at the point in time the snapshot is
taken. The snapshot applies to one disk only, and doesn't have awareness of any disk
other than the one it contains. Snapshot backups are problematic for configurations that
require the coordination of multiple disks, such as striping. In this case, the snapshots
need to coordinate with each other, but this functionality isn't currently supported.
Consider operating system disk backups. If you have a virtual machine with only one disk (the
operating system disk), you can take a snapshot or an image of the disk. You can create
a virtual machine from either a snapshot or an image.
In the Azure portal, you can use an Azure Recovery Services vault to back up your Azure virtual machines:
A Recovery Services vault can be used to back up your on-premises virtual machines, such
as Hyper-V, VMware, System State, and Bare Metal Recovery:
Examine some details about working with activity logs in Azure Monitor.
You can use the information in activity logs to understand the status of resource
operations and other relevant properties.
Activity logs can help you determine the "what, who, and when" for any write
operation (PUT, POST, DELETE) performed on resources in your subscription.
Activity logs are kept for 90 days.
You can query for any range of dates in an activity log, as long as the starting date
isn't more than 90 days in the past.
You can retrieve events from your activity logs by using the Azure portal, the Azure
CLI, PowerShell cmdlets, and the Azure Monitor REST API.
In the Azure portal, you can filter your Azure Monitor activity logs so you can view specific
information. The filters enable you to review only the activity log data that meets your criteria.
You might set filters to review monitoring data about critical events for your primary subscription
and production virtual machine during peak business hours.
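The "what, who, and when" filtering described above can be sketched locally. The record layout below is an illustrative assumption, not the exact Azure Monitor activity log schema, but it shows the idea of keeping only write operations inside the 90-day retention window:

```python
from datetime import datetime, timedelta

# Illustrative activity-log records; field names are assumptions, not the
# exact Azure Monitor schema.
events = [
    {"operation": "PUT", "caller": "admin@contoso.com",
     "time": datetime(2024, 3, 1), "resource": "vm-prod-01"},
    {"operation": "DELETE", "caller": "dev@contoso.com",
     "time": datetime(2023, 6, 1), "resource": "vm-test-02"},
]

def filter_writes(events, now, days=90):
    """Keep write operations (PUT, POST, DELETE) within the retention window."""
    cutoff = now - timedelta(days=days)
    return [e for e in events
            if e["operation"] in ("PUT", "POST", "DELETE") and e["time"] >= cutoff]

recent = filter_writes(events, now=datetime(2024, 4, 1))
print([e["resource"] for e in recent])  # ['vm-prod-01']
```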
Log Analytics in Azure Monitor supports the Kusto Query Language (KQL). The KQL
syntax helps you quickly and easily create simple or complex queries to retrieve and
consolidate your monitoring data in the repository.
KQL queries use the dedicated table data for your monitored services and resources.
CONFIGURING NETWORK WATCHER
Azure Network Watcher is a powerful tool that allows you to monitor, diagnose, view
metrics, and enable or disable logs for resources in an Azure virtual network.
Network Watcher is a regional service that enables you to monitor and diagnose conditions at a
network scenario level.
Consider the configuration details and functionality of the IP flow verify feature in Azure
Network Watcher.
You configure the IP flow verify feature with the following properties in the Azure portal:
Virtual machine and network interface
Local (source) port number
Remote (destination) IP address, and remote port number
Communication protocol (TCP or UDP)
Traffic direction (Inbound or Outbound)
The feature tests communication for a target virtual machine with associated network
security group (NSG) rules by running inbound and outbound packets to and from
the machine.
After the test runs complete, the feature informs you whether communication with
the machine succeeds (allows access) or fails (denies access).
If the target machine denies the packet because of an NSG, the feature returns the name of the
controlling security rule.
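The way NSG rules decide an IP flow verify result can be sketched as a priority-ordered rule walk. The rule names, priorities, and field layout below are illustrative assumptions, not real NSG defaults:

```python
# Minimal sketch of how NSG rules decide an IP flow verify result:
# rules are checked in ascending priority order, and the first match wins.
rules = [
    {"name": "AllowHTTPS", "priority": 100, "direction": "Inbound",
     "protocol": "TCP", "port": 443, "action": "Allow"},
    {"name": "DenyAllInbound", "priority": 65500, "direction": "Inbound",
     "protocol": "*", "port": "*", "action": "Deny"},
]

def ip_flow_verify(direction, protocol, port):
    """Return (result, controlling rule name), like the portal feature."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if (rule["direction"] == direction
                and rule["protocol"] in ("*", protocol)
                and rule["port"] in ("*", port)):
            return rule["action"], rule["name"]
    return "Deny", None

print(ip_flow_verify("Inbound", "TCP", 443))  # ('Allow', 'AllowHTTPS')
print(ip_flow_verify("Inbound", "UDP", 53))   # ('Deny', 'DenyAllInbound')
```

Returning the controlling rule's name mirrors how the feature tells you which security rule denied the packet.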
Azure Network Watcher provides a network monitoring topology tool to help administrators
visualize and understand infrastructure. The following image shows an example topology
diagram for a virtual network in Network Watcher.
COMPUTING EC2
Amazon Elastic Compute Cloud (Amazon EC2) provides secure, resizable compute capacity
in the cloud as Amazon EC2 instances.
By comparison, with an Amazon EC2 instance you can use a virtual server to run
applications in the AWS Cloud.
You can provision and launch an Amazon EC2 instance within minutes.
You can stop using it when you have finished running a workload.
You pay only for the compute time you use when an instance is running, not when
it is stopped or terminated.
EC2 runs on top of physical host machines managed by AWS using virtualization
technology. When you spin up an EC2 instance, you aren't necessarily taking an entire host to
yourself. Instead, you are sharing the host with multiple other instances, otherwise known as
virtual machines. And a hypervisor running on the host machine is responsible for sharing
the underlying physical resources between the virtual machines.
This idea of sharing underlying hardware is called multi-tenancy. The hypervisor is responsible for
coordinating this multi-tenancy, and it is managed by AWS. The hypervisor is responsible for isolating the
virtual machines from each other as they share resources from the host. This means EC2 instances are
secure. Even though they may be sharing resources, one EC2 instance is not aware of any other EC2
instances also on that host. They are secure and separate from each other.
Amazon EC2 pricing
With Amazon EC2, you pay only for the compute time that you use. Amazon EC2 offers a
variety of pricing options for different use cases. For example, if your use case can withstand
interruptions, you can save with Spot Instances. You can also save by committing early and
locking in a minimum level of use with Reserved Instances.
On-Demand Instances are ideal for short-term, irregular workloads that cannot be interrupted. No
upfront costs or minimum contracts apply. The instances run continuously until you stop them,
and you pay for only the compute time you use.
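The pricing trade-off above can be made concrete with a back-of-the-envelope comparison. The hourly rates below are made-up placeholders, not real AWS prices:

```python
# Hypothetical hourly rates for illustration only - not real AWS prices.
ON_DEMAND_RATE = 0.10   # $/hour, no commitment
RESERVED_RATE = 0.06    # $/hour with a 1-year commitment

def monthly_cost(rate, hours_running):
    """Cost for the hours an instance actually runs in a month."""
    return round(rate * hours_running, 2)

always_on = 730  # approximate hours in a month
print(monthly_cost(ON_DEMAND_RATE, always_on))  # 73.0
print(monthly_cost(RESERVED_RATE, always_on))   # 43.8 - commitment wins

# Irregular workload running only 80 hours: On-Demand wins, because you
# pay nothing while the instance is stopped.
print(monthly_cost(ON_DEMAND_RATE, 80))         # 8.0
```

The crossover depends on utilization: steady, predictable workloads favor Reserved Instances, while short-term irregular workloads favor On-Demand.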
Scalability
Scalability involves beginning with only the resources you need and designing your architecture
to automatically respond to changing demand by scaling out or in. As a result, you pay for only
the resources you use. You don’t have to worry about a lack of computing capacity to meet your
customers’ needs.
Amazon EC2 Auto Scaling
If you’ve tried to access a website that wouldn’t load and frequently timed out, the website might
have received more requests than it was able to handle. This situation is similar to waiting in a
long line at a coffee shop, when there is only one barista present to take orders from customers.
Amazon EC2 Auto Scaling enables you to automatically add or remove Amazon EC2 instances
in response to changing application demand. By automatically scaling your instances in and out
as needed, you are able to maintain a greater sense of application availability.
Within Amazon EC2 Auto Scaling, you can use two approaches: dynamic scaling and predictive
scaling.
Dynamic scaling responds to changing demand.
Predictive scaling automatically schedules the right number of Amazon EC2 instances based on
predicted demand.
Elastic Load Balancing
Elastic Load Balancing (ELB) is the AWS service that automatically distributes
incoming application traffic across multiple resources, such as Amazon EC2 instances.
It is engineered to address the undifferentiated heavy lifting of load balancing. Elastic Load
Balancing is a Regional construct. The key value for you is that because it runs at the Region
level rather than on individual EC2 instances, the service is automatically highly available
with no additional effort on your part.
ELB is automatically scalable. As your traffic grows, ELB is designed to handle the additional
throughput with no change to the hourly cost. When your EC2 fleet auto-scales out, as each
instance comes online, the auto-scaling service just lets the Elastic Load Balancing service
know that it's ready to handle the traffic, and off it goes. Once the fleet scales in, ELB first stops
all new traffic, and waits for the existing requests to complete, to drain out. Once they do that,
then the auto-scaling engine can terminate the instances without disruption to existing customers.
Because ELB is regional, it's a single URL that each front end instance uses. Then the ELB
directs traffic to the back end that has the least outstanding requests. Now, if the back end scales,
once the new instance is ready, it just tells the ELB that it can take traffic and it gets to work. The
front end doesn't know and doesn't care how many back end instances are running. This is true
decoupled architecture.
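The "least outstanding requests" routing described above can be sketched in a few lines. The backend instance names are hypothetical:

```python
# Sketch of least-outstanding-requests routing: track in-flight requests
# per backend and always send the next request to the least-loaded one.
backends = {"i-backend-a": 4, "i-backend-b": 1, "i-backend-c": 7}

def route_request(outstanding):
    """Pick the backend with the fewest in-flight requests and send one more."""
    target = min(outstanding, key=outstanding.get)
    outstanding[target] += 1
    return target

print(route_request(backends))   # 'i-backend-b'
print(backends["i-backend-b"])   # 2
```

Because the front end only ever talks to the balancer, backends can come and go without the front end knowing, which is the decoupling the text describes.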
Low-demand period
Here’s an example of how Elastic Load Balancing works. Suppose that a few customers have
come to the coffee shop and are ready to place their orders.
If only a few registers are open, this matches the demand of customers who need service. The
coffee shop is less likely to have open registers with no customers. In this example, you can think
of the registers as Amazon EC2 instances.
High-demand period
Throughout the day, as the number of customers increases, the coffee shop opens more registers to
accommodate them. In the diagram, the Auto Scaling group represents this.
Additionally, a coffee shop employee directs customers to the most appropriate register so that
the number of requests can evenly distribute across the open registers. You can think of this
coffee shop employee as a load balancer.
In file storage, multiple clients (such as users, applications, servers, and so on) can access data
that is stored in shared file folders. In this approach, a storage server uses block storage with a
local file system to organize files. Clients access data through file paths.
Compared to block storage and object storage, file storage is ideal for use cases in which a large
number of services and resources need to access the same data at the same time.
Amazon Elastic File System (Amazon EFS) is a scalable file system used with AWS Cloud
services and on-premises resources. As you add and remove files, Amazon EFS grows and
shrinks automatically. It can scale on demand to petabytes without disrupting applications.
EFS allows you to have multiple instances accessing the data in EFS at the same time. It scales up
and down as needed without you needing to do anything to make that scaling happen.
Relational databases
In a relational database, data is stored in a way that relates it to other pieces of data.
Relational databases use structured query language (SQL) to store and query data. This approach
allows data to be stored in an easily understandable, consistent, and scalable way.
Amazon Relational Database Service (Amazon RDS) is a service that enables you to run
relational databases in the AWS Cloud.
Amazon RDS is a managed service that automates tasks such as hardware provisioning, database
setup, patching, and backups. With these capabilities, you can spend less time completing
administrative tasks and more time using data to innovate your applications. You can integrate
Amazon RDS with other services to fulfil your business and operational needs, such as using AWS
Lambda to query your database from a serverless application.
Amazon RDS provides a number of different security options. Many Amazon RDS database
engines offer encryption at rest (protecting data while it is stored) and encryption in transit
(protecting data while it is being sent and received).
Amazon RDS database engines
Amazon RDS is available on six database engines, which optimize for memory, performance, or
input/output (I/O). Supported database engines include:
Amazon Aurora
PostgreSQL
MySQL
MariaDB
Oracle Database
Microsoft SQL Server
The benefits of Amazon RDS include automated patching, backups, redundancy, failover, and
disaster recovery, all of which you normally have to manage for yourself. This makes it an
extremely attractive option for AWS customers, as it allows you to focus on business problems
instead of maintaining databases.
Amazon Aurora
Amazon Aurora is an enterprise-class relational database. It is compatible with MySQL and
PostgreSQL relational databases. It is up to five times faster than standard MySQL databases
and up to three times faster than standard PostgreSQL databases.
Amazon Aurora is AWS's most managed relational database option. It comes in two forms,
MySQL and PostgreSQL, and is priced at 1/10th the cost of commercial-grade databases. That's
a pretty cost-effective database.
Amazon Aurora helps to reduce your database costs by reducing unnecessary input/output (I/O)
operations, while ensuring that your database resources remain reliable and available.
Nonrelational databases
In a nonrelational database, you create tables. A table is a place where you can store and query
data.
Relational databases work great for a lot of use cases and have historically been the standard
type of database. However, these rigid SQL databases can have performance and scaling
issues when under stress. The rigid schema also means that you cannot have any variation in
the types of data that you store in a table. So, it might not be the best fit for a dataset that is a
little less rigid and is being accessed at a very high rate. This is where non-relational, or
NoSQL, databases come in.
Nonrelational databases are sometimes referred to as “NoSQL databases” because they use
structures other than rows and columns to organize data. Non-relational databases tend to have
simple flexible schemas, not complex rigid schemas, laying out multiple tables that all relate
to each other.
One type of structural approach for nonrelational databases is key-value pairs. With key-value
pairs, data is organized into items (keys), and items have attributes (values). You can think
of attributes as being different features of your data.
In a key-value database, you can add or remove attributes from items in the table at any time.
Additionally, not every item in the table has to have the same attributes.
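The flexible key-value model described above can be illustrated with a toy in-memory table. This is a sketch of the idea only, not the DynamoDB API; the item keys and attribute names are hypothetical:

```python
# Toy key-value table: items keyed by id, each with its own attribute set.
table = {}

def put_item(key, **attributes):
    table[key] = dict(attributes)

put_item("order-1", drink="latte", size="large")
put_item("order-2", drink="espresso", loyalty_points=10)  # different attributes

# Attributes can be added or removed per item at any time - no fixed schema.
table["order-1"]["note"] = "extra shot"
del table["order-2"]["loyalty_points"]

print(table["order-1"])  # {'drink': 'latte', 'size': 'large', 'note': 'extra shot'}
```

Contrast this with a relational table, where every row must conform to the same column set.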
Amazon DynamoDB
Amazon DynamoDB is a key-value database service. It delivers single-digit millisecond
performance at any scale. It is a non-relational, NoSQL database.
With DynamoDB, you create tables. A DynamoDB table is just a place where you can store and
query data. Data is organized into items, and items have attributes. Attributes are just different
features of your data. If you have one item in your table, or 2 million items in your table,
DynamoDB manages the underlying storage for you.
DynamoDB is serverless, which means that you do not have to provision, patch, or manage
servers. You also do not have to install, maintain, or operate software.
DynamoDB, beyond being massively scalable, is also highly performant. DynamoDB has a
millisecond response time. And when you have applications with potentially millions of users,
having scalability and reliable lightning-fast response times is important. As the size of your
database shrinks or grows, DynamoDB automatically scales to adjust for changes in capacity
while maintaining consistent performance. This makes it a suitable choice for use cases that
require high performance while scaling.
DynamoDB stores this data redundantly across availability zones and mirrors the data across
multiple drives under the hood for you. This makes the burden of operating a highly
available database, much lower.
Amazon Redshift
Amazon Redshift is a data warehousing service that you can use for big data analytics. It offers
the ability to collect data from many sources and helps you to understand relationships and
trends across your data. This is data warehousing as a service. It's massively scalable; Redshift
nodes in multiple-petabyte sizes are very common. In fact, in cooperation with Amazon Redshift
Spectrum, you can directly run a single SQL query against exabytes of unstructured data in
data lakes.
IAM users
An IAM user is an identity that you create in AWS. It represents the person or application that
interacts with AWS services and resources. It consists of a name and credentials.
IAM policies
An IAM policy is a JSON document that allows or denies permissions to AWS services
and resources.
IAM policies enable you to customize users’ levels of access to resources. For example, you
can allow users to access all of the Amazon S3 buckets within your AWS account, or only a
specific bucket.
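A policy document and a much-simplified evaluation of it can be sketched as follows. The policy JSON below follows the general shape IAM uses, but the bucket name is illustrative and the evaluator handles only this one case (real IAM evaluation covers wildcards, explicit denies, conditions, and more):

```python
import json

# Hypothetical policy document; the bucket name is illustrative.
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": "s3:GetObject",
     "Resource": "arn:aws:s3:::example-bucket/*"}
  ]
}
""")

def is_allowed(policy, action, resource):
    """Very simplified check: an explicit Allow is required; everything
    else is denied by default, as in IAM."""
    for stmt in policy["Statement"]:
        if (stmt["Effect"] == "Allow" and stmt["Action"] == action
                and resource.startswith(stmt["Resource"].rstrip("*"))):
            return True
    return False

print(is_allowed(policy, "s3:GetObject",
                 "arn:aws:s3:::example-bucket/report.csv"))     # True
print(is_allowed(policy, "s3:DeleteObject",
                 "arn:aws:s3:::example-bucket/report.csv"))     # False
```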
IAM groups
One way to make it easier to manage your users and their permissions is to organize them into
IAM groups. An IAM group is a collection of IAM users. When you assign an IAM policy to
a group, all users in the group are granted permissions specified by the policy.
Assigning IAM policies at the group level also makes it easier to adjust permissions when an
employee transfers to a different job. For example, if a cashier becomes an inventory
specialist, the coffee shop owner removes them from the “Cashiers” IAM group and adds them
into the “Inventory Specialists” IAM group. This ensures that employees have only the
permissions that are required for their current role.
What if a coffee shop employee hasn’t switched jobs permanently, but instead, rotates to different
workstations throughout the day? This employee can get the access they need through IAM roles.
IAM roles
An IAM role is an identity that you can assume to gain temporary access to permissions.
Roles have associated permissions that allow or deny specific actions. And these roles can be
assumed for temporary amounts of time. It is similar to a user, but has no username and
password. Instead, it is an identity that you can assume to gain access to temporary permissions.
You use roles to temporarily grant access to AWS resources, to users, external identities,
applications, and even other AWS services. When an identity assumes a role, it abandons all of
the previous permissions that it has and it assumes the permissions of that role.
Best practice:
IAM roles are ideal for situations in which access to services or resources needs to be
granted temporarily, instead of long-term.
Multi-factor authentication
Have you ever signed in to a website that required you to provide multiple pieces of information
to verify your identity? You might have needed to provide your password and then a second form
of authentication, such as a random code sent to your phone. This is an example of multi-factor
authentication. In IAM, multi-factor authentication (MFA) provides an extra layer of security for
your AWS account.
How multi-factor authentication works
Denial-of-service attacks
Customers can call the coffee shop to place their orders. After answering each call, a cashier
takes the order and gives it to the barista.
However, suppose that a prankster is calling in multiple times to place orders but is never picking
up their drinks. This causes the cashier to be unavailable to take other customers’ calls. The
coffee shop can attempt to stop the false requests by blocking the phone number that the
prankster is using.
In this scenario, the prankster’s actions are similar to a denial-of-service attack.
A denial-of-service (DoS) attack is a deliberate attempt to make a website or application
unavailable to users.
Distributed denial-of-service attacks
A single machine attacking your application has no hope of providing enough of an attack by
itself, so the distributed part is that the attack leverages other machines around the internet to
unknowingly attack your infrastructure. Now, suppose that the prankster has enlisted the help of
friends.
The prankster and their friends repeatedly call the coffee shop with requests to place orders, even
though they do not intend to pick them up. These requests are coming in from different phone
numbers, and it's impossible for the coffee shop to block them all.
Additionally, the influx of calls has made it increasingly difficult for customers to get their calls
through. This is similar to a distributed denial-of-service attack.
In a distributed denial-of-service (DDoS) attack, multiple sources are used to start an attack that
aims to make a website or application unavailable. This can come from a group of attackers, or
even a single attacker. The single attacker can use multiple infected computers (also known as
“bots”) to send excessive traffic to a website or application.
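The reason per-source blocking fails against a distributed attack can be shown with a toy blocklist. The threshold and addresses below are illustrative assumptions:

```python
from collections import Counter

# Toy defense: block any single source that exceeds a request threshold.
def find_blockable(requests, threshold=100):
    counts = Counter(requests)
    return {src for src, n in counts.items() if n > threshold}

# Single noisy source (DoS): easy to spot and block.
dos = ["10.0.0.1"] * 500
print(find_blockable(dos))  # {'10.0.0.1'}

# Distributed (DDoS): 500 sources sending one request each. No single
# source crosses the threshold, yet the total load is identical.
ddos = [f"10.0.{i // 256}.{i % 256}" for i in range(500)]
print(find_blockable(ddos))  # set()
```

This is why DDoS mitigation needs traffic-level analysis (as AWS Shield provides) rather than simple per-source blocking.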
AWS Shield
AWS Shield is a service that protects applications against DDoS attacks. AWS Shield
provides two levels of protection: Standard and Advanced.
AWS Shield Standard automatically protects all AWS customers at no cost. It protects your AWS
resources from the most common, frequently occurring types of DDoS attacks.
As network traffic comes into your applications, AWS Shield Standard uses a variety of analysis
techniques (such as security groups) to detect malicious traffic in real time and automatically
mitigate it.
AWS Shield Advanced is a paid service that provides detailed attack diagnostics and the ability
to detect and mitigate sophisticated DDoS attacks.
It also integrates with other services such as Amazon CloudFront, Amazon Route 53, and Elastic
Load Balancing. Additionally, you can integrate AWS Shield with AWS WAF by writing
custom rules to mitigate complex DDoS attacks.
AWS Key Management Service (AWS KMS)
The coffee shop has many items, such as coffee machines, pastries, money in the cash registers,
and so on. You can think of these items as data. The coffee shop owners want to ensure that all of
these items are secure, whether they’re sitting in the storage room or being transported between
shop locations.
In the same way, you must ensure that your applications’ data is secure while in storage
(encryption at rest) and while it is transmitted, known as encryption in transit.
Encryption is the securing of a message or data in a way that only authorized parties can access it.
AWS Key Management Service (AWS KMS) enables you to perform encryption operations
through the use of cryptographic keys. A cryptographic key is a random string of digits used for
locking (encrypting) and unlocking (decrypting) data. You can use AWS KMS to create, manage,
and use cryptographic keys. You can also control the use of keys across a wide range of services
and in your applications.
With AWS KMS, you can choose the specific levels of access control that you need for your keys.
For example, you can specify which IAM users and roles are able to manage keys.
Alternatively, you can temporarily disable keys so that they are no longer in use by anyone. Your
keys never leave AWS KMS, and you are always in control of them.
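The encrypt/decrypt round trip that a key service enables can be illustrated with a deliberately simple toy cipher. This is a demonstration of the concept only: real systems use AWS KMS or vetted cryptographic libraries, never a hand-rolled scheme like this one:

```python
import hashlib
import secrets

# TOY cipher for illustration only - do not use for real data.
def keystream(key, length):
    """Derive a deterministic byte stream from the key (counter + hash)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key, data):
    """XOR with the keystream: applying it twice restores the plaintext."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = secrets.token_bytes(32)            # stands in for a managed key
ciphertext = xor_crypt(key, b"order totals")
print(xor_crypt(key, ciphertext))        # b'order totals'
```

The point mirrors the KMS model: whoever controls access to the key controls access to the data, at rest and in transit.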
AWS WAF
AWS WAF is a web application firewall that lets you monitor network requests that come into
your web applications.
AWS WAF works together with Amazon CloudFront and an Application Load Balancer.
Recall the network access control lists that you learned about in an earlier module. AWS WAF
works in a similar way to block or allow traffic. However, it does this by using a web access
control list (ACL) to protect your AWS resources.
Amazon Inspector
Suppose that the developers at the coffee shop are developing and testing a new ordering
application. They want to make sure that they are designing the application in accordance with
security best practices. However, they have several other applications to develop, so they cannot
spend much time conducting manual assessments. To perform automated security assessments,
they decide to use Amazon Inspector.
Amazon Inspector helps to improve the security and compliance of applications by running
automated security assessments. It checks applications for security vulnerabilities and deviations
from security best practices, such as open access to Amazon EC2 instances and installations of
vulnerable software versions.
Amazon GuardDuty
Amazon GuardDuty is a service that provides intelligent threat detection for your AWS
infrastructure and resources. It identifies threats by continuously monitoring the network activity
and account behaviour within your AWS environment.
After you have enabled GuardDuty for your AWS account, GuardDuty begins monitoring your
network and account activity. You do not have to deploy or manage any additional security
software. GuardDuty then continuously analyzes data from multiple AWS sources, including
VPC Flow Logs and DNS logs.
It uses integrated threat intelligence such as known malicious IP addresses, anomaly detection,
and machine learning to identify threats more accurately. The best part is that it runs
independently from your other AWS services, so it won't affect the performance or availability
of your existing infrastructure and workloads.
MONITORING AND ANALYTICS
Amazon CloudWatch
Amazon CloudWatch is a web service that enables you to monitor and manage various metrics
and configure alarm actions based on data from those metrics.
CloudWatch uses metrics to represent the data points for your resources. AWS services send
metrics to CloudWatch. CloudWatch then uses these metrics to create graphs automatically that
show how performance has changed over time.
CloudWatch alarms
With CloudWatch, you can create alarms that automatically perform actions if the value of your
metric has gone above or below a predefined threshold.
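The threshold-alarm idea can be sketched as a small evaluator. The metric values, threshold, and the "breach for N consecutive periods" rule below are illustrative assumptions, not CloudWatch's exact semantics:

```python
# Sketch of a threshold alarm: fire only when the last `periods`
# datapoints all breach the threshold (avoids flapping on brief spikes).
def alarm_state(values, threshold, periods=3):
    recent = values[-periods:]
    if len(recent) == periods and all(v > threshold for v in recent):
        return "ALARM"
    return "OK"

cpu = [40, 55, 82, 91, 88]               # e.g. CPU utilization %
print(alarm_state(cpu, threshold=80))    # 'ALARM' - 3 breaches in a row
print(alarm_state([40, 85, 60], 80))     # 'OK' - spike did not persist
```

Requiring several consecutive breaches is a common design choice so that a momentary spike does not trigger the alarm action.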
CloudWatch dashboard
The CloudWatch dashboard feature enables you to access all the metrics for your resources from
a single location. This enables you to collect metrics and logs from all your AWS resources,
applications, and services that run on AWS and on-premises servers, helping you break down
silos so that you can easily gain system-wide visibility. For example, you can use a CloudWatch
dashboard to monitor the CPU utilization of an Amazon EC2 instance, the total number of
requests made to an Amazon S3 bucket, and more. You can even customize separate dashboards
for different business purposes, applications, or resources.
You can get visibility across your applications, infrastructure, and services, which means you
gain insights across your distributed stack so you can correlate and visualize metrics and logs to
quickly pinpoint and resolve issues. This in turn means you can reduce mean time to resolution
(MTTR) and improve total cost of ownership (TCO). In our coffee shop, if the MTTR for
cleaning the restaurant machines is shorter, we can save on the TCO of owning them. This
means freeing up important resources, like developers, to focus on adding business value.
AWS CloudTrail
AWS CloudTrail is a comprehensive API auditing tool that records API calls for your
account. The recorded information includes the identity of the API caller, the time of the API
call, the source IP address of the API caller, and more. You can think of CloudTrail as a "trail"
of breadcrumbs (or a log of actions) that someone has left behind them.
Recall that you can use API calls to provision, manage, and configure your AWS
resources. With CloudTrail, you can view a complete history of user activity and API calls for
your applications and resources.
CloudTrail can save those logs indefinitely in secure S3 buckets. In addition, with tamper-proof
methods like Vault Lock, you then can show absolute provenance of all of these critical
security audit logs.
CloudTrail Insights
Within CloudTrail, you can also enable CloudTrail Insights. This optional feature allows
CloudTrail to automatically detect unusual API activities in your AWS account.
For example, CloudTrail Insights might detect that a higher number of Amazon EC2 instances
than usual have recently launched in your account. You can then review the full event details
to determine which actions you need to take next.
In the AWS Trusted Advisor dashboard, the green check indicates the number of items for
which it detected no problems. The orange triangle represents the number of recommended
investigations, and the red circle represents the number of recommended actions.
At the highest level, the AWS Cloud Adoption Framework (AWS CAF) organizes guidance into
six areas of focus, called Perspectives. Each Perspective addresses distinct responsibilities. The
planning process helps the right people across the organization prepare for the changes ahead.
People Perspective
The People Perspective supports development of an organization-wide change management
strategy for successful cloud adoption.
Use the People Perspective to evaluate organizational structures and roles, new skill and process
requirements, and identify gaps. This helps prioritize training, staffing, and organizational
changes.
Common roles in the People Perspective include:
Human resources
Staffing
People managers
Governance Perspective
The Governance Perspective focuses on the skills and processes to align IT strategy with
business strategy. This ensures that you maximize the business value and minimize risks.
Use the Governance Perspective to understand how to update the staff skills and processes
necessary to ensure business governance in the cloud. Manage and measure cloud investments to
evaluate business outcomes.
Common roles in the Governance Perspective include the Chief Information Officer (CIO), program managers, enterprise architects, business analysts, and portfolio managers.
Platform Perspective
The Platform Perspective includes principles and patterns for implementing new solutions on the cloud and migrating on-premises workloads to the cloud.
Use a variety of architectural models to understand and communicate the structure of IT systems
and their relationships. Describe the architecture of the target state environment in detail.
Common roles in the Platform Perspective include the Chief Technology Officer (CTO), IT managers, and solutions architects.
Security Perspective
The Security Perspective ensures that the organization meets security objectives for visibility,
auditability, control, and agility.
Use the AWS CAF to structure the selection and implementation of security controls and permissions that meet the organization's needs.
Common roles in the Security Perspective include the Chief Information Security Officer (CISO), IT security managers, and IT security analysts.
Operations Perspective
The Operations Perspective helps you enable, run, use, operate, and recover IT workloads to the level agreed upon with your business stakeholders.
Define how day-to-day, quarter-to-quarter, and year-to-year business is conducted. Align with
and support the operations of the business. The AWS CAF helps these stakeholders define current
operating procedures and identify the process changes and training needed to implement
successful cloud adoption.
Common roles in the Operations Perspective include IT operations managers and IT support managers.
3.10 – The Cloud Journey
Evaluating the performance efficiency of your architecture includes experimenting more often,
using serverless architectures, and designing systems to be able to go global in minutes.
Cost optimization is the ability to run systems to deliver business value at the lowest price point.
Cost optimization includes adopting a consumption model, analysing and attributing expenditure, and using managed services to reduce the cost of ownership.
In the past, you would need to evaluate these pillars against your AWS infrastructure with the help of a Solutions Architect. Not that you can't, and aren't still encouraged to, but AWS listened to customer feedback and released the Framework as a self-service tool: the AWS Well-Architected Tool.
You can access it through the AWS Management Console. Create a workload and run it against your AWS account.
Advantages of cloud computing
Operating in the AWS Cloud offers many benefits over computing in on-premises or hybrid
environments.
In this section, you will learn about six advantages of cloud computing:
Trade upfront expense for variable expense.
Benefit from massive economies of scale.
Stop guessing capacity.
Increase speed and agility.
Stop spending money running and maintaining data centres.
Go global in minutes.
Trade upfront expense for variable expense.
Upfront expenses include data centres, physical servers, and other resources that you would need
to invest in before using computing resources. These on-premises data centre costs include
things like physical space, hardware, staff for racking and stacking, and overhead for running the
data centre.
Instead of investing heavily in data centres and servers before you know how you’re going to use
them, you can pay only when you consume computing resources.
AWS is also an expert at building efficient data centres. Because of its massive purchasing volume, AWS can buy hardware at a lower price and then install and run it efficiently. Because of these factors, you can achieve a lower variable cost than you could by running a data centre on your own.
For example, you can launch Amazon Elastic Compute Cloud (Amazon EC2) instances when
needed and pay only for the compute time you use. Instead of paying for resources that are
unused or dealing with limited capacity, you can access only the capacity that you need, and scale
in or out in response to demand.
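The trade-off above can be made concrete with a small back-of-the-envelope calculation. All of the prices and hours below are hypothetical and chosen only to illustrate the comparison; they are not actual AWS rates.

```python
# Hypothetical numbers only: compare a fixed upfront server purchase with
# pay-per-use compute billed by the hour, as in the EC2 On-Demand model.

UPFRONT_SERVER_COST = 10_000.00   # buy hardware sized for peak load
HOURLY_ON_DEMAND_RATE = 0.10      # pay only while an instance runs

def on_demand_cost(hours_used):
    """Total variable cost for the compute hours actually consumed."""
    return hours_used * HOURLY_ON_DEMAND_RATE

# A workload that runs 8 hours a day for a year, rather than 24/7:
hours = 8 * 365
print(f"On-demand: ${on_demand_cost(hours):,.2f} "
      f"vs upfront: ${UPFRONT_SERVER_COST:,.2f}")
```

The point is not the specific figures but the structure of the cost: the variable expense scales with actual usage, while the upfront expense is paid regardless of how much capacity is ever used.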
Increase speed and agility.
The flexibility of cloud computing makes it easier for you to develop and deploy
applications. This flexibility also provides your development teams with more time to
experiment and innovate.
With AWS, it's easy to try new things. You can spin up test environments and experiment with new ways of solving a problem. If an approach doesn't work, you can simply delete those resources and stop incurring cost. Traditional data centres don't offer the same flexibility.
Go global in minutes.
The AWS Cloud global footprint enables you to quickly deploy applications to customers
around the world, while providing them with low latency. Traditionally, you would need
to have staff overseas running and operating a data centre for you. With AWS, you can
just replicate your architecture to a region in that foreign country.
This architecture has now been replicated across Availability Zones (AZs), which is important for reliability. If one AZ has issues, your application will still be up and running in the second AZ.
CHAPTER 4
PARTICIPATION AND BADGES
CHAPTER 5
CONCLUSION
5.1 CONCLUSION
In conclusion, the Microsoft Azure Administrator Associate certification equips
professionals with a robust skill set to effectively administer Azure cloud services. Through a
comprehensive curriculum encompassing modules ranging from managing identities and
governance to implementing and managing Azure resources and containers, candidates gain
proficiency in critical areas of cloud administration.
By completing this certification, individuals demonstrate their ability to deploy, monitor,
secure, and optimize Azure environments, thereby contributing to the operational excellence
and success of organizations leveraging Azure cloud solutions. Moreover, the certification
serves as a testament to their expertise in implementing Azure best practices, ensuring
compliance, and driving innovation in cloud computing.
As organizations continue to embrace the cloud as a fundamental pillar of their IT
infrastructure, the demand for skilled Azure administrators remains high. The Azure
Administrator Associate certification not only validates the technical capabilities of
professionals but also opens up diverse career opportunities in cloud administration and
architecture roles.
In an era defined by digital transformation and rapid technological advancements, the Azure
Administrator Associate certification empowers individuals to stay ahead of the curve and
become trusted experts in cloud administration. It reflects their dedication to continuous learning and professional growth, positioning them as invaluable assets in today's competitive job market.
In summary, the AWS Cloud Practitioner certification serves as an essential foundation for
individuals entering the realm of cloud computing. Covering fundamental concepts of the
Amazon Web Services (AWS) Cloud, this certification equips candidates with the knowledge
and skills necessary to understand core AWS services, security best practices, and basic
architectural principles.
Through modules such as AWS global infrastructure, core services, security, compliance,
and billing models, candidates gain a comprehensive understanding of the AWS Cloud's
building blocks. By mastering these foundational concepts, individuals establish a solid
groundwork for further specialization in AWS and cloud-related roles.
The AWS Cloud Practitioner certification not only validates technical expertise but also demonstrates
a commitment to cloud fluency and AWS best practices. Whether embarking on a career in cloud
computing or seeking to enhance existing skills, this certification provides a valuable credential
recognized by employers worldwide.
As organizations increasingly migrate to the cloud, the demand for professionals with AWS expertise
continues to grow. The AWS Cloud Practitioner certification not only opens doors to diverse career
opportunities but also lays the groundwork for pursuing advanced AWS certifications and
specialization tracks.
In a rapidly evolving technological landscape, the AWS Cloud Practitioner certification empowers
individuals to navigate the complexities of cloud computing with confidence. By obtaining this
certification, candidates establish themselves as competent cloud practitioners capable of driving
innovation and success in the digital age.
Sincerely,
Enakshi Kapoor