Practice Test 2
Results
Question 1 (Correct)
"Version": "2012-10-17",
"Statement": [
"Action": [
"ec2:RunInstances"
],
"Effect": "Allow",
"Resource": "*",
"Condition": {
"IpAddress": {
"aws:SourceIp": "34.50.31.0/24"
Overall explanation
Correct option:
You manage access in AWS by creating policies and attaching them to IAM
identities (users, groups of users, or roles) or AWS resources. A policy is an
object in AWS that, when associated with an identity or resource, defines
their permissions. AWS evaluates these policies when an IAM principal (user
or role) makes a request. Permissions in the policies determine whether the
request is allowed or denied. Most policies are stored in AWS as JSON
documents. AWS supports six types of policies: identity-based policies,
resource-based policies, permissions boundaries, Organizations service
control policies (SCPs), access control lists (ACLs), and session policies.
"Condition": {
"IpAddress": {
"aws:SourceIp": "34.50.31.0/24"
Incorrect options:
Each of these three options suggests that the IP addresses of the Amazon EC2 instances must belong to the 34.50.31.0/24 CIDR block for the EC2 instances to start. Actually, the policy states that the Amazon EC2 instance should start only when the IP address from which the call originates is within the 34.50.31.0/24 CIDR block. Hence these options are incorrect.
References:
https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
https://aws.amazon.com/premiumsupport/knowledge-center/iam-restrict-calls-ip-addresses/
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html
Domain
Question 2 (Correct)
A systems administrator has created a private hosted zone and associated it
with a Virtual Private Cloud (VPC). However, the Domain Name System (DNS)
queries for the private hosted zone remain unresolved.
As a Solutions Architect, can you identify the Amazon Virtual Private Cloud
(Amazon VPC) options to be configured in order to get the private hosted
zone to work?
Fix the Name server (NS) record and Start Of Authority (SOA)
records that may have been created with wrong configurations
Fix conflicts between your private hosted zone and any Resolver
rule that routes traffic to your network for the same domain name,
as it results in ambiguity over the route to be taken
Enable DNS hostnames and DNS resolution for private hosted zones
Overall explanation
Correct option:
Enable DNS hostnames and DNS resolution for private hosted zones
DNS hostnames and DNS resolution are required settings for private hosted
zones. DNS queries for private hosted zones can be resolved by the Amazon-
provided VPC DNS server only. As a result, these options must be enabled for
your private hosted zone to work.
DNS hostnames: For non-default virtual private clouds that aren't created
using the Amazon VPC wizard, this option is disabled by default. If you create
a private hosted zone for a domain and create records in the zone without
enabling DNS hostnames, private hosted zones aren't enabled. To use a
private hosted zone, this option must be enabled.
DNS resolution: Private hosted zones accept DNS queries only from a VPC
DNS server. The IP address of the VPC DNS server is the reserved IP address
at the base of the VPC IPv4 network range plus two. Enabling DNS resolution
allows you to use the VPC DNS server as a Resolver for performing DNS
resolution. Keep this option disabled if you're using a custom DNS server in
the DHCP Options set, and you're not using a private hosted zone.
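As a rough illustration (the VPC ID below is a placeholder, not part of the question), both attributes can be enabled with the AWS SDK for Python (boto3), one attribute per call:

```python
import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"  # placeholder VPC ID

# Enable DNS resolution (the Amazon-provided VPC DNS server).
ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsSupport={"Value": True})

# Enable DNS hostnames; modify_vpc_attribute accepts only one attribute per call.
ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsHostnames={"Value": True})
```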
Incorrect options:
Fix the Name server (NS) record and Start Of Authority (SOA)
records that may have been created with wrong configurations -
When you create a hosted zone, Amazon Route 53 automatically creates a
name server (NS) record and a start of authority (SOA) record for the zone for
public hosted zone. However, this issue is about the private hosted zone,
hence this is an incorrect option.
Fix conflicts between your private hosted zone and any Resolver
rule that routes traffic to your network for the same domain name,
as it results in ambiguity over the route to be taken - If you have a
private hosted zone (example.com) and a Resolver rule that routes traffic to
your network for the same domain name, the Resolver rule takes
precedence. It won't result in unresolved queries.
References:
https://aws.amazon.com/premiumsupport/knowledge-center/vpc-enable-private-hosted-zone/
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zone-private-considerations.html
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zone-public-considerations.html
Domain
Question 3 (Correct)
Overall explanation
Correct option:
Incorrect options:
Use Identity and Access Management (IAM) policies - AWS IAM enables
organizations with many employees to create and manage multiple users
under a single AWS account. IAM policies are attached to the users, enabling
centralized control of permissions for users under your AWS Account to
access buckets or objects. With IAM policies, you can only grant users within
your own AWS account permission to access your Amazon S3 resources. So,
this is not the right choice for the current requirement.
Use Access Control Lists (ACLs) - Within Amazon S3, you can use ACLs to
give read or write access on buckets or objects to groups of users. With ACLs,
you can only grant other AWS accounts (not specific users) access to your
Amazon S3 resources. So, this is not the right choice for the current
requirement.
Use Security Groups - A security group acts as a virtual firewall for Amazon
EC2 instances to control incoming and outgoing traffic. Amazon S3 does not
support Security Groups, this option just acts as a distractor.
Reference:
https://d1.awsstatic.com/whitepapers/aws-security-whitepaper.pdf
Domain
Question 4 (Correct)
You would like to store a database password in a secure place, and enable
automatic rotation of that password every 90 days. What do you
recommend?
AWS CloudHSM
Overall explanation
Correct option:
AWS Secrets Manager helps you protect secrets needed to access your
applications, services, and IT resources. The service enables you to easily
rotate, manage, and retrieve database credentials, API keys, and other
secrets throughout their lifecycle. Users and applications retrieve secrets
with a call to Secrets Manager APIs, eliminating the need to hardcode
sensitive information in plain text. Secrets Manager offers secret rotation
with built-in integration for Amazon RDS, Amazon Redshift, and Amazon
DocumentDB. The correct answer here is AWS Secrets Manager.
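As a minimal sketch (the secret name and rotation Lambda ARN are placeholders), rotation every 90 days can be turned on via boto3:

```python
import boto3

secrets = boto3.client("secretsmanager")

secrets.rotate_secret(
    SecretId="prod/db-password",  # placeholder secret name
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rotate-db-secret",
    RotationRules={"AutomaticallyAfterDays": 90},  # rotate the password every 90 days
)
```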
Incorrect options:
AWS Systems Manager Parameter Store can serve as a secrets store, but you
must rotate the secrets yourself, it doesn't have an automatic capability for
this. So this option is incorrect.
References:
https://aws.amazon.com/secrets-manager/
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
https://aws.amazon.com/cloudhsm/
https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html
https://aws.amazon.com/blogs/mt/the-right-way-to-store-secrets-using-parameter-store/
Domain
Question 5 (Correct)
Overall explanation
Correct option:
Please see this detailed overview of various types of Amazon EC2 instances
from a pricing perspective:
via - https://aws.amazon.com/ec2/pricing/
Incorrect options:
Reference:
https://aws.amazon.com/ec2/pricing/
Domain
Which of the following solutions would have the LEAST amount of downtime?
Overall explanation
Correct option:
AWS Storage Gateway is a hybrid cloud storage service that gives you on-
premises access to virtually unlimited cloud storage. It provides low-latency
performance by caching frequently accessed data on-premises while storing
data securely and durably in Amazon cloud storage services. Storage
Gateway optimizes data transfer to AWS by sending only changed data and
compressing data. Storage Gateway also integrates natively with Amazon S3
cloud storage which makes your data available for in-cloud processing.
Incorrect options:
References:
https://aws.amazon.com/route53/
https://aws.amazon.com/storagegateway/
Domain
Question 7 (Correct)
You would like to use AWS Snowball to move on-premises backups into a long
term archival tier on AWS. Which solution provides the MOST cost savings?
Overall explanation
Correct option:
AWS Snowball, a part of the AWS Snow Family, is a data migration and edge
computing device that comes in two options. Snowball Edge Storage
Optimized devices provide both block storage and Amazon S3-compatible
object storage, and 40 vCPUs. They are well suited for local storage and
large scale data transfer. AWS Snowball Edge Compute Optimized devices
provide 52 vCPUs, block and object storage, and an optional GPU for use
cases like advanced machine learning and full-motion video analysis in
disconnected environments.
AWS Snowball Edge Storage Optimized is the optimal choice if you need to
securely and quickly transfer dozens of terabytes to petabytes of data to
AWS. It provides up to 80 terabytes of usable HDD storage, 40 vCPUs, 1
terabyte of SATA SSD storage, and up to 40 Gb of network connectivity
to address large scale data transfer and pre-processing use cases.
The original AWS Snowball devices were transitioned out of service and AWS
Snowball Edge Storage Optimized are now the primary devices used for data
transfer. You may see the AWS Snowball device on the exam, just remember
that the original AWS Snowball device had 80 terabytes of storage space.
For this scenario, you will want to minimize the time spent in Amazon S3
Standard for all files to avoid unintended Amazon S3 Standard storage
charges. To do this, AWS recommends using a zero-day lifecycle policy. From
a cost perspective, when using a zero-day lifecycle policy, you are only
charged Amazon S3 Glacier Deep Archive rates. When billed, the lifecycle
policy is accounted for first, and if the destination is Amazon S3 Glacier Deep
Archive, you are charged Amazon S3 Glacier Deep Archive rates for the
transferred files.
You can't move data directly from AWS Snowball into Amazon S3 Glacier; you need to go through Amazon S3 first and then use a lifecycle policy. So this option is correct.
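A minimal sketch of such a zero-day lifecycle rule with boto3 (the bucket name is a placeholder):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="backup-landing-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "zero-day-to-deep-archive",
            "Filter": {"Prefix": ""},  # apply the rule to every object
            "Status": "Enabled",
            # Transition objects to S3 Glacier Deep Archive immediately (0 days).
            "Transitions": [{"Days": 0, "StorageClass": "DEEP_ARCHIVE"}],
        }]
    },
)
```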
Incorrect options:
Amazon S3 Glacier and S3 Glacier Deep Archive are a secure, durable, and
extremely low-cost Amazon S3 cloud storage classes for data archiving and
long-term backup. They are designed to deliver 99.999999999% durability
and provide comprehensive security and compliance capabilities that can
help meet even the most stringent regulatory requirements. Finally, Amazon
S3 Glacier Deep Archive provides more cost savings than Amazon S3 Glacier.
Both these options are incorrect as you can't move data directly from AWS Snowball into an Amazon S3 Glacier vault or a Glacier Deep Archive vault. You need to go through Amazon S3 first and then use a lifecycle policy.
References:
https://aws.amazon.com/snowball/features/
https://aws.amazon.com/glacier/
Domain
Question 8 (Incorrect)
Correct selection
Overall explanation
Correct options:
A security group acts as a virtual firewall that controls the traffic for one or
more instances. When you launch an instance, you can specify one or more
security groups; otherwise, we use the default security group. You can add
rules to each security group that allows traffic to or from its associated
instances. You can modify the rules for a security group at any time; the new
rules are automatically applied to all instances that are associated with the
security group. When we decide whether to allow traffic to reach an instance,
we evaluate all the rules from all the security groups that are associated with
the instance.
The traffic flows like this: the client sends an HTTPS request to the ALB on port 443. This is handled by the rule - "The security group of the Application Load Balancer should have an inbound rule from anywhere on port 443"
The Application Load Balancer then forwards the request to one of the
Amazon EC2 instances. This is handled by the rule - "The security group of
the Amazon EC2 instances should have an inbound rule from the security
group of the Application Load Balancer on port 80"
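The two rules could be created as follows (a boto3 sketch with placeholder security group IDs):

```python
import boto3

ec2 = boto3.client("ec2")
alb_sg = "sg-0123456789abcdef0"  # placeholder: security group of the ALB
app_sg = "sg-0fedcba9876543210"  # placeholder: security group of the EC2 instances

# ALB security group: allow HTTPS (port 443) from anywhere.
ec2.authorize_security_group_ingress(
    GroupId=alb_sg,
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                    "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}],
)

# EC2 security group: allow HTTP (port 80) only from the ALB's security group.
ec2.authorize_security_group_ingress(
    GroupId=app_sg,
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
                    "UserIdGroupPairs": [{"GroupId": alb_sg}]}],
)
```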
Incorrect options:
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html
Domain
Question 9 (Correct)
Overall explanation
Correct option:
To get access to the data files, the UNLOAD command must be run again using an AWS Identity and Access Management (IAM) role with cross-account permissions. Follow these steps to set up the Amazon Redshift cluster with cross-account permissions to the bucket:
1. From the account of the Amazon S3 bucket, create an IAM role (Bucket
Role) with permissions to the bucket.
2. From the account of the Amazon Redshift cluster, create another IAM
role (Cluster Role) with permissions to assume the Bucket Role.
3. Update the Bucket Role to grant bucket access and create a trust
relationship with the Cluster Role.
4. From the Amazon Redshift cluster, run the UNLOAD command using the Cluster Role and Bucket Role (a chained-role sketch is shown below).
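A rough sketch of step 4 using the Amazon Redshift Data API from boto3; the cluster name, database, bucket and role ARNs are placeholders, and the two role ARNs are chained (comma-separated) in the IAM_ROLE parameter:

```python
import boto3

redshift_data = boto3.client("redshift-data")

# Cluster role first, chaining to the bucket role in the bucket owner's account.
sql = (
    "UNLOAD ('SELECT * FROM sales') "
    "TO 's3://cross-account-bucket/sales/' "
    "IAM_ROLE 'arn:aws:iam::111111111111:role/ClusterRole,"
    "arn:aws:iam::222222222222:role/BucketRole'"
)

redshift_data.execute_statement(
    ClusterIdentifier="analytics-cluster",
    Database="dev",
    DbUser="awsuser",
    Sql=sql,
)
```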
Incorrect options:
Reference:
https://aws.amazon.com/premiumsupport/knowledge-center/s3-access-denied-redshift-unload/
Domain
Question 10 (Correct)
An e-commerce company operates multiple AWS accounts and has
interconnected these accounts in a hub-and-spoke style using the AWS
Transit Gateway. Amazon Virtual Private Cloud (Amazon VPCs) have been
provisioned across these AWS accounts to facilitate network isolation.
Use Transit VPC to reduce cost and share the resources across
Amazon Virtual Private Cloud (Amazon VPCs)
Overall explanation
Correct option:
A VPC endpoint allows you to privately connect your VPC to supported AWS
services without requiring an Internet gateway, NAT device, VPN connection,
or AWS Direct Connect connection. Endpoints are virtual devices that are
horizontally scaled, redundant, and highly available VPC components. They
allow communication between instances in your VPC and services without
imposing availability risks or bandwidth constraints on your network traffic.
VPC endpoints enable you to reduce data transfer charges resulting from network communication between private VPC resources (such as Amazon Elastic Compute Cloud, or EC2, instances) and AWS services (such as Amazon Quantum Ledger Database, or QLDB). Without VPC endpoints
configured, communications that originate from within a VPC destined for
public AWS services must egress AWS to the public Internet in order to
access AWS services. This network path incurs outbound data transfer
charges. Data transfer charges for traffic egressing from Amazon EC2 to the
Internet vary based on volume. With VPC endpoints configured,
communication between your VPC and the associated AWS service does not
leave the Amazon network. If your workload requires you to transfer
significant volumes of data between your VPC and AWS, you can reduce
costs by leveraging VPC endpoints.
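As one concrete (illustrative) example, a gateway endpoint for Amazon S3 can be created with boto3 so that S3-bound traffic stays on the AWS network; the VPC and route table IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",             # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",  # Amazon S3 in the VPC's Region
    RouteTableIds=["rtb-0123456789abcdef0"],   # placeholder route table ID
)
```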
Incorrect options:
Use Transit VPC to reduce cost and share the resources across
Amazon Virtual Private Cloud (Amazon VPCs) - Transit VPC uses
customer-managed Amazon Elastic Compute Cloud (Amazon EC2) VPN
instances in a dedicated transit VPC with an Internet gateway. This design
requires the customer to deploy, configure, and manage EC2-based VPN
appliances, which will result in additional EC2, and potentially third-party
product and licensing charges. Note that this design will generate additional
data transfer charges for traffic traversing the transit VPC: data is charged
when it is sent from a spoke VPC to the transit VPC, and again from the
transit VPC to the on-premises network or a different AWS Region. Transit
VPC is not the right choice here.
via - https://d0.awsstatic.com/aws-answers/AWS_Single_Region_Multi_VPC_Connectivity.pdf
Use VPCs connected with AWS Direct Connect - This approach is a good
alternative for customers who need to connect a high number of VPCs to a
central VPC or on-premises resources, or who already have an AWS Direct
Connect connection in place. This design also offers customers the ability to
incorporate transitive routing into their network design. For example, if VPC A
and VPC B are both connected to an on-premises network using AWS Direct
Connect connections, then the two VPCs can be connected to each other via
AWS Direct Connect. However, AWS Direct Connect requires physical cables and takes about a month to set up, so it is not an ideal solution for the given scenario.
References:
https://aws.amazon.com/blogs/architecture/reduce-cost-and-increase-security-with-amazon-vpc-endpoints/
https://d0.awsstatic.com/aws-answers/AWS_Single_Region_Multi_VPC_Connectivity.pdf
Domain
Question 11 (Correct)
Use Amazon Aurora Global Database to enable fast local reads with
low latency in each region
Overall explanation
Correct option:
Use Amazon Aurora Global Database to enable fast local reads with
low latency in each region
Incorrect options:
Global Tables builds upon DynamoDB’s global footprint to provide you with a
fully managed, multi-region, and multi-master database that provides fast,
local read and write performance for massively scaled, global applications.
Global Tables replicates your Amazon DynamoDB tables automatically across
your choice of AWS regions. Given that the use-case wants you to continue
with the underlying schema of the relational database, DynamoDB is not the
right choice as it's a NoSQL database.
References:
https://aws.amazon.com/rds/aurora/global-database/
https://aws.amazon.com/dynamodb/global-tables/
Domain
Design High-Performing Architectures
Question 12 (Correct)
Which of the following options would allow the engineering team to provision
the instances for this use-case?
Overall explanation
Correct option:
With launch templates, you can provision capacity across multiple instance
types using both On-Demand Instances and Spot Instances to achieve the
desired scale, performance, and cost. Hence this is the correct option.
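A sketch of an Auto Scaling group that mixes On-Demand and Spot capacity from a launch template (all IDs, subnets and instance types below are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="mixed-fleet-asg",
    MinSize=2,
    MaxSize=20,
    VPCZoneIdentifier="subnet-0123456789abcdef0,subnet-0fedcba9876543210",
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateId": "lt-0123456789abcdef0",
                "Version": "$Latest",
            },
            # Multiple instance types to draw capacity from.
            "Overrides": [{"InstanceType": "m5.large"},
                          {"InstanceType": "m5a.large"},
                          {"InstanceType": "c5.large"}],
        },
        "InstancesDistribution": {
            "OnDemandBaseCapacity": 2,                  # always keep 2 On-Demand instances
            "OnDemandPercentageAboveBaseCapacity": 50,  # 50/50 On-Demand vs Spot beyond the base
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)
```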
Incorrect options:
References:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/LaunchTemplates.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/LaunchConfiguration.html
Domain
Overall explanation
Correct option:
AWS Certificate Manager is a service that lets you easily provision, manage,
and deploy public and private Secure Sockets Layer/Transport Layer Security
(SSL/TLS) certificates for use with AWS services and your internal connected
resources. SSL/TLS certificates are used to secure network communications
and establish the identity of websites over the Internet as well as resources
on private networks.
via - https://docs.aws.amazon.com/config/latest/developerguide/how-does-config-work.html
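Assuming the acm-certificate-expiration-check managed rule is what the correct option refers to (see the references below), it could be enabled roughly like this with boto3; the rule name and threshold are illustrative:

```python
import boto3
import json

config = boto3.client("config")

config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "acm-certificate-expiration-check",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "ACM_CERTIFICATE_EXPIRATION_CHECK",  # AWS managed rule
        },
        "InputParameters": json.dumps({"daysToExpiration": "45"}),  # flag certs expiring within 45 days
        "MaximumExecutionFrequency": "TwentyFour_Hours",
    }
)
```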
Incorrect options:
References:
https://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html
https://docs.aws.amazon.com/config/latest/developerguide/how-does-config-work.html
https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config.html
https://docs.aws.amazon.com/config/latest/developerguide/acm-certificate-expiration-check.html
https://aws.amazon.com/blogs/security/how-to-monitor-expirations-of-imported-certificates-in-aws-certificate-manager-acm/
Domain
Overall explanation
Correct option:
You can use AWS DataSync to migrate data located on-premises, at the edge,
or in other clouds to Amazon S3, Amazon EFS, Amazon FSx for Windows File
Server, Amazon FSx for Lustre, Amazon FSx for OpenZFS, and Amazon FSx
for NetApp ONTAP.
AWS DataSync:
via - https://aws.amazon.com/datasync/
AWS Direct Connect provides three types of virtual interfaces: public, private,
and transit.
via - https://aws.amazon.com/premiumsupport/knowledge-center/public-private-interface-dx/
For the given use case, you can send data over the Direct Connect
connection to an AWS PrivateLink interface VPC endpoint for Amazon EFS by
using a private VIF.
Using task scheduling in AWS DataSync, you can periodically execute a
transfer task from your source storage system to the destination. You can
use the DataSync scheduled task to send the video files to the Amazon EFS
file system every 24 hours.
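A minimal sketch of such a scheduled DataSync task (the location ARNs are placeholders for the on-premises source and the Amazon EFS destination):

```python
import boto3

datasync = boto3.client("datasync")

datasync.create_task(
    SourceLocationArn="arn:aws:datasync:us-east-1:123456789012:location/loc-0123456789abcdef0",
    DestinationLocationArn="arn:aws:datasync:us-east-1:123456789012:location/loc-0fedcba9876543210",
    Name="daily-video-transfer",
    Schedule={"ScheduleExpression": "rate(24 hours)"},  # run the transfer every 24 hours
)
```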
Incorrect options:
References:
https://aws.amazon.com/datasync/
https://aws.amazon.com/blogs/storage/transferring-files-from-on-premises-to-aws-and-back-without-leaving-your-vpc-using-aws-datasync/
https://docs.aws.amazon.com/efs/latest/ug/efs-vpc-endpoints.html
https://aws.amazon.com/datasync/faqs/
https://aws.amazon.com/premiumsupport/knowledge-center/public-private-interface-dx/
https://docs.aws.amazon.com/datasync/latest/userguide/task-scheduling.html
Domain
Question 15 (Correct)
Which of the following options represents the best solution for the given
requirements?
Overall explanation
Correct option:
Incorrect options:
Reference:
https://docs.aws.amazon.com/efs/latest/ug/storage-classes.html
Domain
Question 16 (Correct)
A retail company wants to rollout and test a blue-green deployment for its
global application in the next 48 hours. Most of the customers use mobile
phones which are prone to Domain Name System (DNS) caching. The
company has only two days left for the annual Thanksgiving sale to
commence.
As a Solutions Architect, which of the following options would you
recommend to test the deployment on as many users as possible in the
given time frame?
Overall explanation
Correct option:
With AWS Global Accelerator, you can shift traffic gradually or all at once
between the blue and the green environment and vice-versa without being
subject to DNS caching on client devices and internet resolvers, traffic dials
and endpoint weights changes are effective within seconds.
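A rough sketch of shifting traffic with the traffic dials via boto3 (the endpoint group ARNs are placeholders; the Global Accelerator control-plane API is called in the us-west-2 Region):

```python
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

# Dial the blue endpoint group down and the green endpoint group up.
ga.update_endpoint_group(
    EndpointGroupArn="arn:aws:globalaccelerator::123456789012:accelerator/blue-group",   # placeholder
    TrafficDialPercentage=90.0,
)
ga.update_endpoint_group(
    EndpointGroupArn="arn:aws:globalaccelerator::123456789012:accelerator/green-group",  # placeholder
    TrafficDialPercentage=10.0,
)
```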
Incorrect options:
References:
https://aws.amazon.com/blogs/networking-and-content-delivery/using-aws-global-accelerator-to-achieve-blue-green-deployments
https://docs.aws.amazon.com/codedeploy/latest/userguide/deployments.html
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-weighted
Domain
Question 17 (Correct)
A developer has configured inbound traffic for the relevant ports in both the
Security Group of the Amazon EC2 instance as well as the network access
control list (network ACL) of the subnet for the Amazon EC2 instance. The
developer is, however, unable to connect to the service running on the
Amazon EC2 instance.
IAM Role defined in the Security Group is different from the IAM
Role that is given access in the network access control list (network
ACL)
Your answer is correct
Overall explanation
Correct option:
The designated ephemeral port then becomes the destination port for return traffic from the service, so outbound traffic to the ephemeral ports must be allowed in the network ACL.
By default, network ACLs allow all inbound and outbound traffic. If your
network ACL is more restrictive, then you need to explicitly allow traffic from
the ephemeral port range.
If you accept traffic from the internet, then you also must establish a route
through an internet gateway. If you accept traffic over VPN or AWS Direct
Connect, then you must establish a route through a virtual private gateway.
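For example, a network ACL entry allowing return traffic to the full ephemeral port range could be added like this (boto3 sketch, placeholder ACL ID):

```python
import boto3

ec2 = boto3.client("ec2")

ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",   # placeholder network ACL ID
    RuleNumber=140,
    Protocol="6",                           # TCP
    RuleAction="allow",
    Egress=True,                            # outbound rule for return traffic
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 1024, "To": 65535},  # ephemeral port range
)
```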
Incorrect options:
Reference:
https://aws.amazon.com/premiumsupport/knowledge-center/resolve-connection-sg-acl-inbound/
Domain
Question 18 (Incorrect)
A big data consulting firm needs to set up a data lake on Amazon S3 for a
Health-Care client. The data lake is split in raw and refined zones. For
compliance reasons, the source data needs to be kept for a minimum of 5
years. The source data arrives in the raw zone and is then processed via an
AWS Glue based extract, transform, and load (ETL) job into the refined zone.
The business analysts run ad-hoc queries only on the data in the refined
zone using Amazon Athena. The team is concerned about the cost of data
storage in both the raw and refined zones as the data is increasing at a rate
of 1 terabyte daily in each zone.
Correct selection
Setup a lifecycle policy to transition the raw zone data into Amazon
S3 Glacier Deep Archive after 1 day of object creation
Use AWS Glue ETL job to write the transformed data in the refined
zone using a compressed file format
Create an AWS Lambda function based job to delete the raw zone
data after 1 day
Your selection is incorrect
Use AWS Glue ETL job to write the transformed data in the refined
zone using CSV format
Overall explanation
Correct options:
Setup a lifecycle policy to transition the raw zone data into Amazon
S3 Glacier Deep Archive after 1 day of object creation
You can manage your objects so that they are stored cost-effectively
throughout their lifecycle by configuring their Amazon S3 Lifecycle. An S3
Lifecycle configuration is a set of rules that define actions that Amazon S3
applies to a group of objects. For example, you might choose to transition
objects to the Amazon S3 Standard-IA storage class 30 days after you
created them, or archive objects to the Amazon S3 Glacier storage class one
year after creating them.
For the given use-case, the raw zone consists of the source data, so it cannot
be deleted due to compliance reasons. Therefore, you should use a lifecycle
policy to transition the raw zone data into Amazon S3 Glacier Deep Archive
after 1 day of object creation.
Use AWS Glue ETL job to write the transformed data in the refined
zone using a compressed file format
AWS Glue is a fully managed extract, transform, and load (ETL) service that
makes it easy for customers to prepare and load their data for analytics. You
cannot transition the refined zone data into Amazon S3 Glacier Deep Archive
because it is used by the business analysts for ad-hoc querying. Therefore,
the best optimization is to have the refined zone data stored in a compressed
format via the Glue job. The compressed data would reduce the storage cost
incurred on the data in the refined zone.
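A sketch of the write step of such a Glue job using a compressed, columnar format (Parquet); the catalog database, table and S3 path are placeholders:

```python
from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

# Read the raw zone table registered in the Glue Data Catalog (placeholder names).
raw_dyf = glue_context.create_dynamic_frame.from_catalog(
    database="datalake_raw", table_name="events")

# Write to the refined zone as Parquet (columnar and compressed) instead of CSV.
glue_context.write_dynamic_frame.from_options(
    frame=raw_dyf,
    connection_type="s3",
    connection_options={"path": "s3://data-lake/refined-zone/"},
    format="parquet",
)
```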
Incorrect options:
Create an AWS Lambda function based job to delete the raw zone
data after 1 day - As mentioned in the use-case, the source data needs to
be kept for a minimum of 5 years for compliance reasons. Therefore the data
in the raw zone cannot be deleted after 1 day.
Use AWS Glue ETL job to write the transformed data in the refined
zone using CSV format - It is cost-optimal to write the data in the refined
zone using a compressed format instead of CSV format. The compressed
data would reduce the storage cost incurred on the data in the refined zone.
So, this option is incorrect.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
https://aws.amazon.com/glue/
Domain
Design High-Performing Architectures
Question 19 (Incorrect)
Which of the following services would you use for building a solution with the
LEAST amount of development effort? (Select two)
AWS Lambda
Correct selection
Amazon CloudWatch
Overall explanation
Correct options:
Amazon CloudWatch
You can use Amazon CloudWatch Alarms to send an email via Amazon SNS
whenever any of the Amazon EC2 instances breaches a certain threshold.
Hence both these options are correct.
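A sketch of such an alarm (the instance ID and SNS topic ARN are placeholders); the topic's email subscribers are notified when average CPU utilization stays above 80% for two 5-minute periods:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-i-0123456789abcdef0",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder instance
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder SNS topic
)
```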
Incorrect options:
AWS Lambda - With AWS Lambda, you can run code without provisioning or
managing servers. You pay only for the compute time that you consume—
there’s no charge when your code isn’t running. You can run code for
virtually any type of application or backend service—all with zero
administration. You cannot use AWS Lambda to monitor CPU utilization of
Amazon EC2 instances or send notification emails, hence this option is
incorrect.
AWS Step Functions - AWS Step Functions lets you coordinate multiple
AWS services into serverless workflows so you can build and update apps
quickly. Using Step Functions, you can design and run workflows that stitch
together services, such as AWS Lambda, AWS Fargate, and Amazon
SageMaker, into feature-rich applications. You cannot use Step Functions to
monitor CPU utilization of Amazon EC2 instances or send notification emails,
hence this option is incorrect.
References:
https://aws.amazon.com/cloudwatch/faqs/
https://aws.amazon.com/sns/
Domain
Question 20 (Correct)
You would like to mount a network file system on Linux instances, where files
will be stored and accessed frequently at first, and then infrequently. What
solution is the MOST cost-effective?
Overall explanation
Correct option:
Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully
managed elastic NFS file system for use with AWS Cloud services and on-
premises resources. Amazon EFS is a regional service storing data within and
across multiple Availability Zones (AZs) for high availability and durability.
Amazon EFS Infrequent Access (EFS IA) is a storage class that provides
price/performance that is cost-optimized for files, not accessed every day,
with storage prices up to 92% lower compared to Amazon EFS Standard.
Therefore, this is the correct option.
via - https://aws.amazon.com/efs/
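A minimal sketch of turning on the lifecycle management that moves files into EFS IA after they have not been accessed for 30 days (placeholder file system ID):

```python
import boto3

efs = boto3.client("efs")

efs.put_lifecycle_configuration(
    FileSystemId="fs-0123456789abcdef0",  # placeholder file system ID
    LifecyclePolicies=[{"TransitionToIA": "AFTER_30_DAYS"}],
)
```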
Incorrect options:
Amazon FSx for Lustre - Amazon FSx for Lustre makes it easy and cost-
effective to launch and run the world’s most popular high-performance file
system. It is used for workloads such as machine learning, high-performance
computing (HPC), video processing, and financial modeling. Amazon FSx
enables you to use Lustre file systems for any workload where storage speed
matters.
Amazon FSx for Lustre is a file system better suited for distributed computing for HPC (high-performance computing) and is very expensive.
References:
https://aws.amazon.com/efs/
https://aws.amazon.com/efs/features/infrequent-access/
Domain
Question 21 (Incorrect)
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Action": [
            "ec2:RunInstances"
         ],
         "Effect": "Allow",
         "Resource": "*",
         "Condition": {
            "StringEquals": {
               "aws:RequestedRegion": "eu-west-1"
            }
         }
      }
   ]
}
It allows running Amazon EC2 instances in any region when the API
call is originating from the eu-west-1 region
Correct answer
Overall explanation
Correct option:
You manage access in AWS by creating policies and attaching them to IAM
identities (users, groups of users, or roles) or AWS resources. A policy is an
object in AWS that, when associated with an identity or resource, defines
their permissions. AWS evaluates these policies when an IAM principal (user
or role) makes a request. Permissions in the policies determine whether the
request is allowed or denied. Most policies are stored in AWS as JSON
documents. AWS supports six types of policies: identity-based policies,
resource-based policies, permissions boundaries, Organizations service
control policies (SCPs), access control lists (ACLs), and session policies.
You can use the aws:RequestedRegion key to compare the AWS Region that
was called in the request with the Region that you specify in the policy. You
can use this global condition key to control which Regions can be requested.
Incorrect options:
It allows running Amazon EC2 instances in any region when the API
call is originating from the eu-west-1 region
These three options contradict the earlier details provided in the explanation.
To summarize, aws:RequestedRegion represents the target Region of the API call. So, we can only launch an Amazon EC2 instance in the eu-west-1 region, and we can make this API call from anywhere. Hence these options are incorrect.
References:
https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html
Domain
Question 22 (Correct)
Which of the following solutions would you suggest as the best fit for the
given use-case?
Overall explanation
Correct option:
A user pool is a user directory in Amazon Cognito. You can leverage Amazon
Cognito User Pools to either provide built-in user management or integrate
with external identity providers, such as Facebook, Twitter, Google+, and
Amazon. Whether your users sign-in directly or through a third party, all
members of the user pool have a directory profile that you can access
through a Software Development Kit (SDK).
via - https://docs.aws.amazon.com/wellarchitected/latest/serverless-applications-lens/identity-and-access-management.html
Incorrect options:
Use AWS Lambda authorizer for Amazon API Gateway - If you have an
existing Identity Provider (IdP), you can use an AWS Lambda authorizer for
Amazon API Gateway to invoke a Lambda function to authenticate/validate a
given user against your Identity Provider. You can use a Lambda authorizer
for custom validation logic based on identity metadata.
References:
https://docs.aws.amazon.com/wellarchitected/latest/serverless-applications-lens/identity-and-access-management.html
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-enable-cognito-user-pool.html
https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools.html
Domain
Question 23 (Correct)
Overall explanation
Correct option:
You can use Amazon CloudFront to improve the performance of your website.
CloudFront makes your website files (such as HTML, images, and video)
available from data centers around the world (called edge locations). When a
visitor requests a file from your website, CloudFront automatically redirects
the request to a copy of the file at the nearest edge location. This results in
faster download times than if the visitor had requested the content from a
data center that is located farther away. Therefore, this option is correct.
Incorrect options:
With AWS Lambda, you can run code without provisioning or managing
servers. You can't host a website on Lambda. Also, you can't have CloudFront
in front of Lambda. So this option is incorrect.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-custom-domain-walkthrough.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-cloudfront-walkthrough.html
Domain
Question 24 (Correct)
A financial services company wants a single log processing model for all the
log files (consisting of system logs, application logs, database logs, etc) that
can be processed in a serverless fashion and then durably stored for
downstream analytics. The company wants to use an AWS managed service
that automatically scales to match the throughput of the log data and
requires no ongoing administration.
Amazon EMR
AWS Lambda
Overall explanation
Correct option:
Amazon Kinesis Data Firehose is the easiest way to reliably load streaming
data into data lakes, data stores, and analytics tools. It can capture,
transform, and load streaming data into Amazon S3, Amazon Redshift,
Amazon Elasticsearch Service, and Splunk, enabling near real-time analytics
with existing business intelligence tools and dashboards you’re already using
today. It is a fully managed service that automatically scales to match the
throughput of your data and requires no ongoing administration. Therefore,
this is the correct option.
Please see this overview of how Kinesis Firehose works:
via - https://aws.amazon.com/kinesis/data-firehose/
Incorrect options:
Amazon EMR - Amazon EMR is the industry-leading cloud big data platform
for processing vast amounts of data using open source tools such as Apache
Spark, Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto.
With EMR you can run Petabyte-scale analysis at less than half of the cost of
traditional on-premises solutions and over 3x faster than standard Apache
Spark. Amazon EMR uses Hadoop, an open-source framework, to distribute
your data and processing across a resizable cluster of Amazon EC2
instances.
AWS Lambda - AWS Lambda lets you run code without provisioning or
managing servers. It cannot be used for production-grade serverless log
analytics.
Reference:
https://aws.amazon.com/kinesis/data-firehose/
Domain
Question 25 (Correct)
Create a deny rule for the malicious IP in the network access control
list (network ACL) associated with each of the instances
Overall explanation
Correct option:
AWS WAF is a web application firewall that helps protect your web
applications or APIs against common web exploits that may affect
availability, compromise security, or consume excessive resources. AWS WAF
gives you control over how traffic reaches your applications by enabling you
to create security rules that block common attack patterns, such as SQL
injection or cross-site scripting, and rules that filter out specific traffic
patterns you define.
via - https://aws.amazon.com/waf/
If you want to allow or block web requests based on the IP addresses that the
requests originate from, create one or more IP match conditions. An IP match
condition lists up to 10,000 IP addresses or IP address ranges that your
requests originate from. So, this option is correct.
Incorrect options:
Create a deny rule for the malicious IP in the network access control list (network ACL) associated with each of the instances - Network access control lists (network ACLs) are associated with subnets, not with individual instances. So this option is ruled out.
Reference:
https://docs.aws.amazon.com/waf/latest/developerguide/classic-web-acl-ip-conditions.html
Domain
Question 26 (Correct)
A company is developing a global healthcare application that requires the
least possible latency for database read/write operations from users in
several geographies across the world. The company has hired you as an AWS
Certified Solutions Architect Associate to build a solution using Amazon
Aurora that offers an effective recovery point objective (RPO) of seconds and
a recovery time objective (RTO) of a minute.
Overall explanation
Correct option:
Incorrect options:
Both these options work in a single AWS Region, so these options are
incorrect.
Reference:
https://aws.amazon.com/rds/aurora/global-database/
Domain
Question 27 (Correct)
Kinesis Agent cannot write to Amazon Kinesis Firehose for which the
delivery stream source is already set as Amazon Kinesis Data
Streams
Kinesis Agent can only write to Amazon Kinesis Data Streams, not to
Amazon Kinesis Firehose
Overall explanation
Correct option:
Kinesis Agent cannot write to Amazon Kinesis Firehose for which the
delivery stream source is already set as Amazon Kinesis Data
Streams
Amazon Kinesis Data Firehose is the easiest way to reliably load streaming
data into data lakes, data stores, and analytics tools. It is a fully managed
service that automatically scales to match the throughput of your data and
requires no ongoing administration. It can also batch, compress, transform,
and encrypt the data before loading it, minimizing the amount of storage
used at the destination and increasing security. When an Amazon Kinesis
Data Stream is configured as the source of a Kinesis Firehose delivery
stream, Firehose’s PutRecord and PutRecordBatch operations are disabled
and Kinesis Agent cannot write to Kinesis Firehose Delivery Stream directly.
Data needs to be added to the Amazon Kinesis Data Stream through the
Kinesis Data Streams PutRecord and PutRecords operations instead.
Therefore, this option is correct.
Incorrect options:
Kinesis Agent can only write to Amazon Kinesis Data Streams, not to
Amazon Kinesis Firehose - Kinesis Agent is a stand-alone Java software
application that offers an easy way to collect and send data to Amazon
Kinesis Data Streams or Amazon Kinesis Firehose. So this option is incorrect.
Amazon Kinesis Firehose delivery stream has reached its limit and
needs to be scaled manually - Amazon Kinesis Firehose is a fully managed
service that automatically scales to match the throughput of your data and
requires no ongoing administration. Therefore this option is not correct.
via - https://aws.amazon.com/kinesis/data-firehose/
References:
https://aws.amazon.com/kinesis/data-firehose/
https://docs.aws.amazon.com/streams/latest/dev/writing-with-agents.html
https://docs.aws.amazon.com/firehose/latest/dev/writing-with-agents.html
Domain
Question 28 (Correct)
You are establishing a monitoring solution for desktop systems that will be
sending telemetry data into AWS every 1 minute. Data for each system must
be processed in order, independently, and you would like to scale the
number of consumers to be possibly equal to the number of desktop systems
that are being monitored.
Use an Amazon Kinesis Data Stream, and send the telemetry data
with a Partition ID that uses the value of the Desktop ID
Use an Amazon Simple Queue Service (Amazon SQS) standard
queue, and send the telemetry data as is
Overall explanation
Correct option:
We, therefore, need to use an SQS FIFO queue. If we don't specify a GroupID,
then all the messages are in absolute order, but we can only have 1
consumer at most. To allow for multiple consumers to read data for each
Desktop application, and to scale the number of consumers, we should use
the "Group ID" attribute. So this is the correct option.
Incorrect options:
Use an Amazon Kinesis Data Stream, and send the telemetry data
with a Partition ID that uses the value of the Desktop ID - Amazon
Kinesis Data Streams (KDS) is a massively scalable and durable real-time
data streaming service. KDS can continuously capture gigabytes of data per
second from hundreds of thousands of sources such as website clickstreams,
database event streams, financial transactions, social media feeds, IT logs,
and location-tracking events. A Kinesis Data Stream would work and would
give us the data for each desktop application within shards, but we can only
have as many consumers as shards in Kinesis (which is in practice, much less
than the number of producers).
References:
https://aws.amazon.com/blogs/compute/solving-complex-ordering-challenges-with-amazon-sqs-fifo-queues/
https://aws.amazon.com/sqs/faqs/
https://aws.amazon.com/kinesis/data-streams/faqs/
Domain
Question 29 (Correct)
Overall explanation
Correct option:
via - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
Incorrect options:
via - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
Domain
Question 30 (Correct)
Overall explanation
Correct option:
The correct option is to deploy the instances in three Availability Zones (AZs)
and launch two instances in each Availability Zone (AZ). Even if one of the
AZs goes out of service, still we shall have 4 instances available and the
application can maintain an acceptable level of end-user experience.
Therefore, we can achieve high availability with just 6 instances in this case.
Incorrect options:
Deploy the instances in two Availability Zones (AZs). Launch two
instances in each Availability Zone (AZ) - When we launch two instances
in two AZs, we run the risk of falling below the minimum acceptable
threshold of 4 instances if one of the AZs fails. So this option is ruled out.
Domain
Question 31 (Correct)
A company has recently launched a new mobile gaming application that the
users are adopting rapidly. The company uses Amazon RDS MySQL as the
database. The engineering team wants an urgent solution to this issue where
the rapidly increasing workload might exceed the available database
storage.
Overall explanation
Correct option:
Enable storage auto-scaling for Amazon RDS MySQL
Storage autoscaling is triggered only when certain conditions are met, one of which is that at least six hours have passed since the last storage modification.
The maximum storage threshold is the limit that you set for autoscaling the
DB instance. You can't set the maximum storage threshold for autoscaling-
enabled instances to a value greater than the maximum allocated storage.
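A minimal sketch of enabling storage autoscaling on an existing instance by setting a maximum storage threshold (the identifier and threshold are placeholders):

```python
import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="gaming-mysql-prod",  # placeholder instance identifier
    MaxAllocatedStorage=1000,                  # maximum storage threshold, in GiB
    ApplyImmediately=True,
)
```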
Incorrect options:
Create read replica for Amazon RDS MySQL - Read replicas make it easy
to take advantage of supported engines' built-in replication functionality to
elastically scale out beyond the capacity constraints of a single DB instance
for read-heavy database workloads. You can create multiple read replicas for
a given source DB Instance and distribute your application’s read traffic
amongst them. This option acts as a distractor as read replicas cannot help
to automatically scale storage for the primary database.
Reference:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIOPS.StorageTypes.html
Domain
Question 32 (Correct)
Your company has a monthly big data workload, running for about 2 hours,
which can be efficiently distributed across multiple servers of various sizes,
with a variable number of CPUs. The solution for the workload should be able
to withstand server failures.
Overall explanation
Correct option:
The Spot Fleet selects the Spot Instance pools that meet your needs and
launches Spot Instances to meet the target capacity for the fleet. By default,
Spot Fleets are set to maintain target capacity by launching replacement
instances after Spot Instances in the fleet are terminated.
A Spot Instance is an unused Amazon EC2 instance that is available for less
than the On-Demand price. Spot Instances provide great cost efficiency, but
we need to select an instance type in advance. In this case, we want to use
the most cost-optimal option and leave the selection of the cheapest spot
instance to a Spot Fleet request, which can be optimized with
the lowestPrice strategy. So this is the correct option.
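A rough sketch of such a Spot Fleet request (the AMI, instance types, role ARN and capacity are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

ec2.request_spot_fleet(
    SpotFleetRequestConfig={
        "IamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-role",  # placeholder
        "AllocationStrategy": "lowestPrice",  # pick the cheapest pools that satisfy the request
        "TargetCapacity": 20,                 # instance count by default (no weights assigned)
        "Type": "maintain",                   # replace Spot Instances that are interrupted
        "LaunchSpecifications": [
            {"ImageId": "ami-0123456789abcdef0", "InstanceType": "c5.2xlarge"},
            {"ImageId": "ami-0123456789abcdef0", "InstanceType": "m5.2xlarge"},
        ],
    }
)
```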
Incorrect options:
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet.html#spot-fleet-allocation-strategy
Domain
Question 33 (Incorrect)
A silicon valley based startup has a two-tier architecture using Amazon EC2
instances for its flagship application. The web servers (listening on port 443),
which have been assigned security group A, are in public subnets across two
Availability Zones (AZs) and the MSSQL based database instances (listening
on port 1433), which have been assigned security group B, are in two private
subnets across two Availability Zones (AZs). The DevOps team wants to
review the security configurations of the application architecture.
For security group B: Add an inbound rule that allows traffic only
from security group A on port 443
For security group B: Add an inbound rule that allows traffic only
from all sources on port 1433
For security group A: Add an inbound rule that allows traffic from all
sources on port 443. Add an outbound rule with the destination as
security group B on port 443
For security group A: Add an inbound rule that allows traffic from all
sources on port 443. Add an outbound rule with the destination as
security group B on port 1433
Correct selection
For security group B: Add an inbound rule that allows traffic only
from security group A on port 1433
Overall explanation
Correct options:
For security group A: Add an inbound rule that allows traffic from all
sources on port 443. Add an outbound rule with the destination as
security group B on port 1433
For security group B: Add an inbound rule that allows traffic only
from security group A on port 1433
A security group acts as a virtual firewall that controls the traffic for one or
more instances. When you launch an instance, you can specify one or more
security groups; otherwise, we use the default security group. You can add
rules to each security group that allows traffic to or from its associated
instances. You can modify the rules for a security group at any time; the new
rules are automatically applied to all instances that are associated with the
security group. When we decide whether to allow traffic to reach an instance,
we evaluate all the rules from all the security groups that are associated with
the instance.
Security group rules are always permissive; you can't create rules that deny
access.
The MOST secure configuration for the given use case is:
For security group A: Add an inbound rule that allows traffic from all sources
on port 443. Add an outbound rule with the destination as security group B
on port 1433
The above rules make sure that the web servers accept HTTPS traffic from all sources on port 443 and only allow outbound traffic to the MSSQL servers in security group B on port 1433.
For security group B: Add an inbound rule that allows traffic only from
security group A on port 1433. The above rule makes sure that the MSSQL
servers only accept traffic from web servers in security group A on port 1433.
Incorrect options:
For security group A: Add an inbound rule that allows traffic from all
sources on port 443. Add an outbound rule with the destination as
security group B on port 443 - As the MSSQL based database instances
are listening on port 1433, therefore for security group A, the outbound rule
should be added on port 443 with the destination as security group B.
For security group B: Add an inbound rule that allows traffic only
from all sources on port 1433 - The inbound rule should allow traffic only
from security group A on port 1433. Allowing traffic from all sources will
compromise security.
For security group B: Add an inbound rule that allows traffic only
from security group A on port 443 - The inbound rule should allow traffic
only from security group A on port 1433 because the MSSQL based database
instances are listening on port 1433.
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html
Domain
Question 34 (Incorrect)
Correct selection
Mount Amazon Elastic File System (Amazon EFS) on all Amazon EC2
instances. Write a one time job to copy the videos from all Amazon
EBS volumes to Amazon EFS. Modify the application to use Amazon
EFS for storing the videos
Write a one time job to copy the videos from all Amazon EBS
volumes to Amazon S3 Glacier Deep Archive and then modify the
application to use Amazon S3 Glacier Deep Archive for storing the
videos
Write a one time job to copy the videos from all Amazon EBS
volumes to Amazon S3 and then modify the application to use
Amazon S3 standard for storing the videos
Write a one time job to copy the videos from all Amazon EBS
volumes to Amazon DynamoDB and then modify the application to
use Amazon DynamoDB for storing the videos
Write a one time job to copy the videos from all Amazon EBS
volumes to Amazon RDS and then modify the application to use
Amazon RDS for storing the videos
Overall explanation
Correct options:
Write a one time job to copy the videos from all Amazon EBS
volumes to Amazon S3 and then modify the application to use
Amazon S3 standard for storing the videos
Mount Amazon Elastic File System (Amazon EFS) on all Amazon EC2
instances. Write a one time job to copy the videos from all Amazon
EBS volumes to Amazon EFS. Modify the application to use Amazon
EFS for storing the videos
Amazon Elastic Block Store (EBS) is an easy to use, high-performance block
storage service designed for use with Amazon Elastic Compute Cloud (EC2)
for both throughput and transaction-intensive workloads at any scale.
Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully
managed elastic NFS file system for use with AWS Cloud services and on-
premises resources. It is built to scale on-demand to petabytes without
disrupting applications, growing and shrinking automatically as you add and
remove files, eliminating the need to provision and manage capacity to
accommodate growth.
As Amazon EBS volumes are attached locally to the Amazon EC2 instances,
therefore the uploaded videos are tied to specific Amazon EC2 instances.
Every time the user logs in, they are directed to a different instance and
therefore their videos get dispersed across multiple EBS volumes. The
correct solution is to use either Amazon S3 or Amazon EFS to store the user
videos.
Incorrect options:
Write a one time job to copy the videos from all Amazon EBS
volumes to Amazon S3 Glacier Deep Archive and then modify the
application to use Amazon S3 Glacier Deep Archive for storing the
videos - Amazon S3 Glacier Deep Archive is meant to be used for long term
data archival. It cannot be used to serve static content such as videos or
images via a web application. So this option is incorrect.
Write a one time job to copy the videos from all Amazon EBS
volumes to Amazon RDS and then modify the application to use
Amazon RDS for storing the videos - Amazon RDS is a relational
database and not the right candidate for storing videos.
Write a one time job to copy the videos from all Amazon EBS
volumes to Amazon DynamoDB and then modify the application to
use Amazon DynamoDB for storing the videos - Amazon DynamoDB is a
NoSQL database and not the right candidate for storing videos.
Reference:
https://aws.amazon.com/ebs/
Domain
Question 35 (Correct)
Use Amazon Cognito Authentication via Cognito User Pools for your
Amazon CloudFront distribution
Use Amazon Cognito Authentication via Cognito User Pools for your
Application Load Balancer
Overall explanation
Correct option:
Use Amazon Cognito Authentication via Cognito User Pools for your
Application Load Balancer
Exam Alert:
Incorrect options:
Use Amazon Cognito Authentication via Cognito User Pools for your
Amazon CloudFront distribution - You cannot directly integrate Cognito
User Pools with CloudFront distribution as you have to create a separate AWS
Lambda@Edge function to accomplish the authentication via Cognito User
Pools. This involves additional development effort, so this option is not the
best fit for the given use-case.
References:
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/listener-authenticate-users.html
https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools.html
https://aws.amazon.com/blogs/networking-and-content-delivery/authorizationedge-using-cookies-protect-your-amazon-cloudfront-content-from-being-downloaded-by-unauthenticated-users/
Domain
Question 36Correct
Which of the following IAM policies provides read-only access to the Amazon
S3 bucket mybucket and its content?
"Version":"2012-10-17",
"Statement":[
"Effect":"Allow",
"Action":[
"s3:ListBucket",
"s3:GetObject"
],
"Resource":"arn:aws:s3:::mybucket"
}
]
"Version":"2012-10-17",
"Statement":[
"Effect":"Allow",
"Action":[
"s3:ListBucket",
"s3:GetObject"
],
"Resource":"arn:aws:s3:::mybucket/*"
"Version":"2012-10-17",
"Statement":[
"Effect":"Allow",
"Action":[
"s3:ListBucket"
],
"Resource":"arn:aws:s3:::mybucket/*"
},
"Effect":"Allow",
"Action":[
"s3:GetObject"
],
"Resource":"arn:aws:s3:::mybucket"
"Version":"2012-10-17",
"Statement":[
"Effect":"Allow",
"Action":[
"s3:ListBucket"
],
"Resource":"arn:aws:s3:::mybucket"
},
"Effect":"Allow",
"Action":[
"s3:GetObject"
],
"Resource":"arn:aws:s3:::mybucket/*"
}
Overall explanation
Correct option:
"Version":"2012-10-17",
"Statement":[
"Effect":"Allow",
"Action":[
"s3:ListBucket"
],
"Resource":"arn:aws:s3:::mybucket"
},
"Effect":"Allow",
"Action":[
"s3:GetObject"
],
"Resource":"arn:aws:s3:::mybucket/*"
You manage access in AWS by creating policies and attaching them to IAM
identities (users, groups of users, or roles) or AWS resources. A policy is an
object in AWS that, when associated with an identity or resource, defines
their permissions. AWS evaluates these policies when an IAM principal (user
or role) makes a request. Permissions in the policies determine whether the
request is allowed or denied. Most policies are stored in AWS as JSON
documents. AWS supports six types of policies: identity-based policies,
resource-based policies, permissions boundaries, AWS Organizations service
control policies (SCPs), access control lists (ACLs), and session policies.
Incorrect options:
"Version":"2012-10-17",
"Statement":[
"Effect":"Allow",
"Action":[
"s3:ListBucket",
"s3:GetObject"
],
"Resource":"arn:aws:s3:::mybucket"
This option is incorrect as it provides read-only access only to the bucket, not
its contents.
"Version":"2012-10-17",
"Statement":[
"Effect":"Allow",
"Action":[
"s3:ListBucket",
"s3:GetObject"
],
"Resource":"arn:aws:s3:::mybucket/*"
"Version":"2012-10-17",
"Statement":[
"Effect":"Allow",
"Action":[
"s3:ListBucket"
],
"Resource":"arn:aws:s3:::mybucket/*"
},
"Effect":"Allow",
"Action":[
"s3:GetObject"
],
"Resource":"arn:aws:s3:::mybucket"
}
References:
https://aws.amazon.com/blogs/security/writing-iam-policies-how-to-grant-
access-to-an-amazon-s3-bucket/
https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
Domain
Question 37Correct
Instance B
Instance A
Instance C
Instance D
Overall explanation
Correct option:
Instance B
Per the default termination policy, the first priority is given to any allocation
strategy for On-Demand vs Spot instances. As no such information has been
provided for the given use-case, this criterion can be ignored. The next
priority is to consider any instance with the oldest launch template, unless
there is an instance that uses a launch configuration. This rules out
Instance A. Next, you need to consider the instance with the oldest
launch configuration. This implies Instance B will be selected for termination,
and Instance C is also ruled out as it has the newest launch
configuration. Instance D, which is closest to the next billing hour, is not
selected because this criterion is last in the order of priority.
Please see this note for a deep-dive on the default termination policy:
via - https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-instance-
termination.html
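As a rough illustration of the priority order described above (ignoring Availability Zone rebalancing and allocation strategies), a simplified selection could look like the following sketch with made-up instance data:

# Simplified model: launch configuration vs launch template, age of that
# configuration/template, and proximity to the next billing hour.
instances = [
    {"id": "A", "uses_launch_configuration": False, "config_age_days": 10, "seconds_to_billing_hour": 1800},
    {"id": "B", "uses_launch_configuration": True,  "config_age_days": 90, "seconds_to_billing_hour": 2400},
    {"id": "C", "uses_launch_configuration": True,  "config_age_days": 5,  "seconds_to_billing_hour": 3000},
    {"id": "D", "uses_launch_configuration": True,  "config_age_days": 30, "seconds_to_billing_hour": 60},
]

def pick_instance_to_terminate(candidates):
    # 1. Instances using a launch configuration are considered before those using a launch template
    pool = [i for i in candidates if i["uses_launch_configuration"]] or candidates
    # 2. Among those, prefer the oldest launch configuration/template
    oldest = max(i["config_age_days"] for i in pool)
    pool = [i for i in pool if i["config_age_days"] == oldest]
    # 3. Finally, break ties with the instance closest to the next billing hour
    return min(pool, key=lambda i: i["seconds_to_billing_hour"])

print(pick_instance_to_terminate(instances)["id"])  # prints "B"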
Incorrect options:
Instance A
Instance C
Instance D
Reference:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-instance-
termination.html
Domain
Question 38Correct
Provisioned Throughput
Bursting Throughput
General Purpose
Max I/O
Overall explanation
Correct option:
Max I/O
Max I/O performance mode is designed to scale to higher levels of aggregate
throughput and operations per second, with a tradeoff of slightly higher
latencies for file metadata operations, making it suitable for highly
parallelized workloads.
via - https://docs.aws.amazon.com/efs/latest/ug/performance.html
Incorrect options:
Provisioned Throughput
Bursting Throughput
These two options have been added as distractors as these refer to the
throughput mode of Amazon EFS and not the performance mode. There are
two throughput modes to choose from for your file system, Bursting
Throughput and Provisioned Throughput. With Bursting Throughput mode,
throughput on Amazon EFS scales as the size of your file system in the
standard storage class grows. With Provisioned Throughput mode, you can
instantly provision the throughput of your file system (in MiB/s) independent
of the amount of data stored.
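For illustration, a file system using the Max I/O performance mode (and, separately, the Provisioned Throughput mode) could be created with boto3 roughly as follows; the creation token and throughput figure are placeholders:

import boto3

efs = boto3.client("efs")

response = efs.create_file_system(
    CreationToken="my-app-efs",           # placeholder idempotency token
    PerformanceMode="maxIO",              # performance mode: generalPurpose or maxIO
    ThroughputMode="provisioned",         # throughput mode: bursting or provisioned
    ProvisionedThroughputInMibps=128.0,   # only relevant with provisioned throughput
    Encrypted=True,
)
print(response["FileSystemId"])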
References:
https://docs.aws.amazon.com/efs/latest/ug/performance.html
https://aws.amazon.com/efs/
Domain
Question 39Correct
A company has historically operated only in the us-east-1 region and stores
encrypted data in Amazon S3 using SSE-KMS. As part of enhancing its
security posture as well as improving the backup and recovery architecture,
the company wants to store the encrypted data in Amazon S3 that is
replicated into the us-west-1 AWS region. The security policies mandate that
the data must be encrypted and decrypted using the same key in both AWS
regions.
Change the AWS KMS single region key used for the current Amazon
S3 bucket into an AWS KMS multi-region key. Enable Amazon S3
batch replication for the existing data in the current bucket in us-
east-1 region into another bucket in us-west-1 region
Overall explanation
Correct option:
AWS KMS supports multi-region keys, which are AWS KMS keys in different
AWS regions that can be used interchangeably – as though you had the same
key in multiple regions. Each set of related multi-region keys has the same
key material and key ID, so you can encrypt data in one AWS region and
decrypt it in a different AWS region without re-encrypting or making a cross-
region call to AWS KMS.
You can use multi-region AWS KMS keys in Amazon S3. However, Amazon S3
currently treats multi-region keys as though they were single-region keys,
and does not use the multi-region features of the key.
For the given use case, you must create a new bucket in the us-east-1 region
with replication enabled from this new bucket into another bucket in us-west-
1 region. This would ensure that the data is available in another region for
backup and recovery purposes. You should also enable SSE-KMS encryption
on the new bucket in us-east-1 region by using an AWS KMS multi-region key
so that the data can be encrypted and decrypted using the same key in both
AWS regions. Since the existing data in the current bucket was encrypted
using the AWS KMS key restricted to the us-east-1 region, the data must be
copied to the new bucket in us-east-1 region for the replication as well as the
multi-region KMS key based encryption to kick in.
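A rough sketch of creating the multi-Region key with boto3 (the key description is a placeholder) and replicating it into us-west-1 could look like this:

import boto3

kms = boto3.client("kms", region_name="us-east-1")

# Create a multi-Region primary key in us-east-1
primary = kms.create_key(
    Description="Multi-Region key for S3 SSE-KMS replication",  # placeholder description
    MultiRegion=True,
)
primary_key_id = primary["KeyMetadata"]["KeyId"]

# Replicate the key into us-west-1; the replica shares the same key ID and key material,
# so objects encrypted in one Region can be decrypted in the other
kms.replicate_key(KeyId=primary_key_id, ReplicaRegion="us-west-1")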
"Version":"2012-10-17",
"Id":"PutObjectPolicy",
"Statement":[{
"Sid":"DenyUnEncryptedObjectUploads",
"Effect":"Deny",
"Principal":"*",
"Action":"s3:PutObject",
"Resource":"arn:aws:s3:::DOC-EXAMPLE-BUCKET1/*",
"Condition":{
"StringNotEquals":{
"s3:x-amz-server-side-encryption":"aws:kms"
}
}
The following example IAM policies show statements for using AWS KMS
server-side encryption with replication.
In this example, the encryption context is the object ARN. If you use SSE-KMS
with an Amazon S3 Bucket Key enabled, you must use the bucket ARN as the
encryption context.
"Version": "2012-10-17",
"Statement": [{
"Action": ["kms:Decrypt"],
"Effect": "Allow",
"Condition": {
"StringLike": {
"kms:ViaService": "s3.source-bucket-region.amazonaws.com",
"kms:EncryptionContext:aws:s3:arn": "arn:aws:s3:::source-
bucket-name/key-prefix1/*"
},
"Action": ["kms:Encrypt"],
"Effect": "Allow",
"Resource": "AWS KMS key ARNs (for the AWS Region of the
destination bucket 1). Used to encrypt object replicas created in destination
bucket 1.",
"Condition": {
"StringLike": {
"kms:ViaService": "s3.destination-bucket-1-
region.amazonaws.com",
"kms:EncryptionContext:aws:s3:arn": "arn:aws:s3:::destination-
bucket-name-1/key-prefix1/*"
},
"Action": ["kms:Encrypt"],
"Effect": "Allow",
"Resource": "AWS KMS key ARNs (for the AWS Region of destination
bucket 2). Used to encrypt object replicas created in destination bucket 2.",
"Condition": {
"StringLike": {
"kms:ViaService": "s3.destination-bucket-2-
region.amazonaws.com",
"kms:EncryptionContext:aws:s3:arn": "arn:aws:s3:::destination-
bucket-2-name/key-prefix1*"
Incorrect options:
Change the AWS KMS single region key used for the current Amazon
S3 bucket into an AWS KMS multi-region key. Enable Amazon S3
batch replication for the existing data in the current bucket in us-
east-1 region into another bucket in us-west-1 region - Amazon S3
batch replication can certainly be used to replicate the existing data in the
current bucket in us-east-1 region into another bucket in us-west-1 region.
However, an existing single-Region AWS KMS key cannot be converted into a
multi-Region key, so this option is incorrect.
References:
https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-
overview.html
https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-config-
for-kms-objects.html
Domain
Question 40Correct
A weather forecast agency collects key weather metrics across multiple cities
in the US and sends this data in the form of key-value pairs to AWS Cloud at
a one-minute frequency.
As a solutions architect, which of the following AWS services would you use
to build a solution for processing and then reliably storing this data with high
availability? (Select two)
Amazon DynamoDB
Amazon RDS
Amazon Redshift
Amazon ElastiCache
AWS Lambda
Overall explanation
Correct options:
AWS Lambda
With AWS Lambda, you can run code without provisioning or managing
servers. You pay only for the compute time that you consume—there’s no
charge when your code isn’t running. You can run code for virtually any type
of application or backend service—all with zero administration.
Amazon DynamoDB
AWS Lambda can be combined with DynamoDB to process and capture the
key-value data from the IoT sources described in the use-case. So both these
options are correct.
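For instance, an AWS Lambda handler along these lines could persist each incoming key-value record into a DynamoDB table. The table name and the event shape are assumptions made for illustration only:

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("WeatherMetrics")  # hypothetical table keyed on city and timestamp

def lambda_handler(event, context):
    # Assume the event carries a list of key-value metric records, one per city
    records = event.get("metrics", [])
    for record in records:
        table.put_item(
            Item={
                "city": record["city"],
                "timestamp": record["timestamp"],
                # numeric values stored as strings to avoid passing floats to DynamoDB
                "temperature": str(record["temperature"]),
                "humidity": str(record["humidity"]),
            }
        )
    return {"stored": len(records)}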
Incorrect options:
Amazon RDS - Amazon RDS is a relational database and is not the best fit for simply capturing high-frequency key-value data, so it is not the right choice here.
Amazon Redshift - Amazon Redshift is a data warehousing service meant for analytics on large datasets, not for ingesting and reliably storing key-value data.
Amazon ElastiCache - Amazon ElastiCache is an in-memory cache and is not meant to be the durable store of record for this data.
References:
https://aws.amazon.com/dynamodb/
https://aws.amazon.com/lambda/faqs/
Domain
Question 41Correct
Overall explanation
Correct option:
You have the following options for protecting data at rest in Amazon S3:
server-side encryption (SSE-S3, SSE-KMS, or SSE-C) and client-side encryption.
For the given use-case, the company wants to manage the encryption keys
via its custom application and let Amazon S3 manage the encryption,
therefore you must use Server-Side Encryption with Customer-Provided Keys
(SSE-C).
Please review these three options for Server Side Encryption on Amazon S3:
via - https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-
encryption.html
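A minimal sketch of using SSE-C from an application with boto3; the bucket name and the 256-bit key are placeholders (in practice the custom application would generate and safeguard the key):

import os
import boto3

s3 = boto3.client("s3")

BUCKET = "my-sse-c-bucket"       # placeholder bucket
customer_key = os.urandom(32)    # placeholder 256-bit key managed by the application

# S3 encrypts the object with the supplied key and does not store the key
s3.put_object(
    Bucket=BUCKET,
    Key="report.bin",
    Body=b"example content",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)

# The same key must be supplied again to read the object back
obj = s3.get_object(
    Bucket=BUCKET,
    Key="report.bin",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)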
Incorrect options:
Reference:
https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-
encryption.html
Domain
Question 42Correct
Given that all end users of the web application would be located in the US,
which of the following would be the MOST resource-efficient solution?
Overall explanation
Correct option:
Deploy the web-tier Amazon EC2 instances in two Availability Zones
(AZs), behind an Elastic Load Balancer. Deploy the Amazon RDS
MySQL database in Multi-AZ configuration
Incorrect options:
Amazon RDS Read Replicas provide enhanced performance and durability for
RDS database (DB) instances. They make it easy to elastically scale out
beyond the capacity constraints of a single DB instance for read-heavy
database workloads. Read replicas are meant to address scalability issues.
You cannot use read replicas for improving availability, so both these options
are incorrect.
Exam Alert:
Please review this comparison vis-a-vis Multi-AZ vs Read Replica for Amazon
RDS:
via - https://aws.amazon.com/rds/features/multi-az/
Reference:
https://aws.amazon.com/rds/features/multi-az/
Domain
Question 43Correct
Upon a security review of your AWS account, an AWS consultant has found
that a few Amazon RDS databases are unencrypted. As a Solutions Architect,
what steps must be taken to encrypt the Amazon RDS databases?
Enable Multi-AZ for the database, and make sure the standby
instance is encrypted. Stop the main database so that the standby
database kicks in, then disable Multi-AZ
Enable encryption on the Amazon RDS database using the AWS
Console
Create a Read Replica of the database, and encrypt the read replica.
Promote the read replica as a standalone database, and terminate
the previous database
Overall explanation
Correct option:
Amazon Relational Database Service (Amazon RDS) makes it easy to set up,
operate, and scale a relational database in the cloud. It provides cost-
efficient and resizable capacity while automating time-consuming
administration tasks such as hardware provisioning, database setup,
patching and backups.
You can encrypt your Amazon RDS DB instances and snapshots at rest by
enabling the encryption option for your Amazon RDS DB instances. Data that
is encrypted at rest includes the underlying storage for DB instances, its
automated backups, read replicas, and snapshots.
You can only enable encryption for an Amazon RDS DB instance when you
create it, not after the DB instance is created. However, because you can
encrypt a copy of an unencrypted DB snapshot, you can effectively add
encryption to an unencrypted DB instance. That is, you can create a
snapshot of your DB instance, and then create an encrypted copy of that
snapshot. So this is the correct option.
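A rough outline of the snapshot-copy approach with boto3; the identifiers and the KMS key alias are placeholders:

import boto3

rds = boto3.client("rds")

SOURCE_DB = "mydb-unencrypted"                   # placeholder DB instance identifier
SNAPSHOT = "mydb-snapshot"                       # placeholder snapshot identifiers
ENCRYPTED_SNAPSHOT = "mydb-snapshot-encrypted"

# 1. Snapshot the unencrypted DB instance
rds.create_db_snapshot(DBInstanceIdentifier=SOURCE_DB, DBSnapshotIdentifier=SNAPSHOT)
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier=SNAPSHOT)

# 2. Copy the snapshot with encryption enabled
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier=SNAPSHOT,
    TargetDBSnapshotIdentifier=ENCRYPTED_SNAPSHOT,
    KmsKeyId="alias/my-rds-key",                 # placeholder KMS key
)
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier=ENCRYPTED_SNAPSHOT)

# 3. Restore a new, encrypted DB instance from the encrypted snapshot copy
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="mydb-encrypted",
    DBSnapshotIdentifier=ENCRYPTED_SNAPSHOT,
)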
Incorrect options:
Create a Read Replica of the database, and encrypt the read replica.
Promote the read replica as a standalone database, and terminate
the previous database - If the master is not encrypted, the read replicas
cannot be encrypted. So this option is incorrect.
Enable Multi-AZ for the database, and make sure the standby
instance is encrypted. Stop the main database so that the standby
database kicks in, then disable Multi-AZ - Multi-AZ is to help with High
Availability, not encryption. So this option is incorrect.
Reference:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/
Overview.Encryption.html
Domain
Question 44Correct
Overall explanation
Correct option:
The processes that were previously running on the instance are resumed
Previously attached data volumes are reattached and the instance retains its
instance ID
via - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Hibernate.html
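A brief sketch of using hibernation with boto3; the AMI and instance IDs are placeholders, and hibernation must be enabled when the instance is launched:

import boto3

ec2 = boto3.client("ec2")

# Launch with hibernation enabled (requires an encrypted EBS root volume and a supported instance type)
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # placeholder AMI
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    HibernationOptions={"Configured": True},
)

# Hibernate instead of a plain stop: the RAM contents are saved to the EBS root volume
ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"], Hibernate=True)

# Starting the instance later resumes the previously running processes
ec2.start_instances(InstanceIds=["i-0123456789abcdef0"])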
Incorrect options:
Use Amazon EC2 User-Data - Amazon EC2 instance user data is the data
that you specified in the form of a configuration script while launching your
instance. Here, the problem is that the application takes 3 minutes to launch,
no matter what. EC2 user data won't help us because it's just here to help us
execute a list of commands, not speed them up.
Use Amazon EC2 Meta-Data - Amazon EC2 instance metadata is data
about your instance that you can use to configure or manage the running
instance. Instance metadata is divided into categories, for example, host
name, events, and security groups. The EC2 meta-data is a distractor and
can only help us determine some metadata attributes on our EC2 instances.
Creating an AMI may help with all the system dependencies, but it won't help
us with speeding up the application start time.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Hibernate.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-
metadata.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html
Domain
Question 45Correct
Setup another fleet of Amazon EC2 instances for the web tier in
the eu-west-1 region. Enable latency routing policy in Amazon Route
53
Setup another fleet of Amazon EC2 instances for the web tier in
the eu-west-1 region. Enable failover routing policy in Amazon
Route 53
Setup another fleet of Amazon EC2 instances for the web tier in
the eu-west-1 region. Enable geolocation routing policy in Amazon
Route 53
Overall explanation
Correct options:
Setup another fleet of Amazon EC2 instances for the web tier in
the eu-west-1 region. Enable latency routing policy in Amazon Route
53
With latency-based routing, Amazon Route 53 responds to DNS queries with
the resource that provides the lowest latency for the user, so users in Europe
will be served by the web tier in the eu-west-1 region.
Amazon Aurora read replicas can be used to scale out reads across regions.
This will improve the application performance for users in Europe. Therefore,
this is also a correct option for the given use-case.
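For illustration, the two latency records could be created with boto3 along these lines; the hosted zone ID, record name, and load balancer DNS names are placeholders:

import boto3

route53 = boto3.client("route53")

def latency_record(region, set_identifier, lb_dns_name):
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "CNAME",
            "SetIdentifier": set_identifier,
            "Region": region,  # enables latency-based routing for this record
            "TTL": 60,
            "ResourceRecords": [{"Value": lb_dns_name}],
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",  # placeholder hosted zone
    ChangeBatch={
        "Changes": [
            latency_record("us-east-1", "web-us", "alb-us.us-east-1.elb.amazonaws.com"),
            latency_record("eu-west-1", "web-eu", "alb-eu.eu-west-1.elb.amazonaws.com"),
        ]
    },
)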
Incorrect options:
Setup another fleet of Amazon EC2 instances for the web tier in
the eu-west-1 region. Enable geolocation routing policy in Amazon
Route 53 - Geolocation routing lets you choose the resources that serve
your traffic based on the geographic location of your users, meaning the
location that DNS queries originate from. For example, you might want all
queries from Europe to be routed to an ELB load balancer in the Frankfurt
region. You can also use geolocation routing to restrict the distribution of
content to only the locations in which you have distribution rights. You
cannot use geolocation routing to reduce latency, hence this option is
incorrect.
Setup another fleet of Amazon EC2 instances for the web tier in
the eu-west-1 region. Enable failover routing policy in Amazon
Route 53 - Failover routing lets you route traffic to a resource when the
resource is healthy or to a different resource when the first resource is
unhealthy. The primary and secondary records can route traffic to anything
from an Amazon S3 bucket that is configured as a website to a complex tree
of records. You cannot use failover routing to reduce latency, hence this
option is incorrect.
References:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-
policy.html
https://aws.amazon.com/blogs/aws/new-cross-region-read-replicas-for-
amazon-aurora/
Domain
Question 46Correct
Overall explanation
Correct option:
AWS KMS is a service that combines secure, highly available hardware and
software to provide a key management system scaled for the cloud. Amazon
S3 uses server-side encryption with AWS KMS (SSE-KMS) to encrypt your S3
object data. Also, when SSE-KMS is requested for the object, the S3
checksum as part of the object's metadata, is stored in encrypted form.
If you use KMS keys, you can use AWS KMS through the AWS Management
Console or the AWS KMS API to do the following:
2. Define the policies that control how and by whom KMS keys can be
used.
3. Audit their usage to prove that they are being used correctly. Auditing
is supported by the AWS KMS API, but not by the AWS Management Console.
When you enable automatic key rotation for a KMS key, AWS KMS generates
new cryptographic material for the KMS key every year.
via - https://docs.aws.amazon.com/kms/latest/developerguide/rotate-
keys.html
For the given use case, you can set up server-side encryption with AWS KMS
Keys (SSE-KMS) with automatic key rotation.
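As a sketch (the key ID and bucket name are placeholders), automatic rotation can be enabled on the KMS key and the key set as the bucket's default SSE-KMS encryption with boto3:

import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

KEY_ID = "1234abcd-12ab-34cd-56ef-1234567890ab"  # placeholder customer managed KMS key
BUCKET = "my-encrypted-bucket"                   # placeholder bucket

# Rotate the key material automatically every year
kms.enable_key_rotation(KeyId=KEY_ID)

# Use the key as the bucket's default encryption (SSE-KMS)
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": KEY_ID,
                }
            }
        ]
    },
)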
Incorrect options:
References:
https://docs.aws.amazon.com/kms/latest/developerguide/
concepts.html#master_keys
https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html
https://docs.aws.amazon.com/AmazonS3/latest/userguide/serv-side-
encryption.html
Domain
Question 47Correct
A company has many Amazon Virtual Private Clouds (Amazon VPCs) in various
accounts that need to be connected in a star network with one another and
with on-premises networks through AWS Direct Connect.
AWS PrivateLink
Overall explanation
Correct option:
AWS Transit Gateway
AWS Transit Gateway is a service that enables customers to connect their
Amazon Virtual Private Clouds (VPCs) and their on-premises networks to a
single gateway. It acts as the hub of a hub-and-spoke (star) network, works
across AWS accounts, and integrates with a Direct Connect gateway to bring
the on-premises networks into the same hub.
via - https://aws.amazon.com/transit-gateway/
Incorrect options:
VPC Peering helps connect two VPCs and is not transitive. It would require
creating many peering connections between all the VPCs to have them
connect. Even then, this alone wouldn't work, because we would also need to
connect the on-premises data center through Direct Connect and a Direct
Connect Gateway, but that's not mentioned in this option.
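For illustration, a transit gateway hub and one VPC attachment could be created with boto3 as sketched below (VPC and subnet IDs are placeholders); in a multi-account setup, the transit gateway would then be shared with the other accounts through AWS Resource Access Manager:

import boto3

ec2 = boto3.client("ec2")

# Create the hub of the star network
tgw = ec2.create_transit_gateway(Description="Hub for all VPCs and on-premises networks")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach one spoke VPC (repeated for every VPC, including those shared from other accounts)
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123456789abcdef0",           # placeholder VPC ID
    SubnetIds=["subnet-0123456789abcdef0"],  # placeholder subnet, one per Availability Zone in practice
)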
References:
https://aws.amazon.com/transit-gateway/
https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html
https://docs.aws.amazon.com/AWSEC2/latest/APIReference/
API_CreateVpnGateway.html
Domain
Question 48Correct
Overall explanation
Correct option:
Amazon FSx for Windows File Server provides fully managed, highly reliable
file storage that is accessible over the industry-standard Server Message
Block (SMB) protocol. It is built on Windows Server, delivering a wide range
of administrative features such as user quotas, end-user file restore, and
Microsoft Active Directory (AD) integration. The Distributed File System
Replication (DFSR) service is a new multi-master replication engine that is
used to keep folders synchronized on multiple servers. Amazon FSx supports
the use of Microsoft’s Distributed File System (DFS) to organize shares into a
single folder structure up to hundreds of PB in size.
Amazon FSx for Windows is a perfect distributed file system, with replication
capability, and can be mounted on Windows.
via - https://aws.amazon.com/fsx/windows/
Incorrect options:
Amazon FSx for Lustre - Amazon FSx for Lustre makes it easy and cost-
effective to launch and run the world’s most popular high-performance file
system. It is used for workloads such as machine learning, high-performance
computing (HPC), video processing, and financial modeling. The open-source
Lustre file system is designed for applications that require fast storage –
where you want your storage to keep up with your compute. Amazon FSx
enables you to use Lustre file systems for any workload where storage speed
matters. FSx for Lustre integrates with Amazon S3, making it easy to process
data sets with the Lustre file system. Amazon FSx for Lustre is for Linux only,
so this option is incorrect.
Amazon Elastic File System (Amazon EFS) - Amazon Elastic File System
(Amazon EFS) provides a simple, scalable, fully managed elastic NFS file
system for use with AWS Cloud services and on-premises resources. It is built
to scale on-demand to petabytes without disrupting applications, growing
and shrinking automatically as you add and remove files, eliminating the
need to provision and manage capacity to accommodate growth. Amazon
EFS is a network file system but for Linux only, so this option is incorrect.
Amazon Simple Storage Service (Amazon S3) - Amazon Simple Storage
Service (Amazon S3) is an object storage service that offers industry-leading
scalability, data availability, security, and performance. Amazon S3 cannot
be mounted as a file system on Windows, so this option is incorrect.
References:
https://docs.microsoft.com/en-us/previous-versions/windows/desktop/dfsr/
dfsr-overview
https://aws.amazon.com/fsx/windows/
https://aws.amazon.com/fsx/lustre/
Domain
Question 49Incorrect
Which option below helps change this default behavior to ensure that the
volume persists even after the instance terminates?
Correct answer
Overall explanation
Correct option:
By default, the DeleteOnTermination attribute of the root Amazon EBS volume
is set to true, so the root volume is deleted when the instance terminates.
Setting the DeleteOnTermination attribute of the volume to false ensures that
the volume persists even after the instance terminates.
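As an illustration, the attribute can be switched off for the root volume of a running instance with boto3; the instance ID and device name below are placeholders:

import boto3

ec2 = boto3.client("ec2")

# Keep the root EBS volume when the instance is terminated
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",        # placeholder instance ID
    BlockDeviceMappings=[
        {
            "DeviceName": "/dev/xvda",       # placeholder root device name
            "Ebs": {"DeleteOnTermination": False},
        }
    ],
)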
Incorrect options:
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/
RootDeviceStorage.html
Domain
Question 50Incorrect
Correct answer
Create an IAM role for the AWS Lambda function that grants access
to the Amazon S3 bucket. Set the IAM role as the AWS Lambda
function's execution role. Make sure that the bucket policy also
grants access to the AWS Lambda function's execution role
The Amazon S3 bucket owner should make the bucket public so that
it can be accessed by the AWS Lambda function in the other AWS
account
Overall explanation
Correct option:
Create an IAM role for the AWS Lambda function that grants access
to the Amazon S3 bucket. Set the IAM role as the AWS Lambda
function's execution role. Make sure that the bucket policy also
grants access to the AWS Lambda function's execution role
If the IAM role that you create for the Lambda function is in the same AWS
account as the bucket, then you don't need to grant Amazon S3 permissions
on both the IAM role and the bucket policy. Instead, you can grant the
permissions on the IAM role and then verify that the bucket policy doesn't
explicitly deny access to the Lambda function role. If the IAM role and the
bucket are in different accounts, then you need to grant Amazon S3
permissions on both the IAM role and the bucket policy. Therefore, this is the
right way of giving access to AWS Lambda for the given use-case.
via - https://aws.amazon.com/premiumsupport/knowledge-center/lambda-
execution-role-s3-bucket/
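A sketch of the bucket-policy side, applied from the bucket owner's account with boto3; the bucket name, account ID, and execution role name are placeholders:

import json
import boto3

s3 = boto3.client("s3")

BUCKET = "cross-account-bucket"  # placeholder bucket owned by account A
LAMBDA_ROLE_ARN = "arn:aws:iam::222233334444:role/my-lambda-execution-role"  # placeholder role in account B

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Allow the Lambda function's execution role from the other account
            "Effect": "Allow",
            "Principal": {"AWS": LAMBDA_ROLE_ARN},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
        }
    ],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(bucket_policy))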
Incorrect options:
AWS Lambda cannot access resources across AWS accounts. Use
Identity federation to work around this limitation of Lambda - This is
an incorrect statement, used only as a distractor.
Create an IAM role for the AWS Lambda function that grants access
to the Amazon S3 bucket. Set the IAM role as the Lambda function's
execution role and that would give the AWS Lambda function cross-
account access to the Amazon S3 bucket - When the execution role of
AWS Lambda and Amazon S3 bucket to be accessed are from different
accounts, then you need to grant Amazon S3 bucket access permissions to
the IAM role and also ensure that the bucket policy grants access to the AWS
Lambda function's execution role.
The Amazon S3 bucket owner should make the bucket public so that
it can be accessed by the AWS Lambda function in the other AWS
account - Making the Amazon S3 bucket public for the given use-case will be
considered as a security bad practice. It's usually done for very few use-
cases such as hosting a website on Amazon S3. Therefore this option is
incorrect.
Reference:
https://aws.amazon.com/premiumsupport/knowledge-center/lambda-
execution-role-s3-bucket/
Domain
Question 51Incorrect
Correct answer
Overall explanation
Correct option:
Primary DB instance – Supports read and write operations, and performs all
of the data modifications to the cluster volume. Each Aurora DB cluster has
one primary DB instance.
Aurora Replicas have two main purposes. You can issue queries to them to
scale the read operations for your application. You typically do so by
connecting to the reader endpoint of the cluster. That way, Aurora can
spread the load for read-only connections across as many Aurora Replicas as
you have in the cluster. Aurora Replicas also help to increase availability. If
the writer instance in a cluster becomes unavailable, Aurora automatically
promotes one of the reader instances to take its place as the new writer.
Incorrect options:
References:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/
Aurora.Overview.html
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/
Concepts.AuroraHighAvailability.html
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/
Aurora.Overview.Endpoints.html
Domain
Question 52Correct
Which of the following would you identify as correct regarding the data
transfer charges for Amazon RDS read replicas?
There are no data transfer charges for replicating data across AWS
Regions
There are data transfer charges for replicating data across AWS
Regions
There are data transfer charges for replicating data within the same
AWS Region
There are data transfer charges for replicating data within the same
Availability Zone (AZ)
Overall explanation
Correct option:
There are data transfer charges for replicating data across AWS
Regions
Amazon RDS Read Replicas provide enhanced performance and durability for
Amazon RDS database (DB) instances. They make it easy to elastically scale
out beyond the capacity constraints of a single DB instance for read-heavy
database workloads.
A read replica is billed as a standard DB Instance and at the same rates. You
are not charged for the data transfer incurred in replicating data between
your source DB instance and read replica within the same AWS Region.
via - https://aws.amazon.com/rds/faqs/
Incorrect options:
There are data transfer charges for replicating data within the same
Availability Zone (AZ)
There are data transfer charges for replicating data within the same
AWS Region
There are no data transfer charges for replicating data across AWS
Regions
Reference:
https://aws.amazon.com/rds/faqs/
Domain
Question 53Correct
Overall explanation
Correct option:
Amazon RDS Read Replicas provide enhanced performance and durability for
RDS database (DB) instances. They make it easy to elastically scale out
beyond the capacity constraints of a single DB instance for read-heavy
database workloads. For the MySQL, MariaDB, PostgreSQL, Oracle, and SQL
Server database engines, Amazon RDS creates a second DB instance using a
snapshot of the source DB instance. It then uses the engines' native
asynchronous replication to update the read replica whenever there is a
change to the source DB instance. Read replicas can be within an Availability
Zone, Cross-AZ, or Cross-Region.
via - https://aws.amazon.com/rds/features/read-replicas/
Incorrect options:
References:
https://aws.amazon.com/rds/features/read-replicas/
Domain
Question 54Correct
Which of the following would you recommend to securely share the database
with the auditor?
Export the database contents to text files, store the files in Amazon
S3, and create a new IAM user for the auditor with access to that
bucket
Overall explanation
Correct option:
Making an encrypted snapshot of the database will give the auditor a copy of
the database, as required for the given use case.
Incorrect options:
Export the database contents to text files, store the files in Amazon
S3, and create a new IAM user for the auditor with access to that
bucket - This solution is feasible though not optimal. It requires a lot of
unnecessary work and is difficult to audit when such bulk data is exported
into text files.
Reference:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/
USER_ShareSnapshot.html
Domain
Question 55Correct
Overall explanation
Correct option:
Aurora backs up your cluster volume automatically and retains restore data
for the length of the backup retention period. Aurora backups are continuous
and incremental so you can quickly restore to any point within the backup
retention period. No performance impact or interruption of database service
occurs as backup data is being written.
via
- https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.M
anaging.Backups.html
Automated backups occur daily during the preferred backup window. If the
backup requires more time than allotted to the backup window, the backup
continues after the window ends, until it finishes. The backup window can't
overlap with the weekly maintenance window for the DB cluster. Aurora
backups are continuous and incremental, but the backup window is used to
create a daily system backup that is preserved within the backup retention
period. The latest restorable time for a DB cluster is the most recent point at
which you can restore your DB cluster, typically within 5 minutes of the
current time.
For the given use case, you can create the dev database by restoring from
the automated backups of Amazon Aurora.
Incorrect options:
A read replica is only meant to serve read traffic. The primary purpose of the
read replica is to replicate the data in the primary DB instance. A read replica
cannot be used as a dev database because it does not allow any database
write operations.
References:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/
Aurora.Managing.Backups.html
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/
SQLServer.ReadReplicas.html
Domain
Design Resilient Architectures
Question 56Correct
Save the AWS credentials (access key Id and secret access token) in
a configuration file within the application code on the Amazon EC2
instances. Amazon EC2 instances can use these credentials to
access Amazon S3 and Amazon DynamoDB
Configure AWS CLI on the Amazon EC2 instances using a valid IAM
user's credentials. The application code can then invoke shell scripts
to access Amazon S3 and Amazon DynamoDB via AWS CLI
Attach the appropriate IAM role to the Amazon EC2 instance profile
so that the instance can access Amazon S3 and Amazon DynamoDB
Overall explanation
Correct option:
Attach the appropriate IAM role to the Amazon EC2 instance profile
so that the instance can access Amazon S3 and Amazon DynamoDB
You should use an IAM role to manage temporary credentials for
applications that run on an Amazon EC2 instance. When you use a role, you
don't have to distribute long-term credentials (such as a username and
password or access keys) to an Amazon EC2 instance. The role supplies
temporary permissions that applications can use when they make calls to
other AWS resources. When you launch an Amazon EC2 instance, you specify
an IAM role to associate with the instance. Applications that run on the
instance can then use the role-supplied temporary credentials to sign API
requests. Therefore, this option is correct.
via
- https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-
role-ec2.html
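With the role attached to the instance profile, the application code on the instance simply creates its clients without any embedded credentials; the SDK resolves the temporary credentials from the instance metadata automatically. The bucket and table names below are placeholders:

import boto3

# No access keys in the code or on disk: boto3 picks up the temporary
# credentials supplied by the IAM role attached to the instance profile.
s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")

s3.list_objects_v2(Bucket="my-app-bucket")                   # placeholder bucket
dynamodb.Table("my-app-table").get_item(Key={"id": "123"})   # placeholder table and key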
Incorrect options:
Save the AWS credentials (access key Id and secret access token) in
a configuration file within the application code on the Amazon EC2
instances. Amazon EC2 instances can use these credentials to
access Amazon S3 and Amazon DynamoDB
Configure AWS CLI on the Amazon EC2 instances using a valid IAM
user's credentials. The application code can then invoke shell scripts
to access Amazon S3 and Amazon DynamoDB via AWS CLI
Keeping the AWS credentials (encrypted or plain text) on the Amazon EC2
instance is a bad security practice, therefore these options that rely on
storing or configuring AWS credentials on the instance are incorrect.
Reference:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-
ec2.html
Domain
Question 57Correct
You have multiple AWS accounts within a single AWS Region managed by
AWS Organizations and you would like to ensure all Amazon EC2 instances in
all these accounts can communicate privately. Which of the following
solutions provides the capability at the CHEAPEST cost?
Create an AWS Transit Gateway and link all the virtual private cloud
(VPCs) in all the accounts together
Overall explanation
Correct option:
AWS Resource Access Manager (RAM) is a service that enables you to easily
and securely share AWS resources with any AWS account or within your AWS
Organization. You can share AWS Transit Gateways, Subnets, AWS License
Manager configurations, and Amazon Route 53 Resolver rules resources with
RAM. RAM eliminates the need to create duplicate resources in multiple
accounts, reducing the operational overhead of managing those resources in
every single account you own. You can create resources centrally in a multi-
account environment, and use RAM to share those resources across accounts
in three simple steps: create a Resource Share, specify resources, and
specify accounts. RAM is available to you at no additional charge.
The correct solution is to share the subnet(s) within a VPC using RAM. This
will allow all Amazon EC2 instances to be deployed in the same VPC
(although from different accounts) and easily communicate with one another.
via - https://aws.amazon.com/ram/
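For illustration, sharing a subnet from the owning account with boto3 might look like the following sketch; the subnet ARN and the member account IDs are placeholders:

import boto3

ram = boto3.client("ram")

SUBNET_ARN = "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0123456789abcdef0"  # placeholder

ram.create_resource_share(
    name="shared-app-subnet",
    resourceArns=[SUBNET_ARN],
    principals=["222233334444", "333344445555"],  # placeholder member account IDs
    allowExternalPrincipals=False,                # keep sharing within the AWS Organization
)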
Incorrect options:
Create a Private Link between all the Amazon EC2 instances - AWS
PrivateLink simplifies the security of data shared with cloud-based
applications by eliminating the exposure of data to the public Internet. AWS
PrivateLink provides private connectivity between VPCs, AWS services, and
on-premises applications, securely on the Amazon network. Private Link is a
distractor in this question. Private Link is leveraged to create a private
connection between an application that is fronted by an NLB in an account,
and an Elastic Network Interface (ENI) in another account, without the need
of VPC peering and allowing the connections between the two to remain
within the AWS network.
Create an AWS Transit Gateway and link all the virtual private cloud
(VPCs) in all the accounts together - AWS Transit Gateway is a service
that enables customers to connect their Amazon Virtual Private Clouds
(VPCs) and their on-premises networks to a single gateway. A Transit
Gateway will work but will be an expensive solution. Here we want to
minimize cost.
References:
https://aws.amazon.com/ram/
https://aws.amazon.com/privatelink/
https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html
https://aws.amazon.com/transit-gateway/
Domain
Question 58Correct
By default, user data runs only during the boot cycle when you first
launch an instance
By default, scripts entered as user data are executed with root user
privileges
Overall explanation
Correct options:
By default, scripts entered as user data are executed with root user
privileges
Scripts entered as user data are executed as the root user, hence do not
need the sudo command in the script. Any files you create will be owned by
root; if you need non-root users to have file access, you should modify the
permissions accordingly in the script.
By default, user data runs only during the boot cycle when you first
launch an instance
By default, user data scripts and cloud-init directives run only during the boot
cycle when you first launch an instance. You can update your configuration to
ensure that your user data scripts and cloud-init directives run every time
you restart your instance.
Incorrect options:
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
Domain
Question 59Correct
Amazon ElastiCache
Amazon Neptune
Overall explanation
Correct option:
Amazon Relational Database Service (Amazon RDS) makes it easy to set up,
operate, and scale a relational database in the cloud. It provides cost-
efficient and resizable capacity while automating time-consuming
administration tasks such as hardware provisioning, database setup,
patching, and backups. RDS allows you to create, read, update, and delete
records without any item lock or ambiguity. All RDS transactions must be
ACID compliant or be Atomic, Consistent, Isolated, and Durable to ensure
data integrity.
Incorrect options:
References:
https://aws.amazon.com/relational-database/
https://aws.amazon.com/rds/
https://aws.amazon.com/neptune/
https://docs.aws.amazon.com/AmazonS3/latest/dev/
Introduction.html#ConsistencyModel
Domain
Question 60Incorrect
"Version":"2012-10-17",
"Id":"EC2TerminationPolicy",
"Statement":[
"Effect":"Deny",
"Action":"ec2:*",
"Resource":"*",
"Condition":{
"StringNotEquals":{
"ec2:Region":"us-west-1"
},
"Effect":"Allow",
"Action":"ec2:TerminateInstances",
"Resource":"*",
"Condition":{
"IpAddress":{
"aws:SourceIp":"10.200.200.0/24"
Correct answer
Overall explanation
Correct option:
The given policy denies all EC2 actions on all resources when the Region of
the underlying resource is not us-west-1. The policy allows the
ec2:TerminateInstances action on all resources when the source IP address is
in the CIDR range 10.200.200.0/24, therefore it would allow a user with the
source IP 10.200.200.200 to terminate the Amazon EC2 instance.
Incorrect options:
Reference:
https://docs.aws.amazon.com/IAM/latest/UserGuide/
reference_policies_evaluation-logic.html
Domain
Design Secure Architectures
Question 61Incorrect
Spot Instances
Dedicated Hosts
On-Demand Instances
Correct answer
Dedicated Instances
Overall explanation
Correct option:
Dedicated Instances
Dedicated Instances are Amazon EC2 instances that run in a virtual private
cloud (VPC) on hardware that's dedicated to a single customer. Dedicated
Instances that belong to different AWS accounts are physically isolated at a
hardware level, even if those accounts are linked to a single-payer account.
However, Dedicated Instances may share hardware with other instances
from the same AWS account that are not Dedicated Instances.
A Dedicated Host is also a physical server that's dedicated for your use. With
a Dedicated Host, you have visibility and control over how instances are
placed on the server.
Incorrect options:
via - https://aws.amazon.com/ec2/pricing/
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dedicated-
instance.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-
purchasing-options.html
Domain
You have a team of developers in your company, and you would like to
ensure they can quickly experiment with AWS Managed Policies by attaching
them to their accounts, but you would like to prevent them from doing an
escalation of privileges, by granting themselves
the AdministratorAccess managed policy. How should you proceed?
Put the developers into an IAM group, and then define an IAM
permission boundary on the group that will restrict the managed
policies they can attach to themselves
Correct answer
Overall explanation
Correct option:
An IAM permissions boundary sets the maximum permissions that identity-based
policies can grant to an IAM entity, and it can be attached to an IAM user or
role, but not to an IAM group. By defining a permissions boundary on each
developer's IAM user, the developers can still attach AWS managed policies to
experiment with, but they cannot escalate their privileges by attaching
the AdministratorAccess managed policy.
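As a sketch (the user name and the boundary policy ARN are placeholders), a permissions boundary can be attached to a developer's IAM user with boto3:

import boto3

iam = boto3.client("iam")

# The boundary policy caps the maximum permissions the user can ever have,
# regardless of which managed policies the user attaches to themselves.
iam.put_user_permissions_boundary(
    UserName="developer-1",  # placeholder IAM user
    PermissionsBoundary="arn:aws:iam::111122223333:policy/DeveloperBoundary",  # placeholder boundary policy
)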
Incorrect options:
Put the developers into an IAM group, and then define an IAM
permission boundary on the group that will restrict the managed
policies they can attach to themselves - An IAM permission boundary
cannot be attached to an IAM group; it can only be applied to IAM users or
roles, so this option is incorrect.
References:
https://docs.aws.amazon.com/IAM/latest/UserGuide/
access_policies_boundaries.html
https://docs.aws.amazon.com/organizations/latest/userguide/
orgs_manage_policies_scp.html
https://docs.aws.amazon.com/IAM/latest/UserGuide/
access_policies_boundaries.html
Domain
Question 63Correct
Send an invite to the new organization. Accept the invite to the new
organization from the member account. Remove the member
account from the old organization
Overall explanation
Correct option:
Remove the member account from the old organization. Send an
invite to the member account from the new Organization. Accept the
invite to the new organization from the member account
AWS Organizations helps you centrally govern your environment as you grow
and scale your workloads on AWS. Using AWS Organizations, you can
automate account creation, create groups of accounts to reflect your
business needs, and apply policies for these groups for governance. You can
also simplify billing by setting up a single payment method for all of your
AWS accounts. Through integrations with other AWS services, you can use
Organizations to define central configurations and resource sharing across
accounts in your organization.
To migrate accounts from one organization to another, you must have root or
IAM access to both the member and master accounts. Here are the steps to
follow: 1. Remove the member account from the old organization 2. Send an
invite to the member account from the new Organization 3. Accept the invite
to the new organization from the member account
Incorrect options:
Send an invite to the new organization. Accept the invite to the new
organization from the member account. Remove the member
account from the old organization
These options contradict the steps described earlier for account
migration from one organization to another.
References:
https://aws.amazon.com/organizations/
https://aws.amazon.com/premiumsupport/knowledge-center/organizations-
move-accounts/
Domain
Overall explanation
Correct option:
Please see this detailed overview of various types of Amazon EC2 instances
from a pricing perspective:
via - https://aws.amazon.com/ec2/pricing/
Incorrect options:
Reference:
https://aws.amazon.com/ec2/pricing/
Domain
Question 65Correct
The engineering team at a logistics company has noticed that the Auto
Scaling group (ASG) is not terminating an unhealthy Amazon EC2 instance.
A custom health check might have failed. The Auto Scaling group
(ASG) does not terminate instances that are set unhealthy by
custom checks
The health check grace period for the instance has not expired
The instance has failed the Elastic Load Balancing (ELB) health
check status
Overall explanation
Correct options:
The health check grace period for the instance has not expired
Amazon EC2 Auto Scaling doesn't terminate an instance that came into
service based on Amazon EC2 status checks and Elastic Load Balancing (ELB)
health checks until the health check grace period expires.
via
- https://docs.aws.amazon.com/autoscaling/ec2/userguide/healthcheck.html#
health-check-grace-period
Amazon EC2 Auto Scaling does not immediately terminate instances with an
Impaired status. Instead, Amazon EC2 Auto Scaling waits a few minutes for
the instance to recover. Amazon EC2 Auto Scaling might also delay or not
terminate instances that fail to report data for status checks. This usually
happens when there is insufficient data for the status check metrics in
Amazon CloudWatch.
The instance has failed the Elastic Load Balancing (ELB) health
check status
By default, Amazon EC2 Auto Scaling doesn't use the results of ELB health
checks to determine an instance's health status when the group's health
check configuration is set to EC2. As a result, Amazon EC2 Auto Scaling
doesn't terminate instances that fail ELB health checks. If an instance's
status is OutofService on the ELB console, but the instance's status is
Healthy on the Amazon EC2 Auto Scaling console, confirm that the health
check type is set to ELB.
Incorrect options:
A custom health check might have failed. The Auto Scaling group
(ASG) does not terminate instances that are set unhealthy by
custom checks - This statement is incorrect. You can define custom health
checks in Amazon EC2 Auto Scaling. When a custom health check
determines that an instance is unhealthy, the check manually triggers
SetInstanceHealth and then sets the instance's state to Unhealthy. Amazon
EC2 Auto Scaling then terminates the unhealthy instance.
References:
https://aws.amazon.com/premiumsupport/knowledge-center/auto-scaling-
terminate-instance/
https://aws.amazon.com/premiumsupport/knowledge-center/auto-scaling-
instance-how-terminated/
Domain