AWS Exam Dumps for Architects
https://www.2passeasy.com/dumps/AWS-Solution-Architect-Associate/
NEW QUESTION 1
- (Exam Topic 1)
A company is preparing to launch a public-facing web application in the AWS Cloud. The architecture consists of Amazon EC2 instances within a VPC behind an
Elastic Load Balancer (ELB). A third-party service is used for the DNS. The company's solutions architect must recommend a solution to detect and protect against
large-scale DDoS attacks.
Which solution meets these requirements?
Answer: D
Explanation:
https://aws.amazon.com/shield/faqs/
NEW QUESTION 2
- (Exam Topic 1)
A company hosts its multi-tier applications on AWS. For compliance, governance, auditing, and security, the company must track configuration changes on its
AWS resources and record a history of API calls made to these resources.
What should a solutions architect do to meet these requirements?
A. Use AWS CloudTrail to track configuration changes and AWS Config to record API calls
B. Use AWS Config to track configuration changes and AWS CloudTrail to record API calls
C. Use AWS Config to track configuration changes and Amazon CloudWatch to record API calls
D. Use AWS CloudTrail to track configuration changes and Amazon CloudWatch to record API calls
Answer: B
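The split is easy to see from the SDK: AWS Config reports a resource's configuration history, while CloudTrail reports the API calls made against it. A minimal boto3 sketch (the resource ID and event name are hypothetical):

import boto3

# AWS Config answers "how was this resource configured over time?"
config = boto3.client("config")
history = config.get_resource_config_history(
    resourceType="AWS::EC2::SecurityGroup",
    resourceId="sg-0123456789abcdef0",  # hypothetical resource ID
)

# AWS CloudTrail answers "who called which API, and when?"
cloudtrail = boto3.client("cloudtrail")
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName",
                       "AttributeValue": "AuthorizeSecurityGroupIngress"}]
)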
NEW QUESTION 3
- (Exam Topic 1)
A company recently launched Linux-based application instances on Amazon EC2 in a private subnet and launched a Linux-based bastion host on an Amazon EC2
instance in a public subnet of a VPC. A solutions architect needs to connect from the on-premises network, through the company's internet connection, to the
bastion host and to the application servers. The solutions architect must make sure that the security groups of all the EC2 instances will allow that access.
Which combination of steps should the solutions architect take to meet these requirements? (Select TWO)
A. Replace the current security group of the bastion host with one that only allows inbound access from the application instances
B. Replace the current security group of the bastion host with one that only allows inbound access from the internal IP range for the company
C. Replace the current security group of the bastion host with one that only allows inbound access from the external IP range for the company
D. Replace the current security group of the application instances with one that allows inbound SSH access from only the private IP address of the bastion host
E. Replace the current security group of the application instances with one that allows inbound SSH access from only the public IP address of the bastion host
Answer: CD
Explanation:
https://digitalcloud.training/ssh-into-ec2-in-private-subnet/
NEW QUESTION 4
- (Exam Topic 1)
A company needs to store data in Amazon S3 and must prevent the data from being changed. The company wants new objects that are uploaded to Amazon S3
to remain unchangeable for a nonspecific amount of time until the company decides to modify the objects. Only specific users in the company’s AWS account can
have the ability to delete the objects. What should a solutions architect do to meet these requirements?
A. Create an S3 Glacier vault. Apply a write-once, read-many (WORM) vault lock policy to the objects.
B. Create an S3 bucket with S3 Object Lock enabled. Enable versioning. Set a retention period of 100 years. Use governance mode as the S3 bucket's default
retention mode for new objects.
C. Create an S3 bucket. Use AWS CloudTrail to track any S3 API events that modify the objects. Upon notification, restore the modified objects from any backup
versions that the company has.
D. Create an S3 bucket with S3 Object Lock enabled. Enable versioning. Add a legal hold to the objects. Add the s3:PutObjectLegalHold permission to the IAM
policies of users who need to delete the objects.
Answer: D
Explanation:
"The Object Lock legal hold operation enables you to place a legal hold on an object version. Like setting a retention period, a legal hold prevents an object version
from being overwritten or deleted. However, a legal hold doesn't have an associated retention period and remains in effect until removed."
https://docs.aws.amazon.com/AmazonS3/latest/userguide/batch-ops-legal-hold.html
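As an illustration, a legal hold is a single API call per object version. A minimal boto3 sketch, assuming a hypothetical bucket and key:

import boto3

s3 = boto3.client("s3")

# Place a legal hold; the object version stays immutable until the hold is removed.
s3.put_object_legal_hold(
    Bucket="example-bucket",          # hypothetical bucket
    Key="critical-report.csv",
    LegalHold={"Status": "ON"},
)

# Users whose IAM policy grants s3:PutObjectLegalHold can lift the hold later:
s3.put_object_legal_hold(
    Bucket="example-bucket",
    Key="critical-report.csv",
    LegalHold={"Status": "OFF"},
)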
NEW QUESTION 5
- (Exam Topic 1)
A company recently signed a contract with an AWS Managed Service Provider (MSP) Partner for help with an application migration initiative. A solutions architect
needs to share an Amazon Machine Image (AMI) from an existing AWS account with the MSP Partner's AWS account. The AMI is backed by Amazon Elastic
Block Store (Amazon EBS) and uses a customer managed customer master key (CMK) to encrypt EBS volume snapshots.
What is the MOST secure way for the solutions architect to share the AMI with the MSP Partner's AWS account?
A. Modify the CMK's key policy to allow the MSP Partner's AWS account to use the key.
B. Modify the launchPermission property of the AMI. Share the AMI with the MSP Partner's AWS account only. Modify the CMK's key policy to allow the MSP
Partner's AWS account to use the key.
C. Modify the launchPermission property of the AMI. Share the AMI with the MSP Partner's AWS account only. Modify the CMK's key policy to trust a new CMK
that is owned by the MSP Partner for encryption.
D. Export the AMI from the source account to an Amazon S3 bucket in the MSP Partner's AWS account. Encrypt the S3 bucket with a CMK that is owned by the
MSP Partner. Copy and launch the AMI in the MSP Partner's AWS account.
Answer: B
Explanation:
Share the existing KMS key with the MSP external account because it has already been used to encrypt the AMI snapshot.
https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying-external-accounts.html
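The two steps in the correct option map to two actions: one on the AMI, one on the key policy. A minimal boto3 sketch; the account ID, AMI ID, and the exact action list are hypothetical:

import boto3

ec2 = boto3.client("ec2")

# Step 1: share the AMI itself with the partner account.
ec2.modify_image_attribute(
    ImageId="ami-0123456789abcdef0",
    LaunchPermission={"Add": [{"UserId": "111122223333"}]},
)

# Step 2: the CMK's key policy needs a statement along these lines so the
# partner account can use the key that encrypts the EBS snapshots:
key_policy_statement = {
    "Sid": "AllowMSPPartnerUseOfTheKey",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
    "Action": ["kms:Decrypt", "kms:DescribeKey", "kms:CreateGrant"],
    "Resource": "*",
}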
NEW QUESTION 6
- (Exam Topic 1)
A company wants to improve its ability to clone large amounts of production data into a test environment in the same AWS Region. The data is stored in Amazon
EC2 instances on Amazon Elastic Block Store (Amazon EBS) volumes. Modifications to the cloned data must not affect the production environment. The software
that accesses this data requires consistently high I/O performance.
A solutions architect needs to minimize the time that is required to clone the production data into the test environment.
Which solution will meet these requirements?
Answer: C
NEW QUESTION 7
- (Exam Topic 1)
A company wants to reduce the cost of its existing three-tier web architecture. The web, application, and database servers are running on Amazon EC2 instances
for the development, test, and production environments. The EC2 instances average 30% CPU utilization during peak hours and 10% CPU utilization during non-
peak hours.
The production EC2 instances run 24 hours a day. The development and test EC2 instances run for at least 8 hours each day. The company plans to implement
automation to stop the development and test EC2 instances when they are not in use.
Which EC2 instance purchasing solution will meet the company's requirements MOST cost-effectively?
Answer: B
NEW QUESTION 8
- (Exam Topic 1)
A company is designing an application where users upload small files into Amazon S3. After a user uploads a file, the file requires one-time simple processing to
transform the data and save the data in JSON format for later analysis.
Each file must be processed as quickly as possible after it is uploaded. Demand will vary. On some days, users will upload a high number of files. On other days,
users will upload a few files or no files.
Which solution meets these requirements with the LEAST operational overhead?
A. Configure Amazon EMR to read text files from Amazon S3. Run processing scripts to transform the data. Store the resulting JSON file in an Amazon Aurora
DB cluster.
B. Configure Amazon S3 to send an event notification to an Amazon Simple Queue Service (Amazon SQS) queue. Use Amazon EC2 instances to read from the
queue and process the data. Store the resulting JSON file in Amazon DynamoDB.
C. Configure Amazon S3 to send an event notification to an Amazon Simple Queue Service (Amazon SQS) queue. Use an AWS Lambda function to read from the
queue and process the data. Store the resulting JSON file in Amazon DynamoDB.
D. Configure Amazon EventBridge (Amazon CloudWatch Events) to send an event to Amazon Kinesis Data Streams when a new file is uploaded. Use an AWS
Lambda function to consume the event from the stream and process the data. Store the resulting JSON file in an Amazon Aurora DB cluster.
Answer: C
Explanation:
Amazon S3 sends event notifications about S3 buckets (for example, object created, object removed, or object restored) to an SNS topic in the same Region.
The SNS topic publishes the event to an SQS queue in the central Region.
The SQS queue is configured as the event source for your Lambda function and buffers the event messages for the Lambda function.
The Lambda function polls the SQS queue for messages and processes the Amazon S3 event notifications according to your application’s requirements.
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/subscribe-a-lambda-function-to-event-notific
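A minimal sketch of the Lambda consumer in this pattern, assuming a hypothetical DynamoDB table named processed-files whose partition key is "key", and a placeholder transform:

import json
import urllib.parse
import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("processed-files")  # hypothetical table

def handler(event, context):
    # Each SQS record wraps an S3 event notification in its body.
    for record in event["Records"]:
        s3_event = json.loads(record["body"])
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(s3_record["s3"]["object"]["key"])
            raw = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
            transformed = {"key": key, "length": len(raw)}  # placeholder transform
            table.put_item(Item=transformed)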
NEW QUESTION 9
- (Exam Topic 1)
An ecommerce company wants to launch a one-deal-a-day website on AWS. Each day will feature exactly one product on sale for a period of 24 hours. The
company wants to be able to handle millions of requests each hour with millisecond latency during peak hours.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon S3 to host the full website in different S3 buckets. Add Amazon CloudFront distributions. Set the S3 buckets as origins for the distributions. Store
the order data in Amazon S3.
B. Deploy the full website on Amazon EC2 instances that run in Auto Scaling groups across multiple Availability Zones. Add an Application Load Balancer (ALB) to
distribute the website traffic. Add another ALB for the backend APIs. Store the data in Amazon RDS for MySQL.
C. Migrate the full application to run in containers. Host the containers on Amazon Elastic Kubernetes Service (Amazon EKS). Use the Kubernetes Cluster
Autoscaler to increase and decrease the number of pods to process bursts in traffic. Store the data in Amazon RDS for MySQL.
D. Use an Amazon S3 bucket to host the website's static content. Deploy an Amazon CloudFront distribution. Set the S3 bucket as the origin. Use Amazon API
Gateway and AWS Lambda functions for the backend APIs. Store the data in Amazon DynamoDB.
Answer: D
NEW QUESTION 10
- (Exam Topic 1)
A company hosts an application on multiple Amazon EC2 instances. The application processes messages from an Amazon SQS queue, writes to an Amazon RDS
table, and deletes the message from the queue. Occasional duplicate records are found in the RDS table. The SQS queue does not contain any duplicate
messages.
What should a solutions architect do to ensure messages are being processed only once?
Answer: D
Explanation:
The visibility timeout begins when Amazon SQS returns a message. During this time, the consumer processes and deletes the message. However, if the
consumer fails before deleting the message and your system doesn't call the DeleteMessage action for that message before the visibility timeout expires, the
message becomes visible to other consumers and the message is received again. If a message must be received only once, your consumer should delete it within
the duration of the visibility timeout. https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html
Keyword: the application reads from an SQS queue and writes to an Amazon RDS table. Option D fits best, and the other options are ruled out (Option A would
introduce a second queue into the existing flow; Option B concerns only permissions; Option C only retrieves messages). FIFO queues are designed to never
introduce duplicate messages. However, your message producer might introduce duplicates in certain scenarios: for example, if the producer sends a message,
does not receive a response, and then resends the same message. Amazon SQS APIs provide deduplication functionality that prevents your message producer
from sending duplicates. Any duplicates introduced by the message producer are removed within a 5-minute deduplication interval. For standard queues, you
might occasionally receive a duplicate copy of a message (at-least-once delivery). If you use a standard queue, you must design your applications to be
idempotent (that is, they must not be affected adversely when processing the same message more than once).
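The remedy implied by answer D is to make the visibility timeout comfortably longer than the processing time and to delete each message before that timeout expires. A minimal boto3 consumer sketch; the queue URL is hypothetical and the RDS write is stubbed out:

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/jobs"  # hypothetical

def process(body):
    print("writing to RDS:", body)  # stand-in for the real RDS insert

response = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    VisibilityTimeout=120,  # hide each message from other consumers for 2 minutes
    WaitTimeSeconds=20,     # long polling
)

for message in response.get("Messages", []):
    process(message["Body"])
    # Delete BEFORE the visibility timeout expires; otherwise the message
    # reappears and is processed a second time, producing duplicate rows.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])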
NEW QUESTION 10
- (Exam Topic 1)
A company has an application that ingests incoming messages. These messages are then quickly consumed by dozens of other applications and microservices.
The number of messages varies drastically and sometimes spikes as high as 100,000 each second. The
company wants to decouple the solution and increase scalability. Which solution meets these requirements?
Answer: D
Explanation:
https://aws.amazon.com/sqs/features/
By routing incoming requests to Amazon SQS, the company can decouple the job requests from the processing instances. This allows them to scale the number of
instances based on the size of the queue, providing more resources when needed. Additionally, using an Auto Scaling group based on the queue size will
automatically scale the number of instances up or down depending on the workload. Updating the software to read from the queue will allow it to process the job
requests in a more efficient manner, improving the performance of the system.
NEW QUESTION 14
- (Exam Topic 1)
A company needs guaranteed Amazon EC2 capacity in three specific Availability Zones in a specific AWS Region for an upcoming event that will last 1 week.
What should the company do to guarantee the EC2 capacity?
Answer: D
Explanation:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-capacity-reservations.html
Reserved Instances: you would have to pay for the whole term (1 year or 3 years), which is not cost-effective for a 1-week event.
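On-Demand Capacity Reservations fit this case: they are created per Availability Zone, can carry an end date one week out, and involve no long-term commitment. A minimal boto3 sketch with hypothetical zones, instance type, and count:

import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2")
end = datetime.now(timezone.utc) + timedelta(days=7)  # release capacity after the event

# One reservation per Availability Zone guarantees the capacity in each zone.
for zone in ["us-east-1a", "us-east-1b", "us-east-1c"]:
    ec2.create_capacity_reservation(
        InstanceType="m5.large",        # hypothetical instance type
        InstancePlatform="Linux/UNIX",
        AvailabilityZone=zone,
        InstanceCount=10,               # hypothetical count
        EndDateType="limited",
        EndDate=end,
    )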
NEW QUESTION 17
- (Exam Topic 1)
A company's dynamic website is hosted using on-premises servers in the United States. The company is launching its product in Europe, and it wants to optimize
site loading times for new European users. The site's backend must remain in the United States. The product is being launched in a few days, and an immediate
solution is needed.
What should the solutions architect recommend?
A. Launch an Amazon EC2 instance in us-east-1 and migrate the site to it.
B. Move the website to Amazon S3. Use cross-Region replication between Regions.
C. Use Amazon CloudFront with a custom origin pointing to the on-premises servers.
D. Use an Amazon Route 53 geo-proximity routing policy pointing to on-premises servers.
Answer: C
Explanation:
https://aws.amazon.com/pt/blogs/aws/amazon-cloudfront-support-for-custom-origins/
You can now create a CloudFront distribution using a custom origin. Each distribution can point to an S3 bucket or to a custom origin. This could be another
storage service, or it could be something more interesting and more dynamic, such as an EC2 instance or even an Elastic Load Balancer.
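A minimal boto3 sketch of such a distribution with a custom origin; the hostname is hypothetical, and the CachePolicyId shown is the AWS managed CachingOptimized policy:

import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_distribution(DistributionConfig={
    "CallerReference": "eu-launch-2024",  # any unique string
    "Comment": "Cache the US-hosted site at edge locations near EU users",
    "Enabled": True,
    "Origins": {"Quantity": 1, "Items": [{
        "Id": "on-prem-origin",
        "DomainName": "www.example.com",  # hypothetical on-premises hostname
        "CustomOriginConfig": {
            "HTTPPort": 80,
            "HTTPSPort": 443,
            "OriginProtocolPolicy": "https-only",
        },
    }]},
    "DefaultCacheBehavior": {
        "TargetOriginId": "on-prem-origin",
        "ViewerProtocolPolicy": "redirect-to-https",
        "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",  # managed CachingOptimized
    },
})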
NEW QUESTION 21
- (Exam Topic 1)
A solutions architect is developing a multiple-subnet VPC architecture. The solution will consist of six subnets in two Availability Zones. The subnets are defined as
public, private and dedicated for databases. Only the Amazon EC2 instances running in the private subnets should be able to access a database.
Which solution meets these requirements?
A. Create a new route table that excludes the route to the public subnets' CIDR blocks. Associate the route table with the database subnets.
B. Create a security group that denies ingress from the security group used by instances in the public subnets. Attach the security group to an Amazon RDS DB
instance.
C. Create a security group that allows ingress from the security group used by instances in the private subnets. Attach the security group to an Amazon RDS DB
instance.
D. Create a new peering connection between the public subnets and the private subnets. Create a different peering connection between the private subnets and
the database subnets.
Answer: C
Explanation:
Security groups are stateful. All inbound traffic is blocked by default. If you create an inbound rule allowing traffic in, that traffic is automatically allowed back out
again. You cannot block specific IP addresses by using security groups; instead, use network ACLs.
"You can specify allow rules, but not deny rules." "When you first create a security group, it has no inbound rules. Therefore, no inbound traffic originating from
another host to your instance is allowed until you add inbound rules to the security group." Source:
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#VPCSecurityGroups
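In practice, the allow rule in option C references the application tier's security group rather than a CIDR range. A minimal boto3 sketch with hypothetical group IDs:

import boto3

ec2 = boto3.client("ec2")

# Allow MySQL/Aurora traffic only from instances that carry the private-subnet
# application security group; no CIDR ranges are involved.
ec2.authorize_security_group_ingress(
    GroupId="sg-0db0123456789abcd",      # hypothetical DB security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-0app123456789abcd"}],  # hypothetical app SG
    }],
)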
NEW QUESTION 23
- (Exam Topic 1)
A company is developing a two-tier web application on AWS. The company's developers have deployed the application on an Amazon EC2 instance that connects
directly to a backend Amazon RDS database. The company must not hardcode database credentials in the application. The company must also implement a
solution to automatically rotate the database credentials on a regular basis.
Which solution will meet these requirements with the LEAST operational overhead?
Answer: C
Explanation:
https://docs.aws.amazon.com/secretsmanager/latest/userguide/create_database_secret.html
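A minimal boto3 sketch of the Secrets Manager pattern; the secret name and rotation function ARN are hypothetical:

import json
import boto3

secrets = boto3.client("secretsmanager")

# The application fetches credentials at runtime instead of hardcoding them.
secret = json.loads(
    secrets.get_secret_value(SecretId="prod/app/mysql")["SecretString"]
)
conn_params = {"host": secret["host"], "user": secret["username"],
               "password": secret["password"]}

# Rotation is configured once; Secrets Manager then invokes a rotation
# Lambda function on the schedule with no further operational effort.
secrets.rotate_secret(
    SecretId="prod/app/mysql",
    RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:SecretsRotation",
    RotationRules={"AutomaticallyAfterDays": 30},
)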
NEW QUESTION 28
- (Exam Topic 1)
A company needs the ability to analyze the log files of its proprietary application. The logs are stored in JSON format in an Amazon S3 bucket. Queries will be
simple and will run on-demand. A solutions architect needs to perform the analysis with minimal changes to the existing architecture.
What should the solutions architect do to meet these requirements with the LEAST amount of operational overhead?
A. Use Amazon Redshift to load all the content into one place and run the SQL queries as needed.
B. Use Amazon CloudWatch Logs to store the logs. Run SQL queries as needed from the Amazon CloudWatch console.
C. Use Amazon Athena directly with Amazon S3 to run the queries as needed.
D. Use AWS Glue to catalog the logs. Use a transient Apache Spark cluster on Amazon EMR to run the SQL queries as needed.
Answer: C
Explanation:
Amazon Athena can be used to query JSON in S3
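A minimal boto3 sketch of running such an on-demand query; the database, table, and results bucket are hypothetical (the table itself would be defined over the S3 JSON data, for example through a Glue crawler or a CREATE EXTERNAL TABLE statement):

import boto3

athena = boto3.client("athena")

# Query the JSON logs in place; there are no servers or clusters to manage.
athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM app_logs "
                "WHERE level = 'ERROR' GROUP BY status",
    QueryExecutionContext={"Database": "logs_db"},                    # hypothetical
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)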
NEW QUESTION 29
- (Exam Topic 1)
A development team runs monthly resource-intensive tests on its general purpose Amazon RDS for MySQL DB instance with Performance Insights enabled. The
testing lasts for 48 hours once a month and is the only process that uses the database. The team wants to reduce the cost of running the tests without reducing the
compute and memory attributes of the DB instance.
Which solution meets these requirements MOST cost-effectively?
Answer: A
NEW QUESTION 31
- (Exam Topic 1)
A company runs multiple Windows workloads on AWS. The company's employees use Windows file shares that are hosted on two Amazon EC2 instances. The
file shares synchronize data between themselves and
maintain duplicate copies. The company wants a highly available and durable storage solution that preserves how users currently access the files.
What should a solutions architect do to meet these requirements?
A. Migrate all the data to Amazon S3. Set up IAM authentication for users to access files.
B. Set up an Amazon S3 File Gateway. Mount the S3 File Gateway on the existing EC2 instances.
C. Extend the file share environment to Amazon FSx for Windows File Server with a Multi-AZ configuration. Migrate all the data to FSx for Windows File Server.
D. Extend the file share environment to Amazon Elastic File System (Amazon EFS) with a Multi-AZ configuration. Migrate all the data to Amazon EFS.
Answer: C
NEW QUESTION 32
- (Exam Topic 1)
A solutions architect is using Amazon S3 to design the storage architecture of a new digital media application. The media files must be resilient to the loss of an
Availability Zone. Some files are accessed frequently while other files are rarely accessed in an unpredictable pattern. The solutions architect must minimize the
costs of storing and retrieving the media files.
Which storage option meets these requirements?
A. S3 Standard
B. S3 Intelligent-Tiering
C. S3 Standard-Infrequent Access {S3 Standard-IA)
D. S3 One Zone-Infrequent Access (S3 One Zone-IA)
Answer: B
Explanation:
S3 Intelligent-Tiering - Perfect use case when you don't know the frequency of access or irregular patterns of usage.
Amazon S3 offers a range of storage classes designed for different use cases. These include S3 Standard for general-purpose storage of frequently accessed
data; S3 Intelligent-Tiering for data with unknown or changing access patterns; S3 Standard-Infrequent Access (S3 Standard-IA) and S3 One Zone-Infrequent
Access (S3 One Zone-IA) for long-lived, but less frequently accessed data; and Amazon S3 Glacier (S3 Glacier) and Amazon S3 Glacier Deep Archive (S3 Glacier
Deep Archive) for long-term archive and digital preservation. If you have data residency requirements that can’t be met by an existing AWS Region, you can use
the S3 Outposts storage class to store your S3 data on-premises. Amazon S3 also offers capabilities to manage your data throughout its lifecycle. Once an S3
Lifecycle policy is set, your data will automatically transfer to a different storage class without any changes to your application.
https://aws.amazon.com/getting-started/hands-on/getting-started-using-amazon-s3-intelligent-tiering/?nc1=h_ls
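Choosing the class is a per-object decision at upload time. A minimal boto3 sketch with a hypothetical bucket and key:

import boto3

s3 = boto3.client("s3")

# Upload directly into S3 Intelligent-Tiering; S3 then moves each object
# between access tiers automatically as its access pattern changes.
with open("clip-0001.mp4", "rb") as body:
    s3.put_object(
        Bucket="example-media-bucket",  # hypothetical bucket
        Key="videos/clip-0001.mp4",
        Body=body,
        StorageClass="INTELLIGENT_TIERING",
    )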
NEW QUESTION 36
- (Exam Topic 1)
A company uses 50 TB of data for reporting. The company wants to move this data from on premises to AWS. A custom application in the company's data center
runs a weekly data transformation job. The company plans to pause the application until the data transfer is complete and needs to begin the transfer process as
soon as possible.
The data center does not have any available network bandwidth for additional workloads. A solutions architect must transfer the data and must configure the
transformation job to continue to run in the AWS Cloud.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS DataSync to move the data. Create a custom transformation job by using AWS Glue.
B. Order an AWS Snowcone device to move the data. Deploy the transformation application to the device.
C. Order an AWS Snowball Edge Storage Optimized device. Copy the data to the device. Create a custom transformation job by using AWS Glue.
D. Order an AWS Snowball Edge Storage Optimized device that includes Amazon EC2 compute. Copy the data to the device. Create a new EC2 instance on AWS
to run the transformation application.
Answer: C
NEW QUESTION 41
- (Exam Topic 1)
A company is preparing to deploy a new serverless workload. A solutions architect must use the principle of least privilege to configure permissions that will be
used to run an AWS Lambda function. An Amazon EventBridge (Amazon CloudWatch Events) rule will invoke the function.
Which solution meets these requirements?
A. Add an execution role to the function with lambda:InvokeFunction as the action and * as the principal.
B. Add an execution role to the function with lambda:InvokeFunction as the action and Service:amazonaws.com as the principal.
C. Add a resource-based policy to the function with lambda:* as the action and Service:events.amazonaws.com as the principal.
D. Add a resource-based policy to the function with lambda:InvokeFunction as the action and Service:events.amazonaws.com as the principal.
Answer: D
Explanation:
https://docs.aws.amazon.com/eventbridge/latest/userguide/resource-based-policies-eventbridge.html#lambda-pe
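Answer D corresponds to the Lambda AddPermission API, which appends a statement to the function's resource-based policy. A minimal boto3 sketch with hypothetical function and rule names:

import boto3

lambda_client = boto3.client("lambda")

# Resource-based policy: only the EventBridge service principal, scoped to
# one specific rule ARN, may invoke this function (least privilege).
lambda_client.add_permission(
    FunctionName="serverless-workload",   # hypothetical function
    StatementId="AllowEventBridgeInvoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn="arn:aws:events:us-east-1:111122223333:rule/nightly-trigger",
)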
NEW QUESTION 44
- (Exam Topic 1)
A company needs to keep user transaction data in an Amazon DynamoDB table. The company must retain the data for 7 years.
What is the MOST operationally efficient solution that meets these requirements?
Answer: C
NEW QUESTION 48
- (Exam Topic 1)
A company has an Amazon S3 bucket that contains critical data. The company must protect the data from accidental deletion.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)
Answer: AB
NEW QUESTION 52
- (Exam Topic 1)
A company has more than 5 TB of file data on Windows file servers that run on premises. Users and applications interact with the data each day.
The company is moving its Windows workloads to AWS. As the company continues this process, the company requires access to AWS and on-premises file
storage with minimum latency. The company needs a solution that minimizes operational overhead and requires no significant changes to the existing file access
patterns. The company uses an AWS Site-to-Site VPN connection for connectivity to AWS.
What should a solutions architect do to meet these requirements?
Answer: D
NEW QUESTION 53
- (Exam Topic 1)
A company needs to review its AWS Cloud deployment to ensure that its Amazon S3 buckets do not have unauthorized configuration changes.
What should a solutions architect do to accomplish this goal?
Answer: D
NEW QUESTION 57
- (Exam Topic 1)
A company is migrating a distributed application to AWS. The application serves variable workloads. The legacy platform consists of a primary server that
coordinates jobs across multiple compute nodes. The company wants to modernize the application with a solution that maximizes resiliency and scalability.
How should a solutions architect design the architecture to meet these requirements?
A. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances
that are managed in an Auto Scaling group. Configure EC2 Auto Scaling to use scheduled scaling.
B. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances
that are managed in an Auto Scaling group. Configure EC2 Auto Scaling based on the size of the queue.
C. Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure AWS CloudTrail as a
destination for the jobs. Configure EC2 Auto Scaling based on the load on the primary server.
D. Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure Amazon EventBridge
(Amazon CloudWatch Events) as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the compute nodes.
Answer: B
NEW QUESTION 58
- (Exam Topic 1)
A company is building an application in the AWS Cloud. The application will store data in Amazon S3 buckets in two AWS Regions. The company must use an
AWS Key Management Service (AWS KMS) customer managed key to encrypt all data that is stored in the S3 buckets. The data in both S3 buckets must be
encrypted and decrypted with the same KMS key. The data and the key must be stored in each of the two Regions.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Configure
replication between the S3 buckets.
B. Create a customer managed multi-Region KMS key. Create an S3 bucket in each Region. Configure replication between the S3 buckets. Configure the
application to use the KMS key with client-side encryption.
C. Create a customer managed KMS key and an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with Amazon S3 managed
encryption keys (SSE-S3). Configure replication between the S3 buckets.
D. Create a customer managed KMS key and an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with AWS KMS keys (SSE-
KMS). Configure replication between the S3 buckets.
Answer: B
Explanation:
From https://docs.aws.amazon.com/kms/latest/developerguide/custom-key-store-overview.html
For most users, the default AWS KMS key store, which is protected by FIPS 140-2 validated cryptographic modules, fulfills their security requirements. There is no
need to add an extra layer of maintenance responsibility or a dependency on an additional service. However, you might consider creating a custom key store if
your organization has any of the following requirements: Key material cannot be stored in a shared environment. Key material must be subject to a secondary,
independent audit path. The HSMs that generate and store key material must be certified at FIPS 140-2 Level 3.
https://docs.aws.amazon.com/kms/latest/developerguide/custom-key-store-overview.html
https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html
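A minimal boto3 sketch of the multi-Region key setup in answer B; the Regions are hypothetical:

import boto3

kms = boto3.client("kms", region_name="us-east-1")

# A multi-Region primary key plus a replica gives both Regions the same key
# material and key ID, so data encrypted in one Region decrypts in the other.
primary = kms.create_key(MultiRegion=True, Description="S3 data key (primary)")
kms.replicate_key(
    KeyId=primary["KeyMetadata"]["KeyId"],
    ReplicaRegion="eu-west-1",  # hypothetical second Region
)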
NEW QUESTION 63
- (Exam Topic 1)
A bicycle sharing company is developing a multi-tier architecture to track the location of its bicycles during peak operating hours The company wants to use these
data points in its existing analytics platform A solutions architect must determine the most viable multi-tier option to support this architecture The data points must
be accessible from the REST API.
Which action meets these requirements for storing and retrieving location data?
Answer: D
Explanation:
https://aws.amazon.com/solutions/implementations/aws-streaming-data-solution-for-amazon-kinesis/
NEW QUESTION 65
- (Exam Topic 1)
A company's website uses an Amazon EC2 instance store for its catalog of items. The company wants to make sure that the catalog is highly available and that
the catalog is stored in a durable location.
What should a solutions architect do to meet these requirements?
Answer: A
NEW QUESTION 69
- (Exam Topic 1)
A company has thousands of edge devices that collectively generate 1 TB of status alerts each day. Each alert is approximately 2 KB in size. A solutions architect
needs to implement a solution to ingest and store the alerts for future analysis.
The company wants a highly available solution. However, the company needs to minimize costs and does not want to manage additional infrastructure.
Additionally, the company wants to keep 14 days of data available for immediate analysis and archive any data older than 14 days.
What is the MOST operationally efficient solution that meets these requirements?
A. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon S3
bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
B. Launch Amazon EC2 instances across two Availability Zones and place them behind an Elastic Load Balancer to ingest the alerts. Create a script on the EC2
instances that will store the alerts in an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
C. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon
Elasticsearch Service (Amazon ES) cluster. Set up the Amazon ES cluster to take manual snapshots every day and delete data from the cluster that is older than
14 days.
D. Create an Amazon Simple Queue Service (Amazon SQS) standard queue to ingest the alerts, and set the message retention period to 14 days. Configure
consumers to poll the SQS queue, check the age of the message, and analyze the message data as needed. If the message is 14 days old, the consumer should
copy the message to an Amazon S3 bucket and delete the message from the SQS queue.
Answer: A
Explanation:
https://aws.amazon.com/kinesis/data-firehose/features/?nc=sn&loc=2#:~:text=into%20Amazon%20S3%2C%20
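The 14-day archival in answer A is a one-time S3 Lifecycle rule on the Firehose destination bucket. A minimal boto3 sketch with a hypothetical bucket name:

import boto3

s3 = boto3.client("s3")

# Alerts land in S3 via Kinesis Data Firehose; this rule archives them after 14 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-alerts-bucket",  # hypothetical Firehose destination bucket
    LifecycleConfiguration={"Rules": [{
        "ID": "archive-after-14-days",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},  # apply to every object
        "Transitions": [{"Days": 14, "StorageClass": "GLACIER"}],
    }]},
)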
NEW QUESTION 70
- (Exam Topic 2)
A large media company hosts a web application on AWS. The company wants to start caching confidential media files so that users around the world will have
reliable access to the files. The content is stored in Amazon S3 buckets. The company must deliver the content quickly, regardless of where the requests originate
geographically.
Which solution will meet these requirements?
Answer: C
Explanation:
CloudFront serves responses from a local edge cache, whereas AWS Global Accelerator proxies requests and connects to the application every time for the response.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3
NEW QUESTION 73
- (Exam Topic 2)
A company is migrating an application from on-premises servers to Amazon EC2 instances. As part of the migration design requirements, a solutions architect
must implement infrastructure metric alarms. The company does not need to take action if CPU utilization increases to more than 50% for a short burst of time.
However, if the CPU utilization increases to more than 50% and read IOPS on the disk are high at the same time, the company needs to act as soon as possible.
The solutions architect also must reduce false alarms.
What should the solutions architect do to meet these requirements?
Answer: A
NEW QUESTION 78
- (Exam Topic 2)
A company uses a popular content management system (CMS) for its corporate website. However, the required patching and maintenance are burdensome. The
company is redesigning its website and wants a new solution. The website will be updated four times a year and does not need to have any dynamic content
available. The solution must provide high scalability and enhanced security.
Which combination of changes will meet these requirements with the LEAST operational overhead? (Choose two.)
A. Deploy an AWS WAF web ACL in front of the website to provide HTTPS functionality
B. Create and deploy an AWS Lambda function to manage and serve the website content
C. Create the new website and an Amazon S3 bucket. Deploy the website on the S3 bucket with static website hosting enabled.
D. Create the new website. Deploy the website by using an Auto Scaling group of Amazon EC2 instances behind an Application Load Balancer.
Answer: AD
NEW QUESTION 79
- (Exam Topic 2)
A company wants to run applications in containers in the AWS Cloud. These applications are stateless and can tolerate disruptions within the underlying
infrastructure. The company needs a solution that minimizes cost and operational overhead.
What should a solutions architect do to meet these requirements?
A. Use Spot Instances in an Amazon EC2 Auto Scaling group to run the application containers.
B. Use Spot Instances in an Amazon Elastic Kubernetes Service (Amazon EKS) managed node group.
C. Use On-Demand Instances in an Amazon EC2 Auto Scaling group to run the application containers.
D. Use On-Demand Instances in an Amazon Elastic Kubernetes Service (Amazon EKS) managed node group.
Answer: B
Explanation:
https://aws.amazon.com/cn/blogs/compute/cost-optimization-and-resilience-eks-with-spot-instances/
NEW QUESTION 82
- (Exam Topic 2)
A company has a highly dynamic batch processing job that uses many Amazon EC2 instances to complete it. The job is stateless in nature, can be started and
stopped at any given time with no negative impact, and typically takes upwards of 60 minutes total to complete. The company has asked a solutions architect to
design a scalable and cost-effective solution that meets the requirements of the job.
What should the solutions architect recommend?
Answer: A
NEW QUESTION 84
- (Exam Topic 2)
A company wants to build a scalable key management infrastructure to support developers who need to encrypt data in their applications.
What should a solutions architect do to reduce the operational burden?
Answer: B
Explanation:
https://aws.amazon.com/kms/faqs/#:~:text=If%20you%20are%20a%20developer%20who%20needs%20to%20d
NEW QUESTION 85
- (Exam Topic 2)
A company is running a multi-tier web application on premises. The web application is containerized and runs on a number of Linux hosts connected to a
PostgreSQL database that contains user records. The operational overhead of maintaining the infrastructure and capacity planning is limiting the company's
growth. A solutions architect must improve the application's infrastructure.
Which combination of actions should the solutions architect take to accomplish this? (Choose two.)
Answer: AE
NEW QUESTION 89
- (Exam Topic 2)
Organizers for a global event want to put daily reports online as static HTML pages. The pages are expected to generate millions of views from users around the
world. The files are stored in an Amazon S3 bucket. A solutions architect has been asked to design an efficient and effective solution.
Which action should the solutions architect take to accomplish this?
Answer: D
NEW QUESTION 93
- (Exam Topic 2)
An application runs on Amazon EC2 instances across multiple Availability Zones. The instances run in an Amazon EC2 Auto Scaling group behind an Application
Load Balancer. The application performs best when the CPU utilization of the EC2 instances is at or near 40%.
What should a solutions architect do to maintain the desired performance across all instances in the group?
A. Use a simple scaling policy to dynamically scale the Auto Scaling group
B. Use a target tracking policy to dynamically scale the Auto Scaling group
C. Use an AWS Lambda function to update the desired Auto Scaling group capacity.
D. Use scheduled scaling actions to scale up and scale down the Auto Scaling group
Answer: B
Explanation:
https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-target-tracking.html
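A minimal boto3 sketch of such a target tracking policy; the Auto Scaling group name is hypothetical:

import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking continuously adds or removes instances to hold the group's
# average CPU utilization near the 40% sweet spot.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",  # hypothetical group name
    PolicyName="keep-cpu-at-40",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 40.0,
    },
)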
NEW QUESTION 95
- (Exam Topic 2)
A company wants to migrate its existing on-premises monolithic application to AWS.
The company wants to keep as much of the front-end code and the backend code as possible. However, the company wants to break the application into smaller
applications. A different team will manage each application. The company needs a highly scalable solution that minimizes operational overhead.
Which solution will meet these requirements?
A. Host the application on AWS Lambda Integrate the application with Amazon API Gateway.
B. Host the application with AWS Amplify. Connect the application to an Amazon API Gateway API that is integrated with AWS Lambda.
C. Host the application on Amazon EC2 instances. Set up an Application Load Balancer with EC2 instances in an Auto Scaling group as targets.
F. Host the application on Amazon Elastic Container Service (Amazon ECS) Set up an Application Load Balancer with Amazon ECS as the target.
Answer: D
Explanation:
https://aws.amazon.com/blogs/compute/microservice-delivery-with-amazon-ecs-and-application-load-balancers/
Answer: A
Answer: C
Explanation:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-custom.html and https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/working-with-custom-oracle.html
A. Create an Amazon Simple Queue Service (Amazon SQS) FIFO queue to hold messages. Set up an AWS Lambda function to process messages from the
queue.
B. Create an Amazon Simple Notification Service (Amazon SNS) topic to deliver notifications containing payloads to process. Configure an AWS Lambda function
as a subscriber.
C. Create an Amazon Simple Queue Service (Amazon SQS) standard queue to hold messages. Set up an AWS Lambda function to process messages from the
queue independently.
D. Create an Amazon Simple Notification Service (Amazon SNS) topic to deliver notifications containing payloads to process. Configure an Amazon Simple
Queue Service (Amazon SQS) queue as a subscriber.
Answer: A
Explanation:
For details, see https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html
FIFO (First-In-First-Out) queues are designed to enhance messaging between applications when the order of operations and events is critical, or where duplicates
can't be tolerated. Examples of situations where you might use FIFO queues include the following: To make sure that user-entered commands are run in the right
order. To display the correct product price by sending price modifications in the right order. To prevent a student from enrolling in a course before registering for an
account.
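A minimal boto3 sketch of producing to a FIFO queue; the queue URL and IDs are hypothetical:

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/orders.fifo"  # hypothetical

# MessageGroupId preserves ordering within a group; MessageDeduplicationId lets
# SQS drop retransmitted duplicates within the 5-minute deduplication interval.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"order_id": "1001", "action": "price-update"}',
    MessageGroupId="order-1001",
    MessageDeduplicationId="1001-price-update-v1",
)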
A. Have the deployment engineer use AWS account root user credentials for performing AWS CloudFormation stack operations.
B. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the PowerUsers IAM policy attached.
C. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the AdministratorAccess IAM policy attached.
D. Create a new IAM user for the deployment engineer and add the IAM user to a group that has an IAM policy that allows AWS CloudFormation actions only.
E. Create an IAM role for the deployment engineer to explicitly define the permissions specific to the AWS CloudFormation stack and launch stacks using that IAM
role.
Answer: DE
Explanation:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html
A. Update the S3 role in AWS IAM to allow read/write access from Amazon ECS, and then relaunch the container.
B. Create an IAM role with S3 permissions, and then specify that role as the taskRoleArn in the task definition.
C. Create a security group that allows access from Amazon ECS to Amazon S3, and update the launch configuration used by the ECS cluster.
D. Create an IAM user with S3 permissions, and then relaunch the Amazon EC2 instances for the ECS cluster while logged in as this account.
Answer: B
A. Configure S3 Standard-Infrequent Access (S3 Standard-IA) storage for the initial storage tier of the objects.
B. Move the files to S3 Intelligent-Tiering and configure it to move objects to a less expensive storage tier after 90 days.
C. Configure S3 inventory to manage objects and move them to S3 Standard-Infrequent Access (S3 Standard-IA) after 90 days.
D. Implement an S3 Lifecycle policy that moves the objects from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) after 90 days.
Answer: D
A. Configure the security group for the web servers to allow inbound traffic on port 443 from 0.0.0.0/0. Configure the security group for the DB instance to allow
inbound traffic on port 3306 from the security group of the web servers.
B. Configure the security group for the web servers to allow inbound traffic on port 443 from the IP addresses of the customers. Configure the security group for
the DB instance to allow inbound traffic on port 3306 from the security group of the web servers.
C. Configure the security group for the web servers to allow inbound traffic on port 443 from the IP addresses of the customers. Configure the security group for
the DB instance to allow inbound traffic on port 3306 from the IP addresses of the customers.
D. Configure the security group for the web servers to allow inbound traffic on port 443 from 0.0.0.0/0. Configure the security group for the DB instance to allow
inbound traffic on port 3306 from 0.0.0.0/0.
Answer: A
Answer: A
Explanation:
https://aws.amazon.com/rds/aurora/serverless/
A. S3 Intelligent-Tiering
B. S3 Glacier Instant Retrieval
C. S3 Standard
D. S3 Standard-Infrequent Access (S3 Standard-IA)
Answer: D
Explanation:
S3 Intelligent-Tiering is a cost-optimized storage class that automatically moves data to the most cost-effective access tier based on changing access patterns.
Although it offers cost savings, it also introduces additional latency and retrieval time into the data retrieval process, which may not meet the requirement of
"immediately available" data. On the other hand, S3 Standard-Infrequent Access (S3 Standard-IA) provides low cost storage
with low latency and high throughput performance. It is designed for infrequently accessed data that can be recreated if lost, and can be retrieved in a timely
manner if required. It is a cost-effective solution that meets the requirement of immediately available data and remains accessible for up to 3 months.
Answer: D
Explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/public-load-balancer-private-ec2/
Answer: A
Answer: BC
Explanation:
AWS Shield Advanced provides expanded DDoS attack protection for your Amazon EC2 instances, Elastic Load Balancing load balancers, CloudFront
distributions, Route 53 hosted zones, and AWS Global Accelerator standard accelerators.
AWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to your protected web application resources. You
can protect the following resource types:
Amazon CloudFront distributions, Amazon API Gateway REST APIs, Application Load Balancers, AWS AppSync GraphQL APIs, and Amazon Cognito user pools.
https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html
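As an illustration, attaching an existing web ACL to a regional resource such as an ALB is a single call; both ARNs below are hypothetical (CloudFront distributions instead reference the web ACL in the distribution config):

import boto3

wafv2 = boto3.client("wafv2")

# Associate a regional web ACL with an Application Load Balancer.
wafv2.associate_web_acl(
    WebACLArn="arn:aws:wafv2:us-east-1:111122223333:regional/webacl/site-acl/abcd1234",
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                "loadbalancer/app/web/50dc6c495c0c9188",
)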
A. Use a scheduled AWS Lambda function and run a script remotely on all EC2 instances to send data to the audit system.
B. Use EC2 Auto Scaling lifecycle hooks to run a custom script to send data to the audit system when instances are launched and terminated
C. Use an EC2 Auto Scaling launch configuration to run a custom script through user data to send data to the audit system when instances are launched and
terminated
D. Run a custom script on the instance operating system to send data to the audit system Configure the script to be invoked by the EC2 Auto Scaling group when
the instance starts and is terminated
Answer: B
A. Increase the size of the DB instance to an instance type that has more available memory.
B. Modify the DB instance to be a Multi-AZ DB instance. Configure the application to write to all active RDS DB instances.
C. Modify the API to write incoming data to an Amazon Simple Queue Service (Amazon SQS) queue. Use an AWS Lambda function that Amazon SQS invokes to
write data from the queue to the database.
D. Modify the API to write incoming data to an Amazon Simple Notification Service (Amazon SNS) topic. Use an AWS Lambda function that Amazon SNS invokes
to write data from the topic to the database.
Answer: C
Explanation:
Using Amazon SQS will help minimize the number of connections to the database, as the API will write data to a queue instead of directly to the database.
Additionally, using an AWS Lambda function that Amazon SQS invokes to write data from the queue to the database will help ensure that data is not lost during
periods of heavy traffic, as the queue will serve as a buffer between the API and the database.
A cloud engineer is added as an IAM user to the IAM group. Which action will the cloud engineer be able to perform?
Answer: C
Explanation:
https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ds/index.html
A. Store the images and geographic codes in a database table. Use Oracle running on an Amazon RDS Multi-AZ DB instance.
B. Store the images in Amazon S3 buckets. Use Amazon DynamoDB with the geographic code as the key and the image S3 URL as the value.
C. Store the images and geographic codes in an Amazon DynamoDB table. Configure DynamoDB Accelerator (DAX) during times of high load.
D. Store the images in Amazon S3 buckets. Store geographic codes and image S3 URLs in a database table. Use Oracle running on an Amazon RDS Multi-AZ DB
instance.
Answer: A
D. Deploy an Amazon ElastiCache for Redis cluster to store customer session information
E. Use AWS Systems Manager Application Manager in the application to manage user session information
Answer: AB
A. Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes for compute and MongoDB on EC2 for data storage.
B. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate for compute and Amazon DynamoDB for data storage.
C. Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker nodes for compute and Amazon DynamoDB for data storage.
D. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate for compute and Amazon DocumentDB (with MongoDB compatibility) for data
storage.
Answer: D
Explanation:
Amazon DocumentDB (with MongoDB compatibility) is a fast, reliable, and fully managed database service. Amazon DocumentDB makes it easy to set up,
operate, and scale MongoDB-compatible databases in the cloud. With Amazon DocumentDB, you can run the same application code and use the same drivers
and tools that you use with MongoDB.
https://docs.aws.amazon.com/documentdb/latest/developerguide/what-is.html
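A minimal sketch of that drop-in compatibility using pymongo, the same driver a MongoDB application would already use; the cluster endpoint and credentials are hypothetical, and DocumentDB requires TLS (with the downloaded Amazon CA bundle) plus retryWrites=false:

from pymongo import MongoClient  # same driver the MongoDB app already uses

# Hypothetical DocumentDB cluster endpoint and credentials.
client = MongoClient(
    "mongodb://appuser:[email protected]:27017/"
    "?tls=true&tlsCAFile=global-bundle.pem&retryWrites=false"
)

orders = client["shop"]["orders"]
orders.insert_one({"order_id": 1001, "status": "NEW"})
print(orders.find_one({"order_id": 1001}))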
A. Use standard SQL queries in Amazon Athena to analyze the CloudFront logs in the S3 bucket. Visualize the results with AWS Glue.
B. Use standard SQL queries in Amazon Athena to analyze the CloudFront logs in the S3 bucket. Visualize the results with Amazon QuickSight.
C. Use standard SQL queries in Amazon DynamoDB to analyze the CloudFront logs in the S3 bucket. Visualize the results with AWS Glue.
D. Use standard SQL queries in Amazon DynamoDB to analyze the CloudFront logs in the S3 bucket. Visualize the results with Amazon QuickSight.
Answer: B
A. Request an Amazon-issued private certificate from AWS Certificate Manager (ACM) in the us-east-1 Region.
B. Request an Amazon-issued private certificate from AWS Certificate Manager (ACM) in the us-west-1 Region.
C. Request an Amazon-issued public certificate from AWS Certificate Manager (ACM) in the us-east-1 Region.
D. Request an Amazon-issued public certificate from AWS Certificate Manager (ACM) in the us-west-1 Region.
Answer: B
A. Create an AWS DataSync task that shares the data as a mountable file system. Mount the file system to the application server.
B. Create an Amazon EC2 Windows instance. Install and configure a Windows file share role on the instance. Connect the application server to the file share.
C. Create an Amazon FSx for Windows File Server file system. Attach the file system to the origin server. Connect the application server to the file system.
D. Create an Amazon S3 bucket. Assign an IAM role to the application to grant access to the S3 bucket. Mount the S3 bucket to the application server.
Answer: C
Answer: C
Explanation:
https://aws.amazon.com/blogs/big-data/manage-fine-grained-access-control-using-aws-lake-formation/
Answer: CE
Explanation:
"An active, long-running transaction can slow the process of creating the read replica. We recommend that you wait for long-running transactions to complete
before creating a read replica. If you create multiple read replicas in parallel from the same source DB instance, Amazon RDS takes only one snapshot at the start
of the first create action. When creating a read replica, there are a few things to consider. First, you must enable automatic backups on the source DB instance by
setting the backup retention period to a value other than 0. This requirement also applies to a read replica that is the source DB instance for another read replica"
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html
A. Amazon DynamoDB
B. Amazon RDS for MySQL
C. MySQL-compatible Amazon Aurora Serverless
D. MySQL deployed on Amazon EC2 in an Auto Scaling group
Answer: B
Answer: C
Explanation:
By configuring scheduled scaling, the solutions architect can set the Auto Scaling group to automatically scale up to the desired compute level at a specific time
(IAM) when the batch job starts and then automatically scale down after the job is complete. This will allow the desired EC2 capacity to be reached quickly and
also help in reducing the cost.
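A minimal boto3 sketch of a pair of scheduled actions around a nightly job; the group name, capacities, and UTC cron expressions are hypothetical:

import boto3

autoscaling = boto3.client("autoscaling")

# Scale out shortly before the nightly batch job starts.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="batch-asg",
    ScheduledActionName="scale-out-for-batch",
    Recurrence="45 1 * * *",   # 01:45 UTC daily
    DesiredCapacity=20,
)

# Scale back in after the job completes.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="batch-asg",
    ScheduledActionName="scale-in-after-batch",
    Recurrence="0 4 * * *",    # 04:00 UTC daily
    DesiredCapacity=2,
)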
Answer: CE
Explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/multivalue-versus-simple-policies/
Answer: C
A. Build out the workflow in AWS Glue. Use AWS Glue to invoke AWS Lambda functions to process the workflow steps.
B. Build out the workflow in AWS Step Functions. Deploy the application on Amazon EC2 instances. Use Step Functions to invoke the workflow steps on the EC2
instances.
C. Build out the workflow in Amazon EventBridge. Use EventBridge to invoke AWS Lambda functions on a schedule to process the workflow steps.
D. Build out the workflow in AWS Step Functions. Use Step Functions to create a state machine. Use the state machine to invoke AWS Lambda functions to
process the workflow steps.
Answer: C
C. Configure a Lambda@Edge function to send specific objects to users based on the User-Agent header.
D. Configure AWS Global Accelerator. Forward requests to a Network Load Balancer (NLB). Configure the NLB to set up host-based routing to different EC2
instances.
E. Configure AWS Global Accelerator. Forward requests to a Network Load Balancer (NLB). Configure the NLB to set up path-based routing to different EC2
instances.
Answer: AC
Explanation:
For C: Lambda@Edge can help improve your users' experience with your websites and web applications across the world by letting you personalize content for
them without sacrificing performance. Real-time image transformation: you can customize your users' experience by transforming images on the fly based on the
user characteristics. For example, you can resize images based on the viewer's device type (mobile, desktop, or tablet). You can also cache the transformed
images at CloudFront edge locations to further improve performance when delivering images.
https://aws.amazon.com/lambda/edge/
A. Launch an Amazon EC2 instance to serve as the HTTPS endpoint and to process the messages. Configure the EC2 instance to save the results to an Amazon
S3 bucket.
B. Create an HTTPS endpoint in Amazon API Gateway. Configure the API Gateway endpoint to invoke an AWS Lambda function to process the messages and
save the results to an Amazon DynamoDB table.
C. Use Amazon Route 53 to direct incoming sensor messages to an AWS Lambda function. Configure the Lambda function to process the messages and save the
results to an Amazon DynamoDB table.
D. Create a gateway VPC endpoint for Amazon S3. Configure a Site-to-Site VPN connection from the facility network to the VPC so that sensor data can be written
directly to an S3 bucket by way of the VPC endpoint.
Answer: B
Answer: B
Explanation:
https://aws.amazon.com/pinpoint/product-details/sms/ Two-Way Messaging: Receive SMS messages from your customers and reply back to them in a chat-like
interactive experience. With Amazon Pinpoint, you can create automatic responses when customers send you messages that contain certain keywords. You can
even use Amazon Lex to create conversational bots. A majority of mobile phone users read incoming SMS messages almost immediately after receiving them. If
you need to be able to provide your customers with urgent or important information, SMS messaging may be the right solution for you. You can use Amazon
Pinpoint to create targeted groups of customers, and then send them campaign-based messages. You can also use Amazon Pinpoint to send direct messages,
such as appointment confirmations, order updates, and one-time passwords.
Answer: B
A. Update the bucket policy to deny if the PutObject does not have an s3:x-amz-acl header set.
B. Update the bucket policy to deny if the PutObject does not have an s3:x-amz-acl header set to private.
C. Update the bucket policy to deny if the PutObject does not have an aws:SecureTransport header set to true.
D. Update the bucket policy to deny if the PutObject does not have an x-amz-server-side-encryption header set.
Answer: D
Explanation:
https://aws.amazon.com/blogs/security/how-to-prevent-uploads-of-unencrypted-objects-to-amazon-s3/#:~:text=
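A minimal sketch of such a bucket policy applied with boto3; the bucket name is hypothetical. The Null condition denies any PutObject request that arrives without the x-amz-server-side-encryption header:

import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedPuts",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::example-bucket/*",  # hypothetical bucket
        # True when the header is absent, so unencrypted uploads are denied.
        "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
    }],
}

boto3.client("s3").put_bucket_policy(Bucket="example-bucket",
                                     Policy=json.dumps(policy))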
A. Scale the EC2 instances by using elastic resize. Scale the DB instances to zero outside of business hours.
B. Explore AWS Marketplace for partner solutions that will automatically start and stop the EC2 instances and DB instances on a schedule.
C. Launch another EC2 instance. Configure a crontab schedule to run shell scripts that will start and stop the existing EC2 instances and DB instances on a
schedule.
D. Create an AWS Lambda function that will start and stop the EC2 instances and DB instances. Configure Amazon EventBridge to invoke the Lambda function
on a schedule.
Answer: D
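The Lambda-plus-EventBridge option can be sketched as follows, with hypothetical resource identifiers. An EventBridge rule with a cron expression such as cron(0 18 ? * MON-FRI *) would invoke the stop function each evening, with a mirror-image start function each morning.

```python
import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")

# Hypothetical resource identifiers for the development environment.
INSTANCE_IDS = ["i-0123456789abcdef0"]
DB_INSTANCE_ID = "dev-db"

def stop_handler(event, context):
    """Invoked by an EventBridge schedule rule at close of business."""
    ec2.stop_instances(InstanceIds=INSTANCE_IDS)
    rds.stop_db_instance(DBInstanceIdentifier=DB_INSTANCE_ID)

def start_handler(event, context):
    """Invoked by a second EventBridge rule before business hours."""
    ec2.start_instances(InstanceIds=INSTANCE_IDS)
    rds.start_db_instance(DBInstanceIdentifier=DB_INSTANCE_ID)
```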
Answer: A
A. Create a network ACL for the public subnet. Add a rule to deny outbound traffic to 0.0.0.0/0 on port 3306.
B. Create a security group for the DB instance. Add a rule to allow traffic from the public subnet CIDR block on port 3306.
C. Create a security group for the web servers in the public subnet. Add a rule to allow traffic from 0.0.0.0/0 on port 443.
D. Create a security group for the DB instance. Add a rule to allow traffic from the web servers' security group on port 3306.
E. Create a security group for the DB instance. Add a rule to deny all traffic except traffic from the web servers' security group on port 3306.
Answer: CD
A. Create an AWS Backup plan to back up the DynamoDB table on the first day of each month. Specify a lifecycle policy that transitions the backup to cold storage after 6 months. Set the retention period for each backup to 7 years.
B. Create a DynamoDB on-demand backup of the DynamoDB table on the first day of each month. Transition the backup to Amazon S3 Glacier Flexible Retrieval after 6 months. Create an S3 Lifecycle policy to delete backups that are older than 7 years.
C. Use the AWS SDK to develop a script that creates an on-demand backup of the DynamoDB table. Set up an Amazon EventBridge rule that runs the script on the first day of each month. Create a second script that will run on the second day of each month to transition DynamoDB backups that are older than 6 months to cold storage and to delete backups that are older than 7 years.
D. Use the AWS CLI to create an on-demand backup of the DynamoDB table. Set up an Amazon EventBridge rule that runs the command on the first day of each month with a cron expression. Specify in the command to transition the backups to cold storage after 6 months and to delete the backups after 7 years.
Answer: A
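Answer A sketched with boto3, using hypothetical names. A separate call to create_backup_selection would assign the DynamoDB table's ARN to this plan; the lifecycle values mirror the 6-month/7-year requirement.

```python
import boto3

backup = boto3.client("backup")

# Monthly backup plan: cold storage after ~6 months, deletion after ~7 years.
backup.create_backup_plan(BackupPlan={
    "BackupPlanName": "dynamodb-monthly",        # hypothetical plan name
    "Rules": [{
        "RuleName": "monthly-7yr-retention",
        "TargetBackupVaultName": "Default",
        "ScheduleExpression": "cron(0 5 1 * ? *)",  # first day of each month
        "Lifecycle": {
            "MoveToColdStorageAfterDays": 180,      # roughly 6 months
            "DeleteAfterDays": 2555,                # roughly 7 years
        },
    }],
})
```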
A. Update the Kinesis Data Streams default settings by modifying the data retention period.
B. Update the application to use the Kinesis Producer Library (KPL) to send the data to Kinesis Data Streams.
C. Update the number of Kinesis shards to handle the throughput of the data that is sent to Kinesis Data Streams.
D. Turn on S3 Versioning within the S3 bucket to preserve every version of every object that is ingested in the S3 bucket.
Answer: A
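For reference, changing the retention setting named in answer A is a one-call operation; the stream name and the 7-day window below are placeholders. The default retention is 24 hours, and extending it gives slow consumers time to reprocess records before they expire.

```python
import boto3

kinesis = boto3.client("kinesis")

# Extend retention from the 24-hour default to 7 days (168 hours).
kinesis.increase_stream_retention_period(
    StreamName="sensor-stream",   # hypothetical stream name
    RetentionPeriodHours=168,
)
```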
A. Use the Amazon S3 Standard storage class. Create an S3 Lifecycle policy to move infrequently accessed data to S3 Glacier.
B. Use the Amazon S3 Standard storage class. Create an S3 Lifecycle policy to move infrequently accessed data to S3 Standard-Infrequent Access (S3 Standard-IA).
C. Use the Amazon Elastic File System (Amazon EFS) Standard storage class. Create a lifecycle management policy to move infrequently accessed data to EFS Standard-Infrequent Access (EFS Standard-IA).
D. Use the Amazon Elastic File System (Amazon EFS) One Zone storage class. Create a lifecycle management policy to move infrequently accessed data to EFS One Zone-Infrequent Access (EFS One Zone-IA).
Answer: C
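The EFS lifecycle policy from answer C can be sketched in one call; the file system ID and the 30-day window are placeholders for illustration.

```python
import boto3

efs = boto3.client("efs")

# Transition files that have not been accessed for 30 days to EFS Standard-IA.
efs.put_lifecycle_configuration(
    FileSystemId="fs-0123456789abcdef0",  # hypothetical file system ID
    LifecyclePolicies=[{"TransitionToIA": "AFTER_30_DAYS"}],
)
```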
Answer: A
A. Extend the Auto Scaling groups for the web tier and the application tier to deploy instances in Availability Zones in a second Region. Use an Aurora global database to deploy the database in the primary Region and the second Region. Use Amazon Route 53 health checks with a failover routing policy to the second Region.
B. Deploy the web tier and the application tier to a second Region. Add an Aurora PostgreSQL cross-Region Aurora Replica in the second Region. Use Amazon Route 53 health checks with a failover routing policy to the second Region. Promote the secondary to primary as needed.
C. Deploy the web tier and the application tier to a second Region. Create an Aurora PostgreSQL database in the second Region. Use AWS Database Migration Service (AWS DMS) to replicate the primary database to the second Region. Use Amazon Route 53 health checks with a failover routing policy to the second Region.
D. Deploy the web tier and the application tier to a second Region. Use an Amazon Aurora global database to deploy the database in the primary Region and the second Region. Use Amazon Route 53 health checks with a failover routing policy to the second Region. Promote the secondary to primary as needed.
Answer: A
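To illustrate the Route 53 piece these options share, a minimal boto3 sketch of a failover routing record tied to a health check. All identifiers (hosted zone, record name, health check ID, load balancer DNS name) are hypothetical; a matching record with Failover="SECONDARY" would point at the second Region.

```python
import boto3

route53 = boto3.client("route53")

# Primary record: served while the associated health check passes.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",       # hypothetical hosted zone
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com.",
            "Type": "CNAME",
            "SetIdentifier": "primary",
            "Failover": "PRIMARY",
            "TTL": 60,
            "ResourceRecords": [
                {"Value": "primary-alb-123.us-east-1.elb.amazonaws.com"}
            ],
            "HealthCheckId": "11111111-2222-3333-4444-555555555555",
        },
    }]},
)
```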
C. Create an Amazon EC2 instance-based Docker cluster to handle the dynamic application load.
D. Create an Amazon Elastic Container Service (Amazon ECS) cluster with a Fargate launch type to handle the dynamic application load.
E. Create an Amazon Elastic Container Service (Amazon ECS) cluster with an Amazon EC2 launch type to handle the dynamic application load.
Answer: AD
Explanation:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html
* 1. Relational database: RDS
* 2. Container-based applications: ECS
"Amazon ECS enables you to launch and stop your container-based applications by using simple API calls. You can also retrieve the state of your cluster from a
centralized service and have access to many familiar Amazon EC2 features."
* 3. Little manual intervention: Fargate
You can run your tasks and services on a serverless infrastructure that is managed by AWS Fargate. Alternatively, for more control over your infrastructure, you
can run your tasks and services on a cluster of Amazon EC2 instances that you manage.
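As a concrete (hypothetical) illustration of the Fargate launch type: launching a task on a serverless ECS cluster with boto3, where the cluster, task definition, and subnet IDs are placeholders.

```python
import boto3

ecs = boto3.client("ecs")

# Run one task on AWS Fargate; no EC2 instances to provision or manage.
ecs.run_task(
    cluster="app-cluster",            # hypothetical cluster name
    launchType="FARGATE",
    taskDefinition="web-app:1",       # hypothetical task definition revision
    count=1,
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-0123456789abcdef0"],
        "assignPublicIp": "ENABLED",
    }},
)
```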
Answer: A
Explanation:
https://aws.amazon.com/fsx/lustre/
Amazon FSx for Lustre is a fully managed service that provides cost-effective, high-performance, scalable storage for compute workloads. Many workloads such
as machine learning, high performance computing (HPC), video rendering, and financial simulations depend on compute instances accessing the same set of data
through high-performance shared storage.
Answer: D
Explanation:
Use AWS Batch on Amazon EC2. AWS Batch is a fully managed batch processing service that can be used to easily run batch jobs on Amazon EC2 instances. It
can scale the number of instances to match the workload, allowing the batch job to be completed in the desired time frame with minimal operational overhead.
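A minimal sketch of submitting work to AWS Batch with boto3; the job, queue, and definition names are hypothetical, and arrayProperties fans the job out into parallel child jobs so Batch can scale the underlying EC2 capacity to meet the deadline.

```python
import boto3

batch = boto3.client("batch")

# Submit an array job of 100 parallel children to a Batch job queue.
batch.submit_job(
    jobName="nightly-render",        # hypothetical job name
    jobQueue="ec2-spot-queue",       # hypothetical queue backed by EC2 capacity
    jobDefinition="render-job:3",    # hypothetical job definition revision
    arrayProperties={"size": 100},   # each child processes one slice of work
)
```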
Answer: C
during peak traffic hours. The current architecture includes the following:
• A group of Amazon EC2 instances that run in an Amazon EC2 Auto Scaling group to collect orders from the application
• Another group of EC2 instances that run in an Amazon EC2 Auto Scaling group to fulfill orders
The order collection process occurs quickly, but the order fulfillment process can take longer. Data must not be lost because of a scaling event.
A solutions architect must ensure that the order collection process and the order fulfillment process can both scale properly during peak traffic hours. The solution
must optimize utilization of the company’s AWS resources.
Which solution meets these requirements?
A. Use Amazon CloudWatch metrics to monitor the CPU of each instance in the Auto Scaling groups. Configure each Auto Scaling group's minimum capacity according to peak workload values.
B. Use Amazon CloudWatch metrics to monitor the CPU of each instance in the Auto Scaling groups. Configure a CloudWatch alarm to invoke an Amazon Simple Notification Service (Amazon SNS) topic that creates additional Auto Scaling groups on demand.
C. Provision two Amazon Simple Queue Service (Amazon SQS) queues: one for order collection and another for order fulfillment. Configure the EC2 instances to poll their respective queues. Scale the Auto Scaling groups based on notifications that the queues send.
D. Provision two Amazon Simple Queue Service (Amazon SQS) queues: one for order collection and another for order fulfillment. Configure the EC2 instances to poll their respective queues. Create a metric based on a backlog per instance calculation. Scale the Auto Scaling groups based on this metric.
Answer: D
Explanation:
The number of instances in your Auto Scaling group can be driven by how long it takes to process a message and the acceptable amount of latency (queue delay).
The solution is to use a backlog per instance metric with the target value being the acceptable backlog per instance to maintain.
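A minimal sketch of how that custom metric could be produced, assuming a hypothetical scheduled Lambda function, queue URL, and Auto Scaling group name. A target tracking policy on BacklogPerInstance then scales each group toward the acceptable backlog.

```python
import boto3

sqs = boto3.client("sqs")
autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/fulfillment"  # hypothetical
ASG_NAME = "fulfillment-asg"                                             # hypothetical

def publish_backlog_per_instance(event, context):
    # Queue depth: messages waiting to be processed.
    backlog = int(sqs.get_queue_attributes(
        QueueUrl=QUEUE_URL,
        AttributeNames=["ApproximateNumberOfMessages"],
    )["Attributes"]["ApproximateNumberOfMessages"])

    # Count instances currently in service in the Auto Scaling group.
    group = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[ASG_NAME])["AutoScalingGroups"][0]
    running = max(1, sum(1 for i in group["Instances"]
                         if i["LifecycleState"] == "InService"))

    # Publish backlog per instance as a custom CloudWatch metric.
    cloudwatch.put_metric_data(
        Namespace="OrderPipeline",
        MetricData=[{"MetricName": "BacklogPerInstance",
                     "Value": backlog / running, "Unit": "Count"}],
    )
```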
A. Create a security group for the web servers and allow port 443 from 0.0.0.0/0. Create a security group for the MySQL servers and allow port 3306 from the web servers security group.
B. Create a network ACL for the web servers and allow port 443 from 0.0.0.0/0. Create a network ACL for the MySQL servers and allow port 3306 from the web servers security group.
C. Create a security group for the web servers and allow port 443 from the load balancer. Create a security group for the MySQL servers and allow port 3306 from the web servers security group.
D. Create a network ACL for the web servers and allow port 443 from the load balancer. Create a network ACL for the MySQL servers and allow port 3306 from the web servers security group.
Answer: C
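Option C expressed as a hedged boto3 sketch; all security group IDs are placeholders. Referencing the upstream tier's security group, rather than a CIDR range, is what keeps each port closed to everything except the intended tier.

```python
import boto3

ec2 = boto3.client("ec2")

LB_SG = "sg-0aaaaaaaaaaaaaaa0"   # hypothetical load balancer security group
WEB_SG = "sg-0bbbbbbbbbbbbbbb0"  # hypothetical web tier security group
DB_SG = "sg-0ccccccccccccccc0"   # hypothetical MySQL security group

# Web servers accept HTTPS only from the load balancer's security group.
ec2.authorize_security_group_ingress(
    GroupId=WEB_SG,
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                    "UserIdGroupPairs": [{"GroupId": LB_SG}]}],
)

# MySQL servers accept port 3306 only from the web servers' security group.
ec2.authorize_security_group_ingress(
    GroupId=DB_SG,
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
                    "UserIdGroupPairs": [{"GroupId": WEB_SG}]}],
)
```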
A. Create a database snapshot. Copy the snapshot to a new unencrypted snapshot. Share the new snapshot with the acquiring company's AWS account.
B. Create a database snapshot. Add the acquiring company's AWS account to the KMS key policy. Share the snapshot with the acquiring company's AWS account.
C. Create a database snapshot that uses a different AWS managed KMS key. Add the acquiring company's AWS account to the KMS key alias. Share the snapshot with the acquiring company's AWS account.
D. Create a database snapshot. Download the database snapshot. Upload the database snapshot to an Amazon S3 bucket. Update the S3 bucket policy to allow access from the acquiring company's AWS account.
Answer: A
A. Create an encryption key and store the key in AWS Secrets Manager. Use the key to encrypt the DB instances.
B. Generate a certificate in AWS Certificate Manager (ACM). Enable SSL/TLS on the DB instances by using the certificate.
C. Create a customer master key (CMK) in AWS Key Management Service (AWS KMS). Enable encryption for the DB instances.
D. Generate a certificate in AWS Identity and Access Management (IAM). Enable SSL/TLS on the DB instances by using the certificate.
Answer: C
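A minimal sketch of answer C with boto3; all identifiers are hypothetical. Encryption at rest must be set when the DB instance is created, and ManageMasterUserPassword (which stores the master credential in Secrets Manager) needs a reasonably recent boto3.

```python
import boto3

kms = boto3.client("kms")
rds = boto3.client("rds")

# Create a customer managed KMS key, then encrypt the DB instance with it.
key_arn = kms.create_key(Description="rds-at-rest")["KeyMetadata"]["Arn"]

rds.create_db_instance(
    DBInstanceIdentifier="app-db",     # hypothetical values throughout
    DBInstanceClass="db.t3.medium",
    Engine="mysql",
    AllocatedStorage=100,
    MasterUsername="admin",
    ManageMasterUserPassword=True,     # RDS stores the password in Secrets Manager
    StorageEncrypted=True,             # encryption at rest
    KmsKeyId=key_arn,
)
```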
A. Host a dynamic contact form page in Amazon Elastic Container Service (Amazon ECS). Set up Amazon Simple Email Service (Amazon SES) to connect to any third-party email provider.
B. Create an Amazon API Gateway endpoint with an AWS Lambda backend that makes a call to Amazon Simple Email Service (Amazon SES).
C. Convert the static webpage to dynamic by deploying Amazon Lightsail. Use client-side scripting to build the contact form. Integrate the form with Amazon WorkMail.
D. Create a t2.micro Amazon EC2 instance. Deploy a LAMP (Linux, Apache, MySQL, PHP/Perl/Python) stack to host the webpage. Use client-side scripting to build the contact form. Integrate the form with Amazon WorkMail.
Answer: D
Explanation:
Create a t2.micro Amazon EC2 instance. Deploy a LAMP (Linux, Apache, MySQL, PHP/Perl/Python) stack to host the webpage. Use client-side scripting to build the contact form. Integrate the form with Amazon WorkMail. This solution provides the components needed to host the contact form page and integrate it with Amazon WorkMail at the lowest cost. Option A requires Amazon ECS, option B requires Amazon API Gateway, and option C requires Amazon Lightsail, all of which cost more than a single small EC2 instance in this scenario.
Using AWS Lambda with Amazon API Gateway - AWS Lambda https://docs.aws.amazon.com/lambda/latest/dg/services-apigateway.html
AWS Lambda FAQs https://aws.amazon.com/lambda/faqs/
A. Write a custom AWS Lambda function to generate the thumbnail and alert the user. Use the image upload process as an event source to invoke the Lambda function.
B. Create an AWS Step Functions workflow. Configure Step Functions to handle the orchestration between the application tiers and alert the user when thumbnail generation is complete.
C. Create an Amazon Simple Queue Service (Amazon SQS) message queue. As images are uploaded, place a message on the SQS queue for thumbnail generation. Alert the user through an application message that the image was received.
D. Create Amazon Simple Notification Service (Amazon SNS) notification topics and subscriptions. Use one subscription with the application to generate the thumbnail after the image upload is complete. Use a second subscription to message the user's mobile app by way of a push notification after thumbnail generation is complete.
Answer: C
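To make the decoupling in answer C concrete, a small hypothetical sketch: the upload path only enqueues a message and acknowledges the user, while a separate consumer fleet drains the queue to render thumbnails at its own pace.

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/thumbnails"  # hypothetical

def on_image_uploaded(bucket: str, key: str) -> None:
    # Queue the thumbnail job, then return immediately so the application
    # can tell the user the image was received.
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"bucket": bucket, "key": key}),
    )
```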
Answer: B
Answer: C
Visit Our Site to Purchase the Full Set of Actual AWS-Solution-Architect-Associate Exam Questions With Answers.
We Also Provide Practice Exam Software That Simulates Real Exam Environment And Has Many Self-Assessment Features. Order the AWS-Solution-Architect-Associate Product From:
https://www.2passeasy.com/dumps/AWS-Solution-Architect-Associate/
* AWS-Solution-Architect-Associate Most Realistic Questions that Guarantee you a Pass on Your First Try
* AWS-Solution-Architect-Associate Practice Test Questions in Multiple Choice Formats and Updates for 1 Year