(Nov-2023) New PassLeader SAA-C03 Exam Dumps
➢ Vendor: Amazon
A. Use Amazon Kinesis Data Firehose to deliver streaming data to Amazon S3.
B. Use AWS Glue to deliver streaming data to Amazon S3.
C. Use AWS Lambda to deliver streaming data and store the data to Amazon S3.
D. Use AWS Database Migration Service (AWS DMS) to deliver streaming data to Amazon S3.
Answer: A
Explanation:
Amazon Kinesis Data Firehose: Capture, transform, and load data streams into AWS data stores (S3) in near real-time.
A. Use AWS Systems Manager templates to control which AWS services each department can use.
B. Create organizational units (OUs) for each department in AWS Organizations. Attach service control policies (SCPs)
to the OUs.
C. Use AWS CloudFormation to automatically provision only the AWS services that each department can use.
D. Set up a list of products in AWS Service Catalog in the AWS accounts to manage and control the usage of
specific AWS services.
Answer: B
A. Deploy a NAT instance in the VPC. Route all the internet-based traffic through the NAT instance.
B. Deploy a NAT gateway in the public subnets. Modify the private subnet route table to direct all internet-bound
traffic to the NAT gateway.
C. Configure an internet gateway and attach it to the VPC. Modify the private subnet route table to direct internet-bound
traffic to the internet gateway.
D. Configure a virtual private gateway and attach it to the VPC. Modify the private subnet route table to direct
internet-bound traffic to the virtual private gateway.
Answer: B
Answer: BD
Explanation:
To decrypt environment variables encrypted with AWS KMS, Lambda needs to be granted permissions to call KMS APIs.
This is done in two places:
- The Lambda execution role needs kms:Decrypt and kms:GenerateDataKey permissions added. The execution role
governs what AWS services the function code can access.
- The KMS key policy needs to allow the Lambda execution role to have kms:Decrypt and kms:GenerateDataKey
permissions for that specific key. This allows the execution role to use that particular key.
A. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Glacier after 7 days.
B. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Standard-Infrequent Access (S3
Standard-IA) after 7 days.
C. Use S3 Intelligent-Tiering. Configure S3 Intelligent-Tiering to transition the reports to S3 Standard-Infrequent
Access (S3 Standard-IA) and S3 Glacier.
D. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Glacier Deep Archive after 7 days.
Answer: A
Explanation:
Amazon S3 Glacier:
- Expedited Retrieval: Provides access to data within 1-5 minutes.
- Standard Retrieval: Provides access to data within 3-5 hours.
- Bulk Retrieval: Provides access to data within 5-12 hours.
Amazon S3 Glacier Deep Archive:
- Standard Retrieval: Provides access to data within 12 hours.
- Bulk Retrieval: Provides access to data within 48 hours.
Answer: B
Explanation:
EC2 Instance Savings Plans give you the flexibility to change your usage between instances WITHIN a family in that
region.
https://aws.amazon.com/savingsplans/compute-pricing/
A. Configure Amazon Macie in each Region. Create a job to analyze the data that is in Amazon S3.
B. Configure AWS Security Hub for all Regions. Create an AWS Config rule to analyze the data that is in Amazon
S3.
C. Configure Amazon Inspector to analyze the data that is in Amazon S3.
D. Configure Amazon GuardDuty to analyze the data that is in Amazon S3.
Answer: A
Explanation:
Amazon Macie is purpose-built for discovering and classifying sensitive data such as PII in S3, which makes it the optimal
service here. Macie can be enabled directly in the required Regions rather than across all Regions, which minimizes
overhead. A Macie classification job can scan the specified S3 buckets once or on a recurring schedule. Security Hub is
for aggregating security findings across AWS accounts, not PII discovery, and adds more overhead than needed. Inspector
and GuardDuty provide broader security capabilities and are not built for PII discovery in S3 buckets.
A. Use the compute optimized instance family for the application. Use the memory optimized instance family for the
database.
B. Use the storage optimized instance family for both the application and the database.
C. Use the memory optimized instance family for both the application and the database.
D. Use the high performance computing (HPC) optimized instance family for the application. Use the memory
optimized instance family for the database.
Answer: C
Explanation:
Since both the app and database have high memory needs, the memory optimized family like R5 instances meet those
requirements well. Using the same instance family simplifies management and operations, rather than mixing instance
types. Compute optimized instances may not provide enough memory for the SAP application's needs. Storage optimized
instances target high disk I/O rather than the memory both tiers require. HPC instances are overprovisioned for the SAP
application.
A. Implement an interface VPC endpoint for Amazon SQS. Configure the endpoint to use the private subnets. Add
to the endpoint a security group that has an inbound access rule that allows traffic from the EC2 instances that are in the
private subnets.
B. Implement an interface VPC endpoint for Amazon SQS. Configure the endpoint to use the public subnets. Attach
to the interface endpoint a VPC endpoint policy that allows access from the EC2 instances that are in the private subnets.
C. Implement an interface VPC endpoint for Amazon SQS. Configure the endpoint to use the public subnets. Attach
an Amazon SQS access policy to the interface VPC endpoint that allows requests from only a specified VPC endpoint.
D. Implement a gateway endpoint for Amazon SQS. Add a NAT gateway to the private subnets. Attach an IAM role
to the EC2 instances that allows access to the SQS queue.
Answer: A
Explanation:
An interface VPC endpoint is a private way to connect to AWS services without having to expose your VPC to the public
internet. This is the most secure way to connect to Amazon SQS from the private subnets. Configuring the endpoint to
use the private subnets ensures that the traffic between the EC2 instances and the SQS queue is only within the VPC.
This helps to protect the traffic from being intercepted by a malicious actor. Adding a security group to the endpoint that
has an inbound access rule that allows traffic from the EC2 instances that are in the private subnets further restricts the
traffic to only the authorized sources. This helps to prevent unauthorized access to the SQS queue.
A. Create an IAM role to read the DynamoDB tables. Associate the role with the application instances by referencing
an instance profile.
B. Create an IAM role that has the required permissions to read and write from the DynamoDB tables. Add the role
to the EC2 instance profile, and associate the instance profile with the application instances.
C. Use the parameter section in the AWS CloudFormation template to have the user input access and secret keys
from an already-created IAM user that has the required permissions to read and write from the DynamoDB tables.
D. Create an IAM user in the AWS CloudFormation template that has the required permissions to read and write
from the DynamoDB tables. Use the GetAtt function to retrieve the access and secret keys, and pass them to the
application instances through the user data.
Answer: B
A. Use Amazon Athena to process the S3 data. Use AWS Glue with the Amazon Redshift data to enrich the S3
data.
B. Use Amazon EMR to process the S3 data. Use Amazon EMR with the Amazon Redshift data to enrich the S3
data.
C. Use Amazon EMR to process the S3 data. Use Amazon Kinesis Data Streams to move the S3 data into Amazon
Redshift so that the data can be enriched.
D. Use AWS Glue to process the S3 data. Use AWS Lake Formation with the Amazon Redshift data to enrich the
S3 data.
Answer: B
Explanation:
Use Amazon EMR to process the semi-structured data in Amazon S3. EMR provides a managed Hadoop framework
optimized for processing large datasets in S3. EMR supports parallel data processing across multiple nodes to speed up
the processing. EMR can integrate directly with Amazon Redshift using the EMR-Redshift integration. This allows
querying the Redshift data from EMR and joining it with the S3 data. This enables enriching the semi-structured S3 data
with the information stored in Redshift.
A. Implement AWS Transit Gateway to connect the VPCs. Update the route tables of each VPC to use the transit
gateway for inter-VPC communication.
B. Implement an AWS Site-to-Site VPN tunnel between the VPCs. Update the route tables of each VPC to use the
VPN tunnel for inter-VPC communication.
C. Set up a VPC peering connection between the VPCs. Update the route tables of each VPC to use the VPC
peering connection for inter-VPC communication.
D. Set up a 1 GB AWS Direct Connect connection between the VPCs. Update the route tables of each VPC to use
the Direct Connect connection for inter-VPC communication.
Answer: C
Explanation:
VPC peering provides private connectivity between VPCs without traversing the public internet and has no hourly charge.
Intra-Region data transfer over a peering connection is free within the same Availability Zone and billed at a low per-GB
rate across Availability Zones, so 500 GB/month of inter-VPC traffic is inexpensive. Transit Gateway (Option A) incurs
hourly charges plus data transfer fees, making it more costly than peering. Site-to-Site VPN (Option B) also incurs hourly
charges and data transfer fees. Direct Connect (Option D) has high fixed charges and would be overkill for this use case.
A. Select a specific AWS generated tag in the AWS Billing console.
B. Select a specific user-defined tag in the AWS Billing console.
C. Select a specific user-defined tag in the AWS Resource Groups console.
D. Activate the selected tag from each AWS account.
E. Activate the selected tag from the Organizations management account.
Answer: BE
Explanation:
User-defined tags were created by each product team to identify resources. Selecting the relevant tag in the Billing
console will group costs. The tag must be activated from the Organizations management account to consolidate billing
across all accounts. AWS generated tags are predefined by AWS and won't align to product lines. Resource Groups
(Option C) helps manage resources but not billing. Activating the tag from each account (Option D) is not needed since
Organizations centralizes billing.
A. Provision the AWS accounts by using AWS Control Tower. Use account drift notifications to identify the changes
to the OU hierarchy.
B. Provision the AWS accounts by using AWS Control Tower. Use AWS Config aggregated rules to identify the
changes to the OU hierarchy.
C. Use AWS Service Catalog to create accounts in Organizations. Use an AWS CloudTrail organization trail to
identify the changes to the OU hierarchy.
D. Use AWS CloudFormation templates to create accounts in Organizations. Use the drift detection operation on a
stack to identify the changes to the OU hierarchy.
Answer: A
Explanation:
Control Tower has several key advantages here:
- Fully managed service simplifies multi-account setup.
- Built-in account drift notifications detect OU changes automatically.
- More scalable and less complex than Config rules or CloudTrail.
- Better security and compliance guardrails than custom options.
- Lower operational overhead compared to other solutions.
A. Set up a DynamoDB Accelerator (DAX) cluster. Route all read requests through DAX.
B. Set up Amazon ElastiCache for Redis between the DynamoDB table and the web application. Route all read
requests through Redis.
C. Set up Amazon ElastiCache for Memcached between the DynamoDB table and the web application. Route all
read requests through Memcached.
D. Set up Amazon DynamoDB Streams on the table, and have AWS Lambda read from the table and populate
Amazon ElastiCache. Route all read requests through ElastiCache.
Answer: A
Explanation:
DAX provides a DynamoDB-compatible caching layer to reduce read latency. It is purpose-built for accelerating
DynamoDB workloads. Using DAX requires minimal application changes - only read requests are routed through it. DAX
handles caching logic automatically without needing complex integration code. ElastiCache Redis/Memcached (Options
B/C) require more integration work to sync DynamoDB data. Using Lambda and Streams to populate ElastiCache (Option
D) is a complex event-driven approach requiring ongoing maintenance. DAX plugs in seamlessly to accelerate
DynamoDB with very little operational overhead.
Answer: AB
Explanation:
https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-ddb.html
A. Use Amazon CloudWatch Container Insights to collect and group the cluster information.
B. Use Amazon EKS Connector to register and connect all Kubernetes clusters.
C. Use AWS Systems Manager to collect and view the cluster information.
D. Use Amazon EKS Anywhere as the primary cluster to view the other clusters with native Kubernetes commands.
Answer: B
Explanation:
You can use Amazon EKS Connector to register and connect any conformant Kubernetes cluster to AWS and visualize
it in the Amazon EKS console. After a cluster is connected, you can see the status, configuration, and workloads for that
cluster in the Amazon EKS console.
https://docs.aws.amazon.com/eks/latest/userguide/eks-connector.html
A. Store sensitive data in an Amazon Elastic Block Store (Amazon EBS) volume. Use EBS encryption to encrypt
the data. Use an IAM instance role to restrict access.
B. Store sensitive data in Amazon RDS for MySQL. Use AWS Key Management Service (AWS KMS) client-side
encryption to encrypt the data.
C. Store sensitive data in Amazon S3. Use AWS Key Management Service (AWS KMS) server-side encryption to
encrypt the data. Use S3 bucket policies to restrict access.
D. Store sensitive data in Amazon FSx for Windows Server. Mount the file share on application servers. Use
Windows file permissions to restrict access.
Answer: B
Explanation:
RDS MySQL provides a fully managed database service well suited for an ecommerce application. AWS KMS client-side
encryption allows encrypting sensitive data before it hits the database. The data remains encrypted at rest. This protects
sensitive customer data from database admins and privileged users. EBS encryption (Option A) protects data at rest but
not in use. IAM roles don't prevent admin access. S3 (Option C) encrypts data at rest on the server side. Bucket policies
don't restrict admin access. FSx file permissions (Option D) don't prevent admin access to unencrypted data.
A. Use native MySQL tools to migrate the database to Amazon RDS for MySQL. Configure elastic storage scaling.
B. Migrate the database to Amazon Redshift by using the mysqldump utility. Turn on Auto Scaling for the Amazon
Redshift cluster.
C. Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon Aurora. Turn on Aurora
Auto Scaling.
D. Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon DynamoDB. Configure
an Auto Scaling policy.
Answer: C
Explanation:
DMS provides an easy migration path from MySQL to Aurora while minimizing downtime. Aurora is a MySQL-compatible
relational database service that will maintain compatibility with the company's applications. Aurora Auto Scaling allows
the database to automatically scale up and down based on demand to handle increased workloads. RDS MySQL (Option
A) does not scale as well as the Aurora architecture. Redshift (Option B) is for analytics, not transactional data, and may
not be compatible. DynamoDB (Option D) is a NoSQL datastore and lacks MySQL compatibility.
A. Create an Amazon S3 bucket. Allow access from all the EC2 instances in the VPC.
B. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system from each EC2
instance.
C. Create a file system on a Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volume. Attach
the EBS volume to all the EC2 instances.
D. Create file systems on Amazon Elastic Block Store (Amazon EBS) volumes that are attached to each EC2
instance. Synchronize the EBS volumes across the different EC2 instances.
Answer: B
Explanation:
How is Amazon EFS different than Amazon S3? Amazon EFS provides shared access to data using a traditional file
sharing permissions model and hierarchical directory structure via the NFSv4 protocol. Applications that access data
using a standard file system interface provided through the operating system can use Amazon EFS to take advantage of
the scalability and reliability of file storage in the cloud without writing any new code or adjusting applications. Amazon
S3 is an object storage platform that uses a simple API for storing and accessing data. Applications that do not require a
file system structure and are designed to work with object storage can use Amazon S3 as a massively scalable, durable,
low-cost object storage solution.
A. Use Amazon API Gateway with AWS Lambda functions to receive the data from the sensors, process the data,
and store the data in an Amazon DynamoDB table.
B. Use an Elastic Load Balancer that is supported by an Auto Scaling group of Amazon EC2 instances to receive
and process the data from the sensors. Use an Amazon S3 bucket to store the processed data.
C. Use Amazon API Gateway with AWS Lambda functions to receive the data from the sensors, process the data,
and store the data in a Microsoft SQL Server Express database on an Amazon EC2 instance.
D. Use an Elastic Load Balancer that is supported by an Auto Scaling group of Amazon EC2 instances to receive
and process the data from the sensors. Use an Amazon Elastic File System (Amazon EFS) shared file system to store
the processed data.
Answer: A
Answer: B
Explanation:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-scheduled-scaling.html
A. Use a local machine to create a certificate that is signed by the third-party CA. Import the certificate into AWS
Certificate Manager (ACM). Create an HTTP API in Amazon API Gateway with a custom domain. Configure the custom
domain to use the certificate.
B. Create a certificate in AWS Certificate Manager (ACM) that is signed by the third-party CA. Create an HTTP API
in Amazon API Gateway with a custom domain. Configure the custom domain to use the certificate.
C. Use AWS Certificate Manager (ACM) to create a certificate that is signed by the third-party CA. Import the
certificate into AWS Certificate Manager (ACM). Create an AWS Lambda function with a Lambda function URL. Configure
the Lambda function URL to use the certificate.
D. Create a certificate in AWS Certificate Manager (ACM) that is signed by the third-party CA. Create an AWS
Lambda function with a Lambda function URL. Configure the Lambda function URL to use the certificate.
Answer: A
Explanation:
Public certificates issued by AWS Certificate Manager (ACM) are always signed by Amazon's own CA, so ACM cannot
create a certificate signed by a third-party CA. To meet the requirement, the certificate must be created outside ACM,
signed by the third-party CA, imported into ACM, and attached to a custom domain name for the HTTP API in Amazon
API Gateway. Lambda function URLs (options C and D) do not support custom domains with imported certificates.
A. Provision an Amazon DynamoDB database with default read and write capacity settings.
B. Provision an Amazon Aurora database with a minimum capacity of 1 Aurora capacity unit (ACU).
C. Provision an Amazon Aurora Serverless v2 database with a minimum capacity of 1 Aurora capacity unit (ACU).
D. Provision an Amazon RDS for MySQL database with 2 GiB of memory.
Answer: C
Explanation:
Aurora Serverless v2 provides auto scaling, so the database can handle inconsistent workloads and spikes automatically
without admin intervention, and it scales down to its configured minimum capacity when idle to minimize costs. The
minimum of 1 ACU (roughly 2 GiB of memory) is sufficient to replace the on-premises 2 GiB database based on the
information given. Serverless capacity management reduces admin overhead. DynamoDB lacks MySQL compatibility and
requires more hands-on management. RDS and provisioned Aurora require manually resizing instances to scale,
increasing admin overhead.
NEW QUESTION 727
A company wants to use an event-driven programming model with AWS Lambda. The company wants to reduce startup
latency for Lambda functions that run on Java 11. The company does not have strict latency requirements for the
applications. The company wants to reduce cold starts and outlier latencies when a function scales up. Which solution
will meet these requirements MOST cost-effectively?
Answer: D
Explanation:
Lambda SnapStart for Java can improve startup performance for latency-sensitive applications by up to 10x at no extra
cost, typically with no changes to your function code.
https://docs.aws.amazon.com/lambda/latest/dg/snapstart.html
A. Migrate the existing RDS for MySQL database to an Aurora Serverless v2 MySQL database cluster.
B. Migrate the existing RDS for MySQL database to an Aurora MySQL database cluster.
C. Migrate the existing RDS for MySQL database to an Amazon EC2 instance that runs MySQL. Purchase an
instance reservation for the EC2 instance.
D. Migrate the existing RDS for MySQL database to an Amazon Elastic Container Service (Amazon ECS) cluster
that uses MySQL container images to run tasks.
Answer: A
Explanation:
Aurora Serverless v2 scales compute capacity automatically based on actual usage and drops to its configured minimum
capacity when idle, which minimizes costs for intermittent usage. Since the application runs for only 2 hours per week, it
is ideal for a serverless architecture like Aurora Serverless. Aurora Serverless v2 bills per second for the capacity actually
consumed, so an intermittent workload costs far less than a continuously provisioned instance. Aurora Serverless also
provides higher availability than self-managed MySQL on EC2 or ECS. Reserved EC2 instances or ECS tasks still incur
charges when idle, unlike the fine-grained scaling of serverless. Standard Aurora clusters have a fixed minimum instance
size, unlike the auto scaling serverless architecture.
Answer: C
Explanation:
DB cluster deployment can scale read workloads by adding read replicas. This provides increased capacity for read
workloads without impacting the write workload.
A. Private endpoint.
B. Regional endpoint.
C. Interface VPC endpoint.
D. Edge-optimized endpoint.
Answer: D
Explanation:
An edge-optimized API endpoint typically routes requests to the nearest CloudFront Point of Presence (POP), which
could help in cases where your clients are geographically distributed. This is the default endpoint type for API Gateway
REST APIs.
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-endpoint-types.html
Answer: C
Explanation:
AWS Certificate Manager (ACM) provides free public TLS/SSL certificates and handles certificate renewals automatically.
Using DNS validation with ACM is operationally efficient since it automatically makes changes to Route 53 rather than
requiring manual validation steps. ACM integrates natively with CloudFront distributions for delivering HTTPS content.
CloudFront security policies and origin access controls do not issue TLS certificates. Email validation requires manual
steps to approve the domain validation emails for each renewal.
Answer: A
Explanation:
Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for Amazon DynamoDB
that delivers up to a 10 times performance improvement - from milliseconds to microseconds - even at millions of requests
per second.
A. Use the Instance Scheduler on AWS to configure start and stop schedules.
B. Turn off automatic backups. Create weekly manual snapshots of the database.
C. Create a custom AWS Lambda function to start and stop the database based on minimum CPU utilization.
D. Purchase All Upfront reserved DB instances.
Answer: A
Explanation:
The Instance Scheduler on AWS solution automates the starting and stopping of Amazon Elastic Compute Cloud
(Amazon EC2) and Amazon Relational Database Service (Amazon RDS) instances. This solution helps reduce
operational costs by stopping resources that are not in use and starting them when they are needed. The cost savings
can be significant if you leave all of your instances running at full utilization continuously.
https://aws.amazon.com/solutions/implementations/instance-scheduler-on-aws/
A. Configure an Auto Scaling group with an Amazon EC2 instance. Use an Amazon FSx for Lustre file system to
run the application.
B. Host the application on an Amazon EC2 instance. Use an Amazon Elastic Block Store (Amazon EBS) GP2
volume to run the application.
C. Configure an Auto Scaling group with an Amazon EC2 instance. Use an Amazon FSx for OpenZFS file system
to run the application.
D. Host the application on an Amazon EC2 instance. Use an Amazon Elastic Block Store (Amazon EBS) GP3
volume to run the application.
Answer: D
A. Set the Auto Scaling group's minimum capacity to two. Deploy one On-Demand Instance in one Availability Zone
and one On-Demand Instance in a second Availability Zone.
B. Set the Auto Scaling group's minimum capacity to four. Deploy two On-Demand Instances in one Availability
Zone and two On-Demand Instances in a second Availability Zone.
C. Set the Auto Scaling group's minimum capacity to two. Deploy four Spot Instances in one Availability Zone.
D. Set the Auto Scaling group's minimum capacity to four. Deploy two On-Demand Instances in one Availability
Zone and two Spot Instances in a second Availability Zone.
Answer: B
Explanation:
Setting the Auto Scaling group's minimum capacity to four and deploying two On-Demand Instances in each of two
Availability Zones ensures that at least two instances keep serving traffic even if an entire Availability Zone becomes
unavailable, which keeps the application highly available and fault tolerant.
A. Set up a geolocation routing policy. Send the traffic that is near us-west-1 to the on-premises data center. Send
the traffic that is near eu-central-1 to eu-central-1.
B. Set up a simple routing policy that routes all traffic that is near eu-central-1 to eu-central-1 and routes all traffic
that is near the on-premises datacenter to the on-premises data center.
C. Set up a latency routing policy. Associate the policy with us-west-1.
D. Set up a weighted routing policy. Split the traffic evenly between eu-central-1 and the on-premises data center.
Answer: A
Explanation:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy-geo.html
A. Read the data from the tapes on premises. Stage the data in a local NFS storage. Use AWS DataSync to migrate
the data to Amazon S3 Glacier Flexible Retrieval.
B. Use an on-premises backup application to read the data from the tapes and to write directly to Amazon S3 Glacier
Deep Archive.
C. Order multiple AWS Snowball devices that have Tape Gateway. Copy the physical tapes to virtual tapes in
Snowball. Ship the Snowball devices to AWS. Create a lifecycle policy to move the tapes to Amazon S3 Glacier Deep
Archive.
D. Configure an on-premises Tape Gateway. Create virtual tapes in the AWS Cloud. Use backup software to copy
the physical tape to the virtual tape.
Answer: C
Answer: C
Explanation:
Configuring the EC2 instances with dedicated tenancy ensures that each instance runs on isolated, single-tenant
hardware, which meets the requirement that the nodes do not share underlying hardware with other workloads. A spread
placement group only distributes instances across distinct racks within an Availability Zone; it does not provide single-
tenant hardware.
Answer: D
Explanation:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/reserved-instances-scope.html
A. Have the R&D AWS account be part of both organizations during the transition.
B. Invite the R&D AWS account to be part of the new organization after the R&D AWS account has left the prior
organization.
C. Create a new R&D AWS account in the new organization. Migrate resources from the prior R&D AWS account
to the new R&D AWS account.
D. Have the R&D AWS account join the new organization. Make the new management account a member of the
prior organization.
Answer: B
Explanation:
https://aws.amazon.com/blogs/mt/migrating-accounts-between-aws-organizations-with-consolidated-billing-to-all-features/
A. Configure a Gateway Load Balancer (GWLB) in front of an Amazon Elastic Container Service (Amazon ECS)
container instance that stores the information that the company receives in an Amazon Elastic File System (Amazon EFS)
file system. Authorization is resolved at the GWLB.
B. Configure an Amazon API Gateway endpoint in front of an Amazon Kinesis data stream that stores the
information that the company receives in an Amazon S3 bucket. Use an AWS Lambda function to resolve authorization.
C. Configure an Amazon API Gateway endpoint in front of an Amazon Kinesis Data Firehose that stores the
information that the company receives in an Amazon S3 bucket. Use an API Gateway Lambda authorizer to resolve
authorization.
D. Configure a Gateway Load Balancer (GWLB) in front of an Amazon Elastic Container Service (Amazon ECS)
container instance that stores the information that the company receives on an Amazon Elastic File System (Amazon
EFS) file system. Use an AWS Lambda function to resolve authorization.
Answer: C
Explanation:
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-use-lambda-authorizer.html
A. Create a cross-Region read replica and promote the read replica to the primary instance.
B. Use AWS Database Migration Service (AWS DMS) to create RDS cross-Region replication.
C. Use cross-Region replication every 24 hours to copy native backups to an Amazon S3 bucket.
D. Copy automatic snapshots to another Region every 24 hours.
Answer: D
Explanation:
Amazon RDS creates and saves automated backups of your DB instance or Multi-AZ DB cluster during the backup
window of your DB instance. RDS creates a storage volume snapshot of your DB instance, backing up the entire DB
instance and not just individual databases. RDS saves the automated backups of your DB instance according to the
backup retention period that you specify. If necessary, you can recover your DB instance to any point in time during the
backup retention period.
A. Use an Amazon ElastiCache for Memcached instance to store the session data. Update the application to use
ElastiCache for Memcached to store the session state.
B. Use Amazon ElastiCache for Redis to store the session state. Update the application to use ElastiCache for
Redis to store the session state.
C. Use an AWS Storage Gateway cached volume to store session data. Update the application to use AWS Storage
Gateway cached volume to store the session state.
D. Use Amazon RDS to store the session state. Update the application to use Amazon RDS to store the session
state.
Answer: B
Explanation:
ElastiCache Redis provides in-memory caching that can deliver microsecond latency for session data. Redis supports
replication and multi-AZ which can provide high availability for the cache. The application can be updated to store session
data in ElastiCache Redis rather than locally on the web servers. If a web server fails, the user can be routed via the load
balancer to another web server which can retrieve their session data from the highly available ElastiCache Redis cluster.
A. Create a read replica of the database. Direct the queries to the read replica.
B. Create a backup of the database. Restore the backup to another DB instance. Direct the queries to the new
database.
C. Export the data to Amazon S3. Use Amazon Athena to query the S3 bucket.
D. Resize the DB instance to accommodate the additional workload.
Answer: A
appropriate microservices. Which solution will meet this requirement MOST cost-effectively?
A. Use the AWS Load Balancer Controller to provision a Network Load Balancer.
B. Use the AWS Load Balancer Controller to provision an Application Load Balancer.
C. Use an AWS Lambda function to connect the requests to Amazon EKS.
D. Use Amazon API Gateway to connect the requests to Amazon EKS.
Answer: D
Explanation:
API Gateway provides an entry point to your microservices.
https://aws.amazon.com/blogs/containers/integrate-amazon-api-gateway-with-amazon-eks/
A. Use Amazon S3 to store the images. Turn on multi-factor authentication (MFA) and public bucket access. Provide
customers with a link to the S3 bucket.
B. Use Amazon S3 to store the images. Create an IAM user for each customer. Add the users to a group that has
permission to access the S3 bucket.
C. Use Amazon EC2 instances that are behind Application Load Balancers (ALBs) to store the images. Deploy the
instances only in the countries the company services. Provide customers with links to the ALBs for their specific country's
instances.
D. Use Amazon S3 to store the images. Use Amazon CloudFront to distribute the images with geographic
restrictions. Provide a signed URL for each customer to access the data in CloudFront.
Answer: D
Explanation:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/georestrictions.html
A. Use Multi-AZ Redis replication groups with shards that contain multiple nodes.
B. Use Redis shards that contain multiple nodes with Redis append only files (AOF) turned on.
C. Use a Multi-AZ Redis cluster with more than one read replica in the replication group.
D. Use Redis shards that contain multiple nodes with Auto Scaling turned on.
Answer: A
Explanation:
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Replication.html
A. Launch two or more EC2 On-Demand Instances. Turn on auto scaling features and make the EC2 On-Demand
Instances available during the next testing phase.
B. Launch EC2 Spot Instances to support the application and to scale the application so it is available during the
next testing phase.
C. Launch the EC2 On-Demand Instances with hibernation turned on. Configure EC2 Auto Scaling warm pools
during the next testing phase.
D. Launch EC2 On-Demand Instances with Capacity Reservations. Start additional EC2 instances during the next
testing phase.
Answer: C
Explanation:
With Amazon EC2 hibernation enabled, you can keep EC2 instances in a "pre-warmed" state so that they reach a
productive state faster. Pairing hibernated instances with EC2 Auto Scaling warm pools lets the group bring pre-initialized
capacity into service quickly for the next testing phase.
NEW QUESTION 749
A company's applications run on Amazon EC2 instances in Auto Scaling groups. The company notices that its
applications experience sudden traffic increases on random days of the week. The company wants to maintain application
performance during sudden traffic increases. Which solution will meet these requirements MOST cost-effectively?
A. Use manual scaling to change the size of the Auto Scaling group.
B. Use predictive scaling to change the size of the Auto Scaling group.
C. Use dynamic scaling to change the size of the Auto Scaling group.
D. Use scheduled scaling to change the size of the Auto Scaling group.
Answer: C
Explanation:
Dynamic scaling automatically changes the number of EC2 instances based on real-time signals such as CPU utilization
or request count. It is the right choice when there is a high volume of unpredictable traffic, as with sudden increases on
random days of the week.
A. Deploy a Network Load Balancer (NLB). Configure the NLB to be publicly accessible over the TCP port that the
application requires.
B. Deploy an Application Load Balancer (ALB). Configure the ALB to be publicly accessible over the TCP port that
the application requires.
C. Deploy an Amazon CloudFront distribution that listens on the TCP port that the application requires. Use an
Application Load Balancer as the origin.
D. Deploy an Amazon API Gateway API that is configured with the TCP port that the application requires. Configure
AWS Lambda functions with provisioned concurrency to process the requests.
Answer: A
Explanation:
Since the company requires the same level of performance for the new public endpoint in AWS, a Network Load Balancer
is the best fit. An NLB functions at the fourth layer of the Open Systems Interconnection (OSI) model and can handle
millions of requests per second. After the load balancer receives a connection request, it selects a target from the target
group for the default rule and attempts to open a TCP connection to the selected target on the port specified in the listener
configuration.
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html