ExamTopics Professional Data Engineer - Questions with Answers
Question #1 Topic 1
Your company built a TensorFlow neural-network model with a large number of neurons and layers. The model fits the training data well.
However, when tested against new data, it performs poorly. What method can you employ to address this?
A. Threading
B. Serialization
D. Dimensionality Reduction
Correct Answer: C
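The text of the marked answer is not reproduced in this dump. For a network that fits the training data but generalizes poorly, a standard remedy is dropout regularization; the sketch below shows how it is wired into a Keras model (the layer sizes and the 0.5 rate are illustrative assumptions).

```python
# Minimal sketch: dropout regularization in a Keras model to curb overfitting.
# Layer sizes, input shape, and the 0.5 dropout rate are illustrative assumptions.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(512, activation="relu", input_shape=(100,)),
    tf.keras.layers.Dropout(0.5),   # randomly zero 50% of activations during training
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```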
Question #2 Topic 1
You are building a model to make clothing recommendations. You know a user's fashion preference is likely to change over time, so you build a
data pipeline to stream new data back to the model as it becomes available. How should you use this data to train the model?
B. Continuously retrain the model on a combination of existing data and the new data. Most Voted
C. Train on the existing data while using the new data as your test set.
D. Train on the new data while using the existing data as your test set.
Correct Answer: B
Question #3 Topic 1
You designed a database for patient records as a pilot project to cover a few hundred patients in three clinics. Your design used a single database
table to represent all patients and their visits, and you used self-joins to generate reports. The server resource utilization was at 50%. Since then,
the scope of the project has expanded. The database must now store 100 times more patient records. You can no longer run the reports, because
they either take too long or they encounter errors with insufficient compute resources. How should you adjust the database design?
A. Add capacity (memory and disk space) to the database server by the order of 200.
B. Shard the tables into smaller ones based on date ranges, and only generate reports with prespecified date ranges.
C. Normalize the master patient-record table into the patient table and the visits table, and create other necessary tables to avoid self-join.
Most Voted
D. Partition the table into smaller tables, with one for each clinic. Run queries against the smaller table pairs, and use unions for consolidated
reports.
Correct Answer: C
Question #4 Topic 1
You create an important report for your large team in Google Data Studio 360. The report uses Google BigQuery as its data source. You notice that
visualizations are not showing data that is less than 1 hour old. What should you do?
D. Clear your browser history for the past hour then reload the tab showing the visualizations.
Correct Answer: A
Question #5 Topic 1
An external customer provides you with a daily dump of data from their database. The data flows into Google Cloud Storage (GCS) as comma-
separated values
(CSV) files. You want to analyze this data in Google BigQuery, but the data could have rows that are formatted incorrectly or corrupted. How should you build this pipeline?
A. Use federated data sources, and check data in the SQL query.
C. Import the data into BigQuery using the gcloud CLI and set max_bad_records to 0.
D. Run a Google Cloud Dataflow batch pipeline to import the data into BigQuery, and push errors to another dead-letter table for analysis.
Most Voted
Correct Answer: D
Question #6 Topic 1
Your weather app queries a database every 15 minutes to get the current temperature. The frontend is powered by Google App Engine and serves
millions of users. How should you design the frontend to respond to a database failure?
B. Retry the query with exponential backoff, up to a cap of 15 minutes. Most Voted
C. Retry the query every second until it comes back online to minimize staleness of data.
D. Reduce the query frequency to once every hour until the database comes back online.
Correct Answer: B
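A minimal sketch of the voted answer, retrying with exponential backoff capped at 15 minutes. fetch_temperature() is a hypothetical query function, and the jitter is an implementation choice.

```python
# Sketch: retry a temperature lookup with exponential backoff, capped at 15 minutes.
# fetch_temperature() is a hypothetical callable that queries the database.
import random
import time


def query_with_backoff(fetch_temperature, max_delay=15 * 60):
    delay = 1  # seconds
    while True:
        try:
            return fetch_temperature()
        except Exception:  # e.g. a database connection error
            sleep_for = min(delay, max_delay) + random.uniform(0, 1)  # add jitter
            time.sleep(sleep_for)
            delay = min(delay * 2, max_delay)  # double the wait, up to the cap
```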
Question #7 Topic 1
You are creating a model to predict housing prices. Due to budget constraints, you must run it on a single resource-constrained virtual machine.
B. Logistic classification
Correct Answer: A
Question #8 Topic 1
You are building a new real-time data warehouse for your company and will use Google BigQuery streaming inserts. There is no guarantee that data
will be sent only once, but you do have a unique ID for each row of data and an event timestamp. You want to ensure that duplicates are not
included while interactively querying data. Which query type should you use?
B. Use GROUP BY on the unique ID column and timestamp column and SUM on the values.
C. Use the LAG window function with PARTITION by unique ID along with WHERE LAG IS NOT NULL.
D. Use the ROW_NUMBER window function with PARTITION by unique ID along with WHERE row equals 1. Most Voted
Correct Answer: D
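A sketch of the ROW_NUMBER() approach from answer D; the table and column names (mydataset.events, unique_id, event_ts) are assumptions.

```python
# Sketch: deduplicating streamed rows at query time with ROW_NUMBER().
# Table and column names (mydataset.events, unique_id, event_ts) are hypothetical.
from google.cloud import bigquery

DEDUP_SQL = """
SELECT * EXCEPT(row_num)
FROM (
  SELECT
    *,
    ROW_NUMBER() OVER (PARTITION BY unique_id ORDER BY event_ts DESC) AS row_num
  FROM `mydataset.events`
)
WHERE row_num = 1
"""

client = bigquery.Client()
rows = client.query(DEDUP_SQL).result()  # keeps exactly one row per unique_id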
Your company is using WILDCARD tables to query data across multiple tables with similar names. The SQL statement is currently failing with the
following error:
Which table name will make the SQL statement work correctly?
A. 'bigquery-public-data.noaa_gsod.gsod'
B. bigquery-public-data.noaa_gsod.gsod*
C. 'bigquery-public-data.noaa_gsod.gsod'*
Correct Answer: D
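The text of option D is not shown above. In standard SQL, a wildcard table name must be wrapped in backticks and end with `*`; a sketch against the public NOAA GSOD dataset:

```python
# Sketch: querying the public GSOD tables with a standard SQL wildcard.
# The wildcard table name must be enclosed in backticks.
from google.cloud import bigquery

WILDCARD_SQL = """
SELECT
  stn,
  _TABLE_SUFFIX AS table_year
FROM `bigquery-public-data.noaa_gsod.gsod*`
WHERE _TABLE_SUFFIX BETWEEN '1940' AND '1944'
LIMIT 10
"""

client = bigquery.Client()
for row in client.query(WILDCARD_SQL).result():
    print(row.stn, row.table_year)
```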
Your company is in a highly regulated industry. One of your requirements is to ensure individual users have access only to the minimum amount of
information required to do their jobs. You want to enforce this requirement with Google BigQuery. Which three approaches can you take? (Choose
three.)
You are designing a basket abandonment system for an ecommerce company. The system will send a message to a user based on these rules:
C. Use a session window with a gap time duration of 60 minutes. Most Voted
D. Use a global window with a time based trigger with a delay of 60 minutes.
Correct Answer: C
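A sketch of answer C in the Apache Beam Python SDK, assuming cart events arrive on a hypothetical Pub/Sub subscription as JSON with a user_id field.

```python
# Sketch: 60-minute session windows in Apache Beam (Python SDK).
# The subscription path and the JSON event schema are assumptions.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions, StandardOptions
from apache_beam.transforms import window

options = PipelineOptions()
options.view_as(StandardOptions).streaming = True

with beam.Pipeline(options=options) as p:
    _ = (
        p
        | "ReadCartEvents" >> beam.io.ReadFromPubSub(
            subscription="projects/my-project/subscriptions/cart-events")
        | "SessionWindow" >> beam.WindowInto(window.Sessions(60 * 60))  # 60-minute gap
        | "KeyByUser" >> beam.Map(lambda msg: (json.loads(msg)["user_id"], 1))
        | "EventsPerSession" >> beam.CombinePerKey(sum)
    )
```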
Your company handles data processing for a number of different clients. Each client prefers to use their own suite of analytics tools, with some
allowing direct query access via Google BigQuery. You need to secure the data so that clients cannot see each other's data. You want to ensure
B. Load data into a different dataset for each client. Most Voted
F. Use the appropriate identity and access management (IAM) roles for each client's users. Most Voted
You want to process payment transactions in a point-of-sale application that will run on Google Cloud Platform. Your user base could grow
A. Cloud SQL
B. BigQuery
C. Cloud Bigtable
Correct Answer: D
You want to use a database of information about tissue samples to classify future tissue samples as either normal or mutated. You are evaluating
an unsupervised anomaly detection method for classifying the tissue samples. Which two characteristics support this method? (Choose two.)
A. There are very few occurrences of mutations relative to normal samples. Most Voted
B. There are roughly equal occurrences of both normal and mutated samples in the database.
C. You expect future mutations to have different features from the mutated samples in the database. Most Voted
D. You expect future mutations to have similar features to the mutated samples in the database.
E. You already have labels for which samples are mutated and which are normal in the database.
Correct Answer: AC
You need to store and analyze social media postings in Google BigQuery at a rate of 10,000 messages per minute in near real-time. Initially, you
design the application to use streaming inserts for individual postings. Your application also performs data aggregations right after the streaming
inserts. You discover that the queries after streaming inserts do not exhibit strong consistency, and reports from the queries might miss in-flight data.
B. Convert the streaming insert code to batch load for individual messages.
C. Load the original message to Google Cloud SQL, and export the table every hour to BigQuery via streaming inserts.
D. Estimate the average latency for data availability after streaming inserts, and always run queries after waiting twice as long. Most Voted
Correct Answer: D
Your startup has never implemented a formal security policy. Currently, everyone in the company has access to the datasets stored in Google
BigQuery. Teams have freedom to use the service as they see fit, and they have not documented their use cases. You have been asked to secure
the data warehouse. You need to discover what everyone is doing. What should you do first?
A. Use Google Stackdriver Audit Logs to review data access. Most Voted
B. Get the identity and access management (IAM) policy of each table
D. Use the Google Cloud Billing API to see what account the warehouse is being billed to.
Correct Answer: A
Your company is migrating their 30-node Apache Hadoop cluster to the cloud. They want to re-use Hadoop jobs they have already created and
minimize the management of the cluster as much as possible. They also want to be able to persist data beyond the life of the cluster. What should
you do?
B. Create a Google Cloud Dataproc cluster that uses persistent disks for HDFS.
C. Create a Hadoop cluster on Google Compute Engine that uses persistent disks.
D. Create a Cloud Dataproc cluster that uses the Google Cloud Storage connector. Most Voted
E. Create a Hadoop cluster on Google Compute Engine that uses Local SSD disks.
Correct Answer: D
Business owners at your company have given you a database of bank transactions. Each row contains the user ID, transaction type, transaction
location, and transaction amount. They ask you to investigate what type of machine learning can be applied to the data. Which three machine learning applications could be applied to this data? (Choose three.)
B. Unsupervised learning to determine which transactions are most likely to be fraudulent. Most Voted
C. Clustering to divide the transactions into N categories based on feature similarity. Most Voted
Your company's on-premises Apache Hadoop servers are approaching end-of-life, and IT has decided to migrate the cluster to Google Cloud
Dataproc. A like-for-like migration of the cluster would require 50 TB of Google Persistent Disk per node. The CIO is concerned about the cost of
using that much block storage. You want to minimize the storage cost of the migration. What should you do?
B. Use preemptible virtual machines (VMs) for the Cloud Dataproc cluster.
C. Tune the Cloud Dataproc cluster so that there is just enough disk for all data.
D. Migrate some of the cold data into Google Cloud Storage, and keep only the hot data in Persistent Disk.
Correct Answer: A
You work for a car manufacturer and have set up a data pipeline using Google Cloud Pub/Sub to capture anomalous sensor events. You are using
a push subscription in Cloud Pub/Sub that calls a custom HTTPS endpoint that you have created to take action on these anomalous events as they
occur. Your custom HTTPS endpoint keeps getting an inordinate amount of duplicate messages. What is the most likely cause of these duplicate messages?
C. The Cloud Pub/Sub topic has too many messages published to it.
D. Your custom endpoint is not acknowledging messages within the acknowledgement deadline. Most Voted
Correct Answer: D
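For a push subscription, acknowledging means returning an HTTP success code before the acknowledgement deadline. Below is a sketch of an endpoint that acknowledges immediately and defers the heavy work; Flask and the in-process work queue are illustrative assumptions, not part of the question.

```python
# Sketch: a push-subscription endpoint that returns a success code quickly so
# Pub/Sub does not redeliver, and handles the anomaly off the request path.
import base64
import json
import queue
import threading

from flask import Flask, request

app = Flask(__name__)
work_queue = queue.Queue()


def worker():
    while True:
        event = work_queue.get()
        # ... long-running anomaly handling happens here, off the request path ...
        work_queue.task_done()


threading.Thread(target=worker, daemon=True).start()


@app.route("/pubsub/push", methods=["POST"])
def pubsub_push():
    envelope = request.get_json()
    payload = base64.b64decode(envelope["message"]["data"])
    work_queue.put(json.loads(payload))
    return ("", 204)  # a 2xx within the ack deadline acknowledges the message
```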
Your company uses a proprietary system to send inventory data every 6 hours to a data ingestion service in the cloud. Transmitted data includes a
payload of several fields and the timestamp of the transmission. If there are any concerns about a transmission, the system re-transmits the data.
A. Assign global unique identifiers (GUID) to each data entry. Most Voted
B. Compute the hash value of each data entry, and compare it with all historical data.
C. Store each data entry as the primary key in a separate database and apply an index.
D. Maintain a database table to store the hash value and other metadata for each data entry.
Correct Answer: A
Your company has hired a new data scientist who wants to perform complicated analyses across very large datasets stored in Google Cloud
Storage and in a
Cassandra cluster on Google Compute Engine. The scientist primarily wants to create labelled data sets for machine learning projects, along with
some visualization tasks. She reports that her laptop is not powerful enough to perform her tasks and it is slowing her down. You want to help her
D. Deploy Google Cloud Datalab to a virtual machine (VM) on Google Compute Engine. Most Voted
Correct Answer: D
You are deploying 10,000 new Internet of Things devices to collect temperature data in your warehouses globally. You need to process, store and
analyze these very large datasets in real time. What should you do?
A. Send the data to Google Cloud Datastore and then export to BigQuery.
B. Send the data to Google Cloud Pub/Sub, stream Cloud Pub/Sub to Google Cloud Dataflow, and store the data in Google BigQuery.
Most Voted
C. Send the data to Cloud Storage and then spin up an Apache Hadoop cluster as needed in Google Cloud Dataproc whenever analysis is
required.
D. Export logs in batch to Google Cloud Storage and then spin up a Google Cloud SQL instance, import the data from Cloud Storage, and run
an analysis as needed.
Correct Answer: B
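A sketch of answer B in the Apache Beam Python SDK; the topic, table, and schema names are assumptions.

```python
# Sketch: Pub/Sub -> Dataflow -> BigQuery for the IoT temperature telemetry.
# Topic path, table spec, and schema are hypothetical.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions, StandardOptions

options = PipelineOptions()
options.view_as(StandardOptions).streaming = True

with beam.Pipeline(options=options) as p:
    _ = (
        p
        | "ReadTelemetry" >> beam.io.ReadFromPubSub(
            topic="projects/my-project/topics/device-temperature")
        | "Parse" >> beam.Map(json.loads)
        | "WriteToBQ" >> beam.io.WriteToBigQuery(
            "my-project:warehouse.device_temperature",
            schema="device_id:STRING,temperature:FLOAT,event_time:TIMESTAMP",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND)
    )
```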
You have spent a few days loading data from comma-separated values (CSV) files into the Google BigQuery table CLICK_STREAM. The column DT
stores the epoch time of click events. For convenience, you chose a simple schema where every field is treated as the STRING type. Now, you want
to compute web session durations of users who visit your site, and you want to change its data type to TIMESTAMP. You want to minimize the
migration effort without making future queries computationally expensive. What should you do?
A. Delete the table CLICK_STREAM, and then re-create it such that the column DT is of the TIMESTAMP type. Reload the data.
B. Add a column TS of the TIMESTAMP type to the table CLICK_STREAM, and populate the numeric values from the column TS for each row.
C. Create a view CLICK_STREAM_V, where strings from the column DT are cast into TIMESTAMP values. Reference the view CLICK_STREAM_V instead of the table CLICK_STREAM from now on.
D. Add two columns to the table CLICK_STREAM: TS of the TIMESTAMP type and IS_NEW of the BOOLEAN type. Reload all data in append
mode. For each appended row, set the value of IS_NEW to true. For future queries, reference the column TS instead of the column DT, with the
E. Construct a query to return every row of the table CLICK_STREAM, while using the built-in function to cast strings from the column DT into
TIMESTAMP values. Run the query into a destination table NEW_CLICK_STREAM, in which the column TS is the TIMESTAMP type. Reference
the table NEW_CLICK_STREAM instead of the table CLICK_STREAM from now on. In the future, new data is loaded into the table NEW_CLICK_STREAM.
Correct Answer: E
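A sketch of answer E, assuming DT stores epoch seconds (use TIMESTAMP_MILLIS instead if the clicks were recorded in milliseconds) and hypothetical project and dataset names.

```python
# Sketch: materialize NEW_CLICK_STREAM with TS parsed from the STRING column DT.
# Assumes DT holds epoch seconds; project/dataset names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.QueryJobConfig(
    destination="myproject.mydataset.NEW_CLICK_STREAM")

CAST_SQL = """
SELECT
  * EXCEPT(DT),
  TIMESTAMP_SECONDS(CAST(DT AS INT64)) AS TS
FROM `myproject.mydataset.CLICK_STREAM`
"""

client.query(CAST_SQL, job_config=job_config).result()
```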
You want to use Google Stackdriver Logging to monitor Google BigQuery usage. You need an instant notification to be sent to your monitoring tool
when new data is appended to a certain table using an insert job, but you do not want to receive notifications for other tables. What should you
do?
A. Make a call to the Stackdriver API to list all logs, and apply an advanced filter.
B. In the Stackdriver logging admin interface, enable a log sink export to BigQuery.
C. In the Stackdriver logging admin interface, enable a log sink export to Google Cloud Pub/Sub, and subscribe to the topic from your
monitoring tool.
D. Using the Stackdriver API, create a project sink with an advanced log filter to export to Pub/Sub, and subscribe to the topic from your monitoring tool.
Correct Answer: D
You are working on a sensitive project involving private user data. You have set up a project on Google Cloud Platform to house your work
internally. An external consultant is going to assist with coding a complex transformation in a Google Cloud Dataflow pipeline for your project.
B. Grant the consultant the Cloud Dataflow Developer role on the project.
C. Create a service account and allow the consultant to log on with it.
D. Create an anonymized sample of the data for the consultant to work with in a different project. Most Voted
Correct Answer: D
You are building a model to predict whether or not it will rain on a given day. You have thousands of input features and want to see if you can
improve training speed by removing some features while having a minimum effect on model accuracy. What can you do?
B. Combine highly co-dependent features into one representative feature. Most Voted
D. Remove the features that have null values for more than 50% of the training records.
Correct Answer: B
Your company is performing data preprocessing for a learning algorithm in Google Cloud Dataflow. Numerous data logs are being generated
during this step, and the team wants to analyze them. Due to the dynamic nature of the campaign, the data is growing exponentially
every hour.
The data scientists have written the following code to read the data for the new key features in the logs.
You want to improve the performance of this data read. What should you do?
B. Use .fromQuery operation to read specific fields from the table. Most Voted
D. Call a transform that returns TableRow objects, where each element in the PCollection represents a single row in the table.
Correct Answer: B
Your company is streaming real-time sensor data from their factory floor into Bigtable and they have noticed extremely poor performance. How
should the row key be redesigned to improve Bigtable performance on queries that populate real-time dashboards?
Correct Answer: D
Your company's customer and order databases are often under heavy load. This makes performing analytics against them difficult without
harming operations.
The databases are in a MySQL cluster, with nightly backups taken using mysqldump. You want to perform analytics with minimal impact on operations. What should you do?
A. Add a node to the MySQL cluster and build an OLAP cube there.
B. Use an ETL tool to load the data from MySQL into Google BigQuery. Most Voted
D. Mount the backups to Google Cloud SQL, and then process the data using Google Cloud Dataproc.
Correct Answer: B
You have a Google Cloud Dataflow streaming pipeline running with a Google Cloud Pub/Sub subscription as the source. You need to make an
update to the code that will make the new Cloud Dataflow pipeline incompatible with the current version. You do not want to lose any data when you make this update. What should you do?
A. Update the current pipeline and use the drain flag. Most Voted
B. Update the current pipeline and provide the transform mapping JSON object.
C. Create a new pipeline that has the same Cloud Pub/Sub subscription and cancel the old pipeline.
D. Create a new pipeline that has a new Cloud Pub/Sub subscription and cancel the old pipeline.
Correct Answer: A
Your company is running their first dynamic campaign, serving different offers by analyzing real-time data during the holiday season. The data
scientists are collecting terabytes of data that rapidly grows every hour during their 30-day campaign. They are using Google Cloud Dataflow to
preprocess the data and collect the feature (signals) data that is needed for the machine learning model in Google Cloud Bigtable. The team is
observing suboptimal performance with reads and writes of their initial load of 10 TB of data. They want to improve this performance while
A. Redefine the schema by evenly distributing reads and writes across the row space of the table. Most Voted
B. The performance issue should be resolved over time as the size of the Bigtable cluster is increased.
C. Redesign the schema to use a single row key to identify values that need to be updated frequently in the cluster.
D. Redesign the schema to use row keys based on numeric IDs that increase sequentially per user viewing the offers.
Correct Answer: A
Your software uses a simple JSON format for all messages. These messages are published to Google Cloud Pub/Sub, then processed with Google
Cloud
Dataflow to create a real-time dashboard for the CFO. During testing, you notice that some messages are missing in the dashboard. You check the
logs, and all messages are being published to Cloud Pub/Sub successfully. What should you do next?
B. Run a fixed dataset through the Cloud Dataflow pipeline and analyze the output. Most Voted
C. Use Google Stackdriver Monitoring on Cloud Pub/Sub to find the missing messages.
D. Switch Cloud Dataflow to pull messages from Cloud Pub/Sub instead of Cloud Pub/Sub pushing messages to Cloud Dataflow.
Correct Answer: B
Company Overview -
Flowlogistic is a leading logistics and supply chain provider. They help businesses throughout the world manage their resources and transport
them to their final destination. The company has grown rapidly, expanding their offerings to include rail, truck, aircraft, and oceanic shipping.
Company Background -
The company started as a regional trucking company, and then expanded into other logistics markets. Because they have not updated their
infrastructure, managing and tracking orders and shipments has become a bottleneck. To improve operations, Flowlogistic developed proprietary
technology for tracking shipments in real time at the parcel level. However, they are unable to deploy it because their technology stack, based on
Apache Kafka, cannot support the processing volume. In addition, Flowlogistic wants to further analyze their orders and shipments to determine
Solution Concept -
✑ Use their proprietary technology in a real-time inventory-tracking system that indicates the location of their loads
✑ Perform analytics on all their orders and shipment logs, which contain both structured and unstructured data, to determine how best to deploy
resources, which markets to expand into. They also want to use predictive analytics to learn earlier when a shipment will be delayed.
✑ Databases
8 physical servers in 2 clusters
3 physical servers
- Batch servers
✑ Storage appliances
- iSCSI for virtual machine (VM) hosts
- Fibre Channel storage area network (FC SAN) - SQL server storage
✑ 20 miscellaneous servers
- Jenkins, monitoring, bastion hosts,
Business Requirements -
Technical Requirements -
CEO Statement -
We have grown so quickly that our inability to upgrade our infrastructure is really hampering further growth and efficiency. We are efficient at
moving shipments around the world, but we are inefficient at moving data around.
We need to organize our information so we can more easily understand where our customers are and what they are shipping.
CTO Statement -
IT has never been a priority for us, so as our data has grown, we have not invested enough in our technology. I have a good staff to manage IT, but
they are so busy managing our infrastructure that I cannot get them to do the things that really matter, such as organizing our data, building the
analytics, and figuring out how to implement the CFO's tracking technology.
CFO Statement -
Part of our competitive advantage is that we penalize ourselves for late shipments and deliveries. Knowing where our shipments are at all times
has a direct correlation to our bottom line and profitability. Additionally, I don't want to commit capital to building out a server environment.
Flowlogistic wants to use Google BigQuery as their primary analysis system, but they still have Apache Hadoop and Spark workloads that they
cannot move to
BigQuery. Flowlogistic does not know how to store the data that is common to both workloads. What should they do?
C. Store the common data encoded as Avro in Google Cloud Storage. Most Voted
D. Store the common data in the HDFS storage for a Google Cloud Dataproc cluster.
Correct Answer: C
Flowlogistic's management has determined that the current Apache Kafka servers cannot handle the data volume for their real-time inventory
tracking system.
You need to build a new system on Google Cloud Platform (GCP) that will feed the proprietary tracking software. The system must be able to
ingest data from a variety of global sources, process and query in real-time, and store the data reliably. Which combination of GCP products should you choose?
Correct Answer: A
Flowlogistic's CEO wants to gain rapid insight into their customer base so his sales team can be better informed in the field. This team is not very
technical, so they've purchased a visualization tool to simplify the creation of BigQuery reports. However, they've been overwhelmed by all the
data in the table, and are spending a lot of money on queries trying to find the data they need. You want to solve their problem in the most cost-effective way. What should you do?
C. Create a view on the table to present to the visualization tool. Most Voted
D. Create identity and access management (IAM) roles on the appropriate columns, so only they appear in a query.
Correct Answer: C
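A sketch of answer C; the project, dataset, and column names are assumptions.

```python
# Sketch: expose a narrow view so the visualization tool scans (and bills for)
# only the columns the sales team needs. All names are hypothetical.
from google.cloud import bigquery

CREATE_VIEW_SQL = """
CREATE OR REPLACE VIEW `flowlogistic.analytics.customer_summary` AS
SELECT
  customer_id,
  customer_name,
  region,
  total_shipments
FROM `flowlogistic.analytics.shipments`
"""

bigquery.Client().query(CREATE_VIEW_SQL).result()
```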
Flowlogistic is rolling out their real-time inventory tracking system. The tracking devices will all send package-tracking messages, which will now
go to a single
Google Cloud Pub/Sub topic instead of the Apache Kafka cluster. A subscriber application will then process the messages for real-time reporting
and store them in Google BigQuery for historical analysis. You want to ensure the package data can be analyzed over time. What should you do?
A. Attach the timestamp on each message in the Cloud Pub/Sub subscriber application as they are received.
B. Attach the timestamp and Package ID on the outbound message from each publisher device as they are sent to Cloud Pub/Sub. Most Voted
D. Use the automatically generated timestamp from Cloud Pub/Sub to order the data.
Correct Answer: B
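A sketch of answer B using the Pub/Sub publisher client; the project, topic, and attribute names are assumptions.

```python
# Sketch: each tracking device stamps the message with its own timestamp and
# package ID as Pub/Sub attributes at publish time. Names are hypothetical.
import datetime
import json

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("flowlogistic-prod", "package-tracking")

event = {"lat": 40.71, "lon": -74.00, "status": "IN_TRANSIT"}
future = publisher.publish(
    topic_path,
    json.dumps(event).encode("utf-8"),
    package_id="PKG-12345",                                        # message attribute
    event_timestamp=datetime.datetime.utcnow().isoformat() + "Z",  # device-side timestamp
)
future.result()  # block until the publish succeeds
```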
Company Overview -
MJTelco is a startup that plans to build networks in rapidly growing, underserved markets around the world. The company has patents for
innovative optical communications hardware. Based on these patents, they can create many reliable, high-speed backbone links with inexpensive
hardware.
Company Background -
Founded by experienced telecom executives, MJTelco uses technologies originally developed to overcome communications challenges in space.
Fundamental to their operation, they need to create a distributed data infrastructure that drives real-time analysis and incorporates machine
learning to continuously optimize their topologies. Because their hardware is inexpensive, they plan to overdeploy the network allowing them to
account for the impact of dynamic regional politics on location availability and cost.
Their management and operations teams are situated all around the globe, creating a many-to-many relationship between data consumers and
providers in their system. After careful consideration, they decided public cloud is the perfect environment to support their needs.
Solution Concept -
MJTelco is running a successful proof-of-concept (PoC) project in its labs. They have two primary needs:
✑ Scale and harden their PoC to support significantly more data flows generated when they ramp to more than 50,000 installations.
✑ Refine their machine-learning cycles to verify and improve the dynamic models they use to control topology definition.
MJTelco will also use three separate operating environments - development/test, staging, and production - to meet the needs of running
Business Requirements -
✑ Scale up their production environment with minimal cost, instantiating resources when and where needed in an unpredictable, distributed
telecom user community.
✑ Ensure security of their proprietary data to protect their leading-edge machine learning and analysis.
✑ Provide reliable and timely access to data for analysis from distributed research workers
✑ Maintain isolated environments that support rapid iteration of their machine-learning models without affecting their customers.
Technical Requirements -
✑ Rapidly scale instances to support between 10,000 and 100,000 data providers with multiple flows each.
✑ Allow analysis and presentation against data tables tracking up to 2 years of data storing approximately 100m records/day
✑ Support rapid iteration of monitoring infrastructure focused on awareness of data pipeline problems both in telemetry flows and in production learning cycles.
CEO Statement -
Our business model relies on our patents, analytics and dynamic machine learning. Our inexpensive hardware is organized to be highly reliable,
which gives us cost advantages. We need to quickly stabilize our large distributed data pipelines to meet our reliability and capacity
commitments.
CTO Statement -
Our public cloud services must operate as advertised. We need resources that scale and keep our data secure. We also need environments in
which our data scientists can carefully study and quickly adapt our models. Because we rely on automation to process our data, we also need our
CFO Statement -
The project is too large for us to maintain the hardware and software required for the data and analysis. Also, we cannot afford to staff an
operations team to monitor so many data feeds, so we will rely on automation and infrastructure. Google Cloud's machine learning will allow our
quantitative researchers to work on our high-value problems instead of problems with our data pipelines.
MJTelco's Google Cloud Dataflow pipeline is now ready to start receiving data from the 50,000 installations. You want to allow Cloud Dataflow to
scale its compute power up as required. Which Cloud Dataflow pipeline configuration setting should you update?
A. The zone
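The remaining options and the marked answer are not reproduced in this dump. The settings that govern how far Dataflow can scale out are the autoscaling algorithm and the maximum worker count; a sketch of those pipeline options (the project, region, bucket, and worker cap of 100 are assumptions):

```python
# Sketch: Dataflow-runner options that allow the service to scale workers up
# on its own. Project, region, temp bucket, and the cap of 100 are hypothetical.
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DataflowRunner",
    project="mjtelco-prod",
    region="us-central1",
    temp_location="gs://mjtelco-dataflow/tmp",
    streaming=True,
    autoscaling_algorithm="THROUGHPUT_BASED",  # let the service choose worker count
    max_num_workers=100,                       # raise this ceiling to allow more scale-out
)
```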
You need to compose visualizations for operations teams with the following requirements:
✑ The report must include telemetry data from all 50,000 installations for the most recent 6 weeks (sampling once every minute).
✑ The report must not be more than 3 hours delayed from live data.
✑ The actionable report should only show suboptimal links.
✑ Most suboptimal links should be sorted to the top.
✑ Suboptimal links can be grouped and filtered by regional geography.
✑ User response time to load the report must be <5 seconds.
Which approach meets the requirements?
A. Load the data into Google Sheets, use formulas to calculate a metric, and use filters/sorting to show only suboptimal links in a table.
B. Load the data into Google BigQuery tables, write a Google Apps Script that queries the data, calculates the metric, and shows only suboptimal rows in a table.
C. Load the data into Google Cloud Datastore tables, write a Google App Engine Application that queries all rows, applies a function to derive
the metric, and then renders results in a table using the Google charts and visualization API.
D. Load the data into Google BigQuery tables, write a Google Data Studio 360 report that connects to your data, calculates a metric, and then
uses a filter expression to show only suboptimal rows in a table. Most Voted
Correct Answer: D
You create a new report for your large team in Google Data Studio 360. The report uses Google BigQuery as its data source. It is company policy
to ensure employees can view only the data associated with their region, so you create and populate a table for each region. You need to enforce this requirement. Which two actions should you take? (Choose two.)
C. Adjust the settings for each table to allow a related region-based security group view access.
D. Adjust the settings for each view to allow a related region-based security group view access.
E. Adjust the settings for each dataset to allow a related region-based security group view access. Most Voted
Correct Answer: BE
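A sketch of answer E using the BigQuery client library; the dataset ID and group address are assumptions.

```python
# Sketch: grant a region-based security group read access at the dataset level.
# The dataset ID and group email are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()
dataset = client.get_dataset("mjtelco-prod.sales_emea")

entries = list(dataset.access_entries)
entries.append(
    bigquery.AccessEntry(
        role="READER",
        entity_type="groupByEmail",
        entity_id="emea-sales@example.com",
    )
)
dataset.access_entries = entries
client.update_dataset(dataset, ["access_entries"])
```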
MJTelco needs you to create a schema in Google Bigtable that will allow for the historical analysis of the last 2 years of records. Each record that
comes in is sent every 15 minutes, and contains a unique identifier of the device and a data record. The most common query is for all the data for
Correct Answer: A
Your company has recently grown rapidly and is now ingesting data at a significantly higher rate than before. You manage the daily batch
MapReduce analytics jobs in Apache Hadoop. However, the recent increase in data has meant the batch jobs are falling behind. You were asked to
recommend ways the development team could increase the responsiveness of the analytics without increasing costs. What should you recommend?
D. Decrease the size of the Hadoop cluster but also rewrite the job in Hive.
Correct Answer: B
You work for a large fast food restaurant chain with over 400,000 employees. You store employee information in Google BigQuery in a Users table
consisting of a FirstName field and a LastName field. A member of IT is building an application and asks you to modify the schema and data in
BigQuery so the application can query a FullName field consisting of the value of the FirstName field concatenated with a space, followed by the
value of the LastName field for each employee. How can you make that data available while minimizing cost?
A. Create a view in BigQuery that concatenates the FirstName and LastName field values to produce the FullName. Most Voted
B. Add a new column called FullName to the Users table. Run an UPDATE statement that updates the FullName column for each user with the
C. Create a Google Cloud Dataflow job that queries BigQuery for the entire Users table, concatenates the FirstName value and LastName value
for each user, and loads the proper values for FirstName, LastName, and FullName into a new table in BigQuery.
D. Use BigQuery to export the data for the table to a CSV file. Create a Google Cloud Dataproc job to process the CSV file and output a new
CSV file containing the proper values for FirstName, LastName and FullName. Run a BigQuery load job to load the new CSV file into BigQuery.
Correct Answer: A
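A sketch of answer A; the dataset name is an assumption.

```python
# Sketch: a view that derives FullName on the fly, so no data is rewritten and
# no extra storage is billed. Dataset name is hypothetical.
from google.cloud import bigquery

FULLNAME_VIEW_SQL = """
CREATE OR REPLACE VIEW `hr.UsersWithFullName` AS
SELECT
  FirstName,
  LastName,
  CONCAT(FirstName, ' ', LastName) AS FullName
FROM `hr.Users`
"""

bigquery.Client().query(FULLNAME_VIEW_SQL).result()
```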
You are deploying a new storage system for your mobile application, which is a media streaming service. You decide the best fit is Google Cloud
Datastore. You have entities with multiple properties, some of which can take on multiple values. For example, in the entity 'Movie' the property
'tags' has multiple values but the property 'date released' does not. A typical query would ask for all movies with actor=<actorname> ordered by
date_released or all movies with tag=Comedy ordered by date_released. How should you avoid a combinatorial explosion in the number of
indexes?
A. Manually configure the index in your index config as follows: Most Voted
Correct Answer: A
You work for a manufacturing plant that batches application log files together into a single log file once a day at 2:00 AM. You have written a
Google Cloud
Dataflow job to process that log file. You need to make sure the log file is processed once per day as inexpensively as possible. What should you
do?
B. Manually start the Cloud Dataflow job each morning when you get into the office.
C. Create a cron job with Google App Engine Cron Service to run the Cloud Dataflow job. Most Voted
D. Configure the Cloud Dataflow job as a streaming job so that it processes the log data immediately.
Correct Answer: C
You work for an economic consulting firm that helps companies identify economic trends as they happen. As part of your analysis, you use
Google BigQuery to correlate customer data with the average prices of the 100 most common goods sold, including bread, gasoline, milk, and
others. The average prices of these goods are updated every 30 minutes. You want to make sure this data stays up to date so you can combine it
A. Load the data every 30 minutes into a new partitioned table in BigQuery.
B. Store and update the data in a regional Google Cloud Storage bucket and create a federated data source in BigQuery Most Voted
C. Store the data in Google Cloud Datastore. Use Google Cloud Dataflow to query BigQuery and combine the data programmatically with the
D. Store the data in a file in a regional Google Cloud Storage bucket. Use Cloud Dataflow to query BigQuery and combine the data
Correct Answer: B
You are designing the database schema for a machine learning-based food ordering service that will predict what users want to eat. Here is some of the information you need to store:
✑ The user profile: What the user likes and doesn't like to eat
✑ The user account information: Name, address, preferred meal times
✑ The order information: When orders are made, from where, to whom
The database will be used to store all the transactional data of the product. You want to optimize the data schema. Which Google Cloud Platform product should you use?
B. Cloud SQL
C. Cloud Bigtable
D. Cloud Datastore
Correct Answer: A
Your company is loading comma-separated values (CSV) files into Google BigQuery. The data imports successfully; however, the imported
data does not match the source file byte for byte. What is the most likely cause of this problem?
B. The CSV data has invalid rows that were skipped on import.
C. The CSV data loaded in BigQuery is not using BigQuery's default encoding. Most Voted
D. The CSV data has not gone through an ETL phase before loading into BigQuery.
Correct Answer: C
Your company produces 20,000 files every hour. Each data file is formatted as a comma separated values (CSV) file that is less than 4 KB. All files
must be ingested on Google Cloud Platform before they can be processed. Your company site has a 200 ms latency to Google Cloud, and your
Internet connection bandwidth is limited to 50 Mbps. You currently deploy a secure FTP (SFTP) server on a virtual machine in Google Compute
Engine as the data ingestion point. A local SFTP client runs on a dedicated machine to transmit the CSV files as is. The goal is to make reports
with data from the previous day available to the executives by 10:00 a.m. each day. This design is barely able to keep up with the current volume.
You are told that due to seasonality, your company expects the number of files to double for the next three months. Which two actions should you take? (Choose two.)
A. Introduce data compression for each file to increase the rate of file transfer.
B. Contact your internet service provider (ISP) to increase your maximum bandwidth to at least 100 Mbps.
C. Redesign the data ingestion process to use gsutil tool to send the CSV files to a storage bucket in parallel. Most Voted
D. Assemble 1,000 files into a tape archive (TAR) file. Transmit the TAR files instead, and disassemble the CSV files in the cloud upon
E. Create an S3-compatible storage endpoint in your network, and use Google Cloud Storage Transfer Service to transfer on-premises data to
Correct Answer: CD
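A sketch of answer C using gsutil's parallel copy; the local path and bucket name are assumptions.

```python
# Sketch: push the small CSV files to Cloud Storage in parallel with gsutil -m
# instead of a single-threaded SFTP stream. Paths and bucket are hypothetical.
import subprocess

subprocess.run(
    [
        "gsutil",
        "-m",          # parallel (multi-threaded / multi-process) copy
        "cp",
        "-r",
        "/data/outgoing/csv/",
        "gs://ingest-bucket/daily/",
    ],
    check=True,
)
```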
You are choosing a NoSQL database to handle telemetry data submitted from millions of Internet-of-Things (IoT) devices. The volume of data is
growing at 100
TB per year, and each data entry has about 100 attributes. The data processing pipeline does not require atomicity, consistency, isolation, and
durability (ACID).
You need to analyze the data by querying against individual fields. Which three databases meet your requirements? (Choose three.)
A. Redis
C. MySQL
You are training a spam classifier. You notice that you are overfitting the training data. Which three actions can you take to resolve this problem?
(Choose three.)
You are implementing security best practices on your data pipeline. Currently, you are manually executing jobs as the Project Owner. You want to
automate these jobs by taking nightly batch files containing non-public information from Google Cloud Storage, processing them with a Spark job on a Cloud Dataproc cluster, and writing the results to Google BigQuery. What should you do?
A. Restrict the Google Cloud Storage bucket so only you can see the files
B. Grant the Project Owner role to a service account, and run the job with it
C. Use a service account with the ability to read the batch files and to write to BigQuery Most Voted
D. Use a user account with the Project Viewer role on the Cloud Dataproc cluster to read the batch files and write to BigQuery
Correct Answer: C
You are using Google BigQuery as your data warehouse. Your users report that the following simple query is running very slowly, no matter when it is run.
You check the query plan for the query and see the following output in the Read section of Stage:1:
What is the most likely cause of the delay for this query?
C. Either the state or the city columns in the [myproject:mydataset.mytable] table have too many NULL values
D. Most rows in the [myproject:mydataset.mytable] table have the same value in the country column, causing data skew Most Voted
Correct Answer: D
Your globally distributed auction application allows users to bid on items. Occasionally, users place identical bids at nearly identical times, and
different application servers process those bids. Each bid event contains the item, amount, user, and timestamp. You want to collate those bid
events into a single location in real time to determine which user bid first. What should you do?
A. Create a file on a shared file server and have the application servers write all bid events to that file. Process the file with Apache Hadoop to
B. Have each application server write the bid events to Cloud Pub/Sub as they occur. Push the events from Cloud Pub/Sub to a custom
endpoint that writes the bid event information into Cloud SQL.
C. Set up a MySQL database for each application server to write bid events into. Periodically query each of those distributed MySQL
databases and update a master MySQL database with bid event information.
D. Have each application server write the bid events to Google Cloud Pub/Sub as they occur. Use a pull subscription to pull the bid events
using Google Cloud Dataflow. Give the bid for each item to the user in the bid event that is processed first. Most Voted
Correct Answer: D
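For context, a minimal Apache Beam (Python SDK) sketch of option D: pull bid events from a Pub/Sub subscription and keep the earliest bid per item. The project, subscription, and message fields are hypothetical.

```python
# Hedged sketch: determine the first bid per item from a Pub/Sub stream.
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms.window import FixedWindows

def parse(msg: bytes) -> tuple:
    e = json.loads(msg.decode("utf-8"))
    # Key by item; keep (timestamp, user, amount) as the value.
    return e["item"], (e["timestamp"], e["user"], e["amount"])

opts = PipelineOptions(streaming=True)
with beam.Pipeline(options=opts) as p:
    (p
     | "Read" >> beam.io.ReadFromPubSub(
           subscription="projects/my-project/subscriptions/bids")
     | "Parse" >> beam.Map(parse)
     | "Window" >> beam.WindowInto(FixedWindows(60))
     # The minimum timestamp per item within the window identifies the first bid.
     | "FirstBid" >> beam.CombinePerKey(min)
     | "Print" >> beam.Map(print))
```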
Your organization has been collecting and analyzing data in Google BigQuery for 6 months. The majority of the data analyzed is placed in a time-
partitioned table named events_partitioned. To reduce the cost of queries, your organization created a view called events, which queries only the
last 14 days of data. The view is defined in legacy SQL. Next month, existing applications will be connecting to BigQuery to read the events data
via an ODBC connection. You need to ensure the applications can connect. Which two actions should you take? (Choose two.)
C. Create a new view over events_partitioned using standard SQL Most Voted
D. Create a service account for the ODBC connection to use for authentication Most Voted
E. Create a Google Cloud Identity and Access Management (Cloud IAM) role for the ODBC connection and shared "events"
Correct Answer: CD
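A minimal sketch of option C with the BigQuery Python client, assuming an ingestion-time partitioned source table and hypothetical project/dataset names:

```python
# Hedged sketch: recreate the 14-day "events" view in standard SQL so ODBC
# clients (which use standard SQL) can query it.
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client(project="my-project")

view = bigquery.Table("my-project.mydataset.events")
view.view_query = """
    SELECT *
    FROM `my-project.mydataset.events_partitioned`
    WHERE _PARTITIONTIME >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 14 DAY)
"""
# view_use_legacy_sql defaults to False, i.e. standard SQL.
client.create_table(view, exists_ok=True)
```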
You have enabled the free integration between Firebase Analytics and Google BigQuery. Firebase now automatically creates a new table daily in
BigQuery in the format app_events_YYYYMMDD. You want to query all of the tables for the past 30 days in legacy SQL. What should you do?
Correct Answer: A
Your company is currently setting up data pipelines for their campaign. For all the Google Cloud Pub/Sub streaming data, one of the important
business requirements is to be able to periodically identify the inputs and their timings during their campaign. Engineers have decided to use
windowing and transformation in Google Cloud Dataflow for this purpose. However, when testing this feature, they find that the Cloud Dataflow job
fails for all streaming inserts. What is the most likely cause of this problem?
A. They have not assigned the timestamp, which causes the job to fail
B. They have not set the triggers to accommodate the data coming in late, which causes the job to fail
C. They have not applied a global windowing function, which causes the job to fail when the pipeline is created
D. They have not applied a non-global windowing function, which causes the job to fail when the pipeline is created Most Voted
Correct Answer: D
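The failure described in option D is the classic "GroupByKey cannot be applied to an unbounded PCollection in the GlobalWindow without a trigger" error. A hedged Beam sketch of the fix, with a hypothetical topic name:

```python
# Hedged sketch: apply a non-global (fixed) window so that grouping and
# aggregation on an unbounded Pub/Sub source can succeed.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms.window import FixedWindows

opts = PipelineOptions(streaming=True)
with beam.Pipeline(options=opts) as p:
    (p
     | beam.io.ReadFromPubSub(topic="projects/my-project/topics/campaign")
     | beam.Map(lambda b: (b.decode("utf-8"), 1))
     # Without this non-global window (or an explicit trigger), a GroupByKey /
     # CombinePerKey on an unbounded source fails when the pipeline is built.
     | beam.WindowInto(FixedWindows(5 * 60))
     | beam.CombinePerKey(sum)
     | beam.Map(print))
```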
You architect a system to analyze seismic data. Your extract, transform, and load (ETL) process runs as a series of MapReduce jobs on an Apache
Hadoop cluster. The ETL process takes days to process a data set because some steps are computationally expensive. Then you discover that a
sensor calibration step has been omitted. How should you change your ETL process to carry out sensor calibration systematically in the future?
A. Modify the transform MapReduce jobs to apply sensor calibration before they do anything else.
B. Introduce a new MapReduce job to apply sensor calibration to raw data, and ensure all other MapReduce jobs are chained after this.
Most Voted
C. Add sensor calibration data to the output of the ETL process, and document that all users need to apply sensor calibration themselves.
D. Develop an algorithm through simulation to predict variance of data output from the last MapReduce job based on calibration factors, and
Correct Answer: B
An online retailer has built their current application on Google App Engine. A new initiative at the company mandates that they extend their
application to allow their customers to transact directly via the application. They need to manage their shopping transactions and analyze
combined data from multiple datasets using a business intelligence (BI) tool. They want to use only a single database for this purpose. Which database should they use?
A. BigQuery
C. Cloud BigTable
D. Cloud Datastore
Correct Answer: B
You launched a new gaming app almost three years ago. You have been uploading log files from the previous day to a separate Google BigQuery
table with the table name format LOGS_yyyymmdd. You have been using table wildcard functions to generate daily and monthly reports for all time
ranges. Recently, you discovered that some queries that cover long date ranges are exceeding the limit of 1,000 tables and failing. How can you resolve this issue?
B. Convert the sharded tables into a single partitioned table Most Voted
C. Enable query caching so you can cache data from previous months
D. Create separate views to cover each month, and query from these views
Correct Answer: B
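One way to carry out option B is a standard SQL DDL statement over the wildcard of the sharded tables. A hedged sketch, with hypothetical project, dataset, and column names (and assuming the shards do not already contain a log_date column; very large shard counts may need to be copied in batches):

```python
# Hedged sketch: consolidate sharded LOGS_yyyymmdd tables into one
# date-partitioned table using a wildcard query and _TABLE_SUFFIX.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

ddl = """
CREATE TABLE `my-project.game_logs.logs_partitioned`
PARTITION BY log_date AS
SELECT
  PARSE_DATE('%Y%m%d', _TABLE_SUFFIX) AS log_date,
  *
FROM `my-project.game_logs.LOGS_*`
"""
client.query(ddl).result()
# Reports then query logs_partitioned with a WHERE log_date filter instead of
# table wildcards, so the 1,000-table limit no longer applies.
```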
Your analytics team wants to build a simple statistical model to determine which customers are most likely to work with your company again,
based on a few different metrics. They want to run the model on Apache Spark, using data housed in Google Cloud Storage, and you have chosen
Dataproc to execute this job. Testing has shown that this workload can run in approximately 30 minutes on a 15-node cluster, outputting the results to
BigQuery. The plan is to run this workload weekly. How should you optimize the cluster for cost?
B. Use pre-emptible virtual machines (VMs) for the cluster Most Voted
D. Use SSDs on the worker nodes so that the job can run faster
Correct Answer: B
Your company receives both batch- and stream-based event data. You want to process the data using Google Cloud Dataflow over a predictable
time period.
However, you realize that in some instances data can arrive late or out of order. How should you design your Cloud Dataflow pipeline to handle data that is late or out of order?
C. Use watermarks and timestamps to capture the lagged data. Most Voted
D. Ensure every datasource type (stream or batch) has a timestamp, and use the timestamps to define the logic for lagged data.
Correct Answer: C
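A hedged Beam sketch of option C, assuming a recent Beam Python SDK and a hypothetical subscription whose messages carry an event-time attribute; the window size and lateness values are illustrative only.

```python
# Hedged sketch: use the watermark plus allowed lateness so that late or
# out-of-order events are still assigned to the correct event-time window.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms.window import FixedWindows
from apache_beam.transforms.trigger import AfterWatermark, AfterCount, AccumulationMode

opts = PipelineOptions(streaming=True)
with beam.Pipeline(options=opts) as p:
    (p
     | beam.io.ReadFromPubSub(
           subscription="projects/my-project/subscriptions/events",
           timestamp_attribute="event_time")        # event-time timestamps
     | beam.Map(lambda b: (b.decode("utf-8"), 1))
     | beam.WindowInto(
           FixedWindows(60),
           trigger=AfterWatermark(late=AfterCount(1)),  # re-fire for late data
           allowed_lateness=10 * 60,                    # accept data up to 10 min late
           accumulation_mode=AccumulationMode.ACCUMULATING)
     | beam.CombinePerKey(sum)
     | beam.Map(print))
```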
You have some data, which is shown in the graphic below. The two dimensions are X and Y, and the shade of each dot represents what class it is.
You want to classify this data accurately using a linear algorithm. To do this you need to add a synthetic feature. What should the value of that
feature be?
B. X2
C. Y2
D. cos(X)
Correct Answer: A
You are integrating one of your internal IT applications and Google BigQuery, so users can query BigQuery from the application's interface. You do
not want individual users to authenticate to BigQuery and you do not want to give them access to the dataset. You need to securely access BigQuery from the application. What should you do?
A. Create groups for your users and give those groups access to the dataset
B. Integrate with a single sign-on (SSO) platform, and pass each user's credentials along with the query request
C. Create a service account and grant dataset access to that account. Use the service account's private key to access the dataset Most Voted
D. Create a dummy user and grant dataset access to that user. Store the username and password for that user in a file on the file system, and
Correct Answer: C
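A minimal sketch of option C: the application authenticates as a service account (not as individual users) using its private key file. The project, key path, and table below are hypothetical.

```python
# Hedged sketch: query BigQuery with service-account credentials.
from google.cloud import bigquery
from google.oauth2 import service_account

creds = service_account.Credentials.from_service_account_file(
    "/secrets/bq-reader-sa.json")          # key for a SA granted dataset access
client = bigquery.Client(project="my-project", credentials=creds)

rows = client.query(
    "SELECT COUNT(*) AS n FROM `my-project.analytics.events`").result()
for row in rows:
    print(row.n)
```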
You are building a data pipeline on Google Cloud. You need to prepare data using a casual method for a machine-learning process. You want to
support a logistic regression model. You also need to monitor and adjust for null values, which must remain real-valued and cannot be removed.
A. Use Cloud Dataprep to find null values in sample source data. Convert all nulls to 'none' using a Cloud Dataproc job.
B. Use Cloud Dataprep to find null values in sample source data. Convert all nulls to 0 using a Cloud Dataprep job. Most Voted
C. Use Cloud Dataflow to find null values in sample source data. Convert all nulls to 'none' using a Cloud Dataprep job.
D. Use Cloud Dataflow to find null values in sample source data. Convert all nulls to 0 using a custom script.
Correct Answer: B
You set up a streaming data insert into a Redis cluster via a Kafka cluster. Both clusters are running on Compute Engine instances. You need to
encrypt data at rest with encryption keys that you can create, rotate, and destroy as needed. What should you do?
A. Create a dedicated service account, and use encryption at rest to reference your data stored in your Compute Engine cluster instances as
B. Create encryption keys in Cloud Key Management Service. Use those keys to encrypt your data in all of the Compute Engine cluster
C. Create encryption keys locally. Upload your encryption keys to Cloud Key Management Service. Use those keys to encrypt your data in all of
D. Create encryption keys in Cloud Key Management Service. Reference those keys in your API service calls when accessing the data in your
Correct Answer: B
You are developing an application that uses a recommendation engine on Google Cloud. Your solution should display new videos to customers
based on past views. Your solution needs to generate labels for the entities in videos that the customer has viewed. Your design must be able to
provide very fast filtering suggestions based on data from other customer preferences on several TB of data. What should you do?
A. Build and train a complex classification model with Spark MLlib to generate labels and filter the results. Deploy the models using Cloud
B. Build and train a classification model with Spark MLlib to generate labels. Build and train a second classification model with Spark MLlib to
filter results to match customer preferences. Deploy the models using Cloud Dataproc. Call the models from your application.
C. Build an application that calls the Cloud Video Intelligence API to generate labels. Store data in Cloud Bigtable, and filter the predicted
labels to match the user's viewing history to generate preferences. Most Voted
D. Build an application that calls the Cloud Video Intelligence API to generate labels. Store data in Cloud SQL, and join and filter the predicted
Correct Answer: C
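A hedged sketch of the label-generation half of option C using the Cloud Video Intelligence API; the GCS URI is a placeholder, and the Bigtable write is only indicated in a comment.

```python
# Hedged sketch: generate video labels with the Cloud Video Intelligence API.
from google.cloud import videointelligence

client = videointelligence.VideoIntelligenceServiceClient()
operation = client.annotate_video(
    request={
        "features": [videointelligence.Feature.LABEL_DETECTION],
        "input_uri": "gs://my-videos/watched/clip-123.mp4",
    }
)
result = operation.result(timeout=300)

# Each label (e.g. "cooking", "soccer") could then be written to Cloud Bigtable
# keyed by customer/video for fast preference filtering.
for label in result.annotation_results[0].segment_label_annotations:
    print(label.entity.description)
```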
You are selecting services to write and transform JSON messages from Cloud Pub/Sub to BigQuery for a data pipeline on Google Cloud. You want
to minimize service costs. You also want to monitor and accommodate input data volume that will vary in size with minimal manual intervention.
A. Use Cloud Dataproc to run your transformations. Monitor CPU utilization for the cluster. Resize the number of worker nodes in your cluster
B. Use Cloud Dataproc to run your transformations. Use the diagnose command to generate an operational output archive. Locate the
C. Use Cloud Dataflow to run your transformations. Monitor the job system lag with Stackdriver. Use the default autoscaling setting for worker
D. Use Cloud Dataflow to run your transformations. Monitor the total execution time for a sampling of jobs. Configure the job to use non-
Correct Answer: C
Your infrastructure includes a set of YouTube channels. You have been tasked with creating a process for sending the YouTube channel data to
Google Cloud for analysis. You want to design a solution that allows your world-wide marketing teams to perform ANSI SQL and other types of
analysis on up-to-date YouTube channels log data. How should you set up the log data transfer into Google Cloud?
A. Use Storage Transfer Service to transfer the offsite backup files to a Cloud Storage Multi-Regional storage bucket as a final destination.
B. Use Storage Transfer Service to transfer the offsite backup files to a Cloud Storage Regional bucket as a final destination.
C. Use BigQuery Data Transfer Service to transfer the offsite backup files to a Cloud Storage Multi-Regional storage bucket as a final
D. Use BigQuery Data Transfer Service to transfer the offsite backup files to a Cloud Storage Regional storage bucket as a final destination.
Correct Answer: C
You are designing storage for very large text files for a data pipeline on Google Cloud. You want to support ANSI SQL queries. You also want to
support compression and parallel load from the input locations using Google recommended practices. What should you do?
A. Transform text files to compressed Avro using Cloud Dataflow. Use BigQuery for storage and query.
B. Transform text files to compressed Avro using Cloud Dataflow. Use Cloud Storage and BigQuery permanent linked tables for query.
Most Voted
C. Compress text files to gzip using the Grid Computing Tools. Use BigQuery for storage and query.
D. Compress text files to gzip using the Grid Computing Tools. Use Cloud Storage, and then import into Cloud Bigtable for query.
Correct Answer: B
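A minimal sketch of the "permanent linked table" part of option B: a BigQuery table backed by compressed Avro files that stay in Cloud Storage. The project, dataset, and GCS path are hypothetical.

```python
# Hedged sketch: a permanent BigQuery table linked to Avro files in GCS,
# so queries use ANSI SQL while the data remains in Cloud Storage.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

external_config = bigquery.ExternalConfig("AVRO")
external_config.source_uris = ["gs://my-pipeline/avro/*.avro"]

table = bigquery.Table("my-project.lake.events_external")
table.external_data_configuration = external_config
client.create_table(table, exists_ok=True)

# The linked table is then queryable like any other table:
client.query("SELECT COUNT(*) FROM `my-project.lake.events_external`").result()
```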
You are developing an application on Google Cloud that will automatically generate subject labels for users' blog posts. You are under competitive
pressure to add this feature quickly, and you have no additional developer resources. No one on your team has experience with machine learning.
A. Call the Cloud Natural Language API from your application. Process the generated Entity Analysis as labels. Most Voted
B. Call the Cloud Natural Language API from your application. Process the generated Sentiment Analysis as labels.
C. Build and train a text classification model using TensorFlow. Deploy the model using Cloud Machine Learning Engine. Call the model from
D. Build and train a text classification model using TensorFlow. Deploy the model using a Kubernetes Engine cluster. Call the model from your
Correct Answer: A
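A minimal sketch of option A: call the Cloud Natural Language API and use the detected entities as subject labels. The sample text and salience threshold are illustrative.

```python
# Hedged sketch: turn Entity Analysis results into subject labels for a post.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()
document = language_v1.Document(
    content="Exploring sourdough baking and wild yeast starters.",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)
response = client.analyze_entities(document=document)

# Entity names (optionally filtered by salience) become the post's labels.
labels = [e.name for e in response.entities if e.salience > 0.01]
print(labels)
```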
You are designing storage for 20 TB of text files as part of deploying a data pipeline on Google Cloud. Your input data is in CSV format. You want
to minimize the cost of querying aggregate values for multiple users who will query the data in Cloud Storage with multiple engines. Which
A. Use Cloud Bigtable for storage. Install the HBase shell on a Compute Engine instance to query the Cloud Bigtable data.
B. Use Cloud Bigtable for storage. Link as permanent tables in BigQuery for query.
C. Use Cloud Storage for storage. Link as permanent tables in BigQuery for query. Most Voted
D. Use Cloud Storage for storage. Link as temporary tables in BigQuery for query.
Correct Answer: C
You are designing storage for two relational tables that are part of a 10-TB database on Google Cloud. You want to support transactions that scale
horizontally.
You also want to optimize data for range queries on non-key columns. What should you do?
A. Use Cloud SQL for storage. Add secondary indexes to support query patterns.
B. Use Cloud SQL for storage. Use Cloud Dataflow to transform data to support query patterns.
C. Use Cloud Spanner for storage. Add secondary indexes to support query patterns. Most Voted
D. Use Cloud Spanner for storage. Use Cloud Dataflow to transform data to support query patterns.
Correct Answer: C
Your financial services company is moving to cloud technology and wants to store 50 TB of financial time-series data in the cloud. This data is
updated frequently and new data will be streaming in all the time. Your company also wants to move their existing Apache Hadoop jobs to the
B. Google BigQuery
Correct Answer: A
An organization maintains a Google BigQuery dataset that contains tables with user-level data. They want to expose aggregates of this data to
other Google
Cloud projects, while still controlling access to the user-level data. Additionally, they need to minimize their overall storage cost and ensure the
analysis cost for other projects is assigned to those projects. What should they do?
A. Create and share an authorized view that provides the aggregate results. Most Voted
B. Create and share a new dataset and view that provides the aggregate results.
C. Create and share a new dataset and table that contains the aggregate results.
D. Create dataViewer Identity and Access Management (IAM) roles on the dataset to enable sharing.
Correct Answer: A
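A hedged sketch of option A with the BigQuery Python client: create the aggregate view in a shareable dataset, then authorize that view against the private source dataset so consumers never see the user-level table. All project, dataset, and column names are hypothetical.

```python
# Hedged sketch: expose only aggregates via an authorized view.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# 1) Create the aggregate view in the dataset other projects can read.
view = bigquery.Table("my-project.shared_reports.daily_aggregates")
view.view_query = """
    SELECT event_date, country, COUNT(*) AS events
    FROM `my-project.private.user_events`
    GROUP BY event_date, country
"""
view = client.create_table(view, exists_ok=True)

# 2) Authorize the view to read the private dataset; consumers query the view
#    (billed to their own projects) without access to user-level rows.
source = client.get_dataset("my-project.private")
entries = list(source.access_entries)
entries.append(bigquery.AccessEntry(None, "view", view.reference.to_api_repr()))
source.access_entries = entries
client.update_dataset(source, ["access_entries"])
```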
Government regulations in your industry mandate that you have to maintain an auditable record of access to certain types of data. Assuming that
all expiring logs will be archived correctly, where should you store data that is subject to that mandate?
A. Encrypted on Cloud Storage with user-supplied encryption keys. A separate decryption key will be given to each authorized user.
B. In a BigQuery dataset that is viewable only by authorized personnel, with the Data Access log used to provide the auditability. Most Voted
C. In Cloud SQL, with separate database user names to each user. The Cloud SQL Admin activity logs will be used to provide the auditability.
D. In a bucket on Cloud Storage that is accessible only by an AppEngine service that collects user information and logs the access before
Correct Answer: B
Your neural network model is taking days to train. You want to increase the training speed. What can you do?
Correct Answer: B
You are responsible for writing your company's ETL pipelines to run on an Apache Hadoop cluster. The pipeline will require some checkpointing
and splitting pipelines. Which method should you use to write the pipelines?
Correct Answer: A
Your company maintains a hybrid deployment with GCP, where analytics are performed on your anonymized customer data. The data are imported
to Cloud
Storage from your data center through parallel uploads to a data transfer server running on GCP. Management informs you that the daily transfers
take too long and have asked you to fix the problem. You want to maximize transfer speeds. Which action should you take?
C. Increase your network bandwidth from your datacenter to GCP. Most Voted
Correct Answer: C
Company Overview -
MJTelco is a startup that plans to build networks in rapidly growing, underserved markets around the world. The company has patents for
innovative optical communications hardware. Based on these patents, they can create many reliable, high-speed backbone links with inexpensive
hardware.
Company Background -
Founded by experienced telecom executives, MJTelco uses technologies originally developed to overcome communications challenges in space.
Fundamental to their operation, they need to create a distributed data infrastructure that drives real-time analysis and incorporates machine
learning to continuously optimize their topologies. Because their hardware is inexpensive, they plan to overdeploy the network allowing them to
account for the impact of dynamic regional politics on location availability and cost.
Their management and operations teams are situated all around the globe, creating a many-to-many relationship between data consumers and
providers in their system. After careful consideration, they decided the public cloud is the perfect environment to support their needs.
Solution Concept -
MJTelco is running a successful proof-of-concept (PoC) project in its labs. They have two primary needs:
✑ Scale and harden their PoC to support significantly more data flows generated when they ramp to more than 50,000 installations.
✑ Refine their machine-learning cycles to verify and improve the dynamic models they use to control topology definition.
MJTelco will also use three separate operating environments - development/test, staging, and production - to meet the needs of running
Business Requirements -
✑ Scale up their production environment with minimal cost, instantiating resources when and where needed in an unpredictable, distributed
telecom user community.
✑ Ensure security of their proprietary data to protect their leading-edge machine learning and analysis.
✑ Provide reliable and timely access to data for analysis from distributed research workers
✑ Maintain isolated environments that support rapid iteration of their machine-learning models without affecting their customers.
Technical Requirements -
Rapidly scale instances to support between 10,000 and 100,000 data providers with multiple flows each.
Allow analysis and presentation against data tables tracking up to 2 years of data storing approximately 100m records/day
Support rapid iteration of monitoring infrastructure focused on awareness of data pipeline problems both in telemetry flows and in production
learning cycles.
CEO Statement -
Our business model relies on our patents, analytics and dynamic machine learning. Our inexpensive hardware is organized to be highly reliable,
which gives us cost advantages. We need to quickly stabilize our large distributed data pipelines to meet our reliability and capacity
commitments.
CTO Statement -
Our public cloud services must operate as advertised. We need resources that scale and keep our data secure. We also need environments in
which our data scientists can carefully study and quickly adapt our models. Because we rely on automation to process our data, we also need our
CFO Statement -
The project is too large for us to maintain the hardware and software required for the data and analysis. Also, we cannot afford to staff an
operations team to monitor so many data feeds, so we will rely on automation and infrastructure. Google Cloud's machine learning will allow our
quantitative researchers to work on our high-value problems instead of problems with our data pipelines.
MJTelco is building a custom interface to share data. They have these requirements:
2. They need to scan specific time range rows with a very fast response time (milliseconds).
Correct Answer: C
Company Overview -
MJTelco is a startup that plans to build networks in rapidly growing, underserved markets around the world. The company has patents for
innovative optical communications hardware. Based on these patents, they can create many reliable, high-speed backbone links with inexpensive
hardware.
Company Background -
Founded by experienced telecom executives, MJTelco uses technologies originally developed to overcome communications challenges in space.
Fundamental to their operation, they need to create a distributed data infrastructure that drives real-time analysis and incorporates machine
learning to continuously optimize their topologies. Because their hardware is inexpensive, they plan to overdeploy the network allowing them to
account for the impact of dynamic regional politics on location availability and cost.
Their management and operations teams are situated all around the globe, creating a many-to-many relationship between data consumers and
providers in their system. After careful consideration, they decided the public cloud is the perfect environment to support their needs.
Solution Concept -
MJTelco is running a successful proof-of-concept (PoC) project in its labs. They have two primary needs:
✑ Scale and harden their PoC to support significantly more data flows generated when they ramp to more than 50,000 installations.
✑ Refine their machine-learning cycles to verify and improve the dynamic models they use to control topology definition.
MJTelco will also use three separate operating environments - development/test, staging, and production - to meet the needs of running
Business Requirements -
✑ Scale up their production environment with minimal cost, instantiating resources when and where needed in an unpredictable, distributed
telecom user community.
✑ Ensure security of their proprietary data to protect their leading-edge machine learning and analysis.
✑ Provide reliable and timely access to data for analysis from distributed research workers
✑ Maintain isolated environments that support rapid iteration of their machine-learning models without affecting their customers.
Technical Requirements -
Rapidly scale instances to support between 10,000 and 100,000 data providers with multiple flows each.
Allow analysis and presentation against data tables tracking up to 2 years of data storing approximately 100m records/day
Support rapid iteration of monitoring infrastructure focused on awareness of data pipeline problems both in telemetry flows and in production
learning cycles.
CEO Statement -
Our business model relies on our patents, analytics and dynamic machine learning. Our inexpensive hardware is organized to be highly reliable,
which gives us cost advantages. We need to quickly stabilize our large distributed data pipelines to meet our reliability and capacity
commitments.
CTO Statement -
Our public cloud services must operate as advertised. We need resources that scale and keep our data secure. We also need environments in
which our data scientists can carefully study and quickly adapt our models. Because we rely on automation to process our data, we also need our
CFO Statement -
The project is too large for us to maintain the hardware and software required for the data and analysis. Also, we cannot afford to staff an
operations team to monitor so many data feeds, so we will rely on automation and infrastructure. Google Cloud's machine learning will allow our
quantitative researchers to work on our high-value problems instead of problems with our data pipelines.
You need to compose visualization for operations teams with the following requirements:
✑ Telemetry must include data from all 50,000 installations for the most recent 6 weeks (sampling once every minute)
✑ The report must not be more than 3 hours delayed from live data.
✑ The actionable report should only show suboptimal links.
✑ Most suboptimal links should be sorted to the top.
✑ Suboptimal links can be grouped and filtered by regional geography.
✑ User response time to load the report must be <5 seconds.
You create a data source to store the last 6 weeks of data, and create visualizations that allow viewers to see multiple date ranges, distinct
geographic regions, and unique installation types. You always show the latest data without any changes to your visualizations. You want to avoid
creating and updating new visualizations each month. What should you do?
A. Look through the current data and compose a series of charts and tables, one for each possible combination of criteria.
B. Look through the current data and compose a small set of generalized charts and tables bound to criteria filters that allow value selection.
Most Voted
C. Export the data to a spreadsheet, compose a series of charts and tables, one for each possible combination of criteria, and spread them
D. Load the data into relational database tables, write a Google App Engine application that queries all rows, summarizes the data across each
criteria, and then renders results using the Google Charts and visualization API.
Correct Answer: B
Company Overview -
MJTelco is a startup that plans to build networks in rapidly growing, underserved markets around the world. The company has patents for
innovative optical communications hardware. Based on these patents, they can create many reliable, high-speed backbone links with inexpensive
hardware.
Company Background -
Founded by experienced telecom executives, MJTelco uses technologies originally developed to overcome communications challenges in space.
Fundamental to their operation, they need to create a distributed data infrastructure that drives real-time analysis and incorporates machine
learning to continuously optimize their topologies. Because their hardware is inexpensive, they plan to overdeploy the network allowing them to
account for the impact of dynamic regional politics on location availability and cost.
Their management and operations teams are situated all around the globe, creating a many-to-many relationship between data consumers and
providers in their system. After careful consideration, they decided the public cloud is the perfect environment to support their needs.
Solution Concept -
MJTelco is running a successful proof-of-concept (PoC) project in its labs. They have two primary needs:
✑ Scale and harden their PoC to support significantly more data flows generated when they ramp to more than 50,000 installations.
✑ Refine their machine-learning cycles to verify and improve the dynamic models they use to control topology definition.
MJTelco will also use three separate operating environments - development/test, staging, and production - to meet the needs of running
Business Requirements -
✑ Scale up their production environment with minimal cost, instantiating resources when and where needed in an unpredictable, distributed
telecom user community.
✑ Ensure security of their proprietary data to protect their leading-edge machine learning and analysis.
✑ Provide reliable and timely access to data for analysis from distributed research workers
✑ Maintain isolated environments that support rapid iteration of their machine-learning models without affecting their customers.
Technical Requirements -
Rapidly scale instances to support between 10,000 and 100,000 data providers with multiple flows each.
Allow analysis and presentation against data tables tracking up to 2 years of data storing approximately 100m records/day
Support rapid iteration of monitoring infrastructure focused on awareness of data pipeline problems both in telemetry flows and in production
learning cycles.
CEO Statement -
Our business model relies on our patents, analytics and dynamic machine learning. Our inexpensive hardware is organized to be highly reliable,
which gives us cost advantages. We need to quickly stabilize our large distributed data pipelines to meet our reliability and capacity
commitments.
CTO Statement -
Our public cloud services must operate as advertised. We need resources that scale and keep our data secure. We also need environments in
which our data scientists can carefully study and quickly adapt our models. Because we rely on automation to process our data, we also need our
CFO Statement -
The project is too large for us to maintain the hardware and software required for the data and analysis. Also, we cannot afford to staff an
operations team to monitor so many data feeds, so we will rely on automation and infrastructure. Google Cloud's machine learning will allow our
quantitative researchers to work on our high-value problems instead of problems with our data pipelines.
Given the record streams MJTelco is interested in ingesting per day, they are concerned about the cost of Google BigQuery increasing. MJTelco
asks you to provide a design solution. They require a single large data table called tracking_table. Additionally, they want to minimize the cost of
daily queries while performing fine-grained analysis of each day's events. They also want to use streaming ingestion. What should you do?
B. Create a partitioned table called tracking_table and include a TIMESTAMP column. Most Voted
C. Create sharded tables for each day following the pattern tracking_table_YYYYMMDD.
D. Create a table called tracking_table with a TIMESTAMP column to represent the day.
Correct Answer: B
Company Overview -
Flowlogistic is a leading logistics and supply chain provider. They help businesses throughout the world manage their resources and transport
them to their final destination. The company has grown rapidly, expanding their offerings to include rail, truck, aircraft, and oceanic shipping.
Company Background -
The company started as a regional trucking company, and then expanded into other logistics markets. Because they have not updated their
infrastructure, managing and tracking orders and shipments has become a bottleneck. To improve operations, Flowlogistic developed proprietary
technology for tracking shipments in real time at the parcel level. However, they are unable to deploy it because their technology stack, based on
Apache Kafka, cannot support the processing volume. In addition, Flowlogistic wants to further analyze their orders and shipments to determine
Solution Concept -
✑ Use their proprietary technology in a real-time inventory-tracking system that indicates the location of their loads
✑ Perform analytics on all their orders and shipment logs, which contain both structured and unstructured data, to determine how best to deploy
resources, and which markets to expand into. They also want to use predictive analytics to learn earlier when a shipment will be delayed.
✑ Databases
- 8 physical servers in 2 clusters
- 3 physical servers
- Batch servers
✑ Storage appliances
- iSCSI for virtual machine (VM) hosts
- Fibre Channel storage area network (FC SAN) - SQL server storage
✑ 20 miscellaneous servers
- Jenkins, monitoring, bastion hosts,
Business Requirements -
Technical Requirements -
We have grown so quickly that our inability to upgrade our infrastructure is really hampering further growth and efficiency. We are efficient at
moving shipments around the world, but we are inefficient at moving data around.
We need to organize our information so we can more easily understand where our customers are and what they are shipping.
CTO Statement -
IT has never been a priority for us, so as our data has grown, we have not invested enough in our technology. I have a good staff to manage IT, but
they are so busy managing our infrastructure that I cannot get them to do the things that really matter, such as organizing our data, building the
analytics, and figuring out how to implement the CFO's tracking technology.
CFO Statement -
Part of our competitive advantage is that we penalize ourselves for late shipments and deliveries. Knowing where our shipments are at all times
has a direct correlation to our bottom line and profitability. Additionally, I don't want to commit capital to building out a server environment.
Flowlogistic's management has determined that the current Apache Kafka servers cannot handle the data volume for their real-time inventory
tracking system.
You need to build a new system on Google Cloud Platform (GCP) that will feed the proprietary tracking software. The system must be able to
ingest data from a variety of global sources, process and query in real-time, and store the data reliably. Which combination of GCP products should you choose?
Correct Answer: A
After migrating ETL jobs to run on BigQuery, you need to verify that the output of the migrated jobs is the same as the output of the original.
You've loaded a table containing the output of the original job and want to compare the contents with output from the migrated job to show that
they are identical. The tables do not contain a primary key column that would enable you to join them together for comparison.
A. Select random samples from the tables using the RAND() function and compare the samples.
B. Select random samples from the tables using the HASH() function and compare the samples.
C. Use a Dataproc cluster and the BigQuery Hadoop connector to read the data from each table and calculate a hash from non-timestamp
columns of the table after sorting. Compare the hashes of each table. Most Voted
D. Create stratified random samples using the OVER() function and compare equivalent samples from each table.
Correct Answer: C
You are a head of BI at a large enterprise company with multiple business units that each have different priorities and budgets. You use on-demand pricing for
BigQuery with a quota of 2K concurrent on-demand slots per project. Users at your organization sometimes don't get slots to execute their query
and you need to correct this. You'd like to avoid introducing new projects to your account.
C. Switch to flat-rate pricing and establish a hierarchical priority model for your projects. Most Voted
D. Increase the amount of concurrent slots per project at the Quotas page at the Cloud Console.
Correct Answer: C
You have an Apache Kafka cluster on-prem with topics containing web application logs. You need to replicate the data to Google Cloud for
analysis in BigQuery and Cloud Storage. The preferred replication method is mirroring to avoid deployment of Kafka Connect plugins.
A. Deploy a Kafka cluster on GCE VM Instances. Configure your on-prem cluster to mirror your topics to the cluster running in GCE. Use a
Dataproc cluster or Dataflow job to read from Kafka and write to GCS. Most Voted
B. Deploy a Kafka cluster on GCE VM Instances with the Pub/Sub Kafka connector configured as a Sink connector. Use a Dataproc cluster or
C. Deploy the Pub/Sub Kafka connector to your on-prem Kafka cluster and configure Pub/Sub as a Source connector. Use a Dataflow job to
D. Deploy the Pub/Sub Kafka connector to your on-prem Kafka cluster and configure Pub/Sub as a Sink connector. Use a Dataflow job to read
Correct Answer: A
You've migrated a Hadoop job from an on-prem cluster to Dataproc and GCS. Your Spark job is a complicated analytical workload that consists of
many shuffling operations, and the initial data are Parquet files (on average 200-400 MB each). You see some degradation in performance after
the migration to Dataproc, so you'd like to optimize for it. You need to keep in mind that your organization is very cost-sensitive, so you'd like to
continue using Dataproc on preemptibles (with 2 non-preemptible workers only) for this workload.
B. Switch to the TFRecord format (approx. 200 MB per file) instead of Parquet files.
C. Switch from HDDs to SSDs, copy initial data from GCS to HDFS, run the Spark job and copy results back to GCS.
D. Switch from HDDs to SSDs, override the preemptible VMs configuration to increase the boot disk size. Most Voted
Correct Answer: D
Your team is responsible for developing and maintaining ETLs in your company. One of your Dataflow jobs is failing because of some errors in the
input data, and you need to improve reliability of the pipeline (incl. being able to reprocess all failing data).
A. Add a filtering step to skip these types of errors in the future, extract erroneous rows from logs.
B. Add a try… catch block to your DoFn that transforms the data, extract erroneous rows from logs.
C. Add a try… catch block to your DoFn that transforms the data, write erroneous rows to Pub/Sub directly from the DoFn.
D. Add a try… catch block to your DoFn that transforms the data, use a sideOutput to create a PCollection that can be stored to Pub/Sub
Correct Answer: D
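A hedged Beam (Python) sketch of option D: a DoFn that catches errors and emits the failing elements to a tagged side output ("dead letter") so they can be reprocessed. The parsing logic and tag name are illustrative.

```python
# Hedged sketch: dead-letter pattern with a tagged side output.
import json
import apache_beam as beam
from apache_beam import pvalue

class ParseRow(beam.DoFn):
    DEAD_LETTER = "dead_letter"

    def process(self, element):
        try:
            yield json.loads(element)                # happy path
        except Exception:
            # Route the raw failing record to the dead-letter output.
            yield pvalue.TaggedOutput(self.DEAD_LETTER, element)

with beam.Pipeline() as p:
    results = (p
               | beam.Create(['{"id": 1}', "not-json"])
               | beam.ParDo(ParseRow()).with_outputs(
                     ParseRow.DEAD_LETTER, main="parsed"))

    results.parsed | "Good" >> beam.Map(print)
    # In the real pipeline this branch could publish to Pub/Sub or write to
    # storage so every failing row can be reprocessed later.
    results.dead_letter | "Bad" >> beam.Map(lambda e: print("dead letter:", e))
```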
You're training a model to predict housing prices based on an available dataset with real estate properties. Your plan is to train a fully connected
neural net, and you've discovered that the dataset contains latitude and longitude of the property. Real estate professionals have told you that the
location of the property is highly influential on price, so you'd like to engineer a feature that incorporates this physical dependency.
C. Create a feature cross of latitude and longitude, bucketize it at the minute level and use L1 regularization during optimization. Most Voted
D. Create a feature cross of latitude and longitude, bucketize it at the minute level and use L2 regularization during optimization.
Correct Answer: C
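To illustrate the idea in option C without TensorFlow specifics, here is a hedged, toy scikit-learn sketch: bucketize latitude and longitude (to arc-minutes), cross the buckets into one categorical feature, one-hot encode it, and fit with an L1 penalty so unused grid cells get zero weight. All data and parameters are synthetic.

```python
# Hedged illustration: bucketized lat/lon feature cross + L1 regularization.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
lat = rng.uniform(37.0, 38.0, 1000)
lon = rng.uniform(-122.5, -121.5, 1000)
price = 300_000 + 200_000 * (lat > 37.5) + rng.normal(0, 10_000, 1000)  # toy target

# Bucketize to arc-minutes and cross the two buckets into one feature.
lat_bucket = np.floor(lat * 60).astype(int)
lon_bucket = np.floor(lon * 60).astype(int)
cross = np.array([f"{a}_{b}" for a, b in zip(lat_bucket, lon_bucket)]).reshape(-1, 1)

X = OneHotEncoder(handle_unknown="ignore").fit_transform(cross)

# L1 (lasso) regularization drives the weights of unhelpful grid cells to zero.
model = Lasso(alpha=100.0)
model.fit(X, price)
print("non-zero grid cells:", np.count_nonzero(model.coef_))
```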
You are deploying MariaDB SQL databases on GCE VM Instances and need to configure monitoring and alerting. You want to collect metrics
including network connections, disk IO and replication status from MariaDB with minimal development effort and use StackDriver for dashboards
and alerts.
A. Install the OpenCensus Agent and create a custom metric collection application with a StackDriver exporter.
C. Install the StackDriver Logging Agent and configure fluentd in_tail plugin to read MariaDB logs.
D. Install the StackDriver Agent and configure the MySQL plugin. Most Voted
Correct Answer: D
You work for a bank. You have a labelled dataset that contains information on already granted loan applications and whether these applications
have defaulted. You have been asked to train a model to predict default rates for credit applicants.
B. Train a linear regression to predict a credit default risk score. Most Voted
C. Remove the bias from the data and collect applications that have been declined loans.
D. Match loan applicants with their social profiles to enable feature engineering.
Correct Answer: B
You need to migrate a 2TB relational database to Google Cloud Platform. You do not have the resources to significantly refactor the application
Which service do you select for storing and serving your data?
A. Cloud Spanner
B. Cloud Bigtable
C. Cloud Firestore
Correct Answer: D
You're using Bigtable for a real-time application, and you have a heavy load that is a mix of reads and writes. You've recently identified an additional
use case and need to perform an hourly analytical job to calculate certain statistics across the whole database. You need to ensure both the
A. Export Bigtable dump to GCS and run your analytical job on top of the exported files.
B. Add a second cluster to an existing instance with a multi-cluster routing, use live-traffic app profile for your regular workload and batch-
C. Add a second cluster to an existing instance with a single-cluster routing, use live-traffic app profile for your regular workload and batch-
D. Increase the size of your existing cluster twice and execute your analytics workload on your new resized cluster.
Correct Answer: C
You are designing an Apache Beam pipeline to enrich data from Cloud Pub/Sub with static reference data from BigQuery. The reference data is
small enough to fit in memory on a single worker. The pipeline should write enriched results to BigQuery for analysis. Which job type and transforms should this pipeline use?
Correct Answer: C
You have a data pipeline that writes data to Cloud Bigtable using well-designed row keys. You want to monitor your pipeline to determine when to
increase the size of your Cloud Bigtable cluster. Which two actions can you take to accomplish this? (Choose two.)
A. Review Key Visualizer metrics. Increase the size of the Cloud Bigtable cluster when the Read pressure index is above 100.
B. Review Key Visualizer metrics. Increase the size of the Cloud Bigtable cluster when the Write pressure index is above 100.
C. Monitor the latency of write operations. Increase the size of the Cloud Bigtable cluster when there is a sustained increase in write latency.
Most Voted
D. Monitor storage utilization. Increase the size of the Cloud Bigtable cluster when utilization increases above 70% of max capacity.
Most Voted
E. Monitor latency of read operations. Increase the size of the Cloud Bigtable cluster if read operations take longer than 100 ms.
Correct Answer: CD
You want to analyze hundreds of thousands of social media posts daily at the lowest cost and with the fewest steps.
✑ You will batch-load the posts once per day and run them through the Cloud Natural Language API.
✑ You will extract topics and sentiment from the posts.
✑ You must store the raw posts for archiving and reprocessing.
✑ You will create dashboards to be shared with people both inside and outside your organization.
You need to store both the data extracted from the API to perform analysis as well as the raw social media posts for historical archiving. What should you do?
A. Store the social media posts and the data extracted from the API in BigQuery.
B. Store the social media posts and the data extracted from the API in Cloud SQL.
C. Store the raw social media posts in Cloud Storage, and write the data extracted from the API into BigQuery. Most Voted
D. Feed the social media posts into the API directly from the source, and write the extracted data from the API into BigQuery.
Correct Answer: C
You store historic data in Cloud Storage. You need to perform analytics on the historic data. You want to use a solution to detect invalid data
entries and perform data transformations that will not require programming or knowledge of SQL.
A. Use Cloud Dataflow with Beam to detect errors and perform transformations.
B. Use Cloud Dataprep with recipes to detect errors and perform transformations. Most Voted
C. Use Cloud Dataproc with a Hadoop job to detect errors and perform transformations.
D. Use federated tables in BigQuery with queries to detect errors and perform transformations.
Correct Answer: B
Your company needs to upload their historic data to Cloud Storage. The security rules don't allow access from external IPs to their on-premises
resources. After an initial upload, they will add new data from existing on-premises applications every day. What should they do?
D. Install an FTP server on a Compute Engine VM to receive the files and move them to Cloud Storage.
Correct Answer: A
You have a query that filters a BigQuery table using a WHERE clause on timestamp and ID columns. By using bq query --dry_run you learn that the
query triggers a full scan of the table, even though the filter on timestamp and ID selects a tiny fraction of the overall data. You want to reduce the
amount of data scanned by BigQuery with minimal changes to existing SQL queries. What should you do?
C. Recreate the table with a partitioning column and clustering column. Most Voted
D. Use the bq query --maximum_bytes_billed flag to restrict the number of bytes billed.
Correct Answer: C
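A hedged sketch of option C as a standard SQL DDL statement; the project, dataset, and column names (event_ts, id) are hypothetical stand-ins for the table's timestamp and ID columns.

```python
# Hedged sketch: recreate the table partitioned on the timestamp column and
# clustered on the ID column, so the existing WHERE clauses prune data.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

client.query("""
CREATE TABLE `my-project.mydataset.mytable_new`
PARTITION BY DATE(event_ts)
CLUSTER BY id AS
SELECT * FROM `my-project.mydataset.mytable`
""").result()
# Existing queries keep their WHERE event_ts ... AND id ... filters unchanged;
# BigQuery now scans only the matching partitions and clustered blocks.
```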
You have a requirement to insert minute-resolution data from 50,000 sensors into a BigQuery table. You expect significant growth in data volume
and need the data to be available within 1 minute of ingestion for real-time analysis of aggregated trends. What should you do?
B. Use a Cloud Dataflow pipeline to stream data into the BigQuery table. Most Voted
Correct Answer: B
You need to copy millions of sensitive patient records from a relational database to BigQuery. The total size of the database is 10 TB. You need to
design a solution that is secure and time-efficient. What should you do?
A. Export the records from the database as an Avro file. Upload the file to GCS using gsutil, and then load the Avro file into BigQuery using the
B. Export the records from the database as an Avro file. Copy the file onto a Transfer Appliance and send it to Google, and then load the Avro
file into BigQuery using the BigQuery web UI in the GCP Console. Most Voted
C. Export the records from the database into a CSV file. Create a public URL for the CSV file, and then use Storage Transfer Service to move
the file to Cloud Storage. Load the CSV file into BigQuery using the BigQuery web UI in the GCP Console.
D. Export the records from the database as an Avro file. Create a public URL for the Avro file, and then use Storage Transfer Service to move
the file to Cloud Storage. Load the Avro file into BigQuery using the BigQuery web UI in the GCP Console.
Correct Answer: B
You need to create a near real-time inventory dashboard that reads the main inventory tables in your BigQuery data warehouse. Historical
inventory data is stored as inventory balances by item and location. You have several thousand updates to inventory every hour. You want to
maximize performance of the dashboard and ensure that the data is accurate. What should you do?
A. Leverage BigQuery UPDATE statements to update the inventory balances as they are changing.
B. Partition the inventory balance table by item to reduce the amount of data scanned with each inventory update.
C. Use BigQuery streaming to stream changes into a daily inventory movement table. Calculate balances in a view that joins it to the
historical inventory balance table. Update the inventory balance table nightly. Most Voted
D. Use the BigQuery bulk loader to batch load inventory changes into a daily inventory movement table. Calculate balances in a view that joins
it to the historical inventory balance table. Update the inventory balance table nightly.
Correct Answer: C
You have data stored in BigQuery. The data in the BigQuery dataset must be highly available. You need to define a storage, backup, and recovery
strategy for this data that minimizes cost. How should you configure the BigQuery tables to have a recovery point objective (RPO) of 30 days?
A. Set the BigQuery dataset to be regional. In the event of an emergency, use a point-in-time snapshot to recover the data.
B. Set the BigQuery dataset to be regional. Create a scheduled query to make copies of the data to tables suffixed with the time of the backup.
C. Set the BigQuery dataset to be multi-regional. In the event of an emergency, use a point-in-time snapshot to recover the data. Most Voted
D. Set the BigQuery dataset to be multi-regional. Create a scheduled query to make copies of the data to tables suffixed with the time of the
backup. In the event of an emergency, use the backup copy of the table.
Correct Answer: C
You used Dataprep to create a recipe on a sample of data in a BigQuery table. You want to reuse this recipe on a daily upload of data with the
same schema, after the load job with variable execution time completes. What should you do?
B. Create an App Engine cron job to schedule the execution of the Dataprep job.
C. Export the recipe as a Dataprep template, and create a job in Cloud Scheduler.
D. Export the Dataprep job as a Dataflow template, and incorporate it into a Composer job. Most Voted
Correct Answer: D
You want to automate execution of a multi-step data pipeline running on Google Cloud. The pipeline includes Dataproc and Dataflow jobs that
have multiple dependencies on each other. You want to use managed services where possible, and the pipeline will run every day. Which tool should you use?
A. cron
C. Cloud Scheduler
Correct Answer: B
You are managing a Cloud Dataproc cluster. You need to make a job run faster while minimizing costs, without losing work in progress on your clusters. What should you do?
B. Increase the cluster size with preemptible worker nodes, and configure them to forcefully decommission.
C. Increase the cluster size with preemptible worker nodes, and use Cloud Stackdriver to trigger a script to preserve work.
D. Increase the cluster size with preemptible worker nodes, and configure them to use graceful decommissioning. Most Voted
Correct Answer: D
You work for a shipping company that uses handheld scanners to read shipping labels. Your company has strict data privacy standards that
require scanners to only transmit tracking numbers when events are sent to Kafka topics. A recent software update caused the scanners to
accidentally transmit recipients' personally identifiable information (PII) to analytics systems, which violates user privacy rules. You want to
quickly build a scalable solution using cloud-native managed services to prevent exposure of PII to the analytics systems. What should you do?
A. Create an authorized view in BigQuery to restrict access to tables with sensitive data.
B. Install a third-party data validation tool on Compute Engine virtual machines to check the incoming data for sensitive information.
C. Use Cloud Logging to analyze the data passed through the total pipeline to identify transactions that may contain sensitive information.
D. Build a Cloud Function that reads the topics and makes a call to the Cloud Data Loss Prevention (Cloud DLP) API. Use the tagging and
confidence levels to either pass or quarantine the data in a bucket for review. Most Voted
Correct Answer: D
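A hedged sketch of the inspection step in option D, using the Cloud DLP API from Python (as a Cloud Function might). The project name, info types, and likelihood threshold are illustrative assumptions.

```python
# Hedged sketch: inspect each message with Cloud DLP and quarantine any PII.
from google.cloud import dlp_v2

dlp = dlp_v2.DlpServiceClient()
PARENT = "projects/my-project"
INSPECT_CONFIG = {
    "info_types": [{"name": "EMAIL_ADDRESS"}, {"name": "PERSON_NAME"}],
    "min_likelihood": dlp_v2.Likelihood.POSSIBLE,
}

def is_clean(payload: str) -> bool:
    response = dlp.inspect_content(
        request={
            "parent": PARENT,
            "inspect_config": INSPECT_CONFIG,
            "item": {"value": payload},
        }
    )
    # No findings -> the record may flow on to analytics; otherwise quarantine
    # it in a review bucket.
    return len(response.result.findings) == 0

print(is_clean("tracking: 1Z999AA10123456784"))
print(is_clean("deliver to jane.doe@example.com"))
```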
You have developed three data processing jobs. One executes a Cloud Dataflow pipeline that transforms data uploaded to Cloud Storage and
writes results to
BigQuery. The second ingests data from on-premises servers and uploads it to Cloud Storage. The third is a Cloud Dataflow pipeline that gets
information from third-party data providers and uploads the information to Cloud Storage. You need to be able to schedule and monitor the
execution of these three workflows and manually execute them when needed. What should you do?
A. Create a Direct Acyclic Graph in Cloud Composer to schedule and monitor the jobs. Most Voted
B. Use Stackdriver Monitoring and set up an alert with a Webhook notification to trigger the jobs.
C. Develop an App Engine application to schedule and request the status of the jobs using GCP API calls.
D. Set up cron jobs in a Compute Engine instance to schedule and monitor the pipelines using GCP API calls.
Correct Answer: A
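A hedged sketch of option A: a Cloud Composer (Airflow) DAG that schedules the three jobs, shows their status, and allows manual runs. The task commands are placeholders for the real Dataflow and ingestion jobs.

```python
# Hedged sketch: an Airflow DAG in Cloud Composer orchestrating the three jobs.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_ingest_and_transform",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest_onprem = BashOperator(
        task_id="ingest_onprem_to_gcs",
        bash_command="./ingest_onprem.sh",                   # placeholder script
    )
    third_party = BashOperator(
        task_id="third_party_to_gcs",
        bash_command="python run_dataflow_third_party.py",   # placeholder
    )
    transform = BashOperator(
        task_id="gcs_to_bigquery_dataflow",
        bash_command="python run_dataflow_transform.py",     # placeholder
    )
    # The transform runs after both upload jobs; Airflow provides scheduling,
    # monitoring, and on-demand manual triggering from its UI.
    [ingest_onprem, third_party] >> transform
```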
You have Cloud Functions written in Node.js that pull messages from Cloud Pub/Sub and send the data to BigQuery. You observe that the
message processing rate on the Pub/Sub topic is orders of magnitude higher than anticipated, but there is no error logged in Cloud Logging. What
are the two most likely causes of this problem? (Choose two.)
C. Error handling in the subscriber code is not handling run-time errors properly. Most Voted
E. The subscriber code does not acknowledge the messages that it pulls. Most Voted
Correct Answer: CE
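A minimal sketch of cause E: if the subscriber callback never acknowledges messages, Pub/Sub redelivers them and the processing rate balloons without any logged error. Project and subscription names are hypothetical.

```python
# Hedged sketch: a streaming pull subscriber that acks each message.
from concurrent.futures import TimeoutError
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "bq-writer-sub")

def callback(message: pubsub_v1.subscriber.message.Message) -> None:
    try:
        # ... insert message.data into BigQuery here ...
        message.ack()          # without this, the message is redelivered
    except Exception:
        message.nack()         # let Pub/Sub retry genuinely failed messages

future = subscriber.subscribe(subscription_path, callback=callback)
with subscriber:
    try:
        future.result(timeout=60)   # run briefly for demonstration purposes
    except TimeoutError:
        future.cancel()
```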
You are creating a new pipeline in Google Cloud to stream IoT data from Cloud Pub/Sub through Cloud Dataflow to BigQuery. While previewing the
data, you notice that roughly 2% of the data appears to be corrupt. You need to modify the Cloud Dataflow pipeline to filter out this corrupt data.
B. Add a ParDo transform in Cloud Dataflow to discard corrupt elements. Most Voted
C. Add a Partition transform in Cloud Dataflow to separate valid data from corrupt data.
D. Add a GroupByKey transform in Cloud Dataflow to group all of the valid data together and discard the rest.
Correct Answer: B
You have historical data covering the last three years in BigQuery and a data pipeline that delivers new data to BigQuery daily. You have noticed
that when the Data Science team runs a query filtered on a date column and limited to 30-90 days of data, the query scans the entire table. You also noticed
that your bill is increasing more quickly than you expected. You want to resolve the issue as cost-effectively as possible while maintaining the
A. Re-create the tables using DDL. Partition the tables by a column containing a TIMESTAMP or DATE Type. Most Voted
B. Recommend that the Data Science team export the table to a CSV file on Cloud Storage and use Cloud Datalab to explore the data by
C. Modify your pipeline to maintain the last 30-90 days of data in one table and the longer history in a different table to minimize full table
D. Write an Apache Beam pipeline that creates a BigQuery table per day. Recommend that the Data Science team use wildcards on the table
Correct Answer: A
You operate a logistics company, and you want to improve event delivery reliability for vehicle-based sensors. You operate small data centers
around the world to capture these events, but leased lines that provide connectivity from your event collection infrastructure to your event
processing infrastructure are unreliable, with unpredictable latency. You want to address this issue in the most cost-effective way. What should
you do?
B. Have the data acquisition devices publish data to Cloud Pub/Sub. Most Voted
C. Establish a Cloud Interconnect between all remote data centers and Google.
D. Write a Cloud Dataflow pipeline that aggregates all data in session windows.
Correct Answer: B
You are a retailer that wants to integrate your online sales capabilities with different in-home assistants, such as Google Home. You need to
interpret customer voice commands and issue an order to the backend systems. Which solution should you choose?
A. Speech-to-Text API
Correct Answer: C
Your company has a hybrid cloud initiative. You have a complex data pipeline that moves data between cloud provider services and leverages
services from each of the cloud providers. Which cloud-native service should you use to orchestrate the entire pipeline?
A. Cloud Dataflow
C. Cloud Dataprep
D. Cloud Dataproc
Correct Answer: B
You use a dataset in BigQuery for analysis. You want to provide third-party companies with access to the same dataset. You need to keep the
costs of data sharing low and ensure that the data is current. Which solution should you choose?
A. Use Analytics Hub to control data access, and provide third party companies with access to the dataset. Most Voted
B. Use Cloud Scheduler to export the data on a regular basis to Cloud Storage, and provide third-party companies with access to the bucket.
C. Create a separate dataset in BigQuery that contains the relevant data to share, and provide third-party companies with access to the new
dataset.
D. Create a Dataflow job that reads the data in frequent time intervals, and writes it to the relevant BigQuery dataset or Cloud Storage bucket
Correct Answer: A
Your company is in the process of migrating its on-premises data warehousing solutions to BigQuery. The existing data warehouse uses trigger-
based change data capture (CDC) to apply updates from multiple transactional database sources on a daily basis. With BigQuery, your company
hopes to improve its handling of CDC so that changes to the source systems are available to query in BigQuery in near-real time using log-based CDC streams, while also
optimizing for the performance of applying changes to the data warehouse. Which two steps should they take to ensure that changes are
available in the BigQuery reporting table with minimal latency while reducing compute overhead? (Choose two.)
A. Perform a DML INSERT, UPDATE, or DELETE to replicate each individual CDC record in real time directly on the reporting table.
B. Insert each new CDC record and corresponding operation type to a staging table in real time. Most Voted
D. Periodically use a DML MERGE to perform several DML INSERT, UPDATE, and DELETE operations at the same time on the reporting table.
Most Voted
E. Insert each new CDC record and corresponding operation type in real time to the reporting table, and use a materialized view to expose only
Correct Answer: BD
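A minimal sketch of the staging-table-plus-periodic-MERGE pattern described by options B and D. The dataset, table, and column names (dwh.staging_cdc, dwh.reporting, id, op, payload, change_ts) are hypothetical.

```python
# Sketch: stream CDC records into a staging table, then periodically fold them into the
# reporting table with a single MERGE to keep DML overhead low.
from google.cloud import bigquery

client = bigquery.Client()

merge_sql = """
MERGE dwh.reporting AS t
USING (
  -- keep only the latest staged change per key
  SELECT * EXCEPT(rn) FROM (
    SELECT *, ROW_NUMBER() OVER (PARTITION BY id ORDER BY change_ts DESC) AS rn
    FROM dwh.staging_cdc
  )
  WHERE rn = 1
) AS s
ON t.id = s.id
WHEN MATCHED AND s.op = 'DELETE' THEN DELETE
WHEN MATCHED THEN UPDATE SET payload = s.payload, change_ts = s.change_ts
WHEN NOT MATCHED AND s.op != 'DELETE' THEN
  INSERT (id, payload, change_ts) VALUES (s.id, s.payload, s.change_ts)
"""
client.query(merge_sql).result()  # schedule this job to run every few minutes
```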
You are designing a data processing pipeline. The pipeline must be able to scale automatically as load increases. Messages must be processed at
least once and must be ordered within windows of 1 hour. How should you design the solution?
A. Use Apache Kafka for message ingestion and use Cloud Dataproc for streaming analysis.
B. Use Apache Kafka for message ingestion and use Cloud Dataflow for streaming analysis.
C. Use Cloud Pub/Sub for message ingestion and Cloud Dataproc for streaming analysis.
D. Use Cloud Pub/Sub for message ingestion and Cloud Dataflow for streaming analysis. Most Voted
Correct Answer: D
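A minimal Apache Beam (Python SDK) sketch of option D: Pub/Sub ingestion with Dataflow and 1-hour fixed windows. The subscription path, message format, and key-extraction logic are hypothetical placeholders.

```python
# Sketch: autoscaling streaming pipeline with at-least-once Pub/Sub delivery and
# 1-hour windows for ordered aggregation within each window.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms import window

options = PipelineOptions(streaming=True)  # run with the DataflowRunner in practice

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromPubSub(subscription="projects/my-project/subscriptions/my-sub")
        | "Decode" >> beam.Map(lambda b: b.decode("utf-8"))
        | "Window" >> beam.WindowInto(window.FixedWindows(60 * 60))  # 1-hour windows
        | "KeyByFirstField" >> beam.Map(lambda msg: (msg.split(",")[0], 1))  # hypothetical CSV payload
        | "CountPerKey" >> beam.CombinePerKey(sum)
        | "Emit" >> beam.Map(print)
    )
```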
You need to set access to BigQuery for different departments within your company. Your solution should comply with the following requirements:
A. Create a dataset for each department. Assign the department leads the role of OWNER, and assign the data analysts the role of WRITER on
their dataset.
B. Create a dataset for each department. Assign the department leads the role of WRITER, and assign the data analysts the role of READER on their dataset.
C. Create a table for each department. Assign the department leads the role of Owner, and assign the data analysts the role of Editor on the
D. Create a table for each department. Assign the department leads the role of Editor, and assign the data analysts the role of Viewer on the
Correct Answer: B
You operate a database that stores stock trades and an application that retrieves average stock price for a given company over an adjustable
window of time. The data is stored in Cloud Bigtable where the datetime of the stock trade is the beginning of the row key. Your application has
thousands of concurrent users, and you notice that performance is starting to degrade as more stocks are added. What should you do to improve performance?
A. Change the row key syntax in your Cloud Bigtable table to begin with the stock symbol. Most Voted
B. Change the row key syntax in your Cloud Bigtable table to begin with a random number per second.
C. Change the data pipeline to use BigQuery for storing stock trades, and update your application.
D. Use Cloud Dataflow to write a summary of each day's stock trades to an Avro file on Cloud Storage. Update your application to read from
Correct Answer: A
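A minimal sketch of the row-key change in option A: leading with the stock symbol spreads reads and writes across tablets instead of hotspotting on recent timestamps. The project, instance, table, and column-family names are hypothetical, and the "trade" column family is assumed to already exist.

```python
# Sketch: build Bigtable row keys that start with the stock symbol, followed by a timestamp.
import datetime
from google.cloud import bigtable

client = bigtable.Client(project="my-project")
table = client.instance("trading-instance").table("stock_trades")

def write_trade(symbol: str, price: float, ts: datetime.datetime) -> None:
    # e.g. b"GOOG#20240101120000000000" -- symbol prefix distributes load across nodes
    row_key = f"{symbol}#{ts.strftime('%Y%m%d%H%M%S%f')}".encode()
    row = table.direct_row(row_key)
    row.set_cell("trade", b"price", str(price).encode(), timestamp=ts)
    row.commit()

write_trade("GOOG", 142.17, datetime.datetime.utcnow())
```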
You are operating a Cloud Dataflow streaming pipeline. The pipeline aggregates events from a Cloud Pub/Sub subscription source, within a
window, and sinks the resulting aggregation to a Cloud Storage bucket. The source has consistent throughput. You want to set up an alert in
Stackdriver to ensure that the pipeline is processing data. Which Stackdriver alerts should you create?
A. An alert based on a decrease of subscription/num_undelivered_messages for the source and a rate of change increase of
B. An alert based on an increase of subscription/num_undelivered_messages for the source and a rate of change decrease of
C. An alert based on a decrease of instance/storage/used_bytes for the source and a rate of change increase of subscription/
D. An alert based on an increase of instance/storage/used_bytes for the source and a rate of change decrease of subscription/
Correct Answer: B
You currently have a single on-premises Kafka cluster in a data center in the us-east region that is responsible for ingesting messages from IoT
devices globally.
Because large parts of the globe have poor internet connectivity, messages sometimes batch at the edge, come in all at once, and cause a spike in
load on your
Kafka cluster. This is becoming difficult to manage and prohibitively expensive. What is the Google-recommended cloud native architecture for
this scenario?
A. Edge TPUs as sensor devices for storing and transmitting the messages.
B. Cloud Dataflow connected to the Kafka cluster to scale the processing of incoming messages.
C. An IoT gateway connected to Cloud Pub/Sub, with Cloud Dataflow to read and process the messages from Cloud Pub/Sub. Most Voted
D. A Kafka cluster virtualized on Compute Engine in us-east with Cloud Load Balancing to connect to the devices around the world.
Correct Answer: C
You decided to use Cloud Datastore to ingest vehicle telemetry data in real time. You want to build a storage system that will account for the long-
term data growth, while keeping the costs low. You also want to create snapshots of the data periodically, so that you can make a point-in-time
(PIT) recovery, or clone a copy of the data for Cloud Datastore in a different environment. You want to archive these snapshots for a long time.
Which two methods can you use? (Choose two.)
A. Use managed export, and store the data in a Cloud Storage bucket using Nearline or Coldline class. Most Voted
B. Use managed export, and then import to Cloud Datastore in a separate project under a unique namespace reserved for that export.
Most Voted
C. Use managed export, and then import the data into a BigQuery table created just for that export, and delete temporary export files.
D. Write an application that uses Cloud Datastore client libraries to read all the entities. Treat each entity as a BigQuery table row via BigQuery
streaming insert. Assign an export timestamp for each export, and attach it as an extra column for each row. Make sure that the BigQuery
E. Write an application that uses Cloud Datastore client libraries to read all the entities. Format the exported data into a JSON file. Apply
Correct Answer: AB
You need to create a data pipeline that copies time-series transaction data so that it can be queried from within BigQuery by your data science team.
Every hour, thousands of transactions are updated with a new status. The size of the initial dataset is 1.5 PB, and it will grow by 3 TB per day. The
data is heavily structured, and your data science team will build machine learning models based on this data. You want to maximize performance
and usability for your data science team. Which two strategies should you adopt? (Choose two.)
D. Develop a data pipeline where status updates are appended to BigQuery instead of updated. Most Voted
E. Copy a daily snapshot of transaction data to Cloud Storage and store it as an Avro file. Use BigQuery's support for external data sources to
query.
Correct Answer: AD
You are designing a cloud-native historical data processing system to meet the following conditions:
✑ The data being analyzed is in CSV, Avro, and PDF formats and will be accessed by multiple analysis tools including Dataproc, BigQuery, and
Compute
Engine.
A. Create a Dataproc cluster with high availability. Store the data in HDFS, and perform analysis as needed.
B. Store the data in BigQuery. Access the data using the BigQuery Connector on Dataproc and Compute Engine.
C. Store the data in a regional Cloud Storage bucket. Access the bucket directly using Dataproc, BigQuery, and Compute Engine.
D. Store the data in a multi-regional Cloud Storage bucket. Access the data directly using Dataproc, BigQuery, and Compute Engine. Most Voted
Correct Answer: D
You have a petabyte of analytics data and need to design a storage and processing platform for it. You must be able to perform data warehouse-
style analytics on the data in Google Cloud and expose the dataset as files for batch analysis tools in other cloud providers. What should you do?
C. Store the full dataset in BigQuery, and store a compressed copy of the data in a Cloud Storage bucket. Most Voted
D. Store the warm data as files in Cloud Storage, and store the active data in BigQuery. Keep this ratio as 80% warm and 20% active.
Correct Answer: C
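A minimal sketch of option C: keep the authoritative dataset in BigQuery and export a compressed copy to Cloud Storage so batch tools in other clouds can read it as files. The project, table, and bucket names are hypothetical.

```python
# Sketch: export a BigQuery table to compressed Avro files in Cloud Storage.
from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.ExtractJobConfig(
    destination_format=bigquery.DestinationFormat.AVRO,
    compression=bigquery.Compression.SNAPPY,
)
extract_job = client.extract_table(
    "my-project.analytics.events",
    "gs://my-archive-bucket/events/part-*.avro",  # wildcard shards large exports
    job_config=job_config,
)
extract_job.result()  # wait for the export to finish
```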
You work for a manufacturing company that sources up to 750 different components, each from a different supplier. You've collected a labeled
dataset that has on average 1000 examples for each unique component. Your team wants to implement an app to help warehouse workers
recognize incoming components based on a photo of the component. You want to implement the first working version of this app (as a Proof-Of-Concept). What should you do?
A. Use Cloud Vision AutoML with the existing dataset. Most Voted
D. Train your own image recognition model leveraging transfer learning techniques.
Correct Answer: A
You are working on a niche product in the image recognition domain. Your team has developed a model that is dominated by custom C++
TensorFlow ops your team has implemented. These ops are used inside your main training loop and are performing bulky matrix multiplications. It
currently takes up to several days to train a model. You want to decrease this time significantly and keep the cost low by using an accelerator on Google Cloud. What should you do?
B. Use Cloud TPUs after implementing GPU kernel support for your custom ops.
C. Use Cloud GPUs after implementing GPU kernel support for your custom ops. Most Voted
D. Stay on CPUs, and increase the size of the cluster you're training your model on.
Correct Answer: C
You work on a regression problem in a natural language processing domain, and you have 100M labeled examples in your dataset. You have
randomly shuffled your data and split your dataset into train and test samples (in a 90/10 ratio). After you trained the neural network and
evaluated your model on a test set, you discover that the root-mean-squared error (RMSE) of your model is twice as high on the train set as on the
test set. How should you improve the performance of your model?
B. Try to collect more data and increase the size of your dataset.
C. Try out regularization techniques (e.g., dropout or batch normalization) to avoid overfitting.
D. Increase the complexity of your model by, e.g., introducing an additional layer or increasing the size of the vocabularies or n-grams used.
Most Voted
Correct Answer: D
You use BigQuery as your centralized analytics platform. New data is loaded every day, and an ETL pipeline modifies the original data and
prepares it for the final users. This ETL pipeline is regularly modified and can generate errors, but sometimes the errors are detected only after 2
weeks. You need to provide a method to recover from these errors, and your backups should be optimized for storage costs. How should you organize your backups?
A. Organize your data in a single table, export, and compress and store the BigQuery data in Cloud Storage.
B. Organize your data in separate tables for each month, and export, compress, and store the data in Cloud Storage. Most Voted
C. Organize your data in separate tables for each month, and duplicate your data on a separate dataset in BigQuery.
D. Organize your data in separate tables for each month, and use snapshot decorators to restore the table to a time prior to the corruption.
Correct Answer: B
The marketing team at your organization provides regular updates of a segment of your customer dataset. The marketing team has given you a
CSV with 1 million records that must be updated in BigQuery. When you use the UPDATE statement in BigQuery, you receive a quotaExceeded error. What should you do?
A. Reduce the number of records updated each day to stay within the BigQuery UPDATE DML statement limit.
B. Increase the BigQuery UPDATE DML statement limit in the Quota management section of the Google Cloud Platform Console.
C. Split the source CSV file into smaller CSV files in Cloud Storage to reduce the number of BigQuery UPDATE DML statements per BigQuery
job.
D. Import the new records from the CSV file into a new BigQuery table. Create a BigQuery job that merges the new records with the existing
records and writes the results to a new BigQuery table. Most Voted
Correct Answer: D
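A minimal sketch of option D's pattern: load the CSV into a staging table with a load job (not DML), then apply all changes with a single MERGE. The bucket, dataset, table, and column names are hypothetical, and the sketch merges in place rather than writing to a brand-new table.

```python
# Sketch: stage the CSV with a load job, then apply all 1 million updates in one MERGE,
# which avoids per-table UPDATE DML quota limits.
from google.cloud import bigquery

client = bigquery.Client()

load_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)
client.load_table_from_uri(
    "gs://marketing-drops/customer_updates.csv",
    "my-project.crm.customer_updates",
    job_config=load_config,
).result()

client.query("""
MERGE crm.customers AS t
USING crm.customer_updates AS s
ON t.customer_id = s.customer_id
WHEN MATCHED THEN UPDATE SET t.segment = s.segment
WHEN NOT MATCHED THEN INSERT (customer_id, segment) VALUES (s.customer_id, s.segment)
""").result()
```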
As your organization expands its usage of GCP, many teams have started to create their own projects. Projects are further multiplied to
accommodate different stages of deployments and target audiences. Each project requires unique access control configurations. The central IT
Furthermore, data from Cloud Storage buckets and BigQuery datasets must be shared for use in other projects in an ad hoc way. You want to
simplify access control management by minimizing the number of policies. Which two steps should you take? (Choose two.)
B. Introduce resource hierarchy to leverage access control policy inheritance. Most Voted
C. Create distinct groups for various teams, and specify groups in Cloud IAM policies. Most Voted
D. Only use service accounts when sharing data for Cloud Storage buckets and BigQuery datasets.
E. For each Cloud Storage bucket or BigQuery dataset, decide which projects need access. Find all the active members who have access to
these projects, and create a Cloud IAM policy to grant access to all these users.
Correct Answer: BC
Your United States-based company has created an application for assessing and responding to user actions. The primary table's data volume
grows by 250,000 records per second. Many third parties use your application's APIs to build the functionality into their own frontend applications.
B. Implement Cloud Spanner with the leader in North America and read-only replicas in Asia and Europe. Most Voted
C. Implement Cloud SQL for PostgreSQL with the master in North America and read replicas in Asia and Europe.
D. Implement Bigtable with the primary cluster in North America and secondary clusters in Asia and Europe.
Correct Answer: B
A data scientist has created a BigQuery ML model and asks you to create an ML pipeline to serve predictions. You have a REST API application
with the requirement to serve predictions for an individual user ID with latency under 100 milliseconds. You use the following query to generate
predictions: SELECT predicted_label, user_id FROM ML.PREDICT(MODEL 'dataset.model', TABLE user_features). How should you create the ML
pipeline?
A. Add a WHERE clause to the query, and grant the BigQuery Data Viewer role to the application service account.
B. Create an Authorized View with the provided query. Share the dataset that contains the view with the application service account.
C. Create a Dataflow pipeline using BigQueryIO to read results from the query. Grant the Dataflow Worker role to the application service
account.
D. Create a Dataflow pipeline using BigQueryIO to read predictions for all users from the query. Write the results to Bigtable using BigtableIO.
Grant the Bigtable Reader role to the application service account so that the application can read predictions for individual users from Bigtable.
Correct Answer: D
You are building an application to share financial market data with consumers, who will receive data feeds. Data is collected from the markets in
real time.
Correct Answer: B
You are building a new application that you need to collect data from in a scalable way. Data arrives continuously from the application throughout
the day, and you expect to generate approximately 150 GB of JSON data per day by the end of the year. Your requirements are:
A. Create an application that provides an API. Write a tool to poll the API and write data to Cloud Storage as gzipped JSON files.
B. Create an application that writes to a Cloud SQL database to store the data. Set up periodic exports of the database to write to Cloud
C. Create an application that publishes events to Cloud Pub/Sub, and create Spark jobs on Cloud Dataproc to convert the JSON data to Avro
D. Create an application that publishes events to Cloud Pub/Sub, and create a Cloud Dataflow pipeline that transforms the JSON event
payloads to Avro, writing the data to Cloud Storage and BigQuery. Most Voted
Correct Answer: D
You are running a pipeline in Dataflow that receives messages from a Pub/Sub topic and writes the results to a BigQuery dataset in the EU.
Currently, your pipeline is located in europe-west4 and has a maximum of 3 workers, instance type n1-standard-1. You notice that during peak
periods, your pipeline is struggling to process records in a timely fashion, when all 3 workers are at maximum CPU utilization. Which two actions can you take to improve performance? (Choose two.)
B. Use a larger instance type for your Dataflow workers Most Voted
D. Create a temporary table in Bigtable that will act as a buffer for new data. Create a new step in your pipeline to write to this table first, and
E. Create a temporary table in Cloud Spanner that will act as a buffer for new data. Create a new step in your pipeline to write to this table first,
and then create a new pipeline to write from Cloud Spanner to BigQuery
Correct Answer: AB
You have a data pipeline with a Dataflow job that aggregates and writes time series metrics to Bigtable. You notice that data is slow to update in
Bigtable. This data feeds a dashboard used by thousands of users across the organization. You need to support additional concurrent users and
reduce the amount of time required to write the data. Which two actions should you take? (Choose two.)
B. Increase the maximum number of Dataflow workers by setting maxNumWorkers in PipelineOptions Most Voted
D. Modify your Dataflow pipeline to use the Flatten transform before writing to Bigtable
E. Modify your Dataflow pipeline to use the CoGroupByKey transform before writing to Bigtable
Correct Answer: BC
You have several Spark jobs that run on a Cloud Dataproc cluster on a schedule. Some of the jobs run in sequence, and some of the jobs run
concurrently. You need to automate this process. What should you do?
D. Create a Bash script that uses the Cloud SDK to create a cluster, execute jobs, and then tear down the cluster
Correct Answer: C
You are building a new data pipeline to share data between two different types of applications: job generators and job runners. Your solution
must scale to accommodate increases in usage and must accommodate the addition of new applications without negatively affecting the performance of existing applications. What should you do?
A. Create an API using App Engine to receive and send messages to the applications
B. Use a Cloud Pub/Sub topic to publish jobs, and use subscriptions to execute them Most Voted
C. Create a table on Cloud SQL, and insert and delete rows with the job information
D. Create a table on Cloud Spanner, and insert and delete rows with the job information
Correct Answer: B
You need to create a new transaction table in Cloud Spanner that stores product sales data. You are deciding what to use as a primary key. From a
D. The original order identification number from the sales system, which is a monotonically increasing integer
Correct Answer: C
Data Analysts in your company have the Cloud IAM Owner role assigned to them in their projects to allow them to work with multiple GCP
products in their projects. Your organization requires that all BigQuery data access logs be retained for 6 months. You need to ensure that only
audit personnel in your company can access the data access logs for all projects. What should you do?
A. Enable data access logs in each Data Analyst's project. Restrict access to Stackdriver Logging via Cloud IAM roles.
B. Export the data access logs via a project-level export sink to a Cloud Storage bucket in the Data Analysts' projects. Restrict access to the
C. Export the data access logs via a project-level export sink to a Cloud Storage bucket in a newly created projects for audit logs. Restrict
D. Export the data access logs via an aggregated export sink to a Cloud Storage bucket in a newly created project for audit logs. Restrict
access to the project that contains the exported logs. Most Voted
Correct Answer: D
Each analytics team in your organization is running BigQuery jobs in their own projects. You want to enable each team to monitor slot usage
B. Create a Cloud Monitoring dashboard based on the BigQuery metric slots/allocated_for_project Most Voted
C. Create a log export for each project, capture the BigQuery job execution logs, create a custom metric based on the totalSlotMs, and create
D. Create an aggregated log export at the organization level, capture the BigQuery job execution logs, create a custom metric based on the
totalSlotMs, and create a Cloud Monitoring dashboard based on the custom metric
Correct Answer: B
You are operating a streaming Cloud Dataflow pipeline. Your engineers have a new version of the pipeline with a different windowing algorithm
and triggering strategy. You want to update the running pipeline with the new version. You want to ensure that no data is lost during the update.
A. Update the Cloud Dataflow pipeline inflight by passing the --update option with the --jobName set to the existing job name
B. Update the Cloud Dataflow pipeline inflight by passing the --update option with the --jobName set to a new unique job name
C. Stop the Cloud Dataflow pipeline with the Cancel option. Create a new Cloud Dataflow job with the updated code
D. Stop the Cloud Dataflow pipeline with the Drain option. Create a new Cloud Dataflow job with the updated code Most Voted
Correct Answer: D
You need to move 2 PB of historical data from an on-premises storage appliance to Cloud Storage within six months, and your outbound network
capacity is constrained to 20 Mb/sec. How should you migrate this data to Cloud Storage?
A. Use Transfer Appliance to copy the data to Cloud Storage Most Voted
B. Use gsutil cp -J to compress the content being uploaded to Cloud Storage
C. Create a private URL for the historical data, and then use Storage Transfer Service to copy the data to Cloud Storage
D. Use trickle or ionice along with gsutil cp to limit the amount of bandwidth gsutil utilizes to less than 20 Mb/sec so it does not interfere with
Correct Answer: A
You receive data files in CSV format monthly from a third party. You need to cleanse this data, but every third month the schema of the files changes. What should you do?
A. Use Dataprep by Trifacta to build and maintain the transformation recipes, and execute them on a scheduled basis Most Voted
B. Load each month's CSV data into BigQuery, and write a SQL query to transform the data to a standard schema. Merge the transformed
C. Help the analysts write a Dataflow pipeline in Python to perform the transformation. The Python code should be stored in a revision control
D. Use Apache Spark on Dataproc to infer the schema of the CSV file before creating a Dataframe. Then implement the transformations in
Spark SQL before writing the data out to Cloud Storage and loading into BigQuery
Correct Answer: A
You want to migrate an on-premises Hadoop system to Cloud Dataproc. Hive is the primary tool in use, and the data format is Optimized Row
Columnar (ORC).
All ORC files have been successfully copied to a Cloud Storage bucket. You need to replicate some data to the cluster's local Hadoop Distributed
File System
(HDFS) to maximize performance. What are two ways to start using Hive in Cloud Dataproc? (Choose two.)
A. Run the gsutil utility to transfer all ORC files from the Cloud Storage bucket to HDFS. Mount the Hive tables locally.
B. Run the gsutil utility to transfer all ORC files from the Cloud Storage bucket to any node of the Dataproc cluster. Mount the Hive tables
locally.
C. Run the gsutil utility to transfer all ORC files from the Cloud Storage bucket to the master node of the Dataproc cluster. Then run the
Hadoop utility to copy them to HDFS. Mount the Hive tables from HDFS. Most Voted
D. Leverage Cloud Storage connector for Hadoop to mount the ORC files as external Hive tables. Replicate external Hive tables to the native
E. Load the ORC files into BigQuery. Leverage BigQuery connector for Hadoop to mount the BigQuery tables as external Hive tables. Replicate
Correct Answer: CD
You are implementing several batch jobs that must be executed on a schedule. These jobs have many interdependent steps that must be executed
in a specific order. Portions of the jobs involve executing shell scripts, running Hadoop jobs, and running queries in BigQuery. The jobs are
expected to run for many minutes up to several hours. If the steps fail, they must be retried a fixed number of times. Which service should you use to manage the execution of these jobs?
A. Cloud Scheduler
B. Cloud Dataflow
C. Cloud Functions
Correct Answer: D
You work for a shipping company that has distribution centers where packages move on delivery lines to route them properly. The company wants
to add cameras to the delivery lines to detect and track any visual damage to the packages in transit. You need to create a way to automate the
detection of damaged packages and flag them for human review in real time while the packages are in transit. Which solution should you choose?
A. Use BigQuery machine learning to be able to train the model at scale, so you can analyze the packages in batches.
B. Train an AutoML model on your corpus of images, and build an API around that model to integrate with the package tracking applications.
Most Voted
C. Use the Cloud Vision API to detect for damage, and raise an alert through Cloud Functions. Integrate the package tracking applications with
this function.
D. Use TensorFlow to create a model that is trained on your corpus of images. Create a Python notebook in Cloud Datalab that uses this model
Correct Answer: B
You are migrating your data warehouse to BigQuery. You have migrated all of your data into tables in a dataset. Multiple users from your
organization will be using the data. They should only see certain tables based on their team membership. How should you set user permissions?
A. Assign the users/groups data viewer access at the table level for each table Most Voted
B. Create SQL views for each team in the same dataset in which the data resides, and assign the users/groups data viewer access to the SQL
views
C. Create authorized views for each team in the same dataset in which the data resides, and assign the users/groups data viewer access to
D. Create authorized views for each team in datasets created for each team. Assign the authorized views data viewer access to the dataset in
which the data resides. Assign the users/groups data viewer access to the datasets in which the authorized views reside
Correct Answer: A
You want to build a managed Hadoop system as your data lake. The data transformation process is composed of a series of Hadoop jobs
executed in sequence.
To accomplish the design of separating storage from compute, you decided to use the Cloud Storage connector to store all input data, output
data, and intermediary data. However, you noticed that one Hadoop job runs very slowly with Cloud Dataproc, when compared with the on-
premises bare-metal Hadoop environment (8-core nodes with 100-GB RAM). Analysis shows that this particular Hadoop job is disk I/O intensive.
A. Allocate sufficient memory to the Hadoop cluster, so that the intermediary data of that particular Hadoop job can be held in memory
B. Allocate sufficient persistent disk space to the Hadoop cluster, and store the intermediate data of that particular Hadoop job on native
C. Allocate more CPU cores of the virtual machine instances of the Hadoop cluster so that the networking bandwidth for each instance can
scale up
D. Allocate additional network interface card (NIC), and configure link aggregation in the operating system to use the combined throughput
Correct Answer: B
You work for an advertising company, and you've developed a Spark ML model to predict click-through rates at advertisement blocks. You've been
developing everything at your on-premises data center, and now your company is migrating to Google Cloud. Your data center will be closing soon,
so a rapid lift-and-shift migration is necessary. However, the data you've been using will be migrated to BigQuery. You periodically
retrain your Spark ML models, so you need to migrate existing training pipelines to Google Cloud. What should you do?
C. Use Dataproc for training existing Spark ML models, but start reading data directly from BigQuery Most Voted
D. Spin up a Spark cluster on Compute Engine, and train Spark ML models on the data exported from BigQuery
Correct Answer: C
You work for a global shipping company. You want to train a model on 40 TB of data to predict which ships in each geographic region are likely to
cause delivery delays on any given day. The model will be based on multiple attributes collected from multiple sources. Telemetry data, including
location in GeoJSON format, will be pulled from each ship and loaded every hour. You want to have a dashboard that shows how many and which
ships are likely to cause delays within a region. You want to use a storage solution that has native functionality for prediction and geospatial analysis. Which storage option should you choose?
B. Cloud Bigtable
C. Cloud Datastore
Correct Answer: A
You operate an IoT pipeline built around Apache Kafka that normally receives around 5000 messages per second. You want to use Google Cloud
Platform to create an alert as soon as the moving average over 1 hour drops below 4000 messages per second. What should you do?
A. Consume the stream of data in Dataflow using Kafka IO. Set a sliding time window of 1 hour every 5 minutes. Compute the average when
the window closes, and send an alert if the average is less than 4000 messages. Most Voted
B. Consume the stream of data in Dataflow using Kafka IO. Set a fixed time window of 1 hour. Compute the average when the window closes,
C. Use Kafka Connect to link your Kafka message queue to Pub/Sub. Use a Dataflow template to write your messages from Pub/Sub to
Bigtable. Use Cloud Scheduler to run a script every hour that counts the number of rows created in Bigtable in the last hour. If that number
D. Use Kafka Connect to link your Kafka message queue to Pub/Sub. Use a Dataflow template to write your messages from Pub/Sub to
BigQuery. Use Cloud Scheduler to run a script every five minutes that counts the number of rows created in BigQuery in the last hour. If that
Correct Answer: A
You plan to deploy Cloud SQL using MySQL. You need to ensure high availability in the event of a zone failure. What should you do?
A. Create a Cloud SQL instance in one zone, and create a failover replica in another zone within the same region. Most Voted
B. Create a Cloud SQL instance in one zone, and create a read replica in another zone within the same region.
C. Create a Cloud SQL instance in one zone, and configure an external read replica in a zone in a different region.
D. Create a Cloud SQL instance in a region, and configure automatic backup to a Cloud Storage bucket in the same region.
Correct Answer: A
Your company is selecting a system to centralize data ingestion and delivery. You are considering messaging and data integration systems to address these requirements:
✑ The ability to seek to a particular offset in a topic, possibly back to the start of all data ever captured
✑ Support for publish/subscribe semantics on hundreds of topics
B. Cloud Storage
C. Dataflow
Correct Answer: A
You are planning to migrate your current on-premises Apache Hadoop deployment to the cloud. You need to ensure that the deployment is as
fault-tolerant and cost-effective as possible for long-running batch jobs. You want to use a managed service. What should you do?
A. Deploy a Dataproc cluster. Use a standard persistent disk and 50% preemptible workers. Store data in Cloud Storage, and change references in scripts from hdfs:// to gs://
B. Deploy a Dataproc cluster. Use an SSD persistent disk and 50% preemptible workers. Store data in Cloud Storage, and change references in scripts from hdfs:// to gs://
C. Install Hadoop and Spark on a 10-node Compute Engine instance group with standard instances. Install the Cloud Storage connector, and
store the data in Cloud Storage. Change references in scripts from hdfs:// to gs://
D. Install Hadoop and Spark on a 10-node Compute Engine instance group with preemptible instances. Store data in HDFS. Change references
Correct Answer: A
Your team is working on a binary classification problem. You have trained a support vector machine (SVM) classifier with default parameters, and
received an area under the Curve (AUC) of 0.87 on the validation set. You want to increase the AUC of the model. What should you do?
B. Train a classifier with deep neural networks, because neural networks would always beat SVMs
C. Deploy the model and measure the real-world AUC; it's always higher because of generalization
D. Scale predictions you get out of the model (tune a scaling factor as a hyperparameter) in order to get the highest AUC
Correct Answer: A
You need to deploy additional dependencies to all nodes of a Cloud Dataproc cluster at startup using an existing initialization action. Company
security policies require that Cloud Dataproc nodes do not have access to the Internet so public initialization actions cannot fetch resources.
B. Use an SSH tunnel to give the Cloud Dataproc cluster access to the Internet
C. Copy all dependencies to a Cloud Storage bucket within your VPC security perimeter Most Voted
D. Use Resource Manager to add the service account used by the Cloud Dataproc cluster to the Network User role
Correct Answer: C
You need to choose a database for a new project that has the following requirements:
✑ Fully managed
✑ Able to automatically scale up
✑ Transactionally consistent
✑ Able to scale up to 6 TB
✑ Able to be queried using SQL
Which database do you choose?
A. Cloud SQL
B. Cloud Bigtable
D. Cloud Datastore
Correct Answer: C
You work for a mid-sized enterprise that needs to move its operational system transaction data from an on-premises database to GCP. The
database is about 20
B. Cloud Bigtable
C. Cloud Spanner
D. Cloud Datastore
Correct Answer: A
You need to choose a database to store time series CPU and memory usage for millions of computers. You need to store this data in one-second
interval samples. Analysts will be performing real-time, ad hoc analytics against the database. You want to avoid being charged for every query
executed and ensure that the schema design will allow for future growth of the dataset. Which database and data model should you choose?
A. Create a table in BigQuery, and append the new samples for CPU and memory to the table
B. Create a wide table in BigQuery, create a column for the sample value at each second, and update the row with the interval for each second
C. Create a narrow table in Bigtable with a row key that combines the Compute Engine computer identifier with the sample time at each second.
D. Create a wide table in Bigtable with a row key that combines the computer identifier with the sample time at each minute, and combine the
Correct Answer: C
You want to archive data in Cloud Storage. Because some data is very sensitive, you want to use the `Trust No One` (TNO) approach to encrypt
your data to prevent the cloud provider staff from decrypting your data. What should you do?
A. Use gcloud kms keys create to create a symmetric key. Then use gcloud kms encrypt to encrypt each archival file with the key and unique
additional authenticated data (AAD). Use gsutil cp to upload each encrypted file to the Cloud Storage bucket, and keep the AAD outside of
Google Cloud.
B. Use gcloud kms keys create to create a symmetric key. Then use gcloud kms encrypt to encrypt each archival file with the key. Use gsutil cp
to upload each encrypted file to the Cloud Storage bucket. Manually destroy the key previously used for encryption, and rotate the key once.
C. Specify customer-supplied encryption key (CSEK) in the .boto configuration file. Use gsutil cp to upload each archival file to the Cloud
Storage bucket. Save the CSEK in Cloud Memorystore as permanent storage of the secret.
D. Specify customer-supplied encryption key (CSEK) in the .boto configuration file. Use gsutil cp to upload each archival file to the Cloud
Storage bucket. Save the CSEK in a different project that only the security team can access. Most Voted
Correct Answer: D
You have data pipelines running on BigQuery, Dataflow, and Dataproc. You need to perform health checks and monitor their behavior, and then
notify the team managing the pipelines if they fail. You also need to be able to work across multiple projects. Your preference is to use managed services. What should you do?
A. Export the information to Cloud Monitoring, and set up an Alerting policy Most Voted
B. Run a Virtual Machine in Compute Engine with Airflow, and export the information to Cloud Monitoring
C. Export the logs to BigQuery, and set up App Engine to read that information and send emails if you find a failure in the logs
D. Develop an App Engine application to consume logs using GCP API calls, and send emails if you find a failure in the logs
Correct Answer: A
You are working on a linear regression model on BigQuery ML to predict a customer's likelihood of purchasing your company's products. Your
model uses a city name variable as a key predictive component. In order to train and serve the model, your data must be organized in columns.
You want to prepare your data using the least amount of coding while maintaining the predictable variables. What should you do?
A. Create a new view with BigQuery that does not include a column with city information.
B. Use SQL in BigQuery to transform the state column using a one-hot encoding method, and make each city a column with binary values.
Most Voted
C. Use TensorFlow to create a categorical variable with a vocabulary list. Create the vocabulary file and upload that as part of your model to
BigQuery ML.
D. Use Cloud Data Fusion to assign each city to a region that is labeled as 1, 2, 3, 4, or 5, and then use that number to represent the city in the
model.
Correct Answer: B
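A minimal sketch of the one-hot-encoding idea in option B, expressed as plain SQL run through the Python BigQuery client. The dataset, table, column, and city values are hypothetical.

```python
# Sketch: turn a categorical city column into binary indicator columns with SQL only.
from google.cloud import bigquery

client = bigquery.Client()

one_hot_sql = """
CREATE OR REPLACE VIEW sales.training_data AS
SELECT
  customer_id,
  purchased,
  IF(city = 'Seattle', 1, 0) AS city_seattle,
  IF(city = 'Austin', 1, 0)  AS city_austin,
  IF(city = 'Denver', 1, 0)  AS city_denver
FROM sales.customers
"""
client.query(one_hot_sql).result()
```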
You work for a large bank that operates in locations throughout North America. You are setting up a data storage system that will handle bank
account transactions. You require ACID compliance and the ability to access data with SQL. Which solution is appropriate?
A. Store transaction data in Cloud Spanner. Enable stale reads to reduce latency.
B. Store transaction data in Cloud Spanner. Use locking read-write transactions. Most Voted
C. Store transaction data in BigQuery. Disable the query cache to ensure consistency.
D. Store transaction data in Cloud SQL. Use a federated query in BigQuery for analysis.
Correct Answer: B
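A minimal sketch of option B: applying a balance transfer inside a Cloud Spanner locking read-write transaction with the Python client. The instance, database, table, and column names are hypothetical.

```python
# Sketch: both updates commit atomically inside one read-write transaction (ACID).
from google.cloud import spanner

client = spanner.Client(project="my-project")
database = client.instance("bank-instance").database("ledger")

def transfer(transaction, from_id, to_id, amount):
    transaction.execute_update(
        "UPDATE Accounts SET Balance = Balance - @amt WHERE AccountId = @id",
        params={"amt": amount, "id": from_id},
        param_types={"amt": spanner.param_types.INT64, "id": spanner.param_types.STRING},
    )
    transaction.execute_update(
        "UPDATE Accounts SET Balance = Balance + @amt WHERE AccountId = @id",
        params={"amt": amount, "id": to_id},
        param_types={"amt": spanner.param_types.INT64, "id": spanner.param_types.STRING},
    )

database.run_in_transaction(transfer, "acct-001", "acct-002", 250)
```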
A shipping company has live package-tracking data that is sent to an Apache Kafka stream in real time. This is then loaded into BigQuery.
Analysts in your company want to query the tracking data in BigQuery to analyze geospatial trends in the lifecycle of a package. The table was
originally created with ingest-date partitioning. Over time, the query processing time has increased. You need to implement a change that would improve query performance. What should you do?
C. Tier older data onto Cloud Storage files and create a BigQuery table using Cloud Storage as an external data source.
D. Re-create the table using data partitioning on the package delivery date.
Correct Answer: B
Your company currently runs a large on-premises cluster using Spark, Hive, and HDFS in a colocation facility. The cluster is designed to
accommodate peak usage on the system; however, many jobs are batch in nature, and usage of the cluster fluctuates quite dramatically. Your
company is eager to move to the cloud to reduce the overhead associated with on-premises infrastructure and maintenance and to benefit from
the cost savings. They are also hoping to modernize their existing infrastructure to use more serverless offerings in order to take advantage of the
cloud. Because of the timing of their contract renewal with the colocation facility, they have only 2 months for their initial migration. How would
you recommend they approach their upcoming migration strategy so they can maximize their cost savings in the cloud while still executing the
migration in time?
B. Migrate the workloads to Dataproc plus Cloud Storage; modernize later. Most Voted
C. Migrate the Spark workload to Dataproc plus HDFS, and modernize the Hive workload for BigQuery.
D. Modernize the Spark workload for Dataflow and the Hive workload for BigQuery.
Correct Answer: B
You work for a financial institution that lets customers register online. As new customers register, their user data is sent to Pub/Sub before being
ingested into
BigQuery. For security reasons, you decide to redact your customers' Government issued Identification Number while allowing customer service
representatives to view the original values when necessary. What should you do?
A. Use BigQuery's built-in AEAD encryption to encrypt the SSN column. Save the keys to a new table that is only viewable by permissioned
users.
B. Use BigQuery column-level security. Set the table permissions so that only members of the Customer Service user group can see the SSN
column.
C. Before loading the data into BigQuery, use Cloud Data Loss Prevention (DLP) to replace input values with a cryptographic hash.
D. Before loading the data into BigQuery, use Cloud Data Loss Prevention (DLP) to replace input values with a cryptographic format-preserving encryption token.
Correct Answer: D
You are migrating a table to BigQuery and are deciding on the data model. Your table stores information related to purchases made across several
store locations and includes information like the time of the transaction, items purchased, the store ID, and the city and state in which the store is
located. You frequently query this table to see how many of each item were sold over the past 30 days and to look at purchasing trends by state,
city, and individual store. How would you model this table for the best query performance?
A. Partition by transaction time; cluster by state first, then city, then store ID. Most Voted
B. Partition by transaction time; cluster by store ID first, then city, then state.
Correct Answer: A
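A minimal sketch of the schema in option A: partition on the transaction timestamp and cluster by state, then city, then store ID (coarsest to finest). The dataset, table, and column names are hypothetical.

```python
# Sketch: DDL for a time-partitioned, clustered purchases table.
from google.cloud import bigquery

client = bigquery.Client()

client.query("""
CREATE TABLE retail.purchases
(
  transaction_time TIMESTAMP,
  state STRING,
  city STRING,
  store_id STRING,
  item_id STRING,
  quantity INT64
)
PARTITION BY DATE(transaction_time)
CLUSTER BY state, city, store_id
""").result()
```

Clustering columns are ordered from lowest to highest cardinality here, so filters on state, state+city, or state+city+store all benefit from block pruning.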
You are updating the code for a subscriber to a Pub/Sub feed. You are concerned that upon deployment the subscriber may erroneously
acknowledge messages, leading to message loss. Your subscriber is not set up to retain acknowledged messages. What should you do to ensure that you can recover the messages if this happens?
A. Set up the Pub/Sub emulator on your local machine. Validate the behavior of your new subscriber logic before deploying it to production.
B. Create a Pub/Sub snapshot before deploying new subscriber code. Use a Seek operation to re-deliver messages that became available
C. Use Cloud Build for your deployment. If an error occurs after deployment, use a Seek operation to locate a timestamp logged by Cloud Build
D. Enable dead-lettering on the Pub/Sub topic to capture messages that aren't successfully acknowledged. If an error occurs after
Correct Answer: B
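A minimal sketch of option B with the Pub/Sub Python client: snapshot the subscription before the rollout, then seek back to the snapshot to re-deliver messages if the new subscriber acknowledged them in error. The project, subscription, and snapshot IDs are hypothetical.

```python
# Sketch: snapshot-and-seek recovery for a Pub/Sub subscription.
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "orders-sub")
snapshot_path = subscriber.snapshot_path("my-project", "pre-deploy-snapshot")

# Before deploying the new subscriber code:
subscriber.create_snapshot(request={"name": snapshot_path, "subscription": subscription_path})

# If the new code acknowledged messages it should not have, replay them:
subscriber.seek(request={"subscription": subscription_path, "snapshot": snapshot_path})
```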
You work for a large real estate firm and are preparing 6 TB of home sales data to be used for machine learning. You will use SQL to transform the
data and BigQuery ML to create a machine learning model. You plan to use the model for predictions against a raw dataset that has not been transformed.
How should you set up your workflow in order to prevent skew at prediction time?
A. When creating your model, use BigQuery's TRANSFORM clause to define preprocessing steps. At prediction time, use BigQuery's
ML.EVALUATE clause without specifying any transformations on the raw input data. Most Voted
B. When creating your model, use BigQuery's TRANSFORM clause to define preprocessing steps. Before requesting predictions, use a saved
query to transform your raw input data, and then use ML.EVALUATE.
C. Use a BigQuery view to define your preprocessing logic. When creating your model, use the view as your model training data. At prediction
time, use BigQuery's ML.EVALUATE clause without specifying any transformations on the raw input data.
D. Preprocess all data using Dataflow. At prediction time, use BigQuery's ML.EVALUATE clause without specifying any further transformations
Correct Answer: A
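A minimal sketch of option A: declaring preprocessing in the model's TRANSFORM clause so the same steps run automatically on raw input at prediction time (shown here with ML.PREDICT). The dataset, table, and column names are hypothetical.

```python
# Sketch: BigQuery ML model with a TRANSFORM clause, so preprocessing is applied
# consistently at training and prediction time, preventing skew.
from google.cloud import bigquery

client = bigquery.Client()

client.query("""
CREATE OR REPLACE MODEL housing.price_model
TRANSFORM(
  ML.STANDARD_SCALER(square_feet) OVER() AS square_feet_scaled,
  ML.QUANTILE_BUCKETIZE(year_built, 10) OVER() AS year_built_bucket,
  label
)
OPTIONS(model_type = 'linear_reg', input_label_cols = ['label'])
AS
SELECT square_feet, year_built, sale_price AS label
FROM housing.sales
""").result()

# At prediction time, pass raw rows; the TRANSFORM steps are applied automatically.
rows = client.query("""
SELECT * FROM ML.PREDICT(MODEL housing.price_model,
  (SELECT square_feet, year_built FROM housing.new_listings))
""").result()
```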
You are analyzing the price of a company's stock. Every 5 seconds, you need to compute a moving average of the past 30 seconds' worth of data.
You are reading data from Pub/Sub and using Dataflow to conduct the analysis. How should you set up your windowed pipeline?
A. Use a fixed window with a duration of 5 seconds. Emit results by setting the following trigger:
AfterProcessingTime.pastFirstElementInPane().plusDelayOf (Duration.standardSeconds(30))
B. Use a fixed window with a duration of 30 seconds. Emit results by setting the following trigger:
AfterWatermark.pastEndOfWindow().plusDelayOf (Duration.standardSeconds(5))
C. Use a sliding window with a duration of 5 seconds. Emit results by setting the following trigger:
AfterProcessingTime.pastFirstElementInPane().plusDelayOf (Duration.standardSeconds(30))
D. Use a sliding window with a duration of 30 seconds and a period of 5 seconds. Emit results by setting the following trigger:
Correct Answer: D
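A minimal Apache Beam (Python SDK) sketch of option D: a 30-second sliding window that advances every 5 seconds, so a moving average is emitted every period. The topic path and message format are hypothetical.

```python
# Sketch: 30-second sliding windows with a 5-second period over a Pub/Sub price stream.
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms import window

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromPubSub(topic="projects/my-project/topics/stock-ticks")
        | "Parse" >> beam.Map(lambda b: float(json.loads(b)["price"]))  # hypothetical JSON payload
        | "SlidingWindow" >> beam.WindowInto(window.SlidingWindows(size=30, period=5))
        | "MovingAverage" >> beam.CombineGlobally(beam.combiners.MeanCombineFn()).without_defaults()
        | "Emit" >> beam.Map(print)
    )
```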
You are designing a pipeline that publishes application events to a Pub/Sub topic. Although message ordering is not important, you need to be
able to aggregate events across disjoint hourly intervals before loading the results to BigQuery for analysis. What technology should you use to
process the events and load the results into BigQuery while ensuring that it will scale with large volumes of events?
A. Create a Cloud Function to perform the necessary data processing that executes using the Pub/Sub trigger every time a new message is
B. Schedule a Cloud Function to run hourly, pulling all available messages from the Pub/Sub topic and performing the necessary aggregations.
C. Schedule a batch Dataflow job to run hourly, pulling all available messages from the Pub/Sub topic and performing the necessary
aggregations.
D. Create a streaming Dataflow job that reads continually from the Pub/Sub topic and performs the necessary aggregations using tumbling windows.
Correct Answer: D
You work for a large financial institution that is planning to use Dialogflow to create a chatbot for the company's mobile app. You have reviewed
old chat logs and tagged each conversation for intent based on each customer's stated intention for contacting customer service. About 70% of
customer requests are simple requests that are solved within 10 intents. The remaining 30% of inquiries require much longer, more complicated interactions. Which intents should you automate first?
A. Automate the 10 intents that cover 70% of the requests so that live agents can handle more complicated requests. Most Voted
B. Automate the more complicated requests first because those require more of the agents' time.
C. Automate a blend of the shortest and longest intents to be representative of all intents.
D. Automate intents in places where common words such as 'payment' appear only once so the software isn't confused.
Correct Answer: A
Your company is implementing a data warehouse using BigQuery, and you have been tasked with designing the data model. You move your on-
premises sales data warehouse with a star data schema to BigQuery but notice performance issues when querying the data of the past 30 days.
Based on Google's recommended practices, what should you do to speed up the query without increasing storage costs?
Correct Answer: D
You have uploaded 5 years of log data to Cloud Storage. A user reported that some data points in the log data are outside of their expected
ranges, which indicates errors. You need to address this issue and be able to run the process again in the future while keeping the original data for reference. What should you do?
A. Import the data from Cloud Storage into BigQuery. Create a new BigQuery table, and skip the rows with errors.
B. Create a Compute Engine instance and create a new copy of the data in Cloud Storage. Skip the rows with errors.
C. Create a Dataflow workflow that reads the data from Cloud Storage, checks for values outside the expected range, sets the value to an
appropriate default, and writes the updated records to a new dataset in Cloud Storage. Most Voted
D. Create a Dataflow workflow that reads the data from Cloud Storage, checks for values outside the expected range, sets the value to an
appropriate default, and writes the updated records to the same dataset in Cloud Storage.
Correct Answer: C
You want to rebuild your batch pipeline for structured data on Google Cloud. You are using PySpark to conduct data transformations at scale, but
your pipelines are taking over twelve hours to run. To expedite development and pipeline run time, you want to use a serverless tool and SQL
syntax. You have already moved your raw data into Cloud Storage. How should you build the pipeline on Google Cloud while meeting speed and
processing requirements?
A. Convert your PySpark commands into SparkSQL queries to transform the data, and then run your pipeline on Dataproc to write the data into
BigQuery.
B. Ingest your data into Cloud SQL, convert your PySpark commands into SparkSQL queries to transform the data, and then use federated
C. Ingest your data into BigQuery from Cloud Storage, convert your PySpark commands into BigQuery SQL queries to transform the data, and
D. Use Apache Beam Python SDK to build the transformation pipelines, and write the data into BigQuery.
Correct Answer: C
You are testing a Dataflow pipeline to ingest and transform text files. The files are gzip-compressed, errors are written to a dead-letter queue, and
you use SideInputs to join data. You notice that the pipeline is taking longer to complete than expected. What should you do to expedite the Dataflow job?
Correct Answer: D
You are building a real-time prediction engine that streams files, which may contain PII (personal identifiable information) data, into Cloud Storage
and eventually into BigQuery. You want to ensure that the sensitive data is masked but still maintains referential integrity, because names and emails are often used as join keys.
How should you use the Cloud Data Loss Prevention API (DLP API) to ensure that the PII data is not accessible by unauthorized individuals?
A. Create a pseudonym by replacing the PII data with cryptographic tokens, and store the non-tokenized data in a locked-down bucket.
B. Redact all PII data, and store a version of the unredacted data in a locked-down bucket.
C. Scan every table in BigQuery, and mask the data it finds that has PII.
D. Create a pseudonym by replacing PII data with a cryptographic format-preserving token. Most Voted
Correct Answer: D
You are migrating an application that tracks library books and information about each book, such as author or year published, from an on-
premises data warehouse to BigQuery. In your current relational database, the author information is kept in a separate table and joined to the book
information on a common key. Based on Google's recommended practice for schema design, how would you structure the data to ensure optimal
speed of queries about the author of each book that has been borrowed?
A. Keep the schema the same, maintain the different tables for the book and each of the attributes, and query as you are doing today.
B. Create a table that is wide and includes a column for each attribute, including the author's first name, last name, date of birth, etc.
C. Create a table that includes information about the books and authors, but nest the author fields inside the author column. Most Voted
D. Keep the schema the same, create a view that joins all of the tables, and always query the view.
Correct Answer: C
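A minimal sketch of the denormalized schema in option C: the author attributes become a nested STRUCT column on the books table, so author lookups need no join. The dataset and field names are hypothetical.

```python
# Sketch: books table with the author nested as a STRUCT column.
from google.cloud import bigquery

client = bigquery.Client()

client.query("""
CREATE TABLE library.books
(
  book_id STRING,
  title STRING,
  year_published INT64,
  author STRUCT<first_name STRING, last_name STRING, date_of_birth DATE>
)
""").result()

# Example lookup with no join:
# SELECT title, author.last_name FROM library.books WHERE author.last_name = 'Austen'
```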
You need to give new website users a globally unique identifier (GUID) using a service that takes in data points and returns a GUID. This data is
sourced from both internal and external systems via HTTP calls that you will make via microservices within your pipeline. There will be tens of
thousands of messages per second that can be processed in a multi-threaded manner, and you are concerned about backpressure on the system. How should you design the pipeline?
Correct Answer: D
You are migrating your data warehouse to Google Cloud and decommissioning your on-premises data center. Because this is a priority for your
company, you know that bandwidth will be made available for the initial data load to the cloud. The files being transferred are not large in number,
Additionally, you want your transactional systems to continually update the warehouse on Google Cloud in real time. What tools should you use to
migrate the data and ensure that it continues to write to your warehouse?
A. Storage Transfer Service for the migration; Pub/Sub and Cloud Data Fusion for the real-time updates
B. BigQuery Data Transfer Service for the migration; Pub/Sub and Dataproc for the real-time updates
C. gsutil for the migration; Pub/Sub and Dataflow for the real-time updates Most Voted
Correct Answer: C
You are using Bigtable to persist and serve stock market data for each of the major indices. To serve the trading application, you need to access
only the most recent stock prices that are streaming in. How should you design your row key and tables to ensure that you can access the data
A. Create one unique table for all of the indices, and then use the index and timestamp as the row key design.
B. Create one unique table for all of the indices, and then use a reverse timestamp as the row key design. Most Voted
C. For each index, have a separate table and use a timestamp as the row key design.
D. For each index, have a separate table and use a reverse timestamp as the row key design.
Correct Answer: B
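A minimal sketch of the reverse-timestamp row-key idea from option B: subtracting the event time from a large constant makes the newest rows sort first, so a small scan returns the most recent prices. Including the index in the key, as shown here for illustration, is a common variant that also avoids hotspotting; the key format is hypothetical.

```python
# Sketch: construct a reverse-timestamp row key so the latest rows sort lexically first.
import sys
import time

def reverse_ts_row_key(index: str) -> bytes:
    reverse_ts = sys.maxsize - int(time.time() * 1000)  # newer time -> smaller value
    return f"{index}#{reverse_ts:020d}".encode("utf-8")

print(reverse_ts_row_key("SP500"))
```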
You are building a report-only data warehouse where the data is streamed into BigQuery via the streaming API. Following Google's best practices,
you have both a staging and a production table for the data. How should you design your data loading to ensure that there is only one master dataset?
A. Have a staging table that is an append-only model, and then update the production table every three hours with the changes written to
staging.
B. Have a staging table that is an append-only model, and then update the production table every ninety minutes with the changes written to
staging.
C. Have a staging table that moves the staged data over to the production table and deletes the contents of the staging table every three
D. Have a staging table that moves the staged data over to the production table and deletes the contents of the staging table every thirty
minutes.
Correct Answer: C
You issue a new batch job to Dataflow. The job starts successfully, processes a few elements, and then suddenly fails and shuts down. You
navigate to the
Dataflow monitoring interface where you find errors related to a particular DoFn in your pipeline. What is the most likely cause of the errors?
A. Job validation
D. Insufficient permissions
Correct Answer: B
Your new customer has requested daily reports that show their net consumption of Google Cloud compute resources and who used the resources.
You need to quickly and efficiently generate these daily reports. What should you do?
A. Do daily exports of Cloud Logging data to BigQuery. Create views filtering by project, log type, resource, and user. Most Voted
B. Filter data in Cloud Logging by project, resource, and user; then export the data in CSV format.
C. Filter data in Cloud Logging by project, log type, resource, and user, then import the data into BigQuery.
D. Export Cloud Logging data to Cloud Storage in CSV format. Cleanse the data using Dataprep, filtering by project, resource, and user.
Correct Answer: A
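A minimal sketch of option A with the Cloud Logging Python client: create a sink that exports log entries to a BigQuery dataset, where daily views can then filter by project, resource, and user. The project, sink name, filter, and dataset are hypothetical placeholders.

```python
# Sketch: create a Cloud Logging sink whose destination is a BigQuery dataset.
from google.cloud import logging as cloud_logging

client = cloud_logging.Client(project="my-project")
sink = client.sink(
    "daily-usage-export",
    filter_='logName:"cloudaudit.googleapis.com"',  # hypothetical filter for audit logs
    destination="bigquery.googleapis.com/projects/my-project/datasets/usage_logs",
)
if not sink.exists():
    sink.create()  # then grant the sink's writer identity access to the dataset
```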
The Development and External teams have the project viewer Identity and Access Management (IAM) role in a folder named Visualization. You
want the
Development Team to be able to read data from both Cloud Storage and BigQuery, but the External Team should only be able to read data from BigQuery. What should you do?
A. Remove Cloud Storage IAM permissions to the External Team on the acme-raw-data project.
B. Create Virtual Private Cloud (VPC) firewall rules on the acme-raw-data project that deny all ingress traffic from the External Team CIDR
range.
C. Create a VPC Service Controls perimeter containing both projects and BigQuery as a restricted API. Add the External Team users to the
D. Create a VPC Service Controls perimeter containing both projects and Cloud Storage as a restricted API. Add the Development Team users
Correct Answer: D
Your startup has a web application that currently serves customers out of a single region in Asia. You are targeting funding that will allow your
startup to serve customers globally. Your current goal is to optimize for cost, and your post-funding goal is to optimize for global presence and performance. What should you do?
A. Use Cloud Spanner to configure a single region instance initially, and then configure multi-region Cloud Spanner instances after securing funding.
B. Use a Cloud SQL for PostgreSQL highly available instance first, and Bigtable with US, Europe, and Asia replication after securing funding.
C. Use a Cloud SQL for PostgreSQL zonal instance first, and Bigtable with US, Europe, and Asia after securing funding.
D. Use a Cloud SQL for PostgreSQL zonal instance first, and Cloud SQL for PostgreSQL with highly available configuration after securing
funding.
Correct Answer: A
You need to migrate 1 PB of data from an on-premises data center to Google Cloud. Data transfer time during the migration should take only a few
hours. You want to follow Google-recommended practices to facilitate the large data transfer over a secure connection. What should you do?
A. Establish a Cloud Interconnect connection between the on-premises data center and Google Cloud, and then use the Storage Transfer
B. Use a Transfer Appliance and have engineers manually encrypt, decrypt, and verify the data.
C. Establish a Cloud VPN connection, start gcloud compute scp jobs in parallel, and run checksums to verify the data.
D. Reduce the data into 3 TB batches, transfer the data using gsutil, and run checksums to verify the data.
Correct Answer: A
You are loading CSV files from Cloud Storage to BigQuery. The files have known data quality issues, including mismatched data types, such as
STRINGs and
INT64s in the same column, and inconsistent formatting of values such as phone numbers or addresses. You need to create the data pipeline to
maintain data quality and perform the required cleansing and transformation. What should you do?
A. Use Data Fusion to transform the data before loading it into BigQuery. Most Voted
B. Use Data Fusion to convert the CSV files to a self-describing data format, such as AVRO, before loading the data to BigQuery.
C. Load the CSV files into a staging table with the desired schema, perform the transformations with SQL, and then write the results to the final table.
D. Create a table with the desired schema, load the CSV files into the table, and perform the transformations in place using SQL.
Correct Answer: A
You are developing a new deep learning model that predicts a customer's likelihood to buy on your ecommerce site. After running an evaluation of
the model against both the original training data and new test data, you find that your model is overfitting the data. You want to improve the
accuracy of the model when predicting new data. What should you do?
A. Increase the size of the training dataset, and increase the number of input features.
B. Increase the size of the training dataset, and decrease the number of input features. Most Voted
C. Reduce the size of the training dataset, and increase the number of input features.
D. Reduce the size of the training dataset, and decrease the number of input features.
Correct Answer: B
You are implementing a chatbot to help an online retailer streamline their customer service. The chatbot must be able to respond to both text and
voice inquiries.
You are looking for a low-code or no-code option, and you want to be able to easily train the chatbot to provide answers to keywords. What should
you do?
A. Use the Cloud Speech-to-Text API to build a Python application in App Engine.
B. Use the Cloud Speech-to-Text API to build a Python application in a Compute Engine instance.
C. Use Dialogflow for simple queries and the Cloud Speech-to-Text API for complex queries.
D. Use Dialogflow to implement the chatbot, defining the intents based on the most common queries collected. Most Voted
Correct Answer: D
An aerospace company uses a proprietary data format to store its flight data. You need to connect this new data source to BigQuery and stream
the data into BigQuery. You want to efficiently import the data into BigQuery while consuming as few resources as possible. What should you do?
A. Write a shell script that triggers a Cloud Function that performs periodic ETL batch jobs on the new data source.
B. Use a standard Dataflow pipeline to store the raw data in BigQuery, and then transform the format later when the data is used.
C. Use Apache Hive to write a Dataproc job that streams the data into BigQuery in CSV format.
D. Use an Apache Beam custom connector to write a Dataflow pipeline that streams the data into BigQuery in Avro format. Most Voted
Correct Answer: D
An online brokerage company requires a high volume trade processing architecture. You need to create a secure queuing system that triggers
jobs. The jobs will run in Google Cloud and call the company's Python API to execute trades. You need to efficiently implement a solution. What should you do?
A. Use a Pub/Sub push subscription to trigger a Cloud Function to pass the data to the Python API. Most Voted
B. Write an application hosted on a Compute Engine instance that makes a push subscription to the Pub/Sub topic.
D. Use Cloud Composer to subscribe to a Pub/Sub topic and call the Python API.
Correct Answer: A
Your company wants to be able to retrieve large result sets of medical information from your current system, which has over 10 TBs in the
database, and store the data in new tables for further query. The database must have a low-maintenance architecture and be accessible via SQL.
You need to implement a cost-effective solution that can support data analytics for large result sets. What should you do?
A. Use Cloud SQL, but first organize the data into tables. Use JOIN in queries to retrieve data.
B. Use BigQuery as a data warehouse. Set output destinations for caching large queries. Most Voted
C. Use a MySQL cluster installed on a Compute Engine managed instance group for scalability.
D. Use Cloud Spanner to replicate the data across regions. Normalize the data in a series of tables.
Correct Answer: B
You have 15 TB of data in your on-premises data center that you want to transfer to Google Cloud. Your data changes weekly and is stored in a
POSIX-compliant source. The network operations team has granted you 500 Mbps bandwidth to the public internet. You want to follow Google-
recommended practices to reliably transfer your data to Google Cloud on a weekly basis. What should you do?
A. Use Cloud Scheduler to trigger the gsutil command. Use the -m parameter for optimal parallelism.
B. Use Transfer Appliance to migrate your data into a Google Kubernetes Engine cluster, and then configure a weekly transfer job.
C. Install Storage Transfer Service for on-premises data in your data center, and then configure a weekly transfer job. Most Voted
D. Install Storage Transfer Service for on-premises data on a Google Cloud virtual machine, and then configure a weekly transfer job.
Correct Answer: C
You are designing a system that requires an ACID-compliant database. You must ensure that the system requires minimal human intervention in
case of a failure. What should you do?
A. Configure a Cloud SQL for MySQL instance with point-in-time recovery enabled.
B. Configure a Cloud SQL for PostgreSQL instance with high availability enabled. Most Voted
Correct Answer: B
You are implementing workflow pipeline scheduling using open source-based tools and Google Kubernetes Engine (GKE). You want to use a
Google managed service to simplify and automate the task. You also want to accommodate Shared VPC networking considerations. What should
you do?
A. Use Dataflow for your workflow pipelines. Use Cloud Run triggers for scheduling.
B. Use Dataflow for your workflow pipelines. Use shell scripts to schedule workflows.
C. Use Cloud Composer in a Shared VPC configuration. Place the Cloud Composer resources in the host project.
D. Use Cloud Composer in a Shared VPC configuration. Place the Cloud Composer resources in the service project. Most Voted
Correct Answer: D
You are using BigQuery and Data Studio to design a customer-facing dashboard that displays large quantities of aggregated data. You expect a
high volume of concurrent users. You need to optimize the dashboard to provide quick visualizations with minimal latency. What should you do?
Correct Answer: A
Government regulations in the banking industry mandate the protection of clients' personally identifiable information (PII). Your company requires
PII to be access controlled, encrypted, and compliant with major data protection standards. In addition to using Cloud Data Loss Prevention (Cloud DLP), you want to follow Google-recommended practices and use service accounts to control access to PII. What should you do?
A. Assign the required Identity and Access Management (IAM) roles to every employee, and create a single service account to access project
resources.
B. Use one service account to access a Cloud SQL database, and use separate service accounts for each human user.
C. Use Cloud Storage to comply with major data protection standards. Use one service account shared by all users.
D. Use Cloud Storage to comply with major data protection standards. Use multiple service accounts attached to IAM groups to grant the
Correct Answer: D
You need to migrate a Redis database from an on-premises data center to a Memorystore for Redis instance. You want to follow Google-
recommended practices and perform the migration for minimal cost, time and effort. What should you do?
A. Make an RDB backup of the Redis database, use the gsutil utility to copy the RDB file into a Cloud Storage bucket, and then import the RDB
B. Make a secondary instance of the Redis database on a Compute Engine instance and then perform a live cutover.
C. Create a Dataflow job to read the Redis database from the on-premises data center and write the data to a Memorystore for Redis instance.
D. Write a shell script to migrate the Redis data and create a new Memorystore for Redis instance.
Correct Answer: A
Your platform on your on-premises environment generates 100 GB of data daily, composed of millions of structured JSON text files. Your on-
premises environment cannot be accessed from the public internet. You want to use Google Cloud products to query and explore the platform data. What should you do?
A. Use Cloud Scheduler to copy data daily from your on-premises environment to Cloud Storage. Use the BigQuery Data Transfer Service to
B. Use a Transfer Appliance to copy data from your on-premises environment to Cloud Storage. Use the BigQuery Data Transfer Service to
C. Use Transfer Service for on-premises data to copy data from your on-premises environment to Cloud Storage. Use the BigQuery Data
D. Use the BigQuery Data Transfer Service dataset copy to transfer all data into BigQuery.
Correct Answer: C
A TensorFlow machine learning model on Compute Engine virtual machines (n2-standard-32) takes two days to complete training. The model has
custom TensorFlow operations that must run partially on a CPU. You want to reduce the training time in a cost-effective manner. What should you
do?
C. Train the model using a VM with a GPU hardware accelerator. Most Voted
Correct Answer: C
You want to create a machine learning model using BigQuery ML and create an endpoint for hosting the model using Vertex AI. This will enable
the processing of continuous streaming data in near-real time from multiple vendors. The data may contain invalid values. What should you do?
A. Create a new BigQuery dataset and use streaming inserts to land the data from multiple vendors. Configure your BigQuery ML model to use
B. Use BigQuery streaming inserts to land the data from multiple vendors where your BigQuery dataset ML model is deployed.
C. Create a Pub/Sub topic and send all vendor data to it. Connect a Cloud Function to the topic to process the data and store it in BigQuery.
D. Create a Pub/Sub topic and send all vendor data to it. Use Dataflow to process and sanitize the Pub/Sub data and stream it to BigQuery.
Most Voted
Correct Answer: D
You have a data processing application that runs on Google Kubernetes Engine (GKE). Containers need to be launched with their latest available
configurations from a container registry. Your GKE nodes need to have GPUs, local SSDs, and 8 Gbps bandwidth. You want to efficiently provision
the data processing infrastructure and manage the deployment process. What should you do?
A. Use Compute Engine startup scripts to pull container images, and use gcloud commands to provision the infrastructure.
B. Use Cloud Build to schedule a job using Terraform build to provision the infrastructure and launch with the most current container images.
Most Voted
C. Use GKE to autoscale containers, and use gcloud commands to provision the infrastructure.
D. Use Dataflow to provision the data pipeline, and use Cloud Scheduler to run the job.
Correct Answer: B
You need ads data to serve AI models and historical data for analytics. Longtail and outlier data points need to be identified. You want to cleanse
the data in near-real time before running it through AI models. What should you do?
A. Use Cloud Storage as a data warehouse, shell scripts for processing, and BigQuery to create views for desired datasets.
B. Use Dataflow to identify longtail and outlier data points programmatically, with BigQuery as a sink. Most Voted
C. Use BigQuery to ingest, prepare, and then analyze the data, and then run queries to create views.
D. Use Cloud Composer to identify longtail and outlier data points, and then output a usable dataset to BigQuery.
Correct Answer: B
You are collecting IoT sensor data from millions of devices across the world and storing the data in BigQuery. Your access pattern is based on
recent data, filtered by location_id and device_version with the following query:
You want to optimize your queries for cost and performance. How should you structure your data?
B. Partition table data by create_date, cluster table data by location_id, and device_version. Most Voted
Correct Answer: B
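For reference, the DDL behind the voted answer can be sketched with the BigQuery client library. This is a minimal sketch only; the project, dataset, and table names are illustrative assumptions, and create_date is assumed to be a TIMESTAMP column.
# Sketch: create a table partitioned by create_date and clustered by location_id and device_version.
from google.cloud import bigquery

client = bigquery.Client()
ddl = """
CREATE TABLE `my_project.iot.sensor_readings`
PARTITION BY DATE(create_date)
CLUSTER BY location_id, device_version AS
SELECT * FROM `my_project.iot.sensor_readings_staging`
"""
client.query(ddl).result()  # wait for the DDL job to complete
Queries that filter on create_date, location_id, and device_version then prune partitions and benefit from clustering, which is what drives the cost and performance gain.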
A live TV show asks viewers to cast votes using their mobile phones. The event generates a large volume of data during a 3-minute period. You
are in charge of the "Voting infrastructure" and must ensure that the platform can handle the load and that all votes are processed. You must
display partial results while voting is open. After voting closes, you need to count the votes exactly once while optimizing cost. What should you
do?
B. Create a Cloud SQL for PostgreSQL database with high availability (HA) configuration and multiple read replicas.
C. Write votes to a Pub/Sub topic and have Cloud Functions subscribe to it and write votes to BigQuery.
D. Write votes to a Pub/Sub topic and load into both Bigtable and BigQuery via a Dataflow pipeline. Query Bigtable for real-time results and
BigQuery for later analysis. Shut down the Bigtable instance when voting concludes. Most Voted
Correct Answer: D
A shipping company has live package-tracking data that is sent to an Apache Kafka stream in real time. This is then loaded into BigQuery.
Analysts in your company want to query the tracking data in BigQuery to analyze geospatial trends in the lifecycle of a package. The table was
originally created with ingest-date partitioning. Over time, the query processing time has increased. You need to copy all the data to a new
A. Re-create the table using data partitioning on the package delivery date.
D. Tier older data onto Cloud Storage files and create a BigQuery table using Cloud Storage as an external data source.
Correct Answer: B
You are designing a data mesh on Google Cloud with multiple distinct data engineering teams building data products. The typical data curation
design pattern consists of landing files in Cloud Storage, transforming raw data in Cloud Storage and BigQuery datasets, and storing the final
curated data product in BigQuery datasets. You need to configure Dataplex to ensure that each team can access only the assets needed to build
their data products. You also need to ensure that teams can easily share the curated data product. What should you do?
A. 1. Create a single Dataplex virtual lake and create a single zone to contain landing, raw, and curated data.
B. 1. Create a single Dataplex virtual lake and create a single zone to contain landing, raw, and curated data.
2. Build separate assets for each data product within the zone.
C. 1. Create a Dataplex virtual lake for each data product, and create a single zone to contain landing, raw, and curated data.
2. Provide the data engineering teams with full access to the virtual lake assigned to their data product.
D. 1. Create a Dataplex virtual lake for each data product, and create multiple zones for landing, raw, and curated data.
2. Provide the data engineering teams with full access to the virtual lake assigned to their data product. Most Voted
Correct Answer: D
You are using BigQuery with a multi-region dataset that includes a table with the daily sales volumes. This table is updated multiple times per day.
You need to protect your sales table in case of regional failures with a recovery point objective (RPO) of less than 24 hours, while keeping costs to a minimum. What should you do?
A. Schedule a daily export of the table to a Cloud Storage dual or multi-region bucket. Most Voted
D. Modify ETL job to load the data into both the current and another backup region.
Correct Answer: A
You are troubleshooting your Dataflow pipeline that processes data from Cloud Storage to BigQuery. You have discovered that the Dataflow worker
nodes cannot communicate with one another. Your networking team relies on Google Cloud network tags to define firewall rules. You need to
identify the issue while following Google-recommended networking security practices. What should you do?
A. Determine whether your Dataflow pipeline has a custom network tag set.
B. Determine whether there is a firewall rule set to allow traffic on TCP ports 12345 and 12346 for the Dataflow network tag. Most Voted
C. Determine whether there is a firewall rule set to allow traffic on TCP ports 12345 and 12346 on the subnet used by Dataflow workers.
D. Determine whether your Dataflow pipeline is deployed with the external IP address option enabled.
Correct Answer: B
Your company's customer_order table in BigQuery stores the order history for 10 million customers, with a table size of 10 PB. You need to create
a dashboard for the support team to view the order history. The dashboard has two filters, country_name and username. Both are string data types
in the BigQuery table. When a filter is applied, the dashboard fetches the order history from the table and displays the query results. However, the
dashboard is slow to show the results when applying the filters to the following query:
How should you redesign the BigQuery table to support faster access?
Correct Answer: A
You have a Standard Tier Memorystore for Redis instance deployed in a production environment. You need to simulate a Redis instance failover in
the most accurate disaster recovery situation, and ensure that the failover has no impact on production data. What should you do?
A. Create a Standard Tier Memorystore for Redis instance in the development environment. Initiate a manual failover by using the limited-data-loss data protection mode.
B. Create a Standard Tier Memorystore for Redis instance in a development environment. Initiate a manual failover by using the force-data-loss data protection mode.
C. Increase one replica to Redis instance in production environment. Initiate a manual failover by using the force-data-loss data protection
mode.
D. Initiate a manual failover by using the limited-data-loss data protection mode to the Memorystore for Redis instance in the production
environment.
Correct Answer: B
You are administering a BigQuery dataset that uses a customer-managed encryption key (CMEK). You need to share the dataset with a partner
organization that does not have access to your CMEK. What should you do?
A. Provide the partner organization a copy of your CMEKs to decrypt the data.
B. Export the tables to parquet files to a Cloud Storage bucket and grant the storageinsights.viewer role on the bucket to the partner
organization.
C. Copy the tables you need to share to a dataset without CMEKs. Create an Analytics Hub listing for this dataset. Most Voted
D. Create an authorized view that contains the CMEK to decrypt the data when accessed.
Correct Answer: C
You are developing an Apache Beam pipeline to extract data from a Cloud SQL instance by using JdbcIO. You have two projects running in Google
Cloud. The pipeline will be deployed and executed on Dataflow in Project A. The Cloud SQL instance is running in Project B and does not have a
public IP address. After deploying the pipeline, you noticed that the pipeline failed to extract data from the Cloud SQL instance due to connection
failure. You verified that VPC Service Controls and shared VPC are not in use in these projects. You want to resolve this error while ensuring that
the data does not go through the public internet. What should you do?
A. Set up VPC Network Peering between Project A and Project B. Add a firewall rule to allow the peered subnet range to access all instances
B. Turn off the external IP addresses on the Dataflow worker. Enable Cloud NAT in Project A.
C. Add the external IP addresses of the Dataflow worker as authorized networks in the Cloud SQL instance.
D. Set up VPC Network Peering between Project A and Project B. Create a Compute Engine instance without external IP address in Project B
on the peered subnet to serve as a proxy server to the Cloud SQL database.
Correct Answer: A
You have a BigQuery table that contains customer data, including sensitive information such as names and addresses. You need to share the
customer data with your data analytics and consumer support teams securely. The data analytics team needs to access the data of all the
customers, but must not be able to access the sensitive data. The consumer support team needs access to all data columns, but must not be able
to access customers that no longer have active contracts. You enforced these requirements by using an authorized dataset and policy tags. After
implementing these steps, the data analytics team reports that they still have access to the sensitive columns. You need to ensure that the data
analytics team does not have access to restricted data. What should you do? (Choose two.)
A. Create two separate authorized datasets; one for the data analytics team and another for the consumer support team.
B. Ensure that the data analytics team members do not have the Data Catalog Fine-Grained Reader role for the policy tags.
C. Replace the authorized dataset with an authorized view. Use row-level security and apply filter_expression to limit data access.
D. Remove the bigquery.dataViewer role from the data analytics team on the authorized datasets.
Correct Answer: E
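For context on the row-level security mentioned in option C, the consumer support team's restriction to active customers can be expressed as a BigQuery row access policy. A minimal sketch, assuming hypothetical table, column, and group names:
from google.cloud import bigquery

client = bigquery.Client()
ddl = """
CREATE OR REPLACE ROW ACCESS POLICY active_customers_only
ON `my_project.crm.customer_data`
GRANT TO ('group:consumer-support@example.com')
FILTER USING (contract_status = 'ACTIVE')
"""
client.query(ddl).result()  # rows that fail the filter become invisible to that group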
You have a Cloud SQL for PostgreSQL instance in Region1 with one read replica in Region2 and another read replica in Region3. An unexpected
event in Region1 requires that you perform disaster recovery by promoting a read replica in Region2. You need to ensure that your application has
the same database capacity available before you switch over the connections. What should you do?
A. Enable zonal high availability on the primary instance. Create a new read replica in a new region.
B. Create a cascading read replica from the existing read replica in Region3.
C. Create two new read replicas from the new primary instance, one in Region3 and one in a new region. Most Voted
D. Create a new read replica in Region1, promote the new read replica to be the primary instance, and enable zonal high availability.
Correct Answer: C
You orchestrate ETL pipelines by using Cloud Composer. One of the tasks in the Apache Airflow directed acyclic graph (DAG) relies on a third-
party service. You want to be notified when the task does not succeed. What should you do?
A. Assign a function with notification logic to the on_retry_callback parameter for the operator responsible for the task at risk.
B. Configure a Cloud Monitoring alert on the sla_missed metric associated with the task at risk to trigger a notification.
C. Assign a function with notification logic to the on_failure_callback parameter for the operator responsible for the task at risk. Most Voted
D. Assign a function with notification logic to the sla_miss_callback parameter for the operator responsible for the task at risk.
Correct Answer: C
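A minimal sketch of the voted answer: attaching a notification function to on_failure_callback on the operator that calls the third-party service. The DAG id, task, and notification logic are illustrative assumptions.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def notify_failure(context):
    # Airflow passes a context dict with the task instance, run id, exception, etc.
    ti = context["task_instance"]
    print(f"Task {ti.task_id} in DAG {ti.dag_id} failed: {context.get('exception')}")

def call_third_party_service():
    pass  # placeholder for the real third-party call

with DAG(dag_id="etl_with_third_party", start_date=datetime(2024, 1, 1), schedule_interval="@daily", catchup=False) as dag:
    PythonOperator(
        task_id="call_third_party_service",
        python_callable=call_third_party_service,
        on_failure_callback=notify_failure,  # runs only when the task fails
    )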
You are migrating your on-premises data warehouse to BigQuery. One of the upstream data sources resides on a MySQL database that runs in
your on-premises data center with no public IP addresses. You want to ensure that the data ingestion into BigQuery is done securely and does not go through the public internet. What should you do?
A. Update your existing on-premises ETL tool to write to BigQuery by using the BigQuery Open Database Connectivity (ODBC) driver. Set up the
proxy parameter in the simba.googlebigqueryodbc.ini file to point to your data center’s NAT gateway.
B. Use Datastream to replicate data from your on-premises MySQL database to BigQuery. Set up Cloud Interconnect between your on-
premises data center and Google Cloud. Use Private connectivity as the connectivity method and allocate an IP address range within your
VPC network to the Datastream connectivity configuration. Use Server-only as the encryption type when setting up the connection profile in Datastream.
C. Use Datastream to replicate data from your on-premises MySQL database to BigQuery. Use Forward-SSH tunnel as the connectivity method
to establish a secure tunnel between Datastream and your on-premises MySQL database through a tunnel server in your on-premises data
center. Use None as the encryption type when setting up the connection profile in Datastream.
D. Use Datastream to replicate data from your on-premises MySQL database to BigQuery. Gather Datastream public IP addresses of the
Google Cloud region that will be used to set up the stream. Add those IP addresses to the firewall allowlist of your on-premises data center.
Use IP Allowlisting as the connectivity method and Server-only as the encryption type when setting up the connection profile in Datastream.
Correct Answer: B
You store and analyze your relational data in BigQuery on Google Cloud with all data that resides in US regions. You also have a variety of object
stores across Microsoft Azure and Amazon Web Services (AWS), also in US regions. You want to query all your data in BigQuery daily with as little movement of data as possible. What should you do?
A. Use BigQuery Data Transfer Service to load files from Azure and AWS into BigQuery.
B. Create a Dataflow pipeline to ingest files from Azure and AWS to BigQuery.
C. Load files from AWS and Azure to Cloud Storage with Cloud Shell gsutil rsync arguments.
D. Use the BigQuery Omni functionality and BigLake tables to query files in Azure and AWS. Most Voted
Correct Answer: D
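A minimal sketch of the voted answer: a BigLake external table over objects in S3 through BigQuery Omni. The connection name, AWS region, dataset, and S3 URI are illustrative assumptions, and the dataset is assumed to already exist in the matching Omni region.
from google.cloud import bigquery

client = bigquery.Client()
ddl = """
CREATE EXTERNAL TABLE `my_project.aws_omni_dataset.orders`
WITH CONNECTION `aws-us-east-1.my-aws-connection`
OPTIONS (
  format = 'PARQUET',
  uris = ['s3://my-company-orders/2024/*']
)
"""
client.query(ddl).result()  # the external table can then be queried without copying the files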
You have a variety of files in Cloud Storage that your data science team wants to use in their models. Currently, users do not have a method to
explore, cleanse, and validate the data in Cloud Storage. You are looking for a low code solution that can be used by your data science team to
quickly cleanse and explore data within Cloud Storage. What should you do?
A. Provide the data science team access to Dataflow to create a pipeline to prepare and validate the raw data and load data into BigQuery for
data exploration.
B. Create an external table in BigQuery and use SQL to transform the data as necessary. Provide the data science team access to the external
C. Load the data into BigQuery and use SQL to transform the data as necessary. Provide the data science team access to staging tables to
D. Provide the data science team access to Dataprep to prepare, validate, and explore the data within Cloud Storage. Most Voted
Correct Answer: D
You are building an ELT solution in BigQuery by using Dataform. You need to perform uniqueness and null value checks on your final tables. What
Correct Answer: C
A web server sends click events to a Pub/Sub topic as messages. The web server includes an eventTimestamp attribute in the messages, which is
the time when the click occurred. You have a Dataflow streaming job that reads from this Pub/Sub topic through a subscription, applies some
transformations, and writes the result to another Pub/Sub topic for use by the advertising department. The advertising department needs to
receive each message within 30 seconds of the corresponding click occurrence, but they report receiving the messages late. Your Dataflow job's
system lag is about 5 seconds, and the data freshness is about 40 seconds. Inspecting a few messages shows no more than 1 second lag between
their eventTimestamp and publishTime. What is the problem and what should you do?
A. The advertising department is causing delays when consuming the messages. Work with the advertising department to fix this.
B. Messages in your Dataflow job are taking more than 30 seconds to process. Optimize your job or increase the number of workers to fix this.
C. Messages in your Dataflow job are processed in less than 30 seconds, but your job cannot keep up with the backlog in the Pub/Sub
subscription. Optimize your job or increase the number of workers to fix this. Most Voted
D. The web server is not pushing messages fast enough to Pub/Sub. Work with the web server team to fix this.
Correct Answer: C
Your organization stores customer data in an on-premises Apache Hadoop cluster in Apache Parquet format. Data is processed on a daily basis
by Apache Spark jobs that run on the cluster. You are migrating the Spark jobs and Parquet data to Google Cloud. BigQuery will be used in future
transformation pipelines so you need to ensure that your data is available in BigQuery. You want to use managed services, while minimizing ETL
data processing changes and overhead costs. What should you do?
A. Migrate your data to Cloud Storage and migrate the metadata to Dataproc Metastore (DPMS). Refactor Spark pipelines to write and read
data on Cloud Storage, and run them on Dataproc Serverless. Most Voted
B. Migrate your data to Cloud Storage and register the bucket as a Dataplex asset. Refactor Spark pipelines to write and read data on Cloud
C. Migrate your data to BigQuery. Refactor Spark pipelines to write and read data on BigQuery, and run them on Dataproc Serverless.
D. Migrate your data to BigLake. Refactor Spark pipelines to write and read data on Cloud Storage, and run them on Dataproc on Compute
Engine.
Correct Answer: A
Your organization has two Google Cloud projects, project A and project B. In project A, you have a Pub/Sub topic that receives data from
confidential sources. Only the resources in project A should be able to access the data in that topic. You want to ensure that project B and any
future project cannot access data in the project A topic. What should you do?
A. Add firewall rules in project A so only traffic from the VPC in project A is permitted.
B. Configure VPC Service Controls in the organization with a perimeter around project A. Most Voted
C. Use Identity and Access Management conditions to ensure that only users and service accounts in project A can access resources in
project A.
D. Configure VPC Service Controls in the organization with a perimeter around the VPC of project A.
Correct Answer: B
You stream order data by using a Dataflow pipeline, and write the aggregated result to Memorystore. You provisioned a Memorystore for Redis
instance with Basic Tier, 4 GB capacity, which is used by 40 clients for read-only access. You are expecting the number of read-only clients to
increase significantly to a few hundred and you need to be able to support the demand. You want to ensure that read and write access availability
is not impacted, and any changes you make can be deployed quickly. What should you do?
A. Create a new Memorystore for Redis instance with Standard Tier. Set capacity to 4 GB and read replica to No read replicas (high availability only). Delete the old instance.
B. Create a new Memorystore for Redis instance with Standard Tier. Set capacity to 5 GB and create multiple read replicas. Delete the old instance.
C. Create a new Memorystore for Memcached instance. Set a minimum of three nodes, and memory per node to 4 GB. Modify the Dataflow
pipeline and all clients to use the Memcached instance. Delete the old instance.
D. Create multiple new Memorystore for Redis instances with Basic Tier (4 GB capacity). Modify the Dataflow pipeline and new clients to use
all instances.
Correct Answer: B
You have a streaming pipeline that ingests data from Pub/Sub in production. You need to update this streaming pipeline with improved business
logic. You need to ensure that the updated pipeline reprocesses the previous two days of delivered Pub/Sub messages. What should you do?
(Choose two.)
Correct Answer: D
You currently use a SQL-based tool to visualize your data stored in BigQuery. The data visualizations require the use of outer joins and analytic
functions. Visualizations must be based on data that is no less than 4 hours old. Business users are complaining that the visualizations are too
slow to generate. You want to improve the performance of the visualization queries while minimizing the maintenance overhead of the data
A. Create materialized views with the allow_non_incremental_definition option set to true for the visualization queries. Specify the
max_staleness parameter to 4 hours and the enable_refresh parameter to true. Reference the materialized views in the data visualization tool.
Most Voted
B. Create views for the visualization queries. Reference the views in the data visualization tool.
C. Create a Cloud Function instance to export the visualization query results as parquet files to a Cloud Storage bucket. Use Cloud Scheduler
to trigger the Cloud Function every 4 hours. Reference the parquet files in the data visualization tool.
D. Create materialized views for the visualization queries. Use the incremental updates capability of BigQuery materialized views to handle
changed data automatically. Reference the materialized views in the data visualization tool.
Correct Answer: A
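A minimal sketch of the voted answer, assuming hypothetical base tables and a visualization query that uses an outer join; the max_staleness interval syntax follows the BigQuery documentation.
from google.cloud import bigquery

client = bigquery.Client()
ddl = """
CREATE MATERIALIZED VIEW `my_project.reporting.sales_summary_mv`
OPTIONS (
  enable_refresh = true,
  refresh_interval_minutes = 60,
  max_staleness = INTERVAL "4:0:0" HOUR TO SECOND,
  allow_non_incremental_definition = true
)
AS
SELECT o.region, SUM(o.amount) AS total_sales, COUNT(r.refund_id) AS refunds
FROM `my_project.sales.orders` AS o
LEFT OUTER JOIN `my_project.sales.refunds` AS r ON o.order_id = r.order_id
GROUP BY o.region
"""
client.query(ddl).result()  # the visualization tool then reads from the materialized view
Setting allow_non_incremental_definition permits outer joins and analytic functions in the view definition, while max_staleness keeps results within the 4-hour freshness requirement without refreshing on every read.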
You need to modernize your existing on-premises data strategy. Your organization currently uses:
• Apache Hadoop clusters for processing multiple large data sets, including on-premises Hadoop Distributed File System (HDFS) for data
replication.
• Apache Airflow to orchestrate hundreds of ETL pipelines with thousands of job steps.
You need to set up a new architecture in Google Cloud that can handle your Hadoop workloads and requires minimal changes to your existing orchestration processes. What should you do?
A. Use Bigtable for your large workloads, with connections to Cloud Storage to handle any HDFS use cases. Orchestrate your pipelines with
Cloud Composer.
B. Use Dataproc to migrate Hadoop clusters to Google Cloud, and Cloud Storage to handle any HDFS use cases. Orchestrate your pipelines with Cloud Composer.
C. Use Dataproc to migrate Hadoop clusters to Google Cloud, and Cloud Storage to handle any HDFS use cases. Convert your ETL pipelines to
Dataflow.
D. Use Dataproc to migrate your Hadoop clusters to Google Cloud, and Cloud Storage to handle any HDFS use cases. Use Cloud Data Fusion to
Correct Answer: B
You recently deployed several data processing jobs into your Cloud Composer 2 environment. You notice that some tasks are failing in Apache
Airflow. On the monitoring dashboard, you see an increase in the total workers memory usage, and there were worker pod evictions. You need to
C. Increase the maximum number of workers and reduce worker concurrency. Most Voted
Correct Answer: C
You are on the data governance team and are implementing security requirements to deploy resources. You need to ensure that resources are
limited to only the europe-west3 region. You want to follow Google-recommended practices. What should you do?
B. Deploy resources with Terraform and implement a variable validation rule to ensure that the region is set to the europe-west3 region for all
resources.
D. Create a Cloud Function to monitor all resources created and automatically destroy the ones created outside the europe-west3 region.
Correct Answer: A
You are a BigQuery admin supporting a team of data consumers who run ad hoc queries and downstream reporting in tools such as Looker. All
data and users are combined under a single organizational project. You recently noticed some slowness in query results and want to troubleshoot
where the slowdowns are occurring. You think that there might be some job queuing or slot contention occurring as users run jobs, which slows
down access to results. You need to investigate the query job information and determine where performance is being affected. What should you
do?
A. Use slot reservations for your project to ensure that you have enough query processing capacity and are able to allocate available slots to
B. Use Cloud Monitoring to view BigQuery metrics and set up alerts that let you know when a certain percentage of slots were used.
C. Use available administrative resource charts to determine how slots are being used and how jobs are performing over time. Run a query on
D. Use Cloud Logging to determine if any users or downstream consumers are changing or deleting access grants on tagged resources.
Correct Answer: C
You migrated a data backend for an application that serves 10 PB of historical product data for analytics. Only the last known state for a product,
which is about 10 GB of data, needs to be served through an API to the other applications. You need to choose a cost-effective persistent storage
solution that can accommodate the analytics requirements and the API performance of up to 1000 queries per second (QPS) with less than 1
3. Serve the last state data directly from BigQuery to the API.
B. 1. Store the products as a collection in Firestore with each product having a set of historical changes.
3. Serve the last state data directly from Firestore to the API.
2. In a separate table, store the last state of the product after every product change.
3. Serve the last state data directly from Cloud SQL to the API.
2. In a Cloud SQL table, store the last state of the product after every product change.
3. Serve the last state data directly from Cloud SQL to the API. Most Voted
Correct Answer: D
You want to schedule a number of sequential load and transformation jobs. Data files will be added to a Cloud Storage bucket by an upstream
process. There is no fixed schedule for when the new data arrives. Next, a Dataproc job is triggered to perform some transformations and write
the data to BigQuery. You then need to run additional transformation jobs in BigQuery. The transformation jobs are different for every table. These
jobs might take hours to complete. You need to determine the most efficient and maintainable workflow to process hundreds of tables and provide
the freshest data to your end users. What should you do?
A. 1. Create an Apache Airflow directed acyclic graph (DAG) in Cloud Composer with sequential tasks by using the Cloud Storage, Dataproc, and BigQuery operators.
2. Use a single shared DAG for all tables that need to go through the pipeline.
B. 1. Create an Apache Airflow directed acyclic graph (DAG) in Cloud Composer with sequential tasks by using the Cloud Storage, Dataproc, and BigQuery operators.
2. Create a separate DAG for each table that needs to go through the pipeline.
C. 1. Create an Apache Airflow directed acyclic graph (DAG) in Cloud Composer with sequential tasks by using the Dataproc and BigQuery
operators.
2. Use a single shared DAG for all tables that need to go through the pipeline.
3. Use a Cloud Storage object trigger to launch a Cloud Function that triggers the DAG.
D. 1. Create an Apache Airflow directed acyclic graph (DAG) in Cloud Composer with sequential tasks by using the Dataproc and BigQuery
operators.
2. Create a separate DAG for each table that needs to go through the pipeline.
3. Use a Cloud Storage object trigger to launch a Cloud Function that triggers the DAG. Most Voted
Correct Answer: D
You are deploying a MySQL database workload onto Cloud SQL. The database must be able to scale up to support several readers from various
geographic regions. The database must be highly available and meet low RTO and RPO requirements, even in the event of a regional outage. You
need to ensure that interruptions to the readers are minimal during a database failover. What should you do?
A. Create a highly available Cloud SQL instance in region A. Create a highly available read replica in region B. Scale up read workloads by
creating cascading read replicas in multiple regions. Backup the Cloud SQL instances to a multi-regional Cloud Storage bucket. Restore the
Cloud SQL backup to a new instance in another region when Region A is down.
B. Create a highly available Cloud SQL instance in region A. Scale up read workloads by creating read replicas in multiple regions. Promote
C. Create a highly available Cloud SQL instance in region A. Create a highly available read replica in region B. Scale up read workloads by
creating cascading read replicas in multiple regions. Promote the read replica in region B when region A is down. Most Voted
D. Create a highly available Cloud SQL instance in region A. Scale up read workloads by creating read replicas in the same region. Failover to
the standby Cloud SQL instance when the primary instance fails.
Correct Answer: C
You are planning to load some of your existing on-premises data into BigQuery on Google Cloud. You want to either stream or batch-load data,
depending on your use case. Additionally, you want to mask some sensitive data before loading into BigQuery. You need to do this in a
programmatic way while keeping costs to a minimum. What should you do?
A. Use Cloud Data Fusion to design your pipeline, use the Cloud DLP plug-in to de-identify data within your pipeline, and then move the data
into BigQuery.
B. Use the BigQuery Data Transfer Service to schedule your migration. After the data is populated in BigQuery, use the connection to the Cloud
Data Loss Prevention (Cloud DLP) API to de-identify the necessary data.
C. Create your pipeline with Dataflow through the Apache Beam SDK for Python, customizing separate options within your code for streaming,
batch processing, and Cloud DLP. Select BigQuery as your data sink. Most Voted
Correct Answer: C
You want to encrypt the customer data stored in BigQuery. You need to implement per-user crypto-deletion on data stored in your tables. You want
to adopt native features in Google Cloud to avoid custom solutions. What should you do?
A. Implement Authenticated Encryption with Associated Data (AEAD) BigQuery functions while storing your data in BigQuery. Most Voted
B. Create a customer-managed encryption key (CMEK) in Cloud KMS. Associate the key to the table while creating the table.
C. Create a customer-managed encryption key (CMEK) in Cloud KMS. Use the key to encrypt data before storing in BigQuery.
D. Encrypt your data during ingestion by using a cryptographic library supported by your ETL pipeline.
Correct Answer: A
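A minimal sketch of the voted answer: keep one AEAD keyset per user, encrypt that user's rows with it, and crypto-delete a user by deleting their keyset row. Table and column names are illustrative assumptions.
from google.cloud import bigquery

client = bigquery.Client()
script = """
-- One keyset per user
CREATE TABLE IF NOT EXISTS `my_project.security.user_keysets` AS
SELECT user_id, KEYS.NEW_KEYSET('AEAD_AES_GCM_256') AS keyset
FROM `my_project.crm.users`;

-- Encrypt the sensitive column with each user's keyset
CREATE OR REPLACE TABLE `my_project.crm.customers_encrypted` AS
SELECT c.user_id,
       AEAD.ENCRYPT(k.keyset, c.email, CAST(c.user_id AS STRING)) AS email_encrypted
FROM `my_project.crm.customers` AS c
JOIN `my_project.security.user_keysets` AS k USING (user_id);

-- Crypto-delete one user: their ciphertext becomes unrecoverable
DELETE FROM `my_project.security.user_keysets` WHERE user_id = 'user_123';
"""
client.query(script).result()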
The data analyst team at your company uses BigQuery for ad-hoc queries and scheduled SQL pipelines in a Google Cloud project with a slot
reservation of 2000 slots. However, with the recent introduction of hundreds of new non time-sensitive SQL pipelines, the team is encountering
frequent quota errors. You examine the logs and notice that approximately 1500 queries are being triggered concurrently during peak time. You
A. Increase the slot capacity of the project with baseline as 0 and maximum reservation size as 3000.
B. Update SQL pipelines to run as a batch query, and run ad-hoc queries as interactive query jobs. Most Voted
C. Increase the slot capacity of the project with baseline as 2000 and maximum reservation size as 3000.
D. Update SQL pipelines and ad-hoc queries to run as interactive query jobs.
Correct Answer: B
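A minimal sketch of the voted answer: submitting the non-time-sensitive pipeline queries with BATCH priority so they queue for idle slots instead of competing with interactive work. The query text is an illustrative placeholder.
from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.QueryJobConfig(priority=bigquery.QueryPriority.BATCH)
job = client.query(
    "SELECT DATE(event_ts) AS day, COUNT(*) AS events FROM `my_project.analytics.events` GROUP BY day",
    job_config=job_config,
)
for row in job.result():  # blocks until the batch job is scheduled and finishes
    print(row.day, row.events)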
You are designing a data mesh on Google Cloud by using Dataplex to manage data in BigQuery and Cloud Storage. You want to simplify data asset
permissions. You are creating a customer virtual lake with two user groups:
• Data engineers, which require full data lake access
• Analytic users, which require access to curated data
You need to assign access rights to these two groups. What should you do?
A. 1. Grant the dataplex.dataOwner role to the data engineer group on the customer data lake.
2. Grant the dataplex.dataReader role to the analytic user group on the customer curated zone. Most Voted
B. 1. Grant the dataplex.dataReader role to the data engineer group on the customer data lake.
2. Grant the dataplex.dataOwner to the analytic user group on the customer curated zone.
C. 1. Grant the bigquery.dataOwner role on BigQuery datasets and the storage.objectCreator role on Cloud Storage buckets to data engineers.
2. Grant the bigquery.dataViewer role on BigQuery datasets and the storage.objectViewer role on Cloud Storage buckets to analytic users.
D. 1. Grant the bigquery.dataViewer role on BigQuery datasets and the storage.objectViewer role on Cloud Storage buckets to data engineers.
2. Grant the bigquery.dataOwner role on BigQuery datasets and the storage.objectEditor role on Cloud Storage buckets to analytic users.
Correct Answer: A
You are designing the architecture of your application to store data in Cloud Storage. Your application consists of pipelines that read data from a
Cloud Storage bucket that contains raw data, and write the data to a second bucket after processing. You want to design an architecture with
Cloud Storage resources that are capable of being resilient if a Google Cloud regional failure occurs. You want to minimize the recovery point
objective (RPO) if a failure occurs, with no impact on applications that use the stored data. What should you do?
B. Adopt two regional Cloud Storage buckets, and update your application to write the output on both buckets.
C. Adopt a dual-region Cloud Storage bucket, and enable turbo replication in your architecture. Most Voted
D. Adopt two regional Cloud Storage buckets, and create a daily task to copy from one bucket to the other.
Correct Answer: C
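A minimal sketch of the voted answer, following the documented client-library pattern for creating a dual-region bucket with turbo replication; the bucket name and the NAM4 dual-region are illustrative assumptions.
from google.cloud import storage
from google.cloud.storage.constants import RPO_ASYNC_TURBO

client = storage.Client()
bucket = client.bucket("my-pipeline-output-bucket")
bucket.rpo = RPO_ASYNC_TURBO  # turbo replication targets a 15-minute replication RPO
client.create_bucket(bucket, location="NAM4")  # US dual-region (us-central1 + us-east1)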
You have designed an Apache Beam processing pipeline that reads from a Pub/Sub topic and writes to a Cloud Storage bucket. The topic has a
message retention duration of one day. You need to select a bucket location and processing strategy to prevent data loss in case of a regional outage. What should you do?
2. Monitor Dataflow metrics with Cloud Monitoring to determine when an outage occurs.
3. Seek the subscription back in time by 15 minutes to recover the acknowledged messages.
2. Monitor Dataflow metrics with Cloud Monitoring to determine when an outage occurs.
3. Seek the subscription back in time by 60 minutes to recover the acknowledged messages.
2. Monitor Dataflow metrics with Cloud Monitoring to determine when an outage occurs.
3. Seek the subscription back in time by one day to recover the acknowledged messages.
4. Start the Dataflow job in a secondary region and write in a bucket in the same region.
2. Monitor Dataflow metrics with Cloud Monitoring to determine when an outage occurs.
3. Seek the subscription back in time by 60 minutes to recover the acknowledged messages.
Correct Answer: D
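A minimal sketch of the "seek the subscription back in time" step that the options share, assuming a hypothetical project and subscription. Replaying already-acknowledged messages requires that the subscription or topic retains them.
from datetime import datetime, timedelta, timezone
from google.cloud import pubsub_v1
from google.protobuf import timestamp_pb2

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "clicks-sub")

seek_time = timestamp_pb2.Timestamp()
seek_time.FromDatetime(datetime.now(timezone.utc) - timedelta(minutes=60))

# Marks retained messages published after seek_time as unacknowledged so they are redelivered.
subscriber.seek(request={"subscription": subscription_path, "time": seek_time})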
You are preparing data that your machine learning team will use to train a model using BigQueryML. They want to predict the price per square foot
of real estate. The training data has a column for the price and a column for the number of square feet. Another feature column called ‘feature1’
contains null values due to missing data. You want to replace the nulls with zeros to keep more data points. Which query should you use?
A.
B.
C.
Most Voted
D.
Correct Answer: C
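The answer options are images in the source dump. A query in the spirit of the voted answer replaces NULLs in feature1 with zero so those rows stay in the training set; the dataset and table names are illustrative assumptions.
from google.cloud import bigquery

client = bigquery.Client()
sql = """
SELECT
  price,
  square_feet,
  IFNULL(feature1, 0) AS feature1
FROM `my_project.real_estate.training_data`
"""
training_rows = client.query(sql).result()  # NULLs in feature1 are now zeros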
Different teams in your organization store customer and performance data in BigQuery. Each team needs to keep full control of their collected
data, be able to query data within their projects, and be able to exchange their data with other teams. You need to implement an organization-wide
solution, while minimizing operational tasks and costs. What should you do?
A. Ask each team to create authorized views of their data. Grant the bigquery.jobUser role to each team.
B. Create a BigQuery scheduled query to replicate all customer data into team projects.
C. Ask each team to publish their data in Analytics Hub. Direct the other teams to subscribe to them. Most Voted
D. Enable each team to create materialized views of the data they need to access in their projects.
Correct Answer: C
You are developing a model to identify the factors that lead to sales conversions for your customers. You have completed processing your data.
You want to continue through the model development lifecycle. What should you do next?
C. Delineate what data will be used for testing and what will be used for training the model. Most Voted
D. Test and evaluate your model on your curated data to determine how well the model performs.
Correct Answer: C
You have one BigQuery dataset which includes customers’ street addresses. You want to retrieve all occurrences of street addresses from the dataset. What should you do?
A. Write a SQL query in BigQuery by using REGEXP_CONTAINS on all tables in your dataset to find rows where the word “street” appears.
B. Create a deep inspection job on each table in your dataset with Cloud Data Loss Prevention and create an inspection template that includes the STREET_ADDRESS infoType.
C. Create a discovery scan configuration on your organization with Cloud Data Loss Prevention and create an inspection template that includes the STREET_ADDRESS infoType.
D. Create a de-identification job in Cloud Data Loss Prevention and use the masking transformation.
Correct Answer: B
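A minimal sketch of a Cloud DLP inspection that uses the STREET_ADDRESS infoType, as in the voted answer; the project id and sample text are illustrative assumptions.
from google.cloud import dlp_v2

dlp = dlp_v2.DlpServiceClient()
response = dlp.inspect_content(
    request={
        "parent": "projects/my-project/locations/global",
        "inspect_config": {
            "info_types": [{"name": "STREET_ADDRESS"}],
            "include_quote": True,
        },
        "item": {"value": "Ship to 1600 Amphitheatre Parkway, Mountain View, CA 94043"},
    }
)
for finding in response.result.findings:
    print(finding.info_type.name, finding.quote)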
Your company operates in three domains: airlines, hotels, and ride-hailing services. Each domain has two teams: analytics and data science,
which create data assets in BigQuery with the help of a central data platform team. However, as each domain is evolving rapidly, the central data
platform team is becoming a bottleneck. This is causing delays in deriving insights from data, and resulting in stale data when pipelines are not
kept up to date. You need to design a data mesh architecture by using Dataplex to eliminate the bottleneck. What should you do?
A. 1. Create one lake for each team. Inside each lake, create one zone for each domain.
2. Attach each of the BigQuery datasets created by the individual teams as assets to the respective zone.
3. Have the central data platform team manage all zones’ data assets.
B. 1. Create one lake for each team. Inside each lake, create one zone for each domain.
2. Attach each of the BigQuery datasets created by the individual teams as assets to the respective zone.
C. 1. Create one lake for each domain. Inside each lake, create one zone for each team.
2. Attach each of the BigQuery datasets created by the individual teams as assets to the respective zone.
3. Direct each domain to manage their own lake’s data assets. Most Voted
D. 1. Create one lake for each domain. Inside each lake, create one zone for each team.
2. Attach each of the BigQuery datasets created by the individual teams as assets to the respective zone.
3. Have the central data platform team manage all lakes’ data assets.
Correct Answer: C
You have an inventory of VM data stored in the BigQuery table. You want to prepare the data for regular reporting in the most cost-effective way.
You need to exclude VM rows with fewer than 8 vCPU in your report. What should you do?
A. Create a view with a filter to drop rows with fewer than 8 vCPU, and use the UNNEST operator. Most Voted
B. Create a materialized view with a filter to drop rows with fewer than 8 vCPU, and use the WITH common table expression.
C. Create a view with a filter to drop rows with fewer than 8 vCPU, and use the WITH common table expression.
D. Use Dataflow to batch process and write the result to another BigQuery table.
Correct Answer: A
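A minimal sketch of the voted answer, assuming the inventory table keeps machine details in a repeated field; the table and field names are illustrative assumptions.
from google.cloud import bigquery

client = bigquery.Client()
ddl = """
CREATE OR REPLACE VIEW `my_project.inventory.vms_8_vcpu_or_more` AS
SELECT i.vm_name, spec.vcpu_count, spec.memory_gb
FROM `my_project.inventory.vm_inventory` AS i,
     UNNEST(i.machine_specs) AS spec
WHERE spec.vcpu_count >= 8
"""
client.query(ddl).result()  # reports read from the view; no extra storage is billed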
Your team is building a data lake platform on Google Cloud. As a part of the data foundation design, you are planning to store all the raw data in
Cloud Storage. You are expecting to ingest approximately 25 GB of data a day and your billing department is worried about the increasing cost of storage. What should you do?
A. Create the bucket with the Autoclass storage class feature. Most Voted
B. Create an Object Lifecycle Management policy to modify the storage class for data older than 30 days to nearline, 90 days to coldline, and 365 days to archive.
C. Create an Object Lifecycle Management policy to modify the storage class for data older than 30 days to coldline, 90 days to nearline, and 365 days to archive.
D. Create an Object Lifecycle Management policy to modify the storage class for data older than 30 days to nearline, 45 days to coldline, and 60 days to archive.
Correct Answer: A
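For comparison with options B through D, the Object Lifecycle Management approach can be configured with the Cloud Storage client library. A minimal sketch, assuming a hypothetical bucket name and the age thresholds from option B:
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-data-lake-raw")  # hypothetical bucket
bucket.add_lifecycle_set_storage_class_rule("NEARLINE", age=30)
bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=90)
bucket.add_lifecycle_set_storage_class_rule("ARCHIVE", age=365)
bucket.patch()  # persists the updated lifecycle configuration
Autoclass (the voted answer) removes the need to pick these thresholds by moving objects between classes automatically based on access patterns.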
Your company's data platform ingests CSV file dumps of booking and user profile data from upstream sources into Cloud Storage. The data
analyst team wants to join these datasets on the email field available in both the datasets to perform analysis. However, personally identifiable
information (PII) should not be accessible to the analysts. You need to de-identify the email field in both the datasets before loading them into BigQuery. What should you do?
A. 1. Create a pipeline to de-identify the email field by using recordTransformations in Cloud Data Loss Prevention (Cloud DLP) with masking
2. Load the booking and user profile data into a BigQuery table.
B. 1. Create a pipeline to de-identify the email field by using recordTransformations in Cloud DLP with format-preserving encryption with FFX
2. Load the booking and user profile data into a BigQuery table. Most Voted
C. 1. Load the CSV files from Cloud Storage into a BigQuery table, and enable dynamic data masking.
2. Create a policy tag with the email mask as the data masking rule.
4. Assign the Identity and Access Management bigquerydatapolicy.maskedReader role for the BigQuery tables to the analysts.
D. 1. Load the CSV files from Cloud Storage into a BigQuery table, and enable dynamic data masking.
2. Create a policy tag with the default masking value as the data masking rule.
4. Assign the Identity and Access Management bigquerydatapolicy.maskedReader role for the BigQuery tables to the analysts
Correct Answer: B
You have important legal hold documents in a Cloud Storage bucket. You need to ensure that these documents are not deleted or modified. What
B. Set a retention policy. Set the default storage class to Archive for long-term digital preservation.
D. Enable the Object Versioning feature. Create a copy in a bucket in a different region.
Correct Answer: A
You are designing a data warehouse in BigQuery to analyze sales data for a telecommunication service provider. You need to create a data model
for customers, products, and subscriptions. All customers, products, and subscriptions can be updated monthly, but you must maintain a
historical record of all data. You plan to use the visualization layer for current and historical reporting. You need to ensure that the data model is
A. Create a normalized model with tables for each entity. Use snapshots before updates to track historical data.
B. Create a normalized model with tables for each entity. Keep all input files in a Cloud Storage bucket to track historical data.
C. Create a denormalized model with nested and repeated fields. Update the table and use snapshots to track historical data.
D. Create a denormalized, append-only model with nested and repeated fields. Use the ingestion timestamp to track historical data. Most Voted
Correct Answer: D
You are deploying a batch pipeline in Dataflow. This pipeline reads data from Cloud Storage, transforms the data, and then writes the data into
BigQuery. The security team has enabled an organizational constraint in Google Cloud, requiring all Compute Engine instances to use only internal IP addresses. What should you do?
A. Ensure that your workers have network tags to access Cloud Storage and BigQuery. Use Dataflow with only internal IP addresses.
B. Ensure that the firewall rules allow access to Cloud Storage and BigQuery. Use Dataflow with only internal IPs.
C. Create a VPC Service Controls perimeter that contains the VPC network and add Dataflow, Cloud Storage, and BigQuery as allowed services
D. Ensure that Private Google Access is enabled in the subnetwork. Use Dataflow with only internal IP addresses. Most Voted
Correct Answer: D
You are running a Dataflow streaming pipeline, with Streaming Engine and Horizontal Autoscaling enabled. You have set the maximum number of
workers to 1000. The input of your pipeline is Pub/Sub messages with notifications from Cloud Storage. One of the pipeline transforms reads CSV
files and emits an element for every CSV line. The job performance is low, the pipeline is using only 10 workers, and you notice that the autoscaler is not spinning up additional workers. What should you do?
B. Change the pipeline code, and introduce a Reshuffle step to prevent fusion. Most Voted
D. Use Dataflow Prime, and enable Right Fitting to increase the worker resources.
Correct Answer: B
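A minimal, locally runnable sketch of the voted answer: inserting beam.Reshuffle() after the fan-out step (one CSV payload expanding to many lines) breaks fusion so the expanded elements can be redistributed across workers. The transforms are illustrative stand-ins for the real pipeline.
import apache_beam as beam

def expand_lines(csv_blob):
    # Fan-out: one input element (a whole CSV payload) becomes many elements (its lines).
    return csv_blob.splitlines()

with beam.Pipeline() as p:
    (
        p
        | "Create" >> beam.Create(["id,value\n1,a\n2,b", "id,value\n3,c\n4,d"])
        | "ExpandLines" >> beam.FlatMap(expand_lines)
        | "BreakFusion" >> beam.Reshuffle()  # prevents fusion so downstream work can be rebalanced
        | "Process" >> beam.Map(str.upper)
        | "Print" >> beam.Map(print)
    )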
You have an Oracle database deployed in a VM as part of a Virtual Private Cloud (VPC) network. You want to replicate and continuously
synchronize 50 tables to BigQuery. You want to minimize the need to manage infrastructure. What should you do?
A. Deploy Apache Kafka in the same VPC network, use Kafka Connect Oracle Change Data Capture (CDC), and Dataflow to stream the Kafka
topic to BigQuery.
B. Create a Pub/Sub subscription to write to BigQuery directly. Deploy the Debezium Oracle connector to capture changes in the Oracle
C. Deploy Apache Kafka in the same VPC network, use Kafka Connect Oracle change data capture (CDC), and the Kafka Connect Google BigQuery Sink connector.
D. Create a Datastream service from Oracle to BigQuery, use a private connectivity configuration to the same VPC network, and a connection
Correct Answer: D
You are deploying an Apache Airflow directed acyclic graph (DAG) in a Cloud Composer 2 instance. You have incoming files in a Cloud Storage
bucket that the DAG processes, one file at a time. The Cloud Composer instance is deployed in a subnetwork with no Internet access. Instead of
running the DAG based on a schedule, you want to run the DAG in a reactive way every time a new file is received. What should you do?
A. 1. Enable Private Google Access in the subnetwork, and set up Cloud Storage notifications to a Pub/Sub topic.
B. 1. Enable the Cloud Composer API, and set up Cloud Storage notifications to trigger a Cloud Function.
2. Write a Cloud Function instance to call the DAG by using the Cloud Composer API and the web server URL.
C. 1. Enable the Airflow REST API, and set up Cloud Storage notifications to trigger a Cloud Function instance.
3. Write a Cloud Function that connects to the Cloud Composer cluster through the PSC endpoint. Most Voted
D. 1. Enable the Airflow REST API, and set up Cloud Storage notifications to trigger a Cloud Function instance.
2. Write a Cloud Function instance to call the DAG by using the Airflow REST API and the web server URL.
Correct Answer: C
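Whichever way the Cloud Function reaches the Airflow web server (through a PSC endpoint in the voted option, or the web server URL in option D), the REST call itself typically follows the documented pattern below: obtain credentials, then POST a DAG run to the Airflow 2 stable REST API. This is a hedged sketch; the URL, DAG ID, and event fields are placeholders.

```python
# Sketch of a Cloud Function (triggered by a Cloud Storage notification) that starts
# a DAG run through the Airflow 2 stable REST API of a Cloud Composer 2 environment.
import google.auth
from google.auth.transport.requests import AuthorizedSession

AIRFLOW_URL = "https://<composer-airflow-endpoint>"   # placeholder (PSC endpoint or web server URL)
DAG_ID = "process_incoming_file"                      # placeholder

def trigger_dag(event, context):
    """Background Cloud Function entry point for a Cloud Storage finalize event."""
    credentials, _ = google.auth.default(
        scopes=["https://www.googleapis.com/auth/cloud-platform"]
    )
    session = AuthorizedSession(credentials)
    endpoint = f"{AIRFLOW_URL}/api/v1/dags/{DAG_ID}/dagRuns"
    # Pass the uploaded object name to the DAG as run configuration.
    response = session.post(endpoint, json={"conf": {"object_name": event["name"]}})
    response.raise_for_status()
```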
You are planning to use Cloud Storage as part of your data lake solution. The Cloud Storage bucket will contain objects ingested from external
systems. Each object will be ingested once, and the access patterns of individual objects will be random. You want to minimize the cost of storing
and retrieving these objects. You want to ensure that any cost optimization efforts are transparent to the users and applications. What should you
do?
B. Create a Cloud Storage bucket with an Object Lifecycle Management policy to transition objects from Standard to Coldline storage class if
C. Create a Cloud Storage bucket with an Object Lifecycle Management policy to transition objects from Standard to Coldline storage class if
D. Create two Cloud Storage buckets. Use the Standard storage class for the first bucket, and use the Coldline storage class for the second
bucket. Migrate objects from the first bucket to the second bucket after 30 days.
Correct Answer: A
You have several different file type data sources, such as Apache Parquet and CSV. You want to store the data in Cloud Storage. You need to set
up an object sink for your data that allows you to use your own encryption keys. You want to use a GUI-based solution. What should you do?
B. Use Cloud Data Fusion to move files into Cloud Storage. Most Voted
Correct Answer: B
Your business users need a way to clean and prepare data before using the data for analysis. Your business users are less technically savvy and
prefer to work with graphical user interfaces to define their transformations. After the data has been transformed, the business users want to
perform their analysis directly in a spreadsheet. You need to recommend a solution that they can use. What should you do?
A. Use Dataprep to clean the data, and write the results to BigQuery. Analyze the data by using Connected Sheets. Most Voted
B. Use Dataprep to clean the data, and write the results to BigQuery. Analyze the data by using Looker Studio.
C. Use Dataflow to clean the data, and write the results to BigQuery. Analyze the data by using Connected Sheets.
D. Use Dataflow to clean the data, and write the results to BigQuery. Analyze the data by using Looker Studio.
Correct Answer: A
• One project runs production jobs that have strict completion time SLAs. These are high priority jobs that must have the required compute
resources available when needed. These jobs generally never go below a 300 slot utilization, but occasionally spike up by an additional 500 slots.
• The other project is for users to run ad-hoc analytical queries. This project generally never uses more than 200 slots at a time. You want these
ad-hoc queries to be billed based on how much data users scan rather than by slot capacity.
You need to ensure that both projects have the appropriate compute resources available. What should you do?
A. Create a single Enterprise Edition reservation for both projects. Set a baseline of 300 slots. Enable autoscaling up to 700 slots.
B. Create two reservations, one for each of the projects. For the SLA project, use an Enterprise Edition with a baseline of 300 slots and enable
autoscaling up to 500 slots. For the ad-hoc project, configure on-demand billing. Most Voted
C. Create two Enterprise Edition reservations, one for each of the projects. For the SLA project, set a baseline of 300 slots and enable
autoscaling up to 500 slots. For the ad-hoc project, set a reservation baseline of 0 slots and set the ignore idle slots flag to False.
D. Create two Enterprise Edition reservations, one for each of the projects. For the SLA project, set a baseline of 800 slots. For the ad-hoc
Correct Answer: B
You want to migrate your existing Teradata data warehouse to BigQuery. You want to move the historical data to BigQuery by using the most
efficient method that requires the least amount of programming, but local storage space on your existing data warehouse is limited. What should
you do?
A. Use BigQuery Data Transfer Service by using the Java Database Connectivity (JDBC) driver with FastExport connection. Most Voted
B. Create a Teradata Parallel Transporter (TPT) export script to export the historical data, and import to BigQuery by using the bq command-
line tool.
C. Use BigQuery Data Transfer Service with the Teradata Parallel Transporter (TPT) tbuild utility.
D. Create a script to export the historical data, and upload in batches to Cloud Storage. Set up a BigQuery Data Transfer Service instance from
Correct Answer: A
You are on the data governance team and are implementing security requirements. You need to encrypt all your data in BigQuery by using an
encryption key managed by your team. You must implement a mechanism to generate and store encryption material only on your on-premises
hardware security module (HSM). You want to rely on Google managed solutions. What should you do?
A. Create the encryption key in the on-premises HSM, and import it into a Cloud Key Management Service (Cloud KMS) key. Associate the
B. Create the encryption key in the on-premises HSM and link it to a Cloud External Key Manager (Cloud EKM) key. Associate the created
Cloud KMS key while creating the BigQuery resources. Most Voted
C. Create the encryption key in the on-premises HSM, and import it into Cloud Key Management Service (Cloud HSM) key. Associate the
D. Create the encryption key in the on-premises HSM. Create BigQuery resources and encrypt data while ingesting them into BigQuery.
Correct Answer: B
You maintain ETL pipelines. You notice that a streaming pipeline running on Dataflow is taking a long time to process incoming data, which
causes output delays. You also notice that the pipeline graph was automatically optimized by Dataflow and merged into one step. You want to
identify where the potential bottleneck is occurring. What should you do?
A. Insert a Reshuffle operation after each processing step, and monitor the execution details in the Dataflow console. Most Voted
B. Insert output sinks after each key processing step, and observe the writing throughput of each block.
C. Log debug information in each ParDo function, and analyze the logs at execution time.
D. Verify that the Dataflow service accounts have appropriate permissions to write the processed data to the output sinks.
Correct Answer: A
You are running your BigQuery project in the on-demand billing model and are executing a change data capture (CDC) process that ingests data.
The CDC process loads 1 GB of data every 10 minutes into a temporary table, and then performs a merge into a 10 TB target table. This process is
very scan intensive and you want to explore options to enable a predictable cost model. You need to create a BigQuery reservation based on
utilization information gathered from BigQuery Monitoring and apply the reservation to the CDC process. What should you do?
C. Create a BigQuery reservation for the service account running the job.
Correct Answer: D
You are designing a fault-tolerant architecture to store data in a regional BigQuery dataset. You need to ensure that your application is able to
recover from a corruption event in your tables that occurred within the past seven days. You want to adopt managed services with the lowest RPO
B. Export the data from BigQuery into a new table that excludes the corrupted data
Correct Answer: A
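The answer text for this question is truncated in this dump, but the scenario (recovering a table's state from a corruption event within the past seven days with a low RPO) maps naturally to BigQuery time travel. For illustration, a minimal sketch that copies the pre-corruption state into a recovery table; dataset and table names are placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Recover the table's contents as they were two days ago (within the
# seven-day time travel window) into a new recovery table.
restore_sql = """
CREATE OR REPLACE TABLE `my-project.sales.orders_recovered` AS
SELECT *
FROM `my-project.sales.orders`
  FOR SYSTEM_TIME AS OF TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 2 DAY)
"""
client.query(restore_sql).result()  # blocks until the recovery copy is created
```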
You are building a streaming Dataflow pipeline that ingests noise level data from hundreds of sensors placed near construction sites across a
city. The sensors measure noise level every ten seconds, and send that data to the pipeline when levels reach above 70 dBA. You need to detect
the average noise level from a sensor when data is received for a duration of more than 30 minutes, but the window ends when no data has been
D. Use tumbling windows with a 15-minute window and a 15-minute .withAllowedLateness operator.
Correct Answer: A
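The stem and most options are truncated here, but the described behavior (a window that stays open while data keeps arriving and closes after a period of inactivity) matches Beam session windows. A hedged sketch follows, assuming a 15-minute inactivity gap; the function and key names are hypothetical.

```python
import apache_beam as beam
from apache_beam.transforms import window

def average_noise(readings):
    readings = list(readings)  # dBA values for one sensor within a session window
    return sum(readings) / len(readings)

def sessionize(keyed_readings):
    """keyed_readings: PCollection of (sensor_id, dBA) elements with event timestamps."""
    return (
        keyed_readings
        # A session window stays open while data keeps arriving and closes
        # after 15 minutes of inactivity (assumed gap for this sketch).
        | "SessionWindow" >> beam.WindowInto(window.Sessions(gap_size=15 * 60))
        | "GroupBySensor" >> beam.GroupByKey()
        | "Average" >> beam.MapTuple(lambda sensor, vals: (sensor, average_noise(vals)))
    )
```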
You are creating a data model in BigQuery that will hold retail transaction data. Your two largest tables, sales_transaction_header and
sales_transaction_line, have a tightly coupled immutable relationship. These tables are rarely modified after load and are frequently joined when
queried. You need to model the sales_transaction_header and sales_transaction_line tables to improve the performance of data analytics queries.
A. Create a sales_transaction table that holds the sales_transaction_header information as rows and the sales_transaction_line rows as
B. Create a sales_transaction table that holds the sales_transaction_header and sales_transaction_line information as rows, duplicating the
C. Create a sales_transaction table that stores the sales_transaction_header and sales_transaction_line data as a JSON data type.
D. Create separate sales_transaction_header and sales_transaction_line tables and, when querying, specify the sales_transaction_line first in
Correct Answer: A
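Option A (truncated above) appears to describe nesting the line items inside the header rows. For illustration, a sketch of such a nested, repeated layout and a join-free analytics query over it; all table and column names are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Header columns plus line items as a repeated STRUCT (nested and repeated field).
ddl = """
CREATE TABLE `my-project.retail.sales_transaction` (
  transaction_id STRING,
  transaction_ts TIMESTAMP,
  customer_id STRING,
  lines ARRAY<STRUCT<sku STRING, quantity INT64, unit_price NUMERIC>>
)
"""
client.query(ddl).result()

# Analytics over the nested line items without any join.
query = """
SELECT t.transaction_id, SUM(l.quantity * l.unit_price) AS total
FROM `my-project.retail.sales_transaction` AS t, UNNEST(t.lines) AS l
GROUP BY t.transaction_id
"""
for row in client.query(query).result():
    print(row.transaction_id, row.total)
```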
You created a new version of a Dataflow streaming data ingestion pipeline that reads from Pub/Sub and writes to BigQuery. The previous version
of the pipeline that runs in production uses a 5-minute window for processing. You need to deploy the new version of the pipeline without losing
any data, creating inconsistencies, or increasing the processing latency by more than 10 minutes. What should you do?
B. Snapshot the old pipeline, stop the old pipeline, and then start the new pipeline from the snapshot.
C. Drain the old pipeline, then start the new pipeline. Most Voted
Correct Answer: C
Your organization's data assets are stored in BigQuery, Pub/Sub, and a PostgreSQL instance running on Compute Engine. Because there are
multiple domains and diverse teams using the data, teams in your organization are unable to discover existing data assets. You need to design a
solution to improve data discoverability while keeping development and configuration efforts to a minimum. What should you do?
A. Use Data Catalog to automatically catalog BigQuery datasets. Use Data Catalog APIs to manually catalog Pub/Sub topics and PostgreSQL
tables.
B. Use Data Catalog to automatically catalog BigQuery datasets and Pub/Sub topics. Use Data Catalog APIs to manually catalog PostgreSQL
C. Use Data Catalog to automatically catalog BigQuery datasets and Pub/Sub topics. Use custom connectors to manually catalog PostgreSQL
tables.
D. Use custom connectors to manually catalog BigQuery datasets, Pub/Sub topics, and PostgreSQL tables.
Correct Answer: B
You need to create a SQL pipeline. The pipeline runs an aggregate SQL transformation on a BigQuery table every two hours and appends the result
to another existing BigQuery table. You need to configure the pipeline to retry if errors occur. You want the pipeline to send an email notification
A. Use the BigQueryUpsertTableOperator in Cloud Composer, set the retry parameter to three, and set the email_on_failure parameter to true.
B. Use the BigQueryInsertJobOperator in Cloud Composer, set the retry parameter to three, and set the email_on_failure parameter to true.
Most Voted
C. Create a BigQuery scheduled query to run the SQL transformation with schedule options that repeats every two hours, and enable email
notifications.
D. Create a BigQuery scheduled query to run the SQL transformation with schedule options that repeats every two hours, and enable
notification to Pub/Sub topic. Use Pub/Sub and Cloud Functions to send an email after three failed executions.
Correct Answer: B
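A minimal sketch of the voted option as a Cloud Composer DAG, assuming the Google provider package for Airflow is installed. The project, dataset, SQL, and email address are placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="aggregate_sales_every_two_hours",
    schedule_interval="0 */2 * * *",          # every two hours
    start_date=datetime(2024, 1, 1),
    catchup=False,
    default_args={
        "retries": 3,                          # retry on errors
        "email": ["data-team@example.com"],    # placeholder
        "email_on_failure": True,              # email notification on failure
    },
) as dag:
    aggregate = BigQueryInsertJobOperator(
        task_id="run_aggregate_query",
        configuration={
            "query": {
                "query": """
                    SELECT region, SUM(amount) AS total
                    FROM `my-project.sales.transactions`
                    GROUP BY region
                """,
                "destinationTable": {
                    "projectId": "my-project",
                    "datasetId": "sales",
                    "tableId": "region_totals",
                },
                "writeDisposition": "WRITE_APPEND",  # append to the existing table
                "useLegacySql": False,
            }
        },
    )
```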
You are monitoring your organization’s data lake hosted on BigQuery. The ingestion pipelines read data from Pub/Sub and write the data into
tables on BigQuery. After a new version of the ingestion pipelines is deployed, the daily stored data increased by 50%. The volumes of data in
Pub/Sub remained the same and only some tables had their daily partition data size doubled. You need to investigate and fix the cause of the data
A. 1. Check for duplicate rows in the BigQuery tables that have the daily partition data size doubled.
3. Share the deduplication script with the other operational teams to reuse if this occurs to other tables.
3. Check for errors in Cloud Logging during the day of the release of the new pipelines.
4. If no errors, restore the BigQuery tables to their content before the last release by using time travel.
C. 1. Check for duplicate rows in the BigQuery tables that have the daily partition data size doubled.
3. Use Cloud Monitoring to determine when the identified Dataflow jobs started and the pipeline code version.
4. When more than one pipeline ingests data into a table, stop all versions except the latest one. Most Voted
2. Restore the BigQuery tables to their content before the last release by using time travel.
3. Restart the Dataflow jobs and replay the messages by seeking the subscription to the timestamp of the release.
Correct Answer: C
You have a BigQuery dataset named “customers”. All tables will be tagged by using a Data Catalog tag template named “gdpr”. The template
contains one mandatory field, “has_sensitive_data”, with a boolean value. All employees must be able to do a simple search and find tables in the
dataset that have either true or false in the “has_sensitive_data” field. However, only the Human Resources (HR) group should be able to see the
data inside the tables for which “has_sensitive_data” is true. You give the all employees group the bigquery.metadataViewer and
bigquery.connectionUser roles on the dataset. You want to minimize configuration overhead. What should you do next?
A. Create the “gdpr” tag template with private visibility. Assign the bigquery.dataViewer role to the HR group on the tables that contain
sensitive data.
B. Create the “gdpr” tag template with private visibility. Assign the datacatalog.tagTemplateViewer role on this tag to the all employees group,
and assign the bigquery.dataViewer role to the HR group on the tables that contain sensitive data.
C. Create the “gdpr” tag template with public visibility. Assign the bigquery.dataViewer role to the HR group on the tables that contain
D. Create the “gdpr” tag template with public visibility. Assign the datacatalog.tagTemplateViewer role on this tag to the all employees group,
and assign the bigquery.dataViewer role to the HR group on the tables that contain sensitive data.
Correct Answer: C
You are creating the CI/CD cycle for the code of the directed acyclic graphs (DAGs) running in Cloud Composer. Your team has two Cloud
Composer instances: one instance for development and another instance for production. Your team is using a Git repository to maintain and
develop the code of the DAGs. You want to deploy the DAGs automatically to Cloud Composer when a certain tag is pushed to the Git repository.
A. 1. Use Cloud Build to copy the code of the DAG to the Cloud Storage bucket of the development instance for DAG testing.
2. If the tests pass, use Cloud Build to copy the code to the bucket of the production instance. Most Voted
B. 1. Use Cloud Build to build a container with the code of the DAG and the KubernetesPodOperator to deploy the code to the Google
2. If the tests pass, use the KubernetesPodOperator to deploy the container to the GKE cluster of the production instance.
C. 1. Use Cloud Build to build a container and the KubernetesPodOperator to deploy the code of the DAG to the Google Kubernetes Engine
2. If the tests pass, copy the code to the Cloud Storage bucket of the production instance.
D. 1. Use Cloud Build to copy the code of the DAG to the Cloud Storage bucket of the development instance for DAG testing.
2. If the tests pass, use Cloud Build to build a container with the code of the DAG and the KubernetesPodOperator to deploy the container to
Correct Answer: A
You have a BigQuery table that ingests data directly from a Pub/Sub subscription. The ingested data is encrypted with a Google-managed
encryption key. You need to meet a new organization policy that requires you to use keys from a centralized Cloud Key Management Service
(Cloud KMS) project to encrypt data at rest. What should you do?
A. Use Cloud KMS encryption key with Dataflow to ingest the existing Pub/Sub subscription to the existing BigQuery table.
B. Create a new BigQuery table by using customer-managed encryption keys (CMEK), and migrate the data from the old BigQuery table.
Most Voted
C. Create a new Pub/Sub topic with CMEK and use the existing BigQuery table by using Google-managed encryption key.
D. Create a new BigQuery table and Pub/Sub topic by using customer-managed encryption keys (CMEK), and migrate the data from the old
BigQuery table.
Correct Answer: B
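For illustration, a sketch of the voted option: creating a new BigQuery table protected by a key from the centralized Cloud KMS project and copying the existing data into it. The KMS key resource name, project, and table IDs are placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Key from the centralized Cloud KMS project (placeholder resource name).
kms_key = "projects/central-kms-project/locations/us/keyRings/bq-ring/cryptoKeys/bq-key"

schema = [
    bigquery.SchemaField("event_id", "STRING"),
    bigquery.SchemaField("payload", "STRING"),
]
table = bigquery.Table("my-project.ingest.events_cmek", schema=schema)
table.encryption_configuration = bigquery.EncryptionConfiguration(kms_key_name=kms_key)
client.create_table(table)

# Migrate the data from the old Google-managed-key table into the CMEK-protected table.
copy_job = client.copy_table(
    "my-project.ingest.events",       # old table
    "my-project.ingest.events_cmek",  # new CMEK-protected table
)
copy_job.result()
```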
You created an analytics environment on Google Cloud so that your data scientist team can explore data without impacting the on-premises
Apache Hadoop solution. The data in the on-premises Hadoop Distributed File System (HDFS) cluster is in Optimized Row Columnar (ORC)
formatted files with multiple columns of Hive partitioning. The data scientist team needs to be able to explore the data in a similar way to how they
used the on-premises HDFS cluster, with SQL on the Hive query engine. You need to choose the most cost-effective storage and processing
A. Import the ORC files to Bigtable tables for the data scientist team.
B. Import the ORC files to BigQuery tables for the data scientist team.
C. Copy the ORC files on Cloud Storage, then deploy a Dataproc cluster for the data scientist team.
D. Copy the ORC files on Cloud Storage, then create external BigQuery tables for the data scientist team. Most Voted
Correct Answer: D
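A hedged sketch of the voted option: after copying the ORC files to Cloud Storage in their Hive-style directory layout, define an external BigQuery table with Hive partitioning so the team can query the files with SQL. Bucket paths and table IDs are placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client()

# ORC files copied to Cloud Storage under a Hive-style layout, e.g.
# gs://my-lake/orders/dt=2024-01-01/region=us/file.orc (placeholder paths).
external_config = bigquery.ExternalConfig("ORC")
external_config.source_uris = ["gs://my-lake/orders/*"]

hive_opts = bigquery.HivePartitioningOptions()
hive_opts.mode = "AUTO"                               # infer partition keys from the layout
hive_opts.source_uri_prefix = "gs://my-lake/orders/"
external_config.hive_partitioning = hive_opts

table = bigquery.Table("my-project.lake.orders_external")
table.external_data_configuration = external_config
client.create_table(table)

# The data scientist team can now explore the files with SQL, paying only for queries.
rows = client.query(
    "SELECT region, COUNT(*) AS n FROM `my-project.lake.orders_external` GROUP BY region"
).result()
```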
You are designing a Dataflow pipeline for a batch processing job. You want to mitigate multiple zonal failures at job submission time. What should
you do?
A. Submit duplicate pipelines in two different zones by using the --zone flag.
D. Create an Eventarc trigger to resubmit the job in case of zonal failure when submitting the job.
Correct Answer: C
You are designing a real-time system for a ride hailing app that identifies areas with high demand for rides to effectively reroute available drivers
to meet the demand. The system ingests data from multiple sources to Pub/Sub, processes the data, and stores the results for visualization and
analysis in real-time dashboards. The data sources include driver location updates every 5 seconds and app-based booking events from riders.
The data processing involves real-time aggregation of supply and demand data for the last 30 seconds, every 2 seconds, and storing the results in
A. Group the data by using a tumbling window in a Dataflow pipeline, and write the aggregated data to Memorystore.
B. Group the data by using a hopping window in a Dataflow pipeline, and write the aggregated data to Memorystore. Most Voted
C. Group the data by using a session window in a Dataflow pipeline, and write the aggregated data to BigQuery.
D. Group the data by using a hopping window in a Dataflow pipeline, and write the aggregated data to BigQuery.
Correct Answer: B
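In Beam, a hopping window is expressed as a sliding window: here, a 30-second window that starts every 2 seconds, matching the stated aggregation requirement. The Memorystore sink below is a hypothetical DoFn that assumes the Redis client library is available on the workers; all hosts and key names are placeholders.

```python
import apache_beam as beam
from apache_beam.transforms import window

class WriteToMemorystore(beam.DoFn):
    """Hypothetical sink: writes one aggregate per key to a Memorystore (Redis) instance."""
    def setup(self):
        import redis  # assumes the redis client library is installed on workers
        self._client = redis.Redis(host="10.0.0.3", port=6379)  # placeholder internal IP

    def process(self, element):
        cell_id, ride_count = element
        self._client.set(f"demand:{cell_id}", ride_count)

def aggregate_demand(events):
    """events: PCollection of (cell_id, 1) booking/location events with event timestamps."""
    return (
        events
        # Hopping window in Beam terms: a 30-second window that starts every 2 seconds.
        | "HoppingWindow" >> beam.WindowInto(window.SlidingWindows(size=30, period=2))
        | "CountPerCell" >> beam.CombinePerKey(sum)
        | "Write" >> beam.ParDo(WriteToMemorystore())
    )
```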
Your car factory is pushing machine measurements as messages into a Pub/Sub topic in your Google Cloud project. A Dataflow streaming job
that you wrote with the Apache Beam SDK reads these messages, sends an acknowledgment to Pub/Sub, applies some custom business logic in a
DoFn instance, and writes the result to BigQuery. You want to ensure that if your business logic fails on a message, the message will be sent to a
Pub/Sub topic that you want to monitor for alerting purposes. What should you do?
A. Enable retaining of acknowledged messages in your Pub/Sub pull subscription. Use Cloud Monitoring to monitor the
B. Use an exception handling block in your Dataflow’s DoFn code to push the messages that failed to be transformed through a side output
and to a new Pub/Sub topic. Use Cloud Monitoring to monitor the topic/num_unacked_messages_by_region metric on this new topic.
Most Voted
C. Enable dead lettering in your Pub/Sub pull subscription, and specify a new Pub/Sub topic as the dead letter topic. Use Cloud Monitoring to
D. Create a snapshot of your Pub/Sub pull subscription. Use Cloud Monitoring to monitor the snapshot/num_messages metric on this
snapshot.
Correct Answer: B
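A minimal sketch of the voted option: wrap the business logic in a try/except block inside the DoFn and route failures to a tagged side output that is published to a dedicated Pub/Sub topic for alerting. The transform logic, topic names, and schema are placeholders.

```python
import apache_beam as beam
from apache_beam import pvalue

FAILED_TAG = "failed"

def transform(message: bytes) -> dict:
    """Hypothetical business logic; raises on malformed input."""
    text = message.decode("utf-8")
    if not text:
        raise ValueError("empty message")
    return {"payload": text}

class ApplyBusinessLogic(beam.DoFn):
    def process(self, message):
        try:
            yield transform(message)
        except Exception:
            # Route the raw message to a side output instead of failing the bundle.
            yield pvalue.TaggedOutput(FAILED_TAG, message)

def build(p):
    results = (
        p
        | "ReadMeasurements" >> beam.io.ReadFromPubSub(topic="projects/my-project/topics/measurements")
        | "Process" >> beam.ParDo(ApplyBusinessLogic()).with_outputs(FAILED_TAG, main="ok")
    )
    results.ok | "WriteToBQ" >> beam.io.WriteToBigQuery(
        "my-project:factory.measurements", schema="payload:STRING"
    )
    # Failed messages go to a dedicated topic that Cloud Monitoring can alert on.
    results[FAILED_TAG] | "WriteFailed" >> beam.io.WriteToPubSub(
        topic="projects/my-project/topics/failed-measurements"
    )
```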
You want to store your team’s shared tables in a single dataset to make data easily accessible to various analysts. You want to make this data
readable but unmodifiable by analysts. At the same time, you want to provide the analysts with individual workspaces in the same project, where
they can create and store tables for their own use, without the tables being accessible by other analysts. What should you do?
A. Give analysts the BigQuery Data Viewer role at the project level. Create one other dataset, and give the analysts the BigQuery Data Editor
B. Give analysts the BigQuery Data Viewer role at the project level. Create a dataset for each analyst, and give each analyst the BigQuery Data
C. Give analysts the BigQuery Data Viewer role on the shared dataset. Create a dataset for each analyst, and give each analyst the BigQuery
Data Editor role at the dataset level for their assigned dataset. Most Voted
D. Give analysts the BigQuery Data Viewer role on the shared dataset. Create one other dataset and give the analysts the BigQuery Data Editor
Correct Answer: C
You are running a streaming pipeline with Dataflow and are using hopping windows to group the data as the data arrives. You noticed that some
data is arriving late but is not being marked as late data, which is resulting in inaccurate aggregations downstream. You need to find a solution
that allows you to capture the late data in the appropriate window. What should you do?
A. Use watermarks to define the expected data arrival window. Allow late data as it arrives. Most Voted
B. Change your windowing function to tumbling windows to avoid overlapping window periods.
C. Change your windowing function to session windows to define your windows based on certain activity.
D. Expand your hopping window so that the late data has more time to arrive within the grouping.
Correct Answer: A
You work for a large ecommerce company. You store your customers' order data in Bigtable. You have a garbage collection policy set to delete the
data after 30 days and the number of versions is set to 1. When the data analysts run a query to report total customer spending, the analysts
sometimes see customer data that is older than 30 days. You need to ensure that the analysts do not see customer data older than 30 days while
A. Set the expiring values of the column families to 29 days and keep the number of versions to 1.
B. Use a timestamp range filter in the query to fetch the customer's data for a specific range. Most Voted
C. Schedule a job daily to scan the data in the table and delete data older than 30 days.
D. Set the expiring values of the column families to 30 days and set the number of versions to 2.
Correct Answer: B
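A sketch of the voted option: apply a timestamp range filter when reading, so cells older than 30 days are excluded even if garbage collection has not yet physically removed them. Project, instance, and table IDs are placeholders.

```python
import datetime

from google.cloud import bigtable
from google.cloud.bigtable import row_filters

client = bigtable.Client(project="my-project")               # placeholder
table = client.instance("orders-instance").table("orders")   # placeholders

# Only return cells written within the last 30 days, regardless of whether
# garbage collection has already removed older versions.
cutoff = datetime.datetime.utcnow() - datetime.timedelta(days=30)
recent_only = row_filters.TimestampRangeFilter(row_filters.TimestampRange(start=cutoff))

for row in table.read_rows(filter_=recent_only):
    for family, columns in row.cells.items():
        for column, cells in columns.items():
            print(row.row_key, family, column, cells[0].value)
```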
You are using a Dataflow streaming job to read messages from a message bus that does not support exactly-once delivery. Your job then applies
some transformations, and loads the result into BigQuery. You want to ensure that your data is being streamed into BigQuery with exactly-once
delivery semantics. You expect your ingestion throughput into BigQuery to be about 1.5 GB per second. What should you do?
A. Use the BigQuery Storage Write API and ensure that your target BigQuery table is regional. Most Voted
B. Use the BigQuery Storage Write API and ensure that your target BigQuery table is multiregional.
C. Use the BigQuery Streaming API and ensure that your target BigQuery table is regional.
D. Use the BigQuery Streaming API and ensure that your target BigQuery table is multiregional.
Correct Answer: A
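In a Beam/Dataflow pipeline, the Storage Write API is selected through the write method on the BigQuery sink; a hedged sketch follows, assuming a Beam SDK version where this method enum is available. The (regional) destination table and schema are placeholders.

```python
import apache_beam as beam

def write_events(events):
    """events: PCollection of dicts matching the destination schema."""
    return events | "WriteExactlyOnce" >> beam.io.WriteToBigQuery(
        table="my-project:analytics.events",      # placeholder regional table
        schema="event_id:STRING,ts:TIMESTAMP,payload:STRING",
        # BigQuery Storage Write API, which supports exactly-once stream-level semantics.
        method=beam.io.WriteToBigQuery.Method.STORAGE_WRITE_API,
        create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
        write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
    )
```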
You have created an external table for Apache Hive partitioned data that resides in a Cloud Storage bucket, which contains a large number of files.
You notice that queries against this table are slow. You want to improve the performance of these queries. What should you do?
A. Change the storage class of the Hive partitioned data objects from Coldline to Standard.
B. Create an individual external table for each Hive partition by using a common table name prefix. Use wildcard table queries to reference the
partitioned data.
C. Upgrade the external table to a BigLake table. Enable metadata caching for the table. Most Voted
D. Migrate the Hive partitioned data objects to a multi-region Cloud Storage bucket.
Correct Answer: C
You have a network of 1000 sensors. The sensors generate time series data: one metric per sensor per second, along with a timestamp. You
already have 1 TB of data, and expect the data to grow by 1 GB every day. You need to access this data in two ways. The first access pattern
requires retrieving the metric from one specific sensor stored at a specific timestamp, with a median single-digit millisecond latency. The second
access pattern requires running complex analytic queries on the data, including joins, once a day. How should you store this data?
A. Store your data in BigQuery. Concatenate the sensor ID and timestamp, and use it as the primary key.
B. Store your data in Bigtable. Concatenate the sensor ID and timestamp and use it as the row key. Perform an export to BigQuery every day.
Most Voted
C. Store your data in Bigtable. Concatenate the sensor ID and metric, and use it as the row key. Perform an export to BigQuery every day.
Correct Answer: B
You have 100 GB of data stored in a BigQuery table. This data is outdated and will only be accessed one or two times a year for analytics with
SQL. For backup purposes, you want to store this data to be immutable for 3 years. You want to minimize storage costs. What should you do?
C. 1. Perform a BigQuery export to a Cloud Storage bucket with archive storage class.
D. 1. Perform a BigQuery export to a Cloud Storage bucket with archive storage class.
Correct Answer: D
You have thousands of Apache Spark jobs running in your on-premises Apache Hadoop cluster. You want to migrate the jobs to Google Cloud. You
want to use managed services to run your jobs instead of maintaining a long-lived Hadoop cluster yourself. You have a tight timeline and want to
A. Move your data to BigQuery. Convert your Spark scripts to a SQL-based processing approach.
C. Copy your data to Compute Engine disks. Manage and run your jobs directly on those instances.
D. Move your data to Cloud Storage. Run your jobs on Dataproc. Most Voted
Correct Answer: D
You are administering shared BigQuery datasets that contain views used by multiple teams in your organization. The marketing team is concerned
about the variability of their monthly BigQuery analytics spend using the on-demand billing model. You need to help the marketing team establish
a consistent BigQuery analytics spend each month. What should you do?
A. Create a BigQuery Enterprise reservation with a baseline of 250 slots and autoscaling set to 500 for the marketing team, and bill them back
accordingly.
B. Establish a BigQuery quota for the marketing team, and limit the maximum number of bytes scanned each day.
C. Create a BigQuery reservation with a baseline of 500 slots with no autoscaling for the marketing team, and bill them back accordingly.
Most Voted
D. Create a BigQuery Standard pay-as-you-go reservation with a baseline of 0 slots and autoscaling set to 500 for the marketing team, and bill
Correct Answer: C
You are part of a healthcare organization where data is organized and managed by respective data owners in various storage services. As a result
of this decentralized ecosystem, discovering and managing data has become difficult. You need to quickly identify and implement a cost-
A. Use BigLake to convert the current solution into a data lake architecture.
B. Build a new data discovery tool on Google Kubernetes Engine that helps with new source onboarding and data lineage tracking.
C. Use BigQuery to track data lineage, and use Dataprep to manage data and perform data quality validation.
D. Use Dataplex to manage data, track data lineage, and perform data quality validation. Most Voted
Correct Answer: D
You have data located in BigQuery that is used to generate reports for your company. You have noticed some weekly executive report fields do not
conform to company formatting standards. For example, report errors include different telephone formats and different country code
identifiers. This is a frequent issue, so you need to create a recurring job to normalize the data. You want a quick solution that requires no coding.
A. Use Cloud Data Fusion and Wrangler to normalize the data, and set up a recurring job. Most Voted
B. Use Dataflow SQL to create a job that normalizes the data, and that after the first run of the job, schedule the pipeline to execute
recurrently.
D. Use BigQuery and GoogleSQL to normalize the data, and schedule recurring queries in BigQuery.
Correct Answer: A
You are designing a messaging system by using Pub/Sub to process clickstream data with an event-driven consumer app that relies on a push
subscription. You need to configure the messaging system to be reliable enough to handle temporary downtime of the consumer app. You also
need the messaging system to store the input messages that cannot be consumed by the subscriber. The system needs to retry failed messages
gradually, avoiding overloading the consumer app, and store the failed messages in a topic after a maximum of 10 retries. How should you
B. Use immediate redelivery as the subscription retry policy, and configure dead lettering to a different topic with maximum delivery attempts
set to 10.
C. Use exponential backoff as the subscription retry policy, and configure dead lettering to the same source topic with maximum delivery
D. Use exponential backoff as the subscription retry policy, and configure dead lettering to a different topic with maximum delivery attempts
Correct Answer: D
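A hedged sketch of the voted option: create the push subscription with an exponential backoff retry policy and a dead-letter policy pointing at a different topic with a maximum of 10 delivery attempts. All project, topic, subscription, and endpoint names are placeholders, and the backoff values are illustrative.

```python
from google.cloud import pubsub_v1
from google.protobuf import duration_pb2

subscriber = pubsub_v1.SubscriberClient()

project = "my-project"  # placeholder
subscription_path = subscriber.subscription_path(project, "clickstream-push-sub")
topic_path = f"projects/{project}/topics/clickstream"
dead_letter_topic = f"projects/{project}/topics/clickstream-dead-letter"  # different topic

subscription = subscriber.create_subscription(
    request={
        "name": subscription_path,
        "topic": topic_path,
        "push_config": {"push_endpoint": "https://consumer.example.com/handler"},  # placeholder
        # Exponential backoff between redelivery attempts to avoid overloading the app.
        "retry_policy": {
            "minimum_backoff": duration_pb2.Duration(seconds=10),
            "maximum_backoff": duration_pb2.Duration(seconds=600),
        },
        # After 10 failed deliveries, messages land in the dead-letter topic.
        "dead_letter_policy": {
            "dead_letter_topic": dead_letter_topic,
            "max_delivery_attempts": 10,
        },
    }
)
print(f"Created subscription: {subscription.name}")
```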
You designed a data warehouse in BigQuery to analyze sales data. You want a self-service, low-maintenance, and cost-effective solution to share
the sales dataset to other business units in your organization. What should you do?
A. Create an Analytics Hub private exchange, and publish the sales dataset. Most Voted
B. Enable the other business units’ projects to access the authorized views of the sales dataset.
C. Create and share views with the users in the other business units.
D. Use the BigQuery Data Transfer Service to create a schedule that copies the sales dataset to the other business units’ projects.
Correct Answer: A
You have terabytes of customer behavioral data streaming from Google Analytics into BigQuery daily. Your customers’ information, such as their
preferences, is hosted on a Cloud SQL for MySQL database. Your CRM database is hosted on a Cloud SQL for PostgreSQL instance. The marketing
team wants to use your customers’ information from the two databases and the customer behavioral data to create marketing campaigns for
yearly active customers. You need to ensure that the marketing team can run the campaigns over 100 times a day on typical days and up to 300
during sales. At the same time, you want to keep the load on the Cloud SQL databases to a minimum. What should you do?
A. Create BigQuery connections to both Cloud SQL databases. Use BigQuery federated queries on the two databases and the Google Analytics
B. Create a job on Apache Spark with Dataproc Serverless to query both Cloud SQL databases and the Google Analytics data on BigQuery for
these queries.
C. Create streams in Datastream to replicate the required tables from both Cloud SQL databases to BigQuery for these queries. Most Voted
D. Create a Dataproc cluster with Trino to establish connections to both Cloud SQL databases and BigQuery, to execute the queries.
Correct Answer: C
Your organization is modernizing their IT services and migrating to Google Cloud. You need to organize the data that will be stored in Cloud
Storage and BigQuery. You need to enable a data mesh approach to share the data between sales, product design, and marketing departments.
A. 1. Create a project for storage of the data for each of your departments.
2. Enable each department to create Cloud Storage buckets and BigQuery datasets.
3. Create user groups for authorized readers for each bucket and dataset.
4. Enable the IT team to administer the user groups to add or remove users as the departments’ request.
B. 1. Create multiple projects for storage of the data for each of your departments’ applications.
2. Enable each department to create Cloud Storage buckets and BigQuery datasets.
4. Enable all departments to discover and subscribe to the data they need in Analytics Hub.
2. Create a central Cloud Storage bucket with three folders to store the files for each department.
3. Create a central BigQuery dataset with tables prefixed with the department name.
4. Give viewer rights for the storage project for the users of your departments.
D. 1. Create multiple projects for storage of the data for each of your departments’ applications.
2. Enable each department to create Cloud Storage buckets and BigQuery datasets.
3. In Dataplex, map each department to a data lake and the Cloud Storage buckets, and map the BigQuery datasets to zones.
4. Enable each department to own and share the data of their data lakes. Most Voted
Correct Answer: D
You work for a large ecommerce company. You are using Pub/Sub to ingest the clickstream data to Google Cloud for analytics. You observe that
when a new subscriber connects to an existing topic to analyze data, they are unable to subscribe to older data. For an upcoming yearly sale event
in two months, you need a solution that, once implemented, will enable any new subscriber to read the last 30 days of data. What should you do?
A. Create a new topic, and publish the last 30 days of data each time a new subscriber connects to an existing topic.
D. Ask the source system to re-push the data to Pub/Sub, and subscribe to it.
Correct Answer: B
You are designing the architecture to process your data from Cloud Storage to BigQuery by using Dataflow. The network team provided you with
the Shared VPC network and subnetwork to be used by your pipelines. You need to enable the deployment of the pipeline on the Shared VPC
B. Assign the compute.networkUser role to the service account that executes the Dataflow pipeline. Most Voted
D. Assign the dataflow.admin role to the service account that executes the Dataflow pipeline.
Correct Answer: B
Your infrastructure team has set up an interconnect link between Google Cloud and the on-premises network. You are designing a high-throughput
streaming pipeline to ingest streaming data from an Apache Kafka cluster hosted on-premises. You want to store the data in BigQuery, with as
A. Set up a Kafka Connect bridge between Kafka and Pub/Sub. Use a Google-provided Dataflow template to read the data from Pub/Sub, and
B. Use a proxy host in the VPC in Google Cloud connecting to Kafka. Write a Dataflow pipeline, read data from the proxy host, and write the
data to BigQuery.
C. Use Dataflow, write a pipeline that reads the data from Kafka, and writes the data to BigQuery. Most Voted
D. Set up a Kafka Connect bridge between Kafka and Pub/Sub. Write a Dataflow pipeline, read the data from Pub/Sub, and write the data to
BigQuery.
Correct Answer: C
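A sketch of the voted option: the Dataflow pipeline reads directly from the on-premises Kafka brokers over the interconnect (Beam's cross-language Kafka connector) and writes to BigQuery. Broker addresses, topic, and table names are placeholders.

```python
import apache_beam as beam
from apache_beam.io.kafka import ReadFromKafka

def build(p):
    return (
        p
        | "ReadFromKafka" >> ReadFromKafka(
            # Brokers reachable over the interconnect link (placeholder addresses).
            consumer_config={"bootstrap.servers": "10.1.0.5:9092,10.1.0.6:9092"},
            topics=["machine-measurements"],
        )
        # ReadFromKafka yields (key, value) byte pairs; keep only the decoded value.
        | "Decode" >> beam.MapTuple(lambda key, value: {"payload": value.decode("utf-8")})
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "my-project:factory.measurements", schema="payload:STRING"
        )
    )
```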
You migrated your on-premises Apache Hadoop Distributed File System (HDFS) data lake to Cloud Storage. The data scientist team needs to
process the data by using Apache Spark and SQL. Security policies need to be enforced at the column level. You need a cost-effective solution
that can scale into a data mesh. What should you do?
A. 1. Deploy a long-living Dataproc cluster with Apache Hive and Ranger enabled.
D. 1. Apply an Identity and Access Management (IAM) policy at the file level in Cloud Storage.
Correct Answer: B
One of your encryption keys stored in Cloud Key Management Service (Cloud KMS) was exposed. You need to re-encrypt all of your CMEK-
protected Cloud Storage data that used that key, and then delete the compromised key. You also want to reduce the risk of objects getting written
without customer-managed encryption key (CMEK) protection in the future. What should you do?
A. Rotate the Cloud KMS key version. Continue to use the same Cloud Storage bucket.
B. Create a new Cloud KMS key. Set the default CMEK key on the existing Cloud Storage bucket to the new one.
C. Create a new Cloud KMS key. Create a new Cloud Storage bucket. Copy all objects from the old bucket to the new bucket while
D. Create a new Cloud KMS key. Create a new Cloud Storage bucket configured to use the new key as the default CMEK key. Copy all objects
from the old bucket to the new bucket without specifying a key. Most Voted
Correct Answer: D
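A sketch of the voted option: create a new bucket whose default CMEK is the replacement key, then copy objects without specifying a key so each copy is re-encrypted with the bucket's default. Bucket names and the key resource name are placeholders.

```python
from google.cloud import storage

client = storage.Client(project="my-project")  # placeholder

new_key = "projects/my-project/locations/us/keyRings/data-ring/cryptoKeys/new-key"  # placeholder

# New bucket whose default CMEK is the replacement key.
new_bucket = client.bucket("my-data-recrypted")
new_bucket.default_kms_key_name = new_key
client.create_bucket(new_bucket, location="US")

old_bucket = client.bucket("my-data")
for blob in client.list_blobs(old_bucket):
    # Copying without specifying a key re-encrypts each object with the
    # destination bucket's default CMEK.
    old_bucket.copy_blob(blob, new_bucket, blob.name)
```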
You have an upstream process that writes data to Cloud Storage. This data is then read by an Apache Spark job that runs on Dataproc. These jobs
are run in the us-central1 region, but the data could be stored anywhere in the United States. You need to have a recovery process in place in case
of a catastrophic single region failure. You need an approach with a maximum of 15 minutes of data loss (RPO=15 mins). You want to ensure that
there is minimal latency when reading the data. What should you do?
A. 1. Create two regional Cloud Storage buckets, one in the us-central1 region and one in the us-south1 region.
2. Have the upstream process write data to the us-central1 bucket. Use the Storage Transfer Service to copy data hourly from the us-central1
3. Run the Dataproc cluster in a zone in the us-central1 region, reading from the bucket in that region.
4. In case of regional failure, redeploy your Dataproc clusters to the us-south1 region and read from the bucket in that region instead.
2. Run the Dataproc cluster in a zone in the us-central1 region, reading data from the US multi-region bucket.
3. In case of a regional failure, redeploy the Dataproc cluster to the us-central2 region and continue reading from the same bucket.
C. 1. Create a dual-region Cloud Storage bucket in the us-central1 and us-south1 regions.
3. Run the Dataproc cluster in a zone in the us-central1 region, reading from the bucket in the us-south1 region.
4. In case of a regional failure, redeploy your Dataproc cluster to the us-south1 region and continue reading from the same bucket.
D. 1. Create a dual-region Cloud Storage bucket in the us-central1 and us-south1 regions.
3. Run the Dataproc cluster in a zone in the us-central1 region, reading from the bucket in the same region.
4. In case of a regional failure, redeploy the Dataproc clusters to the us-south1 region and read from the same bucket. Most Voted
Correct Answer: D
You currently have transactional data stored on-premises in a PostgreSQL database. To modernize your data environment, you want to run
transactional workloads and support analytics needs with a single database. You need to move to Google Cloud without changing database
management systems, and minimize cost and complexity. What should you do?
Correct Answer: B
You are architecting a data transformation solution for BigQuery. Your developers are proficient with SQL and want to use the ELT development
technique. In addition, your developers need an intuitive coding environment and the ability to manage SQL as code. You need to identify a
solution for your developers to build these pipelines. What should you do?
A. Use Dataform to build, manage, and schedule SQL pipelines. Most Voted
B. Use Dataflow jobs to read data from Pub/Sub, transform the data, and load the data to BigQuery.
D. Use Cloud Composer to load data and run SQL pipelines by using the BigQuery job operators.
Correct Answer: A
You work for a farming company. You have one BigQuery table named sensors, which is about 500 MB and contains the list of your 5000 sensors,
with columns for id, name, and location. This table is updated every hour. Each sensor generates one metric every 30 seconds along with a
timestamp, which you want to store in BigQuery. You want to run an analytical query on the data once a week for monitoring purposes. You also
2. Set RECORD type and REPEATED mode for the metrics column.
2. Set RECORD type and REPEATED mode for the metrics column.
2. Create a sensorId column in the metrics table, that points to the id column in the sensors table.
3. Use an INSERT statement every 30 seconds to append new metrics to the metrics table.
4. Join the two tables, if needed, when running the analytical query. Most Voted
2. Create a sensorId column in the metrics table, which points to the id column in the sensors table.
3. Use an UPDATE statement every 30 seconds to append new metrics to the metrics table.
4. Join the two tables, if needed, when running the analytical query.
Correct Answer: C
You are managing a Dataplex environment with raw and curated zones. A data engineering team is uploading JSON and CSV files to a bucket
asset in the curated zone but the files are not being automatically discovered by Dataplex. What should you do to ensure that the files are
discovered by Dataplex?
A. Move the JSON and CSV files to the raw zone. Most Voted
C. Use the bq command-line tool to load the JSON and CSV files into BigQuery tables.
D. Grant object level access to the CSV and JSON files in Cloud Storage.
Correct Answer: A
You have a table that contains millions of rows of sales data, partitioned by date. Various applications and users query this data many times a
minute. The query requires aggregating values by using AVG, MAX, and SUM, and does not require joining to other tables. The required
aggregations are only computed over the past year of data, though you need to retain full historical data in the base tables. You want to ensure
that the query results always include the latest data from the tables, while also reducing computation cost, maintenance overhead, and duration.
A. Create a materialized view to aggregate the base table data. Include a filter clause to specify the last one year of partitions. Most Voted
B. Create a materialized view to aggregate the base table data. Configure a partition expiration on the base table to retain only the last one
year of partitions.
C. Create a view to aggregate the base table data. Include a filter clause to specify the last year of partitions.
D. Create a new table that aggregates the base table data. Include a filter clause to specify the last year of partitions. Set up a scheduled
Correct Answer: A
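For illustration, a sketch of the voted option. Note that materialized views cannot use non-deterministic functions such as CURRENT_DATE(), so this sketch pins the "last one year" filter to a literal date that would be refreshed when the view is recreated; project, dataset, and column names are placeholders, and the aggregates mirror those named in the question.

```python
from google.cloud import bigquery

client = bigquery.Client()

ddl = """
CREATE MATERIALIZED VIEW `my-project.sales.daily_totals_mv` AS
SELECT
  sale_date,
  SUM(amount) AS total_amount,
  MAX(amount) AS max_amount,
  AVG(amount) AS avg_amount
FROM `my-project.sales.transactions`
WHERE sale_date >= '2024-01-01'   -- fixed "last year" boundary for this sketch
GROUP BY sale_date
"""
client.query(ddl).result()

# Queries against the view are automatically kept consistent with the base table.
rows = client.query(
    "SELECT sale_date, total_amount FROM `my-project.sales.daily_totals_mv` "
    "ORDER BY sale_date DESC LIMIT 7"
).result()
```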
Your organization uses a multi-cloud data storage strategy, storing data in Cloud Storage, and data in Amazon Web Services’ (AWS) S3 storage
buckets. All data resides in US regions. You want to query up-to-date data by using BigQuery, regardless of which cloud the data is stored in. You
need to allow users to query the tables from BigQuery without giving direct access to the data in the storage buckets. What should you do?
A. Set up a BigQuery Omni connection to the AWS S3 bucket data. Create BigLake tables over the Cloud Storage and S3 data and query the
B. Set up a BigQuery Omni connection to the AWS S3 bucket data. Create external tables over the Cloud Storage and S3 data and query the
C. Use the Storage Transfer Service to copy data from the AWS S3 buckets to Cloud Storage buckets. Create BigLake tables over the Cloud
D. Use the Storage Transfer Service to copy data from the AWS S3 buckets to Cloud Storage buckets. Create external tables over the Cloud
Correct Answer: A
You are preparing an organization-wide dataset. You need to preprocess customer data stored in a restricted bucket in Cloud Storage. The data
will be used to create consumer analyses. You need to comply with data privacy requirements.
A. Use Dataflow and the Cloud Data Loss Prevention API to mask sensitive data. Write the processed data in BigQuery. Most Voted
B. Use customer-managed encryption keys (CMEK) to directly encrypt the data in Cloud Storage. Use federated queries from BigQuery. Share
C. Use the Cloud Data Loss Prevention API and Dataflow to detect and remove sensitive fields from the data in Cloud Storage. Write the
D. Use Dataflow and Cloud KMS to encrypt sensitive fields and write the encrypted data in BigQuery. Share the encryption key by following the
Correct Answer: A
You need to connect multiple applications with dynamic public IP addresses to a Cloud SQL instance. You configured users with strong passwords
and enforced the SSL connection to your Cloud SQL instance. You want to use Cloud SQL public IP and ensure that you have secured connections.
A. Add CIDR 0.0.0.0/0 network to Authorized Network. Use Identity and Access Management (IAM) to add users.
B. Add all application networks to Authorized Network and regularly update them.
C. Leave the Authorized Network empty. Use Cloud SQL Auth proxy on all applications. Most Voted
D. Add CIDR 0.0.0.0/0 network to Authorized Network. Use Cloud SQL Auth proxy on all applications.
Correct Answer: C
You are migrating a large number of files from a public HTTPS endpoint to Cloud Storage. The files are protected from unauthorized access using
signed URLs. You created a TSV file that contains the list of object URLs and started a transfer job by using Storage Transfer Service. You notice
that the job has run for a long time and eventually failed. Checking the logs of the transfer job reveals that the job was running fine until one point,
and then it failed due to HTTP 403 errors on the remaining files. You verified that there were no changes to the source system. You need to fix the
A. Set up Cloud Storage FUSE, and mount the Cloud Storage bucket on a Compute Engine instance. Remove the completed files from the TSV
file. Use a shell script to iterate through the TSV file and download the remaining URLs to the FUSE mount point.
B. Renew the TLS certificate of the HTTPS endpoint. Remove the completed files from the TSV file and rerun the Storage Transfer Service job.
C. Create a new TSV file for the remaining files by generating signed URLs with a longer validity period. Split the TSV file into multiple smaller
files and submit them as separate Storage Transfer Service jobs in parallel. Most Voted
D. Update the file checksums in the TSV file from using MD5 to SHA256. Remove the completed files from the TSV file and rerun the Storage
Correct Answer: C
You work for an airline and you need to store weather data in a BigQuery table. Weather data will be used as input to a machine learning model.
The model only uses the last 30 days of weather data. You want to avoid storing unnecessary data and minimize costs. What should you do?
A. Create a BigQuery table where each record has an ingestion timestamp. Run a scheduled query to delete all the rows with an ingestion
B. Create a BigQuery table partitioned by datetime value of the weather date. Set up partition expiration to 30 days. Most Voted
C. Create a BigQuery table partitioned by ingestion time. Set up partition expiration to 30 days.
D. Create a BigQuery table with a datetime column for the day the weather data refers to. Run a scheduled query to delete rows with a
Correct Answer: B
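A sketch of the voted option: partition the table on the date the weather refers to and set a 30-day partition expiration so older partitions are dropped automatically. Table and column names are placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client()

table = bigquery.Table(
    "my-project.weather.observations",  # placeholder
    schema=[
        bigquery.SchemaField("station_id", "STRING"),
        bigquery.SchemaField("weather_date", "DATE"),
        bigquery.SchemaField("temperature_c", "FLOAT"),
    ],
)
# Partition on the date the weather refers to, and drop partitions after 30 days.
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="weather_date",
    expiration_ms=30 * 24 * 60 * 60 * 1000,
)
client.create_table(table)
```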
You need to look at BigQuery data from a specific table multiple times a day. The underlying table you are querying is several petabytes in size, but
you want to filter your data and provide simple aggregations to downstream users. You want to run queries faster and get up-to-date insights
A. Run a scheduled query to pull the necessary data at specific intervals daily.
D. Create a materialized view based off of the query being run. Most Voted
Correct Answer: D
Your chemical company needs to manually check documentation for customer orders. You use a pull subscription in Pub/Sub so that sales agents
get details from the order. You must ensure that you do not process orders twice with different sales agents and that you do not add more
A. Use a Deduplicate PTransform in Dataflow before sending the messages to the sales agents.
D. Create a new Pub/Sub push subscription to monitor the orders processed in the agent's system.
Correct Answer: C
You are migrating your on-premises data warehouse to BigQuery. As part of the migration, you want to facilitate cross-team collaboration to get
the most value out of the organization’s data. You need to design an architecture that would allow teams within the organization to securely
publish, discover, and subscribe to read-only data in a self-service manner. You need to minimize costs while also maximizing data freshness.
B. Create authorized datasets to publish shared data in the subscribing team's project.
C. Create a new dataset for sharing in each individual team’s project. Grant the subscribing team the bigquery.dataViewer role on the dataset.
D. Use BigQuery Data Transfer Service to copy datasets to a centralized BigQuery project for sharing.
Correct Answer: A
You want to migrate an Apache Spark 3 batch job from on-premises to Google Cloud. You need to minimally change the job so that the job reads
from Cloud Storage and writes the result to BigQuery. Your job is optimized for Spark, where each executor has 8 vCPU and 16 GB memory, and
you want to be able to choose similar settings. You want to minimize installation and management effort to run your job. What should you do?
A. Execute the job as part of a deployment in a new Google Kubernetes Engine cluster.
Correct Answer: D
You are configuring networking for a Dataflow job. The data pipeline uses custom container images with the libraries that are required for the
transformation logic preinstalled. The data pipeline reads the data from Cloud Storage and writes the data to BigQuery. You need to ensure cost-
effective and secure communication between the pipeline and Google APIs and services. What should you do?
A. Disable external IP addresses from worker VMs and enable Private Google Access. Most Voted
B. Leave external IP addresses assigned to worker VMs while enforcing firewall rules.
C. Disable external IP addresses and establish a Private Service Connect endpoint IP address.
D. Enable Cloud NAT to provide outbound internet connectivity while enforcing firewall rules.
Correct Answer: A
You are using Workflows to call an API that returns a 1KB JSON response, apply some complex business logic on this response, wait for the logic
to complete, and then perform a load from a Cloud Storage file to BigQuery. The Workflows standard library does not have sufficient capabilities to
perform your complex logic, and you want to use Python's standard library instead. You want to optimize your workflow for simplicity and speed of
A. Create a Cloud Composer environment and run the logic in Cloud Composer.
B. Create a Dataproc cluster, and use PySpark to apply the logic on your JSON file.
C. Invoke a Cloud Function instance that uses Python to apply the logic on your JSON file. Most Voted
Correct Answer: C
You are administering a BigQuery on-demand environment. Your business intelligence tool is submitting hundreds of queries each day that
aggregate a large (50 TB) sales history fact table at the day and month levels. These queries have a slow response time and are exceeding cost
expectations. You need to decrease response time, lower query costs, and minimize maintenance. What should you do?
A. Build authorized views on top of the sales table to aggregate data at the day and month level.
C. Build materialized views on top of the sales table to aggregate data at the day and month level. Most Voted
D. Create a scheduled query to build sales day and sales month aggregate tables on an hourly basis.
Correct Answer: C
You have several different unstructured data sources, within your on-premises data center as well as in the cloud. The data is in various formats,
such as Apache Parquet and CSV. You want to centralize this data in Cloud Storage. You need to set up an object sink for your data that allows
you to use your own encryption keys. You want to use a GUI-based solution. What should you do?
D. Use Cloud Data Fusion to move files into Cloud Storage. Most Voted
Correct Answer: D
You are using BigQuery with a regional dataset that includes a table with the daily sales volumes. This table is updated multiple times per day. You
need to protect your sales table in case of regional failures with a recovery point objective (RPO) of less than 24 hours, while keeping costs to a
A. Schedule a daily export of the table to a Cloud Storage dual or multi-region bucket. Most Voted
D. Modify ETL job to load the data into both the current and another backup region.
Correct Answer: A
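One way the voted daily export could be implemented is with an EXPORT DATA statement registered as a BigQuery scheduled query; a hedged sketch follows, with the dual/multi-region bucket, table name, and format as placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Daily backup of the sales table to a dual- or multi-region Cloud Storage bucket.
# In practice this statement would be registered as a BigQuery scheduled query.
export_sql = """
EXPORT DATA OPTIONS (
  uri = 'gs://sales-backup-dual-region/daily_sales/*.parquet',  -- placeholder bucket
  format = 'PARQUET',
  overwrite = true
) AS
SELECT *
FROM `my-project.sales.daily_sales_volumes`
"""
client.query(export_sql).result()
```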
You are preparing an organization-wide dataset. You need to preprocess customer data stored in a restricted bucket in Cloud Storage. The data
will be used to create consumer analyses. You need to follow data privacy requirements, including protecting certain sensitive data elements,
while also retaining all of the data for potential future use cases. What should you do?
A. Use the Cloud Data Loss Prevention API and Dataflow to detect and remove sensitive fields from the data in Cloud Storage. Write the
B. Use customer-managed encryption keys (CMEK) to directly encrypt the data in Cloud Storage. Use federated queries from BigQuery. Share
C. Use Dataflow and the Cloud Data Loss Prevention API to mask sensitive data. Write the processed data in BigQuery. Most Voted
D. Use Dataflow and Cloud KMS to encrypt sensitive fields and write the encrypted data in BigQuery. Share the encryption key by following the
Correct Answer: C