The Secret of Google Professional-Data-Engineer Sample Questions

It is impossible to pass the Google Professional-Data-Engineer exam in the short term without help. Come to Exambible soon and find the most advanced, accurate, and guaranteed Google Professional-Data-Engineer practice questions. You will get a surprising result with our updated Google Professional Data Engineer Exam practice guides.

Google Professional-Data-Engineer Free Dumps Questions Online. Read and Test Now.

NEW QUESTION 1

Your company needs to upload their historic data to Cloud Storage. The security rules don’t allow access from external IPs to their on-premises resources. After an initial upload, they will add new data from existing on-premises applications every day. What should they do?

  • A. Execute gsutil rsync from the on-premises servers.
  • B. Use Cloud Dataflow and write the data to Cloud Storage.
  • C. Write a job template in Cloud Dataproc to perform the data transfer.
  • D. Install an FTP server on a Compute Engine VM to receive the files and move them to Cloud Storage.

Answer: B
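
For reference, option A's approach would typically be a scheduled gsutil rsync run from an on-premises host, which needs only outbound access to Cloud Storage; re-running it daily picks up the newly added files. A minimal Python sketch, assuming the Cloud SDK's gsutil is installed and authenticated, with a hypothetical local path and bucket:

```python
import subprocess

# Hypothetical paths; replace with the real on-premises directory and bucket.
LOCAL_DIR = "/data/historic"
BUCKET_URI = "gs://example-historic-data"

# -m parallelizes the transfer, -r recurses into subdirectories.
# Re-running this daily (e.g. from cron) uploads only new or changed files.
subprocess.run(
    ["gsutil", "-m", "rsync", "-r", LOCAL_DIR, BUCKET_URI],
    check=True,
)
```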

NEW QUESTION 2

Each analytics team in your organization is running BigQuery jobs in their own projects. You want to enable each team to monitor slot usage within their projects. What should you do?

  • A. Create a Stackdriver Monitoring dashboard based on the BigQuery metric query/scanned_bytes
  • B. Create a Stackdriver Monitoring dashboard based on the BigQuery metric slots/allocated_for_project
  • C. Create a log export for each project, capture the BigQuery job execution logs, create a custom metric based on the totalSlotMs, and create a Stackdriver Monitoring dashboard based on the custom metric
  • D. Create an aggregated log export at the organization level, capture the BigQuery job execution logs, create a custom metric based on the totalSlotMs, and create a Stackdriver Monitoring dashboard based on the custom metric

Answer: D
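
As a complementary way for a team to inspect slot consumption inside its own project (not a replacement for the log-based custom metric in the answer), BigQuery's INFORMATION_SCHEMA job views expose total_slot_ms per job. A sketch using the Python client; the project name is hypothetical and the region qualifier is assumed to be region-us:

```python
from google.cloud import bigquery

client = bigquery.Client(project="analytics-team-project")  # hypothetical project

# total_slot_ms is reported per job; dividing by elapsed ms approximates average slots.
sql = """
SELECT
  job_id,
  user_email,
  total_slot_ms,
  SAFE_DIVIDE(total_slot_ms,
              TIMESTAMP_DIFF(end_time, start_time, MILLISECOND)) AS avg_slots
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
ORDER BY total_slot_ms DESC
LIMIT 20
"""
for row in client.query(sql).result():
    print(row.job_id, row.avg_slots)
```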

NEW QUESTION 3

All Google Cloud Bigtable client requests go through a front-end server ____ they are sent to a Cloud Bigtable node.

  • A. before
  • B. after
  • C. only if
  • D. once

Answer: A

Explanation:
In a Cloud Bigtable architecture, all client requests go through a front-end server before they are sent to a Cloud Bigtable node.
The nodes are organized into a Cloud Bigtable cluster, which belongs to a Cloud Bigtable instance, a container for the cluster. Each node in the cluster handles a subset of the requests to the cluster.
Adding nodes to a cluster increases the number of simultaneous requests it can handle, as well as the maximum throughput of the entire cluster.
Reference: https://cloud.google.com/bigtable/docs/overview
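
To make the request path concrete, here is a minimal write-then-read sketch with the Python Bigtable client; the routing through a front-end server to the node that serves the row's tablet is transparent to the caller. Project, instance, table, and column-family names are hypothetical:

```python
import datetime
from google.cloud import bigtable

client = bigtable.Client(project="my-project")          # hypothetical project
table = client.instance("my-instance").table("metrics")

# Write one cell; the request passes through a front-end server to the
# node responsible for this row key.
row = table.direct_row(b"device#123#20240101")
row.set_cell("cf1", "temperature", b"21.5",
             timestamp=datetime.datetime.utcnow())
row.commit()

# Read it back.
result = table.read_row(b"device#123#20240101")
print(result.cells["cf1"][b"temperature"][0].value)
```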

NEW QUESTION 4

You are selecting services to write and transform JSON messages from Cloud Pub/Sub to BigQuery for a data pipeline on Google Cloud. You want to minimize service costs. You also want to monitor and accommodate input data volume that will vary in size with minimal manual intervention. What should you do?

  • A. Use Cloud Dataproc to run your transformations.
  • B. Monitor CPU utilization for the cluster.
  • C. Resize the number of worker nodes in your cluster via the command line.
  • D. Use Cloud Dataproc to run your transformations.
  • E. Use the diagnose command to generate an operational output archive.
  • F. Locate the bottleneck and adjust cluster resources.
  • G. Use Cloud Dataflow to run your transformations.
  • H. Monitor the job system lag with Stackdriver.
  • I. Use the default autoscaling setting for worker instances.
  • J. Use Cloud Dataflow to run your transformations.
  • K. Monitor the total execution time for a sampling of jobs.
  • L. Configure the job to use non-default Compute Engine machine types when needed.

Answer: B
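
For context, the Pub/Sub-to-BigQuery pipeline described in the question is commonly written as an Apache Beam job submitted to Cloud Dataflow. A hedged sketch follows; every resource name and the schema are hypothetical, and no worker-count flags are set, so the service's default autoscaling applies:

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    streaming=True,
    runner="DataflowRunner",
    project="my-project",                  # hypothetical project
    region="us-central1",
    temp_location="gs://my-bucket/tmp",
    # No autoscaling flags set: the default throughput-based autoscaling applies.
)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadJson" >> beam.io.ReadFromPubSub(
            subscription="projects/my-project/subscriptions/events-sub")
        | "Parse" >> beam.Map(json.loads)
        | "Write" >> beam.io.WriteToBigQuery(
            "my-project:analytics.events",
            schema="user_id:STRING,event:STRING,ts:TIMESTAMP",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED)
    )
```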

NEW QUESTION 5

You want to use a BigQuery table as a data sink. In which writing mode(s) can you use BigQuery as a sink?

  • A. Both batch and streaming
  • B. BigQuery cannot be used as a sink
  • C. Only batch
  • D. Only streaming

Answer: A

Explanation:
When you apply a BigQueryIO.Write transform in batch mode to write to a single table, Dataflow invokes a BigQuery load job. When you apply a BigQueryIO.Write transform in streaming mode or in batch mode using a function to specify the destination table, Dataflow uses BigQuery's streaming inserts.
Reference: https://cloud.google.com/dataflow/model/bigquery-io
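
The same batch-versus-streaming distinction exists in the Beam Python SDK, where WriteToBigQuery can use either load jobs or streaming inserts. A hedged sketch with hypothetical table and schema:

```python
import apache_beam as beam

# In batch pipelines the default method is load jobs (FILE_LOADS); in streaming
# pipelines the default is streaming inserts. The method can also be set explicitly:
write_with_load_jobs = beam.io.WriteToBigQuery(
    "my-project:dataset.table",
    schema="name:STRING,value:FLOAT",
    method=beam.io.WriteToBigQuery.Method.FILE_LOADS)

write_with_streaming_inserts = beam.io.WriteToBigQuery(
    "my-project:dataset.table",
    schema="name:STRING,value:FLOAT",
    method=beam.io.WriteToBigQuery.Method.STREAMING_INSERTS)
```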

NEW QUESTION 6

You use BigQuery as your centralized analytics platform. New data is loaded every day, and an ETL pipeline modifies the original data and prepares it for the final users. This ETL pipeline is regularly modified and can generate errors, but sometimes the errors are detected only after 2 weeks. You need to provide a method to recover from these errors, and your backups should be optimized for storage costs. How should you organize your data in BigQuery and store your backups?

  • A. Organize your data in a single table, and export, compress, and store the BigQuery data in Cloud Storage.
  • B. Organize your data in separate tables for each month, and export, compress, and store the data in Cloud Storage.
  • C. Organize your data in separate tables for each month, and duplicate your data on a separate dataset in BigQuery.
  • D. Organize your data in separate tables for each month, and use snapshot decorators to restore the table to a time prior to the corruption.

Answer: D
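
Snapshot decorators are a legacy-SQL feature; the standard-SQL counterpart is time travel with FOR SYSTEM_TIME AS OF, which typically reaches back at most seven days, so it would be combined with longer-lived backups for the two-week window in this scenario. A hedged sketch with hypothetical table names:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project

# Recreate last month's table as it looked two days ago. Time travel is limited,
# typically to at most 7 days, which is why longer-term exports still matter.
sql = """
CREATE OR REPLACE TABLE `my-project.warehouse.sales_2024_01_restored` AS
SELECT *
FROM `my-project.warehouse.sales_2024_01`
  FOR SYSTEM_TIME AS OF TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 2 DAY)
"""
client.query(sql).result()
```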

NEW QUESTION 7

Which action can a Cloud Dataproc Viewer perform?

  • A. Submit a job.
  • B. Create a cluster.
  • C. Delete a cluster.
  • D. List the jobs.

Answer: D

Explanation:
A Cloud Dataproc Viewer is limited in its actions based on its role. A viewer can only list clusters, get cluster details, list jobs, get job details, list operations, and get operation details.
Reference: https://cloud.google.com/dataproc/docs/concepts/iam#iam_roles_and_cloud_dataproc_operations_summary
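
Listing jobs is one of the read-only calls a Cloud Dataproc Viewer is allowed to make. A minimal sketch with the Python client; the project ID and region are assumptions:

```python
from google.cloud import dataproc_v1

region = "us-central1"
client = dataproc_v1.JobControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"})

# roles/dataproc.viewer permits list/get operations such as this one.
for job in client.list_jobs(request={"project_id": "my-project", "region": region}):
    print(job.reference.job_id, job.status.state.name)
```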

NEW QUESTION 8

Which of these statements about BigQuery caching is true?

  • A. By default, a query's results are not cached.
  • B. BigQuery caches query results for 48 hours.
  • C. Query results are cached even if you specify a destination table.
  • D. There is no charge for a query that retrieves its results from cache.

Answer: D

Explanation:
When query results are retrieved from a cached results table, you are not charged for the query. BigQuery caches query results for 24 hours, not 48 hours.
Query results are not cached if you specify a destination table.
A query’s results are always cached except under certain conditions, such as if you specify a destination table.
Reference: https://cloud.google.com/bigquery/querying-data#query-caching
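
The caching behaviour is visible through the client libraries; for example, with the Python client a repeated query normally reports a cache hit and bills zero bytes. The public sample table below is used only as a stand-in:

```python
from google.cloud import bigquery

client = bigquery.Client()
sql = "SELECT COUNT(*) AS n FROM `bigquery-public-data.samples.shakespeare`"

# use_query_cache defaults to True; spelled out here for clarity.
job_config = bigquery.QueryJobConfig(use_query_cache=True)

first = client.query(sql, job_config=job_config)
first.result()
second = client.query(sql, job_config=job_config)
second.result()

print(first.cache_hit, second.cache_hit)   # typically False, then True
print(second.total_bytes_billed)           # 0 when served from the cache
```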

NEW QUESTION 9

If you want to create a machine learning model that predicts the price of a particular stock based on its recent price history, what type of estimator should you use?

  • A. Unsupervised learning
  • B. Regressor
  • C. Classifier
  • D. Clustering estimator

Answer: B

Explanation:
Regression is the supervised learning task for modeling and predicting continuous, numeric variables. Examples include predicting real-estate prices, stock price movements, or student test scores.
Classification is the supervised learning task for modeling and predicting categorical variables. Examples include predicting employee churn, email spam, financial fraud, or student letter grades.
Clustering is an unsupervised learning task for finding natural groupings of observations (i.e. clusters) based on the inherent structure within your dataset. Examples include customer segmentation, grouping similar items in e-commerce, and social network analysis.
Reference: https://elitedatascience.com/machine-learning-algorithms
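
To ground the distinction, here is a tiny regressor trained on a synthetic price series with scikit-learn (purely illustrative, not a Google Cloud service); the features are the previous five prices and the label is the next price, a continuous target:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(seed=0)
prices = 100 + np.cumsum(rng.normal(0, 1, size=500))   # synthetic price history

window = 5
X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]                                     # continuous target -> regression

model = LinearRegression().fit(X[:-50], y[:-50])
print("R^2 on held-out data:", model.score(X[-50:], y[-50:]))
```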

NEW QUESTION 10

When running a pipeline that has a BigQuery source on your local machine, you continue to get permission denied errors. What could be the reason for that?

  • A. Your gcloud does not have access to the BigQuery resources
  • B. BigQuery cannot be accessed from local machines
  • C. You are missing gcloud on your machine
  • D. Pipelines cannot be run locally

Answer: A

Explanation:
When reading from a Dataflow source or writing to a Dataflow sink using DirectPipelineRunner, the Cloud Platform account that you configured with the gcloud executable will need access to the corresponding source/sink.
Reference:
https://cloud.google.com/dataflow/java-sdk/JavaDoc/com/google/cloud/dataflow/sdk/runners/DirectPipelineRun
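
In the Beam Python SDK the analogous local setup looks like the sketch below: the pipeline runs with the direct runner, and the BigQuery read succeeds only if the application-default credentials configured through gcloud can access the data. All resource names are hypothetical:

```python
# Assumes `gcloud auth application-default login` has been run, so the local
# runner can reach BigQuery with those credentials.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DirectRunner",
    project="my-project",                    # hypothetical project
    temp_location="gs://my-bucket/tmp",      # hypothetical bucket
)

with beam.Pipeline(options=options) as p:
    (
        p
        | beam.io.ReadFromBigQuery(
            query="SELECT 1 AS x",
            use_standard_sql=True,
            gcs_location="gs://my-bucket/tmp")  # staging area for the export-based read
        | beam.Map(print)
    )
```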

NEW QUESTION 11

You store historic data in Cloud Storage. You need to perform analytics on the historic data. You want to use a solution to detect invalid data entries and perform data transformations that will not require programming or knowledge of SQL.
What should you do?

  • A. Use Cloud Dataflow with Beam to detect errors and perform transformations.
  • B. Use Cloud Dataprep with recipes to detect errors and perform transformations.
  • C. Use Cloud Dataproc with a Hadoop job to detect errors and perform transformations.
  • D. Use federated tables in BigQuery with queries to detect errors and perform transformations.

Answer: A

NEW QUESTION 12

Your company has recently grown rapidly and is now ingesting data at a significantly higher rate than it was previously. You manage the daily batch MapReduce analytics jobs in Apache Hadoop. However, the recent increase in data has meant the batch jobs are falling behind. You were asked to recommend ways the development team could increase the responsiveness of the analytics without increasing costs. What should you recommend they do?

  • A. Rewrite the job in Pig.
  • B. Rewrite the job in Apache Spark.
  • C. Increase the size of the Hadoop cluster.
  • D. Decrease the size of the Hadoop cluster but also rewrite the job in Hive.

Answer: A

NEW QUESTION 13

When using Cloud Dataproc clusters, you can access the YARN web interface by configuring a browser to connect through a ____ proxy.

  • A. HTTPS
  • B. VPN
  • C. SOCKS
  • D. HTTP

Answer: C

Explanation:
When using Cloud Dataproc clusters, configure your browser to use the SOCKS proxy. The SOCKS proxy routes data intended for the Cloud Dataproc cluster through an SSH tunnel.
Reference: https://cloud.google.com/dataproc/docs/concepts/cluster-web-interfaces#interfaces
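
The documented pattern is an SSH tunnel to the cluster's master node plus a browser pointed at the local SOCKS port, sketched here as a small Python wrapper; the cluster name, zone, and port are assumptions:

```python
import subprocess

CLUSTER_MASTER = "my-cluster-m"   # hypothetical master node name
ZONE = "us-central1-a"
PORT = "1080"

# Open a SOCKS proxy on localhost:1080, tunnelled over SSH to the master node.
subprocess.Popen([
    "gcloud", "compute", "ssh", CLUSTER_MASTER,
    "--zone", ZONE,
    "--", "-D", PORT, "-N",
])

# A browser started with a SOCKS proxy setting, for example
#   chrome --proxy-server="socks5://localhost:1080" --user-data-dir=/tmp/proxy-profile
# can then reach the YARN UI at http://my-cluster-m:8088.
```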

NEW QUESTION 14

You receive data files in CSV format monthly from a third party. You need to cleanse this data, but every third month the schema of the files changes. Your requirements for implementing these transformations include:
  • Executing the transformations on a schedule
  • Enabling non-developer analysts to modify transformations
  • Providing a graphical tool for designing transformations
What should you do?

  • A. Use Cloud Dataprep to build and maintain the transformation recipes, and execute them on a scheduled basis.
  • B. Load each month’s CSV data into BigQuery, and write a SQL query to transform the data to a standard schema.
  • C. Merge the transformed tables together with a SQL query.
  • D. Help the analysts write a Cloud Dataflow pipeline in Python to perform the transformation.
  • E. The Python code should be stored in a revision control system and modified as the incoming data’s schema changes.
  • F. Use Apache Spark on Cloud Dataproc to infer the schema of the CSV file before creating a Dataframe. Then implement the transformations in Spark SQL before writing the data out to Cloud Storage and loading into BigQuery.

Answer: D

NEW QUESTION 15

You are designing a data processing pipeline. The pipeline must be able to scale automatically as load increases. Messages must be processed at least once, and must be ordered within windows of 1 hour. How should you design the solution?

  • A. Use Apache Kafka for message ingestion and use Cloud Dataproc for streaming analysis.
  • B. Use Apache Kafka for message ingestion and use Cloud Dataflow for streaming analysis.
  • C. Use Cloud Pub/Sub for message ingestion and Cloud Dataproc for streaming analysis.
  • D. Use Cloud Pub/Sub for message ingestion and Cloud Dataflow for streaming analysis.

Answer: C

NEW QUESTION 16

Which Cloud Dataflow / Beam feature should you use to aggregate data in an unbounded data source every hour based on the time when the data entered the pipeline?

  • A. An hourly watermark
  • B. An event time trigger
  • C. The withAllowedLateness method
  • D. A processing time trigger

Answer: D

Explanation:
When collecting and grouping data into windows, Beam uses triggers to determine when to emit the aggregated results of each window.
Processing time triggers. These triggers operate on the processing time – the time when the data element is processed at any given stage in the pipeline.
Event time triggers. These triggers operate on the event time, as indicated by the timestamp on each data element. Beam’s default trigger is event time-based.
Reference: https://beam.apache.org/documentation/programming-guide/#triggers
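
A hedged Beam Python sketch of a processing-time trigger that emits an aggregate roughly once per hour of pipeline wall-clock time, independent of the event timestamps; the bounded Create source is only a stand-in for an unbounded source such as Pub/Sub:

```python
import apache_beam as beam
from apache_beam.transforms import trigger, window

with beam.Pipeline() as p:
    (
        p
        # Bounded stand-in for an unbounded source such as Pub/Sub.
        | beam.Create([("ship", 1), ("ship", 2), ("truck", 3)])
        | beam.WindowInto(
            window.GlobalWindows(),
            # Fire repeatedly, based on processing (wall-clock) time.
            trigger=trigger.Repeatedly(trigger.AfterProcessingTime(60 * 60)),
            accumulation_mode=trigger.AccumulationMode.DISCARDING)
        | beam.CombinePerKey(sum)
        | beam.Map(print)
    )
```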

NEW QUESTION 17

You need to store and analyze social media postings in Google BigQuery at a rate of 10,000 messages per minute in near real-time. You initially design the application to use streaming inserts for individual postings. Your application also performs data aggregations right after the streaming inserts. You discover that the queries after streaming inserts do not exhibit strong consistency, and reports from the queries might miss in-flight data. How can you adjust your application design?

  • A. Re-write the application to load accumulated data every 2 minutes.
  • B. Convert the streaming insert code to batch load for individual messages.
  • C. Load the original message to Google Cloud SQL, and export the table every hour to BigQuery via streaming inserts.
  • D. Estimate the average latency for data availability after streaming inserts, and always run queries after waiting twice as long.

Answer: A
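
The two ingestion paths the options refer to can be sketched with the Python client: insert_rows_json performs streaming inserts, while load_table_from_json runs a load job, which is what accumulating messages for a couple of minutes and loading them in batches amounts to. The table ID and rows are hypothetical:

```python
from google.cloud import bigquery

client = bigquery.Client()
table_id = "my-project.social.postings"   # hypothetical table

rows = [{"user": "alice", "message": "hello", "ts": "2024-01-01T00:00:00Z"}]

# Streaming insert: rows go through the streaming buffer.
streaming_errors = client.insert_rows_json(table_id, rows)

# Batch load of rows accumulated over ~2 minutes: a load job whose output is
# fully available to queries as soon as the job completes.
job_config = bigquery.LoadJobConfig(
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND)
load_job = client.load_table_from_json(rows, table_id, job_config=job_config)
load_job.result()
```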

NEW QUESTION 18

You work for a global shipping company. You want to train a model on 40 TB of data to predict which ships in each geographic region are likely to cause delivery delays on any given day. The model will be based on multiple attributes collected from multiple sources. Telemetry data, including location in GeoJSON format, will be pulled from each ship and loaded every hour. You want to have a dashboard that shows how many and which ships are likely to cause delays within a region. You want to use a storage solution that has native functionality for prediction and geospatial processing. Which storage solution should you use?

  • A. BigQuery
  • B. Cloud Bigtable
  • C. Cloud Datastore
  • D. Cloud SQL for PostgreSQL

Answer: A
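
The native prediction and geospatial capabilities the question asks about correspond to BigQuery ML and BigQuery GIS functions. A hedged sketch of a model trained directly in BigQuery; the dataset, columns, and the distance-to-port feature are hypothetical:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")   # hypothetical project

sql = """
CREATE OR REPLACE MODEL `my-project.shipping.delay_model`
OPTIONS (model_type = 'LOGISTIC_REG', input_label_cols = ['is_delayed']) AS
SELECT
  is_delayed,
  region,
  speed_knots,
  -- Geospatial feature computed with BigQuery GIS functions:
  ST_DISTANCE(ST_GEOGPOINT(lon, lat),
              ST_GEOGPOINT(dest_port_lon, dest_port_lat)) AS meters_to_port
FROM `my-project.shipping.telemetry`
"""
client.query(sql).result()

# Predictions for the dashboard could then come from ML.PREDICT over fresh telemetry.
```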

NEW QUESTION 19

You have a data pipeline with a Cloud Dataflow job that aggregates and writes time series metrics to Cloud Bigtable. This data feeds a dashboard used by thousands of users across the organization. You need to support additional concurrent users and reduce the amount of time required to write the data. Which two actions should you take? (Choose two.)

  • A. Configure your Cloud Dataflow pipeline to use local execution
  • B. Increase the maximum number of Cloud Dataflow workers by setting maxNumWorkers in PipelineOptions
  • C. Increase the number of nodes in the Cloud Bigtable cluster
  • D. Modify your Cloud Dataflow pipeline to use the Flatten transform before writing to Cloud Bigtable
  • E. Modify your Cloud Dataflow pipeline to use the CoGroupByKey transform before writing to Cloud Bigtable

Answer: DE

NEW QUESTION 20

You work for an economic consulting firm that helps companies identify economic trends as they happen. As part of your analysis, you use Google BigQuery to correlate customer data with the average prices of the 100 most common goods sold, including bread, gasoline, milk, and others. The average prices of these goods are updated every 30 minutes. You want to make sure this data stays up to date so you can combine it with other data in BigQuery as cheaply as possible. What should you do?

  • A. Load the data every 30 minutes into a new partitioned table in BigQuery.
  • B. Store and update the data in a regional Google Cloud Storage bucket and create a federated data source in BigQuery
  • C. Store the data in Google Cloud Datastore.
  • D. Use Google Cloud Dataflow to query BigQuery and combine the data programmatically with the data stored in Cloud Datastore
  • E. Store the data in a file in a regional Google Cloud Storage bucket.
  • F. Use Cloud Dataflow to query BigQuery and combine the data programmatically with the data stored in Google Cloud Storage.

Answer: A
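
A scheduled load of the half-hourly price file into BigQuery, as in option A, incurs no query charges for the load itself. A minimal Python sketch with hypothetical bucket, object, and table names:

```python
from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    time_partitioning=bigquery.TimePartitioning(
        type_=bigquery.TimePartitioningType.DAY),
)

# Hypothetical object and table names; a scheduler would run this every 30 minutes.
load_job = client.load_table_from_uri(
    "gs://example-bucket/prices/latest.csv",
    "my-project.market.goods_prices",
    job_config=job_config)
load_job.result()
```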

NEW QUESTION 21

You are responsible for writing your company’s ETL pipelines to run on an Apache Hadoop cluster. The pipeline will require some checkpointing and splitting pipelines. Which method should you use to write the pipelines?

  • A. PigLatin using Pig
  • B. HiveQL using Hive
  • C. Java using MapReduce
  • D. Python using MapReduce

Answer: D

NEW QUESTION 22

Flowlogistic wants to use Google BigQuery as their primary analysis system, but they still have Apache Hadoop and Spark workloads that they cannot move to BigQuery. Flowlogistic does not know how to store the data that is common to both workloads. What should they do?

  • A. Store the common data in BigQuery as partitioned tables.
  • B. Store the common data in BigQuery and expose authorized views.
  • C. Store the common data encoded as Avro in Google Cloud Storage.
  • D. Store the common data in the HDFS storage for a Google Cloud Dataproc cluster.

Answer: B
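
Option B typically amounts to creating a view over the shared data and authorizing that view on the source dataset, so consumers can query the view without direct access to the underlying table. A hedged Python sketch with hypothetical project, dataset, and table names:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")   # hypothetical project

# 1. Create the view in a dataset that analysts can read.
view = bigquery.Table("my-project.shared_views.common_shipments")
view.view_query = "SELECT * FROM `my-project.warehouse.common_shipments`"
client.create_table(view, exists_ok=True)

# 2. Authorize the view on the source dataset so it can read the underlying
#    table on behalf of its users.
source = client.get_dataset("my-project.warehouse")
entries = list(source.access_entries)
entries.append(bigquery.AccessEntry(
    role=None,
    entity_type="view",
    entity_id={"projectId": "my-project",
               "datasetId": "shared_views",
               "tableId": "common_shipments"}))
source.access_entries = entries
client.update_dataset(source, ["access_entries"])
```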

NEW QUESTION 23

You decided to use Cloud Datastore to ingest vehicle telemetry data in real time. You want to build a storage system that will account for the long-term data growth, while keeping the costs low. You also want to create snapshots of the data periodically, so that you can make a point-in-time (PIT) recovery, or clone a copy of the data for Cloud Datastore in a different environment. You want to archive these snapshots for a long time. Which two methods can accomplish this? Choose 2 answers.

  • A. Use managed export, and store the data in a Cloud Storage bucket using Nearline or Coldline class.
  • B. Use managed export, and then import to Cloud Datastore in a separate project under a unique namespace reserved for that export.
  • C. Use managed export, and then import the data into a BigQuery table created just for that export, and delete temporary export files.
  • D. Write an application that uses Cloud Datastore client libraries to read all the entities.
  • E. Treat each entity as a BigQuery table row via BigQuery streaming insert.
  • F. Assign an export timestamp for each export, and attach it as an extra column for each row.
  • G. Make sure that the BigQuery table is partitioned using the export timestamp column.
  • H. Write an application that uses Cloud Datastore client libraries to read all the entities.
  • I. Format the exported data into a JSON file.
  • J. Apply compression before storing the data in Cloud Source Repositories.

Answer: CE

NEW QUESTION 24
......

Recommended! Get the full Professional-Data-Engineer dumps in VCE and PDF from Certshared. Welcome to download: https://www.certshared.com/exam/Professional-Data-Engineer/ (New 239 Q&As Version)