How Many Questions Of Professional-Data-Engineer Testing Software

100% guarantee on Professional-Data-Engineer free download materials and VCE for the Google certification. Real success guaranteed with updated Professional-Data-Engineer PDF dumps and VCE materials. 100% pass the Google Professional Data Engineer exam today!

We also have free Professional-Data-Engineer dump questions for you:

NEW QUESTION 1

Which of these rules apply when you add preemptible workers to a Dataproc cluster (select 2 answers)?

  • A. Preemptible workers cannot use persistent disk.
  • B. Preemptible workers cannot store data.
  • C. If a preemptible worker is reclaimed, then a replacement worker must be added manually.
  • D. A Dataproc cluster cannot have only preemptible workers.

Answer: BD

Explanation:
The following rules will apply when you use preemptible workers with a Cloud Dataproc cluster:
Processing only—Since preemptibles can be reclaimed at any time, preemptible workers do not store data. Preemptibles added to a Cloud Dataproc cluster only function as processing nodes.
No preemptible-only clusters—To ensure clusters do not lose all workers, Cloud Dataproc cannot create preemptible-only clusters.
Persistent disk size—As a default, all preemptible workers are created with the smaller of 100 GB or the primary worker boot disk size. This disk space is used for local caching of data and is not available through HDFS.
The managed group automatically re-adds workers lost due to reclamation as capacity permits.
Reference: https://cloud.google.com/dataproc/docs/concepts/preemptible-vms
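
As a rough sketch (not part of the original question), preemptible workers are added when the cluster is created or resized; the cluster name, zone, and worker counts below are placeholders, and the exact flag names vary between gcloud releases (newer releases use --num-secondary-workers):

```bash
# Hypothetical example: create a Dataproc cluster with 2 standard workers,
# then add preemptible workers for extra processing capacity.
gcloud dataproc clusters create demo-cluster \
    --zone=us-central1-a \
    --num-workers=2 \
    --num-preemptible-workers=4

# Scale the preemptible worker count later; the managed instance group
# re-adds reclaimed workers automatically as capacity permits.
gcloud dataproc clusters update demo-cluster --num-preemptible-workers=8
```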

NEW QUESTION 2

You are a retailer that wants to integrate your online sales capabilities with different in-home assistants, such as Google Home. You need to interpret customer voice commands and issue an order to the backend systems. Which solution should you choose?

  • A. Cloud Speech-to-Text API
  • B. Cloud Natural Language API
  • C. Dialogflow Enterprise Edition
  • D. Cloud AutoML Natural Language

Answer: D

NEW QUESTION 3

As your organization expands its usage of GCP, many teams have started to create their own projects. Projects are further multiplied to accommodate different stages of deployments and target audiences. Each project requires unique access control configurations. The central IT team needs to have access to all projects. Furthermore, data from Cloud Storage buckets and BigQuery datasets must be shared for use in other projects in an ad hoc way. You want to simplify access control management by minimizing the number of policies. Which two steps should you take? Choose 2 answers.

  • A. Use Cloud Deployment Manager to automate access provision.
  • B. Introduce resource hierarchy to leverage access control policy inheritance.
  • C. Create distinct groups for various teams, and specify groups in Cloud IAM policies.
  • D. Only use service accounts when sharing data for Cloud Storage buckets and BigQuery datasets.
  • E. For each Cloud Storage bucket or BigQuery dataset, decide which projects need access. Find all the active members who have access to these projects, and create a Cloud IAM policy to grant access to all these users.

Answer: AC
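
A minimal sketch of the group-based binding in option C, assuming a placeholder project and group; binding a group rather than individual users keeps the number of IAM policies small:

```bash
# Hypothetical example: grant a team's Google group a role on a project,
# so membership changes do not require editing the IAM policy.
gcloud projects add-iam-policy-binding my-analytics-project \
    --member="group:data-science-team@example.com" \
    --role="roles/bigquery.dataViewer"
```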

NEW QUESTION 4

To run a TensorFlow training job on your own computer using Cloud Machine Learning Engine, what would your command start with?

  • A. gcloud ml-engine local train
  • B. gcloud ml-engine jobs submit training
  • C. gcloud ml-engine jobs submit training local
  • D. You can't run a TensorFlow program on your own computer using Cloud ML Engine.

Answer: A

Explanation:
gcloud ml-engine local train - run a Cloud ML Engine training job locally.
This command runs the specified module in an environment similar to that of a live Cloud ML Engine Training Job. This is especially useful in the case of testing distributed models, as it allows you to validate that you are properly interacting with the Cloud ML Engine cluster configuration.
Reference: https://cloud.google.com/sdk/gcloud/reference/ml-engine/local/train
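
A minimal local-training sketch, assuming a placeholder trainer package; arguments after the bare "--" are passed to the trainer itself:

```bash
# Hypothetical example: run the trainer module locally before submitting a cloud job.
gcloud ml-engine local train \
    --module-name=trainer.task \
    --package-path=./trainer \
    -- \
    --train-files=./data/train.csv \
    --job-dir=./output
```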

NEW QUESTION 5

You want to use Google Stackdriver Logging to monitor Google BigQuery usage. You need an instant notification to be sent to your monitoring tool when new data is appended to a certain table using an insert job, but you do not want to receive notifications for other tables. What should you do?

  • A. Make a call to the Stackdriver API to list all logs, and apply an advanced filter.
  • B. In the Stackdriver logging admin interface, enable a log sink export to BigQuery.
  • C. In the Stackdriver logging admin interface, enable a log sink export to Google Cloud Pub/Sub, and subscribe to the topic from your monitoring tool.
  • D. Using the Stackdriver API, create a project sink with advanced log filter to export to Pub/Sub, and subscribe to the topic from your monitoring tool.

Answer: B
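
For illustration only, this is roughly how a log sink with a filter is created from the command line; the sink name, project, dataset, and filter are placeholders, and the filter would still need to be narrowed to the specific destination table using the BigQuery audit-log schema:

```bash
# Hypothetical example: export only BigQuery insert-job audit entries to a sink.
gcloud logging sinks create bq-insert-sink \
    bigquery.googleapis.com/projects/my-project/datasets/audit_logs \
    --log-filter='resource.type="bigquery_resource" AND protoPayload.methodName="jobservice.insert"'
```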

NEW QUESTION 6

You need to migrate a 2TB relational database to Google Cloud Platform. You do not have the resources to significantly refactor the application that uses this database and cost to operate is of primary concern.
Which service do you select for storing and serving your data?

  • A. Cloud Spanner
  • B. Cloud Bigtable
  • C. Cloud Firestore
  • D. Cloud SQL

Answer: D
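
A minimal sketch of provisioning Cloud SQL for such a migration; the instance name, tier, and region are placeholders:

```bash
# Hypothetical example: a Cloud SQL (MySQL) instance sized for a ~2 TB database.
gcloud sql instances create legacy-db \
    --database-version=MYSQL_5_7 \
    --tier=db-n1-standard-4 \
    --region=us-central1 \
    --storage-size=2048   # storage size in GB
```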

NEW QUESTION 7

If you're running a performance test that depends upon Cloud Bigtable, all but one of the choices below are recommended steps. Which one is NOT a recommended step to follow?

  • A. Do not use a production instance.
  • B. Run your test for at least 10 minutes.
  • C. Before you test, run a heavy pre-test for several minutes.
  • D. Use at least 300 GB of data.

Answer: A

Explanation:
If you're running a performance test that depends upon Cloud Bigtable, be sure to follow these steps as you
plan and execute your test:
Use a production instance. A development instance will not give you an accurate sense of how a production instance performs under load.
Use at least 300 GB of data. Cloud Bigtable performs best with 1 TB or more of data. However, 300 GB of data is enough to provide reasonable results in a performance test on a 3-node cluster. On larger clusters, use 100 GB of data per node.
Before you test, run a heavy pre-test for several minutes. This step gives Cloud Bigtable a chance to balance data across your nodes based on the access patterns it observes.
Run your test for at least 10 minutes. This step lets Cloud Bigtable further optimize your data, and it helps ensure that you will test reads from disk as well as cached reads from memory.
Reference: https://cloud.google.com/bigtable/docs/performance
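
A sketch of setting up a production instance for such a test; the instance ID, cluster ID, and zone are placeholders, and these flag names may differ in newer gcloud releases:

```bash
# Hypothetical example: a 3-node production instance for a performance test
# (a development instance would not reflect behaviour under load).
gcloud bigtable instances create perf-test-instance \
    --display-name="Perf test" \
    --instance-type=PRODUCTION \
    --cluster=perf-test-c1 \
    --cluster-zone=us-central1-b \
    --cluster-num-nodes=3
```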

NEW QUESTION 8

You currently have a single on-premises Kafka cluster in a data center in the us-east region that is responsible for ingesting messages from IoT devices globally. Because large parts of globe have poor internet connectivity, messages sometimes batch at the edge, come in all at once, and cause a spike in load on your Kafka cluster. This is becoming difficult to manage and prohibitively expensive. What is the
Google-recommended cloud native architecture for this scenario?

  • A. Edge TPUs as sensor devices for storing and transmitting the messages.
  • B. Cloud Dataflow connected to the Kafka cluster to scale the processing of incoming messages.
  • C. An IoT gateway connected to Cloud Pub/Sub, with Cloud Dataflow to read and process the messages from Cloud Pub/Sub.
  • D. A Kafka cluster virtualized on Compute Engine in us-east with Cloud Load Balancing to connect to the devices around the world.

Answer: C
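
A rough sketch of the Pub/Sub plus Dataflow path in option C, using a Google-provided streaming template; the topic, job, region, bucket, and table names are placeholders:

```bash
# Hypothetical example: devices (via an IoT gateway) publish to a topic,
# and a streaming Dataflow template processes the messages.
gcloud pubsub topics create iot-ingest

gcloud dataflow jobs run iot-pipeline \
    --gcs-location=gs://dataflow-templates/latest/PubSub_to_BigQuery \
    --region=us-central1 \
    --staging-location=gs://my-bucket/temp \
    --parameters=inputTopic=projects/my-project/topics/iot-ingest,outputTableSpec=my-project:iot.events
```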

NEW QUESTION 9

You set up a streaming data insert into a Redis cluster via a Kafka cluster. Both clusters are running on Compute Engine instances. You need to encrypt data at rest with encryption keys that you can create, rotate, and destroy as needed. What should you do?

  • A. Create a dedicated service account, and use encryption at rest to reference your data stored in your Compute Engine cluster instances as part of your API service calls.
  • B. Create encryption keys in Cloud Key Management Service. Use those keys to encrypt your data in all of the Compute Engine cluster instances.
  • C. Create encryption keys locally. Upload your encryption keys to Cloud Key Management Service. Use those keys to encrypt your data in all of the Compute Engine cluster instances.
  • D. Create encryption keys in Cloud Key Management Service. Reference those keys in your API service calls when accessing the data in your Compute Engine cluster instances.

Answer: B
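
A minimal sketch of the Cloud KMS flow in option B; the keyring, key, and file names are placeholders:

```bash
# Hypothetical example: create a KMS key and use it to encrypt data
# before it lands on the Compute Engine instances' disks.
gcloud kms keyrings create streaming-keys --location=global
gcloud kms keys create redis-data-key \
    --location=global --keyring=streaming-keys --purpose=encryption

gcloud kms encrypt \
    --location=global --keyring=streaming-keys --key=redis-data-key \
    --plaintext-file=snapshot.rdb \
    --ciphertext-file=snapshot.rdb.enc
```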

NEW QUESTION 10

Which row keys are likely to cause a disproportionate number of reads and/or writes on a particular node in a Bigtable cluster (select 2 answers)?

  • A. A sequential numeric ID
  • B. A timestamp followed by a stock symbol
  • C. A non-sequential numeric ID
  • D. A stock symbol followed by a timestamp

Answer: AB

Explanation:
Using a timestamp as the first element of a row key can cause a variety of problems.
In brief, when a row key for a time series includes a timestamp, all of your writes will target a single node, fill that node, and then move on to the next node in the cluster, resulting in hotspotting.
Suppose your system assigns a numeric ID to each of your application's users. You might be tempted to use the user's numeric ID as the row key for your table. However, since new users are more likely to be active users, this approach is likely to push most of your traffic to a small number of nodes. [https://cloud.google.com/bigtable/docs/schema-design]
Reference:
https://cloud.google.com/bigtable/docs/schema-design-time-series#ensure_that_your_row_key_avoids_hotspotti

NEW QUESTION 11

You are training a spam classifier. You notice that you are overfitting the training data. Which three actions can you take to resolve this problem? (Choose three.)

  • A. Get more training examples
  • B. Reduce the number of training examples
  • C. Use a smaller set of features
  • D. Use a larger set of features
  • E. Increase the regularization parameters
  • F. Decrease the regularization parameters

Answer: ADF

NEW QUESTION 12

Your company receives both batch- and stream-based event data. You want to process the data using Google Cloud Dataflow over a predictable time period. However, you realize that in some instances data can arrive late or out of order. How should you design your Cloud Dataflow pipeline to handle data that is late or out of order?

  • A. Set a single global window to capture all the data.
  • B. Set sliding windows to capture all the lagged data.
  • C. Use watermarks and timestamps to capture the lagged data.
  • D. Ensure every datasource type (stream or batch) has a timestamp, and use the timestamps to define the logic for lagged data.

Answer: B

NEW QUESTION 13

You have a data pipeline that writes data to Cloud Bigtable using well-designed row keys. You want to monitor your pipeline to determine when to increase the size of your Cloud Bigtable cluster. Which two actions can you take to accomplish this? Choose 2 answers.

  • A. Review Key Visualizer metrics. Increase the size of the Cloud Bigtable cluster when the Read pressure index is above 100.
  • B. Review Key Visualizer metrics. Increase the size of the Cloud Bigtable cluster when the Write pressure index is above 100.
  • C. Monitor the latency of write operations. Increase the size of the Cloud Bigtable cluster when there is a sustained increase in write latency.
  • D. Monitor storage utilization. Increase the size of the Cloud Bigtable cluster when utilization increases above 70% of max capacity.
  • E. Monitor the latency of read operations. Increase the size of the Cloud Bigtable cluster if read operations take longer than 100 ms.

Answer: AC

NEW QUESTION 14

You work on a regression problem in a natural language processing domain, and you have 100M labeled examples in your dataset. You have randomly shuffled your data and split your dataset into train and test samples (in a 90/10 ratio). After you trained the neural network and evaluated your model on a test set, you discover that the root-mean-squared error (RMSE) of your model is twice as high on the train set as on the test set. How should you improve the performance of your model?

  • A. Increase the share of the test sample in the train-test split.
  • B. Try to collect more data and increase the size of your dataset.
  • C. Try out regularization techniques (e.g., dropout or batch normalization) to avoid overfitting.
  • D. Increase the complexity of your model by, e.g., introducing an additional layer or increasing the size of the vocabularies or n-grams used.

Answer: D

NEW QUESTION 15

Your infrastructure includes a set of YouTube channels. You have been tasked with creating a process for sending the YouTube channel data to Google Cloud for analysis. You want to design a solution that allows your world-wide marketing teams to perform ANSI SQL and other types of analysis on up-to-date YouTube channels log data. How should you set up the log data transfer into Google Cloud?

  • A. Use Storage Transfer Service to transfer the offsite backup files to a Cloud Storage Multi-Regional storage bucket as a final destination.
  • B. Use Storage Transfer Service to transfer the offsite backup files to a Cloud Storage Regional bucket as a final destination.
  • C. Use BigQuery Data Transfer Service to transfer the offsite backup files to a Cloud Storage Multi-Regional storage bucket as a final destination.
  • D. Use BigQuery Data Transfer Service to transfer the offsite backup files to a Cloud Storage Regional storage bucket as a final destination.

Answer: B

NEW QUESTION 16

Which methods can be used to reduce the number of rows processed by BigQuery?

  • A. Splitting tables into multiple tables; putting data in partitions
  • B. Splitting tables into multiple tables; putting data in partitions; using the LIMIT clause
  • C. Putting data in partitions; using the LIMIT clause
  • D. Splitting tables into multiple tables; using the LIMIT clause

Answer: A

Explanation:
If you split a table into multiple tables (such as one table for each day), then you can limit your query to the data in specific tables (such as for particular days). A better method is to use a partitioned table, as long as your data can be separated by the day.
If you use the LIMIT clause, BigQuery will still process the entire table.
Reference: https://cloud.google.com/bigquery/docs/partitioned-tables
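
A short sketch of the partitioned-table approach; the dataset, table, and schema are placeholders:

```bash
# Hypothetical example: create a day-partitioned table, then query a single day
# so BigQuery scans only that partition instead of the whole table.
bq mk --table \
    --time_partitioning_type=DAY \
    mydataset.events \
    event_id:STRING,event_ts:TIMESTAMP,payload:STRING

bq query --use_legacy_sql=false \
    'SELECT COUNT(*) FROM mydataset.events
     WHERE _PARTITIONTIME = TIMESTAMP("2019-06-01")'
```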

NEW QUESTION 17

Your startup has never implemented a formal security policy. Currently, everyone in the company has access to the datasets stored in Google BigQuery. Teams have freedom to use the service as they see fit, and they have not documented their use cases. You have been asked to secure the data warehouse. You need to discover what everyone is doing. What should you do first?

  • A. Use Google Stackdriver Audit Logs to review data access.
  • B. Get the identity and access management (IAM) policy of each table.
  • C. Use Stackdriver Monitoring to see the usage of BigQuery query slots.
  • D. Use the Google Cloud Billing API to see what account the warehouse is being billed to.

Answer: C

NEW QUESTION 18

Which role must be assigned to a service account used by the virtual machines in a Dataproc cluster so they can execute jobs?

  • A. Dataproc Worker
  • B. Dataproc Viewer
  • C. Dataproc Runner
  • D. Dataproc Editor

Answer: A

Explanation:
Service accounts used with Cloud Dataproc must have the Dataproc Worker role (or have all the permissions granted by the Dataproc Worker role).
Reference: https://cloud.google.com/dataproc/docs/concepts/service-accounts#important_notes
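
A minimal sketch of granting that role; the project ID, service account, and cluster name are placeholders:

```bash
# Hypothetical example: grant the Dataproc Worker role to the service account
# used by the cluster VMs, then create the cluster with that account.
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:dataproc-sa@my-project.iam.gserviceaccount.com" \
    --role="roles/dataproc.worker"

gcloud dataproc clusters create my-cluster \
    --service-account=dataproc-sa@my-project.iam.gserviceaccount.com
```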

NEW QUESTION 19

Which of the following job types are supported by Cloud Dataproc (select 3 answers)?

  • A. Hive
  • B. Pig
  • C. YARN
  • D. Spark

Answer: ABD

Explanation:
Cloud Dataproc provides out-of-the box and end-to-end support for many of the most popular job types, including Spark, Spark SQL, PySpark, MapReduce, Hive, and Pig jobs.
Reference: https://cloud.google.com/dataproc/docs/resources/faq#what_type_of_jobs_can_i_run
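
For illustration, job submission follows the same pattern for each supported type; the cluster name, jar path, and statements are placeholders:

```bash
# Hypothetical examples: submit Spark, Hive, and Pig jobs to an existing cluster.
gcloud dataproc jobs submit spark \
    --cluster=my-cluster \
    --class=org.apache.spark.examples.SparkPi \
    --jars=file:///usr/lib/spark/examples/jars/spark-examples.jar -- 1000

gcloud dataproc jobs submit hive --cluster=my-cluster -e "SHOW TABLES;"

gcloud dataproc jobs submit pig --cluster=my-cluster -e "fs -ls /;"
```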

NEW QUESTION 20

You want to archive data in Cloud Storage. Because some data is very sensitive, you want to use the “Trust No One” (TNO) approach to encrypt your data to prevent the cloud provider staff from decrypting your data. What should you do?

  • A. Use gcloud kms keys create to create a symmetric key. Then use gcloud kms encrypt to encrypt each archival file with the key and unique additional authenticated data (AAD). Use gsutil cp to upload each encrypted file to the Cloud Storage bucket, and keep the AAD outside of Google Cloud.
  • B. Use gcloud kms keys create to create a symmetric key. Then use gcloud kms encrypt to encrypt each archival file with the key. Use gsutil cp to upload each encrypted file to the Cloud Storage bucket. Manually destroy the key previously used for encryption, and rotate the key once.
  • C. Specify a customer-supplied encryption key (CSEK) in the .boto configuration file. Use gsutil cp to upload each archival file to the Cloud Storage bucket. Save the CSEK in Cloud Memorystore as permanent storage of the secret.
  • D. Specify a customer-supplied encryption key (CSEK) in the .boto configuration file. Use gsutil cp to upload each archival file to the Cloud Storage bucket. Save the CSEK in a different project that only the security team can access.

Answer: A
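
A sketch of the approach in option A, assuming placeholder key, file, and bucket names; the per-file AAD is generated locally and kept outside of Google Cloud:

```bash
# Hypothetical example: encrypt each archive with a KMS key plus additional
# authenticated data (AAD), upload only the ciphertext, keep the AAD on-premises.
openssl rand 32 > archive-2019-06.aad

gcloud kms encrypt \
    --location=global --keyring=archive-keys --key=archive-key \
    --additional-authenticated-data-file=archive-2019-06.aad \
    --plaintext-file=archive-2019-06.tar \
    --ciphertext-file=archive-2019-06.tar.enc

gsutil cp archive-2019-06.tar.enc gs://my-tno-archive/
```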

NEW QUESTION 21

You want to migrate an on-premises Hadoop system to Cloud Dataproc. Hive is the primary tool in use, and the data format is Optimized Row Columnar (ORC). All ORC files have been successfully copied to a Cloud Storage bucket. You need to replicate some data to the cluster’s local Hadoop Distributed File System (HDFS) to maximize performance. What are two ways to start using Hive in Cloud Dataproc? (Choose two.)

  • A. Run the gsutil utility to transfer all ORC files from the Cloud Storage bucket to HDFS. Mount the Hive tables locally.
  • B. Run the gsutil utility to transfer all ORC files from the Cloud Storage bucket to any node of the Dataproc cluster. Mount the Hive tables locally.
  • C. Run the gsutil utility to transfer all ORC files from the Cloud Storage bucket to the master node of the Dataproc cluster. Then run the Hadoop utility to copy them to HDFS. Mount the Hive tables from HDFS.
  • D. Leverage the Cloud Storage connector for Hadoop to mount the ORC files as external Hive tables. Replicate external Hive tables to the native ones.
  • E. Load the ORC files into BigQuery. Leverage the BigQuery connector for Hadoop to mount the BigQuery tables as external Hive tables. Replicate external Hive tables to the native ones.

Answer: BC
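
A short sketch of the flow in option C, run on the cluster's master node; the bucket and HDFS paths are placeholders:

```bash
# Hypothetical example: pull ORC files from Cloud Storage onto the master node,
# then copy them into HDFS for Hive to use.
gsutil -m cp gs://my-orc-bucket/sales/*.orc /tmp/orc/

hdfs dfs -mkdir -p /user/hive/warehouse/sales_orc
hdfs dfs -put /tmp/orc/*.orc /user/hive/warehouse/sales_orc/
```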

NEW QUESTION 22

You need to choose a database to store time series CPU and memory usage for millions of computers. You need to store this data in one-second interval samples. Analysts will be performing real-time, ad hoc analytics against the database. You want to avoid being charged for every query executed and ensure that the schema design will allow for future growth of the dataset. Which database and data model should you choose?

  • A. Create a table in BigQuery, and append the new samples for CPU and memory to the table
  • B. Create a wide table in BigQuery, create a column for the sample value at each second, and update the row with the interval for each second
  • C. Create a narrow table in Cloud Bigtable with a row key that combines the Compute Engine computer identifier with the sample time at each second
  • D. Create a wide table in Cloud Bigtable with a row key that combines the computer identifier with the sample time at each minute, and combine the values for each second as column data.

Answer: D
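
A sketch of the wide-table design in option D using the cbt CLI; the instance, table, and column names are placeholders:

```bash
# Hypothetical example: row key = <computer id>#<minute bucket>,
# with one column per second of samples inside that minute.
cbt -instance=metrics-instance createtable usage
cbt -instance=metrics-instance createfamily usage stats

cbt -instance=metrics-instance set usage "vm-0042#2019-06-01-1203" \
    stats:cpu_s00=0.71 stats:cpu_s01=0.69

# Scan one machine's samples by row-key prefix.
cbt -instance=metrics-instance read usage prefix="vm-0042#"
```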

NEW QUESTION 23

Flowlogistic’s CEO wants to gain rapid insight into their customer base so his sales team can be better informed in the field. This team is not very technical, so they’ve purchased a visualization tool to simplify the creation of BigQuery reports. However, they’ve been overwhelmed by all the data in the table, and are spending a lot of money on queries trying to find the data they need. You want to solve their problem in the most cost-effective way. What should you do?

  • A. Export the data into a Google Sheet for visualization.
  • B. Create an additional table with only the necessary columns.
  • C. Create a view on the table to present to the visualization tool.
  • D. Create identity and access management (IAM) roles on the appropriate columns, so only they appear in a query.

Answer: C
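
A minimal sketch of option C, assuming placeholder dataset, view, and column names; the view exposes only the columns the sales team needs, so their ad hoc queries scan less data:

```bash
# Hypothetical example: create a view over the wide table for the visualization tool.
bq mk --use_legacy_sql=false \
    --view='SELECT customer_id, region, total_spend
            FROM `my-project.sales.transactions`' \
    sales.customer_summary
```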

NEW QUESTION 24
......

P.S. Easily pass the Professional-Data-Engineer exam with the 239 Q&As in the Dumps-hub.com Dumps & PDF version. Welcome to download the newest Dumps-hub.com Professional-Data-Engineer dumps: https://www.dumps-hub.com/Professional-Data-Engineer-dumps.html (239 New Questions)