How Many Questions Of DAS-C01 Dumps

Your success in the Amazon-Web-Services DAS-C01 exam is our sole target, and we develop all our DAS-C01 braindumps to help you attain it. Our DAS-C01 study material is not only the best you can find, but also the most detailed and the most up to date. DAS-C01 Practice Exams for Amazon-Web-Services DAS-C01 are written to the highest standards of technical accuracy.

Online DAS-C01 free questions and answers from the new version:

NEW QUESTION 1
A media content company has a streaming playback application. The company wants to collect and analyze the data to provide near-real-time feedback on playback issues. The company needs to consume this data and return results within 30 seconds according to the service-level agreement (SLA). The company needs the consumer to identify playback issues, such as quality during a specified timeframe. The data will be emitted as JSON and may change schemas over time.
Which solution will allow the company to collect data for processing while meeting these requirements?

  • A. Send the data to Amazon Kinesis Data Firehose with delivery to Amazon S3. Configure an S3 event to trigger an AWS Lambda function to process the data. The Lambda function will consume the data and process it to identify potential playback issues. Persist the raw data to Amazon S3.
  • B. Send the data to Amazon Managed Streaming for Apache Kafka and configure an Amazon Kinesis Analytics for Java application as the consumer. The application will consume the data and process it to identify potential playback issues. Persist the raw data to Amazon DynamoDB.
  • C. Send the data to Amazon Kinesis Data Firehose with delivery to Amazon S3. Configure Amazon S3 to trigger an event for AWS Lambda to process the data. The Lambda function will consume the data and process it to identify potential playback issues. Persist the raw data to Amazon DynamoDB.
  • D. Send the data to Amazon Kinesis Data Streams and configure an Amazon Kinesis Analytics for Java application as the consumer. The application will consume the data and process it to identify potential playback issues. Persist the raw data to Amazon S3.

Answer: D

Explanation:
https://aws.amazon.com/blogs/aws/new-amazon-kinesis-data-analytics-for-java/
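
For context, a minimal sketch of the ingestion side of the keyed answer: a producer writing JSON playback events into a Kinesis data stream with boto3. The stream name, partition key, and event fields are illustrative assumptions, not taken from the question.

    import json
    import boto3

    kinesis = boto3.client("kinesis")

    def send_playback_event(event: dict) -> None:
        # The partition key spreads events across shards; a per-session key is one option.
        kinesis.put_record(
            StreamName="playback-events",           # hypothetical stream name
            Data=json.dumps(event).encode("utf-8"),
            PartitionKey=event.get("session_id", "default"),
        )

    send_playback_event({"session_id": "abc123", "bitrate_kbps": 3500, "buffering_ms": 420})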

NEW QUESTION 2
A data analytics specialist is setting up workload management in manual mode for an Amazon Redshift environment. The data analytics specialist is defining query monitoring rules to manage system performance and user experience of an Amazon Redshift cluster.
Which elements must each query monitoring rule include?

  • A. A unique rule name, a query runtime condition, and an AWS Lambda function to resubmit any failed queries in off hours
  • B. A queue name, a unique rule name, and a predicate-based stop condition
  • C. A unique rule name, one to three predicates, and an action
  • D. A workload name, a unique rule name, and a query runtime-based condition

Answer: C
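
The keyed answer mirrors how query monitoring rules are expressed in the wlm_json_configuration parameter: each rule carries a unique name, one to three predicates, and an action. A rough sketch of applying such a rule with boto3 follows; the parameter group name, queue settings, and thresholds are assumptions.

    import json
    import boto3

    redshift = boto3.client("redshift")

    # One QMR: a unique rule name, predicates, and an action (log, hop, or abort).
    wlm_config = [{
        "query_group": [],
        "user_group": [],
        "query_concurrency": 5,
        "rules": [{
            "rule_name": "abort_long_scans",        # unique rule name
            "predicate": [
                {"metric_name": "query_execution_time", "operator": ">", "value": 120},
                {"metric_name": "scan_row_count", "operator": ">", "value": 1000000000},
            ],
            "action": "abort",
        }],
    }]

    redshift.modify_cluster_parameter_group(
        ParameterGroupName="custom-wlm",            # hypothetical parameter group
        Parameters=[{
            "ParameterName": "wlm_json_configuration",
            "ParameterValue": json.dumps(wlm_config),
        }],
    )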

NEW QUESTION 3
An Amazon Redshift database contains sensitive user data. Logging is necessary to meet compliance requirements. The logs must contain database authentication attempts, connections, and disconnections. The logs must also contain each query run against the database and record which database user ran each query.
Which steps will create the required logs?

  • A. Enable Amazon Redshift Enhanced VPC Routing. Enable VPC Flow Logs to monitor traffic.
  • B. Allow access to the Amazon Redshift database using AWS IAM only. Log access using AWS CloudTrail.
  • C. Enable audit logging for Amazon Redshift using the AWS Management Console or the AWS CLI.
  • D. Enable and download audit reports from AWS Artifact.

Answer: C
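
A minimal sketch of the keyed answer using the boto3 equivalents of the console/CLI steps: enable audit logging to S3, plus the enable_user_activity_logging parameter so each query and the database user who ran it are recorded. All names are illustrative assumptions.

    import boto3

    redshift = boto3.client("redshift")

    # Connection and authentication logs delivered to S3.
    redshift.enable_logging(
        ClusterIdentifier="analytics-cluster",   # hypothetical cluster
        BucketName="redshift-audit-logs",        # hypothetical bucket
        S3KeyPrefix="audit/",
    )

    # The user activity log (one record per query, with the database user)
    # is switched on through the cluster's parameter group.
    redshift.modify_cluster_parameter_group(
        ParameterGroupName="custom-params",      # hypothetical parameter group
        Parameters=[{
            "ParameterName": "enable_user_activity_logging",
            "ParameterValue": "true",
        }],
    )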

NEW QUESTION 4
A data analyst is using Amazon QuickSight for data visualization across multiple datasets generated by applications. Each application stores files within a separate Amazon S3 bucket. AWS Glue Data Catalog is used as a central catalog across all application data in Amazon S3. A new application stores its data within a separate S3 bucket. After updating the catalog to include the new application data source, the data analyst created a new Amazon QuickSight data source from an Amazon Athena table, but the import into SPICE failed.
How should the data analyst resolve the issue?

  • A. Edit the permissions for the AWS Glue Data Catalog from within the Amazon QuickSight console.
  • B. Edit the permissions for the new S3 bucket from within the Amazon QuickSight console.
  • C. Edit the permissions for the AWS Glue Data Catalog from within the AWS Glue console.
  • D. Edit the permissions for the new S3 bucket from within the S3 console.

Answer: B

NEW QUESTION 5
A company has 1 million scanned documents stored as image files in Amazon S3. The documents contain typewritten application forms with information including the applicant first name, applicant last name, application date, application type, and application text. The company has developed a machine learning algorithm to extract the metadata values from the scanned documents. The company wants to allow internal data analysts to analyze and find applications using the applicant name, application date, or application text. The original images should also be downloadable. Cost control is secondary to query performance.
Which solution organizes the images and metadata to drive insights while meeting the requirements?

  • A. For each image, use object tags to add the metadata. Use Amazon S3 Select to retrieve the files based on the applicant name and application date.
  • B. Index the metadata and the Amazon S3 location of the image file in Amazon Elasticsearch Service. Allow the data analysts to use Kibana to submit queries to the Elasticsearch cluster.
  • C. Store the metadata and the Amazon S3 location of the image file in an Amazon Redshift table. Allow the data analysts to run ad-hoc queries on the table.
  • D. Store the metadata and the Amazon S3 location of the image files in an Apache Parquet file in Amazon S3, and define a table in the AWS Glue Data Catalog. Allow data analysts to use Amazon Athena to submit custom queries.

Answer: B

Explanation:
https://aws.amazon.com/blogs/machine-learning/automatically-extract-text-and-structured-data-from-documents
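
A rough sketch of the keyed approach: index each document's extracted metadata together with its S3 location in Elasticsearch, so analysts can search in Kibana and still download the original image. The endpoint, index name, and field names are assumptions; the plain REST API is used here, and request signing/authentication is omitted for brevity.

    import requests

    ES_ENDPOINT = "https://search-apps-xxxx.us-east-1.es.amazonaws.com"  # hypothetical domain

    doc = {
        "applicant_first_name": "Jane",
        "applicant_last_name": "Doe",
        "application_date": "2020-07-14",
        "application_type": "renewal",
        "application_text": "extracted form text goes here",
        "s3_location": "s3://scanned-apps/2020/07/form-000001.png",  # for download
    }

    # Index the metadata; the image itself stays in S3.
    requests.put(f"{ES_ENDPOINT}/applications/_doc/form-000001", json=doc).raise_for_status()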

NEW QUESTION 6
An ecommerce company stores customer purchase data in Amazon RDS. The company wants a solution to store and analyze historical data. The most recent 6 months of data will be queried frequently for analytics workloads. This data is several terabytes large. Once a month, historical data for the last 5 years must be accessible and will be joined with the more recent data. The company wants to optimize performance and cost.
Which storage solution will meet these requirements?

  • A. Create a read replica of the RDS database to store the most recent 6 months of data. Copy the historical data into Amazon S3. Create an AWS Glue Data Catalog of the data in Amazon S3 and Amazon RDS. Run historical queries using Amazon Athena.
  • B. Use an ETL tool to incrementally load the most recent 6 months of data into an Amazon Redshift cluster. Run more frequent queries against this cluster. Create a read replica of the RDS database to run queries on the historical data.
  • C. Incrementally copy data from Amazon RDS to Amazon S3. Create an AWS Glue Data Catalog of the data in Amazon S3. Use Amazon Athena to query the data.
  • D. Incrementally copy data from Amazon RDS to Amazon S3. Load and store the most recent 6 months of data in Amazon Redshift. Configure an Amazon Redshift Spectrum table to connect to all historical data.

Answer: D
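
To illustrate the keyed answer, a sketch using the Redshift Data API to join hot data held in Redshift with historical data exposed through a Redshift Spectrum external schema. The cluster, schema, and table names are assumptions.

    import boto3

    rsd = boto3.client("redshift-data")

    sql = """
    -- assumes an external schema 'spectrum_schema' already maps to the Glue catalog
    SELECT r.customer_id, r.order_total, h.order_total AS historical_total
    FROM   recent_purchases r                       -- last 6 months, local table
    JOIN   spectrum_schema.historical_purchases h   -- 5 years of data in S3
    ON     r.customer_id = h.customer_id;
    """

    rsd.execute_statement(
        ClusterIdentifier="analytics-cluster",   # hypothetical cluster
        Database="sales",
        DbUser="analyst",
        Sql=sql,
    )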

NEW QUESTION 7
A transport company wants to track vehicular movements by capturing geolocation records. The records are 10 B in size and up to 10,000 records are captured each second. Data transmission delays of a few minutes are acceptable, considering unreliable network conditions. The transport company decided to use Amazon Kinesis Data Streams to ingest the data. The company is looking for a reliable mechanism to send data to Kinesis Data Streams while maximizing the throughput efficiency of the Kinesis shards.
Which solution will meet the company’s requirements?

  • A. Kinesis Agent
  • B. Kinesis Producer Library (KPL)
  • C. Kinesis Data Firehose
  • D. Kinesis SDK

Answer: B
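
The KPL is a Java library, so there is no direct Python equivalent, but the throughput idea it implements — aggregating many tiny records into fewer, fuller shard writes — can be illustrated with a batched PutRecords call in boto3. The stream name and record shape are assumptions.

    import json
    import boto3

    kinesis = boto3.client("kinesis")

    def put_batch(records: list[dict]) -> None:
        # PutRecords accepts up to 500 records per call, which uses shard
        # throughput far better than one put_record per 10-byte reading.
        kinesis.put_records(
            StreamName="vehicle-locations",   # hypothetical stream
            Records=[
                {"Data": json.dumps(r).encode("utf-8"), "PartitionKey": r["vehicle_id"]}
                for r in records
            ],
        )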

NEW QUESTION 8
A financial company uses Apache Hive on Amazon EMR for ad-hoc queries. Users are complaining of sluggish performance.
A data analyst notes the following:
  • Approximately 90% of queries are submitted 1 hour after the market opens.
  • Hadoop Distributed File System (HDFS) utilization never exceeds 10%.
Which solution would help address the performance issues?

  • A. Create instance fleet configurations for core and task nodes. Create an automatic scaling policy to scale out the instance groups based on the Amazon CloudWatch CapacityRemainingGB metric. Create an automatic scaling policy to scale in the instance fleet based on the CloudWatch CapacityRemainingGB metric.
  • B. Create instance fleet configurations for core and task nodes. Create an automatic scaling policy to scale out the instance groups based on the Amazon CloudWatch YARNMemoryAvailablePercentage metric. Create an automatic scaling policy to scale in the instance fleet based on the CloudWatch YARNMemoryAvailablePercentage metric.
  • C. Create instance group configurations for core and task nodes. Create an automatic scaling policy to scale out the instance groups based on the Amazon CloudWatch CapacityRemainingGB metric. Create an automatic scaling policy to scale in the instance groups based on the CloudWatch CapacityRemainingGB metric.
  • D. Create instance group configurations for core and task nodes. Create an automatic scaling policy to scale out the instance groups based on the Amazon CloudWatch YARNMemoryAvailablePercentage metric. Create an automatic scaling policy to scale in the instance groups based on the CloudWatch YARNMemoryAvailablePercentage metric.

Answer: D

Explanation:
https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-instances-guidelines.html
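
A sketch of attaching the keyed scaling policy to an EMR instance group with boto3, scaling out when YARNMemoryAvailablePercentage drops. The cluster and instance-group IDs, thresholds, and capacity bounds are assumptions.

    import boto3

    emr = boto3.client("emr")

    emr.put_auto_scaling_policy(
        ClusterId="j-XXXXXXXXXXXXX",           # hypothetical cluster ID
        InstanceGroupId="ig-XXXXXXXXXXXX",     # core or task instance group
        AutoScalingPolicy={
            "Constraints": {"MinCapacity": 2, "MaxCapacity": 20},
            "Rules": [{
                "Name": "ScaleOutOnLowYarnMemory",
                "Action": {
                    "SimpleScalingPolicyConfiguration": {
                        "AdjustmentType": "CHANGE_IN_CAPACITY",
                        "ScalingAdjustment": 2,   # add 2 nodes per trigger
                        "CoolDown": 300,
                    },
                },
                "Trigger": {
                    "CloudWatchAlarmDefinition": {
                        "MetricName": "YARNMemoryAvailablePercentage",
                        "Namespace": "AWS/ElasticMapReduce",
                        "ComparisonOperator": "LESS_THAN",
                        "Threshold": 15.0,
                        "Period": 300,
                        "Statistic": "AVERAGE",
                        "Unit": "PERCENT",
                    },
                },
            }],
        },
    )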

NEW QUESTION 9
A marketing company is storing its campaign response data in Amazon S3. A consistent set of sources has generated the data for each campaign. The data is saved into Amazon S3 as .csv files. A business analyst will use Amazon Athena to analyze each campaign’s data. The company needs the cost of ongoing data analysis with Athena to be minimized.
Which combination of actions should a data analytics specialist take to meet these requirements? (Choose two.)

  • A. Convert the .csv files to Apache Parquet.
  • B. Convert the .csv files to Apache Avro.
  • C. Partition the data by campaign.
  • D. Partition the data by source.
  • E. Compress the .csv files.

Answer: AC

Explanation:
https://aws.amazon.com/blogs/big-data/top-10-performance-tuning-tips-for-amazon-athena/
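
Both keyed actions can be combined in a single Athena CTAS statement: write the campaign data out as Parquet, partitioned by campaign. A sketch with boto3; the database, table names, and output locations are assumptions.

    import boto3

    athena = boto3.client("athena")

    ctas = """
    -- assumes 'campaign' is the last column of the source table,
    -- since partition columns must come last in a CTAS SELECT
    CREATE TABLE campaign_responses_parquet
    WITH (
        format = 'PARQUET',
        external_location = 's3://marketing-analytics/parquet/',
        partitioned_by = ARRAY['campaign']
    ) AS
    SELECT * FROM campaign_responses_csv;
    """

    athena.start_query_execution(
        QueryString=ctas,
        QueryExecutionContext={"Database": "marketing"},   # hypothetical database
        ResultConfiguration={"OutputLocation": "s3://marketing-athena-results/"},
    )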

NEW QUESTION 10
An ecommerce company is migrating its business intelligence environment from on premises to the AWS Cloud. The company will use Amazon Redshift in a public subnet and Amazon QuickSight. The tables already are loaded into Amazon Redshift and can be accessed by a SQL tool.
The company starts QuickSight for the first time. During the creation of the data source, a data analytics specialist enters all the information and tries to validate the connection. An error with the following message occurs: “Creating a connection to your data source timed out.”
How should the data analytics specialist resolve this error?

  • A. Grant the SELECT permission on Amazon Redshift tables.
  • B. Add the QuickSight IP address range into the Amazon Redshift security group.
  • C. Create an IAM role for QuickSight to access Amazon Redshift.
  • D. Use a QuickSight admin user for creating the dataset.

Answer: A

Explanation:
Connection to the database times out
Your client connection to the database appears to hang or time out when running long queries, such as a COPY command. In this case, you might observe that the Amazon Redshift console displays that the query has completed, but the client tool itself still appears to be running the query. The results of the query might be missing or incomplete depending on when the connection stopped.

NEW QUESTION 11
A company uses Amazon Elasticsearch Service (Amazon ES) to store and analyze its website clickstream data. The company ingests 1 TB of data daily using Amazon Kinesis Data Firehose and stores one day’s worth of data in an Amazon ES cluster.
The company has very slow query performance on the Amazon ES index and occasionally sees errors from Kinesis Data Firehose when attempting to write to the index. The Amazon ES cluster has 10 nodes running a single index and 3 dedicated master nodes. Each data node has 1.5 TB of Amazon EBS storage attached and the cluster is configured with 1,000 shards. Occasionally, JVMMemoryPressure errors are found in the cluster logs.
Which solution will improve the performance of Amazon ES?

  • A. Increase the memory of the Amazon ES master nodes.
  • B. Decrease the number of Amazon ES data nodes.
  • C. Decrease the number of Amazon ES shards for the index.
  • D. Increase the number of Amazon ES shards for the index.

Answer: C

Explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/high-jvm-memory-pressure-elasticsearch/
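
Shard count is fixed at index creation, so the keyed fix is typically done by creating a new index with fewer primary shards and reindexing into it. A sketch against the Elasticsearch REST API; the endpoint and index names are assumptions, and authentication is omitted.

    import requests

    ES = "https://search-clickstream-xxxx.us-east-1.es.amazonaws.com"  # hypothetical

    # New index sized sensibly: ~1 TB/day across 10 data nodes needs far
    # fewer than 1,000 shards (tens of GB per shard is a common target).
    requests.put(f"{ES}/clickstream-v2", json={
        "settings": {"index": {"number_of_shards": 20, "number_of_replicas": 1}}
    }).raise_for_status()

    # Copy the data across, then point Kinesis Data Firehose at the new index.
    requests.post(f"{ES}/_reindex", json={
        "source": {"index": "clickstream"},
        "dest": {"index": "clickstream-v2"},
    }).raise_for_status()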

NEW QUESTION 12
A retail company leverages Amazon Athena for ad-hoc queries against an AWS Glue Data Catalog. The data analytics team manages the data catalog and data access for the company. The data analytics team wants to separate queries and manage the cost of running those queries by different workloads and teams. Ideally, the data analysts want to group the queries run by different users within a team, store the query results in individual Amazon S3 buckets specific to each team, and enforce cost constraints on the queries run against the Data Catalog.
Which solution meets these requirements?

  • A. Create IAM groups and resource tags for each team within the compan
  • B. Set up IAM policies that controluser access and actions on the Data Catalog resources.
  • C. Create Athena resource groups for each team within the company and assign users to these group
  • D. Add S3 bucket names and other query configurations to the properties list for the resource groups.
  • E. Create Athena workgroups for each team within the compan
  • F. Set up IAM workgroup policies that control user access and actions on the workgroup resources.
  • G. Create Athena query groups for each team within the company and assign users to the groups.

Answer: C

Explanation:
https://aws.amazon.com/about-aws/whats-new/2019/02/athena_workgroups/
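
A sketch of the keyed answer with boto3: one workgroup per team, each with its own result bucket and a per-query scan limit for cost control. The workgroup name, bucket, and limit are assumptions.

    import boto3

    athena = boto3.client("athena")

    athena.create_work_group(
        Name="marketing-team",                             # hypothetical team workgroup
        Configuration={
            "ResultConfiguration": {
                "OutputLocation": "s3://athena-results-marketing/",  # per-team bucket
            },
            "EnforceWorkGroupConfiguration": True,         # users cannot override settings
            "PublishCloudWatchMetricsEnabled": True,       # per-workgroup usage metrics
            "BytesScannedCutoffPerQuery": 10 * 1024**3,    # abort queries scanning >10 GiB
        },
    )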

NEW QUESTION 13
A company analyzes historical data and needs to query data that is stored in Amazon S3. New data is
generated daily as .csv files that are stored in Amazon S3. The company’s analysts are using Amazon Athena to perform SQL queries against a recent subset of the overall data. The amount of data that is ingested into Amazon S3 has increased substantially over time, and the query latency also has increased.
Which solutions could the company implement to improve query performance? (Choose two.)

  • A. Use MySQL Workbench on an Amazon EC2 instance, and connect to Athena by using a JDBC or ODBC connector. Run the query from MySQL Workbench instead of Athena directly.
  • B. Use Athena to extract the data and store it in Apache Parquet format on a daily basis. Query the extracted data.
  • C. Run a daily AWS Glue ETL job to convert the data files to Apache Parquet and to partition the converted files. Create a periodic AWS Glue crawler to automatically crawl the partitioned data on a daily basis.
  • D. Run a daily AWS Glue ETL job to compress the data files by using the .gzip format. Query the compressed data.
  • E. Run a daily AWS Glue ETL job to compress the data files by using the .lzo format. Query the compressed data.

Answer: BC
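
A trimmed sketch of the Glue ETL job in option C: read the raw .csv table and write it back as partitioned Parquet. The catalog names and partition keys are assumptions; this is Glue's PySpark dialect and runs only inside a Glue job.

    from awsglue.context import GlueContext
    from pyspark.context import SparkContext

    glue = GlueContext(SparkContext.getOrCreate())

    # Read the raw .csv table registered in the Data Catalog.
    raw = glue.create_dynamic_frame.from_catalog(
        database="analytics",              # hypothetical database
        table_name="events_csv",           # hypothetical table
    )

    # Write Parquet partitioned by date so Athena scans only what it needs.
    glue.write_dynamic_frame.from_options(
        frame=raw,
        connection_type="s3",
        connection_options={
            "path": "s3://analytics-curated/events/",
            "partitionKeys": ["year", "month", "day"],   # assumed columns
        },
        format="parquet",
    )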

NEW QUESTION 14
A company wants to use an automatic machine learning (ML) Random Cut Forest (RCF) algorithm to visualize complex real-world scenarios, such as detecting seasonality and trends, excluding outliers, and imputing missing values.
The team working on this project is non-technical and is looking for an out-of-the-box solution that will require the LEAST amount of management overhead.
Which solution will meet these requirements?

  • A. Use an AWS Glue ML transform to create a forecast and then use Amazon QuickSight to visualize the data.
  • B. Use Amazon QuickSight to visualize the data and then use ML-powered forecasting to forecast the key business metrics.
  • C. Use a pre-built ML AMI from the AWS Marketplace to create forecasts and then use Amazon QuickSight to visualize the data.
  • D. Use calculated fields to create a new forecast and then use Amazon QuickSight to visualize the data.

Answer: A

NEW QUESTION 15
A manufacturing company uses Amazon S3 to store its data. The company wants to use AWS Lake Formation to provide granular-level security on those data assets. The data is in Apache Parquet format. The company has set a deadline for a consultant to build a data lake.
How should the consultant create the MOST cost-effective solution that meets these requirements?

  • A. Run Lake Formation blueprints to move the data to Lake Formation. Once Lake Formation has the data, apply permissions on Lake Formation.
  • B. To create the data catalog, run an AWS Glue crawler on the existing Parquet data. Register the Amazon S3 path and then apply permissions through Lake Formation to provide granular-level security.
  • C. Install Apache Ranger on an Amazon EC2 instance and integrate with Amazon EMR. Using Ranger policies, create role-based access control for the existing data assets in Amazon S3.
  • D. Create multiple IAM roles for different users and groups. Assign IAM roles to different data assets in Amazon S3 to create table-based and column-based access controls.

Answer: A

Explanation:
https://aws.amazon.com/blogs/big-data/building-securing-and-managing-data-lakes-with-aws-lake-formation/
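
A sketch of the permission step in the keyed answer: once the blueprint (or a crawler) has populated the catalog, register the S3 location and grant granular access through Lake Formation. The bucket, principal, database, table, and column names are assumptions.

    import boto3

    lf = boto3.client("lakeformation")

    # Register the S3 location so Lake Formation can vend credentials for it.
    lf.register_resource(
        ResourceArn="arn:aws:s3:::manufacturing-data-lake",  # hypothetical bucket
        UseServiceLinkedRole=True,
    )

    # Column-level grant: the analyst role may SELECT only the listed columns.
    lf.grant_permissions(
        Principal={"DataLakePrincipalIdentifier": "arn:aws:iam::111122223333:role/analyst"},
        Resource={
            "TableWithColumns": {
                "DatabaseName": "production",
                "Name": "sensor_readings",
                "ColumnNames": ["line_id", "reading", "recorded_at"],
            },
        },
        Permissions=["SELECT"],
    )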

NEW QUESTION 16
A global pharmaceutical company receives test results for new drugs from various testing facilities worldwide. The results are sent in millions of 1 KB-sized JSON objects to an Amazon S3 bucket owned by the company. The data engineering team needs to process those files, convert them into Apache Parquet format, and load them into Amazon Redshift for data analysts to perform dashboard reporting. The engineering team uses AWS Glue to process the objects, AWS Step Functions for process orchestration, and Amazon CloudWatch for job scheduling.
More testing facilities were recently added, and the time to process files is increasing. What will MOST efficiently decrease the data processing time?

  • A. Use AWS Lambda to group the small files into larger files. Write the files back to Amazon S3. Process the files using AWS Glue and load them into Amazon Redshift tables.
  • B. Use the AWS Glue dynamic frame file grouping option while ingesting the raw input files. Process the files and load them into Amazon Redshift tables.
  • C. Use the Amazon Redshift COPY command to move the files from Amazon S3 into Amazon Redshift tables directly. Process the files in Amazon Redshift.
  • D. Use Amazon EMR instead of AWS Glue to group the small input files. Process the files in Amazon EMR and load them into Amazon Redshift tables.

Answer: A
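
A rough sketch of the keyed answer: a Lambda-style function that concatenates a batch of 1 KB JSON objects into one larger S3 object, so downstream Glue jobs open far fewer files. The bucket names and batching policy are assumptions.

    import boto3

    s3 = boto3.client("s3")

    def combine_objects(bucket: str, keys: list[str], dest_key: str) -> None:
        # Concatenate many small JSON objects into a single file.
        parts = []
        for key in keys:
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            parts.append(body.strip())
        s3.put_object(
            Bucket="drug-results-combined",   # hypothetical output bucket
            Key=dest_key,
            Body=b"\n".join(parts),           # JSON Lines: one object per line
        )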

NEW QUESTION 17
A medical company has a system with sensor devices that read metrics and send them in real time to an Amazon Kinesis data stream. The Kinesis data stream has multiple shards. The company needs to calculate the average value of a numeric metric every second and set an alarm for whenever the value is above one threshold or below another threshold. The alarm must be sent to Amazon Simple Notification Service (Amazon SNS) in less than 30 seconds.
Which architecture meets these requirements?

  • A. Use an Amazon Kinesis Data Firehose delivery stream to read the data from the Kinesis data stream with an AWS Lambda transformation function that calculates the average per second and sends the alarm to Amazon SNS.
  • B. Use an AWS Lambda function to read from the Kinesis data stream to calculate the average per second and send the alarm to Amazon SNS.
  • C. Use an Amazon Kinesis Data Firehose delivery stream to read the data from the Kinesis data stream and store it on Amazon S3. Have Amazon S3 trigger an AWS Lambda function that calculates the average per second and sends the alarm to Amazon SNS.
  • D. Use an Amazon Kinesis Data Analytics application to read from the Kinesis data stream and calculate the average per second. Send the results to an AWS Lambda function that sends the alarm to Amazon SNS.

Answer: D
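
A sketch of the tail end of the keyed design: the Lambda function that receives the per-second averages from the Kinesis Data Analytics application and publishes to SNS when a threshold is crossed. The topic ARN, thresholds, and output column name are assumptions.

    import base64
    import json
    import boto3

    sns = boto3.client("sns")
    TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:metric-alarms"  # hypothetical topic
    HIGH, LOW = 98.6, 95.0                                          # assumed thresholds

    def handler(event, context):
        # Kinesis Data Analytics delivers output records base64-encoded.
        for record in event["records"]:
            row = json.loads(base64.b64decode(record["data"]))
            avg = row["avg_metric_value"]                           # assumed column name
            if avg > HIGH or avg < LOW:
                sns.publish(
                    TopicArn=TOPIC_ARN,
                    Subject="Sensor metric out of range",
                    Message=json.dumps(row),
                )
        # Acknowledge every record so the analytics application does not retry.
        return {"records": [{"recordId": r["recordId"], "result": "Ok"}
                            for r in event["records"]]}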

NEW QUESTION 18
A retail company wants to use Amazon QuickSight to generate dashboards for web and in-store sales. A group of 50 business intelligence professionals will develop and use the dashboards. Once ready, the dashboards will be shared with a group of 1,000 users.
The sales data comes from different stores and is uploaded to Amazon S3 every 24 hours. The data is partitioned by year and month, and is stored in Apache Parquet format. The company is using the AWS Glue Data Catalog as its main data catalog and Amazon Athena for querying. The total size of the uncompressed data that the dashboards query from at any point is 200 GB.
Which configuration will provide the MOST cost-effective solution that meets these requirements?

  • A. Load the data into an Amazon Redshift cluster by using the COPY command. Configure 50 author users and 1,000 reader users. Use QuickSight Enterprise edition. Configure an Amazon Redshift data source with a direct query option.
  • B. Use QuickSight Standard edition. Configure 50 author users and 1,000 reader users. Configure an Athena data source with a direct query option.
  • C. Use QuickSight Enterprise edition. Configure 50 author users and 1,000 reader users. Configure an Athena data source and import the data into SPICE. Automatically refresh every 24 hours.
  • D. Use QuickSight Enterprise edition. Configure 1 administrator and 1,000 reader users. Configure an S3 data source and import the data into SPICE. Automatically refresh every 24 hours.

Answer: C
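
The SPICE refresh in the keyed answer can be scheduled from the QuickSight console; the same refresh can also be triggered through the API, for example from a daily scheduled Lambda. A sketch with boto3; the account and dataset IDs are assumptions.

    import time
    import boto3

    quicksight = boto3.client("quicksight")

    # Kick off a SPICE refresh for the dashboard dataset (run once per day).
    quicksight.create_ingestion(
        AwsAccountId="111122223333",                      # hypothetical account
        DataSetId="sales-dashboard-dataset",              # hypothetical dataset ID
        IngestionId=f"daily-refresh-{int(time.time())}",  # must be unique per run
    )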

NEW QUESTION 19
......

100% Valid and Newest Version DAS-C01 Questions & Answers shared by Dumps-files.com, Get Full Dumps HERE: https://www.dumps-files.com/files/DAS-C01/ (New 130 Q&As)