What the Free BDS-C00 Dumps Are
It is faster and easier to pass the Amazon Web Services BDS-C00 exam by using real AWS Certified Big Data - Specialty questions and answers. Get immediate access to the up-to-date BDS-C00 exam, find the same core-area BDS-C00 questions with professionally verified answers, and pass your exam with a high score.
Free BDS-C00 dump questions are also provided for you below:
NEW QUESTION 1
An organization has configured a VPC with an Internet Gateway (IGW), pairs of public and private subnets (each with one subnet per Availability Zone), and an Elastic Load Balancer (ELB) configured to use the public subnets. The application's web tier leverages the ELB, Auto Scaling, and a Multi-AZ RDS database instance. The organization would like to eliminate any potential single points of failure in this design.
What step should you take to achieve this organization's objective?
- A. Nothing, there are no single points of failure in this architecture.
- B. Create and attach a second IGW to provide redundant internet connectivity.
- C. Create and configure a second Elastic Load Balancer to provide a redundant load balancer.
- D. Create a second multi-AZ RDS instance in another Availability Zone and configure replication to provide a redundant database.
Answer: A
NEW QUESTION 2
An Amazon EMR cluster using EMRFS has access to megabytes of data on Amazon S3, originating
from multiple unique data sources. The customer needs to query common fields across some of the data sets to be able to perform interactive joins and then display results quickly.
Which technology is most appropriate to enable this capability?
- A. Presto
- B. MicroStrategy
- C. Pig
- D. R Studio
Answer: A
NEW QUESTION 3
The project you are working on currently uses a single AWS CloudFormation template to deploy its
AWS infrastructure, which supports a multi-tier web application. You have been tasked with organizing the AWS CloudFormation resources so that they can be maintained in the future, and so that different departments such as Networking and Security can review the architecture before it goes to Production.
How should you do this in a way that accommodates each department, using their existing workflows?
- A. Organize the AWS CloudFormation template so that related resources are next to each other in the template, such as VPC subnets and routing rules for Networking and Security groups and IAM information for Security
- B. Separate the AWS CloudFormation template into a nested structure that has individual templates for the resources that are to be governed by different departments, and use the outputs from the networking and security stacks for the application template that you control
- C. Organize the AWS CloudFormation template so that related resources are next to each other in the template for each department’s use, leverage your existing continuous integration tool to constantly deploy changes from all parties to the Production environment, and then run tests for validation
- D. Use a custom application and the AWS SDK to replicate the resources defined in the current AWS CloudFormation template, and use the existing code review system to allow other departments to approve changes before altering the application for future deployments
Answer: B
NEW QUESTION 4
A Redshift data warehouse has different user teams that need to query the same table with very
different query types. These user teams are experiencing poor performance. Which action improves performance for the user teams in this situation?
- A. Create custom table views
- B. Add interleaved sort keys per team
- C. Maintain team-specific copies of the table
- D. Add support for workload management queue hopping
Answer: B
NEW QUESTION 5
You have an application running on an Amazon Elastic Compute Cloud (EC2) instance that uploads 5 GB video objects to Amazon Simple Storage Service (S3). Video uploads are taking longer than expected, resulting in poor application performance. Which method will help improve the performance of your application?
- A. Enable enhanced networking
- B. Use Amazon S3 multipart upload
- C. Leveraging Amazon CloudFront, use the HTTP POST method to reduce latency.
- D. Use Amazon Elastic Block Store Provisioned IOPs and use an Amazon EBS-optimized instance
Answer: B
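Multipart upload (option B) splits a large object into parts that can be uploaded in parallel and retried individually. As an illustrative sketch (the helper function and the 100 MiB part size are assumptions, not the S3 API itself), the part layout for a 5 GiB object can be planned like this:

```python
import math

# Per the S3 multipart upload limits: each part must be 5 MiB - 5 GiB
# (except the last), with at most 10,000 parts per upload.
MIN_PART = 5 * 1024 ** 2

def plan_parts(object_size: int, part_size: int = 100 * 1024 ** 2):
    """Return (part_number, offset, length) tuples for a multipart upload."""
    if part_size < MIN_PART:
        raise ValueError("part size below the 5 MiB S3 minimum")
    parts = []
    for i in range(math.ceil(object_size / part_size)):
        offset = i * part_size
        parts.append((i + 1, offset, min(part_size, object_size - offset)))
    return parts

# A 5 GiB video split into 100 MiB parts -> 52 parts, the last one smaller.
parts = plan_parts(5 * 1024 ** 3)
```

Each tuple would then drive one UploadPart call, so a slow or failed part is retried alone instead of restarting the whole 5 GB transfer.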
NEW QUESTION 6
You have been tasked with implementing an automated data backup solution for your application
servers that run on Amazon EC2 with Amazon EBS volumes. You want to use a distributed data store for your backups to avoid single points of failure and to increase the durability of the data. Daily backups should be retained for 30 days so that you can restore data within an hour.
How can you implement this through a script that a scheduling daemon runs daily on the application servers?
- A. Write the script to call the ec2-create-volume API, tag the Amazon EBS volume with the current date-time group, and copy backup data to a second Amazon EBS volume.
- B. Use the ec2-describe-volumes API to enumerate existing backup volumes.
- C. Call the ec2-delete-volume API to prune backup volumes that are tagged with a date-time group older than 30 days.
- D. Write the script to call the Amazon Glacier upload-archive API, and tag the backup archive with the current date-time group.
- E. Use the list-vaults API to enumerate existing backup archives.
- F. Call the delete-vault API to prune backup archives that are tagged with a date-time group older than 30 days.
- G. Write the script to call the ec2-create-snapshot API, and tag the Amazon EBS snapshot with the current date-time group.
- H. Use the ec2-describe-snapshots API to enumerate existing Amazon EBS snapshots.
- I. Call the ec2-delete-snapshot API to prune Amazon EBS snapshots that are tagged with a date-time group older than 30 days.
- J. Write the script to call the ec2-create-volume API, tag the Amazon EBS volume with the current date-time group, and use the ec2-copy-snapshot API to back up data to the new Amazon EBS volume.
- K. Use the ec2-describe-snapshots API to enumerate existing backup volumes.
- L. Call the ec2-delete-snapshot API to prune backup Amazon EBS volumes that are tagged with a date-time group older than 30 days.
Answer: C
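The snapshot-tag-prune pattern above reduces to a small piece of date arithmetic once the snapshots and their date-time-group tags have been enumerated. A hedged sketch of the pruning step; the tag format and helper names are illustrative assumptions, not anything AWS-defined:

```python
from datetime import datetime, timedelta

# Assumed date-time-group tag format applied at snapshot creation time.
TAG_FORMAT = "%Y-%m-%dT%H:%M:%S"

def snapshots_to_prune(tagged_snapshots, now, retention_days=30):
    """Given (snapshot_id, date_time_tag) pairs (as enumerated by
    ec2-describe-snapshots), return the IDs older than the retention
    window -- the candidates for ec2-delete-snapshot."""
    cutoff = now - timedelta(days=retention_days)
    return [sid for sid, tag in tagged_snapshots
            if datetime.strptime(tag, TAG_FORMAT) < cutoff]

now = datetime(2024, 2, 1)
snaps = [("snap-old", "2023-12-15T02:00:00"),
         ("snap-new", "2024-01-20T02:00:00")]
stale = snapshots_to_prune(snaps, now)  # only "snap-old" falls outside 30 days
```

The daily daemon would run the create step, then this filter, then the delete calls, so the backup window always holds roughly 30 snapshots.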
NEW QUESTION 7
You have identified network throughput as a bottleneck on your m1.small EC2 instance when uploading data into Amazon S3 in the same region. How do you remedy this situation?
- A. Add an additional ENI
- B. Change to a larger instance
- C. Use DirectConnect between EC2 and S3
- D. Use EBS PIOPS on the local volume
Answer: B
NEW QUESTION 8
You have started a new job and are reviewing your company's infrastructure on AWS. You notice one web application where they have an Elastic Load Balancer (ELB) in front of web instances in an Auto Scaling group. When you check the metrics for the ELB in CloudWatch, you see four healthy instances in Availability Zone (AZ) A and zero in AZ B. There are zero unhealthy instances.
What do you need to fix to balance the instances across AZs?
- A. Set the ELB to only be attached to another AZ
- B. Make sure Auto Scaling is configured to launch in both AZs
- C. Make sure your AMI is available in both AZs
- D. Make sure the maximum size of the Auto Scaling Group is greater than 4
Answer: B
NEW QUESTION 9
Your application uses CloudFormation to orchestrate your application's resources. During your testing phase, before the application went live, your Amazon RDS instance type was changed, which caused the instance to be re-created and resulted in the loss of test data.
How should you prevent this from occurring in the future?
- A. Within the AWS CloudFormation parameter with which users can select the Amazon RDS instance type, set AllowedValues to only contain the current instance type.
- B. Use an AWS CloudFormation stack policy to deny updates to the instance.
- C. Only allow UpdateStack permission to IAM principals that are denied SetStackPolicy.
- D. In the AWS CloudFormation template, set the AWS::RDS::DBInstance's DBInstanceClass property to be read-only.
- E. Subscribe to the AWS CloudFormation notification "BeforeResourceUpdate" and call CancelStackUpdate if the resource identified is the Amazon RDS instance.
- F. In the AWS CloudFormation template, set the AWS::RDS::DBInstance's DeletionPolicy property to "Retain".
Answer: E
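For reference, the stack-policy approach mentioned in options B and C is a JSON document attached to the stack. A sketch of one that blocks updates that would replace or delete the database resource; the logical ID `Database` is a hypothetical placeholder, not something from the question:

```python
import json

# Stack policy sketch: allow all updates except ones that would replace
# or delete the RDS instance (logical ID "Database" is hypothetical).
stack_policy = {
    "Statement": [
        {"Effect": "Allow", "Action": "Update:*",
         "Principal": "*", "Resource": "*"},
        {"Effect": "Deny",
         "Action": ["Update:Replace", "Update:Delete"],
         "Principal": "*",
         "Resource": "LogicalResourceId/Database"},
    ]
}
policy_json = json.dumps(stack_policy, indent=2)
```

This document would be passed via SetStackPolicy (or at stack creation), so a template change that requires re-creating the instance fails the update instead of silently destroying data.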
NEW QUESTION 10
You currently run your infrastructure on Amazon EC2 instances behind an Auto Scaling group. All logs for your application are currently written to ephemeral storage. Recently your company experienced a major bug in its code that made it through testing and was ultimately deployed to your fleet. This bug triggered your Auto Scaling group to scale up and back down before you could retrieve the logs off your servers to help troubleshoot the bug.
Which technique should you use to make sure you are able to review your logs after your instances have shut down?
- A. Configure the ephemeral policies on your Auto Scaling group to back up on terminate
- B. Configure your Auto Scaling policies to create a snapshot of all ephemeral storage on terminate
- C. Install the CloudWatch logs Agent on your AMI, and configure CloudWatch Logs Agent to stream your logs
- D. Install the CloudWatch monitoring agent on your AMI, and set up a new SNS alert for CloudWatch metrics that triggers the CloudWatch monitoring agent to backup all logs on the ephemeral drive
- E. Install the CloudWatch Logs Agent on your AMI.
- F. Update your Scaling policy to enable automated CloudWatch Log copy
Answer: C
NEW QUESTION 11
A user has launched an EC2 instance from an instance store backed AMI. The infrastructure team
wants to create an AMI from the running instance. Which of the below mentioned steps will not be performed while creating the AMI?
- A. Define the AMI launch permissions
- B. Upload the bundled volume
- C. Register the AMI
- D. Bundle the volume
Answer: A
NEW QUESTION 12
You have a web application that is currently running on a collection of micro instance types in a single AZ behind a single load balancer. You have an Auto Scaling group configured to scale from 2 to 64 instances. When reviewing your CloudWatch metrics, you see that sometimes your Auto Scaling group is running 64 micro instances. The web application is reading and writing to a DynamoDB backend configured with 800 Write Capacity units and 800 Read Capacity units. Your customers are complaining that they are experiencing long load times when viewing your website. You have investigated the DynamoDB CloudWatch metrics; you are under the provisioned Read and Write Capacity units and there is no throttling.
How do you scale your service to improve the load times and ensure the principles of high availability?
- A. Change your Auto Scaling group configuration to include multiple AZs
- B. Change your Auto Scaling group configuration to include multiple AZs, and increase the number of Read Capacity units in your DynamoDB table by a factor of three, because you will need to be calling DynamoDB from three AZs
- C. Add a second load balancer to your Auto Scaling group so that you can support more inbound connections per second
- D. Change your Auto Scaling group configuration to use larger instances and include multiple AZs instead of one
Answer: D
NEW QUESTION 13
You have been tasked with deploying a solution for your company that will store images, which the marketing department will use for its campaigns. Employees are able to upload images via a web interface, and once uploaded, each image must be resized and watermarked with the company logo. Image resizing and watermarking are not time-sensitive and can be completed days after upload if required.
How should you design this solution in the most highly available and cost-effective way?
- A. Configure your web application to upload images to the Amazon Elastic Transcoder service.
- B. Use the Amazon Elastic Transcoder watermark feature to add the company logo as a watermark on your images and then upload the final image into an Amazon S3 bucket.
- C. Configure your web application to upload images to Amazon S3, and send the Amazon S3 bucket URI to an Amazon SQS queue.
- D. Create an Auto Scaling group and configure it to use Spot instances, specifying a price you are willing to pay.
- E. Configure the instances in this Auto Scaling group to poll the SQS queue for new images and then resize and watermark the image before uploading the final images into Amazon S3.
- F. Configure your web application to upload images to Amazon S3, and send the S3 object URI to an Amazon SQS queue.
- G. Create an Auto Scaling launch configuration that uses Spot instances, specifying a price you are willing to pay.
- H. Configure the instances in this Auto Scaling group to poll the Amazon SQS queue for new images and then resize and watermark the image before uploading the new images into Amazon S3 and deleting the message from the Amazon SQS queue.
- I. Configure your web application to upload images to the local storage of the web server.
- J. Create a cron job to execute a script daily that scans this directory for new files and then uses the Amazon EC2 Service API to launch 10 new Amazon EC2 instances, which will resize and watermark the images daily.
Answer: C
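The decoupled pattern described in the options (upload to S3, enqueue the object URI, have Spot workers poll, process, then delete the message) can be sketched with an in-memory queue standing in for Amazon SQS. Every name here is illustrative; with real SQS, the receive and delete stand-ins would be API calls:

```python
import queue

def process_image(uri: str) -> str:
    """Placeholder for the real resize-and-watermark step."""
    return uri.replace("uploads/", "processed/")

def run_worker(q: "queue.Queue[str]") -> list:
    """Poll-process-delete loop: a message is only removed from the queue
    after processing succeeds, so a crashed worker loses no work (with
    SQS, the message would reappear after its visibility timeout)."""
    done = []
    while True:
        try:
            uri = q.get_nowait()          # stands in for sqs.receive_message
        except queue.Empty:
            return done
        done.append(process_image(uri))   # resize + watermark, then store in S3
        q.task_done()                     # stands in for sqs.delete_message

q = queue.Queue()
q.put("s3://bucket/uploads/a.png")
q.put("s3://bucket/uploads/b.png")
results = run_worker(q)
```

Deleting the message only after the work finishes is what makes cheap, interruptible Spot instances safe for this non-time-sensitive workload.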
NEW QUESTION 14
A media advertising company handles a large number of real-time messages sourced from over 200
websites. The company’s data engineer needs to collect and process records in real time for analysis using Spark Streaming on Amazon Elastic MapReduce (EMR). The data engineer needs to fulfill a corporate mandate to keep ALL raw messages as they are received as a top priority.
Which Amazon Kinesis configuration meets these requirements?
- A. Publish messages to Amazon Kinesis Firehose backed by Amazon Simple Storage Service (S3). Pull messages off Firehose with Spark Streaming in parallel to persistence to Amazon S3.
- B. Publish messages to Amazon Kinesis Streams.
- C. Pull messages off Streams with Spark Streaming in parallel to AWS Lambda pushing messages from Streams to Firehose backed by Amazon Simple Storage Service (S3).
- D. Publish messages to Amazon Kinesis Firehose backed by Amazon Simple Storage Service (S3). Use AWS Lambda to pull messages from Firehose to Streams for processing with Spark Streaming.
- E. Publish messages to Amazon Kinesis Streams, pull messages off with Spark Streaming, and write the data to Amazon Simple Storage Service (S3) before and after processing.
Answer: D
NEW QUESTION 15
An organization uses a custom MapReduce application to build monthly reports based on many small data files in an Amazon S3 bucket. The data is submitted from various business units on a frequent but unpredictable schedule. As the dataset continues to grow, it becomes increasingly difficult to process all of the data in one day. The organization has scaled up its Amazon EMR cluster, but other optimizations could improve performance.
The organization needs to improve performance with minimal changes to existing processes and applications.
What action should the organization take?
- A. Use Amazon S3 Event Notifications and AWS Lambda to create a quick search file index in DynamoDB.
- B. Add Spark to the Amazon EMR cluster and utilize Resilient Distributed Datasets in-memory.
- C. Use Amazon S3 Event Notifications and AWS Lambda to index each file into an Amazon Elasticsearch Service cluster.
- D. Schedule a daily AWS Data Pipeline process that aggregates content into larger files using S3DistCp.
- E. Have business units submit data via Amazon Kinesis Firehose to aggregate data hourly into Amazon S3.
Answer: A
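Option D's S3DistCp aggregation matters because, in a MapReduce job, each small file typically becomes at least one input split (and thus one task), so per-task overhead dominates. A toy estimate of split counts before and after aggregating the same data into larger files; the file sizes and the 128 MiB block size are illustrative assumptions:

```python
import math

def input_splits(file_sizes, block_size=128 * 1024 ** 2):
    """Rough split count: every file costs at least one split,
    and larger files are divided on block-size boundaries."""
    return sum(max(1, math.ceil(s / block_size)) for s in file_sizes)

small_files = [2 * 1024 ** 2] * 10_000   # 10,000 files of 2 MiB each
aggregated = [1024 ** 3] * 20            # roughly the same data in 1 GiB files
before = input_splits(small_files)       # one task per tiny file
after = input_splits(aggregated)         # far fewer, fuller tasks
```

Going from thousands of near-empty tasks to a few hundred full ones is the performance win, with no change to the MapReduce application itself.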
NEW QUESTION 16
A customer has a machine learning workflow that consists of multiple quick cycles of reads-writes-reads on Amazon S3. The customer needs to run the workflow on EMR but is concerned that the reads in subsequent cycles will miss new data critical to the machine learning from the prior cycles.
How should the customer accomplish this?
- A. Turn on EMRFS consistent view when configuring the EMR cluster
- B. Use AWS Data Pipeline to orchestrate the data processing cycles
- C. Set Hadoop.data.consistency = true in the core-site.xml file
- D. Set Hadoop.s3.consistency = true in the core-site.xml file
Answer: B
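For reference, the EMRFS consistent view named in option A is enabled through the `emrfs-site` configuration classification rather than any `Hadoop.*` property in core-site.xml. A sketch of that configuration object as it would be passed when creating the cluster (the surrounding Python is just for building the JSON):

```python
import json

# EMR configuration enabling EMRFS consistent view; "fs.s3.consistent"
# is the documented emrfs-site property.
emr_configuration = [{
    "Classification": "emrfs-site",
    "Properties": {"fs.s3.consistent": "true"},
}]
payload = json.dumps(emr_configuration)
```

This JSON would go in the Configurations field of a RunJobFlow / create-cluster request, so EMRFS tracks S3 object metadata and can detect reads that miss recently written data.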
NEW QUESTION 17
An administrator needs to design a distribution strategy for a star schema in a Redshift cluster. The
administrator needs to determine the optimal distribution style for the tables in the Redshift schema. In which three circumstances would choosing Key-based distribution be most appropriate? (Select three)
- A. When the administrator needs to optimize a large, slowly changing dimension table
- B. When the administrator needs to reduce cross-node traffic
- C. When the administrator needs to optimize the fact table for parity with the number of slices
- D. When the administrator needs to balance data distribution and collocation of data
- E. When the administrator needs to take advantage of data locality on a local node of joins and aggregates
Answer: ADE
NEW QUESTION 18
An Amazon Kinesis stream needs to be encrypted. Which approach should be used to accomplish this task?
- A. Perform a client-side encryption of the data before it enters the Amazon Kinesis stream on the producer
- B. Use a partition key to segment the data by MD5 hash functions, which makes it indecipherable while in transit
- C. Perform a client-side encryption of the data before it enters the Amazon Kinesis stream on the consumer
- D. Use a shard to segment the data which has built-in functionality to make it indecipherable while in transit
Answer: B
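It is worth noting what MD5-hashed partition keys (option B) actually do in Kinesis: they route each record to a shard, and the record data itself is untouched. A sketch of that routing, assuming shards evenly split the 128-bit hash-key space:

```python
import hashlib

HASH_SPACE = 2 ** 128  # the hash-key space an MD5 digest covers

def shard_for(partition_key: str, shard_count: int) -> int:
    """Map a partition key to a shard index the way Kinesis does:
    MD5 the key, then find which shard's hash-key range contains it.
    Assumes the shards evenly divide the 128-bit space."""
    key_hash = int.from_bytes(hashlib.md5(partition_key.encode()).digest(), "big")
    return key_hash * shard_count // HASH_SPACE

# The same key always lands on the same shard; nothing about the payload
# is hidden, which is why hashing provides no confidentiality.
idx = shard_for("user-42", 4)
```

Actual confidentiality requires encrypting the payload itself on the producer before PutRecord, since the hash only decides placement.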
NEW QUESTION 19
A user has provisioned 2,000 IOPS for an EBS volume. The application hosted on that EBS volume is experiencing fewer IOPS than provisioned. Which of the below-mentioned options does not affect the IOPS of the volume?
- A. The application does not have enough IO for the volume
- B. The instance is EBS optimized
- C. The EC2 instance has 10 Gigabit Network connectivity
- D. The volume size is too large
Answer: D
NEW QUESTION 20
You want to securely distribute credentials for your Amazon RDS instance to your fleet of web server
instances. The credentials are stored in a file that is controlled by a configuration management system.
How do you securely deploy the credentials in an automated manner across the fleet of web server instances, which can number in the hundreds, while retaining the ability to roll back if needed?
- A. Store your credential files in an Amazon S3 bucket.
- B. Use Amazon S3 server-side encryption on the credential files.
- C. Have a scheduled job that pulls down the credential files into the instances every 10 minutes.
- D. Store the credential files in your version-controlled repository with the rest of your code.
- E. Have a post-commit action in version control that kicks off a job in your continuous integration system which securely copies the new credential files to all web server instances.
- F. Insert credential files into user data and use an instance lifecycle policy to periodically refresh the files from the user data.
- G. Keep credential files as a binary blob in an Amazon RDS MySQL DB instance, and have a script on each Amazon EC2 instance that pulls the files down from the RDS instance.
- H. Store the credential files in your version-controlled repository with the rest of your code.
- I. Use a parallel file copy program to send the credential files from your local machine to the Amazon EC2 instances.
Answer: D
NEW QUESTION 21
A media advertising company handles a large number of real-time messages sourced from over 200 websites in real time. Processing latency must be kept low. Based on calculations, a 60-shard Amazon Kinesis stream is more than sufficient to handle the maximum data throughput, even with traffic spikes. The company also uses an Amazon Kinesis Client Library (KCL) application running on Amazon Elastic Compute Cloud (EC2) instances managed by an Auto Scaling group. Amazon CloudWatch indicates an average of 25% CPU and a modest level of network traffic across all running servers.
The company reports a 150% to 200% increase in latency of processing messages from Amazon Kinesis during peak times. There are NO reports of delay from the sites publishing to Amazon Kinesis. What is the appropriate solution to address the latency?
- A. Increase the number of shards in the Amazon Kinesis stream to 80 for greater concurrency
- B. Increase the size of the Amazon EC2 instances to increase network throughput
- C. Increase the minimum number of instances in the Auto Scaling group
- D. Increase Amazon DynamoDB throughput on the checkpointing table
Answer: A
NEW QUESTION 22
......
Recommend!! Get the full BDS-C00 dumps in VCE and PDF from Certshared. Welcome to download: https://www.certshared.com/exam/BDS-C00/ (New 264 Q&As Version)