High Quality AWS-Certified-DevOps-Engineer-Professional Testing Engine 2021
Your success in the Amazon AWS-Certified-DevOps-Engineer-Professional exam is our sole target, and we develop all our AWS-Certified-DevOps-Engineer-Professional braindumps in a way that facilitates the attainment of this target. Not only is our AWS-Certified-DevOps-Engineer-Professional study material the best you can find, it is also the most detailed and the most up to date. Our AWS-Certified-DevOps-Engineer-Professional Practice Exams for the Amazon AWS-Certified-DevOps-Engineer-Professional exam are written to the highest standards of technical accuracy.
Free demo questions for Amazon AWS-Certified-DevOps-Engineer-Professional Exam Dumps Below:
NEW QUESTION 1
You are getting a lot of empty receive requests when using Amazon SQS. This is making a lot of unnecessary network load on your instances. What can you do to reduce this load?
- A. Subscribe your queue to an SNS topic instead.
- B. Use as long of a poll as possible, instead of short polls.
- C. Alter your visibility timeout to be shorter.
- D. Use <code>sqsd</code> on your EC2 instance.
Answer: B
Explanation:
One benefit of long polling with Amazon SQS is the reduction of the number of empty responses, when there are no messages available to return, in reply to a ReceiveMessage request sent to an Amazon SQS queue. Long polling allows the Amazon SQS service to wait until a message is available in the queue before sending a response.
Reference:
http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-long-polling.html
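As a minimal sketch of the fix (the queue URL is a placeholder), switching to long polling with boto3 just means setting `WaitTimeSeconds` on the receive call:

```python
# Long-polling parameters: wait up to 20 seconds (the maximum) for a
# message to arrive instead of returning an empty response immediately.
LONG_POLL_PARAMS = {
    "MaxNumberOfMessages": 10,
    "WaitTimeSeconds": 20,  # any value > 0 turns short polling into long polling
}

def receive_jobs(queue_url):
    """Receive messages using long polling; far fewer empty responses."""
    import boto3  # deferred so the module imports without boto3 installed
    sqs = boto3.client("sqs")
    resp = sqs.receive_message(QueueUrl=queue_url, **LONG_POLL_PARAMS)
    return resp.get("Messages", [])
```

You can also make long polling the queue default by setting the `ReceiveMessageWaitTimeSeconds` queue attribute instead of passing it per call.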
NEW QUESTION 2
For AWS Auto Scaling, what is the first transition state a new instance enters after leaving steady state when scaling out due to increased load?
- A. EnteringStandby
- B. Pending
- C. Terminating:Wait
- D. Detaching
Answer: B
Explanation:
When a scale out event occurs, the Auto Scaling group launches the required number of EC2 instances, using its assigned launch configuration. These instances start in the Pending state. If you add a lifecycle hook to your Auto Scaling group, you can perform a custom action here. For more information, see Lifecycle Hooks.
Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AutoScalingGroupLifecycle.html
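To make the lifecycle-hook point concrete, here is a sketch of registering a launch hook (the group and hook names are hypothetical). The hook holds each new instance in the `Pending:Wait` state so you can run custom setup before it enters service:

```python
# Hypothetical names (my-asg, my-launch-hook); a sketch of a lifecycle hook
# that pauses instances in Pending:Wait during scale-out for custom actions.
LAUNCH_HOOK = {
    "LifecycleHookName": "my-launch-hook",
    "AutoScalingGroupName": "my-asg",
    "LifecycleTransition": "autoscaling:EC2_INSTANCE_LAUNCHING",
    "HeartbeatTimeout": 300,      # seconds the instance waits in Pending:Wait
    "DefaultResult": "CONTINUE",  # proceed to InService if nobody completes the hook
}

def register_hook():
    import boto3  # deferred so the module imports without boto3 installed
    boto3.client("autoscaling").put_lifecycle_hook(**LAUNCH_HOOK)
```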
NEW QUESTION 3
When thinking of AWS OpsWorks, which of the following is true?
- A. Stacks have many layers, layers have many instances.
- B. Instances have many stacks, stacks have many layers.
- C. Layers have many stacks, stacks have many instances.
- D. Layers have many instances, instances have many stacks.
Answer: A
Explanation:
The stack is the core AWS OpsWorks component. It is basically a container for AWS resources—Amazon EC2 instances, Amazon RDS database instances, and so on—that have a common purpose and should be logically managed together. You define the stack's constituents by adding one or more layers. A layer represents a set of Amazon EC2 instances that serve a particular purpose, such as serving applications or hosting a database server. An instance represents a single computing resource, such as an Amazon EC2 instance.
Reference: http://docs.aws.amazon.com/opsworks/latest/userguide/welcome.html
NEW QUESTION 4
You need to deploy a new application version to production. Because the deployment is high-risk, you need to roll the new version out to users over a number of hours, to make sure everything is working correctly. You need to be able to control the proportion of users seeing the new version of the application down to the percentage point.
You use ELB and EC2 with Auto Scaling Groups and custom AMIs with your code pre-installed assigned to Launch Configurations. There are no database-level changes during your deployment. You have been told you cannot spend too much money, so you must not increase the number of EC2 instances much at all during the deployment, but you also need to be able to switch back to the original version of code quickly if something goes wrong. What is the best way to meet these requirements?
- A. Create a second ELB, Auto Scaling Launch Configuration, and Auto Scaling Group using the Launch Configuration. Create AMIs with all code pre-installed. Assign the new AMI to the second Auto Scaling Launch Configuration. Use Route53 Weighted Round Robin Records to adjust the proportion of traffic hitting the two ELBs.
- B. Use the Blue-Green deployment method to enable the fastest possible rollback if needed. Create a full second stack of instances and cut the DNS over to the new stack of instances, and change the DNS back if a rollback is needed.
- C. Create AMIs with all code pre-installed. Assign the new AMI to the Auto Scaling Launch Configuration, to replace the old one. Gradually terminate instances running the old code (launched with the old Launch Configuration) and allow the new AMIs to boot to adjust the traffic balance to the new code. On rollback, reverse the process by doing the same thing, but changing the AMI on the Launch Configuration back to the original code.
- D. Migrate to AWS Elastic Beanstalk. Use the established and well-tested Rolling Deployment setting AWS provides on the new Application Environment, publishing a zip bundle of the new code and adjusting the wait period to spread the deployment over time. Re-deploy the old code bundle to roll back if needed.
Answer: A
Explanation:
Only Weighted Round Robin DNS Records and reverse proxies allow such fine-grained tuning of traffic splits. The Blue-Green option does not meet the requirement that we mitigate costs and keep the overall EC2 fleet size consistent, so we must select the two-ELB and ASG option with WRR DNS tuning. This method is called an A/B deployment and/or Canary deployment.
Reference: https://d0.awsstatic.com/whitepapers/overview-of-deployment-options-on-aws.pdf
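As a sketch of the weighted DNS tuning (the domain, record type, and ELB DNS names are placeholders), the change batch below creates two weighted record sets whose weights sum to 100, so each weight point corresponds to one percentage point of traffic:

```python
# Sketch: two weighted record sets, one per ELB. Because the weights sum
# to 100, shifting one weight point moves ~1% of DNS lookups.
def weighted_change_batch(old_elb_dns, new_elb_dns, new_pct):
    """Route new_pct percent of lookups to the new ELB."""
    def record(set_id, dns_name, weight):
        return {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com.",   # placeholder zone apex record
                "Type": "CNAME",
                "SetIdentifier": set_id,      # distinguishes the weighted pair
                "Weight": weight,
                "TTL": 60,                    # short TTL so shifts take effect quickly
                "ResourceRecords": [{"Value": dns_name}],
            },
        }
    return {"Changes": [
        record("old", old_elb_dns, 100 - new_pct),
        record("new", new_elb_dns, new_pct),
    ]}
```

The resulting batch would be passed to the Route53 `ChangeResourceRecordSets` API; ramping the deployment is then just repeated calls with a growing `new_pct`.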
NEW QUESTION 5
You need to grant a vendor access to your AWS account. They need to be able to read protected messages in a private S3 bucket at their leisure. They also use AWS. What is the best way to accomplish this?
- A. Create an IAM User with API Access Keys. Grant the User permissions to access the bucket. Give the vendor the AWS Access Key ID and AWS Secret Access Key for the User.
- B. Create an EC2 Instance Profile on your account. Grant the associated IAM role full access to the bucket. Start an EC2 instance with this Profile and give SSH access to the instance to the vendor.
- C. Create a cross-account IAM Role with permission to access the bucket, and grant permission to use the Role to the vendor AWS account.
- D. Generate a signed S3 GET URL and a signed S3 PUT URL, both with wildcard values and 2-year durations. Pass the URLs to the vendor.
Answer: C
Explanation:
When third parties require access to your organization's AWS resources, you can use roles to delegate access to them. For example, a third party might provide a service for managing your AWS resources. With IAM roles, you can grant these third parties access to your AWS resources without sharing your AWS security credentials. Instead, the third party can access your AWS resources by assuming a role that you create in your AWS account.
Reference:
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_third-party.html
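As a sketch under stated assumptions (the vendor account ID and role name below are hypothetical), the cross-account pattern has two halves: a trust policy in your account naming the vendor account as the allowed principal, and an STS `AssumeRole` call made by the vendor:

```python
import json

# Hypothetical vendor account ID. The trust policy is attached to the role
# in YOUR account; it says who may assume the role, while a separate
# permissions policy says what the role can do (read the S3 bucket).
VENDOR_ACCOUNT_ID = "999999999999"

TRUST_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::" + VENDOR_ACCOUNT_ID + ":root"},
        "Action": "sts:AssumeRole",
    }],
}

def vendor_assume_role(role_arn):
    """Run by the vendor: exchange their own AWS credentials for temporary
    credentials scoped to the role in your account."""
    import boto3  # deferred so the module imports without boto3 installed
    sts = boto3.client("sts")
    resp = sts.assume_role(RoleArn=role_arn, RoleSessionName="vendor-read")
    return resp["Credentials"]  # temporary AccessKeyId / SecretAccessKey / SessionToken
```

No long-lived secrets ever change hands, which is why this beats handing over access keys.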
NEW QUESTION 6
You need to process long-running jobs once and only once. How might you do this?
- A. Use an SNS queue and set the visibility timeout to long enough for jobs to process.
- B. Use an SQS queue and set the reprocessing timeout to long enough for jobs to process.
- C. Use an SQS queue and set the visibility timeout to long enough for jobs to process.
- D. Use an SNS queue and set the reprocessing timeout to long enough for jobs to process.
Answer: C
Explanation:
The visibility timeout defines how long, after a successful receive request, SQS hides a message from other consumers. Setting it long enough for jobs to finish processing prevents the same job from being picked up and processed a second time.
Reference: http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/MessageLifecycle.html
NEW QUESTION 7
You are building a game high score table in DynamoDB. You will store each user's highest score for each game, with many games, all of which have relatively similar usage levels and numbers of players. You need to be able to look up the highest score for any game. What's the best DynamoDB key structure?
- A. HighestScore as the hash / only key.
- B. GameID as the hash key, HighestScore as the range key.
- C. GameID as the hash / only key.
- D. GameID as the range / only key.
Answer: B
Explanation:
Since access and storage for games is uniform, and you need to have ordering within each game for the scores (to access the highest value), your hash (partition) key should be the GameID, and there should be a range key for HighestScore.
Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GuidelinesForTables.html#GuidelinesForTables.Partitions
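The lookup this key design enables can be sketched as a low-level `Query` (table name is a placeholder): the hash key selects the game's partition, and because `HighestScore` is the range key, items are stored sorted, so reading one item in descending order returns the top score:

```python
# Parameters for a DynamoDB Query that returns only the highest score for
# one game. "HighScores" is a placeholder table name.
def top_score_params(game_id):
    return {
        "TableName": "HighScores",
        "KeyConditionExpression": "GameID = :g",
        "ExpressionAttributeValues": {":g": {"S": game_id}},
        "ScanIndexForward": False,  # descending range-key (score) order
        "Limit": 1,                 # just the single highest item
    }
```

These parameters would be passed to the DynamoDB `query` API; no scan or client-side sort is needed.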
NEW QUESTION 8
You need to deploy an AWS stack in a repeatable manner across multiple environments. You have selected CloudFormation as the right tool to accomplish this, but have found that there is a resource type you need to create and model that is unsupported by CloudFormation. How should you overcome this challenge?
- A. Use a CloudFormation Custom Resource Template by selecting an API call to proxy for create, update, and delete actions. CloudFormation will use the AWS SDK, CLI, or API method of your choosing as the state transition function for the resource type you are modeling.
- B. Submit a ticket to the AWS Forums. AWS extends CloudFormation Resource Types by releasing tooling to the AWS Labs organization on GitHub. Their response time is usually 1 day, and they complete requests within a week or two.
- C. Instead of depending on CloudFormation, use Chef, Puppet, or Ansible to author Heat templates, which are declarative stack resource definitions that operate over the OpenStack hypervisor and cloud environment.
- D. Create a CloudFormation Custom Resource Type by implementing create, update, and delete functionality, either by subscribing a Custom Resource Provider to an SNS topic, or by implementing the logic in AWS Lambda.
Answer: D
Explanation:
Custom resources provide a way for you to write custom provisioning logic in an AWS CloudFormation template and have AWS CloudFormation run it during a stack operation, such as when you create, update, or delete a stack. For more information, see Custom Resources.
Reference:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html
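A Lambda-backed custom resource provider can be sketched with only the standard library. CloudFormation invokes the function with a request event and a pre-signed `ResponseURL`; the provider does its work and PUTs a small JSON status document back:

```python
import json
import urllib.request

def build_response(event, status, data=None):
    """Assemble the JSON document CloudFormation expects back from a
    custom resource provider (status plus identifying fields echoed
    from the request event)."""
    return {
        "Status": status,  # "SUCCESS" or "FAILED"
        "PhysicalResourceId": event.get("PhysicalResourceId", "custom-resource"),
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": data or {},
    }

def handler(event, context):
    # event["RequestType"] is "Create", "Update", or "Delete"; real
    # provisioning logic for the unsupported resource would branch on it here.
    body = json.dumps(build_response(event, "SUCCESS")).encode()
    req = urllib.request.Request(event["ResponseURL"], data=body, method="PUT")
    urllib.request.urlopen(req)  # tells CloudFormation the operation finished
```

This is a sketch of the response protocol, not a complete provider; production code would also report `FAILED` on exceptions so stacks don't hang waiting for a response.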
NEW QUESTION 9
How does the Amazon RDS Multi-AZ (multiple Availability Zone) model work?
- A. A second, standby database is deployed and maintained in a different availability zone from the master, using synchronous replication.
- B. A second, standby database is deployed and maintained in a different availability zone from the master, using asynchronous replication.
- C. A second, standby database is deployed and maintained in a different region from the master, using asynchronous replication.
- D. A second, standby database is deployed and maintained in a different region from the master, using synchronous replication.
Answer: A
Explanation:
In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone.
Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
NEW QUESTION 10
Which of these is not a Pseudo Parameter in AWS CIoudFormation?
- A. AWS::StackName
- B. AWS::AccountId
- C. AWS::StackArn
- D. AWS::NotificationARNs
Answer: C
Explanation:
This is the complete list of Pseudo Parameters: AWS::AccountId, AWS::NotificationARNs, AWS::NoValue, AWS::Region, AWS::StackId, AWS::StackName
Reference:
http://docs.aws.amazon.com/AWSCIoudFormation/latest/UserGuide/pseudo-parameter-reference.html
NEW QUESTION 11
Your company wants to understand where cost is coming from in the company's production AWS account. There are a number of applications and services running at any given time. Without expending too much initial development time, how best can you give the business a good understanding of which applications cost the most per month to operate?
- A. Create an automation script which periodically creates AWS Support tickets requesting detailed intra-month information about your bill.
- B. Use custom CloudWatch Metrics in your system, and put a metric data point whenever cost is incurred.
- C. Use AWS Cost Allocation Tagging for all resources which support it. Use the Cost Explorer to analyze costs throughout the month.
- D. Use the AWS Price API and constantly running resource inventory scripts to calculate total price based on multiplication of consumed resources over time.
Answer: C
Explanation:
Cost Allocation Tagging is a built-in feature of AWS, and when coupled with the Cost Explorer, provides a simple and robust way to track expenses.
You can also use tags to filter views in Cost Explorer. Note that before you can filter views by tags in Cost Explorer, you must have applied tags to your resources and activated them, as described in the following sections. For more information about Cost Explorer, see Analyzing Your Costs with Cost Explorer.
Reference: http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html
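Once tags are activated, the per-application breakdown is a single Cost Explorer API request. A sketch, assuming a hypothetical cost allocation tag key of `app`:

```python
# Sketch of a Cost Explorer request grouping month-to-date unblended cost
# by a cost allocation tag. The tag key "app" is a placeholder and must be
# activated in the Billing console before it appears in results.
def monthly_cost_by_tag(start, end, tag_key="app"):
    return {
        "TimePeriod": {"Start": start, "End": end},  # "YYYY-MM-DD" strings
        "Granularity": "MONTHLY",
        "Metrics": ["UnblendedCost"],
        "GroupBy": [{"Type": "TAG", "Key": tag_key}],
    }

def fetch_costs(params):
    import boto3  # deferred so the module imports without boto3 installed
    return boto3.client("ce").get_cost_and_usage(**params)
```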
NEW QUESTION 12
When thinking of AWS Elastic Beanstalk, which statement is true?
- A. Worker tiers pull jobs from SNS.
- B. Worker tiers pull jobs from HTTP.
- C. Worker tiers pull jobs from JSON.
- D. Worker tiers pull jobs from SQS.
Answer: D
Explanation:
Elastic Beanstalk installs a daemon on each Amazon EC2 instance in the Auto Scaling group to process Amazon SQS messages in the worker environment. The daemon pulls data off the Amazon SQS queue, inserts it into the message body of an HTTP POST request, and sends it to a user-configurable URL path on the local host. The content type for the message body within an HTTP POST request is application/json by default.
Reference:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features-managing-env-tiers.html
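From the application's point of view, the worker-tier contract described above is simple: jobs arrive as HTTP POSTs with a JSON body, and a 2xx response tells the daemon to delete the message from the queue. A minimal sketch of the handler logic (the `task` field is a hypothetical job shape):

```python
import json

def handle_job(body):
    """Process one job delivered by the worker-tier daemon as an HTTP POST
    with an application/json body. Returning normally maps to an HTTP 200,
    which causes the daemon to delete the message from the SQS queue;
    raising maps to an error status, so the message becomes visible again
    and is retried."""
    job = json.loads(body)
    # ... do the actual long-running work here ...
    return {"processed": True, "job": job}
```

The function would be wired to the user-configurable URL path in whatever HTTP framework the worker application uses.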
NEW QUESTION 13
When thinking of AWS OpsWorks, which of the following is not an instance type you can allocate in a stack layer?
- A. 24/7 instances
- B. Spot instances
- C. Time-based instances
- D. Load-based instances
Answer: B
Explanation:
AWS OpsWorks supports the following instance types, which are characterized by how they are started and stopped. 24/7 instances are started manually and run until you stop them. Time-based instances are run by AWS OpsWorks on a specified daily and weekly schedule. They allow your stack to automatically adjust the number of instances to accommodate predictable usage patterns. Load-based instances are automatically started and stopped by AWS OpsWorks, based on specified load metrics, such as CPU utilization. They allow your stack to automatically adjust the number of instances to accommodate variations in incoming traffic. Load-based instances are available only for Linux-based stacks.
Reference: http://docs.aws.amazon.com/opsworks/latest/userguide/welcome.html
NEW QUESTION 14
You need to perform ad-hoc analysis on log data, including searching quickly for specific error codes and reference numbers. Which should you evaluate first?
- A. AWS Elasticsearch Service
- B. AWS RedShift
- C. AWS EMR
- D. AWS DynamoDB
Answer: A
Explanation:
Amazon Elasticsearch Service (Amazon ES) is a managed service that makes it easy to deploy, operate, and scale Elasticsearch clusters in the AWS cloud. Elasticsearch is a popular open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and click stream analytics.
Reference:
http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/what-is-amazon-elasticsearch-service.html
NEW QUESTION 15
You need to perform ad-hoc business analytics queries on well-structured data. Data comes in constantly at a high velocity. Your business intelligence team can understand SQL. What AWS service(s) should you look to first?
- A. Kinesis Firehose + RDS
- B. Kinesis Firehose + RedShift
- C. EMR using Hive
- D. EMR running Apache Spark
Answer: B
Explanation:
Kinesis Firehose provides a managed service for aggregating streaming data and inserting it into RedShift. RedShift also supports ad-hoc queries over well-structured data using a SQL-compliant wire protocol, so the business team should be able to adopt this system easily.
Reference: https://aws.amazon.com/kinesis/firehose/details/
NEW QUESTION 16
Which of these is not an intrinsic function in AWS CloudFormation?
- A. Fn::Equals
- B. Fn::If
- C. Fn::Not
- D. Fn::Parse
Answer: D
Explanation:
This is the complete list of Intrinsic Functions: Fn::Base64, Fn::And, Fn::Equals, Fn::If, Fn::Not, Fn::Or, Fn::FindInMap, Fn::GetAtt, Fn::GetAZs, Fn::Join, Fn::Select, Ref
Reference:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference.html
NEW QUESTION 17
Fill in the blanks: _____ helps us track AWS API calls and transitions, _____ helps us understand what resources we have now, and _____ allows auditing of credentials and logins.
- A. AWS Config, CloudTrail, IAM Credential Reports
- B. CloudTrail, IAM Credential Reports, AWS Config
- C. CloudTrail, AWS Config, IAM Credential Reports
- D. AWS Config, IAM Credential Reports, CloudTrail
Answer: C
Explanation:
You can use AWS CloudTrail to get a history of AWS API calls and related events for your account. This includes calls made by using the AWS Management Console, AWS SDKs, command line tools, and higher-level AWS services.
Reference: http://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html
NEW QUESTION 18
You need to create a simple, holistic check for your system's general availability and uptime. Your system presents itself as an HTTP-speaking API. What is the simplest tool on AWS to achieve this?
- A. Route53 Health Checks
- B. CloudWatch Health Checks
- C. AWS ELB Health Checks
- D. EC2 Health Checks
Answer: A
Explanation:
Using Route53, in one API call you can create a health check that runs in perpetuity, pinging your service via HTTP every 10 or 30 seconds.
Amazon Route 53 must be able to establish a TCP connection with the endpoint within four seconds. In addition, the endpoint must respond with an HTTP status code of 200 or greater and less than 400 within two seconds after connecting.
Reference:
http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-determining-health-of-endpoints.html
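The one-call setup can be sketched as follows (the domain, path, and caller reference are placeholders):

```python
# Sketch of a Route53 HTTP health check configuration. Route53 checkers
# will connect to the endpoint and require a 2xx/3xx response to count
# the target as healthy.
HEALTH_CHECK = {
    "Type": "HTTP",
    "FullyQualifiedDomainName": "api.example.com",  # placeholder endpoint
    "Port": 80,
    "ResourcePath": "/health",
    "RequestInterval": 30,   # seconds between checks: 30 (standard) or 10 (fast)
    "FailureThreshold": 3,   # consecutive failures before marking unhealthy
}

def create_check():
    import boto3  # deferred so the module imports without boto3 installed
    return boto3.client("route53").create_health_check(
        CallerReference="uptime-check-2021-01",  # any unique idempotency string
        HealthCheckConfig=HEALTH_CHECK,
    )
```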
NEW QUESTION 19
Your system uses a multi-master, multi-region DynamoDB configuration spanning two regions to achieve high availability. For the first time since launching your system, one of the AWS Regions in which you operate went down for 3 hours, and the failover worked correctly. However, after recovery, your users are experiencing strange bugs, in which users on different sides of the globe see different data. What is a likely design issue that was not accounted for when launching?
- A. The system does not have Lambda Functor Repair Automations, to perform table scans and check for corrupted partition blocks inside the Table in the recovered Region.
- B. The system did not implement DynamoDB Table Defragmentation for restoring partition performance in the Region that experienced an outage, so data is served stale.
- C. The system did not include repair logic and request replay buffering logic for post-failure, to re-synchronize data to the Region that was unavailable for a number of hours.
- D. The system did not use DynamoDB Consistent Read requests, so the requests in different areas are not utilizing consensus across Regions at runtime.
Answer: C
Explanation:
When using multi-region DynamoDB systems, it is of paramount importance to make sure that all requests made to one Region are replicated to the other. Under normal operation, the system in question would correctly perform write replays into the other Region. If a whole Region went down, the system would be unable to perform these writes for the period of downtime. Without buffering write requests somehow, there would be no way for the system to replay dropped cross-region writes, and the requests would be serviced differently depending on the Region from which they were served after recovery.
Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.CrossRegionRepl.html
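The missing design piece can be illustrated with a toy, in-memory sketch (real systems would use a durable store such as SQS or a local DynamoDB table for the buffer): writes destined for an unreachable Region are held in order and replayed after recovery, rather than dropped:

```python
from collections import deque

class BufferedReplicator:
    """Toy sketch of cross-region write buffering: while the peer Region is
    down, writes are queued instead of dropped, then replayed in their
    original order once the Region recovers."""

    def __init__(self, send):
        self.send = send      # callable that performs the write in the peer Region
        self.buffer = deque()

    def replicate(self, item, peer_up):
        if peer_up:
            self.send(item)
        else:
            self.buffer.append(item)  # hold the write for later replay

    def replay(self):
        """Call after the peer Region recovers; drains buffered writes in order."""
        while self.buffer:
            self.send(self.buffer.popleft())
```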
NEW QUESTION 20
From a compliance and security perspective, which of these statements is true?
- A. You do not ever need to rotate access keys for AWS IAM Users.
- B. You do not ever need to rotate access keys for AWS IAM Roles, nor AWS IAM Users.
- C. None of the other statements are true.
- D. You do not ever need to rotate access keys for AWS IAM Roles.
Answer: D
Explanation:
IAM Role Access Keys are auto-rotated by AWS on your behalf; you do not need to rotate them.
The application is granted the permissions for the actions and resources that you've defined for the role through the security credentials associated with the role. These security credentials are temporary and we rotate them automatically. We make new credentials available at least five minutes prior to the expiration of the old credentials.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
NEW QUESTION 21
......
Recommend!! Get the Full AWS-Certified-DevOps-Engineer-Professional dumps in VCE and PDF From prep-labs.com, Welcome to Download: https://www.prep-labs.com/dumps/AWS-Certified-DevOps-Engineer-Professional/ (New 371 Q&As Version)