Amazon AWS-Solution-Architect-Associate Dump 2021
Master the AWS-Solution-Architect-Associate Amazon AWS Certified Solutions Architect - Associate content and be ready for exam day success quickly with this Certleader AWS-Solution-Architect-Associate practice question. We guarantee it! We make it a reality and give you real AWS-Solution-Architect-Associate questions in our Amazon AWS-Solution-Architect-Associate braindumps. Latest 100% VALID Amazon AWS-Solution-Architect-Associate Exam Questions Dumps at below page. You can use our Amazon AWS-Solution-Architect-Associate braindumps and pass your exam.
Amazon AWS-Solution-Architect-Associate Free Dumps Questions Online, Read and Test Now.
NEW QUESTION 1
You're trying to delete an SSL certificate from the IAM certificate store, and you're getting the message "Certificate: <certificate-id> is being used by CloudFront." Which of the following statements is probably the reason why you are getting this error?
- A. Before you can delete an SSL certificate, you need to either rotate SSL certificates or revert from using a custom SSL certificate to using the default CloudFront certificate.
- B. You can't delete SSL certificates. You need to request it from AWS.
- C. Before you can delete an SSL certificate, you need to set up the appropriate access level in IAM.
- D. Before you can delete an SSL certificate you need to set up HTTPS on your server.
Answer: A
Explanation:
CloudFront is a web service that speeds up distribution of your static and dynamic web content, for example, .html, .css, .php, and image files, to end users.
Every CloudFront web distribution must be associated either with the default CloudFront certificate or with a custom SSL certificate. Before you can delete an SSL certificate, you need to either rotate SSL certificates (replace the current custom SSL certificate with another custom SSL certificate) or revert from using a custom SSL certificate to using the default CloudFront certificate.
Reference: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Troubleshooting.html
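The revert described above happens in the distribution's ViewerCertificate settings. As a rough illustration only (the field names follow the CloudFront DistributionConfig structure; the certificate ID is a placeholder, and a real update via the UpdateDistribution API would also need the distribution's current ETag and full config, omitted here):

```python
import json

# ViewerCertificate block while a custom certificate from the IAM
# certificate store is attached (the certificate ID is a placeholder).
custom_cert = {
    "IAMCertificateId": "ASCACKCEVSQ6CEXAMPLE",
    "SSLSupportMethod": "sni-only",
}

# Reverting to the default CloudFront certificate: once no distribution
# references the custom certificate, it can be deleted from the IAM store.
default_cert = {
    "CloudFrontDefaultCertificate": True,
}

print(json.dumps(default_cert))
```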
NEW QUESTION 2
Is there a limit to how many groups a user can be in?
- A. Yes for all users
- B. Yes for all users except root
- C. No
- D. Yes unless special permission granted
Answer: A
NEW QUESTION 3
A user is making a scalable web application with compartmentalization. The user wants the log module to be accessible by all the application functionalities in an asynchronous way. Each module of the application sends data to the log module, and based on resource availability it will process the logs. Which AWS service provides this functionality?
- A. AWS Simple Queue Service.
- B. AWS Simple Notification Service.
- C. AWS Simple Workflow Service.
- D. AWS Simple Email Service.
Answer: A
Explanation:
Amazon Simple Queue Service (SQS) is a highly reliable distributed messaging system for storing messages as they travel between computers. By using Amazon SQS, developers can simply move data between distributed application components. It is used to achieve compartmentalization or loose coupling. In this case all the modules will send a message to the logger queue, and the data will be processed from the queue as resources become available.
Reference: http://media.amazonwebservices.com/AWS_Building_Fault_Tolerant_Applications.pdf
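Talking to real SQS needs AWS credentials, but the decoupling pattern the explanation describes can be sketched locally, with an in-process queue standing in for the SQS queue:

```python
import queue
import threading

log_queue = queue.Queue()  # stands in for the SQS logger queue
processed = []

def log_worker():
    # The log module consumes messages as resources allow,
    # independently of the producing modules.
    while True:
        msg = log_queue.get()
        if msg is None:          # sentinel to stop the worker
            break
        processed.append(f"logged: {msg}")
        log_queue.task_done()

worker = threading.Thread(target=log_worker)
worker.start()

# Each application module just enqueues and moves on (asynchronous send).
for module in ("web", "orders", "billing"):
    log_queue.put(f"{module} event")

log_queue.put(None)
worker.join()
print(processed)
```

With SQS, the `put` calls become `SendMessage` requests and the worker polls with `ReceiveMessage`, but the loose coupling is the same: producers never wait on the log module.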
NEW QUESTION 4
You have decided to change the instance type for instances running in your application tier that is using Auto Scaling. In which area below would you change the instance type definition?
- A. Auto Scaling policy
- B. Auto Scaling group
- C. Auto Scaling tags
- D. Auto Scaling launch configuration
Answer: D
NEW QUESTION 5
An organization has a statutory requirement to protect the data at rest for its S3 objects. Which of the below mentioned options does not need to be enabled by the organization to achieve data security?
- A. MFA delete for S3 objects
- B. Client side encryption
- C. Bucket versioning
- D. Data replication
Answer: D
Explanation:
AWS S3 provides multiple options to achieve the protection of data at rest. The options include Permission (Policy), Encryption (Client and Server Side), Bucket Versioning and MFA based delete. The user can enable any of these options to achieve data protection. Data replication is an internal facility by AWS where S3 replicates each object across all the Availability Zones, and the organization need not enable it in this case.
Reference: http://media.amazonwebservices.com/AWS_Security_Best_Practices.pdf
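One common way to enforce the encryption options mentioned above is a bucket policy that denies any PUT lacking the server-side-encryption header. A sketch of such a policy document (the bucket name is a placeholder; client-side encryption, as in option B, would instead be enforced in the application before upload):

```python
import json

# Policy denying unencrypted object uploads; attach to the bucket
# via PutBucketPolicy. "example-bucket" is a placeholder name.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedObjectUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
        "Condition": {
            "StringNotEquals": {
                "s3:x-amz-server-side-encryption": "AES256"
            }
        },
    }],
}

print(json.dumps(bucket_policy, indent=2))
```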
NEW QUESTION 6
A 3-tier e-commerce web application is currently deployed on-premises and will be migrated to AWS for
greater scalability and elasticity. The web server currently shares read-only data using a network distributed file system. The app server tier uses a clustering mechanism for discovery and shared session state that depends on IP multicast. The database tier uses shared-storage clustering to provide database failover capability, and uses several read slaves for scaling. Data on all servers and the distributed file system directory is backed up weekly to off-site tapes.
Which AWS storage and database architecture meets the requirements of the application?
- A. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with Multi-AZ deployment and one or more read replicas. Backup: web servers, app servers, and database backed up weekly to Glacier using snapshots.
- B. Web servers: store read-only data in an EC2 NFS server, mount to each web server at boot time. App servers: share state using a combination of DynamoDB and IP multicast. Database: use RDS with Multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.
- C. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with Multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.
- D. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with Multi-AZ deployment. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.
Answer: C
Explanation:
Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure (for example, instance hardware failure, storage failure, or network disruption), Amazon RDS performs an automatic failover to the standby, so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.
Benefits
Enhanced Durability
Multi-AZ deployments for the MySQL, Oracle, and PostgreSQL engines utilize synchronous physical replication to keep data on the standby up-to-date with the primary. Multi-AZ deployments for the SQL Server engine use synchronous logical replication to achieve the same result, employing SQL Server-native Mirroring technology. Both approaches safeguard your data in the event of a DB Instance failure or loss of an Availability Zone.
If a storage volume on your primary fails in a Multi-AZ deployment, Amazon RDS automatically initiates a failover to the up-to-date standby. Compare this to a Single-AZ deployment: in case of a Single-AZ database failure, a user-initiated point-in-time-restore operation will be required. This operation can take several hours to complete, and any data updates that occurred after the latest restorable time (typically within the last five minutes) will not be available.
Amazon Aurora employs a highly durable, SSD-backed virtualized storage layer purpose-built for
database workloads. Amazon Aurora automatically replicates your volume six ways, across three Availability Zones. Amazon Aurora storage is fault-tolerant, transparently handling the loss of up to two copies of data without affecting database write availability and up to three copies without affecting read availability. Amazon Aurora storage is also self-healing. Data blocks and disks are continuously scanned for errors and replaced automatically.
Increased Availability
You also benefit from enhanced database availability when running Multi-AZ deployments. If an Availability Zone failure or DB Instance failure occurs, your availability impact is limited to the time automatic failover takes to complete: typically under one minute for Amazon Aurora and one to two minutes for other database engines (see the RDS FAQ for details).
The availability benefits of MuIti-AZ deployments also extend to planned maintenance and backups.
In the case of system upgrades like OS patching or DB Instance scaling, these operations are applied first on the standby, prior to the automatic failover. As a result, your availability impact is, again, only the time required for automatic fail over to complete.
Unlike Single-AZ deployments, I/O activity is not suspended on your primary during backup for Multi-AZ deployments for the MySQL, Oracle, and PostgreSQL engines, because the backup is taken from the standby. However, note that you may still experience elevated latencies for a few minutes during backups for Multi-AZ deployments.
On instance failure in Amazon Aurora deployments, Amazon RDS uses RDS MuIti-AZ technology to automate failover to one of up to 15 Amazon Aurora Replicas you have created in any of three Availability Zones. If no Amazon Aurora Replicas have been provisioned, in the case of a failure, Amazon RDS will attempt to create a new Amazon Aurora DB instance for you automatically.
No Administrative Intervention
DB Instance failover is fully automatic and requires no administrative intervention. Amazon RDS monitors the health of your primary and standbys, and initiates a failover automatically in response to a variety of failure conditions.
Failover conditions
Amazon RDS detects and automatically recovers from the most common failure scenarios for Multi-AZ deployments so that you can resume database operations as quickly as possible without administrative intervention. Amazon RDS automatically performs a failover in the event of any of the following:
Loss of availability in primary Availability Zone
Loss of network connectivity to primary
Compute unit failure on primary
Storage failure on primary
Note: When operations such as DB Instance scaling or system upgrades like OS patching are initiated for Multi-AZ deployments, for enhanced availability, they are applied first on the standby prior to an automatic failover. As a result, your availability impact is limited only to the time required for automatic failover to complete. Note that Amazon RDS Multi-AZ deployments do not fail over automatically in response to database operations such as long-running queries, deadlocks, or database corruption errors.
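Because the DB endpoint stays the same across a failover, application code typically just retries the connection until the standby has been promoted. A minimal retry-with-backoff sketch (the connect function here is a local stand-in that fails twice before succeeding, simulating the failover window; a real application would use its database driver's connect call):

```python
import time

class TransientDBError(Exception):
    pass

attempts = {"n": 0}

def connect_to_endpoint():
    # Stand-in for a real driver connect() against the unchanged RDS
    # endpoint; fails twice to simulate the 1-2 minute failover window.
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TransientDBError("endpoint not ready")
    return "connection"

def connect_with_retry(max_tries=5, base_delay=0.01):
    for i in range(max_tries):
        try:
            return connect_to_endpoint()
        except TransientDBError:
            time.sleep(base_delay * (2 ** i))  # exponential backoff
    raise RuntimeError("gave up waiting for failover")

conn = connect_with_retry()
print(conn, "after", attempts["n"], "attempts")
```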
NEW QUESTION 7
Can I test my DB Instance against a new version before upgrading?
- A. No
- B. Yes
- C. Only in VPC
Answer: B
NEW QUESTION 8
You have a Business support plan with AWS. One of your EC2 instances is running Microsoft Windows Server 2008 R2 and you are having problems with the software. Can you receive support from AWS for this software?
- A. Yes
- B. No, AWS does not support any third-party software.
- C. No, Microsoft Windows Server 2008 R2 is not supported.
- D. No, you need to be on the Enterprise support plan.
Answer: A
Explanation:
Third-party software support is available only to AWS Support customers enrolled for Business or Enterprise Support. Third-party support applies only to software running on Amazon EC2 and does not extend to assisting with on-premises software. An exception to this is a VPN tunnel configuration running supported devices for Amazon VPC.
Reference: https://aws.amazon.com/premiumsupport/features/
NEW QUESTION 9
The new DB Instance that is created when you promote a Read Replica retains the backup window period.
- A. TRUE
- B. FALSE
Answer: A
NEW QUESTION 10
Just when you thought you knew every possible storage option on AWS you hear someone mention Reduced Redundancy Storage (RRS) within Amazon S3. What is the ideal scenario to use Reduced Redundancy Storage (RRS)?
- A. Huge volumes of data
- B. Sensitive data
- C. Non-critical or reproducible data
- D. Critical data
Answer: C
Explanation:
Reduced Redundancy Storage (RRS) is a new storage option within Amazon S3 that enables customers to reduce their costs by storing non-critical, reproducible data at lower levels of redundancy than Amazon S3’s standard storage. RRS provides a lower cost, less durable, highly available storage option that is designed to sustain the loss of data in a single facility.
RRS is ideal for non-critical or reproducible data.
For example, RRS is a cost-effective solution for sharing media content that is durably stored elsewhere. RRS also makes sense if you are storing thumbnails and other resized images that can be easily reproduced from an original image.
Reference: https://aws.amazon.com/s3/faqs/
NEW QUESTION 11
Your customer wants to consolidate their log streams (access logs, application logs, security logs, etc.) in one single system. Once consolidated, the customer wants to analyze these logs in real time based on heuristics. From time to time, the customer needs to validate the heuristics, which requires going back to data samples extracted from the last 12 hours.
What is the best approach to meet your customer's requirements?
- A. Send all the log events to Amazon SQS. Setup an Auto Scaling group of EC2 servers to consume the logs and apply the heuristics.
- B. Send all the log events to Amazon Kinesis, develop a client process to apply heuristics on the logs.
- C. Configure Amazon CloudTrail to receive custom logs, use EMR to apply heuristics on the logs.
- D. Setup an Auto Scaling group of EC2 syslogd servers, store the logs on S3, use EMR to apply heuristics on the logs.
Answer: B
Explanation:
The throughput of an Amazon Kinesis stream is designed to scale without limits via increasing the number of shards within a stream. However, there are certain limits you should keep in mind while using Amazon Kinesis Streams:
By default, records of a stream are accessible for up to 24 hours from the time they are added to the stream. You can raise this limit to up to 7 days by enabling extended data retention.
The maximum size of a data blob (the data payload before Base64-encoding) within one record is 1 megabyte (MB).
Each shard can support up to 1000 PUT records per second.
For more information about other API level limits, see Amazon Kinesis Streams Limits.
NEW QUESTION 12
What does Amazon Route53 provide?
- A. A global Content Delivery Network.
- B. None of these.
- C. A scalable Domain Name System.
- D. An SSH endpoint for Amazon EC2.
Answer: C
NEW QUESTION 13
Do the system resources on the Micro instance meet the recommended configuration for Oracle?
- A. Yes completely
- B. Yes but only for certain situations
- C. Not in any circumstance
Answer: B
NEW QUESTION 14
Your customer wishes to deploy an enterprise application to AWS which will consist of several web servers, several application servers, and a small (50GB) Oracle database. Information is stored both in the database and the file systems of the various servers. The backup system must support database recovery, whole server and whole disk restores, and individual file restores with a recovery time of no more than two hours. They have chosen to use RDS Oracle as the database.
Which backup architecture will meet these requirements?
- A. Backup RDS using automated daily DB backups. Backup the EC2 instances using AMIs and supplement with file-level backup to S3 using traditional enterprise backup software to provide file level restore.
- B. Backup RDS using a Multi-AZ Deployment. Backup the EC2 instances using AMIs, and supplement by copying file system data to S3 to provide file level restore.
- C. Backup RDS using automated daily DB backups. Backup the EC2 instances using EBS snapshots and supplement with file-level backups to Amazon Glacier using traditional enterprise backup software to provide file level restore.
- D. Backup RDS database to S3 using Oracle RMAN. Backup the EC2 instances using AMIs, and supplement with EBS snapshots for individual volume restore.
Answer: A
Explanation:
Point-In-Time Recovery
In addition to the daily automated backup, Amazon RDS archives database change logs. This enables you to recover your database to any point in time during the backup retention period, up to the last five minutes of database usage.
Amazon RDS stores multiple copies of your data, but for Single-AZ DB instances these copies are stored in a single availability zone. If for any reason a Single-AZ DB instance becomes unusable, you can use point-in-time recovery to launch a new DB instance with the latest restorable data. For more information on working with point-in-time recovery, go to Restoring a DB Instance to a Specified Time.
Note
Multi-AZ deployments store copies of your data in different Availability Zones for greater levels of data durability. For more information on Multi-AZ deployments, see High Availability (Multi-AZ).
NEW QUESTION 15
Can a 'user' be associated with multiple AWS accounts?
- A. No
- B. Yes
Answer: A
NEW QUESTION 16
You want to use AWS Import/Export to send data from your S3 bucket to several of your branch offices. What should you do if you want to send 10 storage units to AWS?
- A. Make sure your disks are encrypted prior to shipping.
- B. Make sure you format your disks prior to shipping.
- C. Make sure your disks are 1TB or more.
- D. Make sure you submit a separate job request for each device.
Answer: D
Explanation:
When using Amazon Import/Export, a separate job request needs to be submitted for each physical device even if they belong to the same import or export job.
Reference: http://docs.aws.amazon.com/AWSImportExport/latest/DG/Concepts.html
NEW QUESTION 17
What is one key difference between an Amazon EBS-backed and an instance-store backed instance?
- A. Amazon EBS-backed instances can be stopped and restarted.
- B. Instance-store backed instances can be stopped and restarted.
- C. Auto scaling requires using Amazon EBS-backed instances.
- D. Virtual Private Cloud requires EBS backed instance
Answer: A
Explanation:
Reference:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device
NEW QUESTION 18
How many types of block devices does Amazon EC2 support?
- A. 2
- B. 3
- C. 4
- D. 1
Answer: A
NEW QUESTION 19
You are setting up your first Amazon Virtual Private Cloud (Amazon VPC) so you decide to use the VPC wizard in the AWS console to help make it easier for you. Which of the following statements is correct regarding instances that you launch into a default subnet via the VPC wizard?
- A. Instances that you launch into a default subnet receive a public IP address and 10 private IP addresses.
- B. Instances that you launch into a default subnet receive both a public IP address and a private IP address.
- C. Instances that you launch into a default subnet don't receive any ip addresses and you need to define them manually.
- D. Instances that you launch into a default subnet receive a public IP address and 5 private IP addresses.
Answer: B
Explanation:
Instances that you launch into a default subnet receive both a public IP address and a private IP address. Instances in a default subnet also receive both public and private DNS hostnames. Instances that you launch into a nondefault subnet in a default VPC don't receive a public IP address or a DNS hostname. You can change your subnet's default public IP addressing behavior.
Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/default-vpc.html
NEW QUESTION 20
Read Replicas require a transactional storage engine and are only supported for the ______ storage engine.
- A. OracleISAM
- B. MSSQLDB
- C. InnoDB
- D. MyISAM
Answer: C
NEW QUESTION 21
A company needs to monitor the read and write IOPs metrics for their AWS MySQL RDS instance and send real-time alerts to their operations team. Which AWS services can accomplish this? Choose 2 answers
- A. Amazon Simple Email Service
- B. Amazon CloudWatch
- C. Amazon Simple Queue Service
- D. Amazon Route 53
- E. Amazon Simple Notification Service
Answer: BE
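The pairing works as follows: CloudWatch watches the RDS ReadIOPS/WriteIOPS metrics, and an alarm publishes to an SNS topic the operations team subscribes to. A sketch of the parameters one would pass to CloudWatch's PutMetricAlarm API (the topic ARN, threshold, and instance identifier are placeholders, not values from the question):

```python
# Parameter set for a PutMetricAlarm call on the RDS ReadIOPS metric.
# A matching alarm on "WriteIOPS" covers the write side.
read_iops_alarm = {
    "AlarmName": "rds-read-iops-high",
    "Namespace": "AWS/RDS",
    "MetricName": "ReadIOPS",
    "Dimensions": [{"Name": "DBInstanceIdentifier", "Value": "mydb"}],
    "Statistic": "Average",
    "Period": 60,                 # evaluate 1-minute datapoints
    "EvaluationPeriods": 3,       # 3 consecutive breaches trigger the alarm
    "Threshold": 1000.0,          # placeholder threshold
    "ComparisonOperator": "GreaterThanThreshold",
    # SNS topic the operations team subscribes to (placeholder ARN)
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
}

print(read_iops_alarm["MetricName"])
```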
NEW QUESTION 22
True or False: Common points of failures like generators and cooling equipment are shared across Availability Zones.
- A. TRUE
- B. FALSE
Answer: B
NEW QUESTION 23
When should I choose Provisioned IOPS over Standard RDS storage?
- A. If you have batch-oriented workloads
- B. If you use production online transaction processing (OLTP) workloads.
- C. If you have workloads that are not sensitive to consistent performance
Answer: B
NEW QUESTION 24
True or false: A VPC contains multiple subnets, where each subnet can span multiple Availability Zones.
- A. This is true only if requested during the set-up of VPC.
- B. This is true.
- C. This is false.
- D. This is true only for US region
Answer: C
Explanation:
A VPC can span several Availability Zones. In contrast, a subnet must reside within a single Availability Zone.
Reference: https://aws.amazon.com/vpc/faqs/
NEW QUESTION 25
You have been asked to build AWS infrastructure for disaster recovery for your local applications and within that you should use an AWS Storage Gateway as part of the solution. Which of the following best describes the function of an AWS Storage Gateway?
- A. Accelerates transferring large amounts of data between the AWS cloud and portable storage devices.
- B. A web service that speeds up distribution of your static and dynamic web content.
- C. Connects an on-premises software appliance with cloud-based storage to provide seamless and secure integration between your on-premises IT environment and AWS's storage infrastructure.
- D. Is a storage service optimized for infrequently used data, or "cold data."
Answer: C
Explanation:
AWS Storage Gateway connects an on-premises software appliance with cloud-based storage to provide seamless integration with data security features between your on-premises IT environment and the Amazon Web Services (AWS) storage infrastructure. You can use the service to store data in the AWS cloud for scalable and cost-effective storage that helps maintain data security. AWS Storage Gateway offers both volume-based and tape-based storage solutions:
Volume gateways (gateway-cached volumes and gateway-stored volumes)
Gateway-virtual tape library (VTL)
Reference:
http://media.amazonwebservices.com/architecturecenter/AWS_ac_ra_disasterrecovery_07.pdf
NEW QUESTION 26
What is the time period with which metric data is sent to CloudWatch when detailed monitoring is enabled on an Amazon EC2 instance?
- A. 15 minutes
- B. 5 minutes
- C. 1 minute
- D. 45 seconds
Answer: C
Explanation:
By default, Amazon EC2 metric data is automatically sent to CloudWatch in 5-minute periods. However, you can enable detailed monitoring on an Amazon EC2 instance, which sends data to CloudWatch in 1-minute periods.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch.html
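The practical difference shows up in datapoint density. A quick back-of-the-envelope sketch of datapoints per metric per hour at the two period settings:

```python
def datapoints_per_hour(period_seconds):
    # One datapoint per period, so 3600 s / period length.
    return 3600 // period_seconds

basic = datapoints_per_hour(300)    # default basic monitoring: 5-minute periods
detailed = datapoints_per_hour(60)  # detailed monitoring: 1-minute periods
print(basic, detailed)
```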
NEW QUESTION 27
You need to quickly set up an email-sending service because a client needs to start using it in the next hour. Amazon Simple Email Service (Amazon SES) seems to be the logical choice but there are several options available to set it up. Which of the following options to set up SES would best meet the needs of the client?
- A. Amazon SES console
- B. AWS CloudFormation
- C. SMTP Interface
- D. AWS Elastic Beanstalk
Answer: A
Explanation:
Amazon SES is an outbound-only email-sending service that provides an easy, cost-effective way for you to send email.
There are several ways that you can send an email by using Amazon SES. You can use the Amazon SES console, the Simple Mail Transfer Protocol (SMTP) interface, or you can call the Amazon SES API. The Amazon SES console is the quickest way to set up your system.
Reference: http://docs.aws.amazon.com/ses/latest/DeveloperGuide/Welcome.html
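Whichever interface is used, the message itself is ordinary MIME. A sketch that builds one with the Python standard library (sender and recipient addresses are placeholders; actually sending through the SES SMTP interface would additionally require SES SMTP credentials and your region's SMTP endpoint):

```python
from email.mime.text import MIMEText

# Build a plain-text message; the From address must be an SES-verified
# identity, and both addresses here are placeholders.
msg = MIMEText("Your order has shipped.", "plain")
msg["Subject"] = "Order update"
msg["From"] = "no-reply@example.com"
msg["To"] = "customer@example.com"

raw = msg.as_string()
print(raw.splitlines()[0])

# Sending (not executed here) would look roughly like:
# import smtplib
# with smtplib.SMTP("email-smtp.us-east-1.amazonaws.com", 587) as s:
#     s.starttls()
#     s.login(SMTP_USERNAME, SMTP_PASSWORD)  # SES SMTP credentials
#     s.send_message(msg)
```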
NEW QUESTION 28
......
Recommend!! Get the Full AWS-Solution-Architect-Associate dumps in VCE and PDF From DumpSolutions, Welcome to Download: https://www.dumpsolutions.com/AWS-Solution-Architect-Associate-dumps/ (New 1487 Q&As Version)