DBS-C01 Practice Questions
Free online DBS-C01 questions and answers (new version):
NEW QUESTION 1
A gaming company has recently acquired a successful iOS game, which is particularly popular during the holiday season. The company has decided to add a leaderboard to the game that uses Amazon DynamoDB. The application load is expected to ramp up over the holiday season.
Which solution will meet these requirements at the lowest cost?
- A. DynamoDB Streams
- B. DynamoDB with DynamoDB Accelerator
- C. DynamoDB with on-demand capacity mode
- D. DynamoDB with provisioned capacity mode with Auto Scaling
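The capacity-mode choice in options C and D comes down to a single request parameter. Below is a hedged sketch (table and attribute names are hypothetical, and no claim is made about which option is keyed correct) of the `create_table` request parameters for on-demand capacity:

```python
# Hypothetical leaderboard table using on-demand capacity; only the request
# parameters are built here (no AWS call is made).
leaderboard_table = {
    "TableName": "GameLeaderboard",  # hypothetical name
    "AttributeDefinitions": [
        {"AttributeName": "user_id", "AttributeType": "S"},
        {"AttributeName": "score", "AttributeType": "N"},
    ],
    "KeySchema": [
        {"AttributeName": "user_id", "KeyType": "HASH"},
        {"AttributeName": "score", "KeyType": "RANGE"},
    ],
    # On-demand mode: no capacity planning; billing and scaling follow request volume.
    "BillingMode": "PAY_PER_REQUEST",
}
# With boto3: boto3.client("dynamodb").create_table(**leaderboard_table)
```

Provisioned mode with Auto Scaling (option D) would instead set `BillingMode` to `PROVISIONED`, supply `ProvisionedThroughput`, and register scaling targets through Application Auto Scaling.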
NEW QUESTION 2
A Database Specialist must create a read replica to isolate read-only queries for an Amazon RDS for MySQL DB instance. Immediately after creating the read replica, users that query it report slow response times.
What could be causing these slow response times?
- A. New volumes created from snapshots load lazily in the background
- B. Long-running statements on the master
- C. Insufficient resources on the master
- D. Overload of a single replication thread by excessive writes on the master
NEW QUESTION 3
A company is looking to move an on-premises IBM Db2 database running on AIX on an IBM POWER7 server. Due to escalating support and maintenance costs, the company is exploring the option of moving the workload to an Amazon Aurora PostgreSQL DB cluster.
What is the quickest way for the company to gather data on the migration compatibility?
- A. Perform a logical dump from the Db2 database and restore it to an Aurora DB cluster. Identify the gaps and compatibility of the objects migrated by comparing row counts from source and target tables.
- B. Run AWS DMS from the Db2 database to an Aurora DB cluster. Identify the gaps and compatibility of the objects migrated by comparing the row counts from source and target tables.
- C. Run native PostgreSQL logical replication from the Db2 database to an Aurora DB cluster to evaluate the migration compatibility.
- D. Run the AWS Schema Conversion Tool (AWS SCT) from the Db2 database to an Aurora DB cluster. Create a migration assessment report to evaluate the migration compatibility.
NEW QUESTION 4
A marketing company is using Amazon DocumentDB and requires that database audit logs be enabled. A Database Specialist needs to configure monitoring so that all data definition language (DDL) statements performed are visible to the Administrator. The Database Specialist has set the audit_logs parameter to enabled in the cluster parameter group.
What should the Database Specialist do to automatically collect the database logs for the Administrator?
- A. Enable DocumentDB to export the logs to Amazon CloudWatch Logs
- B. Enable DocumentDB to export the logs to AWS CloudTrail
- C. Enable DocumentDB Events to export the logs to Amazon CloudWatch Logs
- D. Configure an AWS Lambda function to download the logs using the download-db-log-file-portion operationand store the logs in Amazon S3
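In practice, turning on the `audit_logs` parameter only makes DocumentDB generate the records; exporting them to CloudWatch Logs is a separate cluster setting. A hedged sketch of the `modify_db_cluster` request parameters (the cluster identifier is hypothetical):

```python
# Request parameters (no AWS call made) to start exporting DocumentDB audit
# logs to CloudWatch Logs; the cluster identifier is hypothetical.
export_audit_logs = {
    "DBClusterIdentifier": "marketing-docdb",  # hypothetical
    # Enable export of the "audit" log type to CloudWatch Logs.
    "CloudwatchLogsExportConfiguration": {"EnableLogTypes": ["audit"]},
}
# With boto3: boto3.client("docdb").modify_db_cluster(**export_audit_logs)
```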
NEW QUESTION 5
A Database Specialist is working with a company to launch a new website built on Amazon Aurora with several Aurora Replicas. This new website will replace an on-premises website connected to a legacy relational database. Due to stability issues in the legacy database, the company would like to test the resiliency of Aurora.
Which action can the Database Specialist take to test the resiliency of the Aurora DB cluster?
- A. Stop the DB cluster and analyze how the website responds
- B. Use Aurora fault injection to crash the master DB instance
- C. Remove the DB cluster endpoint to simulate a master DB instance failure
- D. Use Aurora Backtrack to crash the DB cluster
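For context, "Aurora fault injection" refers to special SQL statements run against the cluster. The statements below use the Aurora MySQL syntax (an assumption worth verifying against the engine documentation; Aurora PostgreSQL exposes `aurora_inject_*` functions instead), shown here only as strings:

```python
# Aurora MySQL fault-injection statements (assumed syntax; run them only
# against a test cluster via a normal SQL client).
crash_writer = "ALTER SYSTEM CRASH INSTANCE;"        # force a writer crash/failover
simulate_replica_failure = (
    "ALTER SYSTEM SIMULATE 100 PERCENT READ REPLICA FAILURE "
    "FOR INTERVAL 60 SECOND;"                        # temporary, self-healing fault
)
```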
NEW QUESTION 6
A company is closing one of its remote data centers. This site runs a 100 TB on-premises data warehouse solution. The company plans to use the AWS Schema Conversion Tool (AWS SCT) and AWS DMS for the migration to AWS. The site network bandwidth is 500 Mbps. A Database Specialist wants to migrate the on-premises data using Amazon S3 as the data lake and Amazon Redshift as the data warehouse. This move must take place during a 2-week period when source systems are shut down for maintenance. The data should stay encrypted at rest and in transit.
Which approach has the least risk and the highest likelihood of a successful data transfer?
- A. Set up a VPN tunnel for encrypting data over the network from the data center to AWS. Leverage AWS SCT and apply the converted schema to Amazon Redshift. Once complete, start an AWS DMS task to move the data from the source to Amazon S3. Use AWS Glue to load the data from Amazon S3 to Amazon Redshift.
- B. Leverage AWS SCT and apply the converted schema to Amazon Redshift. Start an AWS DMS task with two AWS Snowball Edge devices to copy data from on-premises to Amazon S3 with AWS KMS encryption. Use AWS DMS to finish copying data to Amazon Redshift.
- C. Leverage AWS SCT and apply the converted schema to Amazon Redshift. Once complete, use a fleet of 10 TB dedicated encrypted drives using the AWS Import/Export feature to copy data from on-premises to Amazon S3 with AWS KMS encryption. Use AWS Glue to load the data to Amazon Redshift.
- D. Set up a VPN tunnel for encrypting data over the network from the data center to AWS. Leverage a native database export feature to export the data and compress the files. Use the aws s3 cp multipart upload command to upload these files to Amazon S3 with AWS KMS encryption. Once complete, load the data to Amazon Redshift using AWS Glue.
NEW QUESTION 7
A Database Specialist has migrated an on-premises Oracle database to Amazon Aurora PostgreSQL. The schema and the data have been migrated successfully. The on-premises database server was also being used to run database maintenance cron jobs written in Python to perform tasks including data purging and generating data exports. The logs for these jobs show that, most of the time, the jobs completed within 5 minutes, but a few jobs took up to 10 minutes to complete. These maintenance jobs need to be set up for Aurora PostgreSQL.
How can the Database Specialist schedule these jobs so the setup requires minimal maintenance and provides high availability?
- A. Create cron jobs on an Amazon EC2 instance to run the maintenance jobs following the required schedule.
- B. Connect to the Aurora host and create cron jobs to run the maintenance jobs following the requiredschedule.
- C. Create AWS Lambda functions to run the maintenance jobs and schedule them with Amazon CloudWatchEvents.
- D. Create the maintenance job using the Amazon CloudWatch job scheduling plugin.
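A serverless schedule such as the one option C describes boils down to one EventBridge (CloudWatch Events) rule per job. A hedged sketch of the rule's request parameters (rule name and schedule are assumptions):

```python
# Hypothetical rule scheduling the purge job nightly at 02:00 UTC; no AWS
# call is made here, only the request parameters are built.
purge_schedule = {
    "Name": "nightly-data-purge",             # hypothetical rule name
    "ScheduleExpression": "cron(0 2 * * ? *)",
    "State": "ENABLED",
}
# With boto3: events.put_rule(**purge_schedule), then events.put_targets(...)
# pointing at the Lambda function's ARN. Lambda's 15-minute maximum timeout
# comfortably covers jobs that run 5-10 minutes.
```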
NEW QUESTION 8
A Database Specialist is creating a new Amazon Neptune DB cluster, and is attempting to load data from Amazon S3 into the Neptune DB cluster using the Neptune bulk loader API. The Database Specialist receives the following error:
“Unable to connect to s3 endpoint. Provided source = s3://mybucket/graphdata/ and region = us-east-1. Please verify your S3 configuration.”
Which combination of actions should the Database Specialist take to troubleshoot the problem? (Choose two.)
- A. Check that Amazon S3 has an IAM role granting read access to Neptune
- B. Check that an Amazon S3 VPC endpoint exists
- C. Check that a Neptune VPC endpoint exists
- D. Check that Amazon EC2 has an IAM role granting read access to Amazon S3
- E. Check that Neptune has an IAM role granting read access to Amazon S3
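For reference, the bulk loader is driven by an HTTP request whose body names the S3 source, the Region, and an IAM role attached to the Neptune cluster with S3 read access; the loader also reaches S3 through a VPC endpoint. A hedged sketch of that request body, reusing the source and Region from the error message (the role ARN and data format are hypothetical):

```python
import json

# Sketch of the Neptune bulk loader request body; the IAM role ARN and the
# "csv" format are assumptions for illustration.
loader_request = {
    "source": "s3://mybucket/graphdata/",
    "format": "csv",
    "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",  # hypothetical
    "region": "us-east-1",
    "failOnError": "TRUE",
}
payload = json.dumps(loader_request)
# POSTed to the cluster's loader endpoint, e.g. https://<neptune-endpoint>:8182/loader
```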
NEW QUESTION 9
A financial company wants to store sensitive user data in an Amazon Aurora PostgreSQL DB cluster. The database will be accessed by multiple applications across the company. The company has mandated that all communications to the database be encrypted and the server identity must be validated. Any non-SSL-based connections should be disallowed access to the database.
Which solution addresses these requirements?
- A. Set the rds.force_ssl=0 parameter in the DB parameter group. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=allow.
- B. Set the rds.force_ssl=1 parameter in the DB parameter group. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=disable.
- C. Set the rds.force_ssl=0 parameter in the DB parameter group. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=verify-ca.
- D. Set the rds.force_ssl=1 parameter in the DB parameter group. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=verify-full.
NEW QUESTION 10
A manufacturing company’s website uses an Amazon Aurora PostgreSQL DB cluster.
Which configurations will result in the LEAST application downtime during a failover? (Choose three.)
- A. Use the provided read and write Aurora endpoints to establish a connection to the Aurora DB cluster.
- B. Create an Amazon CloudWatch alert triggering a restore in another Availability Zone when the primary Aurora DB cluster is unreachable.
- C. Edit and enable Aurora DB cluster cache management in parameter groups.
- D. Set TCP keepalive parameters to a high value.
- E. Set JDBC connection string timeout variables to a low value.
- F. Set Java DNS caching timeouts to a high value.
NEW QUESTION 11
An online gaming company is planning to launch a new game with Amazon DynamoDB as its data store. The database must be designed to support the following use cases:
Update scores in real time whenever a player is playing the game.
Retrieve a player’s score details for a specific game session.
A Database Specialist decides to implement a DynamoDB table. Each player has a unique user_id and each game has a unique game_id.
Which choice of keys is recommended for the DynamoDB table?
- A. Create a global secondary index with game_id as the partition key
- B. Create a global secondary index with user_id as the partition key
- C. Create a composite primary key with game_id as the partition key and user_id as the sort key
- D. Create a composite primary key with user_id as the partition key and game_id as the sort key
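The two access patterns map directly onto a composite primary key. A sketch of the key schema with `user_id` as the partition key and `game_id` as the sort key (attribute types are assumptions; string IDs):

```python
# Key schema for the stated access patterns: all of a player's sessions live
# in one partition, and the sort key addresses a single game session.
table_keys = {
    "KeySchema": [
        {"AttributeName": "user_id", "KeyType": "HASH"},   # partition key
        {"AttributeName": "game_id", "KeyType": "RANGE"},  # sort key
    ],
    "AttributeDefinitions": [
        {"AttributeName": "user_id", "AttributeType": "S"},
        {"AttributeName": "game_id", "AttributeType": "S"},
    ],
}
# Update/fetch one session's score: GetItem/UpdateItem with both keys;
# list all of a player's sessions: Query on user_id alone.
```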
NEW QUESTION 12
A company is planning to close for several days. A Database Specialist needs to stop all applications along with the DB instances to ensure employees do not have access to the systems during this time. All databases are running on Amazon RDS for MySQL.
The Database Specialist wrote and executed a script to stop all the DB instances. When reviewing the logs, the Database Specialist found that Amazon RDS DB instances with read replicas did not stop.
How should the Database Specialist edit the script to fix this issue?
- A. Stop the source instances before stopping their read replicas
- B. Delete each read replica before stopping its corresponding source instance
- C. Stop the read replicas before stopping their source instances
- D. Use the AWS CLI to stop each read replica and source instance at the same time
NEW QUESTION 13
A company has an on-premises system that tracks various database operations that occur over the lifetime of a database, including database shutdown, deletion, creation, and backup.
The company recently moved two databases to Amazon RDS and is looking at a solution that would satisfy these requirements. The data could be used by other systems within the company.
Which solution will meet these requirements with minimal effort?
- A. Create an Amazon CloudWatch Events rule with the operations that need to be tracked on Amazon RDS. Create an AWS Lambda function to act on these rules and write the output to the tracking systems.
- B. Create an AWS Lambda function to trigger on AWS CloudTrail API calls. Filter on specific RDS API calls and write the output to the tracking systems.
- C. Create RDS event subscriptions. Have the tracking systems subscribe to specific RDS event system notifications.
- D. Write RDS logs to Amazon Kinesis Data Firehose. Create an AWS Lambda function to act on these rules and write the output to the tracking systems.
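An RDS event subscription such as option C describes is a single API call. A hedged sketch of its request parameters (subscription name, SNS topic ARN, and the exact category list are assumptions for illustration):

```python
# Hypothetical request parameters for an RDS event subscription covering
# lifecycle operations; no AWS call is made here.
event_subscription = {
    "SubscriptionName": "rds-lifecycle-tracking",  # hypothetical
    "SnsTopicArn": "arn:aws:sns:us-east-1:123456789012:rds-events",  # hypothetical
    "SourceType": "db-instance",
    # Categories cover creation, deletion, backup, and availability
    # (shutdown/restart) events.
    "EventCategories": ["creation", "deletion", "backup", "availability"],
}
# With boto3: boto3.client("rds").create_event_subscription(**event_subscription)
```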
NEW QUESTION 14
A company is load testing its three-tier production web application deployed with an AWS CloudFormation template on AWS. The Application team is making changes to deploy additional Amazon EC2 and AWS Lambda resources to expand the load testing capacity. A Database Specialist wants to ensure that the changes made by the Application team will not change the Amazon RDS database resources already deployed.
Which combination of steps would allow the Database Specialist to accomplish this? (Choose two.)
- A. Review the stack drift before modifying the template
- B. Create and review a change set before applying it
- C. Export the database resources as stack outputs
- D. Define the database resources in a nested stack
- E. Set a stack policy for the database resources
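A stack policy (option E) is a JSON document attached to the stack that CloudFormation evaluates on every update. A hedged sketch that protects a database resource while allowing other updates (the logical resource ID `ProductionDatabase` is hypothetical):

```python
import json

# Stack policy sketch: allow updates to everything in the stack except the
# database resource, which is denied all update actions.
stack_policy = {
    "Statement": [
        {"Effect": "Allow", "Action": "Update:*", "Principal": "*", "Resource": "*"},
        {
            "Effect": "Deny",
            "Action": "Update:*",
            "Principal": "*",
            "Resource": "LogicalResourceId/ProductionDatabase",  # hypothetical ID
        },
    ]
}
policy_document = json.dumps(stack_policy, indent=2)
# Attached via set-stack-policy; a change set (option B) would additionally
# preview exactly which resources an update touches before it runs.
```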
NEW QUESTION 15
A company wants to migrate its existing on-premises Oracle database to Amazon Aurora PostgreSQL. The migration must be completed with minimal downtime using AWS DMS. A Database Specialist must validate that the data was migrated accurately from the source to the target before the cutover. The migration must have minimal impact on the performance of the source database.
Which approach will MOST effectively meet these requirements?
- A. Use the AWS Schema Conversion Tool (AWS SCT) to convert the source Oracle database schemas to the target Aurora DB cluster. Verify the datatype of the columns.
- B. Use the table metrics of the AWS DMS task created for migrating the data to verify the statistics for the tables being migrated and to verify that the data definition language (DDL) statements are completed.
- C. Enable the AWS Schema Conversion Tool (AWS SCT) premigration validation and review the premigration checklist to make sure there are no issues with the conversion.
- D. Enable AWS DMS data validation on the task so the AWS DMS task compares the source and target records, and reports any mismatches.
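AWS DMS data validation (option D) is switched on in the replication task settings. A hedged sketch of the relevant settings fragment (the thread count is a hypothetical tuning value):

```python
# Task-settings fragment enabling DMS data validation: after the full load,
# DMS compares source and target rows and reports any mismatches.
task_settings = {
    "ValidationSettings": {
        "EnableValidation": True,
        "ThreadCount": 5,  # hypothetical; more threads validate faster
    }
}
# Passed as the ReplicationTaskSettings JSON when creating or modifying the task.
```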
NEW QUESTION 16
A Database Specialist is setting up a new Amazon Aurora DB cluster with one primary instance and three Aurora Replicas for a highly intensive, business-critical application. The Aurora DB cluster has one medium-sized primary instance, one large-sized replica, and two medium-sized replicas. The Database Specialist did not assign a promotion tier to the replicas.
In the event of a primary failure, what will occur?
- A. Aurora will promote an Aurora Replica that is of the same size as the primary instance
- B. Aurora will promote an arbitrary Aurora Replica
- C. Aurora will promote the largest-sized Aurora Replica
- D. Aurora will not promote an Aurora Replica
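Aurora's failover selection can be sketched as a two-level sort: lowest promotion tier wins, and size breaks ties within a tier. The toy function below mirrors that rule; replica names and size ranks are hypothetical, and all replicas share the default tier because none was assigned:

```python
def pick_failover_target(replicas):
    """Pick the replica Aurora promotes: lowest promotion tier first,
    then the largest instance among the tied replicas."""
    return max(replicas, key=lambda r: (-r["tier"], r["size_rank"]))["name"]

# No tier was assigned, so every replica sits in the same (default) tier;
# the size tie-break promotes the large replica.
fleet = [
    {"name": "replica-medium-1", "tier": 1, "size_rank": 1},
    {"name": "replica-large-1", "tier": 1, "size_rank": 2},
    {"name": "replica-medium-2", "tier": 1, "size_rank": 1},
]
```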
NEW QUESTION 17
A Database Specialist modified an existing parameter group currently associated with a production Amazon RDS for SQL Server Multi-AZ DB instance. The change is associated with a static parameter type, which controls the number of user connections allowed on the most critical RDS SQL Server DB instance for the company. This change has been approved for a specific maintenance window to help minimize the impact on users.
How should the Database Specialist apply the parameter group change for the DB instance?
- A. Select the option to apply the change immediately
- B. Allow the preconfigured RDS maintenance window for the given DB instance to control when the change is applied
- C. Apply the change manually by rebooting the DB instance during the approved maintenance window
- D. Reboot the secondary Multi-AZ DB instance
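Because the parameter is static, the parameter-group change is staged with the pending-reboot apply method and only takes effect after a reboot. A hedged sketch of the request parameters (group name and value are hypothetical; "user connections" is the SQL Server setting the scenario describes):

```python
# Stage a static parameter change; it stays "pending-reboot" until the
# instance is rebooted. Group name and value are hypothetical.
param_change = {
    "DBParameterGroupName": "prod-sqlserver-params",
    "Parameters": [{
        "ParameterName": "user connections",
        "ParameterValue": "500",
        "ApplyMethod": "pending-reboot",  # the only valid method for static parameters
    }],
}
# rds.modify_db_parameter_group(**param_change), then, inside the approved
# window: rds.reboot_db_instance(DBInstanceIdentifier=...)
```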
NEW QUESTION 18
A Database Specialist needs to define a database migration strategy to migrate an on-premises Oracle database to an Amazon Aurora MySQL DB cluster. The company requires near-zero downtime for the data migration. The solution must also be cost-effective.
Which approach should the Database Specialist take?
- A. Dump all the tables from the Oracle database into an Amazon S3 bucket using datapump (expdp). Run data transformations in AWS Glue. Load the data from the S3 bucket to the Aurora DB cluster.
- B. Order an AWS Snowball appliance and copy the Oracle backup to the Snowball appliance. Once the Snowball data is delivered to Amazon S3, create a new Aurora DB cluster. Enable the S3 integration to migrate the data directly from Amazon S3 to Amazon RDS.
- C. Use the AWS Schema Conversion Tool (AWS SCT) to help rewrite database objects to MySQL during the schema migration. Use AWS DMS to perform the full load and change data capture (CDC) tasks.
- D. Use AWS Server Migration Service (AWS SMS) to import the Oracle virtual machine image as an Amazon EC2 instance. Use the Oracle Logical Dump utility to migrate the Oracle data from Amazon EC2 to an Aurora DB cluster.
NEW QUESTION 19
A Database Specialist is creating Amazon DynamoDB tables, Amazon CloudWatch alarms, and associated infrastructure for an Application team using a development AWS account. The team wants a deployment method that will standardize the core solution components while managing environment-specific settings separately, and wants to minimize rework due to configuration errors.
Which process should the Database Specialist recommend to meet these requirements?
- A. Organize common and environment-specific parameters hierarchically in the AWS Systems Manager Parameter Store, then reference the parameters dynamically from an AWS CloudFormation template. Deploy the CloudFormation stack using the environment name as a parameter.
- B. Create a parameterized AWS CloudFormation template that builds the required objects. Keep separate environment parameter files in separate Amazon S3 buckets. Provide an AWS CLI command that deploys the CloudFormation stack directly referencing the appropriate parameter bucket.
- C. Create a parameterized AWS CloudFormation template that builds the required objects. Import the template into the CloudFormation interface in the AWS Management Console. Make the required changes to the parameters and deploy the CloudFormation stack.
- D. Create an AWS Lambda function that builds the required objects using an AWS SDK. Set the required parameter values in a test event in the Lambda console for each environment that the Application team can modify, as needed. Deploy the infrastructure by triggering the test event in the console.
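One way CloudFormation can reference Parameter Store values, as option A describes, is an SSM-backed template parameter: the template stays identical across environments and only the parameter path changes at deploy time. A hedged sketch (the `/myapp/...` paths are hypothetical):

```python
import json

# Minimal template fragment: the parameter's VALUE is fetched from Systems
# Manager Parameter Store at deploy time; the path encodes the environment.
template = {
    "Parameters": {
        "TableName": {
            "Type": "AWS::SSM::Parameter::Value<String>",
            "Default": "/myapp/dev/dynamodb/table-name",  # hypothetical path
        }
    }
}
template_body = json.dumps(template)
# Deploying to prod overrides only the path, e.g.
#   aws cloudformation deploy ... --parameter-overrides TableName=/myapp/prod/dynamodb/table-name
```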
NEW QUESTION 20
A company maintains several databases using Amazon RDS for MySQL and PostgreSQL. Each RDS database generates log files with retention periods set to their default values. The company has now mandated that database logs be maintained for up to 90 days in a centralized repository to facilitate real-time and after-the-fact analyses.
What should a Database Specialist do to meet these requirements with minimal effort?
- A. Create an AWS Lambda function to pull logs from the RDS databases and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.
- B. Modify the RDS databases to publish logs to Amazon CloudWatch Logs. Change the log retention policy for each log group to expire the events after 90 days.
- C. Write a stored procedure in each RDS database to download the logs and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.
- D. Create an AWS Lambda function to download the logs from the RDS databases and publish the logs to Amazon CloudWatch Logs. Change the log retention policy for the log group to expire the events after 90 days.
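The two steps option B describes are each a single API call. A hedged sketch of both request-parameter sets (instance identifier and exported log types are hypothetical; no AWS calls are made):

```python
# Step 1: publish engine logs to CloudWatch Logs.
publish_logs = {
    "DBInstanceIdentifier": "prod-mysql-01",  # hypothetical
    "CloudwatchLogsExportConfiguration": {"EnableLogTypes": ["error", "slowquery"]},
}
# Step 2: cap the log group's retention at 90 days (90 is one of the allowed
# CloudWatch Logs retention values).
retention_policy = {
    "logGroupName": "/aws/rds/instance/prod-mysql-01/error",
    "retentionInDays": 90,
}
# rds.modify_db_instance(**publish_logs)
# logs.put_retention_policy(**retention_policy)
```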