What Practical CCDAK Answers Is
It is faster and easier to pass the Confluent CCDAK exam by using top-quality Confluent Certified Developer for Apache Kafka certification exam questions and answers. Get immediate access to the up-to-date CCDAK exam, find the same core-area CCDAK questions with professionally verified answers, and then pass your exam with a high score.
Free CCDAK dumps questions are also included for you:
NEW QUESTION 1
You are doing complex calculations using a machine learning framework on records fetched from a Kafka topic. It takes about 6 minutes to process a record batch, and the consumer group enters a rebalance even though the consumer is still running. How can you improve this scenario?
- A. Increase max.poll.interval.ms to 600000
- B. Increase heartbeat.interval.ms to 600000
- C. Increase session.timeout.ms to 600000
- D. Add consumers to the consumer group and kill them right away
Answer: A
Explanation:
Here, we need to double the setting max.poll.interval.ms (default 300000) in order to tell Kafka that a consumer should be considered dead only if it hasn't called the .poll() method in 10 minutes instead of 5; see the sketch below.
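A minimal sketch of the relevant consumer configuration, assuming a Java client; the group id is a placeholder:

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SlowBatchConsumer {
    static KafkaConsumer<String, String> build() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "ml-scoring-group"); // hypothetical group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        // allow up to 10 minutes between poll() calls before the consumer
        // is considered dead and a rebalance is triggered
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 600000);
        return new KafkaConsumer<>(props);
    }
}
```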
NEW QUESTION 2
How will you find out all the partitions where one or more of the replicas for the partition are not in-sync with the leader?
- A. kafka-topics.sh --bootstrap-server localhost:9092 --describe --unavailable-partitions
- B. kafka-topics.sh --zookeeper localhost:2181 --describe --unavailable-partitions
- C. kafka-topics.sh --broker-list localhost:9092 --describe --under-replicated-partitions
- D. kafka-topics.sh --zookeeper localhost:2181 --describe --under-replicated-partitions
Answer: D
NEW QUESTION 3
In Avro, removing or adding a field that has a default is a ____ schema evolution
- A. full
- B. backward
- C. breaking
- D. forward
Answer: A
Explanation:
Clients with the new schema will be able to read records saved with the old schema, and clients with the old schema will be able to read records saved with the new schema; hence the evolution is fully compatible.
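A minimal sketch of such an evolution, assuming a hypothetical Customer record; the added age field carries a default, which is what makes the change fully compatible:

```java
import org.apache.avro.Schema;

public class AvroFullCompatDemo {
    public static void main(String[] args) {
        // v1: the original schema
        Schema v1 = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"Customer\",\"fields\":[" +
                "{\"name\":\"name\",\"type\":\"string\"}]}");
        // v2: adds a field WITH a default -- old readers ignore it (forward
        // compatible), new readers fall back to the default when reading old
        // records (backward compatible), hence FULL compatibility
        Schema v2 = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"Customer\",\"fields\":[" +
                "{\"name\":\"name\",\"type\":\"string\"}," +
                "{\"name\":\"age\",\"type\":\"int\",\"default\":-1}]}");
        System.out.println(v1.getFullName() + " evolved to fields: " + v2.getFields());
    }
}
```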
NEW QUESTION 4
A Kafka topic has a replication factor of 3 and a min.insync.replicas setting of 2. How many brokers can go down before a producer with acks=1 can't produce?
- A. 3
- B. 1
- C. 2
Answer: C
Explanation:
min.insync.replicas does not impact producers when acks=1 (only when acks=all). With acks=1 only the leader must acknowledge, so with a replication factor of 3, two brokers can go down and the producer can still write to the remaining replica once it becomes leader.
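A minimal sketch of a producer configured this way, assuming a Java client:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;

public class AcksOneProducer {
    static KafkaProducer<String, String> build() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // acks=1: only the partition leader must acknowledge each write,
        // so min.insync.replicas is never checked for this producer
        props.put(ProducerConfig.ACKS_CONFIG, "1");
        return new KafkaProducer<>(props);
    }
}
```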
NEW QUESTION 5
Once sent to a topic, a message can be modified
- A. No
- B. Yes
Answer: A
Explanation:
Kafka logs are append-only and the data is immutable
NEW QUESTION 6
What is the protocol used by Kafka clients to securely connect to the Confluent REST Proxy?
- A. Kerberos
- B. SASL
- C. HTTPS (SSL/TLS)
- D. HTTP
Answer: C
Explanation:
The protocol is TLS, although it is still commonly referred to as SSL.
NEW QUESTION 7
Select all that apply (select THREE)
- A. min.insync.replicas is a producer setting
- B. acks is a topic setting
- C. acks is a producer setting
- D. min.insync.replicas is a topic setting
- E. min.insync.replicas matters regardless of the values of acks
- F. min.insync.replicas only matters if acks=all
Answer: CDF
Explanation:
acks is a producer setting. min.insync.replicas is a topic or broker setting, and it is only effective when acks=all; see the sketch below.
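A minimal sketch showing where each setting lives, assuming a Java AdminClient; the topic name is a placeholder:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class MinIsrTopic {
    public static void main(String[] args) throws Exception {
        Properties conf = new Properties();
        conf.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(conf)) {
            // min.insync.replicas is set per topic (or broker-wide); it is only
            // enforced against producers that send with acks=all
            NewTopic topic = new NewTopic("orders", 3, (short) 3) // hypothetical topic
                    .configs(Collections.singletonMap("min.insync.replicas", "2"));
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}
```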
NEW QUESTION 8
How does a consumer commit offsets in Kafka?
- A. It directly sends a message to the __consumer_offsets topic
- B. It interacts with the Group Coordinator broker
- C. It directly commits the offsets in Zookeeper
Answer: B
Explanation:
Consumers do not write directly to the __consumer_offsets topic; instead, they interact with the broker that has been elected to manage that topic for their group, which is the Group Coordinator broker.
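A minimal sketch of the commit path, assuming a Java consumer that is already configured and subscribed:

```java
import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class CommitLoop {
    static void pollAndCommit(KafkaConsumer<String, String> consumer) {
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
            // ... process records ...
            // commitSync() sends an OffsetCommit request to the Group Coordinator
            // broker, which persists it to __consumer_offsets on the consumer's behalf
            consumer.commitSync();
        }
    }
}
```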
NEW QUESTION 9
The kafka-console-consumer CLI, when used with the default options
- A. uses a random group id
- B. always uses the same group id
- C. does not use a group id
Answer: A
Explanation:
If a group id is not specified, kafka-console-consumer generates a random consumer group id.
NEW QUESTION 10
What is true about partitions? (select two)
- A. A broker can have a partition and its replica on its disk
- B. You cannot have more partitions than the number of brokers in your cluster
- C. A broker can have multiple partitions of the same topic on its disk
- D. Only out-of-sync replicas are replicas; the remaining in-sync partitions are all leaders
- E. A partition has one replica that is a leader, while the other replicas are followers
Answer: CE
Explanation:
Only one of the replicas is elected as the partition leader, while the others are followers. And a broker can definitely hold many partitions of the same topic on its disk: try creating a topic with 12 partitions on a single broker, as in the sketch below!
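A minimal sketch of that experiment, assuming a single-broker cluster and a recent Java AdminClient; the topic name is a placeholder:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.admin.TopicDescription;

public class PartitionLayoutDemo {
    public static void main(String[] args) throws Exception {
        Properties conf = new Properties();
        conf.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(conf)) {
            // 12 partitions, replication factor 1: all 12 land on the single broker
            admin.createTopics(Collections.singletonList(
                    new NewTopic("many-partitions-demo", 12, (short) 1))).all().get();
            TopicDescription desc = admin.describeTopics(
                            Collections.singletonList("many-partitions-demo"))
                    .allTopicNames().get().get("many-partitions-demo");
            // each partition reports exactly one leader; any other replicas are followers
            desc.partitions().forEach(p ->
                    System.out.println("partition " + p.partition() + " leader=" + p.leader()));
        }
    }
}
```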
NEW QUESTION 11
A consumer starts and has auto.offset.reset=none, and the topic partition currently has data for offsets going from 45 to 2311. The consumer group has committed the offset 10 for the topic before. Where will the consumer read from?
- A. offset 45
- B. offset 10
- C. it will crash
- D. offset 2311
Answer: C
Explanation:
auto.offset.reset=none means that the consumer will crash if the offsets it is recovering from have been deleted from Kafka, which is the case here, as the committed offset 10 is below the earliest available offset 45.
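A minimal sketch of the configuration in question, assuming a Java client; the group id is a placeholder:

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class StrictOffsetResetConsumer {
    static KafkaConsumer<String, String> build() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "audit-group"); // hypothetical group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        // "none": if the committed offset (10) is no longer in range (log starts
        // at 45), poll() throws OffsetOutOfRangeException instead of silently resetting
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "none");
        return new KafkaConsumer<>(props);
    }
}
```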
NEW QUESTION 12
When using the Confluent Kafka Distribution, where does the schema registry reside?
- A. As a separate JVM component
- B. As an in-memory plugin on your Zookeeper cluster
- C. As an in-memory plugin on your Kafka Brokers
- D. As an in-memory plugin on your Kafka Connect Workers
Answer: A
Explanation:
Schema Registry is a separate application, running in its own JVM, that provides a RESTful interface for storing and retrieving Avro schemas.
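A minimal sketch of talking to that RESTful interface, assuming a registry on its default port 8081; /subjects is the endpoint that lists the registered schema subjects:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ListSubjects {
    public static void main(String[] args) throws Exception {
        // Schema Registry is its own process, reached over plain HTTP(S),
        // not a plugin inside the brokers or Zookeeper
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8081/subjects"))
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // e.g. ["orders-value"]
    }
}
```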
NEW QUESTION 13
An ecommerce website maintains two topics: a high-volume "purchase" topic with 5 partitions and a low-volume "customer" topic with 3 partitions. You would like to do a stream-table join of these topics. How should you proceed?
- A. Repartition the purchase topic to have 3 partitions
- B. Repartition customer topic to have 5 partitions
- C. Model customer as a GlobalKTable
- D. Do a KStream / KTable join after a repartition step
Answer: C
Explanation:
In the case of a KStream-KTable join, both topics need to be co-partitioned (i.e. have the same number of partitions). This restriction does not apply to a join with a GlobalKTable, which is the most efficient option here; see the sketch below.
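A minimal sketch of the GlobalKTable approach, assuming Kafka Streams with String serdes; the join logic and output topic are placeholders:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

public class PurchaseEnrichmentTopology {
    static StreamsBuilder build() {
        StreamsBuilder builder = new StreamsBuilder();
        // the 5-partition, high-volume stream
        KStream<String, String> purchases =
                builder.stream("purchase", Consumed.with(Serdes.String(), Serdes.String()));
        // the 3-partition topic read as a GlobalKTable: fully replicated to every
        // Streams instance, so no co-partitioning (and no repartition step) is needed
        GlobalKTable<String, String> customers =
                builder.globalTable("customer", Consumed.with(Serdes.String(), Serdes.String()));
        purchases.join(customers,
                        (purchaseKey, purchaseValue) -> purchaseKey, // map each record to a table key
                        (purchaseValue, customerValue) -> purchaseValue + " | " + customerValue)
                .to("enriched-purchases", Produced.with(Serdes.String(), Serdes.String()));
        return builder;
    }
}
```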
NEW QUESTION 14
How do Kafka brokers ensure great performance between the producers and consumers? (select two)
- A. It compresses the messages as it writes to the disk
- B. It leverages zero-copy optimisations to send data straight from the page-cache
- C. It buffers the messages on disk, and sends messages from the disk reads
- D. It transforms the messages into a binary format
- E. It does not transform the messages
Answer: BE
Explanation:
Kafka transfers data with zero-copy and sends the raw bytes it receives from the producer straight to the consumer, leveraging the RAM available as page cache.
NEW QUESTION 15
If I produce to a topic that does not exist, and the broker setting auto.create.topics.enable is true, what will happen?
- A. Kafka will automatically create the topic with 1 partition and 1 replication factor
- B. Kafka will automatically create the topic with the indicated producer settings num.partitions and default.replication.factor
- C. Kafka will automatically create the topic with the broker settings num.partitions and default.replication.factor
- D. Kafka will automatically create the topic with num.partitions=#of brokers and replication.factor=3
Answer: C
Explanation:
The broker settings num.partitions and default.replication.factor come into play when a topic is auto-created.
NEW QUESTION 16
A consumer application is using KafkaAvroDeserializer to deserialize Avro messages. What happens if the message schema is not present in the deserializer's local cache?
- A. Throws SerializationException
- B. Fails silently
- C. Throws DeserializationException
- D. Fetches schema from Schema Registry
Answer: D
Explanation:
First, the local cache is checked for the message schema. In case of a cache miss, the schema is pulled from the Schema Registry. An exception will be thrown if the Schema Registry does not have the schema (which should never happen if you set it up properly); see the sketch below.
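A minimal sketch of such a consumer's configuration, assuming the Confluent Avro deserializer is on the classpath; the group id and registry URL are placeholders:

```java
import java.util.Properties;
import io.confluent.kafka.serializers.KafkaAvroDeserializer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AvroConsumerConfig {
    static KafkaConsumer<String, Object> build() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "avro-consumers"); // hypothetical group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                KafkaAvroDeserializer.class.getName());
        // on a local cache miss, the deserializer fetches the writer schema
        // (by the schema id embedded in each message) from this registry URL
        props.put("schema.registry.url", "http://localhost:8081");
        return new KafkaConsumer<>(props);
    }
}
```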
NEW QUESTION 17
If I want to send binary data through the REST Proxy, it needs to be base64-encoded. Which component needs to encode the binary data into base64?
- A. The Producer
- B. The Kafka Broker
- C. Zookeeper
- D. The REST Proxy
Answer: A
Explanation:
The REST Proxy requires the data it receives over REST to already be base64-encoded; hence, encoding is the responsibility of the producer, as in the sketch below.
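A minimal sketch of the producer-side encoding, assuming a REST Proxy on its default port 8082 and the v2 binary content type; the topic name is a placeholder:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class RestProxyBinaryProduce {
    public static void main(String[] args) throws Exception {
        byte[] payload = {0x01, 0x02, 0x03};                          // arbitrary binary data
        String encoded = Base64.getEncoder().encodeToString(payload); // the producer-side step
        String body = "{\"records\":[{\"value\":\"" + encoded + "\"}]}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8082/topics/my-binary-topic")) // hypothetical topic
                .header("Content-Type", "application/vnd.kafka.binary.v2+json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```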
NEW QUESTION 18
......
Thanks for reading the newest CCDAK exam dumps! We recommend you try the PREMIUM Dumps-files.com CCDAK dumps in VCE and PDF here: https://www.dumps-files.com/files/CCDAK/ (150 Q&As Dumps)