Details
- Type: Bug
- Resolution: Not a Bug
- Priority: Major
- Affects Version: 0.9.0.Beta1
Description
With:
transforms = [InsertTopic, InsertSourceDetails]
transforms.InsertSourceDetails.offset.field = null
transforms.InsertSourceDetails.partition.field = null
transforms.InsertSourceDetails.static.field = messagesource
transforms.InsertSourceDetails.static.value = Debezium CDC from Oracle on asgard
transforms.InsertSourceDetails.timestamp.field = null
transforms.InsertSourceDetails.topic.field = null
transforms.InsertSourceDetails.type = class org.apache.kafka.connect.transforms.InsertField$Value
transforms.InsertTopic.offset.field = null
transforms.InsertTopic.partition.field = null
transforms.InsertTopic.static.field = null
transforms.InsertTopic.static.value = null
transforms.InsertTopic.timestamp.field = null
transforms.InsertTopic.topic.field = messagetopic
transforms.InsertTopic.type = class org.apache.kafka.connect.transforms.InsertField$Value
The Debezium Oracle connector failed with:
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
    at org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:44)
    at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:292)
    at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:228)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.connect.errors.DataException: Only Map objects supported in absence of schema for [field insertion], found: null
    at org.apache.kafka.connect.transforms.util.Requirements.requireMap(Requirements.java:38)
    at org.apache.kafka.connect.transforms.InsertField.applySchemaless(InsertField.java:138)
    at org.apache.kafka.connect.transforms.InsertField.apply(InsertField.java:131)
    at org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:44)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
    ... 11 more
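The Caused by line shows that InsertField received a record with neither a value nor a value schema: after a DELETE, Debezium (by default) emits a follow-up tombstone record whose value and value schema are both null, so InsertField takes its schemaless path and requireMap() rejects the null value. A minimal sketch of that check, in Python for illustration only (the actual Connect code is Java; names are simplified from the classes in the trace):

```python
# Illustrative sketch of why InsertField$Value fails on a Debezium tombstone.
# When the value schema is null, InsertField.apply() dispatches to the
# schemaless path, where Requirements.requireMap() rejects a null value.

class DataException(Exception):
    """Stands in for org.apache.kafka.connect.errors.DataException."""

def require_map(value, purpose):
    # Mirrors Requirements.requireMap: a schemaless value must be a map.
    if not isinstance(value, dict):
        found = "null" if value is None else type(value).__name__
        raise DataException(
            f"Only Map objects supported in absence of schema for [{purpose}], found: {found}"
        )
    return value

def insert_static_field(value_schema, value, field, static_value):
    # Mirrors InsertField.apply: null schema -> schemaless path.
    if value_schema is None:
        updated = dict(require_map(value, "field insertion"))  # raises for tombstones
        updated[field] = static_value
        return updated
    raise NotImplementedError("schema-aware path omitted in this sketch")

# A normal schemaless record is fine:
print(insert_static_field(None, {"id": 42}, "messagesource", "Debezium"))

# A tombstone (value is None, schema is None) reproduces the error:
try:
    insert_static_field(None, None, "messagesource", "Debezium")
except DataException as e:
    print("DataException:", e)
```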
Full config:
{
  "name": "ora-source-debezium-xstream",
  "config": {
    "connector.class": "io.debezium.connector.oracle.OracleConnector",
    "database.server.name": "asgard",
    "database.hostname": "oracle",
    "database.port": "1521",
    "database.user": "c##xstrm",
    "database.password": "xs",
    "database.dbname": "ORCLCDB",
    "database.pdb.name": "ORCLPDB1",
    "database.out.server.name": "dbzxout_new",
    "database.history.kafka.bootstrap.servers": "kafka:29092",
    "database.history.kafka.topic": "schema-changes.inventory",
    "include.schema.changes": "true",
    "key.converter": "io.confluent.connect.avro.AvroConverter",
    "key.converter.schema.registry.url": "http://schema-registry:8081",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "http://schema-registry:8081",
    "transforms": "InsertTopic,InsertSourceDetails",
    "transforms.InsertTopic.type": "org.apache.kafka.connect.transforms.InsertField$Value",
    "transforms.InsertTopic.topic.field": "messagetopic",
    "transforms.InsertSourceDetails.type": "org.apache.kafka.connect.transforms.InsertField$Value",
    "transforms.InsertSourceDetails.static.field": "messagesource",
    "transforms.InsertSourceDetails.static.value": "Debezium CDC from Oracle on asgard"
  }
}
This occurred after a DELETE event was sent from the database, so maybe the connector isn't happy passing delete events through InsertField? Even though the delete record itself still carries a payload (its before and source fields).
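If the failing records are the tombstones Debezium emits after each delete (rather than the delete events themselves, which do carry a payload), two mitigations are commonly suggested; neither is from this report, so treat them as assumptions to verify against your versions. Debezium connectors expose a tombstones.on.delete option that can be set to "false" to suppress tombstone emission entirely, and on Kafka Connect 2.6+ the SMTs can be guarded with a RecordIsTombstone predicate so they skip null-valued records. A sketch of the predicate approach, as extra keys in the connector "config" block:

```json
{
  "predicates": "isTombstone",
  "predicates.isTombstone.type": "org.apache.kafka.connect.transforms.predicates.RecordIsTombstone",
  "transforms.InsertTopic.predicate": "isTombstone",
  "transforms.InsertTopic.negate": "true",
  "transforms.InsertSourceDetails.predicate": "isTombstone",
  "transforms.InsertSourceDetails.negate": "true"
}
```

With negate set to "true", each transform applies only to records that are not tombstones. Note that predicates did not exist in the Kafka Connect versions contemporary with 0.9.0.Beta1, so the tombstones.on.delete route may be the only option on older deployments.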