- Bug
- Resolution: Cannot Reproduce
- Major
- 1.3.1.Final
- None
- False
- False
- Undefined
-
I have browsed a number of suggestions in the chat room and looked at the related issue https://issues.redhat.com/browse/DBZ-683.
My db history topic has always had infinite retention configured. I have also confirmed with the application team that no recent DDL changes have happened on the table $foo in question, so this could not have been caused by an actual schema change.
Here is my error:
org.apache.kafka.connect.errors.ConnectException: Encountered change event for table $foo whose schema isn't known to this connector
    at io.debezium.connector.mysql.AbstractReader.wrap(AbstractReader.java:230)
    at io.debezium.connector.mysql.AbstractReader.failed(AbstractReader.java:207)
    at io.debezium.connector.mysql.BinlogReader.handleEvent(BinlogReader.java:604)
    at com.github.shyiko.mysql.binlog.BinaryLogClient.notifyEventListeners(BinaryLogClient.java:1100)
    at com.github.shyiko.mysql.binlog.BinaryLogClient.listenForEventPackets(BinaryLogClient.java:951)
    at com.github.shyiko.mysql.binlog.BinaryLogClient.connect(BinaryLogClient.java:594)
    at com.github.shyiko.mysql.binlog.BinaryLogClient$7.run(BinaryLogClient.java:838)
    at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.kafka.connect.errors.ConnectException: Encountered change event for table $foo whose schema isn't known to this connector
    at io.debezium.connector.mysql.BinlogReader.informAboutUnknownTableIfRequired(BinlogReader.java:872)
    at io.debezium.connector.mysql.BinlogReader.handleUpdateTableMetadata(BinlogReader.java:846)
    at io.debezium.connector.mysql.BinlogReader.handleEvent(BinlogReader.java:587)
    ... 5 more
I have already performed a schema-only recovery, including deleting and recreating the db history topic, which got data flowing to the topic in question again. But then the exact same issue recurred.
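For context, the recovery was run with a configuration roughly like the sketch below (the server name, bootstrap servers, and topic name are placeholders, not the production values; the property names are the documented Debezium 1.x MySQL connector options):

```properties
# Recover the schema history by re-reading DDL from the database,
# rather than from the (deleted and recreated) history topic.
connector.class=io.debezium.connector.mysql.MySqlConnector
snapshot.mode=schema_only_recovery
# Placeholder names below; substitute the real values.
database.server.name=dbserver1
database.history.kafka.bootstrap.servers=kafka:9092
database.history.kafka.topic=dbhistory.dbserver1
```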
There is some uniqueness to how I encountered the error: I was adding two tables to the connector using the option snapshot.new.tables = parallel. The snapshot of the second table then failed with max.request.size being exceeded.
However, upon restarting the connector after that failure, I see the above error about an unrelated table (neither of the two I added).
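The relevant additions looked roughly like this (a sketch: the table names are placeholders for the two tables that were added, and the size value is illustrative; producer.override.* settings require connector.client.config.override.policy=All on the Connect worker, available since Kafka Connect 2.3):

```properties
# Snapshot newly added tables in parallel with ongoing binlog streaming.
snapshot.new.tables=parallel
# Placeholder names for the two tables that were added.
table.include.list=inventory.table_a,inventory.table_b

# One way to address the max.request.size failure: raise the limit on the
# Connect producer used by this connector (worker must allow overrides).
producer.override.max.request.size=4194304
```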
How can I debug this further? This is a bit of a showstopper for us, as I can't get the connector working again. Thanks!