Using BOOTSTRAP_SERVERS=REPLACEME:9092,REPLACEME:9092
Plugins are loaded from /kafka/connect
Using the following environment variables:
  GROUP_ID=1
  CONFIG_STORAGE_TOPIC=debezium_producer_mysql_configs
  OFFSET_STORAGE_TOPIC=debezium_producer_mysql_offsets
  STATUS_STORAGE_TOPIC=debezium_producer_mysql_statuses
  BOOTSTRAP_SERVERS=REPLACEME:9092,REPLACEME:9092
  REST_HOST_NAME=10.1.11.60
  REST_PORT=8083
  ADVERTISED_HOST_NAME=10.1.11.60
  ADVERTISED_PORT=8083
  KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter
  VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter
  OFFSET_FLUSH_INTERVAL_MS=60000
  OFFSET_FLUSH_TIMEOUT_MS=5000
  SHUTDOWN_TIMEOUT=10000
--- Setting property from CONNECT_REST_ADVERTISED_PORT: rest.advertised.port=8083
--- Setting property from CONNECT_OFFSET_STORAGE_TOPIC: offset.storage.topic=debezium_producer_mysql_offsets
--- Setting property from CONNECT_KEY_CONVERTER: key.converter=org.apache.kafka.connect.json.JsonConverter
--- Setting property from CONNECT_CONFIG_STORAGE_TOPIC: config.storage.topic=debezium_producer_mysql_configs
--- Setting property from CONNECT_GROUP_ID: group.id=1
--- Setting property from CONNECT_REST_ADVERTISED_HOST_NAME: rest.advertised.host.name=10.1.11.60
--- Setting property from CONNECT_REST_HOST_NAME: rest.host.name=10.1.11.60
--- Setting property from CONNECT_VALUE_CONVERTER: value.converter=org.apache.kafka.connect.json.JsonConverter
--- Setting property from CONNECT_REST_PORT: rest.port=8083
--- Setting property from CONNECT_STATUS_STORAGE_TOPIC: status.storage.topic=debezium_producer_mysql_statuses
--- Setting property from CONNECT_OFFSET_FLUSH_TIMEOUT_MS: offset.flush.timeout.ms=5000
--- Setting property from CONNECT_PLUGIN_PATH: plugin.path=/kafka/connect
--- Setting property from CONNECT_OFFSET_FLUSH_INTERVAL_MS: offset.flush.interval.ms=60000
--- Setting property from CONNECT_BOOTSTRAP_SERVERS: bootstrap.servers=REPLACEME:9092,REPLACEME:9092
--- Setting property from CONNECT_TASK_SHUTDOWN_GRACEFUL_TIMEOUT_MS: task.shutdown.graceful.timeout.ms=10000
2024-04-23 13:10:07,929 INFO || Kafka Connect worker initializing ... [org.apache.kafka.connect.cli.AbstractConnectCli]
2024-04-23 13:10:07,933 INFO || WorkerInfo values:
 jvm.args = -Xms256M, -Xmx2G, -XX:+UseG1GC, -XX:MaxGCPauseMillis=20, -XX:InitiatingHeapOccupancyPercent=35, -XX:+ExplicitGCInvokesConcurrent, -XX:MaxInlineLevel=15, -Djava.awt.headless=true, -Dcom.sun.management.jmxremote, -Dcom.sun.management.jmxremote.authenticate=false, -Dcom.sun.management.jmxremote.ssl=false, -Dkafka.logs.dir=/kafka/logs, -Dlog4j.configuration=file:/kafka/config/log4j.properties
 jvm.spec = Red Hat, Inc., OpenJDK 64-Bit Server VM, 11.0.20, 11.0.20+8
 jvm.classpath = /kafka/libs/activation-1.1.1.jar:/kafka/libs/aopalliance-repackaged-2.6.1.jar:/kafka/libs/argparse4j-0.7.0.jar:/kafka/libs/audience-annotations-0.12.0.jar:/kafka/libs/caffeine-2.9.3.jar:/kafka/libs/checker-qual-3.19.0.jar:/kafka/libs/commons-beanutils-1.9.4.jar:/kafka/libs/commons-cli-1.4.jar:/kafka/libs/commons-collections-3.2.2.jar:/kafka/libs/commons-digester-2.1.jar:/kafka/libs/commons-io-2.11.0.jar:/kafka/libs/commons-lang3-3.8.1.jar:/kafka/libs/commons-logging-1.2.jar:/kafka/libs/commons-validator-1.7.jar:/kafka/libs/connect-api-3.6.1.jar:/kafka/libs/connect-basic-auth-extension-3.6.1.jar:/kafka/libs/connect-json-3.6.1.jar:/kafka/libs/connect-mirror-3.6.1.jar:/kafka/libs/connect-mirror-client-3.6.1.jar:/kafka/libs/connect-runtime-3.6.1.jar:/kafka/libs/connect-transforms-3.6.1.jar:/kafka/libs/error_prone_annotations-2.10.0.jar:/kafka/libs/hk2-api-2.6.1.jar:/kafka/libs/hk2-locator-2.6.1.jar:/kafka/libs/hk2-utils-2.6.1.jar:/kafka/libs/jackson-annotations-2.13.5.jar:/kafka/libs/jackson-core-2.13.5.jar:/kafka/libs/jackson-databind-2.13.5.jar:/kafka/libs/jackson-dataformat-csv-2.13.5.jar:/kafka/libs/jackson-datatype-jdk8-2.13.5.jar:/kafka/libs/jackson-jaxrs-base-2.13.5.jar:/kafka/libs/jackson-jaxrs-json-provider-2.13.5.jar:/kafka/libs/jackson-module-jaxb-annotations-2.13.5.jar:/kafka/libs/jackson-module-scala_2.13-2.13.5.jar:/kafka/libs/jakarta.activation-api-1.2.2.jar:/kafka/libs/jakarta.annotation-api-1.3.5.jar:/kafka/libs/jakarta.inject-2.6.1.jar:/kafka/libs/jakarta.validation-api-2.0.2.jar:/kafka/libs/jakarta.ws.rs-api-2.1.6.jar:/kafka/libs/jakarta.xml.bind-api-2.3.3.jar:/kafka/libs/javassist-3.29.2-GA.jar:/kafka/libs/javax.activation-api-1.2.0.jar:/kafka/libs/javax.annotation-api-1.3.2.jar:/kafka/libs/javax.servlet-api-3.1.0.jar:/kafka/libs/javax.ws.rs-api-2.1.1.jar:/kafka/libs/jaxb-api-2.3.1.jar:/kafka/libs/jersey-client-2.39.1.jar:/kafka/libs/jersey-common-2.39.1.jar:/kafka/libs/jersey-container-servlet-2.39.1.jar:/kafka/libs/jersey-container-servlet-core-2.39.1.jar:/kafka/libs/jersey-hk2-2.39.1.jar:/kafka/libs/jersey-server-2.39.1.jar:/kafka/libs/jetty-client-9.4.52.v20230823.jar:/kafka/libs/jetty-continuation-9.4.52.v20230823.jar:/kafka/libs/jetty-http-9.4.52.v20230823.jar:/kafka/libs/jetty-io-9.4.52.v20230823.jar:/kafka/libs/jetty-security-9.4.52.v20230823.jar:/kafka/libs/jetty-server-9.4.52.v20230823.jar:/kafka/libs/jetty-servlet-9.4.52.v20230823.jar:/kafka/libs/jetty-servlets-9.4.52.v20230823.jar:/kafka/libs/jetty-util-9.4.52.v20230823.jar:/kafka/libs/jetty-util-ajax-9.4.52.v20230823.jar:/kafka/libs/jline-3.22.0.jar:/kafka/libs/jolokia-jvm-1.7.2.jar:/kafka/libs/jopt-simple-5.0.4.jar:/kafka/libs/jose4j-0.9.3.jar:/kafka/libs/jsr305-3.0.2.jar:/kafka/libs/kafka-clients-3.6.1.jar:/kafka/libs/kafka-group-coordinator-3.6.1.jar:/kafka/libs/kafka-log4j-appender-3.6.1.jar:/kafka/libs/kafka-metadata-3.6.1.jar:/kafka/libs/kafka-raft-3.6.1.jar:/kafka/libs/kafka-server-common-3.6.1.jar:/kafka/libs/kafka-shell-3.6.1.jar:/kafka/libs/kafka-storage-3.6.1.jar:/kafka/libs/kafka-storage-api-3.6.1.jar:/kafka/libs/kafka-streams-3.6.1.jar:/kafka/libs/kafka-streams-examples-3.6.1.jar:/kafka/libs/kafka-streams-scala_2.13-3.6.1.jar:/kafka/libs/kafka-streams-test-utils-3.6.1.jar:/kafka/libs/kafka-tools-3.6.1.jar:/kafka/libs/kafka-tools-api-3.6.1.jar:/kafka/libs/kafka_2.13-3.6.1.jar:/kafka/libs/lz4-java-1.8.0.jar:/kafka/libs/maven-artifact-3.8.8.jar:/kafka/libs/metrics-core-2.2.0.jar:/kafka/libs/metrics-core-4.1.12.1.jar:/kafka/libs/netty-buffer-4.1.100.Final.jar:/kafka/libs/netty-codec-4.1.100.Final.jar:/kafka/libs/netty-common-4.1.100.Final.jar:/kafka/libs/netty-handler-4.1.100.Final.jar:/kafka/libs/netty-resolver-4.1.100.Final.jar:/kafka/libs/netty-transport-4.1.100.Final.jar:/kafka/libs/netty-transport-classes-epoll-4.1.100.Final.jar:/kafka/libs/netty-transport-native-epoll-4.1.100.Final.jar:/kafka/libs/netty-transport-native-unix-common-4.1.100.Final.jar:/kafka/libs/osgi-resource-locator-1.0.3.jar:/kafka/libs/paranamer-2.8.jar:/kafka/libs/pcollections-4.0.1.jar:/kafka/libs/plexus-utils-3.3.1.jar:/kafka/libs/reflections-0.10.2.jar:/kafka/libs/reload4j-1.2.25.jar:/kafka/libs/rocksdbjni-7.9.2.jar:/kafka/libs/scala-collection-compat_2.13-2.10.0.jar:/kafka/libs/scala-java8-compat_2.13-1.0.2.jar:/kafka/libs/scala-library-2.13.11.jar:/kafka/libs/scala-logging_2.13-3.9.4.jar:/kafka/libs/scala-reflect-2.13.11.jar:/kafka/libs/slf4j-api-1.7.36.jar:/kafka/libs/slf4j-reload4j-1.7.36.jar:/kafka/libs/snappy-java-1.1.10.5.jar:/kafka/libs/swagger-annotations-2.2.8.jar:/kafka/libs/trogdor-3.6.1.jar:/kafka/libs/zookeeper-3.8.3.jar:/kafka/libs/zookeeper-jute-3.8.3.jar:/kafka/libs/zstd-jni-1.5.5-1.jar
 os.spec = Linux, amd64, 5.10.209-198.858.amzn2.x86_64
 os.vcpus = 1 [org.apache.kafka.connect.runtime.WorkerInfo]
2024-04-23 13:10:07,933 INFO || Scanning for plugin classes. This might take a moment ... [org.apache.kafka.connect.cli.AbstractConnectCli]
2024-04-23 13:10:08,100 INFO || Loading plugin from: /kafka/connect/debezium-connector-db2 [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:08,227 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter]
2024-04-23 13:10:08,617 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-db2/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:08,619 INFO || Loading plugin from: /kafka/connect/debezium-connector-informix [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:08,636 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter]
2024-04-23 13:10:08,735 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-informix/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:08,736 INFO || Loading plugin from: /kafka/connect/debezium-connector-jdbc [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:08,798 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter]
2024-04-23 13:10:08,896 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-jdbc/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:09,130 INFO || Loading plugin from: /kafka/connect/debezium-connector-mongodb [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:09,142 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter]
2024-04-23 13:10:09,306 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-mongodb/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:09,307 INFO || Loading plugin from: /kafka/connect/debezium-connector-mysql [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:09,320 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter]
2024-04-23 13:10:09,424 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-mysql/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:09,428 INFO || Loading plugin from: /kafka/connect/debezium-connector-oracle [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:09,525 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter]
2024-04-23 13:10:09,621 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-oracle/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:09,621 INFO || Loading plugin from: /kafka/connect/debezium-connector-postgres [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:09,630 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter]
2024-04-23 13:10:09,712 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-postgres/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:09,721 INFO || Loading plugin from: /kafka/connect/debezium-connector-spanner [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:09,806 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter]
2024-04-23 13:10:09,904 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-spanner/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:09,904 INFO || Loading plugin from: /kafka/connect/debezium-connector-sqlserver [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:09,912 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter]
2024-04-23 13:10:10,003 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-sqlserver/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:10,039 INFO || Loading plugin from: /kafka/connect/debezium-connector-vitess [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:10,106 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter]
2024-04-23 13:10:10,137 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-vitess/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:10,137 INFO || Loading plugin from: classpath [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:10,199 INFO || Registered loader: jdk.internal.loader.ClassLoaders$AppClassLoader@3d4eac69 [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:10,199 INFO || Scanning plugins with ServiceLoaderScanner took 2099 ms [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:10,202 INFO || Loading plugin from: /kafka/connect/debezium-connector-db2 [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:10,799 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-db2/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:10,800 INFO || Loading plugin from: /kafka/connect/debezium-connector-informix [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:11,027 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-informix/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:11,027 INFO || Loading plugin from: /kafka/connect/debezium-connector-jdbc [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:13,714 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-jdbc/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:13,723 INFO || Loading plugin from: /kafka/connect/debezium-connector-mongodb [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:14,104 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-mongodb/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:14,104 INFO || Loading plugin from: /kafka/connect/debezium-connector-mysql [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:14,905 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-mysql/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:14,906 INFO || Loading plugin from: /kafka/connect/debezium-connector-oracle [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:17,007 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-oracle/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:17,007 INFO || Loading plugin from: /kafka/connect/debezium-connector-postgres [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:17,242 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-postgres/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:17,242 INFO || Loading plugin from: /kafka/connect/debezium-connector-spanner [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:19,553 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-spanner/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:19,553 INFO || Loading plugin from: /kafka/connect/debezium-connector-sqlserver [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:19,852 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-sqlserver/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:19,907 INFO || Loading plugin from: /kafka/connect/debezium-connector-vitess [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:21,234 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-vitess/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:21,234 INFO || Loading plugin from: classpath [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:23,425 INFO || Registered loader: jdk.internal.loader.ClassLoaders$AppClassLoader@3d4eac69 [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:23,425 INFO || Scanning plugins with ReflectionScanner took 13223 ms [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2024-04-23 13:10:23,430 WARN || All plugins have ServiceLoader manifests, consider reconfiguring plugin.discovery=service_load [org.apache.kafka.connect.runtime.isolation.Plugins]
2024-04-23 13:10:23,432 INFO || Added plugin 'org.apache.kafka.connect.transforms.ReplaceField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,432 INFO || Added plugin 'org.apache.kafka.connect.transforms.Filter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,432 INFO || Added plugin 'org.apache.kafka.connect.mirror.MirrorSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,432 INFO || Added plugin 'org.apache.kafka.connect.transforms.InsertField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,432 INFO || Added plugin 'io.debezium.connector.spanner.SpannerConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,432 INFO || Added plugin 'org.apache.kafka.connect.converters.DoubleConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,432 INFO || Added plugin 'org.apache.kafka.connect.connector.policy.AllConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,432 INFO || Added plugin 'io.debezium.connector.sqlserver.SqlServerConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,432 INFO || Added plugin 'org.apache.kafka.connect.connector.policy.PrincipalConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,432 INFO || Added plugin 'org.apache.kafka.connect.transforms.DropHeaders' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,433 INFO || Added plugin 'org.apache.kafka.connect.transforms.Cast$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,433 INFO || Added plugin 'org.apache.kafka.connect.storage.SimpleHeaderConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,433 INFO || Added plugin 'org.apache.kafka.connect.transforms.InsertHeader' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,433 INFO || Added plugin 'io.debezium.converters.BinaryDataConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,433 INFO || Added plugin 'org.apache.kafka.common.config.provider.DirectoryConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,433 INFO || Added plugin 'org.apache.kafka.connect.transforms.Flatten$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,433 INFO || Added plugin 'io.debezium.connector.postgresql.transforms.timescaledb.TimescaleDb' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,433 INFO || Added plugin 'org.apache.kafka.connect.mirror.MirrorCheckpointConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,433 INFO || Added plugin 'org.apache.kafka.connect.transforms.HeaderFrom$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,433 INFO || Added plugin 'org.apache.kafka.connect.transforms.SetSchemaMetadata$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,433 INFO || Added plugin 'io.debezium.transforms.ExtractNewRecordState' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,433 INFO || Added plugin 'io.debezium.converters.ByteArrayConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,433 INFO || Added plugin 'io.debezium.connector.mongodb.MongoDbConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,433 INFO || Added plugin 'io.debezium.transforms.partitions.PartitionRouting' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,434 INFO || Added plugin 'io.debezium.connector.jdbc.JdbcSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,434 INFO || Added plugin 'org.apache.kafka.connect.transforms.predicates.TopicNameMatches' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,434 INFO || Added plugin 'org.apache.kafka.connect.transforms.ReplaceField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,434 INFO || Added plugin 'io.debezium.transforms.outbox.EventRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,434 INFO || Added plugin 'org.apache.kafka.connect.transforms.SetSchemaMetadata$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,434 INFO || Added plugin 'io.debezium.connector.vitess.VitessConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,434 INFO || Added plugin 'org.apache.kafka.connect.converters.IntegerConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,434 INFO || Added plugin 'org.apache.kafka.connect.transforms.predicates.RecordIsTombstone' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,434 INFO || Added plugin 'org.apache.kafka.connect.connector.policy.NoneConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,434 INFO || Added plugin 'io.debezium.connector.mysql.MySqlConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,434 INFO || Added plugin 'io.debezium.connector.mongodb.transforms.outbox.MongoEventRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,434 INFO || Added plugin 'org.apache.kafka.connect.converters.ByteArrayConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,434 INFO || Added plugin 'org.apache.kafka.connect.transforms.Cast$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,435 INFO || Added plugin 'org.apache.kafka.connect.transforms.Flatten$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,435 INFO || Added plugin 'org.apache.kafka.connect.rest.basic.auth.extension.BasicAuthSecurityRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,435 INFO || Added plugin 'io.debezium.connector.postgresql.rest.DebeziumPostgresConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,435 INFO || Added plugin 'org.apache.kafka.connect.transforms.ExtractField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,435 INFO || Added plugin 'org.apache.kafka.connect.transforms.TimestampConverter$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,435 INFO || Added plugin 'org.apache.kafka.connect.converters.FloatConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,435 INFO || Added plugin 'io.debezium.connector.oracle.OracleConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,435 INFO || Added plugin 'org.apache.kafka.connect.transforms.TimestampConverter$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,435 INFO || Added plugin 'io.debezium.transforms.HeaderToValue' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,435 INFO || Added plugin 'io.debezium.transforms.SchemaChangeEventFilter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,435 INFO || Added plugin 'org.apache.kafka.connect.transforms.TimestampRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,435 INFO || Added plugin 'io.debezium.transforms.ExtractSchemaToNewRecord' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,435 INFO || Added plugin 'io.debezium.connector.db2.Db2Connector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,435 INFO || Added plugin 'io.debezium.transforms.ByLogicalTableRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,435 INFO || Added plugin 'org.apache.kafka.connect.transforms.RegexRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,435 INFO || Added plugin 'org.apache.kafka.connect.transforms.HoistField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,436 INFO || Added plugin 'org.apache.kafka.connect.transforms.ValueToKey' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,436 INFO || Added plugin 'org.apache.kafka.connect.converters.LongConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,436 INFO || Added plugin 'org.apache.kafka.common.config.provider.FileConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,436 INFO || Added plugin 'io.debezium.connector.mongodb.rest.DebeziumMongoDbConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,436 INFO || Added plugin 'io.debezium.connector.jdbc.transforms.ConvertCloudEventToSaveableForm' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,436 INFO || Added plugin 'org.apache.kafka.connect.json.JsonConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,436 INFO || Added plugin 'io.debezium.transforms.TimezoneConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,436 INFO || Added plugin 'io.debezium.connector.informix.InformixConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,436 INFO || Added plugin 'org.apache.kafka.connect.transforms.HeaderFrom$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,436 INFO || Added plugin 'org.apache.kafka.connect.transforms.predicates.HasHeaderKey' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,436 INFO || Added plugin 'org.apache.kafka.connect.transforms.MaskField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,436 INFO || Added plugin 'org.apache.kafka.common.config.provider.EnvVarConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,436 INFO || Added plugin 'io.debezium.transforms.ExtractChangedRecordState' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,436 INFO || Added plugin 'io.debezium.connector.mongodb.transforms.ExtractNewDocumentState' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,436 INFO || Added plugin 'org.apache.kafka.connect.storage.StringConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,436 INFO || Added plugin 'io.debezium.transforms.tracing.ActivateTracingSpan' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,437 INFO || Added plugin 'org.apache.kafka.connect.transforms.MaskField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,437 INFO || Added plugin 'io.debezium.connector.mysql.rest.DebeziumMySqlConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,437 INFO || Added plugin 'org.apache.kafka.connect.transforms.ExtractField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,437 INFO || Added plugin 'io.debezium.connector.mysql.transforms.ReadToInsertEvent' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,437 INFO || Added plugin 'org.apache.kafka.connect.transforms.InsertField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,437 INFO || Added plugin 'io.debezium.connector.oracle.rest.DebeziumOracleConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,437 INFO || Added plugin 'io.debezium.connector.postgresql.PostgresConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,437 INFO || Added plugin 'io.debezium.connector.sqlserver.rest.DebeziumSqlServerConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,437 INFO || Added plugin 'org.apache.kafka.connect.mirror.MirrorHeartbeatConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,437 INFO || Added plugin 'io.debezium.converters.CloudEventsConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,437 INFO || Added plugin 'org.apache.kafka.connect.transforms.HoistField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,437 INFO || Added plugin 'org.apache.kafka.connect.converters.ShortConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,440 INFO || Added alias 'VitessConnector' to plugin 'io.debezium.connector.vitess.VitessConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,440 INFO || Added alias 'String' to plugin 'org.apache.kafka.connect.storage.StringConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,440 INFO || Added alias 'MySql' to plugin 'io.debezium.connector.mysql.MySqlConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,440 INFO || Added alias 'MirrorCheckpointConnector' to plugin 'org.apache.kafka.connect.mirror.MirrorCheckpointConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,440 INFO || Added alias 'HeaderToValue' to plugin 'io.debezium.transforms.HeaderToValue' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,440 INFO || Added alias 'Float' to plugin 'org.apache.kafka.connect.converters.FloatConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,440 INFO || Added alias 'SimpleHeaderConverter' to plugin 'org.apache.kafka.connect.storage.SimpleHeaderConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,440 INFO || Added alias 'SqlServerConnector' to plugin 'io.debezium.connector.sqlserver.SqlServerConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,440 INFO || Added alias 'DirectoryConfigProvider' to plugin 'org.apache.kafka.common.config.provider.DirectoryConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,440 INFO || Added alias 'TimezoneConverter' to plugin 'io.debezium.transforms.TimezoneConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,440 INFO || Added alias 'BasicAuthSecurityRestExtension' to plugin 'org.apache.kafka.connect.rest.basic.auth.extension.BasicAuthSecurityRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,440 INFO || Added alias 'Simple' to plugin 'org.apache.kafka.connect.storage.SimpleHeaderConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,440 INFO || Added alias 'DebeziumPostgres' to plugin 'io.debezium.connector.postgresql.rest.DebeziumPostgresConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,440 INFO || Added alias 'AllConnectorClientConfigOverridePolicy' to plugin 'org.apache.kafka.connect.connector.policy.AllConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,440 INFO || Added alias 'MirrorSource' to plugin 'org.apache.kafka.connect.mirror.MirrorSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,441 INFO || Added alias 'Directory' to plugin 'org.apache.kafka.common.config.provider.DirectoryConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,441 INFO || Added alias 'MirrorHeartbeat' to plugin 'org.apache.kafka.connect.mirror.MirrorHeartbeatConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,441 INFO || Added alias 'JsonConverter' to plugin 'org.apache.kafka.connect.json.JsonConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,441 INFO || Added alias 'DebeziumMySql' to plugin 'io.debezium.connector.mysql.rest.DebeziumMySqlConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,441 INFO || Added alias 'JdbcSinkConnector' to plugin 'io.debezium.connector.jdbc.JdbcSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,441 INFO || Added alias 'DebeziumPostgresConnectRestExtension' to plugin 'io.debezium.connector.postgresql.rest.DebeziumPostgresConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,441 INFO || Added alias 'SpannerConnector' to plugin 'io.debezium.connector.spanner.SpannerConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,441 INFO || Added alias 'MongoDb' to plugin 'io.debezium.connector.mongodb.MongoDbConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,441 INFO || Added alias 'Postgres' to plugin 'io.debezium.connector.postgresql.PostgresConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,441 INFO || Added alias 'Short' to plugin 'org.apache.kafka.connect.converters.ShortConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,441 INFO || Added alias 'ByLogicalTableRouter' to plugin 'io.debezium.transforms.ByLogicalTableRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,441 INFO || Added alias 'FileConfigProvider' to plugin 'org.apache.kafka.common.config.provider.FileConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,441 INFO || Added alias 'SchemaChangeEventFilter' to plugin 'io.debezium.transforms.SchemaChangeEventFilter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,441 INFO || Added alias 'ConvertCloudEventToSaveableForm' to plugin 'io.debezium.connector.jdbc.transforms.ConvertCloudEventToSaveableForm' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,441 INFO || Added alias 'Long' to plugin 'org.apache.kafka.connect.converters.LongConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,441 INFO || Added alias 'FloatConverter' to plugin 'org.apache.kafka.connect.converters.FloatConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,441 INFO || Added alias 'Spanner' to plugin 'io.debezium.connector.spanner.SpannerConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,441 INFO || Added alias 'ActivateTracingSpan' to plugin 'io.debezium.transforms.tracing.ActivateTracingSpan' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,442 INFO || Added alias 'DebeziumSqlServerConnectRestExtension' to plugin 'io.debezium.connector.sqlserver.rest.DebeziumSqlServerConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2024-04-23 13:10:23,442 INFO || Added alias 'MirrorHeartbeatConnector' to
plugin 'org.apache.kafka.connect.mirror.MirrorHeartbeatConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,442 INFO || Added alias 'Oracle' to plugin 'io.debezium.connector.oracle.OracleConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,442 INFO || Added alias 'PrincipalConnectorClientConfigOverridePolicy' to plugin 'org.apache.kafka.connect.connector.policy.PrincipalConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,442 INFO || Added alias 'ValueToKey' to plugin 'org.apache.kafka.connect.transforms.ValueToKey' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,442 INFO || Added alias 'Integer' to plugin 'org.apache.kafka.connect.converters.IntegerConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,442 INFO || Added alias 'Filter' to plugin 'org.apache.kafka.connect.transforms.Filter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,442 INFO || Added alias 'Informix' to plugin 'io.debezium.connector.informix.InformixConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,442 INFO || Added alias 'ExtractNewDocumentState' to plugin 'io.debezium.connector.mongodb.transforms.ExtractNewDocumentState' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,442 INFO || Added alias 'RecordIsTombstone' to plugin 'org.apache.kafka.connect.transforms.predicates.RecordIsTombstone' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,442 INFO || Added alias 'CloudEventsConverter' to plugin 'io.debezium.converters.CloudEventsConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,442 INFO || Added alias 'DebeziumOracle' to plugin 
'io.debezium.connector.oracle.rest.DebeziumOracleConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,442 INFO || Added alias 'EnvVar' to plugin 'org.apache.kafka.common.config.provider.EnvVarConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,442 INFO || Added alias 'EnvVarConfigProvider' to plugin 'org.apache.kafka.common.config.provider.EnvVarConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,442 INFO || Added alias 'MySqlConnector' to plugin 'io.debezium.connector.mysql.MySqlConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,442 INFO || Added alias 'DebeziumSqlServer' to plugin 'io.debezium.connector.sqlserver.rest.DebeziumSqlServerConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,442 INFO || Added alias 'PartitionRouting' to plugin 'io.debezium.transforms.partitions.PartitionRouting' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,442 INFO || Added alias 'Json' to plugin 'org.apache.kafka.connect.json.JsonConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,443 INFO || Added alias 'StringConverter' to plugin 'org.apache.kafka.connect.storage.StringConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,443 INFO || Added alias 'MongoDbConnector' to plugin 'io.debezium.connector.mongodb.MongoDbConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,443 INFO || Added alias 'IntegerConverter' to plugin 'org.apache.kafka.connect.converters.IntegerConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,443 INFO || Added alias 'LongConverter' to plugin 
'org.apache.kafka.connect.converters.LongConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,443 INFO || Added alias 'DropHeaders' to plugin 'org.apache.kafka.connect.transforms.DropHeaders' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,443 INFO || Added alias 'ExtractSchemaToNewRecord' to plugin 'io.debezium.transforms.ExtractSchemaToNewRecord' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,443 INFO || Added alias 'BinaryData' to plugin 'io.debezium.converters.BinaryDataConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,443 INFO || Added alias 'ReadToInsertEvent' to plugin 'io.debezium.connector.mysql.transforms.ReadToInsertEvent' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,443 INFO || Added alias 'ShortConverter' to plugin 'org.apache.kafka.connect.converters.ShortConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,443 INFO || Added alias 'CloudEvents' to plugin 'io.debezium.converters.CloudEventsConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,443 INFO || Added alias 'DebeziumOracleConnectRestExtension' to plugin 'io.debezium.connector.oracle.rest.DebeziumOracleConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,443 INFO || Added alias 'ExtractNewRecordState' to plugin 'io.debezium.transforms.ExtractNewRecordState' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,443 INFO || Added alias 'DebeziumMongoDb' to plugin 'io.debezium.connector.mongodb.rest.DebeziumMongoDbConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,443 INFO || Added alias 'Db2' to plugin 'io.debezium.connector.db2.Db2Connector' 
[org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,443 INFO || Added alias 'Db2Connector' to plugin 'io.debezium.connector.db2.Db2Connector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,443 INFO || Added alias 'Vitess' to plugin 'io.debezium.connector.vitess.VitessConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,443 INFO || Added alias 'InformixConnector' to plugin 'io.debezium.connector.informix.InformixConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,443 INFO || Added alias 'DebeziumMongoDbConnectRestExtension' to plugin 'io.debezium.connector.mongodb.rest.DebeziumMongoDbConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,443 INFO || Added alias 'HasHeaderKey' to plugin 'org.apache.kafka.connect.transforms.predicates.HasHeaderKey' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,443 INFO || Added alias 'MirrorCheckpoint' to plugin 'org.apache.kafka.connect.mirror.MirrorCheckpointConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,444 INFO || Added alias 'ExtractChangedRecordState' to plugin 'io.debezium.transforms.ExtractChangedRecordState' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,444 INFO || Added alias 'OracleConnector' to plugin 'io.debezium.connector.oracle.OracleConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,444 INFO || Added alias 'None' to plugin 'org.apache.kafka.connect.connector.policy.NoneConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,444 INFO || Added alias 'TimestampRouter' to plugin 'org.apache.kafka.connect.transforms.TimestampRouter' 
[org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,444 INFO || Added alias 'Principal' to plugin 'org.apache.kafka.connect.connector.policy.PrincipalConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,444 INFO || Added alias 'All' to plugin 'org.apache.kafka.connect.connector.policy.AllConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,444 INFO || Added alias 'SqlServer' to plugin 'io.debezium.connector.sqlserver.SqlServerConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,444 INFO || Added alias 'DebeziumMySqlConnectRestExtension' to plugin 'io.debezium.connector.mysql.rest.DebeziumMySqlConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,444 INFO || Added alias 'JdbcSink' to plugin 'io.debezium.connector.jdbc.JdbcSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,444 INFO || Added alias 'RegexRouter' to plugin 'org.apache.kafka.connect.transforms.RegexRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,444 INFO || Added alias 'NoneConnectorClientConfigOverridePolicy' to plugin 'org.apache.kafka.connect.connector.policy.NoneConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,444 INFO || Added alias 'Double' to plugin 'org.apache.kafka.connect.converters.DoubleConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,444 INFO || Added alias 'EventRouter' to plugin 'io.debezium.transforms.outbox.EventRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,444 INFO || Added alias 'File' to plugin 
'org.apache.kafka.common.config.provider.FileConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,444 INFO || Added alias 'DoubleConverter' to plugin 'org.apache.kafka.connect.converters.DoubleConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,444 INFO || Added alias 'BinaryDataConverter' to plugin 'io.debezium.converters.BinaryDataConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,444 INFO || Added alias 'TimescaleDb' to plugin 'io.debezium.connector.postgresql.transforms.timescaledb.TimescaleDb' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,444 INFO || Added alias 'TopicNameMatches' to plugin 'org.apache.kafka.connect.transforms.predicates.TopicNameMatches' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,444 INFO || Added alias 'InsertHeader' to plugin 'org.apache.kafka.connect.transforms.InsertHeader' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,445 INFO || Added alias 'MirrorSourceConnector' to plugin 'org.apache.kafka.connect.mirror.MirrorSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,445 INFO || Added alias 'PostgresConnector' to plugin 'io.debezium.connector.postgresql.PostgresConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,445 INFO || Added alias 'MongoEventRouter' to plugin 'io.debezium.connector.mongodb.transforms.outbox.MongoEventRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-04-23 13:10:23,544 INFO || DistributedConfig values: access.control.allow.methods = access.control.allow.origin = admin.listeners = null auto.include.jmx.reporter = true bootstrap.servers = [REPLACEME:9092, REPLACEME:9092] client.dns.lookup = use_all_dns_ips client.id = 
config.providers = []
config.storage.replication.factor = 1
config.storage.topic = debezium_producer_mysql_configs
connect.protocol = sessioned
connections.max.idle.ms = 540000
connector.client.config.override.policy = All
exactly.once.source.support = disabled
group.id = 1
header.converter = class org.apache.kafka.connect.storage.SimpleHeaderConverter
heartbeat.interval.ms = 3000
inter.worker.key.generation.algorithm = HmacSHA256
inter.worker.key.size = null
inter.worker.key.ttl.ms = 3600000
inter.worker.signature.algorithm = HmacSHA256
inter.worker.verification.algorithms = [HmacSHA256]
key.converter = class org.apache.kafka.connect.json.JsonConverter
listeners = [http://:8083]
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
offset.flush.interval.ms = 60000
offset.flush.timeout.ms = 5000
offset.storage.partitions = 25
offset.storage.replication.factor = 1
offset.storage.topic = debezium_producer_mysql_offsets
plugin.discovery = hybrid_warn
plugin.path = [/kafka/connect]
rebalance.timeout.ms = 60000
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 40000
response.http.headers.config =
rest.advertised.host.name = 10.1.11.60
rest.advertised.listener = null
rest.advertised.port = 8083
rest.extension.classes = []
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.connect.timeout.ms = null
sasl.login.read.timeout.ms = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.login.retry.backoff.max.ms = 10000
sasl.login.retry.backoff.ms = 100
sasl.mechanism = GSSAPI
sasl.oauthbearer.clock.skew.seconds = 30
sasl.oauthbearer.expected.audience = null
sasl.oauthbearer.expected.issuer = null
sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
sasl.oauthbearer.jwks.endpoint.url = null
sasl.oauthbearer.scope.claim.name = scope
sasl.oauthbearer.sub.claim.name = sub
sasl.oauthbearer.token.endpoint.url = null
scheduled.rebalance.max.delay.ms = 300000
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
status.storage.partitions = 5
status.storage.replication.factor = 1
status.storage.topic = debezium_producer_mysql_statuses
task.shutdown.graceful.timeout.ms = 10000
topic.creation.enable = true
topic.tracking.allow.reset = true
topic.tracking.enable = true
value.converter = class org.apache.kafka.connect.json.JsonConverter
worker.sync.timeout.ms = 3000
worker.unsync.backoff.ms = 300000
[org.apache.kafka.connect.runtime.distributed.DistributedConfig]
2024-04-23 13:10:23,546 INFO || Creating Kafka admin client [org.apache.kafka.connect.runtime.WorkerConfig]
2024-04-23 13:10:23,548 INFO || AdminClientConfig
values: auto.include.jmx.reporter = true bootstrap.servers = [REPLACEME:9092, REPLACEME:9092] client.dns.lookup = use_all_dns_ips client.id = connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = 
SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
[org.apache.kafka.clients.admin.AdminClientConfig]
2024-04-23 13:10:23,735 INFO || These configurations '[config.storage.topic, rest.advertised.host.name, status.storage.topic, group.id, rest.advertised.port, rest.host.name, task.shutdown.graceful.timeout.ms, plugin.path, offset.flush.timeout.ms, config.storage.replication.factor, offset.flush.interval.ms, rest.port, key.converter.schemas.enable, status.storage.replication.factor, value.converter.schemas.enable, offset.storage.replication.factor, offset.storage.topic, value.converter, key.converter]' were supplied but are not used yet. [org.apache.kafka.clients.admin.AdminClientConfig]
2024-04-23 13:10:23,736 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:23,736 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:23,736 INFO || Kafka startTimeMs: 1713877823736 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:24,327 INFO || Kafka cluster ID: CjnbrFF3Q9-SPe1k8xi41Q [org.apache.kafka.connect.runtime.WorkerConfig]
2024-04-23 13:10:24,328 INFO || App info kafka.admin.client for adminclient-1 unregistered [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:24,334 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
2024-04-23 13:10:24,334 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
2024-04-23 13:10:24,334 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
2024-04-23 13:10:24,337 INFO || PublicConfig
values:
access.control.allow.methods =
access.control.allow.origin =
admin.listeners = null
listeners = [http://:8083]
response.http.headers.config =
rest.advertised.host.name = 10.1.11.60
rest.advertised.listener = null
rest.advertised.port = 8083
rest.extension.classes = []
ssl.cipher.suites = null
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
topic.tracking.allow.reset = true
topic.tracking.enable = true
[org.apache.kafka.connect.runtime.rest.RestServerConfig$PublicConfig]
2024-04-23 13:10:24,365 INFO || Logging initialized @17341ms to org.eclipse.jetty.util.log.Slf4jLog [org.eclipse.jetty.util.log]
2024-04-23 13:10:24,449 INFO || Added connector for http://:8083 [org.apache.kafka.connect.runtime.rest.RestServer]
2024-04-23 13:10:24,450 INFO || Initializing REST server [org.apache.kafka.connect.runtime.rest.RestServer]
2024-04-23 13:10:24,467 INFO || jetty-9.4.52.v20230823; built: 2023-08-23T19:29:37.669Z; git: abdcda73818a1a2c705da276edb0bf6581e7997e; jvm 11.0.20+8 [org.eclipse.jetty.server.Server]
2024-04-23 13:10:24,535 INFO || Started http_8083@3e19f4e{HTTP/1.1, (http/1.1)}{0.0.0.0:8083} [org.eclipse.jetty.server.AbstractConnector]
2024-04-23 13:10:24,535 INFO || Started @17512ms [org.eclipse.jetty.server.Server]
2024-04-23 13:10:24,551 INFO || Advertised URI: http://10.1.11.60:8083/ [org.apache.kafka.connect.runtime.rest.RestServer]
2024-04-23 13:10:24,551 INFO || REST server listening at http://10.1.11.60:8083/, advertising
URL http://10.1.11.60:8083/ [org.apache.kafka.connect.runtime.rest.RestServer]
2024-04-23 13:10:24,551 INFO || Advertised URI: http://10.1.11.60:8083/ [org.apache.kafka.connect.runtime.rest.RestServer]
2024-04-23 13:10:24,557 INFO || REST admin endpoints at http://10.1.11.60:8083/ [org.apache.kafka.connect.runtime.rest.RestServer]
2024-04-23 13:10:24,558 INFO || Advertised URI: http://10.1.11.60:8083/ [org.apache.kafka.connect.runtime.rest.RestServer]
2024-04-23 13:10:24,558 INFO || Setting up All Policy for ConnectorClientConfigOverride. This will allow all client configurations to be overridden [org.apache.kafka.connect.connector.policy.AllConnectorClientConfigOverridePolicy]
2024-04-23 13:10:24,609 INFO || JsonConverterConfig values: converter.type = key decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = false [org.apache.kafka.connect.json.JsonConverterConfig]
2024-04-23 13:10:24,634 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:24,634 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:24,634 INFO || Kafka startTimeMs: 1713877824634 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:24,640 INFO || JsonConverterConfig values: converter.type = key decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = false [org.apache.kafka.connect.json.JsonConverterConfig]
2024-04-23 13:10:24,640 INFO || JsonConverterConfig values: converter.type = value decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = false [org.apache.kafka.connect.json.JsonConverterConfig]
2024-04-23 13:10:24,652 INFO || Advertised URI: http://10.1.11.60:8083/ [org.apache.kafka.connect.runtime.rest.RestServer]
2024-04-23 13:10:24,723 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:24,723 INFO || Kafka commitId:
5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser] 2024-04-23 13:10:24,723 INFO || Kafka startTimeMs: 1713877824723 [org.apache.kafka.common.utils.AppInfoParser] 2024-04-23 13:10:24,726 INFO || Kafka Connect worker initialization took 16796ms [org.apache.kafka.connect.cli.AbstractConnectCli] 2024-04-23 13:10:24,726 INFO || Kafka Connect starting [org.apache.kafka.connect.runtime.Connect] 2024-04-23 13:10:24,728 INFO || Initializing REST resources [org.apache.kafka.connect.runtime.rest.RestServer] 2024-04-23 13:10:24,728 INFO || [Worker clientId=connect-1, groupId=1] Herder starting [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2024-04-23 13:10:24,732 INFO || Worker starting [org.apache.kafka.connect.runtime.Worker] 2024-04-23 13:10:24,732 INFO || Starting KafkaOffsetBackingStore [org.apache.kafka.connect.storage.KafkaOffsetBackingStore] 2024-04-23 13:10:24,732 INFO || Starting KafkaBasedLog with topic debezium_producer_mysql_offsets [org.apache.kafka.connect.util.KafkaBasedLog] 2024-04-23 13:10:24,732 INFO || AdminClientConfig values: auto.include.jmx.reporter = true bootstrap.servers = [REPLACEME:9092, REPLACEME:9092] client.dns.lookup = use_all_dns_ips client.id = 1-shared-admin connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = 
null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS [org.apache.kafka.clients.admin.AdminClientConfig]
2024-04-23 13:10:24,737 INFO || These configurations '[config.storage.topic, metrics.context.connect.group.id, rest.advertised.host.name, status.storage.topic, group.id, rest.advertised.port, rest.host.name, task.shutdown.graceful.timeout.ms, plugin.path, offset.flush.timeout.ms, config.storage.replication.factor, offset.flush.interval.ms, rest.port, key.converter.schemas.enable, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, value.converter.schemas.enable, offset.storage.replication.factor, offset.storage.topic, value.converter, key.converter]' were supplied but are not used yet. [org.apache.kafka.clients.admin.AdminClientConfig]
2024-04-23 13:10:24,737 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:24,737 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:24,737 INFO || Kafka startTimeMs: 1713877824737 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:24,828 INFO || Adding admin resources to main listener [org.apache.kafka.connect.runtime.rest.RestServer]
2024-04-23 13:10:24,920 INFO || ProducerConfig values: acks = -1 auto.include.jmx.reporter = true batch.size = 16384 bootstrap.servers = [REPLACEME:9092, REPLACEME:9092] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = 1-offsets compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 2147483647 enable.idempotence = false interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 1 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer [org.apache.kafka.clients.producer.ProducerConfig]
2024-04-23 13:10:24,935 INFO || DefaultSessionIdManager workerName=node0 [org.eclipse.jetty.server.session]
2024-04-23 13:10:24,935 INFO || No SessionScavenger set, using defaults [org.eclipse.jetty.server.session]
2024-04-23 13:10:24,935 INFO || These configurations '[group.id, rest.advertised.port, task.shutdown.graceful.timeout.ms, plugin.path, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, offset.storage.topic, value.converter, key.converter, config.storage.topic, metrics.context.connect.group.id, rest.advertised.host.name, status.storage.topic, rest.host.name, offset.flush.timeout.ms, config.storage.replication.factor, offset.flush.interval.ms, rest.port, key.converter.schemas.enable, value.converter.schemas.enable, offset.storage.replication.factor]' were supplied but are not used yet. [org.apache.kafka.clients.producer.ProducerConfig]
2024-04-23 13:10:24,936 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:24,936 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:24,936 INFO || Kafka startTimeMs: 1713877824936 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:24,936 INFO || node0 Scavenging every 660000ms [org.eclipse.jetty.server.session]
2024-04-23 13:10:24,944 INFO || [Producer clientId=1-offsets] Cluster ID: CjnbrFF3Q9-SPe1k8xi41Q [org.apache.kafka.clients.Metadata]
2024-04-23 13:10:24,945 INFO || ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [REPLACEME:9092, REPLACEME:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = 1-offsets client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = 1 group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 45000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 
ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer [org.apache.kafka.clients.consumer.ConsumerConfig]
2024-04-23 13:10:25,041 INFO || These configurations '[rest.advertised.port, task.shutdown.graceful.timeout.ms, plugin.path, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, offset.storage.topic, value.converter, key.converter, config.storage.topic, metrics.context.connect.group.id, rest.advertised.host.name, status.storage.topic, rest.host.name, offset.flush.timeout.ms, config.storage.replication.factor, offset.flush.interval.ms, rest.port, key.converter.schemas.enable, value.converter.schemas.enable, offset.storage.replication.factor]' were supplied but are not used yet. [org.apache.kafka.clients.consumer.ConsumerConfig]
2024-04-23 13:10:25,041 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:25,041 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:25,041 INFO || Kafka startTimeMs: 1713877825041 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:25,047 INFO || [Consumer clientId=1-offsets, groupId=1] Cluster ID: CjnbrFF3Q9-SPe1k8xi41Q [org.apache.kafka.clients.Metadata]
2024-04-23 13:10:25,050 INFO || [Consumer clientId=1-offsets, groupId=1] Assigned to partition(s): debezium_producer_mysql_offsets-0, debezium_producer_mysql_offsets-5, debezium_producer_mysql_offsets-10, debezium_producer_mysql_offsets-20, debezium_producer_mysql_offsets-15, debezium_producer_mysql_offsets-9, debezium_producer_mysql_offsets-11, debezium_producer_mysql_offsets-16, debezium_producer_mysql_offsets-4, debezium_producer_mysql_offsets-17, debezium_producer_mysql_offsets-3, debezium_producer_mysql_offsets-24, debezium_producer_mysql_offsets-23, debezium_producer_mysql_offsets-13, debezium_producer_mysql_offsets-18, debezium_producer_mysql_offsets-22, debezium_producer_mysql_offsets-2, debezium_producer_mysql_offsets-8, debezium_producer_mysql_offsets-12, debezium_producer_mysql_offsets-19, debezium_producer_mysql_offsets-14, debezium_producer_mysql_offsets-1, debezium_producer_mysql_offsets-6, debezium_producer_mysql_offsets-7, debezium_producer_mysql_offsets-21 [org.apache.kafka.clients.consumer.KafkaConsumer]
2024-04-23 13:10:25,101 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition debezium_producer_mysql_offsets-0 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,102 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition debezium_producer_mysql_offsets-5 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,107 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition debezium_producer_mysql_offsets-10 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,107 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition debezium_producer_mysql_offsets-20 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,108 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition debezium_producer_mysql_offsets-15 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,108 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition debezium_producer_mysql_offsets-9 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,108 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition debezium_producer_mysql_offsets-11 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,108 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition debezium_producer_mysql_offsets-16 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,108 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition debezium_producer_mysql_offsets-4 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,108 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition debezium_producer_mysql_offsets-17 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,108 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition debezium_producer_mysql_offsets-3 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,108 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition debezium_producer_mysql_offsets-24 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,108 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition debezium_producer_mysql_offsets-23 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,109 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition debezium_producer_mysql_offsets-13 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,109 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition debezium_producer_mysql_offsets-18 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,109 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition debezium_producer_mysql_offsets-22 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,109 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition debezium_producer_mysql_offsets-2 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,109 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition debezium_producer_mysql_offsets-8 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,109 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition debezium_producer_mysql_offsets-12 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,109 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition debezium_producer_mysql_offsets-19 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,109 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition debezium_producer_mysql_offsets-14
[org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,109 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition debezium_producer_mysql_offsets-1 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,109 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition debezium_producer_mysql_offsets-6 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,110 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition debezium_producer_mysql_offsets-7 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,110 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition debezium_producer_mysql_offsets-21 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,241 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition debezium_producer_mysql_offsets-5 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[REPLACEME:9092 (id: 2 rack: use1-az2)], epoch=26}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,242 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition debezium_producer_mysql_offsets-21 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[REPLACEME:9092 (id: 2 rack: use1-az2)], epoch=26}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,242 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition debezium_producer_mysql_offsets-7 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[REPLACEME:9092 (id: 2 rack: use1-az2)], epoch=26}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,242 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition debezium_producer_mysql_offsets-23 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[REPLACEME:9092 (id: 2 rack: use1-az2)], epoch=26}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,242 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition debezium_producer_mysql_offsets-9 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[REPLACEME:9092 (id: 2 rack: use1-az2)], epoch=26}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,242 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition debezium_producer_mysql_offsets-11 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[REPLACEME:9092 (id: 2 rack: use1-az2)], epoch=26}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,242 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition debezium_producer_mysql_offsets-13 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[REPLACEME:9092 (id: 2 rack: use1-az2)], epoch=26}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,242 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition debezium_producer_mysql_offsets-15 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[REPLACEME:9092 (id: 2 rack: use1-az2)], epoch=26}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,242 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition debezium_producer_mysql_offsets-1 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[REPLACEME:9092 (id: 2 rack: use1-az2)], epoch=26}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,242 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition debezium_producer_mysql_offsets-17 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[REPLACEME:9092 (id: 2 rack: use1-az2)], epoch=26}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,243 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition debezium_producer_mysql_offsets-3 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[REPLACEME:9092 (id: 2 rack: use1-az2)], epoch=26}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,243 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition debezium_producer_mysql_offsets-19 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[REPLACEME:9092 (id: 2 rack: use1-az2)], epoch=26}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,246 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition debezium_producer_mysql_offsets-20 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[REPLACEME:9092 (id: 1 rack: use1-az1)], epoch=16}}.
[org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,246 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition debezium_producer_mysql_offsets-22 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[REPLACEME:9092 (id: 1 rack: use1-az1)], epoch=16}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,246 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition debezium_producer_mysql_offsets-24 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[REPLACEME:9092 (id: 1 rack: use1-az1)], epoch=16}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,246 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition debezium_producer_mysql_offsets-12 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[REPLACEME:9092 (id: 1 rack: use1-az1)], epoch=16}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,246 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition debezium_producer_mysql_offsets-14 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[REPLACEME:9092 (id: 1 rack: use1-az1)], epoch=16}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,246 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition debezium_producer_mysql_offsets-16 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[REPLACEME:9092 (id: 1 rack: use1-az1)], epoch=16}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,246 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition debezium_producer_mysql_offsets-18 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[REPLACEME:9092 (id: 1 rack: use1-az1)], epoch=16}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,246 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition debezium_producer_mysql_offsets-4 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[REPLACEME:9092 (id: 1 rack: use1-az1)], epoch=16}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,246 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition debezium_producer_mysql_offsets-6 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[REPLACEME:9092 (id: 1 rack: use1-az1)], epoch=16}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,246 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition debezium_producer_mysql_offsets-8 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[REPLACEME:9092 (id: 1 rack: use1-az1)], epoch=16}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,247 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition debezium_producer_mysql_offsets-10 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[REPLACEME:9092 (id: 1 rack: use1-az1)], epoch=16}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,247 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition debezium_producer_mysql_offsets-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[REPLACEME:9092 (id: 1 rack: use1-az1)], epoch=16}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,247 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition debezium_producer_mysql_offsets-2 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[REPLACEME:9092 (id: 1 rack: use1-az1)], epoch=16}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,810 INFO || Finished reading KafkaBasedLog for topic debezium_producer_mysql_offsets [org.apache.kafka.connect.util.KafkaBasedLog]
2024-04-23 13:10:25,810 INFO || Started KafkaBasedLog for topic debezium_producer_mysql_offsets [org.apache.kafka.connect.util.KafkaBasedLog]
2024-04-23 13:10:25,810 INFO || Finished reading offsets topic and starting KafkaOffsetBackingStore [org.apache.kafka.connect.storage.KafkaOffsetBackingStore]
2024-04-23 13:10:25,836 INFO || Worker started [org.apache.kafka.connect.runtime.Worker]
2024-04-23 13:10:25,836 INFO || Starting KafkaBasedLog with topic debezium_producer_mysql_statuses [org.apache.kafka.connect.util.KafkaBasedLog]
2024-04-23 13:10:25,856 INFO || ProducerConfig values: acks = -1 auto.include.jmx.reporter = true batch.size = 16384 bootstrap.servers = [REPLACEME:9092, REPLACEME:9092] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = 1-statuses compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = false interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000
max.in.flight.requests.per.connection = 1 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 0 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null 
ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer [org.apache.kafka.clients.producer.ProducerConfig]
2024-04-23 13:10:25,862 INFO || These configurations '[group.id, rest.advertised.port, task.shutdown.graceful.timeout.ms, plugin.path, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, offset.storage.topic, value.converter, key.converter, config.storage.topic, metrics.context.connect.group.id, rest.advertised.host.name, status.storage.topic, rest.host.name, offset.flush.timeout.ms, config.storage.replication.factor, offset.flush.interval.ms, rest.port, key.converter.schemas.enable, value.converter.schemas.enable, offset.storage.replication.factor]' were supplied but are not used yet.
[org.apache.kafka.clients.producer.ProducerConfig]
2024-04-23 13:10:25,862 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:25,862 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:25,863 INFO || Kafka startTimeMs: 1713877825862 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:25,864 INFO || ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [REPLACEME:9092, REPLACEME:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = 1-statuses client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = 1 group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 45000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer [org.apache.kafka.clients.consumer.ConsumerConfig]
2024-04-23 13:10:25,867 INFO || These configurations '[rest.advertised.port, task.shutdown.graceful.timeout.ms, plugin.path, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, offset.storage.topic, value.converter, key.converter, config.storage.topic, metrics.context.connect.group.id, rest.advertised.host.name, status.storage.topic, rest.host.name, offset.flush.timeout.ms, config.storage.replication.factor, offset.flush.interval.ms, rest.port, key.converter.schemas.enable, value.converter.schemas.enable, offset.storage.replication.factor]' were supplied but are not used yet. [org.apache.kafka.clients.consumer.ConsumerConfig]
2024-04-23 13:10:25,867 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:25,867 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:25,867 INFO || Kafka startTimeMs: 1713877825867 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:25,897 INFO || [Consumer clientId=1-statuses, groupId=1] Cluster ID: CjnbrFF3Q9-SPe1k8xi41Q [org.apache.kafka.clients.Metadata]
2024-04-23 13:10:25,898 INFO || [Consumer clientId=1-statuses, groupId=1] Assigned to partition(s): debezium_producer_mysql_statuses-0, debezium_producer_mysql_statuses-1, debezium_producer_mysql_statuses-4, debezium_producer_mysql_statuses-2, debezium_producer_mysql_statuses-3 [org.apache.kafka.clients.consumer.KafkaConsumer]
2024-04-23 13:10:25,898 INFO || [Consumer clientId=1-statuses, groupId=1] Seeking to earliest offset of partition debezium_producer_mysql_statuses-0 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,898 INFO || [Consumer clientId=1-statuses, groupId=1] Seeking to earliest offset of partition debezium_producer_mysql_statuses-1 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,899 INFO || [Consumer clientId=1-statuses, groupId=1] Seeking to earliest offset of partition debezium_producer_mysql_statuses-4 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,899 INFO || [Consumer clientId=1-statuses, groupId=1] Seeking to earliest offset of partition debezium_producer_mysql_statuses-2 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,899 INFO || [Producer clientId=1-statuses] Cluster ID: CjnbrFF3Q9-SPe1k8xi41Q [org.apache.kafka.clients.Metadata]
2024-04-23 13:10:25,899 INFO || [Consumer clientId=1-statuses, groupId=1] Seeking to earliest offset of partition debezium_producer_mysql_statuses-3 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,930 INFO || [Consumer clientId=1-statuses, groupId=1] Resetting offset for partition debezium_producer_mysql_statuses-2 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[REPLACEME:9092 (id: 1 rack: use1-az1)], epoch=28}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,930 INFO || [Consumer clientId=1-statuses, groupId=1] Resetting offset for partition debezium_producer_mysql_statuses-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[REPLACEME:9092 (id: 1 rack: use1-az1)], epoch=28}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,931 INFO || [Consumer clientId=1-statuses, groupId=1] Resetting offset for partition debezium_producer_mysql_statuses-4 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[REPLACEME:9092 (id: 1 rack: use1-az1)], epoch=28}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,937 INFO || [Consumer clientId=1-statuses, groupId=1] Resetting offset for partition debezium_producer_mysql_statuses-3 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[REPLACEME:9092 (id: 2 rack: use1-az2)], epoch=30}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:25,939 INFO || [Consumer clientId=1-statuses, groupId=1] Resetting offset for partition debezium_producer_mysql_statuses-1 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[REPLACEME:9092 (id: 2 rack: use1-az2)], epoch=30}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:26,031 INFO || Finished reading KafkaBasedLog for topic debezium_producer_mysql_statuses [org.apache.kafka.connect.util.KafkaBasedLog]
2024-04-23 13:10:26,032 INFO || Started KafkaBasedLog for topic debezium_producer_mysql_statuses [org.apache.kafka.connect.util.KafkaBasedLog]
Apr 23, 2024 1:10:26 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider org.apache.kafka.connect.runtime.rest.resources.RootResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider org.apache.kafka.connect.runtime.rest.resources.RootResource will be ignored.
Apr 23, 2024 1:10:26 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource will be ignored.
Apr 23, 2024 1:10:26 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider org.apache.kafka.connect.runtime.rest.resources.InternalConnectResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider org.apache.kafka.connect.runtime.rest.resources.InternalConnectResource will be ignored.
Apr 23, 2024 1:10:26 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource will be ignored.
Apr 23, 2024 1:10:26 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider org.apache.kafka.connect.runtime.rest.resources.LoggingResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider org.apache.kafka.connect.runtime.rest.resources.LoggingResource will be ignored.
2024-04-23 13:10:26,043 INFO || Starting KafkaConfigBackingStore [org.apache.kafka.connect.storage.KafkaConfigBackingStore]
2024-04-23 13:10:26,043 INFO || Starting KafkaBasedLog with topic debezium_producer_mysql_configs [org.apache.kafka.connect.util.KafkaBasedLog]
2024-04-23 13:10:26,099 INFO || ProducerConfig values: acks = -1 auto.include.jmx.reporter = true batch.size = 16384 bootstrap.servers = [REPLACEME:9092, REPLACEME:9092] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = 1-configs compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 2147483647 enable.idempotence = false interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 1 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer [org.apache.kafka.clients.producer.ProducerConfig]
2024-04-23 13:10:26,111 INFO || These configurations '[group.id, rest.advertised.port, task.shutdown.graceful.timeout.ms, plugin.path, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, offset.storage.topic, value.converter, key.converter, config.storage.topic, metrics.context.connect.group.id, rest.advertised.host.name, status.storage.topic, rest.host.name, offset.flush.timeout.ms, config.storage.replication.factor, offset.flush.interval.ms, rest.port, key.converter.schemas.enable, value.converter.schemas.enable, offset.storage.replication.factor]' were supplied but are not used yet. [org.apache.kafka.clients.producer.ProducerConfig]
2024-04-23 13:10:26,112 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:26,125 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:26,125 INFO || Kafka startTimeMs: 1713877826112 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:26,125 INFO || ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [REPLACEME:9092, REPLACEME:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = 1-configs client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = 1 group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 45000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer [org.apache.kafka.clients.consumer.ConsumerConfig]
2024-04-23 13:10:26,129 INFO || These configurations '[rest.advertised.port, task.shutdown.graceful.timeout.ms, plugin.path, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, offset.storage.topic, value.converter, key.converter, config.storage.topic, metrics.context.connect.group.id, rest.advertised.host.name, status.storage.topic, rest.host.name, offset.flush.timeout.ms, config.storage.replication.factor, offset.flush.interval.ms, rest.port, key.converter.schemas.enable, value.converter.schemas.enable, offset.storage.replication.factor]' were supplied but are not used yet.
[org.apache.kafka.clients.consumer.ConsumerConfig]
2024-04-23 13:10:26,131 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:26,132 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:26,132 INFO || Kafka startTimeMs: 1713877826131 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:26,134 INFO || [Producer clientId=1-configs] Cluster ID: CjnbrFF3Q9-SPe1k8xi41Q [org.apache.kafka.clients.Metadata]
2024-04-23 13:10:26,142 INFO || [Consumer clientId=1-configs, groupId=1] Cluster ID: CjnbrFF3Q9-SPe1k8xi41Q [org.apache.kafka.clients.Metadata]
2024-04-23 13:10:26,143 INFO || [Consumer clientId=1-configs, groupId=1] Assigned to partition(s): debezium_producer_mysql_configs-0 [org.apache.kafka.clients.consumer.KafkaConsumer]
2024-04-23 13:10:26,143 INFO || [Consumer clientId=1-configs, groupId=1] Seeking to earliest offset of partition debezium_producer_mysql_configs-0 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:26,214 INFO || [Consumer clientId=1-configs, groupId=1] Resetting offset for partition debezium_producer_mysql_configs-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[REPLACEME:9092 (id: 1 rack: use1-az1)], epoch=28}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:26,301 INFO || Finished reading KafkaBasedLog for topic debezium_producer_mysql_configs [org.apache.kafka.connect.util.KafkaBasedLog]
2024-04-23 13:10:26,301 INFO || Started KafkaBasedLog for topic debezium_producer_mysql_configs [org.apache.kafka.connect.util.KafkaBasedLog]
2024-04-23 13:10:26,301 INFO || Started KafkaConfigBackingStore [org.apache.kafka.connect.storage.KafkaConfigBackingStore]
2024-04-23 13:10:26,301 INFO || [Worker clientId=connect-1, groupId=1] Herder started [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2024-04-23 13:10:26,345 INFO || [Worker clientId=connect-1, groupId=1] Cluster ID: CjnbrFF3Q9-SPe1k8xi41Q [org.apache.kafka.clients.Metadata]
2024-04-23 13:10:26,345 INFO || [Worker clientId=connect-1, groupId=1] Discovered group coordinator REPLACEME:9092 (id: 2147483645 rack: null) [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2024-04-23 13:10:26,423 INFO || [Worker clientId=connect-1, groupId=1] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2024-04-23 13:10:26,423 INFO || [Worker clientId=connect-1, groupId=1] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2024-04-23 13:10:26,431 INFO || [Worker clientId=connect-1, groupId=1] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2024-04-23 13:10:26,431 INFO || [Worker clientId=connect-1, groupId=1] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
Apr 23, 2024 1:10:26 PM org.glassfish.jersey.internal.Errors logErrors
WARNING: The following warnings have been detected:
WARNING: The (sub)resource method listLoggers in org.apache.kafka.connect.runtime.rest.resources.LoggingResource contains empty path annotation.
WARNING: The (sub)resource method createConnector in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method listConnectors in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method listConnectorPlugins in org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource contains empty path annotation.
WARNING: The (sub)resource method serverInfo in org.apache.kafka.connect.runtime.rest.resources.RootResource contains empty path annotation.
2024-04-23 13:10:26,546 INFO || Started o.e.j.s.ServletContextHandler@c28234f{/,null,AVAILABLE} [org.eclipse.jetty.server.handler.ContextHandler]
2024-04-23 13:10:26,547 INFO || REST resources initialized; server is started and ready to handle requests [org.apache.kafka.connect.runtime.rest.RestServer]
2024-04-23 13:10:26,547 INFO || Kafka Connect started [org.apache.kafka.connect.runtime.Connect]
2024-04-23 13:10:29,433 INFO || [Worker clientId=connect-1, groupId=1] Successfully joined group with generation Generation{generationId=13, memberId='connect-1-c71f5517-5374-4253-beea-2053b7386ca3', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2024-04-23 13:10:29,449 INFO || [Worker clientId=connect-1, groupId=1] Successfully synced group in generation Generation{generationId=13, memberId='connect-1-c71f5517-5374-4253-beea-2053b7386ca3', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2024-04-23 13:10:29,450 INFO || [Worker clientId=connect-1, groupId=1] Joined group at generation 13 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-1-c71f5517-5374-4253-beea-2053b7386ca3', leaderUrl='http://10.1.11.60:8083/', offset=13828, connectorIds=[mysql-connector], taskIds=[mysql-connector-0], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2024-04-23 13:10:29,450 WARN || [Worker clientId=connect-1, groupId=1] Catching up to assignment's config offset. [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2024-04-23 13:10:29,450 INFO || [Worker clientId=connect-1, groupId=1] Current config state offset -1 is behind group assignment 13828, reading to end of config log [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2024-04-23 13:10:29,453 INFO || [Worker clientId=connect-1, groupId=1] Finished reading to end of log and updated config snapshot, new config log offset: 13828 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2024-04-23 13:10:29,453 INFO || [Worker clientId=connect-1, groupId=1] Starting connectors and tasks using config offset 13828 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2024-04-23 13:10:29,454 INFO || [Worker clientId=connect-1, groupId=1] Starting task mysql-connector-0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2024-04-23 13:10:29,455 INFO || [Worker clientId=connect-1, groupId=1] Starting connector mysql-connector [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2024-04-23 13:10:29,458 INFO || Creating connector mysql-connector of type io.debezium.connector.mysql.MySqlConnector [org.apache.kafka.connect.runtime.Worker]
2024-04-23 13:10:29,459 INFO || Creating task mysql-connector-0 [org.apache.kafka.connect.runtime.Worker]
2024-04-23 13:10:29,461 INFO || SourceConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = mysql-connector offsets.storage.topic = null predicates = [] tasks.max = 1 topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null [org.apache.kafka.connect.runtime.SourceConnectorConfig]
2024-04-23 13:10:29,461 INFO || ConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = null name = mysql-connector predicates = [] tasks.max = 1 transforms = [] value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig]
2024-04-23 13:10:29,462 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = mysql-connector offsets.storage.topic = null predicates = [] tasks.max = 1 topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
2024-04-23 13:10:29,464 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = null name = mysql-connector predicates = [] tasks.max = 1 transforms = [] value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
2024-04-23 13:10:29,468 INFO || Instantiated connector mysql-connector with version 2.5.4.Final of type class io.debezium.connector.mysql.MySqlConnector [org.apache.kafka.connect.runtime.Worker]
2024-04-23 13:10:29,469 INFO || Finished creating connector mysql-connector [org.apache.kafka.connect.runtime.Worker]
2024-04-23 13:10:29,474 INFO || TaskConfig values: task.class = class io.debezium.connector.mysql.MySqlConnectorTask [org.apache.kafka.connect.runtime.TaskConfig]
2024-04-23 13:10:29,499 INFO || Instantiated task mysql-connector-0 with version 2.5.4.Final of type io.debezium.connector.mysql.MySqlConnectorTask [org.apache.kafka.connect.runtime.Worker]
2024-04-23 13:10:29,500 INFO || JsonConverterConfig values: converter.type = key decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = true [org.apache.kafka.connect.json.JsonConverterConfig]
2024-04-23 13:10:29,500 INFO || Set up the key converter class org.apache.kafka.connect.json.JsonConverter for task mysql-connector-0 using the worker config [org.apache.kafka.connect.runtime.Worker]
2024-04-23 13:10:29,501 INFO || JsonConverterConfig values: converter.type = value decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = true [org.apache.kafka.connect.json.JsonConverterConfig]
2024-04-23 13:10:29,501 INFO || Set up the value converter class org.apache.kafka.connect.json.JsonConverter for task mysql-connector-0 using the worker config [org.apache.kafka.connect.runtime.Worker]
2024-04-23 13:10:29,501 INFO || Set up the header converter class org.apache.kafka.connect.storage.SimpleHeaderConverter for task mysql-connector-0 using the worker config [org.apache.kafka.connect.runtime.Worker]
2024-04-23 13:10:29,505 INFO || SourceConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = mysql-connector offsets.storage.topic = null predicates = [] tasks.max = 1 topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null
[org.apache.kafka.connect.runtime.SourceConnectorConfig]
2024-04-23 13:10:29,505 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = mysql-connector offsets.storage.topic = null predicates = [] tasks.max = 1 topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
2024-04-23 13:10:29,506 INFO || Initializing: org.apache.kafka.connect.runtime.TransformationChain{} [org.apache.kafka.connect.runtime.Worker]
2024-04-23 13:10:29,506 INFO || ProducerConfig values: acks = -1 auto.include.jmx.reporter = true batch.size = 16384 bootstrap.servers = [REPLACEME:9092, REPLACEME:9092] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = connector-producer-mysql-connector-0 compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 2147483647 enable.idempotence = false interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer linger.ms = 0 max.block.ms = 9223372036854775807 max.in.flight.requests.per.connection = 1 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer [org.apache.kafka.clients.producer.ProducerConfig]
2024-04-23 13:10:29,510 INFO || These configurations '[metrics.context.connect.kafka.cluster.id, metrics.context.connect.group.id]' were supplied but are not used yet. [org.apache.kafka.clients.producer.ProducerConfig]
2024-04-23 13:10:29,511 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:29,511 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:29,511 INFO || Kafka startTimeMs: 1713877829511 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:29,519 INFO || [Worker clientId=connect-1, groupId=1] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2024-04-23 13:10:29,522 INFO || [Producer clientId=connector-producer-mysql-connector-0] Cluster ID: CjnbrFF3Q9-SPe1k8xi41Q [org.apache.kafka.clients.Metadata]
2024-04-23 13:10:29,524 INFO || SourceConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = mysql-connector offsets.storage.topic = null predicates = [] tasks.max = 1 topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null [org.apache.kafka.connect.runtime.SourceConnectorConfig]
2024-04-23 13:10:29,524 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = mysql-connector offsets.storage.topic
= null predicates = [] tasks.max = 1 topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
2024-04-23 13:10:29,615 INFO || Starting MySqlConnectorTask with configuration: [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:29,616 INFO ||    connector.class = io.debezium.connector.mysql.MySqlConnector [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:29,616 INFO ||    snapshot.locking.mode = none [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:29,616 INFO ||    database.user = REPLACEME [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:29,616 INFO ||    database.server.id = 123456 [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:29,616 INFO ||    database.server.name = REPLACEME [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:29,616 INFO ||    schema.history.internal.kafka.bootstrap.servers = REPLACEME:9092,REPLACEME:9092 [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:29,616 INFO ||    event.processing.failure.handling.mode = ignore [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:29,616 INFO ||    database.port = 3306 [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:29,616 INFO ||    include.schema.changes = true [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:29,616 INFO ||    topic.prefix = REPLACEME [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:29,616 INFO ||    schema.history.internal.kafka.topic = mysql-REPLACEME [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:29,616 INFO ||    task.class = io.debezium.connector.mysql.MySqlConnectorTask [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:29,616 INFO ||    database.hostname = REPLACEME [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:29,616 INFO ||    database.password = ******** [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:29,616 INFO ||    name = mysql-connector [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:29,616 INFO ||    table.include.list = REPLACEME.ticket, REPLACEME.entity_subscription, REPLACEME.channel, REPLACEME.channel_member, REPLACEME.channel_contact, REPLACEME.thread_swarm_timers, REPLACEME.member_mention, REPLACEME.ticket_last_note, REPLACEME.ticket_conjunctions, REPLACEME.ticket_site, REPLACEME.ticket_agreement, REPLACEME.ticket_ticket_type, REPLACEME.ticket_service_level_agreement,REPLACEME.ticket_ticket_category, REPLACEME.time_entry, REPLACEME.schedule_entries, REPLACEME.note [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:29,616 INFO ||    include.query = true [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:29,617 INFO ||    snapshot.mode = schema_only [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:29,617 INFO ||    database.include.list = REPLACEME [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:29,617 INFO ||    snapshot.lock.timeout.ms = 60000 [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:29,617 INFO ||    connect.timeout.ms = 60000 [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:29,622 INFO || Loading the custom source info struct maker plugin: io.debezium.connector.mysql.MySqlSourceInfoStructMaker [io.debezium.config.CommonConnectorConfig]
2024-04-23 13:10:29,632 INFO || Using io.debezium.connector.mysql.strategy.mysql.MySqlConnectorAdapter [io.debezium.connector.mysql.MySqlConnectorConfig]
2024-04-23 13:10:29,635 INFO || Loading the custom topic naming strategy plugin: io.debezium.schema.DefaultTopicNamingStrategy [io.debezium.config.CommonConnectorConfig]
2024-04-23 13:10:30,515 INFO || Found previous partition offset MySqlPartition [sourcePartition={server=REPLACEME}]: {transaction_id=null, file=mysql-bin-changelog.000502, pos=72675853, row=1, event=3} [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:30,619 INFO || KafkaSchemaHistory Consumer config: {key.deserializer=org.apache.kafka.common.serialization.StringDeserializer, value.deserializer=org.apache.kafka.common.serialization.StringDeserializer, enable.auto.commit=false, group.id=REPLACEME-schemahistory, bootstrap.servers=REPLACEME:9092,REPLACEME:9092, fetch.min.bytes=1, session.timeout.ms=10000, auto.offset.reset=earliest, client.id=REPLACEME-schemahistory} [io.debezium.storage.kafka.history.KafkaSchemaHistory]
2024-04-23 13:10:30,619 INFO || KafkaSchemaHistory Producer config: {retries=1, value.serializer=org.apache.kafka.common.serialization.StringSerializer, acks=1, batch.size=32768, max.block.ms=10000, bootstrap.servers=REPLACEME:9092,REPLACEME:9092, buffer.memory=1048576, key.serializer=org.apache.kafka.common.serialization.StringSerializer, client.id=REPLACEME-schemahistory, linger.ms=0} [io.debezium.storage.kafka.history.KafkaSchemaHistory]
2024-04-23 13:10:30,620 INFO || Requested thread factory for connector MySqlConnector, id = REPLACEME named = db-history-config-check [io.debezium.util.Threads]
2024-04-23 13:10:30,622 INFO || Idempotence will be disabled because acks is set to 1, not set to 'all'.
[org.apache.kafka.clients.producer.ProducerConfig] 2024-04-23 13:10:30,622 INFO || ProducerConfig values: acks = 1 auto.include.jmx.reporter = true batch.size = 32768 bootstrap.servers = [REPLACEME:9092, REPLACEME:9092] buffer.memory = 1048576 client.dns.lookup = use_all_dns_ips client.id = REPLACEME-schemahistory compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = false interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 10000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 1 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 
10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer [org.apache.kafka.clients.producer.ProducerConfig] 2024-04-23 13:10:30,626 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser] 2024-04-23 13:10:30,626 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser] 2024-04-23 13:10:30,626 INFO || Kafka startTimeMs: 1713877830626 [org.apache.kafka.common.utils.AppInfoParser] 2024-04-23 13:10:30,630 INFO || [Producer clientId=REPLACEME-schemahistory] Cluster ID: CjnbrFF3Q9-SPe1k8xi41Q [org.apache.kafka.clients.Metadata] 2024-04-23 13:10:30,717 INFO || Closing connection before starting schema recovery [io.debezium.connector.mysql.MySqlConnectorTask] 2024-04-23 13:10:30,721 INFO || Connection gracefully closed [io.debezium.jdbc.JdbcConnection] 2024-04-23 13:10:30,722 INFO || ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 
auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [REPLACEME:9092, REPLACEME:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = REPLACEME-schemahistory client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = REPLACEME-schemahistory group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 
sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer [org.apache.kafka.clients.consumer.ConsumerConfig] 2024-04-23 13:10:30,725 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser] 2024-04-23 13:10:30,725 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser] 2024-04-23 13:10:30,725 INFO || Kafka startTimeMs: 1713877830725 [org.apache.kafka.common.utils.AppInfoParser] 2024-04-23 13:10:30,728 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] Cluster ID: CjnbrFF3Q9-SPe1k8xi41Q [org.apache.kafka.clients.Metadata] 2024-04-23 13:10:30,733 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] Resetting generation and member id due to: consumer 
pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2024-04-23 13:10:30,733 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2024-04-23 13:10:30,796 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics] 2024-04-23 13:10:30,797 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics] 2024-04-23 13:10:30,797 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics] 2024-04-23 13:10:30,798 INFO || App info kafka.consumer for REPLACEME-schemahistory unregistered [org.apache.kafka.common.utils.AppInfoParser] 2024-04-23 13:10:30,799 INFO || ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [REPLACEME:9092, REPLACEME:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = REPLACEME-schemahistory client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = REPLACEME-schemahistory group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class 
org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX 
ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer [org.apache.kafka.clients.consumer.ConsumerConfig] 2024-04-23 13:10:30,802 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser] 2024-04-23 13:10:30,802 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser] 2024-04-23 13:10:30,802 INFO || Kafka startTimeMs: 1713877830802 [org.apache.kafka.common.utils.AppInfoParser] 2024-04-23 13:10:30,802 INFO || Creating thread debezium-mysqlconnector-REPLACEME-db-history-config-check [io.debezium.util.Threads] 2024-04-23 13:10:30,805 INFO || AdminClientConfig values: auto.include.jmx.reporter = true bootstrap.servers = [REPLACEME:9092, REPLACEME:9092] client.dns.lookup = use_all_dns_ips client.id = REPLACEME-schemahistory-topic-check connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 1 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds 
= 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS [org.apache.kafka.clients.admin.AdminClientConfig] 2024-04-23 13:10:30,810 INFO || These configurations '[value.serializer, acks, batch.size, max.block.ms, buffer.memory, key.serializer, linger.ms]' were supplied but are not used yet. 
[org.apache.kafka.clients.admin.AdminClientConfig] 2024-04-23 13:10:30,810 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser] 2024-04-23 13:10:30,810 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser] 2024-04-23 13:10:30,810 INFO || Kafka startTimeMs: 1713877830810 [org.apache.kafka.common.utils.AppInfoParser] 2024-04-23 13:10:30,811 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] Cluster ID: CjnbrFF3Q9-SPe1k8xi41Q [org.apache.kafka.clients.Metadata] 2024-04-23 13:10:30,831 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] Resetting generation and member id due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2024-04-23 13:10:30,831 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2024-04-23 13:10:30,831 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics] 2024-04-23 13:10:30,832 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics] 2024-04-23 13:10:30,832 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics] 2024-04-23 13:10:30,833 INFO || App info kafka.consumer for REPLACEME-schemahistory unregistered [org.apache.kafka.common.utils.AppInfoParser] 2024-04-23 13:10:30,834 INFO || ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [REPLACEME:9092, REPLACEME:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = REPLACEME-schemahistory client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false exclude.internal.topics = true 
fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = REPLACEME-schemahistory group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub 
sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer [org.apache.kafka.clients.consumer.ConsumerConfig] 2024-04-23 13:10:30,836 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser] 2024-04-23 13:10:30,836 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser] 2024-04-23 13:10:30,836 INFO || Kafka startTimeMs: 1713877830836 [org.apache.kafka.common.utils.AppInfoParser] 2024-04-23 13:10:30,839 INFO || Database schema history topic 'mysql-REPLACEME' has correct settings [io.debezium.storage.kafka.history.KafkaSchemaHistory] 2024-04-23 13:10:30,844 INFO || App info kafka.admin.client for REPLACEME-schemahistory-topic-check unregistered [org.apache.kafka.common.utils.AppInfoParser] 2024-04-23 13:10:30,896 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics] 2024-04-23 13:10:30,896 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics] 2024-04-23 13:10:30,844 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] Cluster ID: CjnbrFF3Q9-SPe1k8xi41Q 
[org.apache.kafka.clients.Metadata] 2024-04-23 13:10:30,898 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] Resetting generation and member id due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2024-04-23 13:10:30,899 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2024-04-23 13:10:30,898 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics] 2024-04-23 13:10:30,902 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics] 2024-04-23 13:10:30,902 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics] 2024-04-23 13:10:30,902 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics] 2024-04-23 13:10:30,903 INFO || App info kafka.consumer for REPLACEME-schemahistory unregistered [org.apache.kafka.common.utils.AppInfoParser] 2024-04-23 13:10:30,904 INFO || ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [REPLACEME:9092, REPLACEME:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = REPLACEME-schemahistory client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = REPLACEME-schemahistory group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 
1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null 
ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer [org.apache.kafka.clients.consumer.ConsumerConfig] 2024-04-23 13:10:30,907 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser] 2024-04-23 13:10:30,907 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser] 2024-04-23 13:10:30,907 INFO || Kafka startTimeMs: 1713877830907 [org.apache.kafka.common.utils.AppInfoParser] 2024-04-23 13:10:30,920 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] Cluster ID: CjnbrFF3Q9-SPe1k8xi41Q [org.apache.kafka.clients.Metadata] 2024-04-23 13:10:30,925 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] Resetting generation and member id due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2024-04-23 13:10:30,925 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2024-04-23 13:10:30,925 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics] 2024-04-23 13:10:30,925 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics] 2024-04-23 13:10:30,925 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics] 2024-04-23 13:10:30,926 INFO || App info kafka.consumer for REPLACEME-schemahistory unregistered 
[org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:30,926 INFO || Started database schema history recovery [io.debezium.relational.history.SchemaHistoryMetrics]
2024-04-23 13:10:30,933 INFO || ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [REPLACEME:9092, REPLACEME:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = REPLACEME-schemahistory client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = REPLACEME-schemahistory group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer [org.apache.kafka.clients.consumer.ConsumerConfig]
2024-04-23 13:10:30,935 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:30,935 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:30,935 INFO || Kafka startTimeMs: 1713877830935 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:30,935 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] Subscribed to topic(s): mysql-REPLACEME [org.apache.kafka.clients.consumer.KafkaConsumer]
2024-04-23 13:10:30,939 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] Cluster ID: CjnbrFF3Q9-SPe1k8xi41Q [org.apache.kafka.clients.Metadata]
2024-04-23 13:10:30,944 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] Discovered group coordinator REPLACEME:9092 (id: 2147483646 rack: null) [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2024-04-23 13:10:30,944 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] (Re-)joining group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2024-04-23 13:10:30,951 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] Request joining group due to: need to re-join with the given member-id: REPLACEME-schemahistory-334eb0f8-b995-4cf8-92df-ff4bd71cc6e2 [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2024-04-23 13:10:30,955 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2024-04-23 13:10:30,956 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] (Re-)joining group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2024-04-23 13:10:34,004 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] Successfully joined group with generation Generation{generationId=1, memberId='REPLACEME-schemahistory-334eb0f8-b995-4cf8-92df-ff4bd71cc6e2', protocol='range'} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2024-04-23 13:10:34,010 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] Finished assignment for group at generation 1: {REPLACEME-schemahistory-334eb0f8-b995-4cf8-92df-ff4bd71cc6e2=Assignment(partitions=[mysql-REPLACEME-0])} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2024-04-23 13:10:34,030 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] Successfully synced group in generation Generation{generationId=1, memberId='REPLACEME-schemahistory-334eb0f8-b995-4cf8-92df-ff4bd71cc6e2', protocol='range'} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2024-04-23 13:10:34,031 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] Notifying assignor about the new Assignment(partitions=[mysql-REPLACEME-0]) [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2024-04-23 13:10:34,031 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] Adding newly assigned partitions: mysql-REPLACEME-0 [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2024-04-23 13:10:34,034 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] Found no committed offset for partition mysql-REPLACEME-0 [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
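The join sequence above ((Re-)joining, the MemberIdRequiredException retry, then the generation-1 assignment) is the normal two-round first join of a consumer group, not an error. When troubleshooting rebalances it can help to pull the generation and member id out of such lines with a small script; a sketch, assuming the exact `Generation{...}` format printed by these 3.6.x clients:

```python
import re

# Sample line copied from the log above.
line = ("Successfully joined group with generation Generation{generationId=1, "
        "memberId='REPLACEME-schemahistory-334eb0f8-b995-4cf8-92df-ff4bd71cc6e2', "
        "protocol='range'}")

# Pattern mirrors the Generation{...} toString() format seen in the log;
# the field order is assumed stable across these client versions.
GEN_RE = re.compile(
    r"Generation\{generationId=(\d+), memberId='([^']+)', protocol='([^']+)'\}")

m = GEN_RE.search(line)
gen_id, member_id, protocol = int(m.group(1)), m.group(2), m.group(3)
print(gen_id, protocol)  # 1 range
```

Tracking `generationId` across such lines makes it easy to see how often the group is rebalancing.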
2024-04-23 13:10:34,036 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] Resetting offset for partition mysql-REPLACEME-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[REPLACEME:9092 (id: 2 rack: use1-az2)], epoch=116}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2024-04-23 13:10:34,048 INFO || Database schema history recovery in progress, recovered 1 records [io.debezium.relational.history.SchemaHistoryMetrics]
2024-04-23 13:10:34,729 INFO || Already applied 1 database changes [io.debezium.relational.history.SchemaHistoryMetrics]
2024-04-23 13:10:34,729 INFO || Database schema history recovery in progress, recovered 2 records [io.debezium.relational.history.SchemaHistoryMetrics]
2024-04-23 13:10:36,621 INFO || Already applied 157 database changes [io.debezium.relational.history.SchemaHistoryMetrics]
2024-04-23 13:10:36,623 INFO || Database schema history recovery in progress, recovered 158 records [io.debezium.relational.history.SchemaHistoryMetrics]
2024-04-23 13:10:37,244 INFO || 127.0.0.1 - - [23/Apr/2024:13:10:36 +0000] "GET /connectors HTTP/1.1" 200 19 "-" "curl/7.85.0" 318 [org.apache.kafka.connect.runtime.rest.RestServer]
2024-04-23 13:10:37,339 INFO || Loading the custom source info struct maker plugin: io.debezium.connector.mysql.MySqlSourceInfoStructMaker [io.debezium.config.CommonConnectorConfig]
2024-04-23 13:10:37,340 INFO || Using io.debezium.connector.mysql.strategy.mysql.MySqlConnectorAdapter [io.debezium.connector.mysql.MySqlConnectorConfig]
2024-04-23 13:10:37,496 INFO || Successfully tested connection for jdbc:mysql://REPLACEME:3306/?useInformationSchema=true&nullCatalogMeansCurrent=false&useUnicode=true&characterEncoding=UTF-8&characterSetResults=UTF-8&zeroDateTimeBehavior=CONVERT_TO_NULL&connectTimeout=60000 with user 'REPLACEME' [io.debezium.connector.mysql.MySqlConnector]
2024-04-23 13:10:37,525 INFO || Connection gracefully closed [io.debezium.jdbc.JdbcConnection]
2024-04-23 13:10:37,528 INFO || AbstractConfig values: [org.apache.kafka.common.config.AbstractConfig]
2024-04-23 13:10:37,537 INFO || [Worker clientId=connect-1, groupId=1] Connector mysql-connector config updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2024-04-23 13:10:37,541 INFO || [Worker clientId=connect-1, groupId=1] Handling connector-only config update by restarting connector mysql-connector [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2024-04-23 13:10:37,542 INFO || Stopping connector mysql-connector [org.apache.kafka.connect.runtime.Worker]
2024-04-23 13:10:37,542 INFO || Scheduled shutdown for WorkerConnector{id=mysql-connector} [org.apache.kafka.connect.runtime.WorkerConnector]
2024-04-23 13:10:37,544 INFO || Completed shutdown for WorkerConnector{id=mysql-connector} [org.apache.kafka.connect.runtime.WorkerConnector]
2024-04-23 13:10:37,546 INFO || [Worker clientId=connect-1, groupId=1] Starting connector mysql-connector [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2024-04-23 13:10:37,546 INFO || Creating connector mysql-connector of type io.debezium.connector.mysql.MySqlConnector [org.apache.kafka.connect.runtime.Worker]
2024-04-23 13:10:37,547 INFO || SourceConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = mysql-connector offsets.storage.topic = null predicates = [] tasks.max = 1 topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null [org.apache.kafka.connect.runtime.SourceConnectorConfig]
2024-04-23 13:10:37,547 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = mysql-connector offsets.storage.topic = null predicates = [] tasks.max = 1 topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
2024-04-23 13:10:37,547 INFO || Instantiated connector mysql-connector with version 2.5.4.Final of type class io.debezium.connector.mysql.MySqlConnector [org.apache.kafka.connect.runtime.Worker]
2024-04-23 13:10:37,548 INFO || Finished creating connector mysql-connector [org.apache.kafka.connect.runtime.Worker]
2024-04-23 13:10:37,616 INFO || SourceConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = mysql-connector offsets.storage.topic = null predicates = [] tasks.max = 1 topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null [org.apache.kafka.connect.runtime.SourceConnectorConfig]
2024-04-23 13:10:37,624 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = mysql-connector offsets.storage.topic = null predicates = [] tasks.max = 1 topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
2024-04-23 13:10:37,641 INFO || [Worker clientId=connect-1, groupId=1] Tasks [mysql-connector-0] configs updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2024-04-23 13:10:37,719 INFO || [Worker clientId=connect-1, groupId=1] Handling task config update by stopping tasks [mysql-connector-0], which will be restarted after rebalance if still assigned to this worker [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2024-04-23 13:10:37,725 INFO || Stopping task mysql-connector-0 [org.apache.kafka.connect.runtime.Worker]
2024-04-23 13:10:37,732 INFO || 127.0.0.1 - - [23/Apr/2024:13:10:37 +0000] "PUT /connectors/mysql-connector/config HTTP/1.1" 200 1470 "-" "curl/7.85.0" 509 [org.apache.kafka.connect.runtime.rest.RestServer]
2024-04-23 13:10:38,618 INFO || Already applied 329 database changes [io.debezium.relational.history.SchemaHistoryMetrics]
2024-04-23 13:10:38,618 INFO || Database schema history recovery in progress, recovered 330 records [io.debezium.relational.history.SchemaHistoryMetrics]
2024-04-23 13:10:40,627 INFO || Already applied 567 database changes [io.debezium.relational.history.SchemaHistoryMetrics]
2024-04-23 13:10:40,627 INFO || Database schema history recovery in progress, recovered 568 records [io.debezium.relational.history.SchemaHistoryMetrics]
2024-04-23 13:10:42,618 INFO || Already applied 1051 database changes [io.debezium.relational.history.SchemaHistoryMetrics]
2024-04-23 13:10:42,619 INFO || Database schema history recovery in progress, recovered 1052 records [io.debezium.relational.history.SchemaHistoryMetrics]
2024-04-23 13:10:44,622 INFO || Already applied 1861 database changes [io.debezium.relational.history.SchemaHistoryMetrics]
2024-04-23 13:10:44,622 INFO || Database schema history recovery in progress, recovered 1862 records [io.debezium.relational.history.SchemaHistoryMetrics]
2024-04-23 13:10:46,621 INFO || Already applied 2846 database changes [io.debezium.relational.history.SchemaHistoryMetrics]
2024-04-23 13:10:46,621 INFO || Database schema history recovery in progress, recovered 2847 records [io.debezium.relational.history.SchemaHistoryMetrics]
2024-04-23 13:10:47,726 ERROR || Graceful stop of task mysql-connector-0 failed. [org.apache.kafka.connect.runtime.Worker]
2024-04-23 13:10:47,728 INFO || [Producer clientId=connector-producer-mysql-connector-0] Closing the Kafka producer with timeoutMillis = 0 ms. [org.apache.kafka.clients.producer.KafkaProducer]
2024-04-23 13:10:47,728 INFO || [Producer clientId=connector-producer-mysql-connector-0] Proceeding to force close the producer since pending requests could not be completed within timeout 0 ms. [org.apache.kafka.clients.producer.KafkaProducer]
2024-04-23 13:10:47,732 INFO || [Worker clientId=connect-1, groupId=1] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2024-04-23 13:10:47,732 INFO || [Worker clientId=connect-1, groupId=1] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2024-04-23 13:10:47,733 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
2024-04-23 13:10:47,733 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
2024-04-23 13:10:47,733 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
2024-04-23 13:10:47,733 INFO || App info kafka.producer for connector-producer-mysql-connector-0 unregistered [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:47,734 INFO || [Worker clientId=connect-1, groupId=1] Successfully joined group with generation Generation{generationId=14, memberId='connect-1-c71f5517-5374-4253-beea-2053b7386ca3', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2024-04-23 13:10:47,741 INFO || [Worker clientId=connect-1, groupId=1] Successfully synced group in generation Generation{generationId=14, memberId='connect-1-c71f5517-5374-4253-beea-2053b7386ca3', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2024-04-23 13:10:47,741 INFO || [Worker clientId=connect-1, groupId=1] Joined group at generation 14 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-1-c71f5517-5374-4253-beea-2053b7386ca3', leaderUrl='http://10.1.11.60:8083/', offset=13831, connectorIds=[mysql-connector], taskIds=[mysql-connector-0], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2024-04-23 13:10:47,741 INFO || [Worker clientId=connect-1, groupId=1] Starting connectors and tasks using config offset 13831 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2024-04-23 13:10:47,742 INFO || [Worker clientId=connect-1, groupId=1] Starting task mysql-connector-0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2024-04-23 13:10:47,742 INFO || Creating task mysql-connector-0 [org.apache.kafka.connect.runtime.Worker]
2024-04-23 13:10:47,743 INFO || ConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = null name = mysql-connector predicates = [] tasks.max = 1 transforms = [] value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig]
2024-04-23 13:10:47,743 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = null name = mysql-connector predicates = [] tasks.max = 1 transforms = [] value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
2024-04-23 13:10:47,744 INFO || TaskConfig values: task.class = class io.debezium.connector.mysql.MySqlConnectorTask [org.apache.kafka.connect.runtime.TaskConfig]
2024-04-23 13:10:47,744 INFO || Instantiated task mysql-connector-0 with version 2.5.4.Final of type io.debezium.connector.mysql.MySqlConnectorTask [org.apache.kafka.connect.runtime.Worker]
2024-04-23 13:10:47,745 INFO || JsonConverterConfig values: converter.type = key decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = true [org.apache.kafka.connect.json.JsonConverterConfig]
2024-04-23 13:10:47,745 INFO || Set up the key converter class org.apache.kafka.connect.json.JsonConverter for task mysql-connector-0 using the worker config [org.apache.kafka.connect.runtime.Worker]
2024-04-23 13:10:47,745 INFO || JsonConverterConfig values: converter.type = value decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = true [org.apache.kafka.connect.json.JsonConverterConfig]
2024-04-23 13:10:47,745 INFO || Set up the value converter class org.apache.kafka.connect.json.JsonConverter for task mysql-connector-0 using the worker config [org.apache.kafka.connect.runtime.Worker]
2024-04-23 13:10:47,745 INFO || Set up the header converter class org.apache.kafka.connect.storage.SimpleHeaderConverter for task mysql-connector-0 using the worker config [org.apache.kafka.connect.runtime.Worker]
2024-04-23 13:10:47,746 INFO || SourceConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = mysql-connector offsets.storage.topic = null predicates = [] tasks.max = 1 topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null [org.apache.kafka.connect.runtime.SourceConnectorConfig]
2024-04-23 13:10:47,746 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = mysql-connector offsets.storage.topic = null predicates = [] tasks.max = 1 topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
2024-04-23 13:10:47,746 INFO || Initializing: org.apache.kafka.connect.runtime.TransformationChain{} [org.apache.kafka.connect.runtime.Worker]
2024-04-23 13:10:47,746 INFO || ProducerConfig values: acks = -1 auto.include.jmx.reporter = true batch.size = 16384 bootstrap.servers = [REPLACEME:9092, REPLACEME:9092] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = connector-producer-mysql-connector-0 compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 2147483647 enable.idempotence = false interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer linger.ms = 0 max.block.ms = 9223372036854775807 max.in.flight.requests.per.connection = 1 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer [org.apache.kafka.clients.producer.ProducerConfig]
2024-04-23 13:10:47,749 INFO || These configurations '[metrics.context.connect.kafka.cluster.id, metrics.context.connect.group.id]' were supplied but are not used yet. [org.apache.kafka.clients.producer.ProducerConfig]
2024-04-23 13:10:47,749 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:47,749 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:47,749 INFO || Kafka startTimeMs: 1713877847749 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:10:47,750 INFO || [Worker clientId=connect-1, groupId=1] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2024-04-23 13:10:47,752 INFO || Starting MySqlConnectorTask with configuration: [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:47,752 INFO || connector.class = io.debezium.connector.mysql.MySqlConnector [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:47,752 INFO || snapshot.locking.mode = none [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:47,752 INFO || database.user = REPLACEME [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:47,752 INFO || database.server.id = 123456 [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:47,752 INFO || database.server.name = REPLACEME [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:47,752 INFO || schema.history.internal.kafka.bootstrap.servers = REPLACEME:9092,REPLACEME:9092 [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:47,752 INFO || event.processing.failure.handling.mode = ignore [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:47,753 INFO || database.port = 3306 [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:47,753 INFO || include.schema.changes = true [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:47,753 INFO || topic.prefix = REPLACEME [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:47,753 INFO || schema.history.internal.kafka.topic = debezium_producer_mysql_schema_history [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:47,753 INFO || task.class = io.debezium.connector.mysql.MySqlConnectorTask [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:47,753 INFO || database.hostname = REPLACEME [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:47,753 INFO || database.password = ******** [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:47,753 INFO || name = mysql-connector [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:47,753 INFO || table.include.list = REPLACEME.ticket, REPLACEME.entity_subscription, REPLACEME.channel, REPLACEME.channel_member, REPLACEME.channel_contact, REPLACEME.thread_swarm_timers, REPLACEME.member_mention, REPLACEME.ticket_last_note, REPLACEME.ticket_conjunctions, REPLACEME.ticket_site, REPLACEME.ticket_agreement, REPLACEME.ticket_ticket_type, REPLACEME.ticket_service_level_agreement,REPLACEME.ticket_ticket_category, REPLACEME.time_entry, REPLACEME.schedule_entries, REPLACEME.note [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:47,753 INFO || include.query = true [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:47,753 INFO || snapshot.mode = schema_only [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:47,754 INFO || database.include.list = REPLACEME [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:47,754 INFO || snapshot.lock.timeout.ms = 60000 [io.debezium.connector.common.BaseSourceTask]
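The `Starting MySqlConnectorTask with configuration:` lines above echo the connector configuration that the earlier `PUT /connectors/mysql-connector/config` request applied. Re-assembled as the JSON body such a request would carry (a sketch: values are the logged ones, the REPLACEME placeholders and masked password are kept as-is, and the long `table.include.list` is omitted for brevity):

```python
import json

# Connector config re-assembled from the logged BaseSourceTask lines.
config = {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "snapshot.mode": "schema_only",
    "snapshot.locking.mode": "none",
    "snapshot.lock.timeout.ms": "60000",
    "connect.timeout.ms": "60000",
    "database.hostname": "REPLACEME",
    "database.port": "3306",
    "database.user": "REPLACEME",
    "database.password": "********",   # masked in the log; not the real value
    "database.server.id": "123456",
    "database.server.name": "REPLACEME",
    "database.include.list": "REPLACEME",
    "topic.prefix": "REPLACEME",
    "include.schema.changes": "true",
    "include.query": "true",
    "event.processing.failure.handling.mode": "ignore",
    "schema.history.internal.kafka.bootstrap.servers": "REPLACEME:9092,REPLACEME:9092",
    "schema.history.internal.kafka.topic": "debezium_producer_mysql_schema_history",
}

# This dict, serialized, is the body a PUT /connectors/mysql-connector/config
# request accepts.
body = json.dumps(config, indent=2)
```

With `snapshot.mode = schema_only` the connector captures only table structure at first start, which is why the log below shows schema history recovery rather than a data snapshot.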
2024-04-23 13:10:47,754 INFO || connect.timeout.ms = 60000 [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:47,754 INFO || Loading the custom source info struct maker plugin: io.debezium.connector.mysql.MySqlSourceInfoStructMaker [io.debezium.config.CommonConnectorConfig]
2024-04-23 13:10:47,755 INFO || Using io.debezium.connector.mysql.strategy.mysql.MySqlConnectorAdapter [io.debezium.connector.mysql.MySqlConnectorConfig]
2024-04-23 13:10:47,755 INFO || Loading the custom topic naming strategy plugin: io.debezium.schema.DefaultTopicNamingStrategy [io.debezium.config.CommonConnectorConfig]
2024-04-23 13:10:47,813 INFO || [Producer clientId=connector-producer-mysql-connector-0] Cluster ID: CjnbrFF3Q9-SPe1k8xi41Q [org.apache.kafka.clients.Metadata]
2024-04-23 13:10:47,927 INFO || Found previous partition offset MySqlPartition [sourcePartition={server=REPLACEME}]: {transaction_id=null, file=mysql-bin-changelog.000502, pos=72675853, row=1, event=3} [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:10:47,941 INFO || KafkaSchemaHistory Consumer config: {key.deserializer=org.apache.kafka.common.serialization.StringDeserializer, value.deserializer=org.apache.kafka.common.serialization.StringDeserializer, enable.auto.commit=false, group.id=REPLACEME-schemahistory, bootstrap.servers=REPLACEME:9092,REPLACEME:9092, fetch.min.bytes=1, session.timeout.ms=10000, auto.offset.reset=earliest, client.id=REPLACEME-schemahistory} [io.debezium.storage.kafka.history.KafkaSchemaHistory]
2024-04-23 13:10:47,941 INFO || KafkaSchemaHistory Producer config: {retries=1, value.serializer=org.apache.kafka.common.serialization.StringSerializer, acks=1, batch.size=32768, max.block.ms=10000, bootstrap.servers=REPLACEME:9092,REPLACEME:9092, buffer.memory=1048576, key.serializer=org.apache.kafka.common.serialization.StringSerializer, client.id=REPLACEME-schemahistory, linger.ms=0} [io.debezium.storage.kafka.history.KafkaSchemaHistory]
2024-04-23 13:10:47,941 INFO || Requested thread factory for connector MySqlConnector, id = REPLACEME named = db-history-config-check [io.debezium.util.Threads]
2024-04-23 13:10:47,942 WARN || Unable to register metrics as an old set with the same name exists, retrying in PT5S (attempt 1 out of 12) [io.debezium.pipeline.JmxUtils]
2024-04-23 13:10:48,618 INFO || Already applied 3780 database changes [io.debezium.relational.history.SchemaHistoryMetrics]
2024-04-23 13:10:48,620 INFO || Database schema history recovery in progress, recovered 3781 records [io.debezium.relational.history.SchemaHistoryMetrics]
2024-04-23 13:10:50,618 INFO || Database schema history recovery in progress, recovered 4775 records [io.debezium.relational.history.SchemaHistoryMetrics]
2024-04-23 13:10:50,620 INFO || Already applied 4775 database changes [io.debezium.relational.history.SchemaHistoryMetrics]
2024-04-23 13:10:52,618 INFO || Already applied 6031 database changes [io.debezium.relational.history.SchemaHistoryMetrics]
2024-04-23 13:10:52,619 INFO || Database schema history recovery in progress, recovered 6032 records [io.debezium.relational.history.SchemaHistoryMetrics]
2024-04-23 13:10:52,942 WARN || Unable to register metrics as an old set with the same name exists, retrying in PT5S (attempt 2 out of 12) [io.debezium.pipeline.JmxUtils]
2024-04-23 13:10:54,620 INFO || Already applied 7399 database changes [io.debezium.relational.history.SchemaHistoryMetrics]
2024-04-23 13:10:54,620 INFO || Database schema history recovery in progress, recovered 7400 records [io.debezium.relational.history.SchemaHistoryMetrics]
2024-04-23 13:10:56,618 INFO || Already applied 8868 database changes [io.debezium.relational.history.SchemaHistoryMetrics]
2024-04-23 13:10:56,618 INFO || Database schema history recovery in progress, recovered 8869 records [io.debezium.relational.history.SchemaHistoryMetrics]
2024-04-23 13:10:57,943 WARN || Unable to register metrics as an old set with the same name exists, retrying in PT5S (attempt 3 out of 12) [io.debezium.pipeline.JmxUtils]
2024-04-23 13:10:58,618 INFO || Already applied 9952 database changes [io.debezium.relational.history.SchemaHistoryMetrics]
2024-04-23 13:10:58,618 INFO || Database schema history recovery in progress, recovered 9953 records [io.debezium.relational.history.SchemaHistoryMetrics]
2024-04-23 13:11:00,620 INFO || Already applied 11511 database changes [io.debezium.relational.history.SchemaHistoryMetrics]
2024-04-23 13:11:00,620 INFO || Database schema history recovery in progress, recovered 11512 records [io.debezium.relational.history.SchemaHistoryMetrics]
2024-04-23 13:11:02,620 INFO || Already applied 13182 database changes [io.debezium.relational.history.SchemaHistoryMetrics]
2024-04-23 13:11:02,620 INFO || Database schema history recovery in progress, recovered 13183 records [io.debezium.relational.history.SchemaHistoryMetrics]
2024-04-23 13:11:02,943 WARN || Unable to register metrics as an old set with the same name exists, retrying in PT5S (attempt 4 out of 12) [io.debezium.pipeline.JmxUtils]
2024-04-23 13:11:03,919 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] Revoke previously assigned partitions mysql-REPLACEME-0 [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2024-04-23 13:11:03,919 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] Member REPLACEME-schemahistory-334eb0f8-b995-4cf8-92df-ff4bd71cc6e2 sending LeaveGroup request to coordinator REPLACEME:9092 (id: 2147483646 rack: null) due to the consumer is being closed [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2024-04-23 13:11:03,920 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] Resetting generation and member id due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2024-04-23 13:11:03,920 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2024-04-23 13:11:04,419 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
2024-04-23 13:11:04,419 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
2024-04-23 13:11:04,420 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
2024-04-23 13:11:04,420 INFO || App info kafka.consumer for REPLACEME-schemahistory unregistered [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:11:04,421 INFO || Finished database schema history recovery of 14335 change(s) in 33494 ms [io.debezium.relational.history.SchemaHistoryMetrics]
2024-04-23 13:11:04,447 INFO || Reconnecting after finishing schema recovery [io.debezium.connector.mysql.MySqlConnectorTask]
2024-04-23 13:11:04,503 INFO || Get all known binlogs [io.debezium.connector.mysql.strategy.AbstractConnectorConnection]
2024-04-23 13:11:04,505 INFO || Server has the binlog file 'mysql-bin-changelog.000502' required by the connector [io.debezium.connector.mysql.strategy.AbstractConnectorConnection]
2024-04-23 13:11:04,519 INFO || Requested thread factory for connector MySqlConnector, id = REPLACEME named = SignalProcessor [io.debezium.util.Threads]
2024-04-23 13:11:04,533 INFO || Requested thread factory for connector MySqlConnector, id = REPLACEME named = change-event-source-coordinator [io.debezium.util.Threads]
2024-04-23 13:11:04,533 INFO || Requested thread factory for connector MySqlConnector, id = REPLACEME named = blocking-snapshot [io.debezium.util.Threads]
2024-04-23 13:11:04,534 INFO || Creating thread debezium-mysqlconnector-REPLACEME-change-event-source-coordinator [io.debezium.util.Threads]
2024-04-23 13:11:04,596 INFO || WorkerSourceTask{id=mysql-connector-0} Source task finished initialization and start [org.apache.kafka.connect.runtime.AbstractWorkerSourceTask]
2024-04-23 13:11:04,598 INFO || Stopping down connector [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:11:04,602 INFO MySQL|REPLACEME|snapshot Metrics registered [io.debezium.pipeline.ChangeEventSourceCoordinator]
2024-04-23 13:11:04,603 INFO MySQL|REPLACEME|snapshot Context created [io.debezium.pipeline.ChangeEventSourceCoordinator]
2024-04-23 13:11:04,609 INFO MySQL|REPLACEME|snapshot A previous offset indicating a completed snapshot has been found. Neither schema nor data will be snapshotted. [io.debezium.connector.mysql.MySqlSnapshotChangeEventSource]
2024-04-23 13:11:04,716 INFO MySQL|REPLACEME|snapshot Snapshot ended with SnapshotResult [status=SKIPPED, offset=MySqlOffsetContext [sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, sourceInfo=SourceInfo [currentGtid=null, currentBinlogFilename=mysql-bin-changelog.000502, currentBinlogPosition=72675853, currentRowNumber=0, serverId=0, sourceTime=null, threadId=-1, currentQuery=null, tableIds=[], databaseName=null], snapshotCompleted=false, transactionContext=TransactionContext [currentTransactionId=null, perTableEventCount={}, totalEventCount=0], restartGtidSet=null, currentGtidSet=null, restartBinlogFilename=mysql-bin-changelog.000502, restartBinlogPosition=72675853, restartRowsToSkip=1, restartEventsToSkip=3, currentEventLengthInBytes=0, inTransaction=false, transactionId=null, incrementalSnapshotContext=IncrementalSnapshotContext [windowOpened=false, chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, maximumKey=null]]] [io.debezium.pipeline.ChangeEventSourceCoordinator]
2024-04-23 13:11:04,718 INFO || Creating thread debezium-mysqlconnector-REPLACEME-SignalProcessor [io.debezium.util.Threads]
2024-04-23 13:11:04,719 INFO || SignalProcessor stopped [io.debezium.pipeline.signal.SignalProcessor]
2024-04-23 13:11:04,719 INFO || Debezium ServiceRegistry stopped.
[io.debezium.service.DefaultServiceRegistry] 2024-04-23 13:11:04,722 INFO || Connection gracefully closed [io.debezium.jdbc.JdbcConnection] 2024-04-23 13:11:04,722 INFO || [Producer clientId=REPLACEME-schemahistory] Closing the Kafka producer with timeoutMillis = 30000 ms. [org.apache.kafka.clients.producer.KafkaProducer] 2024-04-23 13:11:04,724 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics] 2024-04-23 13:11:04,724 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics] 2024-04-23 13:11:04,724 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics] 2024-04-23 13:11:04,724 INFO || App info kafka.producer for REPLACEME-schemahistory unregistered [org.apache.kafka.common.utils.AppInfoParser] 2024-04-23 13:11:04,724 INFO || [Producer clientId=connector-producer-mysql-connector-0] Closing the Kafka producer with timeoutMillis = 30000 ms. [org.apache.kafka.clients.producer.KafkaProducer] 2024-04-23 13:11:04,724 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics] 2024-04-23 13:11:04,724 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics] 2024-04-23 13:11:04,724 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics] 2024-04-23 13:11:04,724 INFO || App info kafka.producer for connector-producer-mysql-connector-0 unregistered [org.apache.kafka.common.utils.AppInfoParser] 2024-04-23 13:11:07,944 INFO || Idempotence will be disabled because acks is set to 1, not set to 'all'. 
[org.apache.kafka.clients.producer.ProducerConfig]
2024-04-23 13:11:07,944 INFO || ProducerConfig values: acks = 1 auto.include.jmx.reporter = true batch.size = 32768 bootstrap.servers = [REPLACEME:9092, REPLACEME:9092] buffer.memory = 1048576 client.dns.lookup = use_all_dns_ips client.id = REPLACEME-schemahistory compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = false interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 10000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 1 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer [org.apache.kafka.clients.producer.ProducerConfig]
2024-04-23 13:11:07,950 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:11:07,950 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:11:07,950 INFO || Kafka startTimeMs: 1713877867950 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:11:07,951 INFO || Closing connection before starting schema recovery [io.debezium.connector.mysql.MySqlConnectorTask]
2024-04-23 13:11:07,953 INFO || Connection gracefully closed [io.debezium.jdbc.JdbcConnection]
2024-04-23 13:11:07,955 INFO || [Producer clientId=REPLACEME-schemahistory] Cluster ID: CjnbrFF3Q9-SPe1k8xi41Q [org.apache.kafka.clients.Metadata]
2024-04-23 13:11:07,955 INFO || ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [REPLACEME:9092, REPLACEME:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = REPLACEME-schemahistory client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = REPLACEME-schemahistory group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer [org.apache.kafka.clients.consumer.ConsumerConfig]
2024-04-23 13:11:07,957 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:11:07,957 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:11:07,957 INFO || Kafka startTimeMs: 1713877867957 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:11:07,959 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] Cluster ID: CjnbrFF3Q9-SPe1k8xi41Q [org.apache.kafka.clients.Metadata]
2024-04-23 13:11:07,961 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] Resetting generation and member id due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2024-04-23 13:11:07,961 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2024-04-23 13:11:07,961 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
2024-04-23 13:11:07,962 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
2024-04-23 13:11:07,962 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
2024-04-23 13:11:07,962 INFO || App info kafka.consumer for REPLACEME-schemahistory unregistered [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:11:07,962 INFO || ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [REPLACEME:9092, REPLACEME:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = REPLACEME-schemahistory client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = REPLACEME-schemahistory group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer [org.apache.kafka.clients.consumer.ConsumerConfig]
2024-04-23 13:11:07,964 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:11:07,964 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:11:07,964 INFO || Kafka startTimeMs: 1713877867964 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:11:07,965 INFO || Creating thread debezium-mysqlconnector-REPLACEME-db-history-config-check [io.debezium.util.Threads]
2024-04-23 13:11:07,965 INFO || AdminClientConfig values: auto.include.jmx.reporter = true bootstrap.servers = [REPLACEME:9092, REPLACEME:9092] client.dns.lookup = use_all_dns_ips client.id = REPLACEME-schemahistory-topic-check connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 1 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS [org.apache.kafka.clients.admin.AdminClientConfig]
2024-04-23 13:11:07,967 INFO || These configurations '[value.serializer, acks, batch.size, max.block.ms, buffer.memory, key.serializer, linger.ms]' were supplied but are not used yet.
[org.apache.kafka.clients.admin.AdminClientConfig]
2024-04-23 13:11:07,967 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:11:07,967 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:11:07,967 INFO || Kafka startTimeMs: 1713877867967 [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:11:07,968 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] Cluster ID: CjnbrFF3Q9-SPe1k8xi41Q [org.apache.kafka.clients.Metadata]
2024-04-23 13:11:07,971 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] Resetting generation and member id due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2024-04-23 13:11:07,971 INFO || [Consumer clientId=REPLACEME-schemahistory, groupId=REPLACEME-schemahistory] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2024-04-23 13:11:07,971 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
2024-04-23 13:11:07,971 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
2024-04-23 13:11:07,971 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
2024-04-23 13:11:07,972 INFO || App info kafka.consumer for REPLACEME-schemahistory unregistered [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:11:07,972 WARN || Database schema history was not found but was expected [io.debezium.connector.mysql.MySqlConnectorTask]
2024-04-23 13:11:07,973 ERROR || WorkerSourceTask{id=mysql-connector-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted [org.apache.kafka.connect.runtime.WorkerTask]
io.debezium.DebeziumException: The db history topic is missing. You may attempt to recover it by reconfiguring the connector to SCHEMA_ONLY_RECOVERY
    at io.debezium.connector.mysql.MySqlConnectorTask.validateAndLoadSchemaHistory(MySqlConnectorTask.java:332)
    at io.debezium.connector.mysql.MySqlConnectorTask.start(MySqlConnectorTask.java:118)
    at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:141)
    at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.initializeAndStart(AbstractWorkerSourceTask.java:280)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:202)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:259)
    at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.run(AbstractWorkerSourceTask.java:77)
    at org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:236)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
2024-04-23 13:11:07,974 INFO || Stopping down connector [io.debezium.connector.common.BaseSourceTask]
2024-04-23 13:11:07,974 INFO || [Producer clientId=REPLACEME-schemahistory] Closing the Kafka producer with timeoutMillis = 30000 ms. [org.apache.kafka.clients.producer.KafkaProducer]
2024-04-23 13:11:07,975 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
2024-04-23 13:11:07,975 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
2024-04-23 13:11:07,975 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
2024-04-23 13:11:07,975 INFO || App info kafka.producer for REPLACEME-schemahistory unregistered [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:11:07,976 INFO || [Producer clientId=connector-producer-mysql-connector-0] Closing the Kafka producer with timeoutMillis = 30000 ms. [org.apache.kafka.clients.producer.KafkaProducer]
2024-04-23 13:11:07,978 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
2024-04-23 13:11:07,978 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
2024-04-23 13:11:07,978 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
2024-04-23 13:11:07,978 INFO || App info kafka.producer for connector-producer-mysql-connector-0 unregistered [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:11:07,981 INFO || Database schema history topic 'debezium_producer_mysql_schema_history' has correct settings [io.debezium.storage.kafka.history.KafkaSchemaHistory]
2024-04-23 13:11:07,981 INFO || App info kafka.admin.client for REPLACEME-schemahistory-topic-check unregistered [org.apache.kafka.common.utils.AppInfoParser]
2024-04-23 13:11:07,982 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
2024-04-23 13:11:07,982 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
2024-04-23 13:11:07,982 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
2024-04-23 13:15:24,909 INFO || [AdminClient clientId=1-shared-admin] Node -2 disconnected.
[org.apache.kafka.clients.NetworkClient]
2024-04-23 13:19:25,138 INFO || [Consumer clientId=1-offsets, groupId=1] Node -2 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 13:19:25,171 INFO || [Producer clientId=1-offsets] Node -1 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 13:19:26,011 INFO || [Consumer clientId=1-statuses, groupId=1] Node -1 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 13:19:26,133 INFO || [Producer clientId=1-statuses] Node -1 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 13:19:26,287 INFO || [Consumer clientId=1-configs, groupId=1] Node -1 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 13:19:26,375 INFO || [Producer clientId=1-configs] Node -2 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 13:19:26,399 INFO || [Worker clientId=connect-1, groupId=1] Node -1 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 13:20:25,011 INFO || [AdminClient clientId=1-shared-admin] Node 2 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 13:25:25,095 INFO || [AdminClient clientId=1-shared-admin] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 13:30:25,203 INFO || [AdminClient clientId=1-shared-admin] Node 2 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 13:35:25,278 INFO || [AdminClient clientId=1-shared-admin] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 13:40:25,368 INFO || [AdminClient clientId=1-shared-admin] Node 2 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 13:45:25,471 INFO || [AdminClient clientId=1-shared-admin] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 13:50:25,566 INFO || [AdminClient clientId=1-shared-admin] Node 2 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 13:52:28,061 INFO || [Worker clientId=connect-1, groupId=1] Session key updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2024-04-23 14:00:25,760 INFO || [AdminClient clientId=1-shared-admin] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 14:05:25,868 INFO || [AdminClient clientId=1-shared-admin] Node 2 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 14:10:25,972 INFO || [AdminClient clientId=1-shared-admin] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 14:15:26,078 INFO || [AdminClient clientId=1-shared-admin] Node 2 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 14:20:26,179 INFO || [AdminClient clientId=1-shared-admin] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 14:25:26,285 INFO || [AdminClient clientId=1-shared-admin] Node 2 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 14:30:26,363 INFO || [AdminClient clientId=1-shared-admin] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 14:35:26,467 INFO || [AdminClient clientId=1-shared-admin] Node 2 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 14:40:26,571 INFO || [AdminClient clientId=1-shared-admin] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 14:45:26,639 INFO || [AdminClient clientId=1-shared-admin] Node 2 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 14:50:26,743 INFO || [AdminClient clientId=1-shared-admin] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 14:52:28,063 INFO || [Worker clientId=connect-1, groupId=1] Session key updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2024-04-23 15:00:26,947 INFO || [AdminClient clientId=1-shared-admin] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 15:05:27,048 INFO || [AdminClient clientId=1-shared-admin] Node 2 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 15:10:27,151 INFO || [AdminClient clientId=1-shared-admin] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 15:15:27,255 INFO || [AdminClient clientId=1-shared-admin] Node 2 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 15:20:27,344 INFO || [AdminClient clientId=1-shared-admin] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 15:25:27,448 INFO || [AdminClient clientId=1-shared-admin] Node 2 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 15:30:27,543 INFO || [AdminClient clientId=1-shared-admin] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 15:35:27,643 INFO || [AdminClient clientId=1-shared-admin] Node 2 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 15:40:27,743 INFO || [AdminClient clientId=1-shared-admin] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 15:45:27,827 INFO || [AdminClient clientId=1-shared-admin] Node 2 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 15:50:27,927 INFO || [AdminClient clientId=1-shared-admin] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 15:52:28,064 INFO || [Worker clientId=connect-1, groupId=1] Session key updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2024-04-23 16:00:28,105 INFO || [AdminClient clientId=1-shared-admin] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 16:05:28,207 INFO || [AdminClient clientId=1-shared-admin] Node 2 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 16:10:28,311 INFO || [AdminClient clientId=1-shared-admin] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 16:15:28,415 INFO || [AdminClient clientId=1-shared-admin] Node 2 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 16:20:28,519 INFO || [AdminClient clientId=1-shared-admin] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 16:25:28,611 INFO || [AdminClient clientId=1-shared-admin] Node 2 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 16:30:28,711 INFO || [AdminClient clientId=1-shared-admin] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 16:35:28,815 INFO || [AdminClient clientId=1-shared-admin] Node 2 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 16:40:28,913 INFO || [AdminClient clientId=1-shared-admin] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 16:45:28,987 INFO || [AdminClient clientId=1-shared-admin] Node 2 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 16:50:29,091 INFO || [AdminClient clientId=1-shared-admin] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 16:52:28,061 INFO || [Worker clientId=connect-1, groupId=1] Session key updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2024-04-23 17:00:29,295 INFO || [AdminClient clientId=1-shared-admin] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 17:05:29,395 INFO || [AdminClient clientId=1-shared-admin] Node 2 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 17:10:29,499 INFO || [AdminClient clientId=1-shared-admin] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 17:15:29,603 INFO || [AdminClient clientId=1-shared-admin] Node 2 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 17:20:29,701 INFO || [AdminClient clientId=1-shared-admin] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 17:25:29,803 INFO || [AdminClient clientId=1-shared-admin] Node 2 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 17:30:29,902 INFO || [AdminClient clientId=1-shared-admin] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 17:35:30,001 INFO || [AdminClient clientId=1-shared-admin] Node 2 disconnected. [org.apache.kafka.clients.NetworkClient]
2024-04-23 17:40:30,103 INFO || [AdminClient clientId=1-shared-admin] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
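
Note: the task failure logged at 13:11:07 ("The db history topic is missing. You may attempt to recover it by reconfiguring the connector to SCHEMA_ONLY_RECOVERY") is Debezium's standard response when committed offsets exist but the schema history topic cannot be read back. The log also shows the server still has the required binlog file (mysql-bin-changelog.000502), so a schema-only recovery should be possible. A hedged sketch of the reconfiguration, to be merged into the connector's existing config and submitted via the Connect REST API (rest.port 8083 per this worker's settings) — all values below are placeholders, not taken from this log, and the `schema.history.internal.*` property names apply to Debezium 2.x:

```json
{
  "connector.class": "io.debezium.connector.mysql.MySqlConnector",
  "snapshot.mode": "schema_only_recovery",
  "schema.history.internal.kafka.topic": "debezium_producer_mysql_schema_history",
  "schema.history.internal.kafka.bootstrap.servers": "REPLACEME:9092,REPLACEME:9092"
}
```

After the task restarts and rebuilds the schema history from the binlog, `snapshot.mode` would typically be reverted to its previous value so a worker restart does not trigger recovery again.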