Attaching to test-debezium-local_dbz_1
dbz_1 | Plugins are loaded from /kafka/connect
dbz_1 | WARNING: it is recommended to specify the STATUS_STORAGE_TOPIC variable for defining the name of the topic where connector statuses will be stored.
dbz_1 | This topic may have multiple partitions, be highly replicated (e.g., 3x or more) and should be configured for compaction.
dbz_1 | As no value is given, the default of 'connect-status' will be used.
dbz_1 | Using the following environment variables:
dbz_1 | GROUP_ID=1
dbz_1 | CONFIG_STORAGE_TOPIC=my_connect_configs
dbz_1 | OFFSET_STORAGE_TOPIC=my_connect_offsets
dbz_1 | BOOTSTRAP_SERVERS=kafka:9092
dbz_1 | REST_HOST_NAME=192.168.32.5
dbz_1 | REST_PORT=8083
dbz_1 | ADVERTISED_HOST_NAME=192.168.32.5
dbz_1 | ADVERTISED_PORT=8083
dbz_1 | KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter
dbz_1 | VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter
dbz_1 | INTERNAL_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter
dbz_1 | INTERNAL_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter
dbz_1 | OFFSET_FLUSH_INTERVAL_MS=60000
dbz_1 | OFFSET_FLUSH_TIMEOUT_MS=
dbz_1 | SHUTDOWN_TIMEOUT=10000
dbz_1 | --- Setting property from CONNECT_INTERNAL_VALUE_CONVERTER: internal.value.converter=org.apache.kafka.connect.json.JsonConverter
dbz_1 | --- Setting property from CONNECT_VALUE_CONVERTER: value.converter=org.apache.kafka.connect.json.JsonConverter
dbz_1 | --- Setting property from CONNECT_REST_ADVERTISED_HOST_NAME: rest.advertised.host.name=192.168.32.5
dbz_1 | --- Setting property from CONNECT_OFFSET_FLUSH_INTERVAL_MS: offset.flush.interval.ms=60000
dbz_1 | --- Setting property from CONNECT_GROUP_ID: group.id=1
dbz_1 | --- Setting property from CONNECT_BOOTSTRAP_SERVERS: bootstrap.servers=kafka:9092
dbz_1 | --- Setting property from CONNECT_KEY_CONVERTER: key.converter=org.apache.kafka.connect.json.JsonConverter
dbz_1 | --- Setting property from CONNECT_TASK_SHUTDOWN_GRACEFUL_TIMEOUT_MS: task.shutdown.graceful.timeout.ms=10000
dbz_1 | --- Setting property from CONNECT_REST_HOST_NAME: rest.host.name=192.168.32.5
dbz_1 | --- Setting property from CONNECT_PLUGIN_PATH: plugin.path=/kafka/connect
dbz_1 | --- Setting property from CONNECT_REST_PORT: rest.port=8083
dbz_1 | --- Setting property from CONNECT_OFFSET_FLUSH_TIMEOUT_MS: offset.flush.timeout.ms=5000
dbz_1 | --- Setting property from CONNECT_INTERNAL_KEY_CONVERTER: internal.key.converter=org.apache.kafka.connect.json.JsonConverter
dbz_1 | --- Setting property from CONNECT_CONFIG_STORAGE_TOPIC: config.storage.topic=my_connect_configs
dbz_1 | --- Setting property from CONNECT_REST_ADVERTISED_PORT: rest.advertised.port=8083
dbz_1 | --- Setting property from CONNECT_OFFSET_STORAGE_TOPIC: offset.storage.topic=my_connect_offsets
dbz_1 | 2019-02-15 14:39:06,831 INFO || Kafka Connect distributed worker initializing ... [org.apache.kafka.connect.cli.ConnectDistributed]
dbz_1 | 2019-02-15 14:39:06,852 INFO || WorkerInfo values:
dbz_1 | jvm.args = -Xms256M, -Xmx2G, -XX:+UseG1GC, -XX:MaxGCPauseMillis=20, -XX:InitiatingHeapOccupancyPercent=35, -XX:+ExplicitGCInvokesConcurrent, -Djava.awt.headless=true, -Dcom.sun.management.jmxremote, -Dcom.sun.management.jmxremote.authenticate=false, -Dcom.sun.management.jmxremote.ssl=false, -Dkafka.logs.dir=/kafka/bin/../logs, -Dlog4j.configuration=file:/kafka/config/log4j.properties
dbz_1 | jvm.spec = Oracle Corporation, OpenJDK 64-Bit Server VM, 1.8.0_191, 25.191-b12
dbz_1 | jvm.classpath =
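The WARNING near the top is the one actionable item in this startup banner: the worker falls back to the default 'connect-status' topic because STATUS_STORAGE_TOPIC was never exported. A minimal sketch of passing it explicitly, assuming the container is started with `docker run` (the actual compose file is not shown here) and using `my_connect_statuses` as a stand-in topic name:

```shell
# Sketch only: image tag, container name, and the status topic name are
# illustrative; the other variables mirror the ones shown in this log.
docker run -d --name connect \
  -e GROUP_ID=1 \
  -e BOOTSTRAP_SERVERS=kafka:9092 \
  -e CONFIG_STORAGE_TOPIC=my_connect_configs \
  -e OFFSET_STORAGE_TOPIC=my_connect_offsets \
  -e STATUS_STORAGE_TOPIC=my_connect_statuses \
  debezium/connect:0.9
```

As the banner notes, the status topic should be compacted and replicated (3x or more) in any non-local deployment.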
/kafka/bin/../libs/activation-1.1.1.jar:/kafka/bin/../libs/aopalliance-repackaged-2.5.0-b42.jar:/kafka/bin/../libs/argparse4j-0.7.0.jar:/kafka/bin/../libs/audience-annotations-0.5.0.jar:/kafka/bin/../libs/avro-1.8.2.jar:/kafka/bin/../libs/common-config-5.0.1.jar:/kafka/bin/../libs/common-utils-5.0.1.jar:/kafka/bin/../libs/commons-lang3-3.5.jar:/kafka/bin/../libs/compileScala.mapping:/kafka/bin/../libs/compileScala.mapping.asc:/kafka/bin/../libs/connect-api-2.1.0.jar:/kafka/bin/../libs/connect-basic-auth-extension-2.1.0.jar:/kafka/bin/../libs/connect-file-2.1.0.jar:/kafka/bin/../libs/connect-json-2.1.0.jar:/kafka/bin/../libs/connect-runtime-2.1.0.jar:/kafka/bin/../libs/connect-transforms-2.1.0.jar:/kafka/bin/../libs/guava-20.0.jar:/kafka/bin/../libs/hk2-api-2.5.0-b42.jar:/kafka/bin/../libs/hk2-locator-2.5.0-b42.jar:/kafka/bin/../libs/hk2-utils-2.5.0-b42.jar:/kafka/bin/../libs/jackson-annotations-2.9.7.jar:/kafka/bin/../libs/jackson-core-2.9.7.jar:/kafka/bin/../libs/jackson-core-asl-1.9.13.jar:/kafka/bin/../libs/jackson-databind-2.9.7.jar:/kafka/bin/../libs/jackson-jaxrs-base-2.9.7.jar:/kafka/bin/../libs/jackson-jaxrs-json-provider-2.9.7.jar:/kafka/bin/../libs/jackson-mapper-asl-1.9.13.jar:/kafka/bin/../libs/jackson-module-jaxb-annotations-2.9.7.jar:/kafka/bin/../libs/javassist-3.22.0-CR2.jar:/kafka/bin/../libs/javax.annotation-api-1.2.jar:/kafka/bin/../libs/javax.inject-1.jar:/kafka/bin/../libs/javax.inject-2.5.0-b42.jar:/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/kafka/bin/../libs/javax.ws.rs-api-2.1.1.jar:/kafka/bin/../libs/javax.ws.rs-api-2.1.jar:/kafka/bin/../libs/jaxb-api-2.3.0.jar:/kafka/bin/../libs/jersey-client-2.27.jar:/kafka/bin/../libs/jersey-common-2.27.jar:/kafka/bin/../libs/jersey-container-servlet-2.27.jar:/kafka/bin/../libs/jersey-container-servlet-core-2.27.jar:/kafka/bin/../libs/jersey-hk2-2.27.jar:/kafka/bin/../libs/jersey-media-jaxb-2.27.jar:/kafka/bin/../libs/jersey-server-2.27.jar:/kafka/bin/../libs/jetty-client-9.4.12.v20180830.jar:/kafka/bin/../libs/jetty-continuation-9.4.12.v20180830.jar:/kafka/bin/../libs/jetty-http-9.4.12.v20180830.jar:/kafka/bin/../libs/jetty-io-9.4.12.v20180830.jar:/kafka/bin/../libs/jetty-security-9.4.12.v20180830.jar:/kafka/bin/../libs/jetty-server-9.4.12.v20180830.jar:/kafka/bin/../libs/jetty-servlet-9.4.12.v20180830.jar:/kafka/bin/../libs/jetty-servlets-9.4.12.v20180830.jar:/kafka/bin/../libs/jetty-util-9.4.12.v20180830.jar:/kafka/bin/../libs/jopt-simple-5.0.4.jar:/kafka/bin/../libs/kafka-avro-serializer-5.0.1.jar:/kafka/bin/../libs/kafka-clients-2.1.0.jar:/kafka/bin/../libs/kafka-connect-avro-converter-5.0.1.jar:/kafka/bin/../libs/kafka-log4j-appender-2.1.0.jar:/kafka/bin/../libs/kafka-schema-registry-client-5.0.1.jar:/kafka/bin/../libs/kafka-streams-2.1.0.jar:/kafka/bin/../libs/kafka-streams-examples-2.1.0.jar:/kafka/bin/../libs/kafka-streams-scala_2.12-2.1.0.jar:/kafka/bin/../libs/kafka-streams-test-utils-2.1.0.jar:/kafka/bin/../libs/kafka-tools-2.1.0.jar:/kafka/bin/../libs/kafka_2.12-2.1.0.jar:/kafka/bin/../libs/log4j-1.2.17.jar:/kafka/bin/../libs/lz4-java-1.5.0.jar:/kafka/bin/../libs/maven-artifact-3.5.4.jar:/kafka/bin/../libs/metrics-core-2.2.0.jar:/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/kafka/bin/../libs/plexus-utils-3.1.0.jar:/kafka/bin/../libs/reflections-0.9.11.jar:/kafka/bin/../libs/rocksdbjni-5.14.2.jar:/kafka/bin/../libs/scala-library-2.12.7.jar:/kafka/bin/../libs/scala-logging_2.12-3.9.0.jar:/kafka/bin/../libs/scala-reflect-2.12.7.jar:/kafka/bin/../libs/slf4j-api-1.7.25.jar:/kafka/bin/../libs/slf4j-log4j12-1.7.25.jar:/kafka/bin/../libs/snappy-java-1.1.7.2.jar:/kafka/bin/../libs/validation-api-1.1.0.Final.jar:/kafka/bin/../libs/zkclient-0.10.jar:/kafka/bin/../libs/zookeeper-3.4.13.jar:/kafka/bin/../libs/zstd-jni-1.3.5-4.jar
dbz_1 | os.spec = Linux, amd64, 4.9.125-linuxkit
dbz_1 | os.vcpus = 2
dbz_1 | [org.apache.kafka.connect.runtime.WorkerInfo]
dbz_1 | 2019-02-15 14:39:06,876 INFO || Scanning for plugin classes. This might take a moment ...
[org.apache.kafka.connect.cli.ConnectDistributed]
dbz_1 | 2019-02-15 14:39:06,945 INFO || Loading plugin from: /kafka/connect/debezium-connector-oracle [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:07,887 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-oracle/} [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:07,890 INFO || Added plugin 'io.debezium.connector.oracle.OracleConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:07,893 INFO || Added plugin 'io.debezium.transforms.UnwrapFromEnvelope' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:07,893 INFO || Added plugin 'io.debezium.transforms.ByLogicalTableRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:07,895 INFO || Loading plugin from: /kafka/connect/debezium-connector-postgres [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:08,188 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-postgres/} [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:08,189 INFO || Added plugin 'io.debezium.connector.postgresql.PostgresConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:08,230 INFO || Loading plugin from: /kafka/connect/debezium-connector-sqlserver [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:09,771 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-sqlserver/} [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:09,775 INFO || Added plugin 'io.debezium.connector.sqlserver.SqlServerConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:09,775 INFO || Added plugin 'io.confluent.connect.avro.AvroConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:09,775 INFO || Added plugin 'org.apache.kafka.connect.storage.StringConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:09,776 INFO || Added plugin 'org.apache.kafka.connect.storage.SimpleHeaderConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:09,776 INFO || Added plugin 'org.apache.kafka.common.config.provider.FileConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:09,882 INFO || Loading plugin from: /kafka/connect/debezium-connector-mysql [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:10,462 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-mysql/} [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:10,466 INFO || Added plugin 'io.debezium.connector.mysql.MySqlConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:10,484 INFO || Loading plugin from: /kafka/connect/debezium-connector-mongodb [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:10,709 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-mongodb/} [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:10,720 INFO || Added plugin 'io.debezium.connector.mongodb.MongoDbConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:10,720 INFO || Added plugin 'io.debezium.connector.mongodb.transforms.UnwrapFromMongoDbEnvelope'
[org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,782 INFO || Registered loader: sun.misc.Launcher$AppClassLoader@764c12b6 [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,782 INFO || Added plugin 'org.apache.kafka.connect.tools.MockSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,782 INFO || Added plugin 'org.apache.kafka.connect.tools.SchemaSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,783 INFO || Added plugin 'org.apache.kafka.connect.tools.MockConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,783 INFO || Added plugin 'org.apache.kafka.connect.tools.VerifiableSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,783 INFO || Added plugin 'org.apache.kafka.connect.tools.VerifiableSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,783 INFO || Added plugin 'org.apache.kafka.connect.tools.MockSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,784 INFO || Added plugin 'org.apache.kafka.connect.file.FileStreamSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,784 INFO || Added plugin 'org.apache.kafka.connect.file.FileStreamSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,785 INFO || Added plugin 'org.apache.kafka.connect.converters.FloatConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,785 INFO || Added plugin 'org.apache.kafka.connect.converters.LongConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,785 INFO || Added plugin 'org.apache.kafka.connect.json.JsonConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,786 INFO || Added plugin 'org.apache.kafka.connect.converters.ByteArrayConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,787 INFO || Added plugin 'org.apache.kafka.connect.converters.DoubleConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,787 INFO || Added plugin 'org.apache.kafka.connect.converters.IntegerConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,787 INFO || Added plugin 'org.apache.kafka.connect.converters.ShortConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,788 INFO || Added plugin 'org.apache.kafka.connect.transforms.RegexRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,788 INFO || Added plugin 'org.apache.kafka.connect.transforms.Cast$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,788 INFO || Added plugin 'org.apache.kafka.connect.transforms.ExtractField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,788 INFO || Added plugin 'org.apache.kafka.connect.transforms.MaskField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,789 INFO || Added plugin 'org.apache.kafka.connect.transforms.Cast$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,789 INFO || Added plugin 'org.apache.kafka.connect.transforms.InsertField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,789 INFO || Added plugin 'org.apache.kafka.connect.transforms.HoistField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,789 INFO || Added plugin 'org.apache.kafka.connect.transforms.ReplaceField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,790 INFO || Added plugin 'org.apache.kafka.connect.transforms.SetSchemaMetadata$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,790 INFO || Added plugin 'org.apache.kafka.connect.transforms.TimestampConverter$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,790 INFO || Added plugin 'org.apache.kafka.connect.transforms.SetSchemaMetadata$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,790 INFO || Added plugin 'org.apache.kafka.connect.transforms.Flatten$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,791 INFO || Added plugin 'org.apache.kafka.connect.transforms.ValueToKey' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,791 INFO || Added plugin 'org.apache.kafka.connect.transforms.TimestampConverter$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,791 INFO || Added plugin 'org.apache.kafka.connect.transforms.MaskField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,791 INFO || Added plugin 'org.apache.kafka.connect.transforms.Flatten$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,792 INFO || Added plugin 'org.apache.kafka.connect.transforms.TimestampRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,792 INFO || Added plugin 'org.apache.kafka.connect.transforms.ReplaceField$Value'
[org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,792 INFO || Added plugin 'org.apache.kafka.connect.transforms.InsertField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,793 INFO || Added plugin 'org.apache.kafka.connect.transforms.HoistField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,793 INFO || Added plugin 'org.apache.kafka.connect.transforms.ExtractField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,793 INFO || Added plugin 'org.apache.kafka.connect.rest.basic.auth.extension.BasicAuthSecurityRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,795 INFO || Added aliases 'MongoDbConnector' and 'MongoDb' to plugin 'io.debezium.connector.mongodb.MongoDbConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,795 INFO || Added aliases 'MySqlConnector' and 'MySql' to plugin 'io.debezium.connector.mysql.MySqlConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,795 INFO || Added aliases 'OracleConnector' and 'Oracle' to plugin 'io.debezium.connector.oracle.OracleConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,796 INFO || Added aliases 'PostgresConnector' and 'Postgres' to plugin 'io.debezium.connector.postgresql.PostgresConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,796 INFO || Added aliases 'SqlServerConnector' and 'SqlServer' to plugin 'io.debezium.connector.sqlserver.SqlServerConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,796 INFO || Added aliases 'FileStreamSinkConnector' and 'FileStreamSink' to plugin 'org.apache.kafka.connect.file.FileStreamSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,797 INFO || Added aliases 'FileStreamSourceConnector' and 'FileStreamSource' to plugin 'org.apache.kafka.connect.file.FileStreamSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,797 INFO || Added aliases 'MockConnector' and 'Mock' to plugin 'org.apache.kafka.connect.tools.MockConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,798 INFO || Added aliases 'MockSinkConnector' and 'MockSink' to plugin 'org.apache.kafka.connect.tools.MockSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,798 INFO || Added aliases 'MockSourceConnector' and 'MockSource' to plugin 'org.apache.kafka.connect.tools.MockSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,798 INFO || Added aliases 'SchemaSourceConnector' and 'SchemaSource' to plugin 'org.apache.kafka.connect.tools.SchemaSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,799 INFO || Added aliases 'VerifiableSinkConnector' and 'VerifiableSink' to plugin 'org.apache.kafka.connect.tools.VerifiableSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,799 INFO || Added aliases 'VerifiableSourceConnector' and 'VerifiableSource' to plugin 'org.apache.kafka.connect.tools.VerifiableSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,799 INFO || Added aliases 'AvroConverter' and 'Avro' to plugin 'io.confluent.connect.avro.AvroConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,800 INFO || Added aliases 'ByteArrayConverter' and 'ByteArray' to plugin 'org.apache.kafka.connect.converters.ByteArrayConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,800 INFO || Added aliases 'DoubleConverter' and 'Double' to plugin 'org.apache.kafka.connect.converters.DoubleConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,801 INFO || Added aliases 'FloatConverter' and 'Float' to plugin 'org.apache.kafka.connect.converters.FloatConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,801 INFO || Added aliases 'IntegerConverter' and 'Integer' to plugin 'org.apache.kafka.connect.converters.IntegerConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,801 INFO || Added aliases 'LongConverter' and 'Long' to plugin 'org.apache.kafka.connect.converters.LongConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,801 INFO || Added aliases 'ShortConverter' and 'Short' to plugin 'org.apache.kafka.connect.converters.ShortConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,802 INFO || Added aliases 'JsonConverter' and 'Json' to plugin 'org.apache.kafka.connect.json.JsonConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,802 INFO || Added aliases 'StringConverter' and 'String' to plugin 'org.apache.kafka.connect.storage.StringConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,802 INFO || Added aliases 'ByteArrayConverter' and 'ByteArray' to plugin 'org.apache.kafka.connect.converters.ByteArrayConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,803 INFO || Added aliases 'DoubleConverter' and 'Double' to plugin 'org.apache.kafka.connect.converters.DoubleConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,803 INFO || Added aliases 'FloatConverter' and 'Float' to plugin 'org.apache.kafka.connect.converters.FloatConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,803 INFO || Added aliases 'IntegerConverter' and 'Integer' to plugin 'org.apache.kafka.connect.converters.IntegerConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,804 INFO || Added aliases 'LongConverter' and 'Long' to plugin 'org.apache.kafka.connect.converters.LongConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,804 INFO || Added aliases 'ShortConverter' and 'Short' to plugin 'org.apache.kafka.connect.converters.ShortConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,805 INFO || Added aliases 'JsonConverter' and 'Json' to plugin 'org.apache.kafka.connect.json.JsonConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,805 INFO || Added alias 'SimpleHeaderConverter' to plugin 'org.apache.kafka.connect.storage.SimpleHeaderConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,805 INFO || Added aliases 'StringConverter' and 'String' to plugin 'org.apache.kafka.connect.storage.StringConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,806 INFO || Added alias 'UnwrapFromMongoDbEnvelope' to plugin 'io.debezium.connector.mongodb.transforms.UnwrapFromMongoDbEnvelope' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,807 INFO || Added alias 'ByLogicalTableRouter' to plugin 'io.debezium.transforms.ByLogicalTableRouter'
[org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,808 INFO || Added alias 'UnwrapFromEnvelope' to plugin 'io.debezium.transforms.UnwrapFromEnvelope' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,809 INFO || Added alias 'RegexRouter' to plugin 'org.apache.kafka.connect.transforms.RegexRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,810 INFO || Added alias 'TimestampRouter' to plugin 'org.apache.kafka.connect.transforms.TimestampRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,810 INFO || Added alias 'ValueToKey' to plugin 'org.apache.kafka.connect.transforms.ValueToKey' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,810 INFO || Added alias 'BasicAuthSecurityRestExtension' to plugin 'org.apache.kafka.connect.rest.basic.auth.extension.BasicAuthSecurityRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
dbz_1 | 2019-02-15 14:39:12,852 INFO || DistributedConfig values:
dbz_1 | access.control.allow.methods =
dbz_1 | access.control.allow.origin =
dbz_1 | bootstrap.servers = [kafka:9092]
dbz_1 | client.dns.lookup = default
dbz_1 | client.id =
dbz_1 | config.providers = []
dbz_1 | config.storage.replication.factor = 1
dbz_1 | config.storage.topic = my_connect_configs
dbz_1 | connections.max.idle.ms = 540000
dbz_1 | group.id = 1
dbz_1 | header.converter = class org.apache.kafka.connect.storage.SimpleHeaderConverter
dbz_1 | heartbeat.interval.ms = 3000
dbz_1 | internal.key.converter = class org.apache.kafka.connect.json.JsonConverter
dbz_1 | internal.value.converter = class org.apache.kafka.connect.json.JsonConverter
dbz_1 | key.converter = class org.apache.kafka.connect.json.JsonConverter
dbz_1 | listeners = null
dbz_1 | metadata.max.age.ms = 300000
dbz_1 | metric.reporters = []
dbz_1 | metrics.num.samples = 2
dbz_1 | metrics.recording.level = INFO
dbz_1 | metrics.sample.window.ms = 30000
dbz_1 | offset.flush.interval.ms = 60000
dbz_1 | offset.flush.timeout.ms = 5000
dbz_1 | offset.storage.partitions = 25
dbz_1 | offset.storage.replication.factor = 1
dbz_1 | offset.storage.topic = my_connect_offsets
dbz_1 | plugin.path = [/kafka/connect]
dbz_1 | rebalance.timeout.ms = 60000
dbz_1 | receive.buffer.bytes = 32768
dbz_1 | reconnect.backoff.max.ms = 1000
dbz_1 | reconnect.backoff.ms = 50
dbz_1 | request.timeout.ms = 40000
dbz_1 | rest.advertised.host.name = 192.168.32.5
dbz_1 | rest.advertised.listener = null
dbz_1 | rest.advertised.port = 8083
dbz_1 | rest.extension.classes = []
dbz_1 | rest.host.name = 192.168.32.5
dbz_1 | rest.port = 8083
dbz_1 | retry.backoff.ms = 100
dbz_1 | sasl.client.callback.handler.class = null
dbz_1 | sasl.jaas.config = null
dbz_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
dbz_1 | sasl.kerberos.min.time.before.relogin = 60000
dbz_1 | sasl.kerberos.service.name = null
dbz_1 | sasl.kerberos.ticket.renew.jitter = 0.05
dbz_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
dbz_1 | sasl.login.callback.handler.class = null
dbz_1 | sasl.login.class = null
dbz_1 | sasl.login.refresh.buffer.seconds = 300
dbz_1 | sasl.login.refresh.min.period.seconds = 60
dbz_1 | sasl.login.refresh.window.factor = 0.8
dbz_1 | sasl.login.refresh.window.jitter = 0.05
dbz_1 | sasl.mechanism = GSSAPI
dbz_1 | security.protocol = PLAINTEXT
dbz_1 | send.buffer.bytes = 131072
dbz_1 | session.timeout.ms = 10000
dbz_1 | ssl.cipher.suites = null
dbz_1 | ssl.client.auth = none
dbz_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
dbz_1 | ssl.endpoint.identification.algorithm = https
dbz_1 | ssl.key.password = null
dbz_1 | ssl.keymanager.algorithm = SunX509
dbz_1 | ssl.keystore.location = null
dbz_1 | ssl.keystore.password = null
dbz_1 | ssl.keystore.type = JKS
dbz_1 | ssl.protocol = TLS
dbz_1 | ssl.provider = null
dbz_1 | ssl.secure.random.implementation = null
dbz_1 | ssl.trustmanager.algorithm = PKIX
dbz_1 | ssl.truststore.location = null
dbz_1 | ssl.truststore.password = null
dbz_1 | ssl.truststore.type = JKS
dbz_1 | status.storage.partitions = 5
dbz_1 | status.storage.replication.factor = 1
dbz_1 | status.storage.topic = connect-status
dbz_1 | task.shutdown.graceful.timeout.ms = 10000
dbz_1 | value.converter = class org.apache.kafka.connect.json.JsonConverter
dbz_1 | worker.sync.timeout.ms = 3000
dbz_1 | worker.unsync.backoff.ms = 300000
dbz_1 | [org.apache.kafka.connect.runtime.distributed.DistributedConfig]
dbz_1 | 2019-02-15 14:39:12,856 INFO || Worker configuration property 'internal.key.converter' is deprecated and may be removed in an upcoming release. The specified value matches the default, so this property can be safely removed from the worker configuration. [org.apache.kafka.connect.runtime.WorkerConfig]
dbz_1 | 2019-02-15 14:39:12,856 INFO || Worker configuration property 'internal.value.converter' is deprecated and may be removed in an upcoming release. The specified value matches the default, so this property can be safely removed from the worker configuration.
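The two deprecation INFO lines are benign: the worker itself says the supplied values match the defaults, so the deprecated internal converter settings can simply be dropped. In this image they arrive via environment variables (see the startup banner), so the fix is to stop exporting them; a sketch:

```shell
# These map to the deprecated worker properties internal.key.converter /
# internal.value.converter; when absent, the worker falls back to its
# (identical) JsonConverter defaults and the INFO lines disappear.
unset INTERNAL_KEY_CONVERTER
unset INTERNAL_VALUE_CONVERTER
```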
[org.apache.kafka.connect.runtime.WorkerConfig] dbz_1 | 2019-02-15 14:39:12,859 INFO || Creating Kafka admin client [org.apache.kafka.connect.util.ConnectUtils] dbz_1 | 2019-02-15 14:39:12,865 INFO || AdminClientConfig values: dbz_1 | bootstrap.servers = [kafka:9092] dbz_1 | client.dns.lookup = default dbz_1 | client.id = dbz_1 | connections.max.idle.ms = 300000 dbz_1 | metadata.max.age.ms = 300000 dbz_1 | metric.reporters = [] dbz_1 | metrics.num.samples = 2 dbz_1 | metrics.recording.level = INFO dbz_1 | metrics.sample.window.ms = 30000 dbz_1 | receive.buffer.bytes = 65536 dbz_1 | reconnect.backoff.max.ms = 1000 dbz_1 | reconnect.backoff.ms = 50 dbz_1 | request.timeout.ms = 120000 dbz_1 | retries = 5 dbz_1 | retry.backoff.ms = 100 dbz_1 | sasl.client.callback.handler.class = null dbz_1 | sasl.jaas.config = null dbz_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit dbz_1 | sasl.kerberos.min.time.before.relogin = 60000 dbz_1 | sasl.kerberos.service.name = null dbz_1 | sasl.kerberos.ticket.renew.jitter = 0.05 dbz_1 | sasl.kerberos.ticket.renew.window.factor = 0.8 dbz_1 | sasl.login.callback.handler.class = null dbz_1 | sasl.login.class = null dbz_1 | sasl.login.refresh.buffer.seconds = 300 dbz_1 | sasl.login.refresh.min.period.seconds = 60 dbz_1 | sasl.login.refresh.window.factor = 0.8 dbz_1 | sasl.login.refresh.window.jitter = 0.05 dbz_1 | sasl.mechanism = GSSAPI dbz_1 | security.protocol = PLAINTEXT dbz_1 | send.buffer.bytes = 131072 dbz_1 | ssl.cipher.suites = null dbz_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] dbz_1 | ssl.endpoint.identification.algorithm = https dbz_1 | ssl.key.password = null dbz_1 | ssl.keymanager.algorithm = SunX509 dbz_1 | ssl.keystore.location = null dbz_1 | ssl.keystore.password = null dbz_1 | ssl.keystore.type = JKS dbz_1 | ssl.protocol = TLS dbz_1 | ssl.provider = null dbz_1 | ssl.secure.random.implementation = null dbz_1 | ssl.trustmanager.algorithm = PKIX dbz_1 | ssl.truststore.location = null dbz_1 | ssl.truststore.password = 
null
dbz_1 | ssl.truststore.type = JKS
dbz_1 | [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,005 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,005 WARN || The configuration 'rest.advertised.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,005 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,005 WARN || The configuration 'group.id' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,005 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,005 WARN || The configuration 'rest.advertised.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,005 WARN || The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,005 WARN || The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,005 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,005 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,005 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config.
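This WARN storm is expected noise rather than a misconfiguration: the Connect worker hands its full worker configuration to every internal client it builds (admin client, producer, consumer), and each client logs a warning for the worker-level keys it does not itself recognize. An illustrative (hypothetical, not part of Connect) helper that condenses the storm by extracting which keys each client complained about:

```python
import re

# Matches lines of the form seen in this log:
#   ... WARN || The configuration 'group.id' was supplied but isn't a known
#   config. [org.apache.kafka.clients.admin.AdminClientConfig]
UNKNOWN_CONFIG = re.compile(
    r"WARN \|\| The configuration '(?P<key>[^']+)' was supplied "
    r"but isn't a known config\. \[(?P<client>[\w.]+)\]"
)

def unknown_configs(lines):
    """Return (client config class, property key) pairs for each warning."""
    pairs = []
    for line in lines:
        match = UNKNOWN_CONFIG.search(line)
        if match:
            pairs.append((match.group("client"), match.group("key")))
    return pairs
```

Feeding the log through this shows the same worker properties (`group.id`, `plugin.path`, the `*.storage.*` and `rest.*` keys) being reported once per client type, which is why the block repeats below for ProducerConfig and ConsumerConfig.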
[org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,006 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,006 WARN || The configuration 'internal.key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,006 WARN || The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,006 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,006 WARN || The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,006 WARN || The configuration 'internal.value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,006 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,006 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,006 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,006 WARN || The configuration 'key.converter' was supplied but isn't a known config.
[org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,008 INFO || Kafka version : 2.1.0 [org.apache.kafka.common.utils.AppInfoParser]
dbz_1 | 2019-02-15 14:39:13,008 INFO || Kafka commitId : 809be928f1ae004e [org.apache.kafka.common.utils.AppInfoParser]
dbz_1 | 2019-02-15 14:39:13,356 INFO || Kafka cluster ID: h4azyV1LQEGG9E-J95Z37g [org.apache.kafka.connect.util.ConnectUtils]
dbz_1 | 2019-02-15 14:39:13,401 INFO || Logging initialized @7591ms to org.eclipse.jetty.util.log.Slf4jLog [org.eclipse.jetty.util.log]
dbz_1 | 2019-02-15 14:39:13,498 INFO || Added connector for http://192.168.32.5:8083 [org.apache.kafka.connect.runtime.rest.RestServer]
dbz_1 | 2019-02-15 14:39:13,541 INFO || Advertised URI: http://192.168.32.5:8083/ [org.apache.kafka.connect.runtime.rest.RestServer]
dbz_1 | 2019-02-15 14:39:13,565 INFO || Kafka version : 2.1.0 [org.apache.kafka.common.utils.AppInfoParser]
dbz_1 | 2019-02-15 14:39:13,566 INFO || Kafka commitId : 809be928f1ae004e [org.apache.kafka.common.utils.AppInfoParser]
dbz_1 | 2019-02-15 14:39:13,811 INFO || JsonConverterConfig values:
dbz_1 | converter.type = key
dbz_1 | schemas.cache.size = 1000
dbz_1 | schemas.enable = false
dbz_1 | [org.apache.kafka.connect.json.JsonConverterConfig]
dbz_1 | 2019-02-15 14:39:13,813 INFO || JsonConverterConfig values:
dbz_1 | converter.type = value
dbz_1 | schemas.cache.size = 1000
dbz_1 | schemas.enable = false
dbz_1 | [org.apache.kafka.connect.json.JsonConverterConfig]
dbz_1 | 2019-02-15 14:39:13,885 INFO || Kafka version : 2.1.0 [org.apache.kafka.common.utils.AppInfoParser]
dbz_1 | 2019-02-15 14:39:13,885 INFO || Kafka commitId : 809be928f1ae004e [org.apache.kafka.common.utils.AppInfoParser]
dbz_1 | 2019-02-15 14:39:13,891 INFO || Kafka Connect distributed worker initialization took 7056ms [org.apache.kafka.connect.cli.ConnectDistributed]
dbz_1 | 2019-02-15 14:39:13,891 INFO || Kafka Connect starting [org.apache.kafka.connect.runtime.Connect]
dbz_1 | 2019-02-15 14:39:13,893
INFO || Starting REST server [org.apache.kafka.connect.runtime.rest.RestServer]
dbz_1 | 2019-02-15 14:39:13,893 INFO || Herder starting [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
dbz_1 | 2019-02-15 14:39:13,894 INFO || Worker starting [org.apache.kafka.connect.runtime.Worker]
dbz_1 | 2019-02-15 14:39:13,894 INFO || Starting KafkaOffsetBackingStore [org.apache.kafka.connect.storage.KafkaOffsetBackingStore]
dbz_1 | 2019-02-15 14:39:13,894 INFO || Starting KafkaBasedLog with topic my_connect_offsets [org.apache.kafka.connect.util.KafkaBasedLog]
dbz_1 | 2019-02-15 14:39:13,895 INFO || AdminClientConfig values:
dbz_1 | bootstrap.servers = [kafka:9092]
dbz_1 | client.dns.lookup = default
dbz_1 | client.id =
dbz_1 | connections.max.idle.ms = 300000
dbz_1 | metadata.max.age.ms = 300000
dbz_1 | metric.reporters = []
dbz_1 | metrics.num.samples = 2
dbz_1 | metrics.recording.level = INFO
dbz_1 | metrics.sample.window.ms = 30000
dbz_1 | receive.buffer.bytes = 65536
dbz_1 | reconnect.backoff.max.ms = 1000
dbz_1 | reconnect.backoff.ms = 50
dbz_1 | request.timeout.ms = 120000
dbz_1 | retries = 5
dbz_1 | retry.backoff.ms = 100
dbz_1 | sasl.client.callback.handler.class = null
dbz_1 | sasl.jaas.config = null
dbz_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
dbz_1 | sasl.kerberos.min.time.before.relogin = 60000
dbz_1 | sasl.kerberos.service.name = null
dbz_1 | sasl.kerberos.ticket.renew.jitter = 0.05
dbz_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
dbz_1 | sasl.login.callback.handler.class = null
dbz_1 | sasl.login.class = null
dbz_1 | sasl.login.refresh.buffer.seconds = 300
dbz_1 | sasl.login.refresh.min.period.seconds = 60
dbz_1 | sasl.login.refresh.window.factor = 0.8
dbz_1 | sasl.login.refresh.window.jitter = 0.05
dbz_1 | sasl.mechanism = GSSAPI
dbz_1 | security.protocol = PLAINTEXT
dbz_1 | send.buffer.bytes = 131072
dbz_1 | ssl.cipher.suites = null
dbz_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
dbz_1 | ssl.endpoint.identification.algorithm
= https
dbz_1 | ssl.key.password = null
dbz_1 | ssl.keymanager.algorithm = SunX509
dbz_1 | ssl.keystore.location = null
dbz_1 | ssl.keystore.password = null
dbz_1 | ssl.keystore.type = JKS
dbz_1 | ssl.protocol = TLS
dbz_1 | ssl.provider = null
dbz_1 | ssl.secure.random.implementation = null
dbz_1 | ssl.trustmanager.algorithm = PKIX
dbz_1 | ssl.truststore.location = null
dbz_1 | ssl.truststore.password = null
dbz_1 | ssl.truststore.type = JKS
dbz_1 | [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,906 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,907 WARN || The configuration 'rest.advertised.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,907 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,908 WARN || The configuration 'group.id' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,908 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,909 WARN || The configuration 'rest.advertised.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,909 WARN || The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,909 WARN || The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,910 WARN || The configuration 'plugin.path' was supplied but isn't a known config.
[org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,911 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,911 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,912 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,920 WARN || The configuration 'internal.key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,921 WARN || The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,921 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,921 WARN || The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,922 WARN || The configuration 'internal.value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,922 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,923 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,923 WARN || The configuration 'value.converter' was supplied but isn't a known config.
[org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,923 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig]
dbz_1 | 2019-02-15 14:39:13,924 INFO || Kafka version : 2.1.0 [org.apache.kafka.common.utils.AppInfoParser]
dbz_1 | 2019-02-15 14:39:13,925 INFO || Kafka commitId : 809be928f1ae004e [org.apache.kafka.common.utils.AppInfoParser]
dbz_1 | 2019-02-15 14:39:14,112 INFO || jetty-9.4.12.v20180830; built: 2018-08-30T13:59:14.071Z; git: 27208684755d94a92186989f695db2d7b21ebc51; jvm 1.8.0_191-b12 [org.eclipse.jetty.server.Server]
dbz_1 | 2019-02-15 14:39:14,190 INFO || DefaultSessionIdManager workerName=node0 [org.eclipse.jetty.server.session]
dbz_1 | 2019-02-15 14:39:14,191 INFO || No SessionScavenger set, using defaults [org.eclipse.jetty.server.session]
dbz_1 | 2019-02-15 14:39:14,195 INFO || node0 Scavenging every 660000ms [org.eclipse.jetty.server.session]
dbz_1 | Feb 15, 2019 2:39:15 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
dbz_1 | WARNING: A provider org.apache.kafka.connect.runtime.rest.resources.RootResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider org.apache.kafka.connect.runtime.rest.resources.RootResource will be ignored.
dbz_1 | Feb 15, 2019 2:39:15 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
dbz_1 | WARNING: A provider org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource will be ignored.
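The Jersey provider warnings above are also harmless: `RootResource`, `ConnectorsResource`, and `ConnectorPluginsResource` are the classes that serve Kafka Connect's public REST endpoints (`/`, `/connectors`, and `/connector-plugins`), and Jersey is only complaining about how they are registered. A small sketch that maps those resources to the URLs they expose on this worker, using the advertised URI logged earlier; it only builds the URLs and does not contact the worker:

```python
from urllib.parse import urljoin

# Advertised URI reported by RestServer earlier in this log.
ADVERTISED_URI = "http://192.168.32.5:8083/"

# REST resources named in the Jersey warnings and the paths they serve.
ENDPOINTS = {
    "RootResource": "",                               # GET /  -> worker/version info
    "ConnectorsResource": "connectors",               # GET /connectors -> deployed connectors
    "ConnectorPluginsResource": "connector-plugins",  # GET /connector-plugins -> installed plugins
}

def endpoint_urls(base=ADVERTISED_URI):
    """Resolve each resource's path against the worker's advertised URI."""
    return {name: urljoin(base, path) for name, path in ENDPOINTS.items()}
```

Once "Kafka Connect started" appears further down, these endpoints are what you would query (e.g. with curl) to register and inspect Debezium connectors.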
dbz_1 | Feb 15, 2019 2:39:15 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
dbz_1 | WARNING: A provider org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource will be ignored.
dbz_1 | 2019-02-15 14:39:15,454 INFO || Created topic (name=my_connect_offsets, numPartitions=25, replicationFactor=1, replicasAssignments=null, configs={cleanup.policy=compact}) on brokers at kafka:9092 [org.apache.kafka.connect.util.TopicAdmin]
dbz_1 | 2019-02-15 14:39:15,472 INFO || ProducerConfig values:
dbz_1 | acks = all
dbz_1 | batch.size = 16384
dbz_1 | bootstrap.servers = [kafka:9092]
dbz_1 | buffer.memory = 33554432
dbz_1 | client.dns.lookup = default
dbz_1 | client.id =
dbz_1 | compression.type = none
dbz_1 | connections.max.idle.ms = 540000
dbz_1 | delivery.timeout.ms = 2147483647
dbz_1 | enable.idempotence = false
dbz_1 | interceptor.classes = []
dbz_1 | key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
dbz_1 | linger.ms = 0
dbz_1 | max.block.ms = 60000
dbz_1 | max.in.flight.requests.per.connection = 1
dbz_1 | max.request.size = 1048576
dbz_1 | metadata.max.age.ms = 300000
dbz_1 | metric.reporters = []
dbz_1 | metrics.num.samples = 2
dbz_1 | metrics.recording.level = INFO
dbz_1 | metrics.sample.window.ms = 30000
dbz_1 | partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
dbz_1 | receive.buffer.bytes = 32768
dbz_1 | reconnect.backoff.max.ms = 1000
dbz_1 | reconnect.backoff.ms = 50
dbz_1 | request.timeout.ms = 30000
dbz_1 | retries = 2147483647
dbz_1 | retry.backoff.ms = 100
dbz_1 | sasl.client.callback.handler.class = null
dbz_1 | sasl.jaas.config = null
dbz_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
dbz_1 |
sasl.kerberos.min.time.before.relogin = 60000
dbz_1 | sasl.kerberos.service.name = null
dbz_1 | sasl.kerberos.ticket.renew.jitter = 0.05
dbz_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
dbz_1 | sasl.login.callback.handler.class = null
dbz_1 | sasl.login.class = null
dbz_1 | sasl.login.refresh.buffer.seconds = 300
dbz_1 | sasl.login.refresh.min.period.seconds = 60
dbz_1 | sasl.login.refresh.window.factor = 0.8
dbz_1 | sasl.login.refresh.window.jitter = 0.05
dbz_1 | sasl.mechanism = GSSAPI
dbz_1 | security.protocol = PLAINTEXT
dbz_1 | send.buffer.bytes = 131072
dbz_1 | ssl.cipher.suites = null
dbz_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
dbz_1 | ssl.endpoint.identification.algorithm = https
dbz_1 | ssl.key.password = null
dbz_1 | ssl.keymanager.algorithm = SunX509
dbz_1 | ssl.keystore.location = null
dbz_1 | ssl.keystore.password = null
dbz_1 | ssl.keystore.type = JKS
dbz_1 | ssl.protocol = TLS
dbz_1 | ssl.provider = null
dbz_1 | ssl.secure.random.implementation = null
dbz_1 | ssl.trustmanager.algorithm = PKIX
dbz_1 | ssl.truststore.location = null
dbz_1 | ssl.truststore.password = null
dbz_1 | ssl.truststore.type = JKS
dbz_1 | transaction.timeout.ms = 60000
dbz_1 | transactional.id = null
dbz_1 | value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
dbz_1 | [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:15,514 WARN || The configuration 'group.id' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:15,515 WARN || The configuration 'rest.advertised.port' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:15,517 WARN || The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config.
[org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:15,518 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:15,518 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:15,519 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:15,519 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:15,520 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:15,520 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:15,520 WARN || The configuration 'rest.advertised.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:15,521 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:15,521 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:15,521 WARN || The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:15,522 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config.
[org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:15,527 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:15,528 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:15,528 WARN || The configuration 'internal.key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:15,528 WARN || The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:15,529 WARN || The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:15,529 WARN || The configuration 'internal.value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:15,529 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config.
[org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:15,530 INFO || Kafka version : 2.1.0 [org.apache.kafka.common.utils.AppInfoParser]
dbz_1 | 2019-02-15 14:39:15,530 INFO || Kafka commitId : 809be928f1ae004e [org.apache.kafka.common.utils.AppInfoParser]
dbz_1 | 2019-02-15 14:39:15,542 INFO || ConsumerConfig values:
dbz_1 | auto.commit.interval.ms = 5000
dbz_1 | auto.offset.reset = earliest
dbz_1 | bootstrap.servers = [kafka:9092]
dbz_1 | check.crcs = true
dbz_1 | client.dns.lookup = default
dbz_1 | client.id =
dbz_1 | connections.max.idle.ms = 540000
dbz_1 | default.api.timeout.ms = 60000
dbz_1 | enable.auto.commit = false
dbz_1 | exclude.internal.topics = true
dbz_1 | fetch.max.bytes = 52428800
dbz_1 | fetch.max.wait.ms = 500
dbz_1 | fetch.min.bytes = 1
dbz_1 | group.id = 1
dbz_1 | heartbeat.interval.ms = 3000
dbz_1 | interceptor.classes = []
dbz_1 | internal.leave.group.on.close = true
dbz_1 | isolation.level = read_uncommitted
dbz_1 | key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
dbz_1 | max.partition.fetch.bytes = 1048576
dbz_1 | max.poll.interval.ms = 300000
dbz_1 | max.poll.records = 500
dbz_1 | metadata.max.age.ms = 300000
dbz_1 | metric.reporters = []
dbz_1 | metrics.num.samples = 2
dbz_1 | metrics.recording.level = INFO
dbz_1 | metrics.sample.window.ms = 30000
dbz_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
dbz_1 | receive.buffer.bytes = 65536
dbz_1 | reconnect.backoff.max.ms = 1000
dbz_1 | reconnect.backoff.ms = 50
dbz_1 | request.timeout.ms = 30000
dbz_1 | retry.backoff.ms = 100
dbz_1 | sasl.client.callback.handler.class = null
dbz_1 | sasl.jaas.config = null
dbz_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
dbz_1 | sasl.kerberos.min.time.before.relogin = 60000
dbz_1 | sasl.kerberos.service.name = null
dbz_1 | sasl.kerberos.ticket.renew.jitter = 0.05
dbz_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
dbz_1 | sasl.login.callback.handler.class
= null
dbz_1 | sasl.login.class = null
dbz_1 | sasl.login.refresh.buffer.seconds = 300
dbz_1 | sasl.login.refresh.min.period.seconds = 60
dbz_1 | sasl.login.refresh.window.factor = 0.8
dbz_1 | sasl.login.refresh.window.jitter = 0.05
dbz_1 | sasl.mechanism = GSSAPI
dbz_1 | security.protocol = PLAINTEXT
dbz_1 | send.buffer.bytes = 131072
dbz_1 | session.timeout.ms = 10000
dbz_1 | ssl.cipher.suites = null
dbz_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
dbz_1 | ssl.endpoint.identification.algorithm = https
dbz_1 | ssl.key.password = null
dbz_1 | ssl.keymanager.algorithm = SunX509
dbz_1 | ssl.keystore.location = null
dbz_1 | ssl.keystore.password = null
dbz_1 | ssl.keystore.type = JKS
dbz_1 | ssl.protocol = TLS
dbz_1 | ssl.provider = null
dbz_1 | ssl.secure.random.implementation = null
dbz_1 | ssl.trustmanager.algorithm = PKIX
dbz_1 | ssl.truststore.location = null
dbz_1 | ssl.truststore.password = null
dbz_1 | ssl.truststore.type = JKS
dbz_1 | value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
dbz_1 | [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:15,597 WARN || The configuration 'rest.advertised.port' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:15,597 WARN || The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:15,597 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:15,597 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:15,597 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config.
[org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:15,597 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:15,597 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:15,598 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:15,598 WARN || The configuration 'rest.advertised.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:15,598 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:15,598 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:15,598 WARN || The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:15,598 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:15,598 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:15,598 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:15,598 WARN || The configuration 'internal.key.converter' was supplied but isn't a known config.
[org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:15,598 WARN || The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:15,598 WARN || The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:15,598 WARN || The configuration 'internal.value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:15,599 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:15,599 INFO || Kafka version : 2.1.0 [org.apache.kafka.common.utils.AppInfoParser]
dbz_1 | 2019-02-15 14:39:15,599 INFO || Kafka commitId : 809be928f1ae004e [org.apache.kafka.common.utils.AppInfoParser]
dbz_1 | 2019-02-15 14:39:15,632 INFO || Cluster ID: h4azyV1LQEGG9E-J95Z37g [org.apache.kafka.clients.Metadata]
dbz_1 | Feb 15, 2019 2:39:15 PM org.glassfish.jersey.internal.Errors logErrors
dbz_1 | WARNING: The following warnings have been detected: WARNING: The (sub)resource method createConnector in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
dbz_1 | WARNING: The (sub)resource method listConnectors in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
dbz_1 | WARNING: The (sub)resource method listConnectorPlugins in org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource contains empty path annotation.
dbz_1 | WARNING: The (sub)resource method serverInfo in org.apache.kafka.connect.runtime.rest.resources.RootResource contains empty path annotation.
dbz_1 |
dbz_1 | 2019-02-15 14:39:15,675 INFO || Started o.e.j.s.ServletContextHandler@31e04b13{/,null,AVAILABLE} [org.eclipse.jetty.server.handler.ContextHandler]
dbz_1 | 2019-02-15 14:39:15,697 INFO || Started http_192.168.32.58083@552ed807{HTTP/1.1,[http/1.1]}{192.168.32.5:8083} [org.eclipse.jetty.server.AbstractConnector]
dbz_1 | 2019-02-15 14:39:15,699 INFO || Started @9924ms [org.eclipse.jetty.server.Server]
dbz_1 | 2019-02-15 14:39:15,700 INFO || Advertised URI: http://192.168.32.5:8083/ [org.apache.kafka.connect.runtime.rest.RestServer]
dbz_1 | 2019-02-15 14:39:15,700 INFO || REST server listening at http://192.168.32.5:8083/, advertising URL http://192.168.32.5:8083/ [org.apache.kafka.connect.runtime.rest.RestServer]
dbz_1 | 2019-02-15 14:39:15,700 INFO || Kafka Connect started [org.apache.kafka.connect.runtime.Connect]
dbz_1 | 2019-02-15 14:39:17,018 INFO || [Consumer clientId=consumer-1, groupId=1] Discovered group coordinator 192.168.32.4:9092 (id: 2147483646 rack: null) [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
dbz_1 | 2019-02-15 14:39:17,047 INFO || [Consumer clientId=consumer-1, groupId=1] Resetting offset for partition my_connect_offsets-2 to offset 0. [org.apache.kafka.clients.consumer.internals.Fetcher]
dbz_1 | 2019-02-15 14:39:17,048 INFO || [Consumer clientId=consumer-1, groupId=1] Resetting offset for partition my_connect_offsets-4 to offset 0. [org.apache.kafka.clients.consumer.internals.Fetcher]
dbz_1 | 2019-02-15 14:39:17,048 INFO || [Consumer clientId=consumer-1, groupId=1] Resetting offset for partition my_connect_offsets-6 to offset 0. [org.apache.kafka.clients.consumer.internals.Fetcher]
dbz_1 | 2019-02-15 14:39:17,048 INFO || [Consumer clientId=consumer-1, groupId=1] Resetting offset for partition my_connect_offsets-8 to offset 0.
[org.apache.kafka.clients.consumer.internals.Fetcher]
dbz_1 | 2019-02-15 14:39:17,049 INFO || [Consumer clientId=consumer-1, groupId=1] Resetting offset for partition my_connect_offsets-0 to offset 0. [org.apache.kafka.clients.consumer.internals.Fetcher]
dbz_1 | 2019-02-15 14:39:17,049 INFO || [Consumer clientId=consumer-1, groupId=1] Resetting offset for partition my_connect_offsets-18 to offset 0. [org.apache.kafka.clients.consumer.internals.Fetcher]
dbz_1 | 2019-02-15 14:39:17,049 INFO || [Consumer clientId=consumer-1, groupId=1] Resetting offset for partition my_connect_offsets-20 to offset 0. [org.apache.kafka.clients.consumer.internals.Fetcher]
dbz_1 | 2019-02-15 14:39:17,049 INFO || [Consumer clientId=consumer-1, groupId=1] Resetting offset for partition my_connect_offsets-22 to offset 0. [org.apache.kafka.clients.consumer.internals.Fetcher]
dbz_1 | 2019-02-15 14:39:17,050 INFO || [Consumer clientId=consumer-1, groupId=1] Resetting offset for partition my_connect_offsets-24 to offset 0. [org.apache.kafka.clients.consumer.internals.Fetcher]
dbz_1 | 2019-02-15 14:39:17,050 INFO || [Consumer clientId=consumer-1, groupId=1] Resetting offset for partition my_connect_offsets-10 to offset 0. [org.apache.kafka.clients.consumer.internals.Fetcher]
dbz_1 | 2019-02-15 14:39:17,050 INFO || [Consumer clientId=consumer-1, groupId=1] Resetting offset for partition my_connect_offsets-12 to offset 0. [org.apache.kafka.clients.consumer.internals.Fetcher]
dbz_1 | 2019-02-15 14:39:17,050 INFO || [Consumer clientId=consumer-1, groupId=1] Resetting offset for partition my_connect_offsets-14 to offset 0. [org.apache.kafka.clients.consumer.internals.Fetcher]
dbz_1 | 2019-02-15 14:39:17,051 INFO || [Consumer clientId=consumer-1, groupId=1] Resetting offset for partition my_connect_offsets-16 to offset 0.
[org.apache.kafka.clients.consumer.internals.Fetcher]
dbz_1 | 2019-02-15 14:39:17,051 INFO || [Consumer clientId=consumer-1, groupId=1] Resetting offset for partition my_connect_offsets-3 to offset 0. [org.apache.kafka.clients.consumer.internals.Fetcher]
dbz_1 | 2019-02-15 14:39:17,051 INFO || [Consumer clientId=consumer-1, groupId=1] Resetting offset for partition my_connect_offsets-5 to offset 0. [org.apache.kafka.clients.consumer.internals.Fetcher]
dbz_1 | 2019-02-15 14:39:17,051 INFO || [Consumer clientId=consumer-1, groupId=1] Resetting offset for partition my_connect_offsets-7 to offset 0. [org.apache.kafka.clients.consumer.internals.Fetcher]
dbz_1 | 2019-02-15 14:39:17,051 INFO || [Consumer clientId=consumer-1, groupId=1] Resetting offset for partition my_connect_offsets-9 to offset 0. [org.apache.kafka.clients.consumer.internals.Fetcher]
dbz_1 | 2019-02-15 14:39:17,051 INFO || [Consumer clientId=consumer-1, groupId=1] Resetting offset for partition my_connect_offsets-1 to offset 0. [org.apache.kafka.clients.consumer.internals.Fetcher]
dbz_1 | 2019-02-15 14:39:17,052 INFO || [Consumer clientId=consumer-1, groupId=1] Resetting offset for partition my_connect_offsets-19 to offset 0. [org.apache.kafka.clients.consumer.internals.Fetcher]
dbz_1 | 2019-02-15 14:39:17,052 INFO || [Consumer clientId=consumer-1, groupId=1] Resetting offset for partition my_connect_offsets-21 to offset 0. [org.apache.kafka.clients.consumer.internals.Fetcher]
dbz_1 | 2019-02-15 14:39:17,052 INFO || [Consumer clientId=consumer-1, groupId=1] Resetting offset for partition my_connect_offsets-23 to offset 0. [org.apache.kafka.clients.consumer.internals.Fetcher]
dbz_1 | 2019-02-15 14:39:17,052 INFO || [Consumer clientId=consumer-1, groupId=1] Resetting offset for partition my_connect_offsets-11 to offset 0.
[org.apache.kafka.clients.consumer.internals.Fetcher] dbz_1 | 2019-02-15 14:39:17,054 INFO || [Consumer clientId=consumer-1, groupId=1] Resetting offset for partition my_connect_offsets-13 to offset 0. [org.apache.kafka.clients.consumer.internals.Fetcher] dbz_1 | 2019-02-15 14:39:17,054 INFO || [Consumer clientId=consumer-1, groupId=1] Resetting offset for partition my_connect_offsets-15 to offset 0. [org.apache.kafka.clients.consumer.internals.Fetcher] dbz_1 | 2019-02-15 14:39:17,056 INFO || [Consumer clientId=consumer-1, groupId=1] Resetting offset for partition my_connect_offsets-17 to offset 0. [org.apache.kafka.clients.consumer.internals.Fetcher] dbz_1 | 2019-02-15 14:39:17,057 INFO || Finished reading KafkaBasedLog for topic my_connect_offsets [org.apache.kafka.connect.util.KafkaBasedLog] dbz_1 | 2019-02-15 14:39:17,059 INFO || Started KafkaBasedLog for topic my_connect_offsets [org.apache.kafka.connect.util.KafkaBasedLog] dbz_1 | 2019-02-15 14:39:17,059 INFO || Finished reading offsets topic and starting KafkaOffsetBackingStore [org.apache.kafka.connect.storage.KafkaOffsetBackingStore] dbz_1 | 2019-02-15 14:39:17,062 INFO || Worker started [org.apache.kafka.connect.runtime.Worker] dbz_1 | 2019-02-15 14:39:17,062 INFO || Starting KafkaBasedLog with topic connect-status [org.apache.kafka.connect.util.KafkaBasedLog] dbz_1 | 2019-02-15 14:39:17,063 INFO || AdminClientConfig values: dbz_1 | bootstrap.servers = [kafka:9092] dbz_1 | client.dns.lookup = default dbz_1 | client.id = dbz_1 | connections.max.idle.ms = 300000 dbz_1 | metadata.max.age.ms = 300000 dbz_1 | metric.reporters = [] dbz_1 | metrics.num.samples = 2 dbz_1 | metrics.recording.level = INFO dbz_1 | metrics.sample.window.ms = 30000 dbz_1 | receive.buffer.bytes = 65536 dbz_1 | reconnect.backoff.max.ms = 1000 dbz_1 | reconnect.backoff.ms = 50 dbz_1 | request.timeout.ms = 120000 dbz_1 | retries = 5 dbz_1 | retry.backoff.ms = 100 dbz_1 | sasl.client.callback.handler.class = null dbz_1 | sasl.jaas.config = 
null dbz_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit dbz_1 | sasl.kerberos.min.time.before.relogin = 60000 dbz_1 | sasl.kerberos.service.name = null dbz_1 | sasl.kerberos.ticket.renew.jitter = 0.05 dbz_1 | sasl.kerberos.ticket.renew.window.factor = 0.8 dbz_1 | sasl.login.callback.handler.class = null dbz_1 | sasl.login.class = null dbz_1 | sasl.login.refresh.buffer.seconds = 300 dbz_1 | sasl.login.refresh.min.period.seconds = 60 dbz_1 | sasl.login.refresh.window.factor = 0.8 dbz_1 | sasl.login.refresh.window.jitter = 0.05 dbz_1 | sasl.mechanism = GSSAPI dbz_1 | security.protocol = PLAINTEXT dbz_1 | send.buffer.bytes = 131072 dbz_1 | ssl.cipher.suites = null dbz_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] dbz_1 | ssl.endpoint.identification.algorithm = https dbz_1 | ssl.key.password = null dbz_1 | ssl.keymanager.algorithm = SunX509 dbz_1 | ssl.keystore.location = null dbz_1 | ssl.keystore.password = null dbz_1 | ssl.keystore.type = JKS dbz_1 | ssl.protocol = TLS dbz_1 | ssl.provider = null dbz_1 | ssl.secure.random.implementation = null dbz_1 | ssl.trustmanager.algorithm = PKIX dbz_1 | ssl.truststore.location = null dbz_1 | ssl.truststore.password = null dbz_1 | ssl.truststore.type = JKS dbz_1 | [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,065 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,066 WARN || The configuration 'rest.advertised.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,066 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,066 WARN || The configuration 'group.id' was supplied but isn't a known config. 
[org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,066 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,066 WARN || The configuration 'rest.advertised.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,067 WARN || The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,070 WARN || The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,070 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,070 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,071 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,071 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,071 WARN || The configuration 'internal.key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,071 WARN || The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,072 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. 
[org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,072 WARN || The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,072 WARN || The configuration 'internal.value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,072 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,072 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,073 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,073 WARN || The configuration 'key.converter' was supplied but isn't a known config. 
[org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,073 INFO || Kafka version : 2.1.0 [org.apache.kafka.common.utils.AppInfoParser] dbz_1 | 2019-02-15 14:39:17,073 INFO || Kafka commitId : 809be928f1ae004e [org.apache.kafka.common.utils.AppInfoParser] dbz_1 | 2019-02-15 14:39:17,231 INFO || Created topic (name=connect-status, numPartitions=5, replicationFactor=1, replicasAssignments=null, configs={cleanup.policy=compact}) on brokers at kafka:9092 [org.apache.kafka.connect.util.TopicAdmin] dbz_1 | 2019-02-15 14:39:17,235 INFO || ProducerConfig values: dbz_1 | acks = all dbz_1 | batch.size = 16384 dbz_1 | bootstrap.servers = [kafka:9092] dbz_1 | buffer.memory = 33554432 dbz_1 | client.dns.lookup = default dbz_1 | client.id = dbz_1 | compression.type = none dbz_1 | connections.max.idle.ms = 540000 dbz_1 | delivery.timeout.ms = 120000 dbz_1 | enable.idempotence = false dbz_1 | interceptor.classes = [] dbz_1 | key.serializer = class org.apache.kafka.common.serialization.StringSerializer dbz_1 | linger.ms = 0 dbz_1 | max.block.ms = 60000 dbz_1 | max.in.flight.requests.per.connection = 1 dbz_1 | max.request.size = 1048576 dbz_1 | metadata.max.age.ms = 300000 dbz_1 | metric.reporters = [] dbz_1 | metrics.num.samples = 2 dbz_1 | metrics.recording.level = INFO dbz_1 | metrics.sample.window.ms = 30000 dbz_1 | partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner dbz_1 | receive.buffer.bytes = 32768 dbz_1 | reconnect.backoff.max.ms = 1000 dbz_1 | reconnect.backoff.ms = 50 dbz_1 | request.timeout.ms = 30000 dbz_1 | retries = 0 dbz_1 | retry.backoff.ms = 100 dbz_1 | sasl.client.callback.handler.class = null dbz_1 | sasl.jaas.config = null dbz_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit dbz_1 | sasl.kerberos.min.time.before.relogin = 60000 dbz_1 | sasl.kerberos.service.name = null dbz_1 | sasl.kerberos.ticket.renew.jitter = 0.05 dbz_1 | sasl.kerberos.ticket.renew.window.factor = 0.8 dbz_1 | 
sasl.login.callback.handler.class = null dbz_1 | sasl.login.class = null dbz_1 | sasl.login.refresh.buffer.seconds = 300 dbz_1 | sasl.login.refresh.min.period.seconds = 60 dbz_1 | sasl.login.refresh.window.factor = 0.8 dbz_1 | sasl.login.refresh.window.jitter = 0.05 dbz_1 | sasl.mechanism = GSSAPI dbz_1 | security.protocol = PLAINTEXT dbz_1 | send.buffer.bytes = 131072 dbz_1 | ssl.cipher.suites = null dbz_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] dbz_1 | ssl.endpoint.identification.algorithm = https dbz_1 | ssl.key.password = null dbz_1 | ssl.keymanager.algorithm = SunX509 dbz_1 | ssl.keystore.location = null dbz_1 | ssl.keystore.password = null dbz_1 | ssl.keystore.type = JKS dbz_1 | ssl.protocol = TLS dbz_1 | ssl.provider = null dbz_1 | ssl.secure.random.implementation = null dbz_1 | ssl.trustmanager.algorithm = PKIX dbz_1 | ssl.truststore.location = null dbz_1 | ssl.truststore.password = null dbz_1 | ssl.truststore.type = JKS dbz_1 | transaction.timeout.ms = 60000 dbz_1 | transactional.id = null dbz_1 | value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer dbz_1 | [org.apache.kafka.clients.producer.ProducerConfig] dbz_1 | 2019-02-15 14:39:17,244 WARN || The configuration 'group.id' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] dbz_1 | 2019-02-15 14:39:17,245 WARN || The configuration 'rest.advertised.port' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] dbz_1 | 2019-02-15 14:39:17,245 WARN || The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] dbz_1 | 2019-02-15 14:39:17,245 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] dbz_1 | 2019-02-15 14:39:17,245 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. 
[org.apache.kafka.clients.producer.ProducerConfig] dbz_1 | 2019-02-15 14:39:17,245 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] dbz_1 | 2019-02-15 14:39:17,245 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] dbz_1 | 2019-02-15 14:39:17,245 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] dbz_1 | 2019-02-15 14:39:17,245 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] dbz_1 | 2019-02-15 14:39:17,245 WARN || The configuration 'rest.advertised.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] dbz_1 | 2019-02-15 14:39:17,245 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] dbz_1 | 2019-02-15 14:39:17,245 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] dbz_1 | 2019-02-15 14:39:17,245 WARN || The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] dbz_1 | 2019-02-15 14:39:17,245 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] dbz_1 | 2019-02-15 14:39:17,245 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] dbz_1 | 2019-02-15 14:39:17,245 WARN || The configuration 'rest.port' was supplied but isn't a known config. 
[org.apache.kafka.clients.producer.ProducerConfig] dbz_1 | 2019-02-15 14:39:17,245 WARN || The configuration 'internal.key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] dbz_1 | 2019-02-15 14:39:17,246 WARN || The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] dbz_1 | 2019-02-15 14:39:17,246 WARN || The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] dbz_1 | 2019-02-15 14:39:17,246 WARN || The configuration 'internal.value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] dbz_1 | 2019-02-15 14:39:17,246 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.producer.ProducerConfig] dbz_1 | 2019-02-15 14:39:17,246 INFO || Kafka version : 2.1.0 [org.apache.kafka.common.utils.AppInfoParser] dbz_1 | 2019-02-15 14:39:17,246 INFO || Kafka commitId : 809be928f1ae004e [org.apache.kafka.common.utils.AppInfoParser] dbz_1 | 2019-02-15 14:39:17,247 INFO || ConsumerConfig values: dbz_1 | auto.commit.interval.ms = 5000 dbz_1 | auto.offset.reset = earliest dbz_1 | bootstrap.servers = [kafka:9092] dbz_1 | check.crcs = true dbz_1 | client.dns.lookup = default dbz_1 | client.id = dbz_1 | connections.max.idle.ms = 540000 dbz_1 | default.api.timeout.ms = 60000 dbz_1 | enable.auto.commit = false dbz_1 | exclude.internal.topics = true dbz_1 | fetch.max.bytes = 52428800 dbz_1 | fetch.max.wait.ms = 500 dbz_1 | fetch.min.bytes = 1 dbz_1 | group.id = 1 dbz_1 | heartbeat.interval.ms = 3000 dbz_1 | interceptor.classes = [] dbz_1 | internal.leave.group.on.close = true dbz_1 | isolation.level = read_uncommitted dbz_1 | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer dbz_1 | max.partition.fetch.bytes = 1048576 
dbz_1 | max.poll.interval.ms = 300000 dbz_1 | max.poll.records = 500 dbz_1 | metadata.max.age.ms = 300000 dbz_1 | metric.reporters = [] dbz_1 | metrics.num.samples = 2 dbz_1 | metrics.recording.level = INFO dbz_1 | metrics.sample.window.ms = 30000 dbz_1 | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor] dbz_1 | receive.buffer.bytes = 65536 dbz_1 | reconnect.backoff.max.ms = 1000 dbz_1 | reconnect.backoff.ms = 50 dbz_1 | request.timeout.ms = 30000 dbz_1 | retry.backoff.ms = 100 dbz_1 | sasl.client.callback.handler.class = null dbz_1 | sasl.jaas.config = null dbz_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit dbz_1 | sasl.kerberos.min.time.before.relogin = 60000 dbz_1 | sasl.kerberos.service.name = null dbz_1 | sasl.kerberos.ticket.renew.jitter = 0.05 dbz_1 | sasl.kerberos.ticket.renew.window.factor = 0.8 dbz_1 | sasl.login.callback.handler.class = null dbz_1 | sasl.login.class = null dbz_1 | sasl.login.refresh.buffer.seconds = 300 dbz_1 | sasl.login.refresh.min.period.seconds = 60 dbz_1 | sasl.login.refresh.window.factor = 0.8 dbz_1 | sasl.login.refresh.window.jitter = 0.05 dbz_1 | sasl.mechanism = GSSAPI dbz_1 | security.protocol = PLAINTEXT dbz_1 | send.buffer.bytes = 131072 dbz_1 | session.timeout.ms = 10000 dbz_1 | ssl.cipher.suites = null dbz_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] dbz_1 | ssl.endpoint.identification.algorithm = https dbz_1 | ssl.key.password = null dbz_1 | ssl.keymanager.algorithm = SunX509 dbz_1 | ssl.keystore.location = null dbz_1 | ssl.keystore.password = null dbz_1 | ssl.keystore.type = JKS dbz_1 | ssl.protocol = TLS dbz_1 | ssl.provider = null dbz_1 | ssl.secure.random.implementation = null dbz_1 | ssl.trustmanager.algorithm = PKIX dbz_1 | ssl.truststore.location = null dbz_1 | ssl.truststore.password = null dbz_1 | ssl.truststore.type = JKS dbz_1 | value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer dbz_1 | 
[org.apache.kafka.clients.consumer.ConsumerConfig] dbz_1 | 2019-02-15 14:39:17,252 WARN || The configuration 'rest.advertised.port' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] dbz_1 | 2019-02-15 14:39:17,252 WARN || The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] dbz_1 | 2019-02-15 14:39:17,252 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] dbz_1 | 2019-02-15 14:39:17,252 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] dbz_1 | 2019-02-15 14:39:17,253 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] dbz_1 | 2019-02-15 14:39:17,253 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] dbz_1 | 2019-02-15 14:39:17,253 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] dbz_1 | 2019-02-15 14:39:17,253 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] dbz_1 | 2019-02-15 14:39:17,253 WARN || The configuration 'rest.advertised.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] dbz_1 | 2019-02-15 14:39:17,253 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] dbz_1 | 2019-02-15 14:39:17,254 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. 
[org.apache.kafka.clients.consumer.ConsumerConfig] dbz_1 | 2019-02-15 14:39:17,254 WARN || The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] dbz_1 | 2019-02-15 14:39:17,254 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] dbz_1 | 2019-02-15 14:39:17,254 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] dbz_1 | 2019-02-15 14:39:17,254 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] dbz_1 | 2019-02-15 14:39:17,254 WARN || The configuration 'internal.key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] dbz_1 | 2019-02-15 14:39:17,254 WARN || The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] dbz_1 | 2019-02-15 14:39:17,254 WARN || The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] dbz_1 | 2019-02-15 14:39:17,254 WARN || The configuration 'internal.value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.consumer.ConsumerConfig] dbz_1 | 2019-02-15 14:39:17,254 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. 
[org.apache.kafka.clients.consumer.ConsumerConfig] dbz_1 | 2019-02-15 14:39:17,254 INFO || Kafka version : 2.1.0 [org.apache.kafka.common.utils.AppInfoParser] dbz_1 | 2019-02-15 14:39:17,254 INFO || Kafka commitId : 809be928f1ae004e [org.apache.kafka.common.utils.AppInfoParser] dbz_1 | 2019-02-15 14:39:17,267 INFO || Cluster ID: h4azyV1LQEGG9E-J95Z37g [org.apache.kafka.clients.Metadata] dbz_1 | 2019-02-15 14:39:17,294 INFO || [Consumer clientId=consumer-2, groupId=1] Discovered group coordinator 192.168.32.4:9092 (id: 2147483646 rack: null) [org.apache.kafka.clients.consumer.internals.AbstractCoordinator] dbz_1 | 2019-02-15 14:39:17,303 INFO || [Consumer clientId=consumer-2, groupId=1] Resetting offset for partition connect-status-1 to offset 0. [org.apache.kafka.clients.consumer.internals.Fetcher] dbz_1 | 2019-02-15 14:39:17,303 INFO || [Consumer clientId=consumer-2, groupId=1] Resetting offset for partition connect-status-2 to offset 0. [org.apache.kafka.clients.consumer.internals.Fetcher] dbz_1 | 2019-02-15 14:39:17,303 INFO || [Consumer clientId=consumer-2, groupId=1] Resetting offset for partition connect-status-0 to offset 0. [org.apache.kafka.clients.consumer.internals.Fetcher] dbz_1 | 2019-02-15 14:39:17,303 INFO || [Consumer clientId=consumer-2, groupId=1] Resetting offset for partition connect-status-3 to offset 0. [org.apache.kafka.clients.consumer.internals.Fetcher] dbz_1 | 2019-02-15 14:39:17,304 INFO || [Consumer clientId=consumer-2, groupId=1] Resetting offset for partition connect-status-4 to offset 0. 
[org.apache.kafka.clients.consumer.internals.Fetcher] dbz_1 | 2019-02-15 14:39:17,304 INFO || Finished reading KafkaBasedLog for topic connect-status [org.apache.kafka.connect.util.KafkaBasedLog] dbz_1 | 2019-02-15 14:39:17,304 INFO || Started KafkaBasedLog for topic connect-status [org.apache.kafka.connect.util.KafkaBasedLog] dbz_1 | 2019-02-15 14:39:17,305 INFO || Starting KafkaConfigBackingStore [org.apache.kafka.connect.storage.KafkaConfigBackingStore] dbz_1 | 2019-02-15 14:39:17,305 INFO || Starting KafkaBasedLog with topic my_connect_configs [org.apache.kafka.connect.util.KafkaBasedLog] dbz_1 | 2019-02-15 14:39:17,310 INFO || AdminClientConfig values: dbz_1 | bootstrap.servers = [kafka:9092] dbz_1 | client.dns.lookup = default dbz_1 | client.id = dbz_1 | connections.max.idle.ms = 300000 dbz_1 | metadata.max.age.ms = 300000 dbz_1 | metric.reporters = [] dbz_1 | metrics.num.samples = 2 dbz_1 | metrics.recording.level = INFO dbz_1 | metrics.sample.window.ms = 30000 dbz_1 | receive.buffer.bytes = 65536 dbz_1 | reconnect.backoff.max.ms = 1000 dbz_1 | reconnect.backoff.ms = 50 dbz_1 | request.timeout.ms = 120000 dbz_1 | retries = 5 dbz_1 | retry.backoff.ms = 100 dbz_1 | sasl.client.callback.handler.class = null dbz_1 | sasl.jaas.config = null dbz_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit dbz_1 | sasl.kerberos.min.time.before.relogin = 60000 dbz_1 | sasl.kerberos.service.name = null dbz_1 | sasl.kerberos.ticket.renew.jitter = 0.05 dbz_1 | sasl.kerberos.ticket.renew.window.factor = 0.8 dbz_1 | sasl.login.callback.handler.class = null dbz_1 | sasl.login.class = null dbz_1 | sasl.login.refresh.buffer.seconds = 300 dbz_1 | sasl.login.refresh.min.period.seconds = 60 dbz_1 | sasl.login.refresh.window.factor = 0.8 dbz_1 | sasl.login.refresh.window.jitter = 0.05 dbz_1 | sasl.mechanism = GSSAPI dbz_1 | security.protocol = PLAINTEXT dbz_1 | send.buffer.bytes = 131072 dbz_1 | ssl.cipher.suites = null dbz_1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] dbz_1 | 
ssl.endpoint.identification.algorithm = https dbz_1 | ssl.key.password = null dbz_1 | ssl.keymanager.algorithm = SunX509 dbz_1 | ssl.keystore.location = null dbz_1 | ssl.keystore.password = null dbz_1 | ssl.keystore.type = JKS dbz_1 | ssl.protocol = TLS dbz_1 | ssl.provider = null dbz_1 | ssl.secure.random.implementation = null dbz_1 | ssl.trustmanager.algorithm = PKIX dbz_1 | ssl.truststore.location = null dbz_1 | ssl.truststore.password = null dbz_1 | ssl.truststore.type = JKS dbz_1 | [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,315 WARN || The configuration 'config.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,315 WARN || The configuration 'rest.advertised.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,316 WARN || The configuration 'status.storage.topic' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,317 WARN || The configuration 'group.id' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,317 WARN || The configuration 'rest.host.name' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,318 WARN || The configuration 'rest.advertised.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,318 WARN || The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,327 WARN || The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. 
[org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,327 WARN || The configuration 'plugin.path' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,328 WARN || The configuration 'config.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,328 WARN || The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,328 WARN || The configuration 'rest.port' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,328 WARN || The configuration 'internal.key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,329 WARN || The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,329 WARN || The configuration 'status.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,329 WARN || The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,329 WARN || The configuration 'internal.value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,329 WARN || The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,329 WARN || The configuration 'offset.storage.topic' was supplied but isn't a known config. 
[org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,330 WARN || The configuration 'value.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,330 WARN || The configuration 'key.converter' was supplied but isn't a known config. [org.apache.kafka.clients.admin.AdminClientConfig] dbz_1 | 2019-02-15 14:39:17,330 INFO || Kafka version : 2.1.0 [org.apache.kafka.common.utils.AppInfoParser] dbz_1 | 2019-02-15 14:39:17,331 INFO || Kafka commitId : 809be928f1ae004e [org.apache.kafka.common.utils.AppInfoParser] dbz_1 | 2019-02-15 14:39:17,401 INFO || Created topic (name=my_connect_configs, numPartitions=1, replicationFactor=1, replicasAssignments=null, configs={cleanup.policy=compact}) on brokers at kafka:9092 [org.apache.kafka.connect.util.TopicAdmin] dbz_1 | 2019-02-15 14:39:17,404 INFO || ProducerConfig values: dbz_1 | acks = all dbz_1 | batch.size = 16384 dbz_1 | bootstrap.servers = [kafka:9092] dbz_1 | buffer.memory = 33554432 dbz_1 | client.dns.lookup = default dbz_1 | client.id = dbz_1 | compression.type = none dbz_1 | connections.max.idle.ms = 540000 dbz_1 | delivery.timeout.ms = 2147483647 dbz_1 | enable.idempotence = false dbz_1 | interceptor.classes = [] dbz_1 | key.serializer = class org.apache.kafka.common.serialization.StringSerializer dbz_1 | linger.ms = 0 dbz_1 | max.block.ms = 60000 dbz_1 | max.in.flight.requests.per.connection = 1 dbz_1 | max.request.size = 1048576 dbz_1 | metadata.max.age.ms = 300000 dbz_1 | metric.reporters = [] dbz_1 | metrics.num.samples = 2 dbz_1 | metrics.recording.level = INFO dbz_1 | metrics.sample.window.ms = 30000 dbz_1 | partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner dbz_1 | receive.buffer.bytes = 32768 dbz_1 | reconnect.backoff.max.ms = 1000 dbz_1 | reconnect.backoff.ms = 50 dbz_1 | request.timeout.ms = 30000 dbz_1 | retries = 2147483647 dbz_1 | retry.backoff.ms = 100 dbz_1 | 
dbz_1 |     sasl.client.callback.handler.class = null
dbz_1 |     sasl.jaas.config = null
dbz_1 |     sasl.kerberos.kinit.cmd = /usr/bin/kinit
dbz_1 |     sasl.kerberos.min.time.before.relogin = 60000
dbz_1 |     sasl.kerberos.service.name = null
dbz_1 |     sasl.kerberos.ticket.renew.jitter = 0.05
dbz_1 |     sasl.kerberos.ticket.renew.window.factor = 0.8
dbz_1 |     sasl.login.callback.handler.class = null
dbz_1 |     sasl.login.class = null
dbz_1 |     sasl.login.refresh.buffer.seconds = 300
dbz_1 |     sasl.login.refresh.min.period.seconds = 60
dbz_1 |     sasl.login.refresh.window.factor = 0.8
dbz_1 |     sasl.login.refresh.window.jitter = 0.05
dbz_1 |     sasl.mechanism = GSSAPI
dbz_1 |     security.protocol = PLAINTEXT
dbz_1 |     send.buffer.bytes = 131072
dbz_1 |     ssl.cipher.suites = null
dbz_1 |     ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
dbz_1 |     ssl.endpoint.identification.algorithm = https
dbz_1 |     ssl.key.password = null
dbz_1 |     ssl.keymanager.algorithm = SunX509
dbz_1 |     ssl.keystore.location = null
dbz_1 |     ssl.keystore.password = null
dbz_1 |     ssl.keystore.type = JKS
dbz_1 |     ssl.protocol = TLS
dbz_1 |     ssl.provider = null
dbz_1 |     ssl.secure.random.implementation = null
dbz_1 |     ssl.trustmanager.algorithm = PKIX
dbz_1 |     ssl.truststore.location = null
dbz_1 |     ssl.truststore.password = null
dbz_1 |     ssl.truststore.type = JKS
dbz_1 |     transaction.timeout.ms = 60000
dbz_1 |     transactional.id = null
dbz_1 |     value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
dbz_1 |  [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:17,415 WARN   ||  The configuration 'group.id' was supplied but isn't a known config.   [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:17,415 WARN   ||  The configuration 'rest.advertised.port' was supplied but isn't a known config.   [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:17,416 WARN   ||  The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config.   [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:17,416 WARN   ||  The configuration 'plugin.path' was supplied but isn't a known config.   [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:17,416 WARN   ||  The configuration 'status.storage.replication.factor' was supplied but isn't a known config.   [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:17,417 WARN   ||  The configuration 'offset.storage.topic' was supplied but isn't a known config.   [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:17,417 WARN   ||  The configuration 'value.converter' was supplied but isn't a known config.   [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:17,417 WARN   ||  The configuration 'key.converter' was supplied but isn't a known config.   [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:17,418 WARN   ||  The configuration 'config.storage.topic' was supplied but isn't a known config.   [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:17,418 WARN   ||  The configuration 'rest.advertised.host.name' was supplied but isn't a known config.   [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:17,418 WARN   ||  The configuration 'status.storage.topic' was supplied but isn't a known config.   [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:17,418 WARN   ||  The configuration 'rest.host.name' was supplied but isn't a known config.   [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:17,418 WARN   ||  The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config.   [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:17,418 WARN   ||  The configuration 'config.storage.replication.factor' was supplied but isn't a known config.   [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:17,419 WARN   ||  The configuration 'offset.flush.interval.ms' was supplied but isn't a known config.   [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:17,422 WARN   ||  The configuration 'rest.port' was supplied but isn't a known config.   [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:17,422 WARN   ||  The configuration 'internal.key.converter' was supplied but isn't a known config.   [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:17,422 WARN   ||  The configuration 'key.converter.schemas.enable' was supplied but isn't a known config.   [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:17,423 WARN   ||  The configuration 'value.converter.schemas.enable' was supplied but isn't a known config.   [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:17,423 WARN   ||  The configuration 'internal.value.converter' was supplied but isn't a known config.   [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:17,423 WARN   ||  The configuration 'offset.storage.replication.factor' was supplied but isn't a known config.   [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:39:17,424 INFO   ||  Kafka version : 2.1.0   [org.apache.kafka.common.utils.AppInfoParser]
dbz_1 | 2019-02-15 14:39:17,424 INFO   ||  Kafka commitId : 809be928f1ae004e   [org.apache.kafka.common.utils.AppInfoParser]
dbz_1 | 2019-02-15 14:39:17,425 INFO   ||  ConsumerConfig values:
dbz_1 |     auto.commit.interval.ms = 5000
dbz_1 |     auto.offset.reset = earliest
dbz_1 |     bootstrap.servers = [kafka:9092]
dbz_1 |     check.crcs = true
dbz_1 |     client.dns.lookup = default
dbz_1 |     client.id =
dbz_1 |     connections.max.idle.ms = 540000
dbz_1 |     default.api.timeout.ms = 60000
dbz_1 |     enable.auto.commit = false
dbz_1 |     exclude.internal.topics = true
dbz_1 |     fetch.max.bytes = 52428800
dbz_1 |     fetch.max.wait.ms = 500
dbz_1 |     fetch.min.bytes = 1
dbz_1 |     group.id = 1
dbz_1 |     heartbeat.interval.ms = 3000
dbz_1 |     interceptor.classes = []
dbz_1 |     internal.leave.group.on.close = true
dbz_1 |     isolation.level = read_uncommitted
dbz_1 |     key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
dbz_1 |     max.partition.fetch.bytes = 1048576
dbz_1 |     max.poll.interval.ms = 300000
dbz_1 |     max.poll.records = 500
dbz_1 |     metadata.max.age.ms = 300000
dbz_1 |     metric.reporters = []
dbz_1 |     metrics.num.samples = 2
dbz_1 |     metrics.recording.level = INFO
dbz_1 |     metrics.sample.window.ms = 30000
dbz_1 |     partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
dbz_1 |     receive.buffer.bytes = 65536
dbz_1 |     reconnect.backoff.max.ms = 1000
dbz_1 |     reconnect.backoff.ms = 50
dbz_1 |     request.timeout.ms = 30000
dbz_1 |     retry.backoff.ms = 100
dbz_1 |     sasl.client.callback.handler.class = null
dbz_1 |     sasl.jaas.config = null
dbz_1 |     sasl.kerberos.kinit.cmd = /usr/bin/kinit
dbz_1 |     sasl.kerberos.min.time.before.relogin = 60000
dbz_1 |     sasl.kerberos.service.name = null
dbz_1 |     sasl.kerberos.ticket.renew.jitter = 0.05
dbz_1 |     sasl.kerberos.ticket.renew.window.factor = 0.8
dbz_1 |     sasl.login.callback.handler.class = null
dbz_1 |     sasl.login.class = null
dbz_1 |     sasl.login.refresh.buffer.seconds = 300
dbz_1 |     sasl.login.refresh.min.period.seconds = 60
dbz_1 |     sasl.login.refresh.window.factor = 0.8
dbz_1 |     sasl.login.refresh.window.jitter = 0.05
dbz_1 |     sasl.mechanism = GSSAPI
dbz_1 |     security.protocol = PLAINTEXT
dbz_1 |     send.buffer.bytes = 131072
dbz_1 |     session.timeout.ms = 10000
dbz_1 |     ssl.cipher.suites = null
dbz_1 |     ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
dbz_1 |     ssl.endpoint.identification.algorithm = https
dbz_1 |     ssl.key.password = null
dbz_1 |     ssl.keymanager.algorithm = SunX509
dbz_1 |     ssl.keystore.location = null
dbz_1 |     ssl.keystore.password = null
dbz_1 |     ssl.keystore.type = JKS
dbz_1 |     ssl.protocol = TLS
dbz_1 |     ssl.provider = null
dbz_1 |     ssl.secure.random.implementation = null
dbz_1 |     ssl.trustmanager.algorithm = PKIX
dbz_1 |     ssl.truststore.location = null
dbz_1 |     ssl.truststore.password = null
dbz_1 |     ssl.truststore.type = JKS
dbz_1 |     value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
dbz_1 |  [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:17,430 WARN   ||  The configuration 'rest.advertised.port' was supplied but isn't a known config.   [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:17,431 WARN   ||  The configuration 'task.shutdown.graceful.timeout.ms' was supplied but isn't a known config.   [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:17,431 WARN   ||  The configuration 'plugin.path' was supplied but isn't a known config.   [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:17,431 WARN   ||  The configuration 'status.storage.replication.factor' was supplied but isn't a known config.   [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:17,431 WARN   ||  The configuration 'offset.storage.topic' was supplied but isn't a known config.   [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:17,431 WARN   ||  The configuration 'value.converter' was supplied but isn't a known config.   [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:17,431 WARN   ||  The configuration 'key.converter' was supplied but isn't a known config.   [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:17,431 WARN   ||  The configuration 'config.storage.topic' was supplied but isn't a known config.   [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:17,431 WARN   ||  The configuration 'rest.advertised.host.name' was supplied but isn't a known config.   [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:17,431 WARN   ||  The configuration 'status.storage.topic' was supplied but isn't a known config.   [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:17,431 WARN   ||  The configuration 'rest.host.name' was supplied but isn't a known config.   [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:17,431 WARN   ||  The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config.   [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:17,431 WARN   ||  The configuration 'config.storage.replication.factor' was supplied but isn't a known config.   [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:17,431 WARN   ||  The configuration 'offset.flush.interval.ms' was supplied but isn't a known config.   [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:17,431 WARN   ||  The configuration 'rest.port' was supplied but isn't a known config.   [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:17,431 WARN   ||  The configuration 'internal.key.converter' was supplied but isn't a known config.   [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:17,431 WARN   ||  The configuration 'key.converter.schemas.enable' was supplied but isn't a known config.   [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:17,431 WARN   ||  The configuration 'value.converter.schemas.enable' was supplied but isn't a known config.   [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:17,431 WARN   ||  The configuration 'internal.value.converter' was supplied but isn't a known config.   [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:17,431 WARN   ||  The configuration 'offset.storage.replication.factor' was supplied but isn't a known config.   [org.apache.kafka.clients.consumer.ConsumerConfig]
dbz_1 | 2019-02-15 14:39:17,431 INFO   ||  Kafka version : 2.1.0   [org.apache.kafka.common.utils.AppInfoParser]
dbz_1 | 2019-02-15 14:39:17,431 INFO   ||  Kafka commitId : 809be928f1ae004e   [org.apache.kafka.common.utils.AppInfoParser]
dbz_1 | 2019-02-15 14:39:17,448 INFO   ||  Cluster ID: h4azyV1LQEGG9E-J95Z37g   [org.apache.kafka.clients.Metadata]
dbz_1 | 2019-02-15 14:39:17,481 INFO   ||  [Consumer clientId=consumer-3, groupId=1] Discovered group coordinator 192.168.32.4:9092 (id: 2147483646 rack: null)   [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
dbz_1 | 2019-02-15 14:39:17,489 INFO   ||  [Consumer clientId=consumer-3, groupId=1] Resetting offset for partition my_connect_configs-0 to offset 0.   [org.apache.kafka.clients.consumer.internals.Fetcher]
dbz_1 | 2019-02-15 14:39:17,489 INFO   ||  Finished reading KafkaBasedLog for topic my_connect_configs   [org.apache.kafka.connect.util.KafkaBasedLog]
dbz_1 | 2019-02-15 14:39:17,490 INFO   ||  Started KafkaBasedLog for topic my_connect_configs   [org.apache.kafka.connect.util.KafkaBasedLog]
dbz_1 | 2019-02-15 14:39:17,490 INFO   ||  Started KafkaConfigBackingStore   [org.apache.kafka.connect.storage.KafkaConfigBackingStore]
dbz_1 | 2019-02-15 14:39:17,490 INFO   ||  Herder started   [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
dbz_1 | 2019-02-15 14:39:17,501 INFO   ||  Cluster ID: h4azyV1LQEGG9E-J95Z37g   [org.apache.kafka.clients.Metadata]
dbz_1 | 2019-02-15 14:39:17,503 INFO   ||  [Worker clientId=connect-1, groupId=1] Discovered group coordinator 192.168.32.4:9092 (id: 2147483646 rack: null)   [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
dbz_1 | 2019-02-15 14:39:17,515 INFO   ||  [Worker clientId=connect-1, groupId=1] (Re-)joining group   [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
dbz_1 | 2019-02-15 14:39:17,740 INFO   ||  [Worker clientId=connect-1, groupId=1] Successfully joined group with generation 1   [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
dbz_1 | 2019-02-15 14:39:17,743 INFO   ||  Joined group and got assignment: Assignment{error=0, leader='connect-1-e557cb74-d5f0-44c7-ad25-994377fc2820', leaderUrl='http://192.168.32.5:8083/', offset=-1, connectorIds=[], taskIds=[]}   [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
dbz_1 | 2019-02-15 14:39:17,745 INFO   ||  Starting connectors and tasks using config offset -1   [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
dbz_1 | 2019-02-15 14:39:17,745 INFO   ||  Finished starting connectors and tasks   [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
dbz_1 | 2019-02-15 14:43:22,292 WARN   ||  The connection password is empty   [io.debezium.connector.postgresql.PostgresConnector]
dbz_1 | 2019-02-15 14:43:22,478 INFO   ||  Successfully tested connection for jdbc:postgresql://db:5432/test with user 'postgres'   [io.debezium.connector.postgresql.PostgresConnector]
dbz_1 | 2019-02-15 14:43:22,513 INFO   ||  Cluster ID: h4azyV1LQEGG9E-J95Z37g   [org.apache.kafka.clients.Metadata]
dbz_1 | 2019-02-15 14:43:22,588 INFO   ||  Connector connector config updated   [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
dbz_1 | 2019-02-15 14:43:23,087 INFO   ||  Rebalance started   [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
dbz_1 | 2019-02-15 14:43:23,088 INFO   ||  Finished stopping tasks in preparation for rebalance   [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
dbz_1 | 2019-02-15 14:43:23,088 INFO   ||  [Worker clientId=connect-1, groupId=1] (Re-)joining group   [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
dbz_1 | 2019-02-15 14:43:23,119 INFO   ||  [Worker clientId=connect-1, groupId=1] Successfully joined group with generation 2   [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
dbz_1 | 2019-02-15 14:43:23,120 INFO   ||  Joined group and got assignment: Assignment{error=0, leader='connect-1-e557cb74-d5f0-44c7-ad25-994377fc2820', leaderUrl='http://192.168.32.5:8083/', offset=1, connectorIds=[connector], taskIds=[]}   [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
dbz_1 | 2019-02-15 14:43:23,121 INFO   ||  Starting connectors and tasks using config offset 1   [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
dbz_1 | 2019-02-15 14:43:23,122 INFO   ||  Starting connector connector   [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
dbz_1 | 2019-02-15 14:43:23,124 INFO   ||  ConnectorConfig values:
dbz_1 |     config.action.reload = RESTART
dbz_1 |     connector.class = io.debezium.connector.postgresql.PostgresConnector
dbz_1 |     errors.log.enable = false
dbz_1 |     errors.log.include.messages = false
dbz_1 |     errors.retry.delay.max.ms = 60000
dbz_1 |     errors.retry.timeout = 0
dbz_1 |     errors.tolerance = none
dbz_1 |     header.converter = null
dbz_1 |     key.converter = null
dbz_1 |     name = connector
dbz_1 |     tasks.max = 1
dbz_1 |     transforms = []
dbz_1 |     value.converter = null
dbz_1 |  [org.apache.kafka.connect.runtime.ConnectorConfig]
dbz_1 | 2019-02-15 14:43:23,126 INFO   ||  EnrichedConnectorConfig values:
dbz_1 |     config.action.reload = RESTART
dbz_1 |     connector.class = io.debezium.connector.postgresql.PostgresConnector
dbz_1 |     errors.log.enable = false
dbz_1 |     errors.log.include.messages = false
dbz_1 |     errors.retry.delay.max.ms = 60000
dbz_1 |     errors.retry.timeout = 0
dbz_1 |     errors.tolerance = none
dbz_1 |     header.converter = null
dbz_1 |     key.converter = null
dbz_1 |     name = connector
dbz_1 |     tasks.max = 1
dbz_1 |     transforms = []
dbz_1 |     value.converter = null
dbz_1 |  [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
dbz_1 | 2019-02-15 14:43:23,126 INFO   ||  Creating connector connector of type io.debezium.connector.postgresql.PostgresConnector   [org.apache.kafka.connect.runtime.Worker]
dbz_1 | 2019-02-15 14:43:23,146 INFO   ||  Instantiated connector connector with version 0.9.1.Final of type class io.debezium.connector.postgresql.PostgresConnector   [org.apache.kafka.connect.runtime.Worker]
dbz_1 | 2019-02-15 14:43:23,176 INFO   ||  Cluster ID: h4azyV1LQEGG9E-J95Z37g   [org.apache.kafka.clients.Metadata]
dbz_1 | 2019-02-15 14:43:23,182 INFO   ||  Finished creating connector connector   [org.apache.kafka.connect.runtime.Worker]
dbz_1 | 2019-02-15 14:43:23,187 INFO   ||  SourceConnectorConfig values:
dbz_1 |     config.action.reload = RESTART
dbz_1 |     connector.class = io.debezium.connector.postgresql.PostgresConnector
dbz_1 |     errors.log.enable = false
dbz_1 |     errors.log.include.messages = false
dbz_1 |     errors.retry.delay.max.ms = 60000
dbz_1 |     errors.retry.timeout = 0
dbz_1 |     errors.tolerance = none
dbz_1 |     header.converter = null
dbz_1 |     key.converter = null
dbz_1 |     name = connector
dbz_1 |     tasks.max = 1
dbz_1 |     transforms = []
dbz_1 |     value.converter = null
dbz_1 |  [org.apache.kafka.connect.runtime.SourceConnectorConfig]
dbz_1 | 2019-02-15 14:43:23,189 INFO   ||  EnrichedConnectorConfig values:
dbz_1 |     config.action.reload = RESTART
dbz_1 |     connector.class = io.debezium.connector.postgresql.PostgresConnector
dbz_1 |     errors.log.enable = false
dbz_1 |     errors.log.include.messages = false
dbz_1 |     errors.retry.delay.max.ms = 60000
dbz_1 |     errors.retry.timeout = 0
dbz_1 |     errors.tolerance = none
dbz_1 |     header.converter = null
dbz_1 |     key.converter = null
dbz_1 |     name = connector
dbz_1 |     tasks.max = 1
dbz_1 |     transforms = []
dbz_1 |     value.converter = null
dbz_1 |  [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
dbz_1 | 2019-02-15 14:43:23,259 INFO   ||  192.168.32.1 - - [15/Feb/2019:14:43:21 +0000] "POST /connectors HTTP/1.1" 201 402 1308   [org.apache.kafka.connect.runtime.rest.RestServer]
dbz_1 | 2019-02-15 14:43:23,610 INFO   ||  Tasks [connector-0] configs updated   [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
dbz_1 | 2019-02-15 14:43:23,611 INFO   ||  Finished starting connectors and tasks   [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
dbz_1 | 2019-02-15 14:43:23,612 INFO   ||  Rebalance started   [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
dbz_1 | 2019-02-15 14:43:23,613 INFO   ||  Stopping connector connector   [org.apache.kafka.connect.runtime.Worker]
dbz_1 | 2019-02-15 14:43:23,624 INFO   ||  Stopped connector connector   [org.apache.kafka.connect.runtime.Worker]
dbz_1 | 2019-02-15 14:43:23,624 INFO   ||  Finished stopping tasks in preparation for rebalance   [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
dbz_1 | 2019-02-15 14:43:23,625 INFO   ||  [Worker clientId=connect-1, groupId=1] (Re-)joining group   [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
dbz_1 | 2019-02-15 14:43:23,633 INFO   ||  [Worker clientId=connect-1, groupId=1] Successfully joined group with generation 3   [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
dbz_1 | 2019-02-15 14:43:23,634 INFO   ||  Joined group and got assignment: Assignment{error=0, leader='connect-1-e557cb74-d5f0-44c7-ad25-994377fc2820', leaderUrl='http://192.168.32.5:8083/', offset=3, connectorIds=[connector], taskIds=[connector-0]}   [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
dbz_1 | 2019-02-15 14:43:23,634 INFO   ||  Starting connectors and tasks using config offset 3   [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
dbz_1 | 2019-02-15 14:43:23,638 INFO   ||  Starting connector connector   [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
dbz_1 | 2019-02-15 14:43:23,639 INFO   ||  ConnectorConfig values:
dbz_1 |     config.action.reload = RESTART
dbz_1 |     connector.class = io.debezium.connector.postgresql.PostgresConnector
dbz_1 |     errors.log.enable = false
dbz_1 |     errors.log.include.messages = false
dbz_1 |     errors.retry.delay.max.ms = 60000
dbz_1 |     errors.retry.timeout = 0
dbz_1 |     errors.tolerance = none
dbz_1 |     header.converter = null
dbz_1 |     key.converter = null
dbz_1 |     name = connector
dbz_1 |     tasks.max = 1
dbz_1 |     transforms = []
dbz_1 |     value.converter = null
dbz_1 |  [org.apache.kafka.connect.runtime.ConnectorConfig]
dbz_1 | 2019-02-15 14:43:23,640 INFO   ||  Starting task connector-0   [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
dbz_1 | 2019-02-15 14:43:23,639 INFO   ||  EnrichedConnectorConfig values:
dbz_1 |     config.action.reload = RESTART
dbz_1 |     connector.class = io.debezium.connector.postgresql.PostgresConnector
dbz_1 |     errors.log.enable = false
dbz_1 |     errors.log.include.messages = false
dbz_1 |     errors.retry.delay.max.ms = 60000
dbz_1 |     errors.retry.timeout = 0
dbz_1 |     errors.tolerance = none
dbz_1 |     header.converter = null
dbz_1 |     key.converter = null
dbz_1 |     name = connector
dbz_1 |     tasks.max = 1
dbz_1 |     transforms = []
dbz_1 |     value.converter = null
dbz_1 |  [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
dbz_1 | 2019-02-15 14:43:23,641 INFO   ||  Creating connector connector of type io.debezium.connector.postgresql.PostgresConnector   [org.apache.kafka.connect.runtime.Worker]
dbz_1 | 2019-02-15 14:43:23,642 INFO   ||  Instantiated connector connector with version 0.9.1.Final of type class io.debezium.connector.postgresql.PostgresConnector   [org.apache.kafka.connect.runtime.Worker]
dbz_1 | 2019-02-15 14:43:23,643 INFO   ||  Creating task connector-0   [org.apache.kafka.connect.runtime.Worker]
dbz_1 | 2019-02-15 14:43:23,645 INFO   ||  ConnectorConfig values:
dbz_1 |     config.action.reload = RESTART
dbz_1 |     connector.class = io.debezium.connector.postgresql.PostgresConnector
dbz_1 |     errors.log.enable = false
dbz_1 |     errors.log.include.messages = false
dbz_1 |     errors.retry.delay.max.ms = 60000
dbz_1 |     errors.retry.timeout = 0
dbz_1 |     errors.tolerance = none
dbz_1 |     header.converter = null
dbz_1 |     key.converter = null
dbz_1 |     name = connector
dbz_1 |     tasks.max = 1
dbz_1 |     transforms = []
dbz_1 |     value.converter = null
dbz_1 |  [org.apache.kafka.connect.runtime.ConnectorConfig]
dbz_1 | 2019-02-15 14:43:23,645 INFO   ||  EnrichedConnectorConfig values:
dbz_1 |     config.action.reload = RESTART
dbz_1 |     connector.class = io.debezium.connector.postgresql.PostgresConnector
dbz_1 |     errors.log.enable = false
dbz_1 |     errors.log.include.messages = false
dbz_1 |     errors.retry.delay.max.ms = 60000
dbz_1 |     errors.retry.timeout = 0
dbz_1 |     errors.tolerance = none
dbz_1 |     header.converter = null
dbz_1 |     key.converter = null
dbz_1 |     name = connector
dbz_1 |     tasks.max = 1
dbz_1 |     transforms = []
dbz_1 |     value.converter = null
dbz_1 |  [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
dbz_1 | 2019-02-15 14:43:23,649 INFO   ||  TaskConfig values:
dbz_1 |     task.class = class io.debezium.connector.postgresql.PostgresConnectorTask
dbz_1 |  [org.apache.kafka.connect.runtime.TaskConfig]
dbz_1 | 2019-02-15 14:43:23,649 INFO   ||  Instantiated task connector-0 with version 0.9.1.Final of type io.debezium.connector.postgresql.PostgresConnectorTask   [org.apache.kafka.connect.runtime.Worker]
dbz_1 | 2019-02-15 14:43:23,650 INFO   ||  Finished creating connector connector   [org.apache.kafka.connect.runtime.Worker]
dbz_1 | 2019-02-15 14:43:23,650 INFO   ||  JsonConverterConfig values:
dbz_1 |     converter.type = key
dbz_1 |     schemas.cache.size = 1000
dbz_1 |     schemas.enable = true
dbz_1 |  [org.apache.kafka.connect.json.JsonConverterConfig]
dbz_1 | 2019-02-15 14:43:23,660 INFO   ||  Set up the key converter class org.apache.kafka.connect.json.JsonConverter for task connector-0 using the worker config   [org.apache.kafka.connect.runtime.Worker]
dbz_1 | 2019-02-15 14:43:23,661 INFO   ||  JsonConverterConfig values:
dbz_1 |     converter.type = value
dbz_1 |     schemas.cache.size = 1000
dbz_1 |     schemas.enable = true
dbz_1 |  [org.apache.kafka.connect.json.JsonConverterConfig]
dbz_1 | 2019-02-15 14:43:23,661 INFO   ||  Set up the value converter class org.apache.kafka.connect.json.JsonConverter for task connector-0 using the worker config   [org.apache.kafka.connect.runtime.Worker]
dbz_1 | 2019-02-15 14:43:23,661 INFO   ||  Set up the header converter class org.apache.kafka.connect.storage.SimpleHeaderConverter for task connector-0 using the worker config   [org.apache.kafka.connect.runtime.Worker]
dbz_1 | 2019-02-15 14:43:23,662 INFO   ||  SourceConnectorConfig values:
dbz_1 |     config.action.reload = RESTART
dbz_1 |     connector.class = io.debezium.connector.postgresql.PostgresConnector
dbz_1 |     errors.log.enable = false
dbz_1 |     errors.log.include.messages = false
dbz_1 |     errors.retry.delay.max.ms = 60000
dbz_1 |     errors.retry.timeout = 0
dbz_1 |     errors.tolerance = none
dbz_1 |     header.converter = null
dbz_1 |     key.converter = null
dbz_1 |     name = connector
dbz_1 |     tasks.max = 1
dbz_1 |     transforms = []
dbz_1 |     value.converter = null
dbz_1 |  [org.apache.kafka.connect.runtime.SourceConnectorConfig]
dbz_1 | 2019-02-15 14:43:23,663 INFO   ||  EnrichedConnectorConfig values:
dbz_1 |     config.action.reload = RESTART
dbz_1 |     connector.class = io.debezium.connector.postgresql.PostgresConnector
dbz_1 |     errors.log.enable = false
dbz_1 |     errors.log.include.messages = false
dbz_1 |     errors.retry.delay.max.ms = 60000
dbz_1 |     errors.retry.timeout = 0
dbz_1 |     errors.tolerance = none
dbz_1 |     header.converter = null
dbz_1 |     key.converter = null
dbz_1 |     name = connector
dbz_1 |     tasks.max = 1
dbz_1 |     transforms = []
dbz_1 |     value.converter = null
dbz_1 |  [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
dbz_1 | 2019-02-15 14:43:23,672 INFO   ||  ProducerConfig values:
dbz_1 |     acks = all
dbz_1 |     batch.size = 16384
dbz_1 |     bootstrap.servers = [kafka:9092]
dbz_1 |     buffer.memory = 33554432
dbz_1 |     client.dns.lookup = default
dbz_1 |     client.id =
dbz_1 |     compression.type = none
dbz_1 |     connections.max.idle.ms = 540000
dbz_1 |     delivery.timeout.ms = 2147483647
dbz_1 |     enable.idempotence = false
dbz_1 |     interceptor.classes = []
dbz_1 |     key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
dbz_1 |     linger.ms = 0
dbz_1 |     max.block.ms = 9223372036854775807
dbz_1 |     max.in.flight.requests.per.connection = 1
dbz_1 |     max.request.size = 1048576
dbz_1 |     metadata.max.age.ms = 300000
dbz_1 |     metric.reporters = []
dbz_1 |     metrics.num.samples = 2
dbz_1 |     metrics.recording.level = INFO
dbz_1 |     metrics.sample.window.ms = 30000
dbz_1 |     partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
dbz_1 |     receive.buffer.bytes = 32768
dbz_1 |     reconnect.backoff.max.ms = 1000
dbz_1 |     reconnect.backoff.ms = 50
dbz_1 |     request.timeout.ms = 2147483647
dbz_1 |     retries = 2147483647
dbz_1 |     retry.backoff.ms = 100
dbz_1 |     sasl.client.callback.handler.class = null
dbz_1 |     sasl.jaas.config = null
dbz_1 |     sasl.kerberos.kinit.cmd = /usr/bin/kinit
dbz_1 |     sasl.kerberos.min.time.before.relogin = 60000
dbz_1 |     sasl.kerberos.service.name = null
dbz_1 |     sasl.kerberos.ticket.renew.jitter = 0.05
dbz_1 |     sasl.kerberos.ticket.renew.window.factor = 0.8
dbz_1 |     sasl.login.callback.handler.class = null
dbz_1 |     sasl.login.class = null
dbz_1 |     sasl.login.refresh.buffer.seconds = 300
dbz_1 |     sasl.login.refresh.min.period.seconds = 60
dbz_1 |     sasl.login.refresh.window.factor = 0.8
dbz_1 |     sasl.login.refresh.window.jitter = 0.05
dbz_1 |     sasl.mechanism = GSSAPI
dbz_1 |     security.protocol = PLAINTEXT
dbz_1 |     send.buffer.bytes = 131072
dbz_1 |     ssl.cipher.suites = null
dbz_1 |     ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
dbz_1 |     ssl.endpoint.identification.algorithm = https
dbz_1 |     ssl.key.password = null
dbz_1 |     ssl.keymanager.algorithm = SunX509
dbz_1 |     ssl.keystore.location = null
dbz_1 |     ssl.keystore.password = null
dbz_1 |     ssl.keystore.type = JKS
dbz_1 |     ssl.protocol = TLS
dbz_1 |     ssl.provider = null
dbz_1 |     ssl.secure.random.implementation = null
dbz_1 |     ssl.trustmanager.algorithm = PKIX
dbz_1 |     ssl.truststore.location = null
dbz_1 |     ssl.truststore.password = null
dbz_1 |     ssl.truststore.type = JKS
dbz_1 |     transaction.timeout.ms = 60000
dbz_1 |     transactional.id = null
dbz_1 |     value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
dbz_1 |  [org.apache.kafka.clients.producer.ProducerConfig]
dbz_1 | 2019-02-15 14:43:23,678 INFO   ||  Kafka version : 2.1.0   [org.apache.kafka.common.utils.AppInfoParser]
dbz_1 | 2019-02-15 14:43:23,678 INFO   ||  Kafka commitId : 809be928f1ae004e   [org.apache.kafka.common.utils.AppInfoParser]
dbz_1 | 2019-02-15 14:43:23,694 INFO   ||  Finished starting connectors and tasks   [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
dbz_1 | 2019-02-15 14:43:23,709 INFO   ||  Starting PostgresConnectorTask with configuration:   [io.debezium.connector.common.BaseSourceTask]
dbz_1 | 2019-02-15 14:43:23,716 INFO   ||     connector.class = io.debezium.connector.postgresql.PostgresConnector   [io.debezium.connector.common.BaseSourceTask]
dbz_1 | 2019-02-15 14:43:23,716 INFO   ||     database.user = postgres   [io.debezium.connector.common.BaseSourceTask]
dbz_1 | 2019-02-15 14:43:23,716 INFO   ||     database.dbname = test   [io.debezium.connector.common.BaseSourceTask]
dbz_1 | 2019-02-15 14:43:23,716 INFO   ||     task.class = io.debezium.connector.postgresql.PostgresConnectorTask   [io.debezium.connector.common.BaseSourceTask]
dbz_1 | 2019-02-15 14:43:23,717 INFO   ||     tasks.max = 1   [io.debezium.connector.common.BaseSourceTask]
dbz_1 | 2019-02-15 14:43:23,717 INFO   ||     database.hostname = db   [io.debezium.connector.common.BaseSourceTask]
dbz_1 | 2019-02-15 14:43:23,717 INFO   ||     database.history.kafka.bootstrap.servers = kafka:9092   [io.debezium.connector.common.BaseSourceTask]
dbz_1 | 2019-02-15 14:43:23,717 INFO   ||     name = connector   [io.debezium.connector.common.BaseSourceTask]
dbz_1 | 2019-02-15 14:43:23,717 INFO   ||     database.server.name = dbserver1   [io.debezium.connector.common.BaseSourceTask]
dbz_1 | 2019-02-15 14:43:23,717 INFO   ||     plugin.name = wal2json   [io.debezium.connector.common.BaseSourceTask]
dbz_1 | 2019-02-15 14:43:23,717 INFO   ||     database.port = 5432   [io.debezium.connector.common.BaseSourceTask]
dbz_1 | 2019-02-15 14:43:23,717 INFO   ||     snapshot.mode = initial   [io.debezium.connector.common.BaseSourceTask]
dbz_1 | 2019-02-15 14:43:23,836 INFO   Postgres|dbserver1|postgres-connector-task  user 'postgres' connected to database 'test' on PostgreSQL 10.6 (Debian 10.6-1.pgdg90+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit with roles:
dbz_1 |     role 'pg_read_all_settings' [superuser: false, replication: false, inherit: true, create role: false, create db: false, can log in: false]
dbz_1 |     role 'pg_stat_scan_tables' [superuser: false, replication: false, inherit: true, create role: false, create db: false, can log in: false]
dbz_1 |     role 'pg_monitor' [superuser: false, replication: false, inherit: true, create role: false, create db: false, can log in: false]
dbz_1 |     role 'pg_read_all_stats' [superuser: false, replication: false, inherit: true, create role: false, create db: false, can log in: false]
dbz_1 |     role 'pg_signal_backend' [superuser: false, replication: false, inherit: true, create role: false, create db: false, can log in: false]
dbz_1 |     role 'postgres' [superuser: true, replication: true, inherit: true, create role: true, create db: true, can log in: true]   [io.debezium.connector.postgresql.PostgresConnectorTask]
dbz_1 | 2019-02-15 14:43:23,836 INFO   Postgres|dbserver1|postgres-connector-task  No previous offset found   [io.debezium.connector.postgresql.PostgresConnectorTask]
dbz_1 | 2019-02-15 14:43:23,836 INFO   Postgres|dbserver1|postgres-connector-task  Taking a new snapshot of the DB and streaming logical changes once the snapshot is finished...   [io.debezium.connector.postgresql.PostgresConnectorTask]
dbz_1 | 2019-02-15 14:43:23,840 INFO   Postgres|dbserver1|postgres-connector-task  Requested thread factory for connector PostgresConnector, id = dbserver1 named = records-snapshot-producer   [io.debezium.util.Threads]
dbz_1 | 2019-02-15 14:43:23,842 INFO   Postgres|dbserver1|postgres-connector-task  Requested thread factory for connector PostgresConnector, id = dbserver1 named = records-stream-producer   [io.debezium.util.Threads]
dbz_1 | 2019-02-15 14:43:23,874 INFO   Postgres|dbserver1|postgres-connector-task  Obtained valid replication slot ReplicationSlot [active=false, latestFlushedLSN=null]   [io.debezium.connector.postgresql.connection.PostgresConnection]
dbz_1 | 2019-02-15 14:43:23,907 INFO   Postgres|dbserver1|records-snapshot-producer  Creating thread debezium-postgresconnector-dbserver1-records-snapshot-producer   [io.debezium.util.Threads]
dbz_1 | 2019-02-15 14:43:23,920 INFO   Postgres|dbserver1|records-snapshot-producer  Step 0: disabling autocommit   [io.debezium.connector.postgresql.RecordsSnapshotProducer]
dbz_1 | 2019-02-15 14:43:23,920 INFO   Postgres|dbserver1|records-snapshot-producer  Step 1: starting transaction and refreshing the DB schemas for database 'test' and user 'postgres'   [io.debezium.connector.postgresql.RecordsSnapshotProducer]
dbz_1 | 2019-02-15 14:43:24,032 INFO   Postgres|dbserver1|records-snapshot-producer  Step 2: locking each of the database tables, waiting a maximum of '10.0' seconds for each lock   [io.debezium.connector.postgresql.RecordsSnapshotProducer]
dbz_1 | 2019-02-15 14:43:24,071 INFO   Postgres|dbserver1|records-snapshot-producer  read xlogStart at '0/1771C48' from transaction '560'   [io.debezium.connector.postgresql.RecordsSnapshotProducer]
dbz_1 | 2019-02-15 14:43:24,071 INFO   Postgres|dbserver1|records-snapshot-producer  Step 3: reading and exporting the contents of each table   [io.debezium.connector.postgresql.RecordsSnapshotProducer]
dbz_1 | 2019-02-15 14:43:24,071 INFO   Postgres|dbserver1|records-snapshot-producer  exporting data from table 'public.sample'   [io.debezium.connector.postgresql.RecordsSnapshotProducer]
dbz_1 | 2019-02-15 14:43:24,072 INFO   Postgres|dbserver1|records-snapshot-producer  For table 'public.sample' using select statement: 'SELECT * FROM "public"."sample"'   [io.debezium.connector.postgresql.RecordsSnapshotProducer]
dbz_1 | 2019-02-15 14:43:24,099 INFO   Postgres|dbserver1|records-snapshot-producer  finished exporting '2' records for 'public.sample'; total duration '00:00:00.026'   [io.debezium.connector.postgresql.RecordsSnapshotProducer]
dbz_1 | 2019-02-15 14:43:24,099 INFO   Postgres|dbserver1|records-snapshot-producer  Step 4: committing transaction '560'   [io.debezium.connector.postgresql.RecordsSnapshotProducer]
dbz_1 | 2019-02-15 14:43:24,100 INFO   Postgres|dbserver1|records-snapshot-producer  Step 5: sending the last snapshot record   [io.debezium.connector.postgresql.RecordsSnapshotProducer]
dbz_1 | 2019-02-15 14:43:24,101 INFO   Postgres|dbserver1|records-snapshot-producer  Snapshot completed in '00:00:00.193'   [io.debezium.connector.postgresql.RecordsSnapshotProducer]
dbz_1 | 2019-02-15 14:43:24,104 INFO   Postgres|dbserver1|records-snapshot-producer  Snapshot finished, continuing streaming changes from 0/1771C48   [io.debezium.connector.postgresql.RecordsSnapshotProducer]
dbz_1 | 2019-02-15 14:43:24,209 INFO   Postgres|dbserver1|records-stream-producer  REPLICA IDENTITY for 'public.sample' is 'DEFAULT'; UPDATE and DELETE events will contain previous values only for PK columns   [io.debezium.connector.postgresql.PostgresSchema]
dbz_1 | 2019-02-15 14:43:24,215 INFO   Postgres|dbserver1|records-stream-producer  Creating thread debezium-postgresconnector-dbserver1-records-stream-producer   [io.debezium.util.Threads]
dbz_1 | 2019-02-15 14:43:24,229 INFO   ||  WorkerSourceTask{id=connector-0} Source task finished initialization and start   [org.apache.kafka.connect.runtime.WorkerSourceTask]
dbz_1 | 2019-02-15 14:43:24,275 WARN   ||  [Producer clientId=producer-4] Error while fetching metadata with correlation id 1 : {dbserver1.public.sample=LEADER_NOT_AVAILABLE}   [org.apache.kafka.clients.NetworkClient]
dbz_1 | 2019-02-15 14:43:24,277 INFO   ||  Cluster ID: h4azyV1LQEGG9E-J95Z37g   [org.apache.kafka.clients.Metadata]
dbz_1 | 2019-02-15 14:44:15,508 INFO   ||  Cluster ID: h4azyV1LQEGG9E-J95Z37g   [org.apache.kafka.clients.Metadata]
dbz_1 | 2019-02-15 14:44:23,619 INFO   ||  WorkerSourceTask{id=connector-0} Committing offsets   [org.apache.kafka.connect.runtime.WorkerSourceTask]
dbz_1 | 2019-02-15 14:44:23,623 INFO   ||  WorkerSourceTask{id=connector-0} flushing 0 outstanding messages for offset commit   [org.apache.kafka.connect.runtime.WorkerSourceTask]
dbz_1 | 2019-02-15 14:44:23,652 INFO   ||  WorkerSourceTask{id=connector-0} Finished commitOffsets successfully in 31 ms   [org.apache.kafka.connect.runtime.WorkerSourceTask]
dbz_1 | 2019-02-15 14:46:24,153 INFO   ||  WorkerSourceTask{id=connector-0} Committing offsets   [org.apache.kafka.connect.runtime.WorkerSourceTask]
dbz_1 | 2019-02-15 14:46:24,154 INFO   ||  WorkerSourceTask{id=connector-0} flushing 0 outstanding messages for offset commit   [org.apache.kafka.connect.runtime.WorkerSourceTask]
dbz_1 | 2019-02-15 14:49:34,289 INFO   Postgres|dbserver1|records-stream-producer  Different column count 1 present in the server message as schema in memory contains 2; refreshing table schema   [io.debezium.connector.postgresql.RecordsStreamProducer]
dbz_1 | 2019-02-15 14:50:11,477 INFO   ||  WorkerSourceTask{id=connector-0} Committing offsets   [org.apache.kafka.connect.runtime.WorkerSourceTask]
dbz_1 | 2019-02-15 14:50:11,477 INFO   ||  WorkerSourceTask{id=connector-0} flushing 0 outstanding messages for offset commit   [org.apache.kafka.connect.runtime.WorkerSourceTask]
dbz_1 | 2019-02-15 14:50:11,489 INFO   ||  WorkerSourceTask{id=connector-0} Finished commitOffsets successfully in 11 ms   [org.apache.kafka.connect.runtime.WorkerSourceTask]
dbz_1 | 2019-02-15 14:50:29,114 ERROR  Postgres|dbserver1|records-stream-producer  unexpected exception while streaming logical changes   [io.debezium.connector.postgresql.RecordsStreamProducer]
dbz_1 | java.lang.NullPointerException
dbz_1 |     at io.debezium.connector.postgresql.RecordsStreamProducer.columnValues(RecordsStreamProducer.java:455)
dbz_1 |     at io.debezium.connector.postgresql.RecordsStreamProducer.process(RecordsStreamProducer.java:263)
dbz_1 |     at io.debezium.connector.postgresql.RecordsStreamProducer.lambda$streamChanges$1(RecordsStreamProducer.java:133)
dbz_1 |     at io.debezium.connector.postgresql.connection.wal2json.NonStreamingWal2JsonMessageDecoder.processMessage(NonStreamingWal2JsonMessageDecoder.java:62)
dbz_1 |     at io.debezium.connector.postgresql.connection.PostgresReplicationConnection$1.deserializeMessages(PostgresReplicationConnection.java:265)
dbz_1 |     at io.debezium.connector.postgresql.connection.PostgresReplicationConnection$1.read(PostgresReplicationConnection.java:250)
dbz_1 |     at io.debezium.connector.postgresql.RecordsStreamProducer.streamChanges(RecordsStreamProducer.java:133)
dbz_1 |     at io.debezium.connector.postgresql.RecordsStreamProducer.lambda$start$0(RecordsStreamProducer.java:119)
dbz_1 |     at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) dbz_1 | at java.util.concurrent.FutureTask.run(FutureTask.java:266) dbz_1 | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) dbz_1 | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) dbz_1 | at java.lang.Thread.run(Thread.java:748) dbz_1 | 2019-02-15 14:50:29,405 INFO || WorkerSourceTask{id=connector-0} Committing offsets [org.apache.kafka.connect.runtime.WorkerSourceTask] dbz_1 | 2019-02-15 14:50:29,405 INFO || WorkerSourceTask{id=connector-0} flushing 0 outstanding messages for offset commit [org.apache.kafka.connect.runtime.WorkerSourceTask] dbz_1 | 2019-02-15 14:50:29,406 ERROR || WorkerSourceTask{id=connector-0} Task threw an uncaught and unrecoverable exception [org.apache.kafka.connect.runtime.WorkerTask] dbz_1 | org.apache.kafka.connect.errors.ConnectException: An exception ocurred in the change event producer. This connector will be stopped. 
dbz_1 | at io.debezium.connector.base.ChangeEventQueue.throwProducerFailureIfPresent(ChangeEventQueue.java:170) dbz_1 | at io.debezium.connector.base.ChangeEventQueue.poll(ChangeEventQueue.java:151) dbz_1 | at io.debezium.connector.postgresql.PostgresConnectorTask.poll(PostgresConnectorTask.java:156) dbz_1 | at org.apache.kafka.connect.runtime.WorkerSourceTask.poll(WorkerSourceTask.java:244) dbz_1 | at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:220) dbz_1 | at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175) dbz_1 | at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219) dbz_1 | at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) dbz_1 | at java.util.concurrent.FutureTask.run(FutureTask.java:266) dbz_1 | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) dbz_1 | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) dbz_1 | at java.lang.Thread.run(Thread.java:748) dbz_1 | Caused by: java.lang.NullPointerException dbz_1 | at io.debezium.connector.postgresql.RecordsStreamProducer.columnValues(RecordsStreamProducer.java:455) dbz_1 | at io.debezium.connector.postgresql.RecordsStreamProducer.process(RecordsStreamProducer.java:263) dbz_1 | at io.debezium.connector.postgresql.RecordsStreamProducer.lambda$streamChanges$1(RecordsStreamProducer.java:133) dbz_1 | at io.debezium.connector.postgresql.connection.wal2json.NonStreamingWal2JsonMessageDecoder.processMessage(NonStreamingWal2JsonMessageDecoder.java:62) dbz_1 | at io.debezium.connector.postgresql.connection.PostgresReplicationConnection$1.deserializeMessages(PostgresReplicationConnection.java:265) dbz_1 | at io.debezium.connector.postgresql.connection.PostgresReplicationConnection$1.read(PostgresReplicationConnection.java:250) dbz_1 | at io.debezium.connector.postgresql.RecordsStreamProducer.streamChanges(RecordsStreamProducer.java:133) dbz_1 | at 
io.debezium.connector.postgresql.RecordsStreamProducer.lambda$start$0(RecordsStreamProducer.java:119) dbz_1 | ... 5 more dbz_1 | 2019-02-15 14:50:29,407 ERROR || WorkerSourceTask{id=connector-0} Task is being killed and will not recover until manually restarted [org.apache.kafka.connect.runtime.WorkerTask] dbz_1 | 2019-02-15 14:50:29,409 INFO || [Producer clientId=producer-4] Closing the Kafka producer with timeoutMillis = 30000 ms. [org.apache.kafka.clients.producer.KafkaProducer]
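The final ERROR line says the task "will not recover until manually restarted". Once the underlying cause has been dealt with, the failed task can be restarted through Kafka Connect's REST API (`POST /connectors/{name}/tasks/{taskId}/restart`). A minimal sketch, assuming the worker is reachable at the REST_HOST_NAME/REST_PORT from the startup log and that the connector is registered under the name `connector` (the `WorkerSourceTask{id=connector-0}` entries suggest this, but the actual name depends on how the connector was registered):

```python
import urllib.request

# Assumptions: worker address from the startup log; connector name is hypothetical.
CONNECT_URL = "http://192.168.32.5:8083"
CONNECTOR = "connector"

def task_restart_url(base: str, connector: str, task_id: int) -> str:
    """Build the Kafka Connect REST endpoint for restarting a single task."""
    return f"{base}/connectors/{connector}/tasks/{task_id}/restart"

def restart_task(base: str, connector: str, task_id: int) -> int:
    """POST to the restart endpoint; returns the HTTP status (204 on success)."""
    req = urllib.request.Request(
        task_restart_url(base, connector, task_id), method="POST"
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    # Restart task 0 of the connector; the task id matches the "-0" suffix above.
    print(task_restart_url(CONNECT_URL, CONNECTOR, 0))
```

Checking `GET /connectors/{name}/status` afterwards confirms whether the task came back to RUNNING or failed again with the same trace.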