Debezium / DBZ-4343

Incremental Snapshot does not pick up table


      This can be reproduced using the tutorial containers and schema. Here are the steps:

      • Run the Debezium tutorial containers for ZooKeeper, Kafka, Postgres, Kafka Connect, and a console consumer:

      $ docker run -it --rm --name zookeeper -p 2181:2181 -p 2888:2888 -p 3888:3888 debezium/zookeeper:1.7

      $ docker run -it --rm --name kafka -p 9092:9092 --link zookeeper:zookeeper debezium/kafka:1.7

      $ docker run -it --rm --name postgres -p 5432:5432 -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=postgres debezium/example-postgres:1.7

      $ docker run -it --rm --name connect -p 8083:8083 -e GROUP_ID=1 -e CONFIG_STORAGE_TOPIC=my_connect_configs -e OFFSET_STORAGE_TOPIC=my_connect_offsets -e STATUS_STORAGE_TOPIC=my_connect_statuses --link zookeeper:zookeeper --link kafka:kafka --link postgres:postgres debezium/connect:1.7

      $ docker run -it --rm --name watcher --link zookeeper:zookeeper --link kafka:kafka debezium/kafka:1.7 watch-topic -a -k dbserver1.inventory.customers

      • Create the snapshot schema and table via psql postgresql://postgres:postgres@localhost:5432:

      CREATE SCHEMA debezium_signal;
      CREATE TABLE debezium_signal.snapshot (id TEXT PRIMARY KEY, type TEXT NOT NULL, data TEXT);
      SET search_path TO inventory,debezium_signal;

      • Create the connector task using curl:

      $ curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" localhost:8083/connectors/ -d '{ "name": "inventory-connector", "config": { "connector.class": "io.debezium.connector.postgresql.PostgresConnector", "tasks.max": "1", "plugin.name": "pgoutput", "database.hostname": "postgres", "database.port": "5432", "database.user": "postgres", "database.password": "postgres", "database.dbname": "postgres", "database.server.name": "dbserver1", "schema.include.list": "inventory,debezium_signal", "signal.data.collection": "debezium_signal.snapshot", "provide.transaction.metadata": "true" } }'
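      Before inserting the signal, it can help to confirm the connector registered and its task is running. This check is not part of the original report; it uses the standard Kafka Connect REST `/status` endpoint with the connector name from the request above:

      ```shell
      # Verify the connector and its task report state RUNNING
      curl -s localhost:8083/connectors/inventory-connector/status
      ```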

      • Observe the initial snapshot records printed to the console. Create a new record in the customers table and observe a new create event in the console.
      • Insert a signal record to trigger the incremental snapshot:

      INSERT INTO debezium_signal.snapshot (id,type,data) values ('01','execute-snapshot','{"data-collections": ["inventory.customers"]}');
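      As a sanity check (not part of the original report), the signal row can be confirmed from psql with a plain SELECT:

      ```
      SELECT id, type, data FROM debezium_signal.snapshot;
      ```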


      After configuring incremental snapshots and triggering a snapshot via an insert into the signal table, the snapshot starts, but the requested table is not recognized as one of the known tables and the snapshot terminates. The log output, however, indicates that the table is known, yet it is somehow not picked up. Example log output:

      ```
      2021-11-19 00:27:29,704 INFO Postgres|dbserver1|streaming Requested 'INCREMENTAL' snapshot of data collections '[inventory.customers]' [io.debezium.pipeline.signal.ExecuteSnapshot]
      2021-11-19 00:27:29,713 WARN Postgres|dbserver1|streaming Schema not found for table 'inventory.customers', known tables [inventory.spatial_ref_sys, inventory.geom, inventory.products_on_hand, inventory.customers, inventory.orders, debezium_signal.snapshot, inventory.products] [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotChangeEventSource]
      2021-11-19 00:27:29,717 INFO Postgres|dbserver1|streaming Skipping read chunk because snapshot is not running [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotChangeEventSource]
      ```
      This was first reported in Zulip Chat.

            Assignee: Jiri Pechanec (jpechane)
            Reporter: Matt Anderson (manderson202, Inactive)
            Votes: 0
            Watchers: 4
