
DBZ-7532: [debezium-operator] possibility to add multiple sources


    • Type: Feature Request
    • Resolution: Unresolved
    • Priority: Optional
    • Status: Backlog
    • Component: debezium-operator

      I'm conducting some tests with the Debezium Operator, and I find it to be a very promising project.

      I haven't found any documentation explaining the reasons behind the design of the operator's spec, but I believe it could be improved.

      Currently we only have DebeziumServer, which follows a one-to-one model: one deployment per source and destination pair. This seems quite limiting. A typical resource today looks like the sketch below.
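
      For reference, a minimal sketch of a single DebeziumServer resource as it stands today, with exactly one source and one sink per deployment. The apiVersion and field names such as source.class and sink.type follow my reading of the current debezium-operator CRD and may differ between operator versions; all connection values are placeholders.

      apiVersion: debezium.io/v1beta1          # version may differ by operator release
      kind: DebeziumServer
      metadata:
        name: my-debezium
      spec:
        source:
          # exactly one source per DebeziumServer deployment
          class: io.debezium.connector.postgresql.PostgresConnector
          config:
            database.hostname: postgres         # placeholder
            database.port: 5432
            database.user: debezium             # placeholder
            database.password: secret           # placeholder
            database.dbname: inventory          # placeholder
            topic.prefix: inventory
            offset.storage.file.filename: /debezium/data/offsets.dat
        sink:
          # exactly one sink (destination) per DebeziumServer deployment
          type: kafka
          config:
            producer.bootstrap.servers: kafka:9092   # placeholder
            producer.key.serializer: org.apache.kafka.common.serialization.StringSerializer
            producer.value.serializer: org.apache.kafka.common.serialization.StringSerializer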

      Strimzi takes a different approach that suits a much wider range of scenarios. In Strimzi, you create a KafkaConnect cluster and then add connectors through KafkaConnector resources, so one KafkaConnect can host N connectors.

      Here's an example from Strimzi:
       

      apiVersion: kafka.strimzi.io/v1beta2
      kind: KafkaConnect
      metadata:
        name: strimzi-debezium-cluster
        annotations:
          # required so the operator manages connectors via KafkaConnector resources
          strimzi.io/use-connector-resources: "true"
      spec:
        version: 3.6.0
        image: 000.dkr.ecr.us-east-1.amazonaws.com/custom-image-kafka-connect:0.0.1
        replicas: 1
        bootstrapServers: 000:9092
        config:
          group.id: 000
          offset.storage.topic: connect-cluster-offsets
          config.storage.topic: connect-cluster-configs
          status.storage.topic: connect-cluster-status
          config.storage.replication.factor: 2
          offset.storage.replication.factor: 2
          status.storage.replication.factor: 2
      
      ---
      
      apiVersion: kafka.strimzi.io/v1beta2
      kind: KafkaConnector
      metadata:
        name: strimzi-connector-transaction
        labels:
          # required to associate this connector with the KafkaConnect cluster above
          strimzi.io/cluster: strimzi-debezium-cluster
      spec:
        autoRestart:
          enabled: true
        class: io.debezium.connector.postgresql.PostgresConnector
        tasksMax: 1
        config:
          tasks.max: 1
          plugin.name: pgoutput
          database.user: 000
          database.dbname: 000
          database.server.name: 000
          database.hostname: 000
          database.password: 000
          database.port: 5432
          topic.prefix: 000
          snapshot.mode: never
          key.converter.schemas.enable: false
          poll.interval.ms: 150
          value.converter.schemas.enable: false
          value.converter: org.apache.kafka.connect.json.JsonConverter
          key.converter: org.apache.kafka.connect.json.JsonConverter
      

      That's it.
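
      To make the request concrete, below is a purely hypothetical sketch of how a Strimzi-like split could look for the Debezium Operator. The kinds DebeziumServerCluster and DebeziumSource, the debezium.io/cluster label, and every field shown are invented for illustration only and do not exist in the current CRDs.

      apiVersion: debezium.io/v1beta1
      kind: DebeziumServerCluster              # hypothetical kind
      metadata:
        name: my-debezium-cluster
      spec:
        sink:
          type: kafka
          config:
            producer.bootstrap.servers: kafka:9092   # placeholder

      ---

      apiVersion: debezium.io/v1beta1
      kind: DebeziumSource                     # hypothetical kind; N of these could attach to one cluster
      metadata:
        name: source-transaction
        labels:
          # hypothetical linkage, mirroring Strimzi's strimzi.io/cluster label
          debezium.io/cluster: my-debezium-cluster
      spec:
        class: io.debezium.connector.postgresql.PostgresConnector
        config:
          database.hostname: postgres          # placeholder
          database.port: 5432
          database.dbname: transactions        # placeholder
          topic.prefix: transactions

      With a split like this, the operator could aggregate every DebeziumSource that references a cluster into one deployment, the same way Strimzi's operator reconciles KafkaConnector resources against a KafkaConnect cluster.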

            Assignee: Unassigned
            Reporter: Udlei Nati (udleinati)