Details

    • Type: Bug
    • Status: Closed
    • Priority: Minor
    • Resolution: Done
    • Affects Version/s: 0.7.2
    • Fix Version/s: 0.7.4
    • Component/s: mysql-connector
    • Labels:
      None
    • Environment:

      MySQL 5.6
      Debezium 0.7.2 as well as 0.7.1 tested
      Confluent Kafka, Schemaregistry & Avro-converter 4.0.0 (Kafka 1.0.0)

    • Steps to Reproduce:

      Two cases (one failure, one correct), both using the Debezium MySQL connector:

      (1) Correct example:
      Create table with "CREATE TABLE a (x decimal);" (NOTE: no precision/scale for decimal is given)
      Start mysql-connector
      Insert data into the table
      -> Observed behaviour: Data is correctly written to Kafka, Avro-schema is correctly created in schemaregistry

      (2) Failure case:
      Start mysql-connector
      Create table with "CREATE TABLE a (x decimal);" (NOTE: no precision/scale is given for the DECIMAL; this also fails when only a precision is given, e.g. DECIMAL(20))
      Insert data into the table
      -> Observed behaviour: DataException in org.apache.kafka.connect.data.Decimal.fromLogical(), since the scale of the value is "0" while the scale of the schema is "-1"


      Description

      Hey everyone,

      I just stumbled upon a weird quirk that caused issues with the Schema Registry and Avro converter when a DECIMAL data type is encountered in MySQL. I'm not sure whether this is a bug or intended behaviour, but it seems very strange to me, so I've decided to at least inform you.

      It seems like there is a difference in behaviour when creating schemata for tables, depending on whether the connector "snapshots" the table or reads the DDL from the binlog.

      When creating a new table in MySQL that contains a DECIMAL column with no specified precision and scale, MySQL will default to DECIMAL(10, 0). However, if the Debezium connector is already running and recording this event, it will create a schema with a scale of "-1". This will cause an error later on when serializing the value as Avro in org.apache.kafka.connect.data.Decimal.fromLogical().
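      To make the failure mode concrete, here is a minimal sketch of the scale check that trips in this scenario. This is a simplified stand-in written for illustration, not the actual Kafka Connect source: the class and method below are hypothetical, but they mirror the behaviour of Decimal.fromLogical(), which compares the BigDecimal's scale against the scale recorded in the schema and rejects any mismatch.

      ```java
      import java.math.BigDecimal;

      public class DecimalScaleCheck {
          // Simplified stand-in for org.apache.kafka.connect.data.Decimal.fromLogical():
          // Connect records the decimal scale in the schema and rejects any value
          // whose BigDecimal scale differs from it.
          static byte[] fromLogical(int schemaScale, BigDecimal value) {
              if (value.scale() != schemaScale) {
                  throw new RuntimeException("BigDecimal has mismatching scale for given Decimal schema: "
                          + "schema scale " + schemaScale + ", value scale " + value.scale());
              }
              return value.unscaledValue().toByteArray();
          }

          public static void main(String[] args) {
              // A value read from a MySQL DECIMAL column with default (10, 0): scale 0.
              BigDecimal v = new BigDecimal("42");

              // Snapshot path: schema created with scale 0 -> serialization succeeds.
              fromLogical(0, v);
              System.out.println("schema scale 0: ok");

              // Binlog-DDL path: schema created with scale -1 -> mismatch, throws.
              try {
                  fromLogical(-1, v);
              } catch (RuntimeException e) {
                  System.out.println("schema scale -1: " + e.getMessage());
              }
          }
      }
      ```

      The same BigDecimal value succeeds or fails purely depending on which scale ended up in the schema, which is why only the binlog-created schema (scale -1) breaks.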

      This does not happen if Debezium reads the table definition during a snapshot since it then correctly reads the column as having a scale/precision of (10, 0).

      Now, this issue can easily be circumvented by always specifying precision and scale, or by forcing a new snapshot after creating the table. However, it seems weird that Debezium would use a different scale (-1) than the MySQL default (0) if no scale is explicitly given.

      Cheers
      Felix


              People

              • Assignee:
                jpechanec Jiri Pechanec
              • Reporter:
                mrtrustworthy Felix Eggert
              • Votes:
                0
              • Watchers:
                3
