Import and process data from your Kafka environment using your existing transform map configurations.

Before you begin

Formatting Kafka message payloads

Use simple JSON payloads for Kafka messages. Each Kafka message should contain a single flat JSON object. Complex JSON payloads, such as those with lists or nested objects, aren't supported.

Example of simple, valid JSON input, using a flat map of key-value pairs:
{"key1": "value1", "key2": "value2"}
Examples of complex, invalid JSON input. The first uses a list; the second uses nested objects:
[{"key1": "value1", "key2": "value2"}, {"key1": "value3", "key2": "value4"}]
{"key1": "value1", "key2": {"key3": "value3", "key4": "value4"}}
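The flatness requirement can be checked before a message is produced. The following Python sketch is illustrative only (the helper name and the use of the standard json module are not part of the product); it accepts a single flat JSON object and rejects payloads containing lists or nested objects:

```python
import json

def is_flat_json_object(payload: str) -> bool:
    """Return True if the payload is a single flat JSON object:
    a top-level object whose values contain no lists or nested objects."""
    try:
        obj = json.loads(payload)
    except ValueError:
        return False
    if not isinstance(obj, dict):
        return False  # top-level lists (or scalars) aren't supported
    return not any(isinstance(v, (dict, list)) for v in obj.values())

# Valid: a flat map of key-value pairs.
print(is_flat_json_object('{"key1": "value1", "key2": "value2"}'))  # True
# Invalid: a list of objects.
print(is_flat_json_object('[{"key1": "value1"}]'))                  # False
# Invalid: a nested object.
print(is_flat_json_object('{"key1": {"key3": "value3"}}'))          # False
```

A check like this on the producer side avoids sending messages the consumer can't process.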

About this task

To configure a consumer, you need to create two records.
  1. The consumer record, which specifies how to import and process data.
  2. A record for the Kafka stream, which defines the stream of data to your consumer.
This task covers creating the consumer record. For instructions on creating a Kafka stream, see Create a Kafka stream.

Procedure

  1. Navigate to All > IntegrationHub > Consumers > Transform Map Consumer.
  2. Select New.
  3. In the form, fill in the fields.
    Table 1. Transform Map Consumer
    Name: Name of the Transform Map consumer.
    Transform Map: Name of the transform map to use to process the data.
    Delivery guarantee: Option to specify the delivery guarantee for incoming messages in case of a node failure. Select one of the following.
    • No lost but duplicates: All messages are delivered at least once. Some messages might be delivered more than once.
    • Once or not at all: A message isn't delivered more than once. Some messages might not be delivered at all.
    Serialization format: The serialization format for the message. Select one of the following.
    • Plain Text: Select this option for any plain-text messages. This is the default format.
    • Encoded: Select this option for messages in an Apache Avro format. Converting plain-text messages to an Avro format requires a schema. Select the schema registry in the Schema registry field. For more information on schemas, see Schema management in Stream Connect.
    Column mapping: Option to specify whether the message's JSON keys map data to the column name or the column label in the import set table. Select one of the following.
    • Label
    • Column name
    Synchronize Inserts: Option to guarantee that there's only one record with unique coalesce field values by synchronizing record inserts.
    Application: Application scope for the Transform Map Consumer.
    Schema registry: Registry for the selected schema. This field appears only when Serialization format is set to Encoded. Select one of the following.
    • Standalone Schema Registry
    • Confluent Schema Registry

    For the Confluent Schema Registry, if the received message's schema ID isn't in the schema table, the system imports the schema dynamically using the configured REST connection.

  4. Select Save.

What to do next

Create a Kafka stream for this consumer. After the stream is activated, you can start receiving messages from your Kafka environment.
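With the stream active, any Kafka producer can feed the consumer, as long as each message body is one flat JSON object. A minimal Python sketch of the serialization step for the default Plain Text format follows; the kafka-python client, broker address, and topic name in the commented portion are placeholder assumptions, not part of this procedure:

```python
import json

# Serialize a flat key-value map into the message body the
# Transform Map consumer expects (Plain Text serialization format).
def serialize(payload: dict) -> bytes:
    return json.dumps(payload).encode("utf-8")

# Producing the message requires a reachable broker; the client,
# broker address, and topic name below are illustrative only.
# from kafka import KafkaProducer
# producer = KafkaProducer(bootstrap_servers="localhost:9092",
#                          value_serializer=serialize)
# producer.send("example_topic", {"key1": "value1", "key2": "value2"})

print(serialize({"key1": "value1"}))
```

Each JSON key in the message is then matched against the import set table per the consumer's Column mapping setting.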