Kafka

The Kafka Node sends messages to Kafka brokers. It accepts messages of any message type and publishes each record to the Kafka server via a Kafka producer.

Configuration:

  • Topic pattern - a static string, or a pattern resolved using Message Metadata properties. For example ${deviceType}

  • Bootstrap servers - a comma-separated list of Kafka brokers.

  • Automatically retry times - number of attempts to resend a message if the connection fails.

  • Producer batch size - batch size in bytes for grouping messages sent to the same partition.

  • Time to buffer locally - maximum local buffering window duration, in milliseconds.

  • Client buffer max size - maximum buffer size in bytes for sending messages.

  • Number of acknowledgments - number of acknowledgments the node must receive before considering a request complete.

  • Key serializer - org.apache.kafka.common.serialization.StringSerializer by default.

  • Value serializer - org.apache.kafka.common.serialization.StringSerializer by default.

  • Other properties - any additional properties for the Kafka broker connection can be provided here.
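The settings above correspond to standard Kafka producer configuration keys. A minimal sketch in Java of how they might map onto a producer's properties, assuming an illustrative broker address and default-like values (the class name `KafkaNodeConfig` and all concrete values here are hypothetical, not taken from this page):

```java
import java.util.Properties;

public class KafkaNodeConfig {

    // Builds producer properties mirroring the node settings above.
    // Broker address and numeric values are illustrative assumptions.
    public static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // Bootstrap servers
        props.put("retries", "0");                         // Automatically retry times
        props.put("batch.size", "16384");                  // Producer batch size (bytes)
        props.put("linger.ms", "0");                       // Time to buffer locally (ms)
        props.put("buffer.memory", "33554432");            // Client buffer max size (bytes)
        props.put("acks", "-1");                           // Number of acknowledgments
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }
}
```

Entries from the "Other properties" field would simply be added to the same properties object before the producer is created.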

Published body - the node sends the full Message payload to the Kafka topic. If required, the Rule Chain can be configured with a chain of Transformation Nodes to prepare the correct payload for Kafka.

The outbound message from this node will contain the response offset, partition, and topic properties in the Message metadata. The original Message payload, type, and originator are not changed.
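For example, after a successful publish the metadata of the outbound message would gain entries of the following shape (the values shown are illustrative, not fixed):

```
offset: 16
partition: 0
topic: my-topic
```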

Note - if you want to use Confluent Cloud as a Kafka broker, you should add the following properties:

  • ssl.endpoint.identification.algorithm - https

  • sasl.mechanism - PLAIN

  • sasl.jaas.config - org.apache.kafka.common.security.plain.PlainLoginModule required username="CLUSTER_API_KEY" password="CLUSTER_API_SECRET";

  • security.protocol - SASL_SSL

  • CLUSTER_API_KEY - your access key from Cluster settings.

  • CLUSTER_API_SECRET - your access secret from Cluster settings.
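Put together, the Confluent Cloud entries above would look like this in properties form (CLUSTER_API_KEY and CLUSTER_API_SECRET remain placeholders for your own credentials):

```properties
ssl.endpoint.identification.algorithm=https
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="CLUSTER_API_KEY" password="CLUSTER_API_SECRET";
security.protocol=SASL_SSL
```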
