Altair® Panopticon


Creating Apache Kafka Input Data Source

Allows Panopticon Streams to subscribe to Kafka topics on an external cluster.


1.     In the New Data Source page, select Input > Kafka in the Connector drop-down list.

2.     Enter the connection details:



Bootstrap Server

List of host/port pairs of Kafka servers used to bootstrap connections to a Kafka cluster.

By default, the value is localhost:9092,broker:29092. However, this can be overridden by specifying another bootstrap server in the External Settings text box (as specified in step 3).

Schema Registry Host

Where the Schema Registry is located. This can be in a different location from the Kafka cluster.

Schema Registry Port

The port number of the schema registry which provides the serving layer for the metadata. Default is 8081.


3.     Enter the External Settings to support authentication (i.e., username and password). Note that if the bootstrap server is not secured, authentication is not required and you may leave this text box blank.

Below is an example of the settings for SASL authentication:

bootstrap.servers=localhost:9093
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="dwchuser" password="dwchpwd";
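For a cluster using SASL/PLAIN, a fuller External Settings example might look like the following sketch. The protocol, mechanism, and credentials shown are placeholders and depend entirely on your cluster's security configuration:

```properties
bootstrap.servers=localhost:9093
# Assumed values -- adjust to match your cluster (e.g., SASL_SSL with SCRAM).
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="dwchuser" \
  password="dwchpwd";
```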

4.     Click Fetch Topics to populate the Topic drop-down list.

By default, the Hide Internal Topics toggle button is enabled and the Avro message type is selected.


Tap the slider to turn it off; the internal Kafka topics are then also displayed in the drop-down list.


Click the drop-down list to search and select the desired topic.

For non-Avro topics, select the Message Type: Fix, JSON, Text, XML, or Protobuf.

·         If Text is selected, confirm the Text Qualifier, Column Delimiter, and whether the first row of the message includes column headings.



Text Qualifier

Specifies whether fields are enclosed by text qualifiers; any column delimiters inside a qualified field are ignored.

Column Delimiter

Specifies the column delimiter to be used when parsing the text message.

First Row Headings

Determines whether the first row supplies the column headings and is therefore excluded from the retrieved data.
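The effect of these three settings can be illustrated with Python's csv module. The sample message below is hypothetical; the point is that a delimiter inside a qualified field is not treated as a column break:

```python
import csv
import io

# A hypothetical text message payload: the first row contains headings,
# fields are comma-delimited, and the text qualifier is a double quote.
message = 'name,price\n"Acme, Inc.",12.5\nGlobex,7.25\n'

reader = csv.reader(io.StringIO(message), delimiter=",", quotechar='"')
rows = list(reader)

headings = rows[0]   # First Row Headings: ['name', 'price']
records = rows[1:]   # Remaining rows are data records

# The comma inside the quoted field "Acme, Inc." is ignored as a
# delimiter because of the text qualifier.
print(headings)      # ['name', 'price']
print(records[0])    # ['Acme, Inc.', '12.5']
```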


·         If JSON is selected, enter the Record Path which allows the identification of multiple records within the JSON document (e.g., myroot.items.item).



Record Path

The path within the JSON document that identifies the records to be retrieved (e.g., myroot.items.item).
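To see how a dotted record path such as myroot.items.item identifies repeating records, here is a small Python sketch. The document structure and the helper function are illustrative only, not Panopticon's implementation:

```python
import json

def records_at_path(document: dict, record_path: str):
    """Walk a dotted record path and return the list of records found there."""
    node = document
    for key in record_path.split("."):
        node = node[key]
    # The final node is expected to be the repeating record (or a list of them).
    return node if isinstance(node, list) else [node]

doc = json.loads("""
{
  "myroot": {
    "items": {
      "item": [
        {"id": 1, "price": 10.0},
        {"id": 2, "price": 20.0}
      ]
    }
  }
}
""")

records = records_at_path(doc, "myroot.items.item")
print(len(records))   # 2
```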


·         If Protobuf is selected, confirm the Decimal Separator, and enter the Schema Name and Type Name.

Then click the browse button to select the File Descriptor (.desc file) in the Open dialog.



Schema Name

The Protobuf schema.

Type Name

The Protobuf message type that will be sent to Kafka.

File Descriptor

The FileDescriptorSet which:

·         is the output of the protocol compiler.

·         represents a set of .proto files compiled with the --descriptor_set_out option.



5.     Check the From Beginning box to subscribe starting from the earliest available messages.

If unchecked, you will only be subscribed to the latest messages.
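In plain Kafka consumer terms, this option roughly corresponds to the standard auto.offset.reset setting:

```properties
# Approximate Kafka consumer equivalent of the From Beginning option:
auto.offset.reset=earliest   # checked: start from the oldest retained messages
# auto.offset.reset=latest   # unchecked: receive only newly arriving messages
```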

6.     Select either the period (.) or comma (,) as the Decimal Separator.


Prepend 'default:' to elements that fall under the default namespace.
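The reason a prefix is needed can be sketched with Python's ElementTree (the sample document and the namespace URI are hypothetical). Elements in a default namespace are not matched by an unqualified path; binding a prefix to the default namespace URI and using it in the path, analogous to prepending 'default:' here, makes the query succeed:

```python
import xml.etree.ElementTree as ET

# A document whose elements sit in a default (unprefixed) namespace.
xml_text = """
<root xmlns="http://example.com/ns">
  <item>a</item>
  <item>b</item>
</root>
"""

root = ET.fromstring(xml_text)

# An unqualified path finds nothing, because the elements actually
# belong to the default namespace.
assert root.findall("item") == []

# Binding a prefix to the default namespace URI makes the query work.
ns = {"default": "http://example.com/ns"}
items = root.findall("default:item", ns)
print([i.text for i in items])   # ['a', 'b']
```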



7.     Click the fetch schema button to fetch the schema based on the connection details. The list of columns is then populated, with the data type of each determined by inspecting the first ‘n’ rows of the input data source, and the Save button is enabled.

8.     For non-Avro message types, except Protobuf, click the add column button to add columns to the Kafka connection that represent sections of the message. Then enter or select:




Name

The column name of the source schema.

Fix Tag/JsonPath/Text Column Index/XPath

The Fix Tag/JsonPath/Text Column Index/XPath of the source schema.


Type

The data type of the column. Can be Text, Numeric, or Time.

Date Format

The format when the data type is Time.


Defined parameters that can be used as a filter. Only available for the Avro, JSON, Text, and XML message types.


Determines whether the message field should be processed.



To parse and format times with higher than millisecond precision, the format string needs to end with a period followed by a sequence of uppercase S characters, with no additional characters after them.

For example: yyyy-MM-dd HH:mm:ss.SSSSSS
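For comparison, here is how a timestamp matching that pattern would be parsed in Python, which expresses the fractional-second part as %f rather than a run of S characters (the timestamp value is an arbitrary example):

```python
from datetime import datetime

# Parse a timestamp with microsecond precision, corresponding to the
# pattern yyyy-MM-dd HH:mm:ss.SSSSSS from the note above.
ts = datetime.strptime("2024-03-15 09:30:00.123456", "%Y-%m-%d %H:%M:%S.%f")
print(ts.microsecond)   # 123456
```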



9.     You can also opt to load or save a copy of the column definition.

10.  Define the Real-time Settings.

11.  Click Save. The new data source is added to the Data Sources list.