
· 2 min read

In this guide, we are going to learn how to use Conduit's stream inspector. Stream inspection is available via the Conduit UI and the API.


To access the stream inspector through the UI, first navigate to the pipeline you'd like to inspect. Then, click on the connector you're interested in. You'll see something similar to this:

stream inspector pipeline view

Click the "Inspect Stream" button to start inspecting the connector. A new pop-up window will show the records:

stream inspector show stream

On the "Stream" tab you'll see the latest 10 records. If you switch to the "Single record" view, only the last record will be shown. You can use the "Pause" button to pause the inspector and stop receiving the latest record(s). The ones that are already shown will be kept so you can inspect them more thoroughly.


To access the stream inspector through the API, you'll need a WebSocket client (for example wscat). The URL on which the inspector is available comes in the following format: ws://host:port/v1/connectors/<connector ID>/inspect. For example, if you run Conduit locally with the default settings, you can inspect a connector by running the following command:

```
$ wscat -c ws://localhost:8080/v1/connectors/pipeline1:destination1/inspect | jq .
{
  "result": {
    "position": "NGVmNTFhMzUtMzUwMi00M2VjLWE2YjEtMzdkMDllZjRlY2U1",
    "operation": "OPERATION_CREATE",
    "metadata": {
      "opencdc.readAt": "1669886131666337227"
    },
    "key": {
      "rawData": "NzQwYjUyYzQtOTNhOS00MTkzLTkzMmQtN2Q0OWI3NWY5YzQ3"
    },
    "payload": {
      "before": {
        "rawData": ""
      },
      "after": {
        "structuredData": {
          "company": "string 1d4398e3-21cf-41e0-9134-3fe012e6d1fb",
          "id": 1534737621,
          "name": "string fbc664fa-fdf2-4c5a-b656-d52cbddab671",
          "trial": true
        }
      }
    }
  }
}
```

The above command also uses jq to pretty-print the output. You can also use jq to decode Base64-encoded strings, which may represent record positions, keys or payloads:

```shell
wscat -c ws://localhost:8080/v1/connectors/pipeline1:destination1/inspect | jq '.result.key.rawData |= @base64d'
```
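To see what that `jq` filter does without a running pipeline, here's a minimal sketch using a hand-made inspector frame (the key value is made up for illustration):

```shell
# A single inspector frame with a Base64-encoded record key (sample data).
sample='{"result":{"key":{"rawData":"bXkta2V5"}}}'

# Decode the key in place, exactly as in the wscat pipeline above:
echo "$sample" | jq '.result.key.rawData |= @base64d'
# The key now reads "my-key" instead of "bXkta2V5".
```

The `|=` update-assignment rewrites only the selected field, so the rest of the record passes through untouched.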

· 8 min read

In this article, we are going to walk through, step by step, how to build a Conduit connector.

Conduit connectors communicate with Conduit by writing records into the pipeline (source connector), reading records from it (destination connector), or both.

For this example, we are going to build an Algolia destination connector. The goal of this connector is to give the user the ability to send data to Algolia. In the context of search engines, this is called indexing. Since Conduit is a generic tool to move data between data infrastructure, with this new connector we can index data from any Conduit Source (PostgreSQL, Kafka, etc.).

You may find this full example on GitHub.

Let's build!

· 3 min read

The Conduit Kafka Connect Wrapper connector is a special connector that allows you to use Kafka Connect connectors with Conduit. Conduit doesn't come bundled with Kafka Connect connectors, but this wrapper lets you bring any Kafka Connect connector to Conduit.

This connector gives you the ability to:

  • Easily migrate from Kafka Connect to Conduit.
  • Remove Kafka as a dependency to move data between data infrastructure.
  • Leverage a datastore if Conduit doesn't have a native connector.

Since the Conduit Kafka Connect Wrapper itself is written in Java, while most of Conduit's connectors are written in Go, it also serves as a good example of the flexibility of the Conduit Plugin SDK.

Let's begin.

How it works

To use the Kafka Connect wrapper connector, you'll need to:

  1. Clone the conduit-kafka-connect-wrapper repository.
  2. Build the Connector JAR.
  3. Download Kafka Connect JARs and any dependencies you would like to add.
  4. Create a Conduit pipeline.
  5. Add the connector to the pipeline.
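Step 5 is a regular create-connector call whose settings carry the Kafka Connect configuration. A hedged sketch of such a request body, where the plugin path, the `wrapper.connector.class` key, and the JDBC sink class are assumptions for illustration:

```shell
# Hypothetical connector config wrapping a Kafka Connect JDBC sink.
# The plugin path and setting keys are assumptions; check the wrapper's README.
config='{
  "type": "TYPE_DESTINATION",
  "plugin": "/path/to/conduit-kafka-connect-wrapper",
  "config": {
    "name": "jdbc-sink",
    "settings": {
      "wrapper.connector.class": "io.aiven.connect.jdbc.JdbcSinkConnector",
      "connection.url": "jdbc:postgresql://localhost/example"
    }
  }
}'

# Sanity-check the body before POSTing it to Conduit's /v1/connectors endpoint:
echo "$config" | jq -r '.config.settings["wrapper.connector.class"]'
```

Everything under `settings` other than the wrapper's own keys is passed through to the wrapped Kafka Connect connector.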

· 3 min read

By default, Conduit ships with a REST API that allows you to automate the creation of data pipelines and connectors. To make it easy to get started with the API, we have provided a Swagger UI to visualize and interact with Conduit without having to write any code...yet 😉.

After you start Conduit, if you navigate to http://localhost:8080/openapi/, you will see a page that looks like this:

Conduit in Terminal

Then, after you test the API, you can write code to make the equivalent request. For example, here is how you would make a request using the axios Node.js library.

```js
const axios = require('axios');

// pgTable, pgUrl, and pkgPath are defined elsewhere in your script.
const config = {
  type: 'TYPE_SOURCE',
  plugin: `${pkgPath}/pkg/plugins/pg/pg`,
  config: {
    name: 'pg',
    settings: {
      table: pgTable,
      url: pgUrl,
      cdc: 'false',
    },
  },
};

const response = await axios.post(`http://localhost:8080/v1/connectors`, config);
```

Essentially, the API is everything you'd need to automate pipeline creation. Let's begin.

Starting Conduit

To get started, you need to install and start Conduit. You may even add Conduit to your $PATH.
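The optional `$PATH` step can look like this (the install directory is an example; adjust it to wherever you unpacked the Conduit binary):

```shell
# Assuming the Conduit binary lives in ~/conduit (example path):
export PATH="$PATH:$HOME/conduit"

# With the binary on your PATH, Conduit can be started from any directory,
# serving its HTTP API on port 8080 by default:
# conduit
```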


To open the Swagger UI, open your browser and navigate to http://localhost:8080/openapi. This UI allows you to interact with the API and create connectors. It also serves as a reference for the API.

Making a Request

The API lets you manage all parts of Conduit. For example, all we need to create and start a pipeline are these three endpoints:

  • Create Pipelines - POST /v1/pipelines
  • Create Connectors - POST /v1/connectors
  • Start/Stop Pipelines - POST /v1/pipelines/{id}/start

Let's use the Swagger UI to create a pipeline.

  1. First, find the create pipeline API and select "Try it out":

Conduit in Terminal

  2. Update the body of the request with your new pipeline details:

Create a Conduit Pipeline

In this case, the config describes the name and the description of the new pipeline:

```json
"config": {
  "name": "string",
  "description": "string"
}
```

  3. Select "Execute" and notice the response of the request:

Conduit API Response

For every request, you will be able to try it out, see the body of the request, and the expected response.
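Once a request works in Swagger, the same call is easy to script. A minimal sketch with curl against Conduit's default address (the pipeline name and description are examples):

```shell
# Request body matching the Swagger example above.
body='{
  "config": {
    "name": "my-pipeline",
    "description": "Created via the REST API"
  }
}'

# Validate the body locally, then send it (uncomment once Conduit is running):
echo "$body" | jq -r '.config.name'
# curl -X POST -H 'Content-Type: application/json' -d "$body" http://localhost:8080/v1/pipelines
```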

What's Next

Now that you know how to try out the API, you can explore Conduit with these other resources:

· 3 min read

In this guide, we will build a data pipeline that moves data between files. This example is a great way to get started with Conduit on a local machine, but it's also the foundation of use cases such as log aggregation.

File to File Conduit Pipeline

Every time data is appended to src.log, it will be moved in real time to dest.log.
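Such a pipeline needs a file source reading src.log and a file destination writing dest.log. A hedged sketch of the source connector's request body, where the `builtin:file` plugin name and the `path` setting are assumptions for illustration:

```shell
# Hypothetical source connector config; the destination is symmetric,
# with TYPE_DESTINATION and "path": "./dest.log".
src_config='{
  "type": "TYPE_SOURCE",
  "plugin": "builtin:file",
  "config": {
    "name": "src-file",
    "settings": { "path": "./src.log" }
  }
}'

# Sanity-check the JSON before POSTing it to /v1/connectors:
echo "$src_config" | jq -r '.config.settings.path'
```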