~16MB binary - No JVM - No ZooKeeper - Apache 2.0

Kafka semantics.
Go simplicity.

Lightweight, fault-tolerant message streaming built on NATS.

Single binary. No JVM. No ZooKeeper. Durable and replicated.

Kafka is powerful. It's also heavy.

Kafka is the gold standard for message streaming—but it comes with baggage.

The Kafka Tax:

JVM runtime — Memory-hungry, tuning-intensive

ZooKeeper dependency — Another cluster to manage (or KRaft migration)

Operational complexity — Broker configs, partition rebalancing, consumer group lag

Client ecosystem — librdkafka or nothing for non-Java languages

For edge deployments, IoT workloads, or teams that want streaming without the overhead—there's a better way.

The Solution

Message streaming for the rest of us

Liftbridge delivers Kafka's durability and semantics in a lightweight package designed for Go-first teams.

Durable & Replicated

Append-only commit log with configurable replication. Messages acknowledged with AckPolicy.ALL survive node failures.
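
For example, with the Go client you can create a stream replicated across three nodes and publish with acks from the full ISR. A minimal sketch, reusing the client, ctx, and lift import from the Quick Start below; stream and subject names are illustrative:

// Create a stream whose partitions are replicated to three nodes.
if err := client.CreateStream(ctx, "orders", "orders-stream",
    lift.ReplicationFactor(3),
); err != nil && err != lift.ErrStreamExists {
    log.Fatal(err)
}

// Block until every in-sync replica has acknowledged the write.
if _, err := client.Publish(ctx, "orders-stream", []byte("order placed"),
    lift.AckPolicyAll(),
); err != nil {
    log.Fatal(err)
}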

Built on NATS

Uses NATS as the transport layer. Add durable streaming to existing NATS deployments, or run standalone.

Single Binary

~16MB. No JVM. No ZooKeeper. Deploy anywhere—bare metal, containers, edge devices.

Kafka-Compatible Semantics

Streams, partitions, consumer groups, offset tracking. If you know Kafka, you know Liftbridge.

Under the Hood

Technical Highlights

Production-grade message streaming with thoughtful engineering choices.

Dual Consensus Model

Raft for metadata (cluster state, stream assignments). ISR replication for data (high-throughput message writes). Best of both worlds.

Flexible Durability

Choose your ack policy: LEADER for speed, ALL for durability, NONE for fire-and-forget. Tune per-stream or per-message.
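
In the Go client the ack policy is a per-message option. A sketch, assuming the client and ctx from the Quick Start below and an illustrative stream name:

// Swap the option to trade latency for durability:
//   lift.AckPolicyNone()   - fire-and-forget, no ack
//   lift.AckPolicyLeader() - ack once the partition leader has it (the default)
//   lift.AckPolicyAll()    - ack only after the full ISR has it
if _, err := client.Publish(ctx, "metrics-stream", []byte("cpu=42"),
    lift.AckPolicyAll(),
); err != nil {
    log.Fatal(err)
}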

Time-Based Replay

Timestamp indexes on every partition. Query "everything since yesterday" without scanning from offset zero.
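
In the Go client, replay-from-time is a subscription option. A sketch, assuming the client and ctx from the Quick Start below plus the standard time package:

// Replay everything published in the last 24 hours, then keep tailing.
if err := client.Subscribe(ctx, "events-stream",
    func(msg *lift.Message, err error) {
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s: %s\n", msg.Timestamp(), msg.Value())
    },
    lift.StartAtTimeDelta(24*time.Hour),
); err != nil {
    log.Fatal(err)
}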

Go-Native Client

go-liftbridge is the canonical client. No CGO, no librdkafka, pure Go.

Consumer Groups

Distributed stream processing with automatic partition assignment, consumer balancing, position tracking, and failover.
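
A rough sketch of the flow, assuming the v2 Go client's consumer-group API (CreateConsumer) plus the client and ctx from the Quick Start below; check the go-liftbridge docs for the exact surface:

// Consumers sharing a group ID split the stream's partitions among themselves.
consumer, err := client.CreateConsumer("order-processors")
if err != nil {
    log.Fatal(err)
}
defer consumer.Close()

if err := consumer.Subscribe(ctx, []string{"orders-stream"},
    func(msg *lift.Message, err error) {
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("partition %d @ offset %d: %s\n",
            msg.Partition(), msg.Offset(), msg.Value())
    },
); err != nil {
    log.Fatal(err)
}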

Log Compaction

Retain only the latest value per key. Perfect for event sourcing and maintaining current state without unbounded growth.
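
Compaction keys off the message key. A sketch, assuming the CompactEnabled stream option available in recent go-liftbridge releases (compaction can also be enabled server-wide in config) and the client and ctx from the Quick Start below:

// Enable compaction when creating the stream.
if err := client.CreateStream(ctx, "users", "user-state",
    lift.CompactEnabled(true),
); err != nil && err != lift.ErrStreamExists {
    log.Fatal(err)
}

// Only the latest value per key survives compaction.
if _, err := client.Publish(ctx, "user-state", []byte(`{"plan":"pro"}`),
    lift.Key([]byte("user-42")),
); err != nil {
    log.Fatal(err)
}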

Capabilities

More features, same simplicity

Everything you need for production message streaming.

Wildcard Subscriptions

Subscribe to patterns like stock.nyse.* for flexible topic matching.
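
A stream's NATS subject can include wildcards, so one stream can capture a whole family of subjects. A sketch, assuming the client and ctx from the Quick Start below:

// Messages published to stock.nyse.ibm, stock.nyse.msft, etc. all land here.
if err := client.CreateStream(ctx, "stock.nyse.*", "nyse-stream"); err != nil &&
    err != lift.ErrStreamExists {
    log.Fatal(err)
}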

Message Headers

Attach arbitrary key-value metadata to any message.
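
A sketch, assuming the Header message option in the v2 Go client plus the client and ctx from the Quick Start below; header names are illustrative:

// Attach headers at publish time; read them back with msg.Headers().
if _, err := client.Publish(ctx, "events-stream", []byte("payload"),
    lift.Header("trace-id", []byte("abc-123")),
    lift.Header("content-type", []byte("application/json")),
); err != nil {
    log.Fatal(err)
}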

Key-Value Semantics

Messages with keys for partitioning and compaction.

Retention Policies

Time-based and size-based data lifecycle management.
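
A sketch, assuming the per-stream retention options added in recent go-liftbridge releases (defaults can also be set server-wide in the Liftbridge config) and the client and ctx from the Quick Start below:

if err := client.CreateStream(ctx, "telemetry", "telemetry-stream",
    lift.RetentionMaxAge(7*24*time.Hour), // drop segments older than a week
    lift.RetentionMaxBytes(1<<30),        // ...or once the log exceeds 1 GiB
); err != nil && err != lift.ErrStreamExists {
    log.Fatal(err)
}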

gRPC API

Build clients in any language with the gRPC protocol.

Partitioned Streams

Horizontal scaling with configurable partitions per stream.
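
A sketch of partitioned publishing, assuming the client and ctx from the Quick Start below; stream names are illustrative:

// Three partitions; messages with the same key always map to the same one.
if err := client.CreateStream(ctx, "orders", "orders-stream",
    lift.Partitions(3),
); err != nil && err != lift.ErrStreamExists {
    log.Fatal(err)
}

if _, err := client.Publish(ctx, "orders-stream", []byte("order placed"),
    lift.Key([]byte("customer-7")),
    lift.PartitionByKey(),
); err != nil {
    log.Fatal(err)
}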

Liftbridge vs. the alternatives

Choose the right tool for your streaming needs.

                        Liftbridge           Kafka                  NATS JetStream   Redpanda
Language                Go                   Java/Scala             Go               C++
Runtime                 Single binary        JVM                    Single binary    Single binary
Dependencies            NATS                 ZooKeeper/KRaft        None             None
Binary size             ~16MB                ~100MB+                ~20MB            ~100MB+
Kafka API compatible    Semantics only       Native                 No               Yes
Memory footprint        Low                  High                   Low              Medium
Best for                Go teams, edge, IoT  Enterprise, ecosystem  NATS users       Kafka replacement

Go-First

Pure Go client, no CGO dependencies

NATS Ecosystem

Leverage existing NATS infrastructure

Edge-Ready

~16MB binary deploys anywhere

Architecture

How it works

Liftbridge adds durable, replicated streaming on top of NATS.

[Architecture diagram: Publishers (your apps) -> NATS (pub/sub transport) -> Liftbridge (durable log: replicated, indexed, retained) -> Consumers (your apps); replicated across nodes]
1

Streams map to NATS subjects

Configure partitions for parallel processing and ordered delivery.

2

Partitions are append-only logs

Replicated across the cluster for fault tolerance.

3

Consumers read from any offset

Latest, earliest, or specific timestamp—your choice.

4

Consumer groups coordinate

Automatic partition assignment and offset tracking.

Built for real workloads

From edge gateways to microservices—Liftbridge handles production message streaming.

IoT & Sensor Data

Buffer telemetry from thousands of devices. Handle network interruptions gracefully. Replay data for reprocessing.

Telemetry - Buffering - Replay

Event Sourcing

Durable event log for CQRS architectures. Replay events to rebuild state. Audit trail included.

CQRS - Audit - State Rebuild

Microservices Decoupling

Async communication between services. Producers and consumers scale independently. No tight coupling.

Async - Scaling - Decoupling

Edge Computing

Small enough to run on edge gateways. Buffer data locally, forward when connected. Single binary deployment.

Edge - Offline - Gateway

Quick Start

Up and running in 5 minutes

1

Start Liftbridge

Terminal
docker run -d --name liftbridge \
  -p 4222:4222 -p 9292:9292 \
  liftbridge/standalone-dev

Includes embedded NATS server. Requires Docker.

2

Create a Go project

Terminal
mkdir liftbridge-demo && cd liftbridge-demo
go mod init demo
go get github.com/liftbridge-io/go-liftbridge/v2

Requires Go 1.18+.

3

Create main.go

main.go
package main

import (
    "context"
    "fmt"
    "log"

    lift "github.com/liftbridge-io/go-liftbridge/v2"
)

func main() {
    // Connect to Liftbridge
    client, err := lift.Connect([]string{"localhost:9292"})
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    ctx := context.Background()

    // Create a stream (maps NATS subject "events" to stream "events-stream")
    if err := client.CreateStream(ctx, "events", "events-stream"); err != nil {
        if err != lift.ErrStreamExists {
            log.Fatal(err)
        }
    }
    fmt.Println("Stream created!")

    // Publish a message
    if _, err := client.Publish(ctx, "events-stream",
        []byte("hello world")); err != nil {
        log.Fatal(err)
    }
    fmt.Println("Message published!")

    // Subscribe and read the message. Subscribe is asynchronous: the handler
    // runs on another goroutine, so block until the message arrives.
    received := make(chan struct{})
    if err := client.Subscribe(ctx, "events-stream",
        func(msg *lift.Message, err error) {
            if err != nil {
                log.Fatal(err)
            }
            fmt.Printf("Received: %s\n", string(msg.Value()))
            close(received)
        },
        lift.StartAtEarliestReceived(),
    ); err != nil {
        log.Fatal(err)
    }
    <-received
}

4

Run it

Terminal
go run main.go

Stream created!
Message published!
Received: hello world

Open source. Community driven.

Liftbridge is maintained by Basekick Labs, creators of Arc. We're committed to keeping it open, lightweight, and useful.

Liftbridge was created by Tyler Treat in 2017 and is now maintained by Basekick Labs as part of our IoT data platform. Read the full story ->

FAQ

How does Liftbridge compare to NATS JetStream?

Both add persistence to NATS, but with different approaches:

  • JetStream is built into the NATS server itself. One binary, but its semantics are less Kafka-like.
  • Liftbridge runs alongside NATS. Separate binary, but full Kafka semantics (partitions, consumer groups, offset tracking).

If you want Kafka-style streaming with NATS as transport, choose Liftbridge.

Does Liftbridge replace NATS?

No. Liftbridge uses NATS as its underlying transport layer.

You can run the two together in either of two ways:

  • Embedded NATS server (single process)
  • Existing NATS cluster (add durability to existing deployment)

Which client libraries are available?

Go is the primary client: go-liftbridge

Community clients exist for:

  • Python
  • Java
  • Node.js

The gRPC API is documented, so you can generate clients for any language.

Is Liftbridge production-ready?

Yes. Liftbridge has been used in production since 2018.

Current version: Preparing v26.01.1 (January 2026)

Apache 2.0 licensed. Battle-tested in IoT, fintech, and microservices deployments.

How does Liftbridge handle consensus and replication?

Liftbridge uses a dual consensus model:

  • Raft for cluster metadata (stream assignments, leader election)
  • ISR (In-Sync Replicas) for message data (high-throughput replication)

This gives you both strong consistency for metadata and high throughput for data.

Does Liftbridge guarantee message ordering?

Ordering is guaranteed within a partition.

Like Kafka:

  • Messages with the same key go to the same partition
  • Within a partition, messages are strictly ordered
  • Across partitions, no ordering guarantee

Ready to simplify your streaming infrastructure?

Apache 2.0 licensed. Production ready. Go-native.

Preparing v26.01.1 - License: Apache 2.0 - Language: Go