Liftbridge 26.01.1: CalVer, Reverse Subscriptions, and 241K msgs/sec

We took over Liftbridge maintenance about a month ago. The project had been quiet for a while, but the fundamentals were solid: a lightweight Kafka alternative built on NATS, with fault-tolerant streams, exactly what you need for buffering IoT data before it hits Arc.
This is our first release, and it's a big one. We've modernized the build system, fixed critical bugs that had been lingering for years, and added features that make Liftbridge actually usable in production.
Here's what's new.
CalVer Versioning
We're moving from semantic versioning to CalVer. This release is 26.01.1—year, month, patch.
Why? Because semantic versioning doesn't make sense for infrastructure software that evolves continuously. CalVer tells you exactly when a release came out, which is more useful when you're debugging production issues six months from now.
The minimum Go version is now 1.25.6.
Reverse Subscriptions
You can now read streams backwards—newest to oldest.
This sounds simple, but it's incredibly useful for debugging. When something breaks at 3am, you want to see the most recent events first, not scroll through hours of history. Reverse subscriptions make that fast.
We also optimized the cursor handling. Seeking to a position went from O(n) to O(k) complexity, where k is the number of segments you actually need to scan. For large streams, this is a significant improvement.
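The release notes don't spell out how the new seek works internally, but the general idea is to jump straight to the segment that holds the target offset instead of walking every segment. Here's a minimal, hypothetical sketch in Go; the `segment` type and `seekSegment` helper are ours for illustration, not Liftbridge's internals:

```go
package main

import (
	"fmt"
	"sort"
)

// segment models an on-disk log segment by the offset of its first message.
// This is a conceptual sketch of the cursor-seek idea, not Liftbridge's
// actual internal types.
type segment struct {
	baseOffset int64
}

// seekSegment returns the index of the segment that contains targetOffset.
// Instead of walking every segment, it binary-searches the sorted base
// offsets, so only the segments that actually hold the requested range
// need to be touched afterwards.
func seekSegment(segments []segment, targetOffset int64) int {
	// sort.Search finds the first segment whose base offset is past the
	// target; the segment just before it contains the target.
	i := sort.Search(len(segments), func(i int) bool {
		return segments[i].baseOffset > targetOffset
	})
	if i == 0 {
		return 0
	}
	return i - 1
}

func main() {
	segs := []segment{{0}, {1000}, {2000}, {3000}}
	fmt.Println(seekSegment(segs, 2500)) // prints 2: only segments from index 2 need scanning
}
```

A reverse read would start from the located segment and walk toward older base offsets, which is what makes newest-to-oldest delivery cheap.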
Release Automation
Liftbridge now has proper CI/CD. Every release automatically builds:
- Multi-platform binaries (macOS, Linux, Windows across AMD64 and ARM64)
- Linux packages (Debian .deb and RPM)
- Docker images pushed to ghcr.io
No more manual release processes. Tag a version, GitHub Actions handles the rest.
Package Management
If you're running Liftbridge on Linux, the packages now include everything you need:
- systemd service files for automatic startup
- Default configuration at /etc/liftbridge/liftbridge.yaml
- Dedicated liftbridge user and group for security
Install the package and you're running. No manual setup required.
Anonymous Telemetry
We've added opt-out telemetry to understand how Liftbridge is being used. It collects non-sensitive data—version, OS, CPU count—nothing about your messages or configuration.
You can disable it in the config file or via environment variable. We're not interested in tracking you, just understanding which platforms to prioritize.
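To make the scope concrete, here's a rough Go sketch of an opt-out check and a report of that shape. The field names and the `LIFTBRIDGE_TELEMETRY_DISABLED` variable are illustrative assumptions, not the documented configuration; check the config reference for the real setting:

```go
package main

import (
	"fmt"
	"os"
	"runtime"
)

// report is roughly the shape of the data described above: version, OS,
// and CPU count. These field names are illustrative, not the actual schema.
type report struct {
	Version string
	OS      string
	NumCPU  int
}

func main() {
	// The opt-out variable name here is an assumption for this sketch.
	if os.Getenv("LIFTBRIDGE_TELEMETRY_DISABLED") != "" {
		fmt.Println("telemetry disabled")
		return
	}
	r := report{
		Version: "26.01.1",
		OS:      runtime.GOOS,
		NumCPU:  runtime.NumCPU(),
	}
	fmt.Printf("%+v\n", r)
}
```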
Performance
We added benchmark utilities to measure actual throughput. On reasonable hardware, Liftbridge sustains 241K messages per second with the fast-path optimization for RF=1 partitions.
That's more than enough for most IoT buffering scenarios, and it leaves plenty of headroom.
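If you want to sanity-check throughput on your own hardware, a simple loop over the client's `Publish` call (the same API used in the quick start below) gives a rough lower bound. This sketch publishes synchronously, so it will report far less than the async, batched path the benchmark utilities exercise; the stream name and message count are arbitrary:

```go
package main

import (
	"context"
	"fmt"
	"time"

	lift "github.com/liftbridge-io/go-liftbridge/v2"
)

func main() {
	client, err := lift.Connect([]string{"localhost:9292"})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	ctx := context.Background()
	if err := client.CreateStream(ctx, "bench", "bench-stream"); err != nil && err != lift.ErrStreamExists {
		panic(err)
	}

	const n = 10000
	payload := []byte("benchmark message")

	// Time n synchronous publishes. Each call waits for an ack, so this is
	// a conservative lower bound; batched or async publishing is much faster.
	start := time.Now()
	for i := 0; i < n; i++ {
		if _, err := client.Publish(ctx, "bench-stream", payload); err != nil {
			panic(err)
		}
	}
	elapsed := time.Since(start)
	fmt.Printf("%d msgs in %s (%.0f msgs/sec)\n", n, elapsed, float64(n)/elapsed.Seconds())
}
```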
Bug Fixes
This release fixes several critical issues that had been open for a while:
Corrupt Index Recovery. If an index file gets corrupted (power loss, disk issues), Liftbridge now detects it and rebuilds from the log files automatically. No more manual intervention required.
Snapshot Restore Panic. Fixed a nil pointer dereference that could crash the server during recovery. This was a nasty one—it only showed up under specific failure scenarios.
Signal Handling Race. The embedded NATS server now respects Liftbridge's graceful shutdown sequence. Previously, killing the process could leave things in an inconsistent state.
Leader ISR Inconsistency. Added defensive checks for edge cases where partition leaders weren't in the sync replica set.
Segment Deletion. Implemented mark-then-delete approach to prevent race conditions when cleaning up old segments.
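For readers curious what mark-then-delete means in practice, here's a toy sketch of the pattern. It's our illustration of the idea, not Liftbridge's actual cleanup code:

```go
package main

import (
	"fmt"
	"sync"
)

// segmentFile is a stand-in for an on-disk log segment; the real cleanup
// code differs, this only illustrates the mark-then-delete idea.
type segmentFile struct {
	name    string
	readers int  // active readers holding the segment open
	marked  bool // flagged for deletion but not yet removed
}

type cleaner struct {
	mu       sync.Mutex
	segments []*segmentFile
}

// markForDeletion flags old segments instead of deleting them immediately,
// so a concurrent reader never has a segment removed out from under it.
func (c *cleaner) markForDeletion(name string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	for _, s := range c.segments {
		if s.name == name {
			s.marked = true
		}
	}
}

// sweep deletes only segments that are both marked and no longer referenced.
func (c *cleaner) sweep() {
	c.mu.Lock()
	defer c.mu.Unlock()
	kept := c.segments[:0]
	for _, s := range c.segments {
		if s.marked && s.readers == 0 {
			fmt.Println("deleting", s.name) // os.Remove in real code
			continue
		}
		kept = append(kept, s)
	}
	c.segments = kept
}

func main() {
	c := &cleaner{segments: []*segmentFile{
		{name: "00000000.log"},
		{name: "00001000.log", readers: 1},
	}}
	c.markForDeletion("00000000.log")
	c.markForDeletion("00001000.log")
	c.sweep() // removes only the unreferenced segment
}
```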
Dependencies
We updated the core dependencies:
- hashicorp/raft: v1.1.2 → v1.7.3
- go-hclog: v0.14.1 → v1.6.2
The Raft update enables the pre-vote protocol, which improves cluster stability during network partitions.
Quick Start
Create a docker-compose.yml:
```yaml
services:
  nats:
    image: nats:latest
    ports:
      - "4222:4222"

  liftbridge:
    image: ghcr.io/liftbridge-io/liftbridge:26.01.1
    user: root
    ports:
      - "9292:9292"
    volumes:
      - liftbridge-data:/tmp/liftbridge
    command: ["--nats-servers", "nats://nats:4222", "--raft-bootstrap-seed"]
    depends_on:
      - nats

volumes:
  liftbridge-data:
```

Then run:

```bash
docker compose up -d
```

Port 4222 is NATS, port 9292 is the Liftbridge API.
Note: The `user: root` setting is a workaround for a permissions issue with Docker volumes. We know it's not ideal, and we're fixing the Dockerfile to create the data directory with proper ownership. This will be resolved in 26.03.1.
Then connect with Go:
```go
package main

import (
	"context"
	"fmt"
	"time"

	lift "github.com/liftbridge-io/go-liftbridge/v2"
)

func main() {
	// Connect to Liftbridge
	client, err := lift.Connect([]string{"localhost:9292"})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	ctx := context.Background()

	// Create a stream
	err = client.CreateStream(ctx, "events", "events-stream")
	if err != nil && err != lift.ErrStreamExists {
		panic(err)
	}
	fmt.Println("Stream created!")

	// Publish a message
	_, err = client.Publish(ctx, "events-stream", []byte("hello world"))
	if err != nil {
		panic(err)
	}
	fmt.Println("Message published!")

	// Subscribe to the stream
	err = client.Subscribe(ctx, "events-stream",
		func(msg *lift.Message, err error) {
			if err != nil {
				panic(err)
			}
			fmt.Printf("Received: %s\n", msg.Value())
		},
		lift.StartAtEarliestReceived())
	if err != nil {
		panic(err)
	}

	// Keep running to receive messages
	time.Sleep(5 * time.Second)
}
```

Output:

```
Stream created!
Message published!
Received: hello world
```
What's Next
We're moving to a release every two months. Expect 26.03.1 in March, 26.05.1 in May, and so on.
For 26.03.1, we're focused on:
- Docker improvements. Fix the permissions issue so you don't need user: root. Proper data directory setup out of the box.
- Arc integration. The goal is a complete IoT data pipeline: Telegraf collects sensor data, Liftbridge buffers it durably, Arc stores it for analytics, Grafana visualizes it.
- Standalone image. A single Docker image with embedded NATS for simpler local development.
Liftbridge gives you replay capability and durability at the edge. When sensors dump hours of backlogged data or when Arc is down for maintenance, nothing gets lost.
Resources
- GitHub: https://github.com/liftbridge-io/liftbridge
- Documentation: https://liftbridge.io/docs
- Discord: https://discord.gg/nxnWfUxsdm
Ready to handle billion-record workloads?
Deploy Arc in minutes. Own your data in Parquet.