About Arc

Modern workloads generate massive data. We built the database that can handle it.

Product analytics teams process billions of user events per day. Observability platforms ingest millions of logs, metrics, and traces per second. AI agents accumulate millions of conversation records.

Traditional databases can't keep up. They're slow, expensive, and lock your data in proprietary formats.

Arc solves this: ingestion at 18M+ records/sec, queries that scan 6M+ rows/sec across billions of rows, and portable Parquet files you own.

Built on DuckDB and Parquet because we believe your data should be:

  • Fast to query - Billions of rows, millisecond response times
  • Yours to keep - Standard Parquet format, query with any tool
  • Simple to operate - Docker/Kubernetes, runs at the edge or in the cloud

Arc started because we spent years running data infrastructure for demanding workloads - product analytics, observability, IoT, fleet tracking - and kept hitting the same problems: databases that couldn't scale, proprietary formats that locked data in, and poor SQL support.

We're building the analytical database we wish existed.

Built by Basekick Labs. Founded by Ignacio, ex-InfluxData.

Ready to handle billion-record workloads?

Open Source

docker run -d -p 8000:8000 \
  -e STORAGE_BACKEND=local \
  -v arc-data:/app/data \
  ghcr.io/basekick-labs/arc:latest
Installation Guide ->

Arc Enterprise

Clustering, RBAC, and dedicated support for large deployments.