Arc Cloud is live. Start free — no credit card required.
ClickBench Verified

Arc vs ClickHouse

Arc matches or beats ClickHouse on analytical queries across all tested machines, with 2.5x faster ingestion and portable Parquet storage you fully own.

  • 2.5x faster ingestion
  • 17.3M rec/s (Arc) vs 7.0M rec/s (ClickHouse)
  • Parquet: portable storage you own

ClickBench Results

99.9M rows, 43 analytical queries. Arc runs true cold runs: service restart and OS cache flush before every query. Verify on benchmark.clickhouse.com →

Combined Score (lower is better)

| System | Machine | Score |
| --- | --- | --- |
| Arc | c8g.metal-48xl | ×1.04 |
| Arc | c7a.metal-48xl | ×1.13 |
| Arc | c6a.4xlarge | ×1.94 |
| ClickHouse (Parquet, single) | c6a.4xlarge | ×3.00 |
| ClickHouse (Parquet, partitioned) | c6a.4xlarge | ×1.98 |

Cold Run (lower is better)

| System | Machine | Score |
| --- | --- | --- |
| Arc | c8g.metal-48xl | ×1.16 |
| Arc | c7a.metal-48xl | ×1.21 |
| Arc | c6a.4xlarge | ×1.23 |
| ClickHouse (Parquet, partitioned) | c6a.4xlarge | ×1.48 |
| ClickHouse (Parquet, single) | c6a.4xlarge | ×1.80 |

Hot Run (lower is better)

| System | Machine | Score |
| --- | --- | --- |
| Arc | c8g.metal-48xl | ×1.02 |
| Arc | c7a.metal-48xl | ×1.15 |
| Arc | c6a.4xlarge | ×2.82 |
| ClickHouse (Parquet, partitioned) | c6a.4xlarge | ×2.73 |
| ClickHouse (Parquet, single) | c6a.4xlarge | ×5.13 |

Ingestion Benchmark

Sustained 60-second ingestion load. Same machine, same record schema.

| System | Throughput | Batch size | Protocol |
| --- | --- | --- | --- |
| Arc | 17.3M rec/s | 1,000 rows | MessagePack columnar |
| ClickHouse | 7.0M rec/s | 100,000 rows | HTTP native protocol |

Arc achieves ~2.5x higher throughput with 100x smaller batches.

Why Arc Is Different: Under the Hood

Arc and ClickHouse are both columnar analytical databases, but they make opposite bets on portability, simplicity, and storage ownership.

Storage Format

Parquet you own vs. MergeTree you don't

Arc stores every table as standard Apache Parquet files in a time-partitioned path (db/measurement/YYYY/MM/DD/HH/). Any tool that reads Parquet (DuckDB, Spark, Snowflake, pandas) can query Arc data directly without going through Arc. ClickHouse stores data in its proprietary MergeTree format, which is not readable outside of ClickHouse.
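
The partition layout is simple enough to reproduce; a minimal sketch (file naming inside the hour directory is Arc's concern and omitted here):

```python
from datetime import datetime, timezone

def partition_path(db: str, measurement: str, ts: datetime) -> str:
    """Build the time-partitioned prefix Arc uses for Parquet files:
    db/measurement/YYYY/MM/DD/HH/ (per the layout described above)."""
    return f"{db}/{measurement}/{ts:%Y/%m/%d/%H}/"

# Any Parquet reader can glob this layout directly, e.g.
# read_parquet('metrics/cpu/2025/03/**/*.parquet') in DuckDB.
print(partition_path("metrics", "cpu",
                     datetime(2025, 3, 7, 14, tzinfo=timezone.utc)))
# → metrics/cpu/2025/03/07/14/
```

Because the partition key is encoded in the path, external engines can prune whole hours or days of data just by matching directories, before opening a single file.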

Query Engine

DuckDB + SQL rewrites vs. custom C++ dialect

Arc embeds DuckDB, a vectorized OLAP engine with PostgreSQL-compatible SQL. Arc adds SQL rewrites at the HTTP layer: regex calls are rewritten to equivalent string functions and time bucketing uses epoch integer arithmetic before the query is handed to DuckDB. ClickHouse has its own custom C++ vectorized engine with a SQL dialect that diverges from standard SQL in several ways.
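
Arc's actual rewrite rules aren't published here, but the idea can be sketched: a hypothetical pre-processing pass that turns a time_bucket call into the epoch integer arithmetic DuckDB ultimately executes. The function names and the exact rule below are illustrative assumptions, not Arc's code.

```python
import re

# Toy illustration of the time-bucketing rewrite described above:
# a time_bucket('<N>m', col) call is replaced with epoch integer
# arithmetic before the query reaches DuckDB.
BUCKET_RE = re.compile(r"time_bucket\('(\d+)m',\s*(\w+)\)")

def rewrite_time_bucket(sql: str) -> str:
    def repl(m: re.Match) -> str:
        secs = int(m.group(1)) * 60          # bucket width in seconds
        col = m.group(2)
        # Integer-divide the epoch into fixed windows, then scale back up.
        return f"to_timestamp((epoch({col})::BIGINT // {secs}) * {secs})"
    return BUCKET_RE.sub(repl, sql)

print(rewrite_time_bucket(
    "SELECT time_bucket('5m', ts) AS bucket, avg(cpu) FROM metrics GROUP BY 1"
))
```

Rewriting to integer arithmetic rather than calling a date-truncation function per row keeps the bucketing on the fast vectorized integer path.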

Ingestion Protocol

1K-row batches vs. 100K-row minimums

Arc accepts MessagePack binary columnar batches (18M+ records/s), InfluxDB Line Protocol (a drop-in for existing Telegraf pipelines), and bulk CSV/Parquet import. Efficient batches start at ~1,000 rows. ClickHouse needs batches of 100,000+ rows to reach optimal throughput; smaller writes must be buffered in memory and flushed asynchronously, adding operational complexity.
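
As a rough illustration of what a columnar batch looks like, one array per column rather than one object per row. The field names here ("measurement", "columns") are assumptions for illustration; Arc's real wire format is MessagePack, for which json serves as a stdlib stand-in in this sketch.

```python
import json

# Build a ~1,000-row columnar batch: column name -> array of values.
# A real client would serialize with msgpack.packb(batch) and POST the
# bytes to Arc's ingestion endpoint; json here is a stdlib stand-in.
n = 1000
batch = {
    "measurement": "cpu",
    "columns": {
        "time":  [1_710_000_000_000 + i for i in range(n)],  # epoch ms
        "host":  ["web-01"] * n,
        "usage": [0.42] * n,
    },
}
payload = json.dumps(batch).encode()  # stand-in for msgpack.packb(batch)
print(len(batch["columns"]["time"]), "rows in one batch")
```

The columnar shape is what makes small batches cheap: each column arrives as one contiguous array, so the server can append it without per-row parsing.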

Deployment Model

One binary vs. six processes

Arc ships as a single Go binary with no external dependencies. Clustering uses embedded Raft consensus, with no separate coordination service. ClickHouse requires the server binary plus ClickHouse Keeper (or Apache ZooKeeper) for distributed coordination. A production ClickHouse cluster typically involves 6+ processes; an Arc cluster runs three.

Feature Comparison

| Feature | Arc | ClickHouse |
| --- | --- | --- |
| Standard SQL (DuckDB / PostgreSQL-compatible) | ✓ | ✗ (custom dialect) |
| Portable Parquet storage | ✓ | ✗ (proprietary MergeTree) |
| Open source | ✓ (AGPL-3.0) | ✓ (Apache-2.0) |
| InfluxDB Line Protocol ingestion | ✓ | ✗ |
| Edge / single-binary deployment | ✓ | ✗ (Keeper needed for clusters) |
| Retention policies | ✓ | ✓ (TTL) |
| Managed cloud offering | ✓ | ✓ |

Frequently Asked Questions

How does Arc compare to ClickHouse on ClickBench?

Arc matches or beats ClickHouse (Parquet) on combined score: on c6a.4xlarge, the machine with directly comparable results, Arc (×1.94) edges out ClickHouse's partitioned setup (×1.98) and clearly beats the single-file setup (×3.00). On cold runs Arc leads (×1.23 vs ×1.48 partitioned); on hot runs ClickHouse's partitioned setup is slightly ahead (×2.73 vs Arc's ×2.82). Full results are on benchmark.clickhouse.com.

Can I migrate from ClickHouse to Arc?

Yes. Arc speaks standard SQL and stores data in Parquet. You can export data from ClickHouse and import via Arc's HTTP API or the MessagePack columnar ingestion endpoint. The migration path is documented at docs.basekick.net.
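
For example, ClickHouse's HTTP interface can stream a query result out as Parquet, which Arc can then bulk-import. A minimal sketch of building that export request (host, port defaults, and table name are placeholders):

```python
from urllib.parse import urlencode

# Hypothetical first step of a migration: ClickHouse's HTTP interface
# (default port 8123) can stream any query result as Parquet via
# "FORMAT Parquet". The resulting file can then be bulk-imported
# through Arc's HTTP API.
def clickhouse_parquet_export_url(host: str, table: str) -> str:
    query = f"SELECT * FROM {table} FORMAT Parquet"
    return f"http://{host}:8123/?{urlencode({'query': query})}"

url = clickhouse_parquet_export_url("localhost", "hits")
print(url)  # fetch this URL and write the response body to hits.parquet
```

Because both sides speak Parquet, no intermediate CSV step or schema translation layer is needed; the exported file is already in Arc's native storage format.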

Does Arc support ClickHouse's ingestion protocol?

Arc uses its own high-performance MessagePack columnar protocol and also supports InfluxDB Line Protocol. ClickHouse's native protocol is not supported, but most client libraries can be adapted to Arc's HTTP API.

Is Arc's data portable?

Yes. Arc stores all data as standard Parquet files on storage you control (local disk, S3, MinIO). You can query them with DuckDB, Spark, Snowflake, or any Parquet-compatible tool. ClickHouse uses its own proprietary format.

Pricing

Start free with open source. Scale with enterprise features when you need them.

Open Source

Free forever
AGPL-3.0 licensed
  • 18M records/sec ingestion
  • Full SQL query engine (DuckDB)
  • Parquet storage (S3, GCS, local)
  • Docker and Kubernetes ready
  • Community support (Discord)
New

Arc Cloud

from$50/month

Managed hosting. No infrastructure. Free 30-day trial.

  • Deploy in 30 seconds
  • Dedicated physical servers
  • Daily backups to S3
  • Arc Enterprise included
  • No credit card required
Coming Q2 2026

Enterprise

$5,000/year

Starting price for up to 8 cores. Clustering, RBAC, and dedicated support.

  • Everything in Open Source
  • Horizontal clustering and HA
  • Role-based access control (RBAC)
  • Tiered storage and auto-aggregation
  • Dedicated support and SLAs
View all plans →

Enterprise Features

Clustering

Horizontal scaling with automatic data distribution. Query routing and load balancing across nodes.

Security

Fine-grained RBAC with database and table-level permissions. LDAP/SAML integration available.

Data Management

Automated retention policies, continuous queries for aggregation, and tiered storage for cost optimization.

Ready to handle billion-record workloads?

Deploy Arc in minutes. Own your data in Parquet. Use for analytics, observability, AI, IoT, or data warehousing.

Get Started →