Arc on ClickBench: TimescaleDB. It's Not Close.


Last week we published our ClickBench results against ClickHouse. Arc won on every machine, under a stricter cold-run methodology.

TimescaleDB is next. If anything, the numbers are more one-sided.

What TimescaleDB Is

TimescaleDB is a PostgreSQL extension for time-series data. It adds hypertables (automatic time-based partitioning), columnar compression, and continuous aggregates on top of standard PostgreSQL. It's a legitimate product with real production users, and it's the most common migration path for teams already on PostgreSQL who need time-series capabilities.

It also has a managed cloud offering — Timescale Cloud — at various instance sizes. We included both in the comparison.

ClickBench: Same Benchmark, Different Story

ClickBench is the industry-standard benchmark for analytical databases: 99.9M rows, 43 analytical queries, open source, fully reproducible. The methodology is the same as in our ClickHouse post: Arc runs true cold runs (service restart plus OS cache flush before every query). TimescaleDB is on the lukewarm-run list tracked in ClickBench issue #793 (https://github.com/ClickHouse/ClickBench/issues/793). When that gets fixed, their numbers get worse. Ours don't move.
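The cold-run discipline can be sketched as a small harness. The service name and commands below are illustrative placeholders, not Arc's actual tooling; the cache-drop step requires root on Linux.

```python
import subprocess
import time

def cold_run_setup(service="arc"):
    """Return the commands a true cold run issues before each query:
    restart the service, flush dirty pages, drop the OS page cache.
    The service name is a placeholder, not Arc's real unit name."""
    return [
        ["systemctl", "restart", service],
        ["sync"],
        ["sh", "-c", "echo 3 > /proc/sys/vm/drop_caches"],  # requires root
    ]

def timed_cold_query(query_cmd, service="arc"):
    """Run the cold-run setup, then time a single query command."""
    for cmd in cold_run_setup(service):
        subprocess.run(cmd, check=True)
    start = time.perf_counter()
    subprocess.run(query_cmd, check=True)
    return time.perf_counter() - start
```

A lukewarm run skips the restart-and-flush step, so later queries benefit from pages the earlier ones left in cache; that is the difference issue #793 tracks.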

Combined score — relative time, lower is better:

| System | Machine | Score |
|---|---|---|
| Arc | c8g.metal-48xl | ×1.20 |
| Arc | c7a.metal-48xl | ×1.31 |
| Arc | c6a.4xlarge | ×2.24 |
| TimescaleDB | c6a.4xlarge | ×10.44 |
| Timescale Cloud | 16 vCPU 64GB | ×10.84 |
| Timescale Cloud | 8 vCPU 32GB | ×15.06 |
| Timescale Cloud | 4 vCPU 16GB | ×20.36 |
| TimescaleDB (no columnstore) | c6a.4xlarge | ×149.18 |

Cold run — relative time, lower is better:

| System | Machine | Score |
|---|---|---|
| Arc | c8g.metal-48xl | ×1.71 |
| Arc | c6a.4xlarge | ×1.79 |
| Arc | c7a.metal-48xl | ×1.81 |
| Timescale Cloud | 16 vCPU 64GB | ×4.47 |
| Timescale Cloud | 8 vCPU 32GB | ×7.85 |
| TimescaleDB | c6a.4xlarge | ×12.36 |
| Timescale Cloud | 4 vCPU 16GB | ×16.69 |
| TimescaleDB (no columnstore) | c6a.4xlarge | ×112.55 |

Hot run — relative time, lower is better:

| System | Machine | Score |
|---|---|---|
| Arc | c8g.metal-48xl | ×1.14 |
| Arc | c7a.metal-48xl | ×1.29 |
| Arc | c6a.4xlarge | ×3.15 |
| TimescaleDB | c6a.4xlarge | ×20.63 |
| Timescale Cloud | 16 vCPU 64GB | ×25.94 |
| Timescale Cloud | 8 vCPU 32GB | ×36.33 |
| Timescale Cloud | 4 vCPU 16GB | ×45.93 |
| TimescaleDB (no columnstore) | c6a.4xlarge | ×590.72 |

On the same hardware (c6a.4xlarge), Arc is 4.6x faster on combined score and 6.5x faster on hot runs.

Arc on a c6a.4xlarge (×2.24) also beats every Timescale Cloud instance — including their 16 vCPU 64GB managed offering (×10.84). A self-hosted single binary on commodity hardware outperforms their cloud product on bigger machines.
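As a sanity check, the same-hardware multipliers quoted here fall directly out of the c6a.4xlarge rows of the score tables:

```python
# Relative scores on c6a.4xlarge, copied from the tables above
# (lower is better; scores are relative to the fastest entrant).
arc = {"combined": 2.24, "hot": 3.15, "cold": 1.79}
timescale = {"combined": 10.44, "hot": 20.63, "cold": 12.36}

for run in ("combined", "hot", "cold"):
    speedup = timescale[run] / arc[run]
    print(f"{run}: Arc is {speedup:.2f}x faster")
```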

Ingestion: Where It Gets Worse for TimescaleDB

ClickBench measures queries. For time-series workloads — IoT telemetry, product events, observability pipelines — ingestion throughput is equally important. You need to write fast before you can query fast.

We ran sustained ingestion benchmarks on a MacBook Pro (M3 Max, 14 CPU cores, 36 GB RAM, 1 TB NVMe). We gave TimescaleDB every advantage: Community Edition with hypertables and columnar compression enabled, the pgx COPY protocol, a tuned PostgreSQL config (shared_buffers=8GB, work_mem=256MB), and synchronous_commit=off. We tested multiple batch sizes and worker counts to find its peak.
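For reference, the tuned settings called out above correspond to a postgresql.conf fragment like this; only the values stated in this post are shown, everything else is whatever the installation defaults to:

```ini
; postgresql.conf fragment used for the TimescaleDB ingestion runs
shared_buffers = 8GB
work_mem = 256MB
synchronous_commit = off
```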

TimescaleDB peaked at 1.81M rec/sec with 10,000-row batches.

Arc was tested with 1,000-row batches — 10x smaller.

| | Arc | TimescaleDB |
|---|---|---|
| Throughput | 17.3M rec/s | 1.81M rec/s |
| Batch size | 1,000 rows | 10,000 rows |
| p50 latency | 4.4ms | 48ms |
| p99 latency | 26ms | 260ms |
| 30s total records | ~520M | ~54M |

9.6x faster throughput. 10x smaller batches. 11x lower p50 latency. 10x lower p99.
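Those multipliers come straight from the ingestion table; a quick check with the numbers copied from it:

```python
# Figures copied from the ingestion table above.
arc = {"rps": 17.3e6, "p50_ms": 4.4, "p99_ms": 26.0, "batch": 1_000}
tsdb = {"rps": 1.81e6, "p50_ms": 48.0, "p99_ms": 260.0, "batch": 10_000}

print(f"throughput:  {arc['rps'] / tsdb['rps']:.1f}x")        # ≈ 9.6x
print(f"p50 latency: {tsdb['p50_ms'] / arc['p50_ms']:.1f}x")  # ≈ 10.9x
print(f"p99 latency: {tsdb['p99_ms'] / arc['p99_ms']:.1f}x")  # = 10.0x
print(f"records in 30s: Arc ~{arc['rps'] * 30 / 1e6:.0f}M, "
      f"TimescaleDB ~{tsdb['rps'] * 30 / 1e6:.0f}M")
```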

The gap isn't tuning — it's architecture. TimescaleDB is PostgreSQL underneath. Every write goes through the PostgreSQL heap, WAL, and TimescaleDB's chunk management layer. That stack has a hard ceiling on write throughput regardless of configuration. We tested WAL on, WAL off, sync commit off, different batch sizes and worker counts. It doesn't move past ~1.8M rec/sec.

Arc sorts by time at ingestion and appends columnar Parquet files via Apache Arrow with no background merge pressure. That's why it sustains 17M+ rec/sec with tiny batches.
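A stdlib-only sketch of that write path, as an illustration of the idea rather than Arc's implementation (in Arc the segments are columnar Parquet files written via Apache Arrow):

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """An immutable, time-sorted batch, standing in for one Parquet file."""
    min_ts: int
    max_ts: int
    rows: list

@dataclass
class AppendOnlyStore:
    """Toy model of the write path described above: sort each incoming
    batch by time, append it as an immutable segment, never rewrite or
    merge in the background."""
    segments: list = field(default_factory=list)

    def ingest(self, batch):
        rows = sorted(batch, key=lambda r: r["ts"])  # sort by time at ingestion
        self.segments.append(Segment(rows[0]["ts"], rows[-1]["ts"], rows))

    def query(self, t0, t1):
        # Per-segment min/max timestamps let a time-range query skip
        # whole files without opening them.
        for seg in self.segments:
            if seg.max_ts < t0 or seg.min_ts > t1:
                continue
            for row in seg.rows:
                if t0 <= row["ts"] <= t1:
                    yield row
```

Because nothing is ever updated in place, there is no heap, no write-ahead log on the hot path, and no compaction competing with ingestion for I/O.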

The Honest Caveat

TimescaleDB is PostgreSQL. If you're already running PostgreSQL, the operational familiarity is real. Your existing tooling, your DBA's knowledge, your monitoring setup — it all carries over. That has value.

TimescaleDB also has row-level UPDATE semantics and transactional guarantees that Arc doesn't offer. If your workload needs those, TimescaleDB is the right choice.

Arc is optimized for append-heavy analytical workloads — ingestion pipelines, analytics, observability, IoT telemetry. If that's your use case, the numbers speak for themselves.

Operational Complexity

TimescaleDB: Install PostgreSQL, install the extension, tune postgresql.conf, create hypertables, configure compression policies, manage chunk intervals, monitor background jobs.

Arc:

docker run -d -p 8000:8000 \
  -e STORAGE_BACKEND=local \
  -v arc-data:/app/data \
  ghcr.io/basekick-labs/arc:latest

Point it at S3 for production. That's it.

Reproduce It Yourself

Every claim in this post is reproducible. That's the whole point.

The Bottom Line

Same hardware, same benchmark: Arc is 4.6x faster than TimescaleDB on combined ClickBench score. On hot runs, 6.5x faster.

On ingestion: 17.3M rec/sec vs 1.81M rec/sec. 9.6x faster, with 10x smaller batches.

TimescaleDB's results are also on the lukewarm-run list tracked in ClickBench issue #793. When that gets fixed, the gap widens further.

Arc is open source, AGPL-3.0, single binary, stores your data in standard Parquet. Already running TimescaleDB? We'll help you migrate at no cost.

This is the second in our ClickBench series. We're comparing Arc against every major analytical database — not because we have anything against them, but because Arc deserves to be measured fairly, and you deserve to see the numbers. Next up: InfluxDB.

Get started with Arc →

https://github.com/Basekick-Labs/arc


Questions or challenges to the methodology? Open an issue on GitHub or find us on Discord. We want the numbers to be right.

Ready to handle billion-record workloads?

Deploy Arc in minutes. Own your data in Parquet. Use for analytics, observability, AI, IoT, or data warehousing.

Get Started →