
Arc vs TimescaleDB

Arc is 4.6x faster on combined ClickBench score and 9.6x faster on ingestion, without PostgreSQL extension overhead, shard management, or compaction complexity.

  • 4.6x faster queries
  • 9.6x faster ingestion
  • No PostgreSQL extension dependencies

ClickBench Results

99.9M rows, 43 analytical queries. Arc's results are true cold runs: a service restart and OS cache flush before every query. Verify on benchmark.clickhouse.com →

Combined Score (lower is better)

| System | Machine | Score |
|---|---|---|
| Arc | c8g.metal-48xl | ×1.20 |
| Arc | c7a.metal-48xl | ×1.52 |
| Arc | c6a.4xlarge | ×2.24 |
| TimescaleDB | c6a.4xlarge | ×10.44 |

Cold Run (lower is better)

| System | Machine | Score |
|---|---|---|
| Arc | c8g.metal-48xl | ×1.71 |
| Arc | c6a.4xlarge | ×1.81 |
| TimescaleDB | c6a.4xlarge | ×12.36 |

Hot Run (lower is better)

| System | Machine | Score |
|---|---|---|
| Arc | c8g.metal-48xl | ×1.14 |
| Arc | c6a.4xlarge | ×3.15 |
| TimescaleDB | c6a.4xlarge | ×20.63 |

Ingestion Benchmark

Sustained 60-second ingestion load. Same machine, same record schema.

| System | Throughput | Batch size | Protocol |
|---|---|---|---|
| Arc | 17.3M rec/s | 1,000 rows | MessagePack columnar |
| TimescaleDB | 1.81M rec/s | 10,000 rows | PostgreSQL COPY |

Arc achieves ~9.6x higher throughput with 10x smaller batches.
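As a sanity check, the headline multiple reduces to plain division over the two throughput figures:

```python
# Throughput figures from the ingestion benchmark, in records/sec.
arc_rps = 17_300_000
tsdb_rps = 1_810_000

# The ~9.6x claim is just the ratio of the two.
print(round(arc_rps / tsdb_rps, 1))  # -> 9.6
```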

Why Arc Is Different: Under the Hood

TimescaleDB extends PostgreSQL for time-series workloads. Arc is built from the ground up for analytical scan workloads. The performance gap reflects that difference in design intent.

Storage Format

Portable Parquet vs. PostgreSQL heap files

Arc stores all data as standard Apache Parquet files in a time-partitioned path (db/measurement/YYYY/MM/DD/HH/), readable by any Parquet-compatible tool without going through Arc. TimescaleDB stores data in PostgreSQL heap files, row-oriented by default. An optional columnar compression policy (Hypercore) converts older chunks to a compressed columnar form, but even those chunks live inside PostgreSQL's storage layer, not as portable files you can take elsewhere.
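As a rough illustration of that layout (the helper below is ours, not Arc's API), a record's timestamp determines its Parquet directory down to the hour:

```python
from datetime import datetime, timezone

def partition_path(db: str, measurement: str, ts: datetime) -> str:
    # Arc's time-partitioned layout: db/measurement/YYYY/MM/DD/HH/
    # (illustrative helper; Arc builds these paths internally).
    return f"{db}/{measurement}/{ts:%Y/%m/%d/%H}/"

print(partition_path("metrics", "cpu", datetime(2025, 3, 7, 14, tzinfo=timezone.utc)))
# -> metrics/cpu/2025/03/07/14/
```

Because the path encodes the hour, any Parquet reader can prune whole directories by time range before opening a single file.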

Query Engine

Vectorized columnar vs. row-at-a-time PostgreSQL

Arc embeds DuckDB, a vectorized OLAP engine that processes data in 2,048-row vectors and reads only the columns a query touches. TimescaleDB uses PostgreSQL's query planner with TimescaleDB-specific hooks: hypertable constraint exclusion prunes irrelevant time-range chunks at plan time, and ChunkAppend parallelizes scans across chunks. These are meaningful optimizations, but the underlying executor is still PostgreSQL's row-at-a-time engine. That is the root cause of the 4.6x gap on ClickBench's aggregation-heavy query set.
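A back-of-envelope sketch of why column pruning alone matters (numbers are illustrative, not from the benchmark):

```python
# SELECT avg(col_a) over a wide table: a row store must scan every column's
# bytes for each row, while a columnar engine reads only col_a.
rows, columns, bytes_per_value = 100_000_000, 20, 8

row_store_scan = rows * columns * bytes_per_value  # all 20 columns touched
columnar_scan = rows * 1 * bytes_per_value         # only col_a touched

print(row_store_scan // columnar_scan)  # -> 20, i.e. 20x less I/O before
                                        # vectorized execution even kicks in
```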

Ingestion Protocol

17.3M rec/s at 1K batches vs. 1.81M rec/s at 10K

Arc accepts MessagePack binary columnar batches (17.3M rec/s at 1,000-row batches), InfluxDB Line Protocol for Telegraf compatibility, and bulk Parquet import. TimescaleDB ingests via the standard PostgreSQL wire protocol: INSERT statements or COPY for bulk loads. COPY bypasses the SQL parser and writes directly to heap pages, giving the best possible PostgreSQL throughput. At 10,000-row COPY batches, TimescaleDB reaches 1.81M rec/s, 9.6x slower than Arc at one-tenth the batch size.
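The columnar-batch idea can be sketched in a few lines (field names are examples; Arc's exact wire schema is not reproduced here):

```python
# Pivot row-oriented records into a column-oriented batch. Serializing this
# dict with msgpack.packb(...) yields the kind of compact binary payload a
# columnar ingestion endpoint consumes: one contiguous array per column.
rows = [
    {"ts": 1700000000, "host": "web-1", "cpu": 0.42},
    {"ts": 1700000001, "host": "web-2", "cpu": 0.58},
]
batch = {col: [r[col] for r in rows] for col in rows[0]}
print(batch["host"])  # -> ['web-1', 'web-2']
```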

Deployment Model

Single binary vs. PostgreSQL extension stack

Arc is a single Go binary with no external dependencies. TimescaleDB is a PostgreSQL extension loaded via shared_preload_libraries, requiring a running PostgreSQL server process, correct PostgreSQL version compatibility, and the TimescaleDB library on the same host. Every TimescaleDB deployment inherits the full PostgreSQL operational surface: vacuuming, WAL management, connection pooling, and upgrade compatibility between PostgreSQL major versions and TimescaleDB releases.

Feature Comparison

| Feature | Arc | TimescaleDB |
|---|---|---|
| Standard SQL | ✓ | ✓ |
| Portable Parquet storage | ✓ | ✗ |
| Open source (AGPL-3.0) | ✓ | ✗ |
| InfluxDB Line Protocol ingestion | ✓ | ✗ |
| Edge / single-binary deployment | ✓ | ✗ |
| Retention policies | ✓ | ✓ |
| No PostgreSQL dependency | ✓ | ✗ |

Frequently Asked Questions

How does Arc compare to TimescaleDB on ClickBench?

On the c6a.4xlarge machine, Arc records a combined score of ×2.24 vs TimescaleDB's ×10.44, approximately 4.6x faster. On hot runs, the gap widens: Arc ×3.15 vs TimescaleDB ×20.63. Full public results are on benchmark.clickhouse.com.

Do I need PostgreSQL to run Arc?

No. Arc is a standalone binary with no PostgreSQL dependency; it uses DuckDB for analytical SQL and Parquet for storage. TimescaleDB is a PostgreSQL extension, so every deployment carries PostgreSQL's row-oriented storage and its full operational surface: vacuuming, WAL management, and version compatibility between PostgreSQL and TimescaleDB releases.

Can I migrate from TimescaleDB to Arc?

Yes. Export your data from TimescaleDB via COPY or pg_dump, then load it into Arc via the HTTP API or MessagePack ingestion. If you use Telegraf for data collection, Arc supports InfluxDB Line Protocol: change the endpoint URL and you're done.
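For the Telegraf path, the switch is a one-line change in the InfluxDB output plugin (the endpoint below is an example; substitute your Arc host and port):

```toml
# telegraf.conf -- point the existing InfluxDB output at Arc instead.
[[outputs.influxdb]]
  urls = ["http://arc-host:8000"]   # example Arc Line Protocol endpoint
  database = "telegraf"
```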

Is Arc's data portable?

Yes. Arc stores all data as standard Parquet files. Query them with DuckDB, Spark, Snowflake, or any Parquet-compatible tool. TimescaleDB stores data in PostgreSQL heap format, which is not portable.

Pricing

Start free with open source. Scale with enterprise features when you need them.

Open Source

Free forever
AGPL-3.0 licensed
  • 18M records/sec ingestion
  • Full SQL query engine (DuckDB)
  • Parquet storage (S3, GCS, local)
  • Docker and Kubernetes ready
  • Community support (Discord)
Arc Cloud (New)

From $50/month

Managed hosting. No infrastructure. Free 30-day trial.

  • Deploy in 30 seconds
  • Dedicated physical servers
  • Daily backups to S3
  • Arc Enterprise included
  • No credit card required
Enterprise (Coming Q2 2026)

$5,000/year

Starting price for up to 8 cores. Clustering, RBAC, and dedicated support.

  • Everything in Open Source
  • Horizontal clustering and HA
  • Role-based access control (RBAC)
  • Tiered storage and auto-aggregation
  • Dedicated support and SLAs
View all plans ->

Enterprise Features

Clustering

Horizontal scaling with automatic data distribution. Query routing and load balancing across nodes.

Security

Fine-grained RBAC with database and table-level permissions. LDAP/SAML integration available.

Data Management

Automated retention policies, continuous queries for aggregation, and tiered storage for cost optimization.

Ready to handle billion-record workloads?

Deploy Arc in minutes. Own your data in Parquet. Use for analytics, observability, AI, IoT, or data warehousing.

Get Started ->