We Benchmarked Arc Against Every Major Log Database. The Results Were Brutal.

I'll be honest—this benchmark wasn't on our roadmap.
Over the past couple of weeks, something unexpected started happening. People began using Arc to store logs. Not metrics, not IoT sensor data, not racing telemetry—logs. We started getting emails, Discord messages, and GitHub issues asking about log ingestion performance, query patterns for structured logs, and how Arc compares to Elasticsearch or Loki.
And that caught us off guard. Observability was never our primary focus. Arc was built for industrial IoT—racing telemetry, mining sensors, smart city infrastructure. But our users kept pushing us in this direction, and honestly, when your users tell you what they want, you listen.
So we decided to find out: how does Arc actually perform as a log database? We put it head-to-head against the most popular log systems out there—Elasticsearch, Grafana Loki, ClickHouse, VictoriaLogs, and Quickwit. No cherry-picking queries. No favorable configurations. Just a 60-second sustained ingestion test and a suite of real-world queries on 50 million logs.
The results surprised even us.
Ingestion: 4.9 Million Logs Per Second
Here's the headline number. Arc ingested 4,905,141 logs per second sustained over 60 seconds, with sub-millisecond median latency (0.92ms p50). That's not a burst. That's sustained throughput.
| System | Logs/sec | Total Logs | vs Arc | p50 | p95 | p99 | p999 |
|---|---|---|---|---|---|---|---|
| Arc | 4,905,141 | 294.3M | — | 0.92ms | 2.94ms | 5.87ms | 14.59ms |
| VictoriaLogs | 2,262,593 | 135.8M | 2.2x slower | 0.86ms | 16.34ms | 22.90ms | 37.10ms |
| Loki | 1,135,568 | 68.1M | 4.3x slower | 4.38ms | 8.91ms | 25.36ms | 31.43ms |
| ClickHouse | 400,851 | 24.1M | 12.2x slower | 9.15ms | 28.85ms | 58.96ms | 1,084ms |
| Quickwit | 251,933 | 15.1M | 19.5x slower | 4.06ms | 93.49ms | 122.77ms | 265.55ms |
| Elasticsearch | 101,087 | 6.1M | 48.5x slower | 37.93ms | 78.88ms | 807.50ms | 2,288ms |
All systems ran for 60 seconds with 12 workers, 500 logs per batch, and 100% success rate (zero errors across the board).
Arc maintained a consistent 4.7–5.2M logs/sec throughout the entire test with no errors and minimal latency variance. ClickHouse and Elasticsearch both showed throughput degradation over time: ClickHouse due to background merge operations, and Elasticsearch from segment management overhead.
VictoriaLogs deserves a shoutout here. At 2.2M logs/sec, it held strong and was the closest competitor. Solid system.
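To make the load pattern concrete, here's a minimal Go sketch of the client side of this test: 12 workers, each POSTing 500-log batches for 60 seconds. The ingest endpoint path and JSON payload shape are placeholder assumptions for illustration, not Arc's documented API; the real client lives in the benchmark repo linked under "How We Tested."

```go
package main

import (
	"bytes"
	"encoding/json"
	"io"
	"log"
	"net/http"
	"sync"
	"time"
)

type logLine struct {
	Timestamp time.Time `json:"timestamp"`
	Level     string    `json:"level"`
	Service   string    `json:"service"`
	Message   string    `json:"message"`
}

// worker POSTs batches until the channel closes, reusing one HTTP client.
func worker(url string, batches <-chan []logLine, wg *sync.WaitGroup) {
	defer wg.Done()
	client := &http.Client{Timeout: 5 * time.Second}
	for batch := range batches {
		body, _ := json.Marshal(batch)
		resp, err := client.Post(url, "application/json", bytes.NewReader(body))
		if err != nil {
			log.Println("send error:", err)
			continue
		}
		io.Copy(io.Discard, resp.Body) // drain so the connection is reused
		resp.Body.Close()
	}
}

func main() {
	const ingestURL = "http://localhost:8000/api/v1/write" // placeholder endpoint
	batches := make(chan []logLine, 64)
	var wg sync.WaitGroup
	for i := 0; i < 12; i++ { // 12 workers, as in the benchmark
		wg.Add(1)
		go worker(ingestURL, batches, &wg)
	}
	deadline := time.Now().Add(60 * time.Second) // 60s sustained run
	for time.Now().Before(deadline) {
		batch := make([]logLine, 500) // 500 logs per request
		for j := range batch {
			batch[j] = logLine{time.Now(), "INFO", "api", "request served"}
		}
		batches <- batch
	}
	close(batches)
	wg.Wait()
}
```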
Query Performance: 50 Million Logs
Ingestion is only half the story. You need to actually query this stuff. So we loaded 50 million logs into each system and ran a suite of eight query patterns—from simple counts to full-text search to complex multi-condition filters.
Average Latency (ms)
| Query | Arc (JSON) | Arc (Arrow) | VictoriaLogs | ClickHouse | Elasticsearch | Quickwit | Loki* |
|---|---|---|---|---|---|---|---|
| Count All | 2.51 | 2.69 | 1.21 | 1.98 | 0.81 | 174.35 | 3,840 |
| Filter by Level | 8.12 | 7.71 | 2.87 | 122.23 | 17.53 | 1,085 | 26.11 |
| Filter by Service | 7.66 | 7.52 | 2.51 | 87.78 | 16.13 | 898.34 | 152.54 |
| Full-Text Search | 8.89 | 9.11 | 9.08 | 77.09 | 17.78 | 3,586 | 2,074 |
| Time Range (1hr) | 29.03 | 24.91 | 9.26 | 83.04 | 70.89 | 948.24 | 1,886 |
| Top 10 Aggregation | 18.28 | 18.56 | 157.90 | 54.00 | 1.14 | 175.04 | 3,572 |
| Complex Filter | 9.57 | 9.09 | 10.95 | 59.13 | 24.57 | 4,595 | 11.59 |
| Bulk SELECT (10K) | 32.68 | 29.10 | 9.74 | 39.54 | 53.23 | 173.00 | 1,869 |
*Loki only stored 1.36M of 50M logs—results not directly comparable.
p99 Latency (ms)
| Query | Arc (JSON) | Arc (Arrow) | VictoriaLogs | ClickHouse | Elasticsearch | Quickwit | Loki* |
|---|---|---|---|---|---|---|---|
| Count All | 2.68 | 2.86 | 1.25 | 7.24 | 1.23 | 175.35 | 3,868 |
| Filter by Level | 8.60 | 8.14 | 3.12 | 136.24 | 18.05 | 1,090 | 31.31 |
| Filter by Service | 7.84 | 7.78 | 3.24 | 134.71 | 16.48 | 902.10 | 157.72 |
| Full-Text Search | 9.26 | 10.01 | 23.29 | 86.25 | 18.38 | 3,590 | 2,084 |
| Time Range (1hr) | 29.68 | 25.33 | 9.58 | 103.75 | 77.75 | 951.93 | 1,913 |
| Top 10 Aggregation | 18.40 | 19.26 | 165.21 | 137.39 | 1.41 | 177.08 | 3,633 |
| Complex Filter | 10.14 | 9.34 | 11.31 | 89.91 | 26.75 | 4,599 | 12.16 |
| Bulk SELECT (10K) | 33.58 | 29.53 | 11.08 | 65.17 | 54.40 | 175.74 | 1,892 |
*Loki only stored 1.36M of 50M logs—results not directly comparable.
A few things stand out:
VictoriaLogs is fast at queries. Really fast. Across most query types, it was consistently the quickest. Credit where it's due—their query engine is well optimized for log workloads.
Elasticsearch is a mixed bag. It excelled at aggregations (1.14ms for Top 10—fastest of the bunch) but struggled on range queries and bulk retrievals. The inverted index shines for some patterns and hurts for others.
Arc's Arrow IPC format helps. For data-heavy queries (time range, bulk select), switching from JSON to Arrow IPC gave a 5–14% improvement. Not massive, but it adds up when you're moving large result sets.
Arc is competitive across the board. We're not the fastest on every single query type, but we're consistently in the mix—and we're doing this with standard SQL on DuckDB, not a specialized log query engine. That matters when you want to join your logs with metrics, traces, and events in one query.
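To show what those last two points look like together, here's a hedged sketch: a plain SQL query (joining a logs table against a metrics table) POSTed over HTTP, with the response consumed as an Arrow IPC stream via the official Apache Arrow Go library. The endpoint, request shape, Accept header, and table names are illustrative assumptions, not Arc's documented API.

```go
package main

import (
	"fmt"
	"net/http"
	"strings"

	"github.com/apache/arrow/go/v15/arrow/ipc"
)

// Illustrative SQL: correlate error logs with host CPU metrics in one query.
const query = `
SELECT l.service, count(*) AS errors, avg(m.cpu_pct) AS avg_cpu
FROM logs l
JOIN metrics m ON l.hostname = m.hostname
WHERE l.level = 'ERROR'
GROUP BY l.service
ORDER BY errors DESC
LIMIT 10`

func main() {
	// Hypothetical query endpoint; adjust to the real Arc API.
	req, err := http.NewRequest("POST", "http://localhost:8000/query",
		strings.NewReader(query))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Accept", "application/vnd.apache.arrow.stream") // assumed content negotiation
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	rdr, err := ipc.NewReader(resp.Body) // decode the Arrow IPC stream
	if err != nil {
		panic(err)
	}
	defer rdr.Release()
	for rdr.Next() { // one arrow.Record per batch, no row-by-row JSON parsing
		rec := rdr.Record()
		fmt.Printf("batch: %d rows x %d cols\n", rec.NumRows(), rec.NumCols())
	}
}
```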
The Loki Problem
Now, here's where things got interesting. Loki reported 1.1 million logs/sec ingestion, which sounds respectable. But when we went to query the data, we found only 1.36 million logs out of the 50 million we sent.
Loki silently dropped roughly 97% of our logs while returning HTTP 204 (success) responses.
We're not calling this a bug—Loki's architecture is designed around specific constraints (like the out-of-order rejection and per-stream rate limits). But if you're evaluating Loki for high-throughput log ingestion, be aware that the numbers you see at the API level may not reflect what actually gets stored. We excluded Loki from the query comparison because the dataset sizes weren't comparable.
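The practical lesson: when you benchmark ingestion, verify what was stored instead of trusting 2xx responses. A trivial check like the sketch below would have flagged the discrepancy immediately; `countStored` is a placeholder you'd implement with each system's own count query (a `SELECT count(*)` in Arc or ClickHouse, `count_over_time` in LogQL, and so on).

```go
package main

import "fmt"

// verify compares the number of logs sent against what the store reports.
// countStored is a placeholder for a system-specific count query.
func verify(sent int64, countStored func() (int64, error)) error {
	stored, err := countStored()
	if err != nil {
		return err
	}
	if stored < sent {
		return fmt.Errorf("silent data loss: sent %d, stored %d (%.1f%% kept)",
			sent, stored, 100*float64(stored)/float64(sent))
	}
	return nil
}

func main() {
	// Using the numbers from this post's Loki run.
	err := verify(50_000_000, func() (int64, error) { return 1_360_000, nil })
	fmt.Println(err) // silent data loss: sent 50000000, stored 1360000 (2.7% kept)
}
```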
How We Tested
Transparency matters, so here's the setup:
- Hardware: Apple M3 Max, 14 cores, 16GB RAM, 1TB disk
- Configuration: All systems running with default settings—no fine-tuning on Arc or any of the competitors. Out of the box, as you'd get them on first install.
- Test duration: 60 seconds sustained ingestion
- Batch size: 500 logs per request
- Log format: Realistic web API logs with structured fields (timestamp, level, service, hostname, HTTP method, path, status code, latency, request/trace IDs, user IDs, messages)
- Level distribution: 60% INFO, 25% DEBUG, 10% WARN, 4% ERROR, 1% FATAL, mirroring real production patterns (a generator sketch follows this list)
- Query dataset: 50 million logs loaded into each system
- Query iterations: 5 runs per query (after warmup), reporting averages
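To make the log shape concrete, here's a minimal Go sketch of a generator producing that level distribution. Field names are illustrative; the exact schema lives in the benchmark source linked below.

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// benchLog mirrors the structured web-API fields described above
// (illustrative names, not the benchmark's exact schema).
type benchLog struct {
	Timestamp time.Time
	Level     string
	Service   string
	Method    string
	Path      string
	Status    int
	LatencyMS float64
	TraceID   string
	Message   string
}

// pickLevel follows the benchmark's distribution:
// 60% INFO, 25% DEBUG, 10% WARN, 4% ERROR, 1% FATAL.
func pickLevel(r *rand.Rand) string {
	switch n := r.Intn(100); {
	case n < 60:
		return "INFO"
	case n < 85:
		return "DEBUG"
	case n < 95:
		return "WARN"
	case n < 99:
		return "ERROR"
	default:
		return "FATAL"
	}
}

func main() {
	r := rand.New(rand.NewSource(time.Now().UnixNano()))
	l := benchLog{
		Timestamp: time.Now(),
		Level:     pickLevel(r),
		Service:   "checkout-api",
		Method:    "GET",
		Path:      "/v1/orders",
		Status:    200,
		LatencyMS: 12.4,
		TraceID:   fmt.Sprintf("%016x", r.Uint64()),
		Message:   "request completed",
	}
	fmt.Printf("%+v\n", l)
}
```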
The benchmark code is fully open source:
- https://github.com/Basekick-Labs/arc/blob/main/benchmarks/log_bench/main.go
- https://github.com/Basekick-Labs/arc/blob/main/benchmarks/query_suite/main.go
- https://github.com/Basekick-Labs/arc/blob/main/benchmarks/log_bench/RESULTS.md
Run them yourself. We welcome it.
What This Means
Arc wasn't originally built as a "log database." It was built as a high-performance time-series engine for industrial IoT workloads—racing telemetry, mining sensors, smart city infrastructure. But logs are time-series data too. They have timestamps, they arrive in high volumes, and you need to query them fast.
The fact that Arc can ingest logs at 4.9M/sec while also giving you full SQL query capabilities (joins across databases, window functions, CTEs) makes it a compelling option if you're tired of maintaining separate systems for metrics and logs.
One database. One query language. One storage format.
Try It
```bash
docker run -d \
  -p 8000:8000 \
  -e STORAGE_BACKEND=local \
  -v arc-data:/app/data \
  ghcr.io/basekick-labs/arc:latest
```

Check out the repo at https://github.com/Basekick-Labs/arc, run the benchmarks yourself, and let us know what you find.
Questions? Reach out at enterprise@basekick.net or join us on Discord.
Ready to handle billion-record workloads?
Deploy Arc in minutes. Own your data in Parquet.