We Got It Wrong: Why Arc Isn't a Time-Series Database

We launched Arc as a "time-series database for Industrial IoT."
That was wrong.
Not because Arc doesn't handle time-series data well. It does: 18 million records per second on ingest, 6 million rows per second on queries, DuckDB SQL, Parquet storage. The tech works great.
We were wrong about what Arc is.
What We Actually Built
Arc is a columnar analytical database built on DuckDB and Parquet.
Under the hood:
- DuckDB query engine — analytical workloads, not just time-series
- Parquet columnar format — efficient storage for any structured data
- Apache Arrow — in-memory columnar processing
- S3/Azure/local storage backends — object storage
None of that is time-series specific.
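For readers new to the term, "columnar" just means values are grouped by field rather than by record. A rough sketch (the field names are made up for illustration):

```python
# Row-oriented vs. columnar layout of the same three records.
# Columnar engines like DuckDB scan one field across many records at once,
# which is why analytical queries (filters, aggregates) run fast.
row_oriented = [
    {"device": "a", "ts": 1, "temp": 20.1},
    {"device": "b", "ts": 1, "temp": 19.8},
    {"device": "a", "ts": 2, "temp": 20.4},
]

columnar = {
    "device": ["a", "b", "a"],
    "ts":     [1, 1, 2],
    "temp":   [20.1, 19.8, 20.4],
}

# An aggregate only touches the column it needs, never the full records:
avg_temp = sum(columnar["temp"]) / len(columnar["temp"])
print(round(avg_temp, 2))  # 20.1
```

Nothing in that layout cares whether the data is sensor readings, click events, or log lines, which is the whole point.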
Arc handles:
- Product analytics (events, clickstreams, A/B tests)
- Observability (logs, metrics, traces)
- AI agent memory (conversations, context, RAG)
- IoT telemetry (sensors, devices, time-series data)
- Security logs (audit trails, SIEM)
- Data warehousing (analytics, BI, reporting)
We built a general-purpose analytical database and branded it as "time-series only."
That's like building a Ferrari and only driving it to the grocery store.
Why We Narrowed Too Much
When we started, we thought we needed extreme focus.
"Pick a niche. Own it. Don't try to be everything to everyone."
Good advice for some products. Terrible advice for Arc.
We came from the InfluxDB world. Time-series was what we knew. Industrial IoT seemed like a clear, defensible market. So we said: "Arc is for Industrial IoT."
Problem: That's not what Arc is. That's just one thing Arc can do.
The Reality Check
Six months building full-time. Zero revenue. Zero paying customers.
People would try Arc. They'd love the performance:
"Holy shit, 10 million records per second on a single instance?"
"This DuckDB SQL is amazing, way better than InfluxQL"
"Parquet on S3? Finally, no vendor lock-in"
Then they'd ask:
"Can Arc handle application logs?"
"Can Arc store product events?"
"Can we use Arc for AI agent memory?"
And we'd say: "Well... it's a time-series database, but technically yes..."
That's stupid.
Arc handles logs great. Arc handles events great. Arc handles AI memory great.
Just say that.
What People Actually Told Us
Here's what we heard from people testing Arc:
From a product analytics company:
"We're processing 500M user events per day. Arc ingests faster than ClickHouse and the SQL is way cleaner. But you market it as 'Industrial IoT' so we almost didn't try it."
From an observability startup:
"We need unified storage for logs, metrics, and traces. Arc does all three perfectly. Why do you only talk about time-series?"
From an AI agent company:
"Timescale just launched Memory Engine for agent memory. You built the same thing a month ago with Arc + Memtrace and it's open source. Why aren't you positioning it that way?"
We were solving real problems. We just weren't telling anyone.
What's Changing
New positioning:
Arc is a high-performance columnar analytical database. Use it for whatever analytical workload you have.
What's NOT changing:
The product.
Arc still does:
- 18M+ records/sec ingestion (MessagePack columnar format)
- 6M+ rows/sec queries (DuckDB Arrow IPC)
- 0.47ms p50 latency on analytical queries
- Parquet storage on S3/Azure/local
- Single Go binary deployment
- AGPL-3.0 open source
Nothing about the technology changes.
We're just being honest about what we built.
The SQL That Changed Our Mind
Here's the moment it clicked.
Someone asked: "Can Arc do this query?"
```sql
SELECT
    user_id,
    event_type,
    COUNT(*) OVER (
        PARTITION BY user_id
        ORDER BY timestamp
    ) AS event_sequence,
    LEAD(event_type) OVER (
        PARTITION BY user_id
        ORDER BY timestamp
    ) AS next_event
FROM analytics.events
WHERE timestamp > NOW() - INTERVAL '7 days'
  AND event_type IN ('page_view', 'add_to_cart', 'purchase')
ORDER BY user_id, timestamp;
```

That's a product analytics query. Window functions, event sequencing, funnel analysis.
We ran it. Billions of rows. Sub-second response.
Arc crushed it.
And we realized: we'd been telling people Arc can't do this. But it can. It does it better than most product analytics databases.
We were limiting ourselves for no reason.
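The pattern in that query isn't Arc-specific; any engine with SQL window functions can run it. A minimal, runnable sketch of the same shape using Python's stdlib sqlite3 (table and sample data invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id TEXT, event_type TEXT, ts INTEGER)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [
        ("u1", "page_view", 1),
        ("u1", "add_to_cart", 2),
        ("u1", "purchase", 3),
        ("u2", "page_view", 1),
    ],
)

# Same shape as the query above: a running event count plus each user's
# next event, computed with window functions.
result = conn.execute("""
    SELECT
        user_id,
        event_type,
        COUNT(*) OVER (PARTITION BY user_id ORDER BY ts) AS event_sequence,
        LEAD(event_type) OVER (PARTITION BY user_id ORDER BY ts) AS next_event
    FROM events
    ORDER BY user_id, ts
""").fetchall()

for row in result:
    print(row)
# ('u1', 'page_view', 1, 'add_to_cart')
# ('u1', 'add_to_cart', 2, 'purchase')
# ('u1', 'purchase', 3, None)
# ('u2', 'page_view', 1, None)
```

The difference is scale: SQLite answers this on a few rows, while the point of a columnar engine is answering it on billions.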
What This Means for You
If you're using Arc for time-series workloads:
Nothing changes. Keep doing what you're doing. Arc still handles time-series data better than InfluxDB, TimescaleDB, or QuestDB.
If you've been waiting to try Arc because "we're not in IoT":
Try it now. Arc works for:
- Product analytics (Mixpanel/Amplitude alternative)
- Observability (Datadog/New Relic backend)
- AI agent memory (Timescale Memory Engine alternative, but open source)
- Security logs (Splunk/Elastic alternative)
- Data warehousing (Snowflake alternative for event data)
If you're building something with high-throughput writes and need fast analytical queries:
Arc probably works. Just try it.
The Bigger Lesson
We made a classic mistake: We built what we knew (time-series) instead of what we actually built (analytical database).
The tech was there all along. DuckDB is an analytical engine. Parquet is a columnar format. Arrow is for analytical processing.
We just refused to admit it because "focus" felt safer than "broad."
But sometimes broad is focused.
Arc has one job: ingest data fast, query it fast, store it cheap.
Time-series, events, logs, analytics — doesn't matter. If it's columnar data with analytical queries, Arc handles it.
That's not unfocused. That's just honest.
Why Now?
Two things happened:
1. Timescale launched Memory Engine
Timescale (Postgres-based time-series DB) launched "Memory Engine" for AI agent memory two weeks ago.
We looked at it and realized: we built the same thing two months ago. Arc + Memtrace does AI agent memory. It's faster (18M+ writes/sec vs Postgres), it's open source (AGPL vs proprietary), and it uses portable Parquet files.
We just never told anyone because "Arc is for Industrial IoT."
2. Every demo request was outside IoT
Last month:
- Product analytics company: "Can Arc replace ClickHouse?"
- Observability startup: "Can Arc handle logs + metrics + traces?"
- AI company: "Can Arc store agent conversations?"
- Security company: "Can Arc do log analysis?"
The answer to every one was yes. All converted to testing. None were IoT.
The market was telling us what Arc is. We just weren't listening.
What We're Doing About It
Today, we're repositioning everything:
- GitHub: "Columnar analytical database" (not time-series)
- Website: Multiple use cases (analytics, observability, AI, IoT, logs)
- Docs: Broader examples (not just sensors and metrics)
- Messaging: "Use Arc for whatever you need"
We're also launching Arc Cloud (managed offering) in the next few weeks. Free beta. Low friction. Try Arc in 5 minutes without self-hosting.
And we're hiring a sales co-founder (20% equity, pure equity initially) to help us actually sell this thing. Because I'm a builder, not a closer.
Try Arc
One database. DuckDB SQL. Parquet storage. S3/Azure. Single binary.
Use it for:
- Product analytics
- Observability
- AI agent memory
- IoT telemetry
- Security logs
- Data warehousing
- Whatever
Stop asking if Arc can handle your use case. Just try it.
Links
- https://github.com/Basekick-Labs/arc
- Docs: docs.basekick.net/arc
- Discord
- Questions? hello@basekick.net
— Ignacio
Founder, Basekick Labs
Costa Rica
P.S. If you're a sales co-founder who can close $40K+ enterprise deals, we're hiring. 20% equity. Email me: ignacio[at]basekick[dot]net
Ready to handle billion-record workloads?
Deploy Arc in minutes. Own your data in Parquet. Use for analytics, observability, AI, IoT, or data warehousing.
