Built for Billion-Record Industrial Workloads
Racing telemetry. Smart cities. Mining sensors. Medical devices.
When you have billions of sensor readings and milliseconds matter, Arc delivers.
docker run -d -p 8000:8000 \
  -e STORAGE_BACKEND=local \
  -v arc-data:/app/data \
  ghcr.io/basekick-labs/arc:latest

Single binary. No dependencies. Production-ready in minutes.
Industrial IoT Generates Massive Data
100M+ sensor readings in a single race. 10B infrastructure events daily in a smart city.
Traditional time-series databases can't keep up. Arc can.
Real Industrial IoT Scale:
Race telemetry: 100M+ sensor readings in 3 hours
Smart city: 10B infrastructure sensor events daily
Mining operation: Billions of equipment telemetry points
Medical devices: Real-time patient monitoring at scale
Built for billion-record workloads.
DuckDB SQL. No Proprietary Query Language.
Not a custom DSL. Not a query language that changes every major version.
Not vendor lock-in through proprietary syntax.
Just DuckDB-powered SQL with window functions, CTEs, and joins.
SELECT
    device_id,
    facility_name,
    AVG(AVG(temperature)) OVER (  -- moving average over the per-group averages
        PARTITION BY device_id
        ORDER BY timestamp
        ROWS BETWEEN 10 PRECEDING AND CURRENT ROW
    ) AS temp_moving_avg,
    MAX(pressure) AS peak_pressure,
    STDDEV(vibration) AS vibration_variance
FROM iot_sensors
WHERE timestamp > NOW() - INTERVAL '24 hours'
  AND facility_id IN ('mining_site_42', 'plant_7')
GROUP BY device_id, facility_name, timestamp
HAVING MAX(pressure) > 850 OR STDDEV(vibration) > 2.5;

If you know SQL, you know Arc. Powered by DuckDB.
Window Functions
Moving averages, ranking, and complex aggregations built-in
CTEs & Subqueries
Break down complex analysis into readable, composable parts
JOINs Across Sensors
Correlate temperature, pressure, vibration data across devices
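For example, a CTE can isolate the devices running hot while a join correlates them with their vibration readings. A minimal sketch; the sensor tables, columns, and thresholds are illustrative, not part of Arc's schema:

-- Flag devices whose vibration spikes while they run hot,
-- joining two sensor streams through a CTE.
WITH hot_devices AS (
    SELECT device_id, AVG(temperature) AS avg_temp
    FROM temperature_sensors
    WHERE timestamp > NOW() - INTERVAL '1 hour'
    GROUP BY device_id
    HAVING AVG(temperature) > 75
)
SELECT v.device_id, h.avg_temp, MAX(v.vibration) AS peak_vibration
FROM vibration_sensors AS v
JOIN hot_devices AS h ON v.device_id = h.device_id
WHERE v.timestamp > NOW() - INTERVAL '1 hour'
GROUP BY v.device_id, h.avg_temp;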
Parquet files you actually own
Your data lives in standard Parquet files on S3, MinIO, or local disk.
Arc disappears tomorrow? You still own your data.
Query it with DuckDB, ClickHouse, Snowflake, or any tool that reads Parquet.
This is what "portable data" actually means.
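For instance, stock DuckDB can read Arc's files with nothing else installed. A minimal sketch, assuming a local data directory (the glob path is illustrative):

-- Query Arc's Parquet files with plain DuckDB, no Arc process required.
SELECT device_id, AVG(temperature) AS avg_temp
FROM read_parquet('./data/default/iot_sensors/**/*.parquet')
GROUP BY device_id;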
S3 / MinIO / Local
Store anywhere you want
Standard Parquet
Industry standard format
You Own It
No vendor lock-in, ever
9.47M records/sec sustained
High-throughput metrics ingestion via MessagePack columnar format.
No degradation. No memory leaks. Just stable performance.
Hardware: M3 Max 14-core, 36GB RAM • Workers: 35 • Duration: 60 seconds • Success rate: 100%
p50: 2.79ms • p95: 4.66ms • p99: 6.11ms
See Arc in Action
Live demos running on Arc. Real data, real-time ingestion, sub-second queries. All powered by Arc's high-performance time-series engine.
🚢 Vessel Tracking
Real-time AIS data from vessels in the Strait of Malacca near Singapore. Live position updates every 30 seconds.
✈️ Flight Tracking
Live ADS-B data tracking aircraft over New York City. Real-time altitude, speed, and position updates.
🌦️ Weather Tracking
Multi-city weather monitoring across Buenos Aires, London, and Tokyo. Temperature, humidity, and pressure trends.
📊 System Monitoring
Real-time Docker container metrics for the Arc database. CPU, memory, network, and disk I/O tracking.
Migrate from your existing time-series database
We help with migration at no cost.
Already running InfluxDB?
Arc speaks InfluxDB Line Protocol natively.
Point Telegraf at Arc. Dual-write during migration. Cut over when ready.
No agent changes. No downtime. No data loss.
Already running TimescaleDB?
We'll help you migrate at no cost.
Keep your SQL queries. Arc uses standard DuckDB SQL with window functions, CTEs, and joins.
Own your data in Parquet. No vendor lock-in. Query with any tool. 10-50x faster queries after compaction.
Already running QuestDB?
We'll help you migrate at no cost.
Broader SQL support. Full window functions, CTEs, complex joins - features QuestDB doesn't support.
Better ecosystem integration. Native Grafana datasource, VSCode extension, Apache Superset dialect.
Features That Matter for IoT
Grafana Integration
Official Grafana datasource plugin. Build dashboards for sensor data, equipment telemetry, and facility monitoring. Azure AD OAuth support.
Setup guide →
VSCode Extension
Full-featured database manager. Query editor with autocomplete. Notebooks for analysis. CSV import. Alerting.
Install from marketplace →
Automatic Compaction
Small files merge into optimized 512MB Parquet files. 10-50x faster queries with zero configuration.
Retention Policies
Time-based lifecycle management. Keep 7 days of raw data, 90 days of rollups, 2 years of aggregates.
Or keep everything forever - storage is cheap.
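The rollups themselves are ordinary SQL. A sketch of what a five-minute rollup might aggregate, using DuckDB's time_bucket (the bucket width and table are illustrative; retention policies themselves are configured in Arc, not written as queries):

-- Downsample raw readings into 5-minute buckets for long-term retention.
SELECT
    device_id,
    time_bucket(INTERVAL '5 minutes', timestamp) AS bucket,
    AVG(temperature) AS avg_temp,
    MAX(pressure) AS max_pressure
FROM iot_sensors
GROUP BY device_id, bucket
ORDER BY bucket;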
GDPR-Compliant Deletion
Precise record deletion with file rewrites. No tombstones. No query overhead.
Write-Ahead Log (WAL)
Optional durability for zero data loss. Disabled by default for maximum throughput.
Enable when durability matters more than speed.
Multi-Database Architecture
Organize by facility, device type, or environment. Isolated namespaces for multi-tenant deployments. Query across databases with standard SQL joins.
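A cross-database join might look like the following; plant_7 and plant_8 are hypothetical database names, and the sketch assumes each Arc database is addressable by name in SQL:

-- Compare the same device fleet across two facility databases.
SELECT p.device_id,
       p.avg_temp AS plant_7_temp,
       s.avg_temp AS plant_8_temp
FROM (
    SELECT device_id, AVG(temperature) AS avg_temp
    FROM plant_7.iot_sensors
    GROUP BY device_id
) AS p
JOIN (
    SELECT device_id, AVG(temperature) AS avg_temp
    FROM plant_8.iot_sensors
    GROUP BY device_id
) AS s ON p.device_id = s.device_id;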
Apache Superset Integration
Native dialect for BI dashboards. Connect your existing visualization tools.
See all features →
Trusted by Industrial Teams
Environmental Monitoring - Migrated from TimescaleDB. 10x faster queries on CO2 sensor data from vehicle-mounted devices.
Fleet Tracking - Processing billions of GPS tracking events. Arc handles their scale effortlessly.
Clinical IoT - Real-time patient monitoring across multiple facilities. Arc delivers the performance they need.
FAQ
Is Arc production-ready?
Yes. Version 25.12.1 is stable and ready for production use.
Self-hosted Arc is production-ready now. The cloud managed version launches Q1 2026.
How does Arc compare to ClickHouse?
ClickHouse wins on raw analytical performance. Arc wins on operational simplicity and data portability.
- ClickHouse is a distributed system requiring cluster management; Arc runs on a single node with object storage (S3/MinIO).
- ClickHouse uses the proprietary MergeTree format; Arc uses standard Parquet files.
How does Arc compare to InfluxDB?
- 12x faster ingestion via the MessagePack columnar protocol vs InfluxDB Line Protocol.
- Portable Parquet files vs the proprietary TSM format; query with any tool.
- Standard SQL vs Flux (which InfluxData deprecated).
- InfluxDB 1.x Enterprise customers are stuck on deprecated software; Arc provides a migration path with Line Protocol compatibility.
What does storage cost?
- Parquet compression gives a 3-5x reduction vs raw sensor data.
- S3 storage costs ~$0.023/GB/month, so 1TB of sensor data runs ~$23/month.
- Significantly cheaper than proprietary IoT platforms.
- Built-in retention policies automatically delete old data, or keep everything: storage is cheap and queries stay fast.
Does Arc support high availability?
Arc is single-node today; HA and clustering are planned on a 6-12 month timeline.
For now, run a primary plus a standby with object storage replication, and enable the WAL for zero data loss during failover.
What authentication does Arc support?
API token authentication today; the Grafana integration supports Azure AD OAuth.
RBAC, SSO, and multi-tenancy are planned for the enterprise release.
How do I migrate from InfluxDB?
1. Point Telegraf at Arc (Line Protocol compatible)
2. Dual-write to both systems during the transition
3. Verify data in Arc matches InfluxDB (see the verification sketch below)
4. Update Grafana dashboards to the Arc data source
5. Cut over when ready
Migration tooling and documentation: Migration guide →
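One way to spot-check step 3 is to run the same hourly aggregate on both systems and diff the results. A sketch of the Arc side, reusing the cpu measurement from the quickstart (the time range is illustrative):

-- Compare per-hour row counts and value ranges against the equivalent
-- query on InfluxDB; mismatched buckets point to gaps in the dual-write.
SELECT
    date_trunc('hour', time) AS hour,
    COUNT(*) AS row_count,
    MIN(value) AS min_value,
    MAX(value) AS max_value
FROM cpu
WHERE time > NOW() - INTERVAL '24 hours'
GROUP BY hour
ORDER BY hour;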
Pricing
Self-Hosted
- Run on Docker, Kubernetes, or bare metal
- Local filesystem, MinIO, S3, GCS, or any S3-compatible storage
- All features included
- Community support
Arc Cloud
Managed hosting with automatic backups, monitoring, and support.
- Automatic backups and disaster recovery
- High availability and clustering
- Monitoring and alerting
- Dedicated support
Early access for industrial IoT customers.
Get Started
Self-Hosted Installation
Run Arc:

docker run -d -p 8000:8000 \
  -e STORAGE_BACKEND=local \
  -v arc-data:/app/data \
  ghcr.io/basekick-labs/arc:latest

Write data (InfluxDB Line Protocol):

curl -X POST http://localhost:8000/api/v1/write/line-protocol \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: text/plain" \
  -H "x-arc-database: default" \
  --data-binary "cpu,host=server01 value=0.64 1633024800000000000"

Write data (MessagePack columnar):

echo '{"m":"cpu","columns":{"time":[1633024800000],"host":["server01"],"usage":[95.0]}}' | \
  python3 -c "import sys,msgpack,json; sys.stdout.buffer.write(msgpack.packb(json.load(sys.stdin)))" | \
  curl -X POST http://localhost:8000/api/v1/write/msgpack \
    -H "Authorization: Bearer YOUR_TOKEN" \
    -H "Content-Type: application/msgpack" \
    -H "x-arc-database: default" \
    --data-binary @-

Query with SQL:

curl -X POST http://localhost:8000/api/v1/query \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -H "x-arc-database: default" \
  -d '{"sql":"SELECT * FROM cpu WHERE time > NOW() - INTERVAL 1 HOUR","format":"json"}'

Arc Cloud Waitlist
Join the waitlist for managed hosting.
Early access members get preferential pricing.