
5 InfluxDB Alternatives in 2026: An Honest Comparison

#InfluxDB #time-series #databases #comparison #migration #TimescaleDB #VictoriaMetrics #QuestDB #GreptimeDB #Arc #alternatives

InfluxDB has been the default choice for time-series data for almost a decade. If you ran metrics, telemetry, or IoT workloads anytime between 2015 and 2022, you probably ran InfluxDB. Many teams still do.

But the landscape changed. InfluxDB rewrote its storage engine: TSM/TSI in v1 and v2, then IOx (built on Apache Arrow, DataFusion, and Parquet) in v3. Each major version came with new query languages, new licensing terms, and new product structures. v3 Core finally reached general availability in April 2025, roughly nine years after v1.0 GA in 2016. That's a long incubation period for teams that just need a reliable database for their sensor data.

I worked at InfluxData. I watched this evolution from inside, and now I watch it from outside. The architectural choices in v3 (SQL-based engine, columnar storage, object-storage backed) are good ones. They validate where the industry has moved. But for teams already feeling the pain of v1 or v2 deployments, the question isn't whether v3 is better than v2. It's whether the upgrade path is worth it, or whether it's time to evaluate alternatives.

This post covers five of those alternatives. Some are similar to InfluxDB. Some are very different. None of them are perfect. I'll be specific about where each one wins and where it falls short, including for Arc, which I built and which is the fifth option on this list.

Why Teams Leave InfluxDB

Before getting into alternatives, it's worth being honest about why teams evaluate alternatives in the first place. The reasons I see most often:

Architectural churn. Three major versions across roughly nine years, with different storage engines and different query languages (InfluxQL, then Flux, then SQL via DataFusion in v3). Teams that bet on v2 and Flux feel stranded. Teams on v1 face an aging codebase. The v3 migration path is real but non-trivial.

Performance ceilings at scale. InfluxDB v3 is significantly faster than v2 for analytical queries. But teams pushing past tens of millions of series, or doing heavy joins across measurements, still hit walls. The tag-and-field data model that worked for IoT metrics doesn't fit cleanly when you need to correlate metrics with logs, traces, and events.

Cost at scale. InfluxDB Cloud's pricing scales with cardinality and ingest volume in ways that surprise teams as their workloads grow. Self-hosted InfluxDB Enterprise pricing for clustering and HA is not public, but it's not cheap.

Licensing and feature gates. The community version doesn't include clustering, HA, or some operational features. The line between "open source" and "you should really pay for Cloud or Enterprise" has moved several times.

Query language fragmentation. InfluxQL, Flux, SQL: most production InfluxDB deployments have queries written in at least two of these. Migrating to a SQL-first alternative often simplifies the codebase before you even consider the storage engine.

Whatever your reason, here are five databases worth evaluating.

What to Look For in an Alternative

Before listing options, it's worth naming the criteria. Different teams will weight these differently:

  1. Query language. SQL, PromQL, custom DSL, or a mix. Familiar wins.
  2. Data model. Pure time-series (tag/field), wide events, full relational, or a unified observability schema covering metrics, logs, and traces.
  3. Storage architecture. Local disk, object storage, or hybrid. Object storage scales cheaper; local disk is faster for hot data.
  4. Operational complexity. Single binary vs. clustered with operators vs. external dependencies (Postgres, Kafka, etc.).
  5. License. Apache 2.0, AGPL, source-available, or proprietary. Affects whether competitors can fork your stack and whether you have legal risk.
  6. Migration path from InfluxDB. Line Protocol compatibility, Telegraf support, schema mapping tools.
  7. Maturity and community. Production deployments, contributor count, support quality.
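
Criterion 6 is concrete enough to show in code. Below is a minimal Python sketch of the InfluxDB Line Protocol wire format that QuestDB, GreptimeDB, and Arc all accept for ingestion. It covers the common escaping rules (spaces, commas, equals signs in tags) and field type suffixes, but it's a simplification of the full grammar, not a production serializer:

```python
# Sketch: serializing one data point as InfluxDB Line Protocol.
# Simplified -- see the Line Protocol reference for the full grammar.

def escape_tag(value: str) -> str:
    # Tag keys and values escape commas, equals signs, and spaces
    return value.replace(",", r"\,").replace("=", r"\=").replace(" ", r"\ ")

def fmt_field(key, value):
    if isinstance(value, bool):          # bool before int: True is an int in Python
        return f"{key}={str(value).lower()}"
    if isinstance(value, int):
        return f"{key}={value}i"         # integers carry an 'i' suffix
    if isinstance(value, str):
        return f'{key}="{value}"'
    return f"{key}={value}"              # floats are bare

def to_line_protocol(measurement, tags, fields, ts_ns):
    tag_part = ",".join(f"{escape_tag(k)}={escape_tag(v)}"
                        for k, v in sorted(tags.items()))
    field_part = ",".join(fmt_field(k, v) for k, v in fields.items())
    return f"{measurement},{tag_part} {field_part} {ts_ns}"

print(to_line_protocol("cpu", {"host": "web-01", "region": "us-east"},
                       {"usage": 12.5, "cores": 8}, 1700000000000000000))
# cpu,host=web-01,region=us-east usage=12.5,cores=8i 1700000000000000000
```

If a candidate database accepts this format natively, your existing agents keep working; if not, every writer in your fleet needs a new output plugin.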

With those in mind:

1. TimescaleDB (Tiger Data)

TimescaleDB is a PostgreSQL extension that adds time-series capabilities to a standard Postgres database. The company rebranded to Tiger Data in June 2025, but the open source extension is still called TimescaleDB.

What it's best at. If your team is already running Postgres, TimescaleDB is the lowest-friction path to better time-series support. Hypertables transparently partition data by time. Continuous aggregates pre-compute rollups. The ecosystem is enormous: every Postgres tool, ORM, and client library works.

For teams that need both transactional and analytical workloads in the same database (e.g., user account data alongside event telemetry), TimescaleDB's Postgres foundation is a real advantage. You're not running two databases.

Where it falls short. TimescaleDB now ships Hypercore, a hybrid row-and-columnstore engine. Hot data lives in row format for fast writes and updates; cooled data is automatically migrated to a compressed columnstore optimized for analytical queries. Some aggregate operators have vectorized execution paths over the columnstore. So the original critique that "everything decompresses back to rows" is no longer fully accurate. It's a real hybrid engine now.

That said, the executor is still primarily Postgres's row-based engine. Vectorized paths exist for specific operators, not the whole query plan. On heavy aggregation workloads over very large datasets, you can feel the gap against engines that are columnar end-to-end (DuckDB, ClickHouse, Arc). We measured this directly: on ClickBench, Arc was 4.6x faster than TimescaleDB on combined score and 9.6x faster on ingestion. Updates and deletes on the columnstore are supported and have improved dramatically (v2.11 added DML on compressed chunks, v2.16 brought >1000x speedups on update/delete by avoiding unnecessary decompression, v2.21 added batch-level deletes for further gains). Heavy backfills can still be more expensive than on a row-store, but the "compressed chunks are read-only" framing that was true a few years ago doesn't reflect the current engine.

On Tiger Cloud, pricing scales with the infrastructure you provision (compute and storage) rather than with cardinality or ingest volume.

License. Dual-licensed. The Apache 2.0 edition includes core hypertable functionality. The features most teams actually want (Hypercore for compression and columnar storage, continuous aggregates, data retention policies, advanced hyperfunctions) are under the Timescale License (TSL), source-available but not OSI-open-source. TSL prohibits offering TimescaleDB as a managed service. Self-managed deployments using TSL features are free.

InfluxDB migration path. No drop-in Line Protocol support. Tiger Data publishes outflux (https://github.com/timescale/outflux) for migrations, but it only supports InfluxDB v1.x; v2 and v3 require custom ETL or Telegraf-based pipelines. Expect schema redesign work mapping measurements, tags, and fields into Postgres tables.
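
To make that schema redesign concrete, here's a Python sketch of one possible measurement-to-hypertable mapping: tags become text columns, fields become typed columns. The layout and type mapping are illustrative choices (not outflux's behavior); `create_hypertable()` is TimescaleDB's real partitioning call, but real migrations usually need per-measurement decisions:

```python
# Sketch: mapping an InfluxDB measurement (tags + fields) onto a
# TimescaleDB hypertable. Type mapping and column layout are illustrative.

INFLUX_TO_PG = {"float": "double precision", "integer": "bigint",
                "string": "text", "boolean": "boolean"}

def measurement_to_ddl(measurement, tags, fields):
    cols = ['"time" timestamptz NOT NULL']
    cols += [f'"{t}" text' for t in tags]                 # tags -> text columns
    cols += [f'"{name}" {INFLUX_TO_PG[ftype]}'            # fields -> typed columns
             for name, ftype in fields.items()]
    ddl = f'CREATE TABLE "{measurement}" (\n  ' + ",\n  ".join(cols) + "\n);"
    # create_hypertable() is TimescaleDB's time-partitioning call
    ddl += f"\nSELECT create_hypertable('{measurement}', 'time');"
    return ddl

print(measurement_to_ddl("cpu", ["host", "region"],
                         {"usage_user": "float", "uptime": "integer"}))
```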

Best for. Teams that already use Postgres heavily, value SQL above all else, and have time-series workloads that fit comfortably below 100M rows per hypertable.

2. VictoriaMetrics

VictoriaMetrics is a Prometheus-compatible metrics database written in Go. It ships as a single binary or as a clustered version with separate components for ingestion, storage, and querying.

What it's best at. If you run Prometheus and you're hitting its retention or scale limits, VictoriaMetrics is the cleanest replacement. It speaks PromQL. It accepts Prometheus remote-write. It works with your existing Grafana dashboards and Alertmanager rules without modification.

Compared to InfluxDB, VictoriaMetrics is dramatically simpler operationally. The single-binary deployment is genuinely single-binary. The clustered version is more components but still much simpler than running an InfluxDB Enterprise cluster. Compression is excellent: typical observability workloads compress 10x or better.

Where it falls short. VictoriaMetrics itself is a metrics database. It doesn't handle logs or traces directly. The company addresses that with sister products: VictoriaLogs for log ingestion and querying, and VictoriaTraces (open source, OTLP-compatible, v0.7.0 as of January 2026) for distributed traces. All three are separate products with separate query interfaces. If your goal is one engine for everything, VictoriaMetrics is three engines from one vendor instead.

PromQL is powerful for metrics but doesn't have the analytical depth of SQL. Joins, window functions, complex aggregations across dimensions: you'll find yourself wishing for SQL on workloads that grow past pure metrics.

License. Apache 2.0. Enterprise version exists with additional features (downsampling, advanced multi-tenant management, audit logs). Basic multi-tenancy is in the OSS cluster version.

InfluxDB migration path. vmctl is a real migration tool that reads from InfluxDB v1.x and writes to VictoriaMetrics. InfluxDB v2.x is not officially supported by vmctl and requires third-party tooling. Expect to rewrite Flux queries to PromQL.

Best for. Teams that want to replace Prometheus and InfluxDB with one stack for metrics, who are happy with PromQL, and who are fine with VictoriaLogs and VictoriaTraces as separate products for the rest.

3. QuestDB

QuestDB is a purpose-built time-series database written primarily in zero-GC Java and C++, with a custom storage engine. It positions itself for high-frequency workloads: financial markets, IoT telemetry, real-time analytics on rapidly arriving data.

What it's best at. Raw ingestion speed for simple time-series workloads is genuinely fast. The Influx Line Protocol-compatible ingestion endpoint means existing InfluxDB writers can point at QuestDB with minimal changes, and Telegraf works via the InfluxDB output plugin. SQL is supported with time-series extensions like SAMPLE BY and LATEST ON, which are pleasant to use for windowing and last-known-value queries.
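
LATEST ON is easiest to understand by its semantics: the most recent row per series key (in QuestDB, roughly `SELECT * FROM cpu LATEST ON ts PARTITION BY host`). A minimal Python sketch of the same idea over in-memory rows, with made-up sample data:

```python
# Sketch: the semantics of QuestDB's LATEST ON -- last row per series key.
# Sample data is invented for illustration.

rows = [
    {"ts": 1, "host": "web-01", "usage": 10.0},
    {"ts": 2, "host": "web-02", "usage": 55.0},
    {"ts": 3, "host": "web-01", "usage": 12.5},
]

def latest_on(rows, key, ts="ts"):
    latest = {}
    for row in sorted(rows, key=lambda r: r[ts]):
        latest[row[key]] = row          # later timestamps overwrite earlier
    return latest

result = latest_on(rows, "host")
print(result["web-01"]["usage"])  # 12.5
```

This "last known value per series" shape is exactly the query that's awkward in vanilla SQL and pleasant in QuestDB.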

Where it falls short. Analytical depth is limited. Joins are supported but expensive. Subqueries and complex CTEs work but aren't where the engine shines. If your queries go beyond SELECT ... GROUP BY time, dimension, you may find QuestDB feels constrained.

The benchmark situation deserves a footnote. QuestDB publishes very strong ClickBench numbers. ClickBench's own README distinguishes between "lukewarm cold runs" (only OS page cache cleared) and true cold runs (database restart required), and explicitly encourages migration to true cold runs because lukewarm runs benefit databases that maintain extensive internal caches. ClickBench is currently in the middle of that migration: https://github.com/ClickHouse/ClickBench/issues/793 tracks 86 systems (including QuestDB, ClickHouse, DuckDB, Postgres, Spark, and many others) that haven't yet moved to true cold runs. The broader public dispute is documented in https://github.com/ClickHouse/ClickHouse/issues/38109. For context, we wrote up our own true-cold-run methodology and results against ClickHouse. Run your own benchmarks on your own workloads before relying on published numbers from anyone, including us.

The community is smaller than InfluxDB's or Timescale's, though active. The managed self-serve QuestDB Cloud has been wound down; the current managed offering is QuestDB Enterprise BYOC (Bring Your Own Cloud) on AWS or Azure, which requires enterprise engagement rather than a self-serve signup.

License. Apache 2.0 (OSS). Enterprise is commercially licensed.

InfluxDB migration path. Line Protocol-compatible ingestion is a significant advantage. Existing Telegraf agents can write to QuestDB with a configuration change. Query rewriting from InfluxQL or Flux to QuestDB SQL is required.

Best for. High-frequency, simple time-series workloads where ingestion speed is paramount and analytical queries stay relatively flat.

4. GreptimeDB

GreptimeDB is a newer entrant: a Rust-based, cloud-native database that unifies metrics, logs, and traces in a single engine. v1.0 GA shipped on April 14, 2026.

What it's best at. GreptimeDB is the most architecturally similar to where InfluxDB v3 is going (columnar, object-storage-backed, SQL-supported), but built fresh without the legacy. It supports ANSI SQL and PromQL alongside a streaming interface.

The unified data model is the big bet. Greptime markets it as "Observability 2.0", with metrics, logs, and traces represented as timestamped wide events in the same engine and queryable with the same SQL (cross-signal joins are an explicit design goal). If you've been running Loki for logs, Prometheus or InfluxDB for metrics, and Jaeger or Tempo for traces, GreptimeDB proposes consolidating all of that. Greptime has published log-scenario benchmarks against Loki claiming substantial wins on ingestion and query latency.
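
The wide-events idea can be sketched outside any particular database: metrics and logs are just timestamped rows, and correlation is a time-bucketed join. Here the join is plain Python with invented sample data; in GreptimeDB (or any SQL engine with both signals loaded) it would be a SQL join across tables:

```python
# Sketch: correlating metrics and logs as timestamped rows in one store.
# Sample data is invented; the join logic illustrates the cross-signal idea.

metrics = [{"ts": 100, "host": "web-01", "cpu": 0.92},
           {"ts": 160, "host": "web-01", "cpu": 0.31}]
logs    = [{"ts": 105, "host": "web-01", "level": "ERROR", "msg": "timeout"}]

def correlate(metrics, logs, window=30):
    """Attach log lines within `window` seconds of each metric row, same host."""
    return [
        (m, [l for l in logs
             if l["host"] == m["host"] and abs(l["ts"] - m["ts"]) <= window])
        for m in metrics
    ]

pairs = correlate(metrics, logs)
print(pairs[0][1][0]["msg"])  # the ERROR near the CPU spike: "timeout"
```

When metrics and logs live in separate systems (Prometheus plus Loki), this correlation happens in your head or in Grafana; the unified-engine pitch is that it happens in one query instead.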

GreptimeDB Edge exists for resource-constrained environments (Android/ARM, automotive, IoT). A Kubernetes operator and Helm charts are available. Object storage support covers S3, GCS, Azure Blob, Aliyun OSS, and S3-compatibles like MinIO.

Where it falls short. Maturity. GreptimeDB hit 1.0 GA only weeks before this post was written. The architecture is sound, but production deployments at large scale are still accumulating. Teams with low risk tolerance might wait six months for the rough edges to be sanded.

The Rust ecosystem for database tooling is younger than Go's or Java's, which can mean fewer integrations and a smaller pool of operational expertise. GreptimeDB Enterprise is where some clustering, HA, and security features live; pricing is not public.

License. Apache 2.0 (OSS), proprietary Enterprise.

InfluxDB migration path. GreptimeDB has Line Protocol compatibility for ingestion (V1 and V2 endpoints, Telegraf-compatible). Query migration depends on whether you go SQL or PromQL.

Best for. Cloud-native, Kubernetes-first teams that want to consolidate observability into one engine and are comfortable adopting newer infrastructure.

5. Arc

Arc is a columnar analytical database designed for metrics, logs, traces, and events. I built it after leaving InfluxData. It uses DuckDB for the query engine and stores data as native Apache Parquet on S3, Azure Blob, MinIO, or local disk.

I'm including Arc here because it's a real alternative to InfluxDB for a specific kind of team. I'll be honest about where it fits and where it doesn't.

What it's best at. Drop-in compatibility with InfluxDB at the ingestion layer is the biggest practical advantage. Existing Telegraf agents work without modification. Influx Line Protocol is a first-class ingestion path. Teams have migrated production InfluxDB workloads to Arc by changing the destination URL. That's it.

DuckDB SQL means full standard SQL with window functions, CTEs, complex joins, and analytical functions that aren't available in most TSDBs. If your team already knows Postgres SQL, you already know how to query Arc.

Storage is plain Apache Parquet, in your S3 (or Azure Blob, or MinIO, or local disk). No proprietary format. If you ever want to leave Arc, your data is already in a format readable by DuckDB, Spark, Polars, ClickHouse, BigQuery, or anything else that reads Parquet. There's no export step. Vendor lock-in isn't a meaningful risk.

Single Go binary. No JVM, no separate query nodes, no operator required for the basic deployment. Ingest path tested at over 18 million records per second on commodity hardware. On ClickBench, Arc is 1.6x faster than InfluxDB's query engine (DataFusion) on combined score, with 5x faster ingestion than InfluxDB 3 Core and 10x faster than InfluxDB 3 Enterprise.

Where it falls short. Arc is newer than the other databases on this list. It's been live since October 2025, about six months before this post. Early production deployments exist, but the community is smaller than InfluxDB's, Timescale's, or even GreptimeDB's. If you need a battle-tested database with thousands of public production deployments, Arc isn't there yet.

The license is AGPL-3.0. This is intentional: it prevents cloud providers from forking Arc into a commercial offering without contributing back. But some enterprises avoid AGPL software for legal reasons. If your legal team has restrictions on copyleft licenses, Arc's OSS version may not be approved. Arc Enterprise is commercially licensed and avoids this concern.

Clustering and HA are Arc Enterprise features, not OSS. The single-binary deployment is genuinely production-ready, but if you need active-active multi-region replication, that's the paid path.

License. AGPL-3.0 (community), commercial license (Enterprise).

InfluxDB migration path. Drop-in Line Protocol support. Telegraf works without modification. Existing InfluxDB ingestion agents can point at Arc with a URL change. We covered the full migration playbook in Migrating from InfluxDB to Arc.
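
At the HTTP level, "point your agent at Arc" means an InfluxDB-v1-style write request. The sketch below builds (but doesn't send) such a request with Python's stdlib; the `/write` path, `db` parameter, and port are assumptions modeled on InfluxDB's v1 API and the Docker example below, so check the Arc docs for the actual endpoint:

```python
# Sketch: an InfluxDB-v1-style Line Protocol write request, built but not
# sent. Endpoint path and port are assumptions -- verify against the Arc docs.
import urllib.request

payload = b"cpu,host=web-01 usage=12.5 1700000000000000000"
req = urllib.request.Request(
    "http://localhost:8000/write?db=telemetry",
    data=payload,
    headers={"Content-Type": "text/plain; charset=utf-8"},
    method="POST",
)
print(req.full_url, req.method)
# urllib.request.urlopen(req) would perform the actual write
```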

Best for. Teams that want analytical SQL depth on time-series data, value Parquet portability, and are willing to adopt newer infrastructure to get the architectural advantages.

Try it. Arc runs in a single container. If you already have Telegraf pointed at InfluxDB, you can have Arc accepting the same Line Protocol traffic in under a minute:

docker run -d -p 8000:8000 ghcr.io/basekick-labs/arc:latest

Full setup, configuration, and the InfluxDB migration playbook are in the Arc documentation and the Migrating from InfluxDB to Arc guide. If you want managed hosting instead, Arc Cloud is live.

Decision Matrix

| Criteria | TimescaleDB | VictoriaMetrics | QuestDB | GreptimeDB | Arc |
| --- | --- | --- | --- | --- | --- |
| Query language | SQL (Postgres) | PromQL | SQL + ILP | SQL + PromQL | SQL (DuckDB) |
| Data model | Time-series + relational | Metrics only | Time-series | Metrics + logs + traces (wide events) | Metrics + logs + traces + events |
| Storage | Postgres (local) | Local + S3 | Local | Object storage native | Object storage native (Parquet) |
| Single binary | No (Postgres) | Yes | Yes | Yes | Yes |
| License | Apache 2.0 + TSL | Apache 2.0 | Apache 2.0 | Apache 2.0 | AGPL-3.0 |
| Line Protocol | No (outflux v1 only) | Limited (vmctl v1 only) | Yes | Yes | Yes |
| Telegraf drop-in | No | Limited | Yes | Yes | Yes |
| Maturity | Very high (since 2017) | High (since 2018) | Medium-high | Medium (v1 GA Apr 2026) | Lower (live since Oct 2025) |
| Best fit | Postgres teams | Prometheus replacement | High-frequency simple | Cloud-native unified | Analytical SQL on telemetry |

These rankings are subjective, and no single database wins across all criteria. The best choice depends on what you're optimizing for.

How to Think About Migration Paths

The thing nobody tells you about migrating off InfluxDB is that the storage engine is rarely the hardest part. The hardest part is whatever you've built around InfluxDB: the dashboards, the alerts, the queries embedded in application code, the agents in production, the runbooks your on-call team relies on.

A few principles that help:

Migrate ingestion first. If your alternative supports Line Protocol (QuestDB, GreptimeDB, Arc), redirect a fraction of production traffic to the new database first. Validate that data lands correctly. Compare query results between InfluxDB and the new system on overlapping time windows.
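
"Redirect a fraction of production traffic" works best with deterministic sampling. Hashing the series key (rather than random sampling) keeps each series consistently on one side, which is what makes result comparison between old and new databases meaningful. A minimal stdlib sketch:

```python
# Sketch: deterministic traffic sampling by series key. The same series
# always gets the same decision, so overlapping windows are comparable.
import hashlib

def mirror_to_new_db(series_key: str, fraction: float) -> bool:
    digest = hashlib.sha256(series_key.encode()).digest()
    bucket = int.from_bytes(digest[:4], "big") / 2**32   # uniform in [0, 1)
    return bucket < fraction

# The same series key always produces the same answer:
print(mirror_to_new_db("cpu,host=web-01", 0.10))
```

Ramp `fraction` from 0.01 to 1.0 as confidence grows, then compare query results on the mirrored series.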

Migrate queries gradually. Don't try to rewrite every dashboard at once. Start with one or two high-value dashboards. Run them against both databases in parallel. Address the differences one query at a time.

Decommission InfluxDB last. Keep the old database running until you've validated the new one for at least one full retention cycle. The cost of running two databases for 30 to 90 days is much less than the cost of discovering a missing edge case after you've shut down InfluxDB.

Tools that help: vmctl (VictoriaMetrics, InfluxDB v1 only), outflux (Postgres/TimescaleDB, InfluxDB v1 only), Telegraf with multiple outputs (write to both old and new during transition).
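
The dual-output Telegraf approach looks like this in practice. A hedged config sketch: the URLs and database name are placeholders for your environment, and `[[outputs.influxdb]]` is Telegraf's InfluxDB v1 / Line Protocol output, which Line-Protocol-compatible databases also accept:

```toml
# Sketch: Telegraf writing to both databases during a transition.
# URLs and database names are placeholders.

[[outputs.influxdb]]            # existing InfluxDB
  urls = ["http://influxdb:8086"]
  database = "telemetry"

[[outputs.influxdb]]            # new Line-Protocol-compatible database
  urls = ["http://arc:8000"]
  database = "telemetry"
```

Once the new database has been validated for a full retention cycle, delete the first block and you're done.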

Conclusion: Pick Based on Workload, Not Hype

There isn't a universally correct InfluxDB alternative. Each of these five databases is the right answer for a different team:

  • Already on Postgres? TimescaleDB.
  • Replacing Prometheus, metrics-only? VictoriaMetrics.
  • High-frequency simple time-series? QuestDB.
  • Cloud-native unified observability, comfortable with newer infrastructure? GreptimeDB.
  • Want analytical SQL depth, Parquet portability, drop-in InfluxDB compatibility? Arc.

The honest advice: don't pick a database from a blog post. Pick a shortlist, run a proof-of-concept on your actual workload for two weeks each, and let the data decide. The differences between these databases on synthetic benchmarks rarely match the differences on real production data with real query patterns.

This post will be updated quarterly. Last updated April 2026. If something's wrong or out of date, open an issue at https://github.com/Basekick-Labs/arc or tell us in Discord.




About the author. I'm Ignacio Van Droogenbroeck, founder of Basekick Labs and an ex-InfluxData engineer. I've been working with time-series data since 2018 and have run production InfluxDB deployments at scale. I built Arc because I believe the next decade of telemetry infrastructure should be built on portable, columnar formats, not proprietary engines. You can disagree, and that's fine. The other four options on this list are all real, valid choices for a lot of teams.

Ready to handle billion-record workloads?

Deploy Arc in minutes. Own your data in Parquet. Use for analytics, observability, AI, IoT, or data warehousing.

Get Started ->