
Arc Now Speaks TLE: Ingest Satellite Orbital Data Natively


If you work with satellite data, you know TLE files. Two-line element sets — the format that Space-Track.org, CelesTrak, and ground station pipelines have used for decades to describe where satellites are and where they're going.

They look like this:

ISS (ZARYA)
1 25544U 98067A   24001.50000000  .00016717  00000-0  10270-3 0  9001
2 25544  51.6400 100.2000 0007420  35.5000 324.6000 15.49560000    09

Compact. Standardized. Absolutely unhinged formatting from the 1960s that somehow still runs the entire space industry.

And traditionally, getting TLE data into a database means writing a parser, mapping 18 fields crammed into 138 characters of fixed-width text, handling both 2-line and 3-line formats, computing derived orbital metrics yourself, and praying you didn't get the column offsets wrong.
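For a sense of what that hand-rolled parser involves, here's a minimal Python sketch of the fixed-width slicing — just the identifier and orbital-element fields, using the standard TLE column layout (note the implied leading decimal point on eccentricity):

```python
# A minimal sketch of hand-parsing one TLE set: the fixed-width slicing
# that a custom ingestion pipeline has to get exactly right.
# Offsets are 0-indexed Python slices of the standard 69-character lines.

def parse_tle(line1: str, line2: str) -> dict:
    return {
        "norad_id": int(line1[2:7]),
        "classification": line1[7],
        "intl_designator": line1[9:17].strip(),
        "inclination": float(line2[8:16]),          # degrees
        "raan": float(line2[17:25]),                # degrees
        # Eccentricity is stored with an implied leading decimal point
        "eccentricity": float("0." + line2[26:33].strip()),
        "arg_perigee": float(line2[34:42]),         # degrees
        "mean_anomaly": float(line2[43:51]),        # degrees
        "mean_motion": float(line2[52:63]),         # revolutions per day
    }

l1 = "1 25544U 98067A   24001.50000000  .00016717  00000-0  10270-3 0  9001"
l2 = "2 25544  51.6400 100.2000 0007420  35.5000 324.6000 15.49560000    09"
elements = parse_tle(l1, l2)
print(elements["norad_id"], elements["inclination"], elements["mean_motion"])
```

Every one of those slice boundaries is a chance to be off by one — which is exactly the class of bug the native endpoint removes.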

We got tired of doing that. So we built it into Arc.

What We Built

Arc has a dedicated TLE endpoint: POST /api/v1/write/tle. Point it at a .tle file, and Arc handles everything — parses the format, extracts orbital elements as fields, stores satellite identifiers as tags, and automatically computes derived metrics: semi-major axis, orbital period, apogee, perigee, orbit classification.

One curl command:

curl -X POST "http://localhost:8000/api/v1/write/tle" \
  -H "Authorization: Bearer $ARC_TOKEN" \
  -H "X-Arc-Database: satellites" \
  --data-binary @stations.tle

That's it. Your TLE data is in Arc, stored as Parquet, queryable with SQL. No parser. No field mapping. No off-by-one column errors that haunt you at 3am.

Two Modes: Streaming and Bulk Import

Streaming (POST /api/v1/write/tle) is for continuous feeds — cron jobs pulling fresh TLEs every few hours, real-time updates from ground stations, or live Space-Track.org pipelines. This is your production ingestion path.

Bulk import (POST /api/v1/import/tle) is for historical backfill. Got a CelesTrak catalog dump with 28,000 objects? Upload it once:

curl -X POST "http://localhost:8000/api/v1/import/tle" \
  -H "Authorization: Bearer $ARC_TOKEN" \
  -H "X-Arc-Database: satellites" \
  -F "file=@catalog.tle"

Response:

{
  "status": "ok",
  "result": {
    "database": "satellites",
    "measurement": "satellite_tle",
    "satellite_count": 28000,
    "rows_imported": 28000,
    "duration_ms": 1250
  }
}

28,000 satellites in 1.25 seconds. Try that with your hand-rolled Python parser.

Both endpoints handle 2-line and 3-line formats, mixed-format files, and gzip compression. Upload a .tle.gz and Arc decompresses on the fly.
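For large catalog dumps, compressing before upload is worth it — TLE text is highly repetitive and compresses well. A minimal sketch of producing the `.tle.gz` (the upload itself is the same curl call, pointed at the compressed file):

```python
# Sketch: gzip a .tle file before upload. Arc's TLE endpoints
# decompress .tle.gz on the fly, so only the file changes, not the call.
import gzip
import os
import tempfile
from pathlib import Path

def gzip_tle(src: str) -> str:
    """Compress a .tle file to .tle.gz for upload."""
    dest = src + ".gz"
    Path(dest).write_bytes(gzip.compress(Path(src).read_bytes()))
    return dest

# Demo with the ISS element set from above
tle_text = (
    "ISS (ZARYA)\n"
    "1 25544U 98067A   24001.50000000  .00016717  00000-0  10270-3 0  9001\n"
    "2 25544  51.6400 100.2000 0007420  35.5000 324.6000 15.49560000    09\n"
)
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "stations.tle")
    Path(src).write_text(tle_text)
    gz = gzip_tle(src)
    print(gz, "->", len(Path(gz).read_bytes()), "bytes")
```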

What Gets Stored

Arc doesn't just dump raw TLE strings into a table. It parses every field and computes derived orbital metrics automatically.

Identifiers (tags):

| Field | Source | Example |
| --- | --- | --- |
| name | Line 0 (3-line format) | ISS (ZARYA) |
| norad_id | Line 1, cols 3-7 | 25544 |
| intl_designator | Line 1, cols 10-17 | 98067A |
| classification | Line 1, col 8 | U (unclassified) |

Orbital elements (fields):

| Field | Description |
| --- | --- |
| epoch | TLE epoch as timestamp |
| mean_motion | Revolutions per day |
| eccentricity | Orbital eccentricity |
| inclination | Degrees |
| raan | Right ascension of ascending node |
| arg_perigee | Argument of perigee |
| mean_anomaly | Mean anomaly |
| bstar | Drag term |
| mean_motion_dot | First derivative of mean motion |
| mean_motion_ddot | Second derivative of mean motion |
| rev_number | Revolution number at epoch |
| element_set_number | Element set number |

Derived metrics (computed by Arc):

| Field | Description |
| --- | --- |
| semi_major_axis_km | Computed from mean motion |
| period_minutes | Orbital period |
| apogee_km | Highest point above Earth |
| perigee_km | Lowest point above Earth |
| orbit_classification | LEO, MEO, GEO, HEO |

You'd normally compute those yourself. Arc does it at ingestion time.
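For reference, the math behind those geometry metrics is compact — a sketch using the standard gravitational parameter and equatorial radius (Arc's exact constants and rounding may differ):

```python
# Sketch of deriving orbit geometry from a TLE's mean motion (rev/day)
# and eccentricity. Uses mu = 398600.4418 km^3/s^2 and the equatorial
# Earth radius; Arc's internal constants may differ slightly.
import math

MU = 398600.4418          # Earth's gravitational parameter, km^3/s^2
EARTH_RADIUS = 6378.137   # Equatorial radius, km

def derived_metrics(mean_motion: float, eccentricity: float) -> dict:
    n = mean_motion * 2 * math.pi / 86400      # mean motion in rad/s
    a = (MU / n**2) ** (1 / 3)                 # semi-major axis from Kepler's third law
    return {
        "semi_major_axis_km": a,
        "period_minutes": 1440 / mean_motion,
        "apogee_km": a * (1 + eccentricity) - EARTH_RADIUS,
        "perigee_km": a * (1 - eccentricity) - EARTH_RADIUS,
    }

# ISS values from the example TLE above: ~6,796 km semi-major axis,
# ~93-minute period, perigee a bit above 400 km
m = derived_metrics(mean_motion=15.4956, eccentricity=0.0007420)
print({k: round(v, 1) for k, v in m.items()})
```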

Query It with SQL

Once the data is in Arc, it's just a table. Standard DuckDB SQL. No special satellite query language. No SDK. Just SQL.

-- All LEO satellites
SELECT name, norad_id, inclination, period_minutes
FROM satellite_tle
WHERE orbit_classification = 'LEO'
ORDER BY period_minutes ASC;

-- ISS position history
SELECT epoch, mean_motion, eccentricity, apogee_km, perigee_km
FROM satellite_tle
WHERE norad_id = 25544
ORDER BY epoch DESC
LIMIT 100;

-- Decaying orbits: satellites losing altitude
SELECT
  name,
  norad_id,
  perigee_km,
  mean_motion_dot,
  bstar
FROM satellite_tle
WHERE perigee_km < 300
  AND mean_motion_dot > 0
ORDER BY perigee_km ASC;

-- GEO belt occupancy by inclination band
SELECT
  FLOOR(inclination) AS incl_band,
  COUNT(DISTINCT norad_id) AS satellites
FROM satellite_tle
WHERE orbit_classification = 'GEO'
GROUP BY incl_band
ORDER BY incl_band;

Standard SQL. Joins, aggregations, window functions, CTEs — all of it. Because the data is in Parquet, you can also query it directly with DuckDB, Spark, Pandas, or any tool that reads Parquet files.

Why We Built This

We've been running the satellite tracking demo on basekick.net since December — live positions of 14,000+ objects from Space-Track.org, updating continuously. Building that demo meant writing TLE parsing from scratch. Field offsets, checksum validation, epoch conversion, derived metric computation.
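That checksum validation, for instance, follows one quirky rule: sum every digit in the first 68 columns at face value, count each minus sign as 1, ignore everything else, and compare mod 10 against the final column. A sketch:

```python
# Sketch of the TLE checksum rule: digits count at face value,
# minus signs count as 1, all other characters count as 0;
# the sum mod 10 must equal the digit in column 69.

def tle_checksum(line: str) -> int:
    total = 0
    for ch in line[:68]:
        if ch.isdigit():
            total += int(ch)
        elif ch == "-":
            total += 1
    return total % 10

def is_valid(line: str) -> bool:
    return len(line) == 69 and line[68].isdigit() and tle_checksum(line) == int(line[68])

# A well-known historical ISS TLE line (checksum digit: 7)
line = "1 25544U 98067A   08264.51782528 -.00002182  00000-0 -11606-4 0  2927"
print(is_valid(line))
```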

It worked. But it felt wrong. TLE is a standard format used by every space agency, every satellite operator, every tracking system on the planet. It shouldn't require custom code every time someone wants to store orbital data in a database.

So we made it a first-class ingestion endpoint. Same philosophy as our Line Protocol endpoint and MQTT integration: if a data format is standardized and widely used, Arc should speak it natively.

Setting Up a Space-Track Pipeline

Here's a practical example — a cron job that pulls fresh TLEs from Space-Track.org and feeds them into Arc:

#!/bin/bash
# fetch-tles.sh — runs every 4 hours via cron

# Space-Track returns its session cookie in Netscape cookie-jar format,
# so store it in a jar file rather than a shell variable
COOKIE_JAR=$(mktemp)

# Login to Space-Track
curl -s -c "$COOKIE_JAR" \
  -d "identity=your@email.com&password=yourpassword" \
  "https://www.space-track.org/ajaxauth/login"

# Download active satellites catalog
curl -s -b "$COOKIE_JAR" \
  "https://www.space-track.org/basicspacedata/query/class/gp/EPOCH/%3Enow-30/orderby/NORAD_CAT_ID/format/3le" \
  -o /tmp/active-satellites.tle

# Feed into Arc
curl -X POST "http://localhost:8000/api/v1/write/tle" \
  -H "Authorization: Bearer $ARC_TOKEN" \
  -H "X-Arc-Database: satellites" \
  --data-binary @/tmp/active-satellites.tle

# Logout and clean up
curl -s -b "$COOKIE_JAR" "https://www.space-track.org/ajaxauth/logout"
rm -f "$COOKIE_JAR"

Add to crontab:

0 */4 * * * /opt/scripts/fetch-tles.sh >> /var/log/tle-ingestion.log 2>&1

Fresh orbital data every 4 hours, fully automated. No Python scripts, no ETL pipelines, no data engineering team.

What's Next

If you're building ground station software, satellite tracking systems, conjunction analysis tools, or space situational awareness platforms — Arc now has native TLE support. Both 2-line and 3-line formats, mixed-format files, bulk import, streaming ingestion, and automatic derived metrics.

Available since v26.02.1.


Questions? Reach out on Twitter or join our Discord.
