Forward FortiGate Firewall Logs to Arc for Long-Term Security Analytics

If you're running FortiGate firewalls, you know the pain. FortiAnalyzer licenses aren't cheap, and cloud log retention gets expensive fast. But you need those logs—compliance, incident response, threat hunting. Deleting them isn't an option.
Here's the thing: FortiGate can forward logs via syslog to any server. Telegraf can receive syslog and push to Arc. Arc can store years of firewall logs and query them in seconds.
Let's set it up.
The Architecture
The flow is simple:
FortiGate → Syslog (UDP/TCP) → Telegraf → Arc → Grafana
FortiGate sends logs via syslog. Telegraf receives them, parses the fields, and forwards to Arc. You query with SQL and visualize in Grafana.
No FortiAnalyzer license required. No per-GB cloud costs. Just your own infrastructure.
Setting Up Telegraf
First, let's get Telegraf running with the syslog input plugin. Create a telegraf.conf:
```toml
# Telegraf configuration for FortiGate syslog
[agent]
  interval = "10s"
  flush_interval = "10s"
  hostname = ""
  omit_hostname = false

# Syslog input - listen for FortiGate logs
[[inputs.syslog]]
  server = "udp://:1514"
  best_effort = true
  syslog_standard = "RFC5424"

  ## For TCP with TLS (more reliable, encrypted)
  # server = "tcp://:6514"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"

# Output to Arc
[[outputs.influxdb_v2]]
  urls = ["http://arc:8000"]
  token = "$ARC_TOKEN"
  organization = "default"
  bucket = "firewall"
  # Only send syslog metrics to this output
  # (measurement names are filtered with namepass, not tagpass)
  namepass = ["syslog"]
```

A few notes:
- Port 1514 is commonly used for syslog collection (avoids needing root for port 514)
- best_effort = true extracts partial data from malformed messages—useful since not all FortiGate firmware versions output perfect RFC5424
- UDP is simpler but TCP with TLS is more reliable for production
Docker Compose Setup
Here's a complete setup with Telegraf and Arc:
```yaml
services:
  arc:
    image: ghcr.io/basekick-labs/arc:26.01.1
    container_name: arc
    restart: unless-stopped
    environment:
      - STORAGE_BACKEND=local
      - DB_PATH=/data/arc.db
    volumes:
      - arc-data:/data
    ports:
      - "8000:8000"

  telegraf:
    image: telegraf:1.33
    container_name: telegraf
    restart: unless-stopped
    environment:
      - ARC_TOKEN=${ARC_TOKEN}
    volumes:
      - ./telegraf.conf:/etc/telegraf/telegraf.conf:ro
    ports:
      - "1514:1514/udp"
    depends_on:
      - arc

  grafana:
    image: grafana/grafana:11.4.0
    container_name: grafana
    restart: unless-stopped
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
    volumes:
      - grafana-data:/var/lib/grafana
    ports:
      - "3000:3000"
    depends_on:
      - arc

volumes:
  arc-data:
  grafana-data:
```

Start it:

```bash
docker compose up -d
```

Get the Arc admin token from the logs:

```bash
docker logs arc | grep "Admin token"
```

Set it as an environment variable and restart Telegraf:

```bash
export ARC_TOKEN=your-token-here
docker compose up -d telegraf
```

Configuring FortiGate
Now configure your FortiGate to send logs to Telegraf. You can do this via GUI or CLI.
Via GUI
- Log into FortiGate
- Go to Log & Report → Log Settings
- Enable Send Logs to Syslog
- Enter your Telegraf server IP and port (1514)
- Click Apply
Via CLI
SSH into your FortiGate and run:
```
config log syslogd setting
    set status enable
    set server 192.168.1.100
    set port 1514
    set mode udp
    set facility local7
    set format default
end
```
Replace 192.168.1.100 with your Telegraf server IP.
Enable Traffic Logging
By default, FortiGate might not log all traffic. To enable comprehensive logging:
```
config log syslogd filter
    set severity information
    set forward-traffic enable
    set local-traffic enable
    set multicast-traffic enable
    set anomaly enable
    set voip enable
end
```
Multiple Syslog Servers
FortiGate supports up to 4 syslog destinations. If you want redundancy or to send to multiple collectors:
```
config log syslogd2 setting
    set status enable
    set server 192.168.1.101
    set port 1514
end
```
Verifying the Flow
Once configured, you should see logs flowing into Arc. Check Telegraf logs first:
```bash
docker logs telegraf | tail -20
```

You should see messages about the syslog input being initialized and metrics being written.
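If nothing is arriving yet, you can generate a test message yourself before touching the FortiGate. Here's a minimal sketch in Python that sends a single RFC5424-formatted syslog message over UDP (it assumes Telegraf is listening on localhost port 1514; the hostname and app name are placeholders):

```python
import socket
from datetime import datetime, timezone

def send_test_syslog(host="127.0.0.1", port=1514):
    """Send one RFC5424-formatted syslog message over UDP."""
    # PRI 134 = facility local0 (16) * 8 + severity informational (6)
    timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%fZ")
    message = f"<134>1 {timestamp} test-host test-app - - - test message from Python"
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(message.encode("utf-8"), (host, port))
    sock.close()
    return message

if __name__ == "__main__":
    print(send_test_syslog())
```

Run it, then re-check the Telegraf logs and the Arc query below — the test message should appear in the `syslog` measurement.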
Query Arc to verify data is arriving:
```bash
curl -X POST http://localhost:8000/api/v1/query \
  -H "Authorization: Bearer $ARC_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"sql": "SELECT * FROM syslog ORDER BY time DESC LIMIT 10", "database": "firewall"}'
```

Querying Firewall Logs
Now the fun part—analyzing your security data with SQL.
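FortiGate log messages are mostly space-separated key=value pairs (e.g. `srcip=192.168.1.50 action=deny app="Google DNS"`), which is why the queries that follow lean on regex extraction. If you post-process query results in code instead, a small parser does the same job. A sketch in Python (the sample message is illustrative, not a captured log):

```python
import re

# FortiGate messages are space-separated key=value pairs;
# values containing spaces are double-quoted, e.g. app="Google DNS".
KV_PATTERN = re.compile(r'(\w+)=("[^"]*"|\S+)')

def parse_fortigate_message(message: str) -> dict:
    """Parse a FortiGate key=value log message into a dict."""
    fields = {}
    for key, value in KV_PATTERN.findall(message):
        fields[key] = value.strip('"')
    return fields

sample = 'srcip=192.168.1.50 dstip=8.8.8.8 action=deny app="Google DNS" sentbyte=120'
parsed = parse_fortigate_message(sample)
print(parsed["srcip"], parsed["action"], parsed["app"])  # → 192.168.1.50 deny Google DNS
```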
Recent Denied Traffic
```sql
SELECT
  time,
  appname,
  message
FROM syslog
WHERE message LIKE '%action=deny%'
ORDER BY time DESC
LIMIT 100
```

Top Source IPs by Volume
```sql
SELECT
  REGEXP_EXTRACT(message, 'srcip=([0-9.]+)', 1) as src_ip,
  COUNT(*) as event_count
FROM syslog
WHERE time > NOW() - INTERVAL '24 hours'
GROUP BY src_ip
ORDER BY event_count DESC
LIMIT 20
```

Failed VPN Logins
```sql
SELECT
  time,
  message
FROM syslog
WHERE message LIKE '%vpn%'
  AND message LIKE '%failed%'
ORDER BY time DESC
LIMIT 50
```

Traffic by Application
```sql
SELECT
  REGEXP_EXTRACT(message, 'app="([^"]+)"', 1) as application,
  COUNT(*) as hits,
  SUM(CAST(REGEXP_EXTRACT(message, 'sentbyte=([0-9]+)', 1) AS BIGINT)) as bytes_sent
FROM syslog
WHERE time > NOW() - INTERVAL '7 days'
GROUP BY application
ORDER BY hits DESC
LIMIT 20
```

Security Events Over Time
```sql
SELECT
  time_bucket(INTERVAL '1 hour', time) as hour,
  COUNT(*) as events,
  COUNT(CASE WHEN message LIKE '%action=deny%' THEN 1 END) as denied,
  COUNT(CASE WHEN message LIKE '%severity=critical%' THEN 1 END) as critical
FROM syslog
WHERE time > NOW() - INTERVAL '7 days'
GROUP BY hour
ORDER BY hour DESC
```

Grafana Dashboard
Connect Grafana to Arc using the DuckDB data source plugin, then create dashboards for:
- Real-time event feed
- Denied traffic by source IP
- Bandwidth usage over time
- VPN login attempts
- Top blocked applications
- Geographic distribution of traffic (if you parse srcip and do GeoIP lookups)
The queries above work directly in Grafana panels.
Storage Considerations
Firewall logs can be verbose. A busy FortiGate might generate gigabytes per day. Here's how to manage it:
Retention policy. Set up automatic deletion of old data:
```bash
curl -X POST http://localhost:8000/api/v1/retention \
  -H "Authorization: Bearer $ARC_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "database": "firewall",
    "measurement": "syslog",
    "retention_days": 90
  }'
```

Compression. Arc stores data in Parquet format with automatic compression. Expect 3-5x compression on log data.
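Those two knobs together determine your footprint. A quick back-of-envelope calculation, assuming the 3-5x compression figure holds (the ~4x ratio used here is a midpoint assumption, not a measurement):

```python
def storage_estimate_gb(raw_gb_per_day: float, retention_days: int,
                        compression_ratio: float = 4.0) -> float:
    """Estimate compressed on-disk footprint for a retention window."""
    return raw_gb_per_day / compression_ratio * retention_days

# 10 GB/day raw, 90-day retention, ~4x compression
print(round(storage_estimate_gb(10, 90), 1))  # → 225.0 (GB)
```

So a busy firewall with a 90-day retention policy fits comfortably on a modest disk.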
Storage backend. For large deployments, use S3-compatible storage instead of local disk:
```yaml
environment:
  - STORAGE_BACKEND=s3
  - S3_BUCKET=firewall-logs
  - S3_ENDPOINT=https://s3.amazonaws.com
  - AWS_ACCESS_KEY_ID=xxx
  - AWS_SECRET_ACCESS_KEY=xxx
```

TCP with TLS (Production)
For production, consider TCP with TLS instead of UDP:
Update Telegraf config:
```toml
[[inputs.syslog]]
  server = "tcp://:6514"
  tls_cert = "/etc/telegraf/server.crt"
  tls_key = "/etc/telegraf/server.key"
  tls_allowed_cacerts = ["/etc/telegraf/ca.crt"]
```

Update FortiGate:
```
config log syslogd setting
    set status enable
    set server 192.168.1.100
    set port 6514
    set mode reliable
    set enc-algorithm high
    set certificate "your-cert-name"
end
```
This ensures logs are encrypted in transit and delivered reliably.
Cost Comparison
Let's do the math. A mid-size FortiGate generating 10GB of logs per day:
FortiAnalyzer (on-prem):
- License: ~$5,000-15,000/year depending on capacity
- Hardware/VM costs
- Maintenance
Cloud SIEM (per-GB pricing):
- 10GB/day × 30 days × $2-5/GB = $600-1,500/month
- That's $7,200-18,000/year
Arc (self-hosted):
- Compute: ~$50-100/month for a decent VM
- Storage: 10GB/day compressed to ~2GB × 365 = 730GB/year × $0.02/GB = ~$15/year on S3
- Total: ~$600-1,200/year
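The arithmetic is easy to sanity-check in a few lines (same assumptions as the figures above: per-GB ingest pricing for the cloud SIEM, ~5x compression and $0.02/GB/month-class S3 pricing for Arc):

```python
def annual_cloud_siem_cost(gb_per_day: float, price_per_gb: float) -> float:
    # Per-GB ingest pricing, billed on 30-day months
    return gb_per_day * 30 * price_per_gb * 12

def annual_arc_cost(vm_per_month: float, gb_per_day: float,
                    compression_ratio: float = 5.0,
                    s3_price_per_gb: float = 0.02) -> float:
    # VM cost plus one year of compressed logs on S3
    storage_gb = gb_per_day / compression_ratio * 365
    return vm_per_month * 12 + storage_gb * s3_price_per_gb

print(annual_cloud_siem_cost(10, 2))   # → 7200 (low end of the cloud range)
print(round(annual_arc_cost(50, 10)))  # → 615 (low end of the Arc range)
```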
You keep full control, query with standard SQL, and retain data as long as you want.
What's Next
This same pattern works for any device that speaks syslog:
- Palo Alto firewalls
- Cisco ASA/Firepower
- pfSense/OPNsense
- Linux servers (rsyslog/systemd-journald)
- Network switches and routers
The key is Telegraf's syslog input plugin—it normalizes everything into a consistent format that Arc can query.
Resources
- FortiGate Syslog Configuration: Fortinet Documentation
- Telegraf Syslog Plugin: InfluxData Documentation
- Arc Documentation: docs.basekick.net/arc
- Discord: discord.gg/nxnWfUxsdm
Questions? Drop by the Discord or reach out on Twitter.
Ready to handle billion-record workloads?
Deploy Arc in minutes. Own your data in Parquet. Use for analytics, observability, AI, IoT, or data warehousing.
