# Frostholm vs restic vs Borg: a realistic benchmark
Every backup tool benchmark I've read was done by someone with a stake in the result, on a workload that happened to flatter their tool, with methodology that makes reproducibility somewhere between difficult and impossible. I'm not a neutral party here — I wrote Frostholm — but I've tried to be honest about where each tool loses.
## Initial backup (first snapshot, full data)
| Tool | Time | CPU | Peak RAM | Stored size |
|---|---|---|---|---|
| Frostholm | 6m 14s | ~380% | 512 MB | 181 GB |
| restic | 7m 02s | ~320% | 448 MB | 183 GB |
| Borg | 9m 38s | ~180% | 312 MB | 180 GB |
Frostholm and restic are close on initial backup. Borg is slower — it uses LZ4 compression by default and its Python layer adds overhead, though it uses considerably less RAM. Stored sizes are nearly identical because RAW files don't compress and barely deduplicate.
## Incremental backup (200 new files added, ~4 GB)
| Tool | Time | New data written | Correctly deduped |
|---|---|---|---|
| Frostholm | 38s | 4.1 GB | yes |
| restic | 44s | 4.2 GB | yes |
| Borg | 52s | 4.0 GB | yes |
All three tools handle pure incremental correctly. The time difference is mostly scan speed — how fast each tool can traverse the source tree and check which files changed.
## Directory rename (~40 GB moved, no data modified)
| Tool | Time | New data written | Notes |
|---|---|---|---|
| Frostholm | 1m 08s | ~0 GB | All chunks already in index |
| restic | 1m 22s | ~0 GB | Same — path-independent hashing |
| Borg | 2m 14s | ~0 GB | Cache lookup slower on rename |
Renaming a directory is a good stress test. All three tools correctly identify that the underlying data didn't change and write nearly nothing. Frostholm and restic are faster because their chunk lookup is pure hash-table O(1); Borg's cache uses a different structure that's slower when file paths change significantly.
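The path-independence is easy to see in a sketch of content-addressed dedup. The `chunkIndex` type and `store` helper here are illustrative, not Frostholm's or restic's real API; the point is only that the index key is derived from the bytes, so a rename produces identical keys.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// The index maps a chunk's content hash to where it is stored. Because
// the key depends only on the bytes, a renamed file yields identical
// keys and every lookup hits: the rename writes ~0 new data.
type chunkIndex map[[32]byte]string // content hash → pack location (illustrative)

// store writes a chunk only if its content hash is new, returning true
// when new data was actually written to the repository.
func store(idx chunkIndex, data []byte, packLoc string) bool {
	h := sha256.Sum256(data)
	if _, ok := idx[h]; ok {
		return false // same bytes already stored under another path: dedup hit
	}
	idx[h] = packLoc
	return true
}

func main() {
	idx := chunkIndex{}
	chunk := []byte("raw image bytes")
	fmt.Println(store(idx, chunk, "pack-001")) // true: first sight, data written
	fmt.Println(store(idx, chunk, "pack-002")) // false: same bytes after a "rename"
}
```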
## Full restore (all 180 GB, local destination)
| Tool | Time | Peak RAM |
|---|---|---|
| Frostholm (--parallelism 1) | 8m 42s | 380 MB |
| restic | 9m 18s | 412 MB |
| Borg | 11m 05s | 290 MB |
Note: this is Frostholm v0.3.7 with single-threaded restore. v0.4 with `--parallelism 8` does the same restore in ~5m 10s. Parallelism helps most on remote backends; on a local destination like this one the restore is largely I/O bound, so the gain is smaller but still meaningful.
## Single-file restore (recovering one 800 MB RAW file)
| Tool | Time | Data read from repo |
|---|---|---|
| Frostholm | 4.2s | 812 MB |
| restic | 5.1s | 812 MB |
| Borg | 6.8s | ~820 MB |
All three tools read only the chunks needed for that file. No difference in correctness, small differences in overhead.
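Why only ~812 MB gets read for an 800 MB file: a snapshot manifest lists, per file, the chunks that make it up, and a single-file restore fetches just that list plus a little pack-header overhead. The `fileEntry` type below is an illustrative sketch of that idea, not any tool's real manifest format.

```go
package main

import "fmt"

// fileEntry is a sketch of a manifest entry: a path plus the stored
// length of each chunk that makes up the file (illustrative names).
type fileEntry struct {
	path   string
	chunks []int64 // stored chunk lengths, in bytes
}

// bytesToRead sums the chunk lengths — everything a single-file restore
// must fetch from the repository, independent of total repo size.
func bytesToRead(f fileEntry) int64 {
	var total int64
	for _, c := range f.chunks {
		total += c
	}
	return total
}

func main() {
	f := fileEntry{path: "photos/big.raw", chunks: []int64{2e6, 2e6, 1.5e6}}
	fmt.Println(bytesToRead(f)) // 5500000
}
```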
## Memory usage on a large repository

RAM usage is where the tools diverge most on large repositories. Frostholm and restic keep the chunk index in memory during backup. At 180 GB with a ~2 MB average chunk size there are roughly 90,000 chunks; each entry needs a 32-byte hash plus 16 bytes of location metadata, which comes to about 4.3 MB for the raw index. The actual RSS is much higher due to Go's runtime overhead, hash-map load factor, and various buffers.
Borg wins on memory because it uses a segmented cache stored on disk, not a full in-memory map. If you're running backups on a machine with limited RAM (say, a 1 GB VPS), Borg is meaningfully better. For desktop use cases the difference doesn't matter.
## Integrity verification speed
| Tool | Spot check (5%) | Full check |
|---|---|---|
| Frostholm | 28s | 9m 02s |
| restic check --read-data | N/A | 11m 18s |
| Borg check | ~45s | 13m 40s |
"Spot check" reads and verifies a random 5% sample of pack files; a full check reads everything. (restic can also do partial verification via `--read-data-subset`; I didn't benchmark it here.) Frostholm's full check is faster than restic's and Borg's primarily because pack files are read sequentially in large chunks rather than file by file.
## Summary
If I weren't the author of Frostholm, here's what I'd tell someone choosing between these:
- restic: best default choice. Mature, well-documented, large community, many backend options, actively maintained. Use it unless you have a specific reason not to.
- Borg: best for memory-constrained environments and for users who want compression to actually help (it's more effective on text/code workloads than either Frostholm or restic in default config). Slower but more RAM-efficient.
- Frostholm: best if you're specifically targeting cold storage (B2 native backend), want fast cross-file dedup, or find value in the simpler repository format. Less mature than the other two — I'd keep a restic repo as a second copy until v1.0.
All three tools are correct. Choose based on ecosystem, not benchmark numbers — the differences above won't matter for most workloads.
Benchmark scripts and raw data: github.com/e-var/frostholm/benchmarks/2025