Portfolio project — git infrastructure

Git object storage.
Three backends.
One verdict.

A working git remote server in Go with swappable storage backends. Built to empirically reproduce the tradeoffs teams hit when building git infrastructure for AI coding platforms.
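A minimal sketch of what "swappable storage backends" can look like in Go: one interface, many implementations. The `ObjectStore` method set and the in-memory backend below are illustrative assumptions, not the project's actual API.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// ObjectStore is an illustrative interface a git server might define
// so SQLite, BadgerDB, and MinIO/S3 backends are interchangeable.
type ObjectStore interface {
	Put(key string, data []byte) error
	Get(key string) ([]byte, error)
	Exists(key string) (bool, error)
}

var ErrNotFound = errors.New("object not found")

// memStore is a toy in-memory backend showing how an implementation
// satisfies the interface.
type memStore struct {
	mu      sync.RWMutex
	objects map[string][]byte
}

func newMemStore() *memStore {
	return &memStore{objects: make(map[string][]byte)}
}

func (s *memStore) Put(key string, data []byte) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.objects[key] = data
	return nil
}

func (s *memStore) Get(key string) ([]byte, error) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	data, ok := s.objects[key]
	if !ok {
		return nil, ErrNotFound
	}
	return data, nil
}

func (s *memStore) Exists(key string) (bool, error) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	_, ok := s.objects[key]
	return ok, nil
}

func main() {
	var store ObjectStore = newMemStore()
	store.Put("ab12cd", []byte("blob 4\x00test"))
	ok, _ := store.Exists("ab12cd")
	fmt.Println(ok)
}
```

The server code only ever talks to `ObjectStore`, so swapping backends is a constructor change, not a rewrite.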

$ git clone https://git.wyat.me/git-storage.git
The numbers are not subtle.
2,548×
Exists — BadgerDB vs MinIO/S3 in production
Git calls Exists constantly during push to avoid resending objects. BadgerDB handles 395k ops/sec on Railway. MinIO/S3 handles 155 ops/sec — every check is a network round trip to object storage regardless of object size.
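The throughput gap translates directly into push latency. A back-of-envelope sketch, assuming one serial Exists check per object and a hypothetical 10,000-object push (the object count is illustrative; the ops/sec figures are the measured ones above):

```go
package main

import "fmt"

func main() {
	// Measured Exists throughput from the production benchmarks (ops/sec).
	badgerExists := 395000.0
	minioExists := 155.0

	// Headline ratio: BadgerDB vs MinIO/S3 on Exists.
	fmt.Printf("%.0f×\n", badgerExists/minioExists) // 2548×

	// Hypothetical push of 10,000 objects, one serial check each.
	const objects = 10000.0
	fmt.Printf("MinIO:  %.1f s\n", objects/minioExists)        // ~64.5 s of checks
	fmt.Printf("Badger: %.0f ms\n", objects/badgerExists*1000) // ~25 ms of checks
}
```

Real pushes batch and parallelize, so the absolute times would differ, but the 2,548× ratio is what the backends can't negotiate away.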
20×
Concurrent Put — BadgerDB vs SQLite (1KB)
Without WAL mode and without capping the pool at a single connection, SQLite throws SQLITE_BUSY errors under concurrent load. Even with both fixes applied, BadgerDB still wins by 20× at small object sizes in production.
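The two SQLite fixes are a DSN and one pool setting. A sketch assuming the mattn/go-sqlite3 driver, whose DSN accepts `_journal_mode` and `_busy_timeout` parameters (the project may use a different driver with different knobs):

```go
package main

import "fmt"

// buildDSN enables WAL mode and a 5s busy timeout via DSN parameters.
// These parameter names are specific to the mattn/go-sqlite3 driver
// (an assumption here, not confirmed by the project).
func buildDSN(path string) string {
	return fmt.Sprintf("file:%s?_journal_mode=WAL&_busy_timeout=5000", path)
}

func main() {
	fmt.Println(buildDSN("objects.db"))
	// In the server itself you would then open the database and
	// serialize writers so concurrent Puts never hit SQLITE_BUSY:
	//
	//   db, err := sql.Open("sqlite3", buildDSN("objects.db"))
	//   if err != nil { ... }
	//   db.SetMaxOpenConns(1) // one connection: no write contention
}
```

WAL lets readers proceed while a write is in flight; the single-connection cap makes Go's pool the write queue instead of surfacing lock contention as errors.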
39ms
MinIO/S3 Get p50 — in production, any size
A 1KB Get on MinIO costs 39ms in production. A 1MB Get costs roughly the same. The network round trip to object storage is the bottleneck, not the data. Git's workload is small objects.
~1×
Large object Put — all backends converge
At 1MB, SQLite (49 ops/sec), BadgerDB (52), and MinIO (47) are within noise of each other. The bottleneck becomes raw I/O, not the storage layer. Backend choice matters most at small sizes.
[Interactive benchmark chart: ops/sec, higher is better — Put, Get, Exists, and Concurrent Put across SQLite, BadgerDB, and MinIO/S3, with a per-operation results table (Operation, Size, SQLite, BadgerDB, MinIO/S3).]