By 10:25 AM, I’d entered what Mystery Science Theater 3000 fans call “Deep Hurting.” The migration plan was solid. The backup discipline was comprehensive. The execution? Chaos.
I run a containerized production Mastodon instance on an 8 GB Mac mini. (Yes, I know what the cloud people say, and FYI it’s Cloudflare Tunneled for protection.) Docker Desktop’s half-gig RAM footprint was eating precious resources. Colima promised the same Docker experience without the GUI overhead. I budgeted 1.5 hours for what should’ve been a straightforward runtime swap.
Two and a half hours and seven critical issues later, I’d discovered that Docker Hardened Images and Colima don’t play nicely together. And that discovery matters to anyone running hardened containers in virtualized environments.
The Plan (That Didn’t Survive Contact with Reality)
The strategy was textbook: maintenance window approach, comprehensive backups (database dumps, volume archives, configuration snapshots), explicit rollback procedures. I’d stop Docker Desktop, switch the Docker context to Colima, update one path in the Makefile I use to automate tasks, and restart services. Everything uses bind mounts, so data stays on the host file system. What could go wrong?
Everything. Everything could go wrong.
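For the record, the plan itself was simple enough to script. Here’s a hedged sketch of it; the service layout, the Colima memory setting, and the `osascript` quit step are my assumptions, and the script prints each step by default (set DRY_RUN=0 to actually execute):

```shell
# Dry-run by default: echo each command instead of running it.
run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "+ $*"
  else
    "$@"
  fi
}

run docker compose down                # stop services under Docker Desktop
run osascript -e 'quit app "Docker"'   # quit Docker Desktop itself
run colima start --memory 4            # boot the Colima VM (4 GiB on an 8 GB mini)
run docker context use colima          # point the docker CLI at Colima's socket
run docker compose up -d               # bring everything back up on the new runtime
```

Five commands. On paper, fifteen minutes of the 1.5-hour window.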
Obsolete Makefile references
First backup try:
service "db" is not running
Wait–what’s db? Weeks ago I migrated from PostgreSQL 14 to 17, and even switched from the default PostgreSQL image to a Docker Hardened Image (DHI) while I was at it. My compose files reference db-pg17. But the Makefile’s backup targets? Still calling the old db service. The PostgreSQL migration documentation lived in the README I keep. The Makefile lived in… a different mental context, apparently.
Lesson: When you migrate infrastructure components, grep for references everywhere. Compose files, Makefiles, scripts, documentation. “It’s working” means “it’s working right now,” not “the migration completed.”
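In shell terms, the sweep I should have run looks something like this (the file globs are just the ones my repo uses; adjust for yours):

```shell
# Find lingering references to a renamed compose service.
# -w matches whole words only, so "db" won't also match "database".
stale_refs() {
  pattern="$1"; dir="${2:-.}"
  grep -rnw -e "$pattern" \
    --include='Makefile' --include='*.yml' --include='*.yaml' \
    --include='*.sh' --include='*.md' \
    "$dir" || true   # grep exits 1 on no matches; that's fine here
}
```

Running stale_refs db after the rename would have surfaced the Makefile targets immediately.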
The empty postgres17/ directory
After resolving the database restore issues (we’ll get there), containers started successfully. Then I ran a restart test. PostgreSQL came up empty–no data, no tables, fresh initialization.
% ls -la postgres17/
total 0
drwxr-xr-x@ 2 markandsharon staff 64 Jan 7 16:31 .
64 bytes. An empty directory. That December PostgreSQL 14 → 17 “migration”? Created the directory, never populated it. PostgreSQL 14 data stayed in postgres14/. Docker Desktop must’ve been using cached or internal storage.
Lesson: Don’t trust that migrations succeeded because services are healthy. Check the actual data files. Persistence isn’t persistence if nothing’s persisting.
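The check is cheap enough to script. A minimal sketch (the directory name is from my setup):

```shell
# Fail loudly if a supposed data directory has nothing in it.
check_data_dir() {
  dir="$1"
  entries=$(find "$dir" -mindepth 1 2>/dev/null | wc -l | tr -d ' ')
  if [ "$entries" -eq 0 ]; then
    echo "EMPTY: $dir holds no files; nothing is persisting" >&2
    return 1
  fi
  echo "OK: $dir holds $entries entries"
}
```

After any storage migration, check_data_dir ./postgres17 should report hundreds of entries, not zero.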
Wrong database target
After fixing the Makefile, services started… and instantly crash-looped:
PG::UndefinedTable: ERROR: relation "users" does not exist
PostgreSQL was healthy. The application disagreed. Turns out I’d restored the dump to the wrong database:
# What I did (wrong):
psql -U mastodon postgres < dump.sql
# What I should have done:
psql -U mastodon mastodon_production < dump.sql
The mastodon_production database existed–it was just empty. All my data went into the postgres database, which nothing was reading. psql connects to exactly the database you name, and defaults to one matching your username if you name none. Explicit is better than implicit, especially when you’re in a hurry.
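My restores now go through a small wrapper that refuses to guess. This is a sketch, not my exact tooling; the mastodon user and the psql invocation mirror my setup:

```shell
# Restore a SQL dump into an explicitly named database: no defaults, no guessing.
restore_dump() {
  db="$1"; dump="$2"
  if [ -z "$db" ] || [ -z "$dump" ]; then
    echo "usage: restore_dump <database> <dump.sql>" >&2
    return 2
  fi
  # ON_ERROR_STOP makes psql abort on the first error instead of plowing on.
  psql -U mastodon -v ON_ERROR_STOP=1 "$db" < "$dump"
}
```

Called correctly: restore_dump mastodon_production dump.sql. Called in a hurry with no arguments, it stops you instead of restoring into the wrong place.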
Version-specific PGDATA paths
Once data landed in the right database, I hit a new problem: data didn’t persist across restarts. The bind mount directory stayed empty even though PostgreSQL was running and accepting writes.
It turns out that my PostgreSQL DHI uses version-specific paths:
# My bind mount:
- ./postgres17:/var/lib/postgresql/data
# Actual DHI PostgreSQL data directory:
# PGDATA=/var/lib/postgresql/17/data
The mount shadowed the wrong directory. PostgreSQL wrote data to /var/lib/postgresql/17/data, which wasn’t mounted. Data lived in ephemeral container storage. Restart? Data gone.
$ docker compose exec db-pg17 psql -U mastodon postgres -c "SHOW data_directory;"
data_directory
-----------------------------
/var/lib/postgresql/17/data
Lesson: Verify assumptions. Every single one. Check SHOW data_directory; immediately after container start. Test a restart before celebrating success.
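The comparison itself is trivial to script once you have the two paths; a hedged sketch, with the service name and psql flags from my setup in the comments:

```shell
# Compare the server's real data directory with the path the bind mount covers.
verify_mount() {
  pgdata="$1"         # e.g. docker compose exec db-pg17 psql -tA -U mastodon -c "SHOW data_directory;"
  mount_target="$2"   # the container-side path from the compose volumes: entry
  if [ "$pgdata" != "$mount_target" ]; then
    echo "MISMATCH: server writes to $pgdata but the mount covers $mount_target" >&2
    return 1
  fi
  echo "OK: mount covers $pgdata"
}
```

With my original compose file, verify_mount /var/lib/postgresql/17/data /var/lib/postgresql/data fails immediately, which is exactly the early warning I didn’t have.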
I corrected the mount path to match DHI’s expected location. That’s when I found the real problem.
The DHI + Colima Incompatibility Discovery: VirtioFS bind mount ownership failures
After correcting the mount path, PostgreSQL entered an immediate crash-loop:
FATAL: data directory "/var/lib/postgresql/17/data" has wrong ownership
HINT: The server must be started by the user that owns the data directory.
Inside the container, the mounted directory appeared owned by the root user (user ID 0). But PostgreSQL runs as the postgres user. Permission denied.
% docker compose run --rm --entrypoint sh db-pg17 -c "ls -ld /var/lib/postgresql/17/data"
drwxr-xr-x 2 0 0 4096 Jan 10 16:22 /var/lib/postgresql/17/data
# Owner: UID 0 (root), but PostgreSQL requires postgres user ownership
Colima uses the VirtioFS system for file sharing. VirtioFS handles UID mapping differently than Docker Desktop’s virtual machine (VM) implementation. Bind mounts that work perfectly on Docker Desktop fail on Colima because the ownership mapping doesn’t translate.
Fine. This is a known issue with Colima and some images. I’ll switch to a named volume–Docker manages those internally, so host filesystem permissions shouldn’t matter.
Named volumes still failed:
FATAL: data directory "/var/lib/postgresql/17/data" has wrong ownership
Wait. Named volumes are supposed to be isolated from host file system issues. They’re managed entirely by Docker. Fresh named volume, Docker creates it, Docker populates it–and it still shows wrong ownership inside the DHI container.
# Fresh named volume:
% docker compose run --rm --entrypoint sh db-pg17 -c "ls -ld /var/lib/postgresql/17/data"
drwxr-xr-x 2 0 0 4096 Jan 10 16:22 /var/lib/postgresql/17/data
DHI PostgreSQL’s entrypoint has environmental assumptions that Colima’s VM doesn’t satisfy. The image’s security hardening includes stricter ownership validation. That validation doesn’t account for Colima’s volume handling.
The pragmatic trade-off
So I had to make a decision:
- Debug DHI + Colima compatibility (unknown time investment, might be unsolvable), or
- Switch to the standard postgres:17-alpine image (known working, immediate resolution)
Production system. Already 1.5 hours into debugging. Swap the image:
# Before (DHI):
image: dhi.io/postgres:17-alpine3.22
volumes:
- postgres17-data:/var/lib/postgresql/17/data
# After (Standard):
image: postgres:17-alpine
volumes:
- postgres17-data:/var/lib/postgresql/data
PostgreSQL initialized successfully. Data persisted across restarts. Services came up healthy.
The trade-off:
- ✓ Gained: Colima compatibility, reliable data persistence, onward progress
- ❌ Lost (temporarily): DHI security hardening–documented for future investigation
Docker Hardened Images offer security features through stricter defaults and entrypoint validation. Those same strict requirements reduce the compatibility surface. When you introduce a different virtualization environment (Colima’s VirtioFS instead of Docker Desktop’s VM), the hardening becomes brittleness.
This isn’t DHI’s fault–it’s the expected consequence of defense-in-depth. But if you’re migrating from Docker Desktop to Colima, test your image compatibility in isolation before migration day, especially if you’re running Docker Hardened Images.
The Outcome
Migration completed at 11:30 AM. Zero data loss. All services healthy. Automation restored. About 500 MB of RAM reclaimed: Docker Desktop’s overhead gone, replaced by Colima’s negligible footprint.
The real outcome was discovering–systematically, through elimination–that DHI PostgreSQL and Colima are incompatible without further investigation. I’ve documented this as a known issue. Future work: test DHI with different volume strategies, check whether newer DHI versions resolve the issue, evaluate whether the security delta matters for a single-user instance.
For now, I’m running standard postgres:17-alpine. The migration is successful. The security regression is documented and scheduled for future investigation. Forward progress beats perfectionism.
Key Takeaways
Backups are your safety net–use them. I restored the database once during this migration. That restore took 30 seconds because I’d verified the backup existed and was recent.
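The “verified it existed and was recent” part is scriptable too. A sketch of the pre-flight check; the directory layout and the 24-hour freshness window are my choices:

```shell
# Refuse to start a risky migration without a dump newer than 24 hours.
backup_is_fresh() {
  dir="$1"
  latest=$(find "$dir" -name '*.sql' -mmin -1440 2>/dev/null | head -n 1)
  if [ -z "$latest" ]; then
    echo "STOP: no dump newer than 24h in $dir" >&2
    return 1
  fi
  echo "fresh backup found: $latest"
}
```

Gate the migration script on backup_is_fresh ./backups and a stale safety net stops you before you need it.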
Systematic debugging beats panic every time. Bind mounts failed → tried named volumes → still failed → isolated to image-specific behavior. That progression ruled out host file system issues and pointed directly at image compatibility.
Pragmatic trade-offs beat perfectionism. I could’ve spent hours debugging DHI compatibility. Instead, I documented the incompatibility, switched to standard images, and moved on. The security regression is tracked. The production system is running.
Document failures honestly; they’re learning opportunities. This post exists because the migration didn’t go smoothly. The DHI + Colima incompatibility is now documented for anyone else hitting the same issue. That’s more valuable than a “here’s how I moved from X to Y” success story.
| Metric | Result |
| --- | --- |
| Migration duration | 2.5 hours actual vs. 1.5 hours planned |
| Issues encountered | 7 critical |
| Data loss | 0 bytes |
| Services | All healthy |
| Memory reclaimed | ~500 MB |
| Novel discoveries | 1 (DHI + Colima incompatibility) |
| Trade-offs documented | 1 (security hardening vs. compatibility) |
Running production infrastructure on an 8 GB Mac mini teaches you to value both resources and reliability. Colima delivers on the resources. This migration delivered on the reliability… eventually.