I’ve forked and modified the coaxial/borgmatic container. It’s available as monachus/borgmatic on Docker Hub, and the GitHub repo is here.
My initial reason for forking was that coaxial’s container was running v1.1.15 of borgmatic, and I wanted something more current. After forking it, I made some other changes, such as moving the command to an entrypoint script, which lets us pass flags to borgmatic without rebuilding the container.
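For context, the entrypoint change looks roughly like this as a Dockerfile fragment (the install step and script name here are a sketch, not the exact contents of the repo):

```Dockerfile
FROM alpine:3.8

# Install borgmatic (details elided; see the repo for the real build steps)
RUN apk add --no-cache python3 \
 && pip3 install borgmatic

# entrypoint.sh execs borgmatic with whatever arguments the user supplies,
# so flags can be passed at `docker run` time without rebuilding the image
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
```

With an ENTRYPOINT instead of a CMD, something like `docker run monachus/borgmatic --stats` passes `--stats` straight through to borgmatic.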
I’m using this in a Kubernetes cluster, with the complete borgmatic config delivered as a combination of ConfigMap (main config and excludes file) and secrets (encryption passphrase, ssh key). It runs hourly as a CronJob.
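A minimal sketch of that wiring (resource names, the schedule, and the mount paths are illustrative, not my exact manifests):

```yaml
apiVersion: batch/v1beta1          # CronJob API group for Kubernetes of this era
kind: CronJob
metadata:
  name: borgmatic
spec:
  schedule: "0 * * * *"            # hourly
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: borgmatic
            image: monachus/borgmatic
            volumeMounts:
            - name: config
              mountPath: /etc/borgmatic
            - name: secrets
              mountPath: /secrets
              readOnly: true
          volumes:
          - name: config
            configMap:
              name: borgmatic-config        # config.yml and excludes
          - name: secrets
            secret:
              secretName: borgmatic-secrets # passphrase, ssh key
```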
I work for Rancher Labs, and among other projects, I maintain the primary container for Hugo, currently with more than 500K pulls.
I’m a longtime user of tarsnap, and I was thrilled to find Borg and then borgmatic to run my own tarsnap-like solution.
I will continue to maintain the container and add features to it as needed. Please feel free to open issues on GitHub for anything you’d like to see me include. In the coming weeks, I’ll be writing an article for Rancher about how to use borgmatic in a Kubernetes cluster, and I’ll let you know when it’s published.
If you’d like to reference a container image that will always be up to date with your releases, I would be honored to see mine replace coaxial’s in your documentation.
An example of the sort of thing I just can’t let go of is that the Alpine package for Borg is 1.1.3, while Borg is currently on 1.1.7. I’ll change the container build instructions to pull the 1.1.7 release (and all future releases) directly, instead of relying on the Alpine package to eventually catch up.
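One way to do that, sketched as a Dockerfile fragment (the build dependencies and exact package names are assumptions, not the final build):

```Dockerfile
# Install Borg 1.1.7 from PyPI instead of waiting on the Alpine package.
# Build deps are added for the compile and removed afterward to keep the image small.
RUN apk add --no-cache python3 \
 && apk add --no-cache --virtual .build-deps \
      python3-dev gcc musl-dev openssl-dev acl-dev lz4-dev linux-headers \
 && pip3 install borgbackup==1.1.7 borgmatic \
 && apk del .build-deps
```

Bumping to a future release is then just a change to the pinned version.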
Super cool. I’ll have a look and update the README. I’m curious though: In Kubernetes, how are you handing the set of things to back up to borgmatic? For instance, are you enumerating all PV claims in the cluster dynamically, or do you have a hard-coded list in your ConfigMap? Also, given that the volumes are already (presumably) mounted elsewhere, are you mounting them again in the borgmatic container to facilitate backups (so, not ReadWriteOnce)?
I’m using a ConfigMap for /etc/borgmatic/config.yml and another for /etc/borgmatic/excludes, with the latter referenced as the excludes file in the config. At the moment I’m using it to back up NFS content for things that change from within the Pods, like the NodeRED configuration that gets written out when I deploy changes or the etcd backups that run from within the cluster.
Since it’s NFS, I’m mounting the parent directory for the NFS shares into the borgmatic pod at /data with ReadOnly permissions and then running the backup with specific paths and exclusions for directories below it. I considered just running borg/borgmatic on the NFS server itself, but I try to keep the NAS doing only storage, without having to think too much about anything else. Running it from within Kubernetes is also an interesting challenge, and this setup will work in places where I don’t control the backing server, such as EFS in AWS. I still want to selectively back up NFS data to S3 from within that cluster and get all the deduplication and encryption magic that keeps costs down.
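The read-only NFS mount described above might look like this in the pod spec (the server and export path are placeholders):

```yaml
containers:
- name: borgmatic
  image: monachus/borgmatic
  volumeMounts:
  - name: nfs-data
    mountPath: /data
    readOnly: true            # backups never need to write to the source
volumes:
- name: nfs-data
  nfs:
    server: nas.example.com   # placeholder for the NAS address
    path: /export             # parent directory of the individual shares
    readOnly: true
```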
My other storage medium in the local cluster is iSCSI, and I haven’t yet tackled any of the pods that use iSCSI volumes. They’re all ReadWriteOnce. My thought is that I’ll run the container as a sidecar within the pod, so it can access the same PV. I’d have to modify it to run its own cron process to kick off the job at some interval or use a maintenance script that sleeps between runs. Which form it takes would be controlled by an environment variable (or if I feel super nerdy, have it poll the API to figure out what kind of resource it is).
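The sleep-between-runs variant of that sidecar could be as simple as overriding the container’s command, with the interval driven by an environment variable (`BACKUP_INTERVAL` and the app names here are hypothetical):

```yaml
containers:
- name: app
  image: my-app               # the workload that owns the RWO volume (placeholder)
  volumeMounts:
  - name: data
    mountPath: /var/lib/app
- name: borgmatic
  image: monachus/borgmatic
  env:
  - name: BACKUP_INTERVAL     # hypothetical; seconds between backup runs
    value: "3600"
  command: ["/bin/sh", "-c"]
  args:
  - while true; do borgmatic; sleep "$BACKUP_INTERVAL"; done
  volumeMounts:
  - name: data
    mountPath: /data
    readOnly: true            # the sidecar only reads the shared volume
```

This works with ReadWriteOnce because the access mode restricts attachment across nodes, not across containers within the same pod.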
Ah, NFS. That makes sense then. A sidecar container for borgmatic is a good idea; I hadn’t thought of that. If you get to the point of producing a public Helm chart or similar, please let me know.
Anyway, I’ve updated the borgmatic README with the link. Thank you for pointing it out!