No errors and no backup #853
Reference: borgmatic-collective/borgmatic#853
What I'm trying to do and why
I am running Mailcow in Docker (it is only available for Docker) and need to back up its files according to the Mailcow documentation:
https://docs.mailcow.email/third_party/borgmatic/third_party-borgmatic/
The backup runs without any errors, but only a few KB of files are saved instead of over 3 GB.
Steps to reproduce
config.yaml file
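The config.yaml contents were not captured in this copy of the thread. A hypothetical reconstruction, inferred from the repository URL and flags in the borg create command shown further down (the reporter's actual file may differ):

```yaml
# Hypothetical reconstruction -- the actual config.yaml was not included.
# Repository URL, one_file_system, and read_special are inferred from the
# borg create command quoted later in this thread.
source_directories:
    - /mnt/source
repositories:
    - path: ssh://uxxxxxx-sub11@uxxxxxx-sub11.your-storagebox.de:23/./mailcow
one_file_system: true
read_special: true
```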
Actual behavior
Expected behavior
It should back up over 3 GB of files.
Directory size:
Files including subdirectories:
According to borgmatic info, only 5 files have been backed up.
Other notes / implementation ideas
Running:
borg create --one-file-system --read-special ssh://uxxxxxx-sub11@uxxxxxx-sub11.your-storagebox.de:23/./mailcow::{hostname}-{now:%Y-%m-%dT%H:%M:%S.%f} /etc/borgmatic.d/config.yaml /mnt/source /root/.borgmatic --dry-run --list
Result:
After the backup, borgmatic info reports a size of around 50 KB instead of 3 GB.
borgmatic version
1.8.9
borgmatic installation method
Docker Container: latest
Borg version
borg 1.2.8
Python version
Python 3.12.2
Database version (if applicable)
psql (PostgreSQL) 16.2
Operating system and version
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.19.1
PRETTY_NAME="Alpine Linux v3.19"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://gitlab.alpinelinux.org/alpine/aports/-/issues"
Thanks for taking the time to file this and provide all the details! My guess at this point is that --one-file-system is causing problems here. Specifically, if /mnt/source/postfix, /mnt/source/vmail, /mnt/source/rspamd, /mnt/source/redis, or /mnt/source/crypt reside on different filesystems from /mnt/source, then Borg won't actually traverse those filesystem boundaries and back up the contained files.

Two ways I can think of to verify this theory. First, you can run the dry run again, but this time without --one-file-system:

borg create --read-special ssh://uxxxxxx-sub11@uxxxxxx-sub11.your-storagebox.de:23/./mailcow::{hostname}-{now:%Y-%m-%dT%H:%M:%S.%f} /etc/borgmatic.d/config.yaml /mnt/source /root/.borgmatic --dry-run --list
If that starts including way more files, then it's pretty clear --one-file-system is the proximate source of the problem.

The second way is to run mount within your borgmatic container and observe whether the various directories in question are mounted as separate filesystems.

The dry run listed all the files, so I started a regular run. It ran into some issues and made no progress for minutes; after several attempts that I interrupted, it was stuck on a file.
borg create --read-special ssh://uxxxxxx-sub11@uxxxxxx-sub11.your-storagebox.de:23/./mailcow::{hostname}-{now:%Y-%m-%dT%H:%M:%S.%f} /etc/borgmatic.d/config.yaml /mnt/source /root/.borgmatic --list
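The separate-filesystem check suggested above can also be done by comparing device numbers from inside the container; a sketch, using the paths mentioned in this thread (substitute your own):

```shell
# Compare device IDs: if a sub-directory reports a different device number
# than /mnt/source, it is a separate filesystem, and --one-file-system will
# keep Borg from descending into it. Paths below are examples from this thread.
for dir in /mnt/source /mnt/source/vmail /mnt/source/postfix /mnt/source/rspamd; do
  stat -c '%d  %n' "$dir" 2>/dev/null || echo "not present: $dir"
done
```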
Could it be the directory and file permissions? According to the borgmatic documentation, mount is used to extract; how do I use that in a container? Trying to mount runs into another issue with modprobe, which does not exist in a container.
That Borg command is likely hanging on qmgr because --read-special is given and qmgr is a named pipe that isn't excluded (it would be excluded if run from borgmatic with your current configuration). But what this demonstrates is that the Borg command without --one-file-system does descend into some of those directories, and therefore --one-file-system might be causing problems here.

When I was suggesting that you run mount, I wasn't suggesting borgmatic mount or borg mount but rather just mount by itself! That will show you all mounted filesystems in your container and will hopefully indicate whether your sub-directories in /mnt/source are on separate filesystems.

I am executing the commands inside the borgmatic Docker container, and the volumes are already mounted there. As I understand it, you mean I should back up outside of the container over the volumes. That would be another option, but an insecure one, as there would be access to the file system.
This is the docker-compose.override.yml file; according to Mailcow, it should be done this way:
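The override file itself did not survive in this copy of the thread. Per the Mailcow borgmatic guide it mounts each Mailcow volume under /mnt/source, roughly like the illustrative sketch below (service and volume names are assumptions, not the reporter's actual file):

```yaml
# Illustrative sketch only -- the actual override file was not captured.
# Each named volume becomes its own mount (and thus its own filesystem)
# inside the borgmatic container, which is why --one-file-system stops at them.
services:
  borgmatic-mailcow:
    image: ghcr.io/borgmatic-collective/borgmatic
    volumes:
      - vmail-vol-1:/mnt/source/vmail:ro
      - postfix-vol-1:/mnt/source/postfix:ro
      - rspamd-vol-1:/mnt/source/rspamd:ro
```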
The last option would be to set up Mailcow again and give it a try.
Update:
I set up Mailcow freshly, removed the volumes, and ran it again. I used borgmatic -v 2 to back up and got the same result: no files were backed up.

Then I gave the command above a try, and it began to back up but got stuck again on the file

A /mnt/source/postfix/public/pickup
borg create --read-special ssh://uxxxxxx-sub11@uxxxxxx-sub11.your-storagebox.de:23/./mailcow::{hostname}-{now:%Y-%m-%dT%H:%M:%S.%f} /etc/borgmatic.d/config.yaml /mnt/source /root/.borgmatic --list
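For what it's worth, pickup in Postfix's public directory is a named pipe (FIFO), and with --read-special Borg opens special files and reads their contents; reading a FIFO with no writer blocks indefinitely. A minimal sketch of that behavior, using a hypothetical temp path:

```shell
# A reader on a FIFO blocks until a writer appears; with no writer it hangs
# forever. 'timeout' kills the blocked read after 1 second (exit status 124).
mkfifo /tmp/demo-fifo          # hypothetical path for the demonstration
timeout 1 cat /tmp/demo-fifo   # blocks, then gets killed by timeout
echo "cat exited with status $?"
rm /tmp/demo-fifo
```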
Sorry, I think we're talking past each other here! I wasn't suggesting that you back up outside of the container. I was only suggesting that you run the plain mount command (without borgmatic or Borg) within the container so we could get an idea of how your filesystems are mounted. Although based on the Docker Compose file you posted, I think it's pretty clear how they're mounted now!

Got it. So it looks like your sub-directories (/mnt/source/vmail, etc.) are indeed mounted separately, and therefore I suspect they're showing up within your container as separate filesystems. That means that if your source_directories only contains /mnt/source, borgmatic/Borg will never descend into those sub-directories (as long as --one-file-system is implicitly in use).

So the solution, I think, is to replace /mnt/source in source_directories with each of your sub-directories by name. Example:

I realize this is less convenient than just specifying /mnt/source, but it's necessary due to the interaction between borgmatic/Borg's use of --one-file-system and Docker's use of multiple filesystems for the volume mounts.

Yeah, you'd need the volumes mounted for borgmatic to be able to find those files. See my answer above about a way to hopefully make things work even with the volumes mounted.
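Spelled out, the sub-directory approach suggested above would change source_directories from the single parent to the individual mounts; a sketch using the directory names mentioned earlier in this thread:

```yaml
# Replace the single /mnt/source entry with each mounted sub-directory,
# so --one-file-system no longer stops Borg at the volume boundaries.
source_directories:
    - /mnt/source/vmail
    - /mnt/source/postfix
    - /mnt/source/rspamd
    - /mnt/source/redis
    - /mnt/source/crypt
```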
After 3 days of research, it finally works. Thank you very much for your time and help. I will link this thread for the folks at Mailcow.
Awesome, I'm so glad to hear it's working now! Thanks for your patience here—and for passing along the info to the Mailcow devs.