Database restoration #969
What I'm trying to do and why
I'm trying to restore the databases that I backed up into my Borg repo.
Steps to reproduce
source_directories:
repositories:
  - path: ///bc.borg
    label: bc
  - label: usb
exclude_patterns:
keep_hourly: 6
keep_daily: 7
keep_weekly: 4
keep_monthly: 6
postgresql_databases:
  - Gitea database
  - Keycloak database
  - Coturn database
  - Synapse database
  - WikiJS database
sqlite_databases:
  - Vaultwarden database
mariadb_databases:
  - Mailcow database
  - Bookstack database
mongodb_databases:
  - Wekan database
apprise:
  states:
    - start
    - finish
    - fail
Actual behavior
borgmatic restore --archive <ARCHIVE_NAME> -v 2
BORG_RELOCATED_REPO_ACCESS_IS_OK=*** BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK=*** BORG_EXIT_CODES=*** borg --version --debug --show-rc
/etc/borgmatic.d/config.yaml: Borg 1.4.0
usb: Running actions for repository
/etc/borgmatic.d/config.yaml: No commands to run for pre-actions hook
usb: Restoring data sources from archive <ARCHIVE_NAME>
usb: Using runtime directory /tmp/borgmatic-<RANDOM_STRING>/borgmatic
/media//backup/usb.borg: Calling bootstrap hook function remove_data_source_dumps
/media//backup/usb.borg: Looking for bootstrap manifest files to remove in /tmp/borgmatic-/borgmatic/bootstrap
/media//backup/usb.borg: Calling btrfs hook function remove_data_source_dumps
/media//backup/usb.borg: Calling lvm hook function remove_data_source_dumps
/media//backup/usb.borg: Calling mariadb hook function remove_data_source_dumps
/media//backup/usb.borg: Removing MariaDB data source dumps
/media//backup/usb.borg: Calling mongodb hook function remove_data_source_dumps
/media//backup/usb.borg: Removing MongoDB data source dumps
/media//backup/usb.borg: Calling mysql hook function remove_data_source_dumps
/media//backup/usb.borg: Removing MySQL data source dumps
/media//backup/usb.borg: Calling postgresql hook function remove_data_source_dumps
/media//backup/usb.borg: Removing PostgreSQL data source dumps
/media//backup/usb.borg: Calling sqlite hook function remove_data_source_dumps
/media//backup/usb.borg: Removing SQLite data source dumps
/media//backup/usb.borg: Calling zfs hook function remove_data_source_dumps
BORG_RELOCATED_REPO_ACCESS_IS_OK=*** BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK=*** BORG_EXIT_CODES=*** borg list --debug --show-rc --format {path}{NL} /media//backup/usb.borg::<ARCHIVE_NAME> sh:/* sh:/* sh:/*
using builtin fallback logging configuration
33 self tests completed in 0.12 seconds
Verified integrity of /media//backup/usb.borg/index.46
TAM-verified manifest
security: read previous location '/media//backup/usb.borg'
security: read manifest timestamp ''
security: determined newest manifest timestamp as
security: repository checks ok, allowing access
Archive authentication DISABLED.
TAM-verified archive
terminating with success status, rc 0
usb: Restoring data source <DATA_SOURCE>@ (sqlite_databases)
/media//backup/usb.borg: Calling bootstrap hook function make_data_source_dump_patterns
/media//backup/usb.borg: Calling mariadb hook function make_data_source_dump_patterns
/media//backup/usb.borg: Calling mongodb hook function make_data_source_dump_patterns
/media//backup/usb.borg: Calling postgresql hook function make_data_source_dump_patterns
/media//backup/usb.borg: Calling sqlite hook function make_data_source_dump_patterns
BORG_RELOCATED_REPO_ACCESS_IS_OK=*** BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK=*** BORG_EXIT_CODES=*** borg extract --debug --list --show-rc --stdout /media//backup/usb.borg::<ARCHIVE_NAME> re:(?s:/./db)|(?s:<RANDOM_STRING>/./db)|(?s:<RANDOM_STRING>/./db) >
/media//backup/usb.borg: Calling sqlite_databases hook function restore_data_source_dump
/media//backup/usb.borg: Restoring SQLite database at /mnt/source/vaultwarden/db.sqlite3
/media//backup/usb.borg: Removed existing SQLite database at /mnt/source/vaultwarden/db.sqlite3
sqlite3 /mnt/source/vaultwarden/db.sqlite3 < 3
using builtin fallback logging configuration
33 self tests completed in 0.11 seconds
Verified integrity of /media//backup/usb.borg/index.46
TAM-verified manifest
security: read previous location '/media//backup/usb.borg'
security: read manifest timestamp ''
security: determined newest manifest timestamp as
security: repository checks ok, allowing access
Archive authentication DISABLED.
TAM-verified archive
<DATA_SOURCE_PATH>
terminating with success status, rc 0
/media//backup/usb.borg: Calling bootstrap hook function remove_data_source_dumps
/media//backup/usb.borg: Looking for bootstrap manifest files to remove in /tmp/borgmatic-/borgmatic/bootstrap
/media//backup/usb.borg: Calling btrfs hook function remove_data_source_dumps
/media//backup/usb.borg: Calling lvm hook function remove_data_source_dumps
/media//backup/usb.borg: Calling mariadb hook function remove_data_source_dumps
/media//backup/usb.borg: Removing MariaDB data source dumps
/media//backup/usb.borg: Calling mongodb hook function remove_data_source_dumps
/media//backup/usb.borg: Removing MongoDB data source dumps
/media//backup/usb.borg: Calling mysql hook function remove_data_source_dumps
/media//backup/usb.borg: Removing MySQL data source dumps
/media//backup/usb.borg: Calling postgresql hook function remove_data_source_dumps
/media//backup/usb.borg: Removing PostgreSQL data source dumps
/media//backup/usb.borg: Calling sqlite hook function remove_data_source_dumps
/media//backup/usb.borg: Removing SQLite data source dumps
/media//backup/usb.borg: Calling zfs hook function remove_data_source_dumps
usb: Error running actions for repository
Cannot restore data sources <DATA_SOURCE_1>, <DATA_SOURCE_2>, <DATA_SOURCE_3>, <DATA_SOURCE_4>, <DATA_SOURCE_5>, <DATA_SOURCE_6>, <DATA_SOURCE_7>, <DATA_SOURCE_8> missing from borgmatic's configuration
/etc/borgmatic.d/config.yaml: An error occurred
summary:
/etc/borgmatic.d/config.yaml: Loading configuration file
/etc/borgmatic.d/config.yaml: An error occurred
usb: Error running actions for repository
Cannot restore data sources <DATA_SOURCE_1>, <DATA_SOURCE_2>, <DATA_SOURCE_3>, <DATA_SOURCE_4>, <DATA_SOURCE_5>, <DATA_SOURCE_6>, <DATA_SOURCE_7>, <DATA_SOURCE_8> missing from borgmatic's configuration
Need some help? https://torsion.org/borgmatic/#issues
Expected behavior
Data is restored to databases
Other notes / implementation ideas
I'm guessing that I made a simple mistake, but I've been completely stumped. I'm running the Docker image listed on a Debian 12 host; all of the versions listed are from inside the container. Let me know if I need to post the versions from the host as well.
I have run the 'borgmatic extract' command and pulled the databases from the archive and repo in question, and I was able to successfully restore the MariaDB databases through a shell script. borgmatic also seems to be restoring the SQLite3 database fine on its own, so I have no reason to believe that the repository is corrupt. Also worth noting: all directories other than the databases have been restored successfully as well. I'm just having an issue getting borgmatic to restore the databases.
borgmatic version
borgmatic:/# borgmatic --version
1.9.5
borgmatic installation method
Docker container image: ghcr.io/borgmatic-collective/borgmatic
Borg version
borgmatic:/# borg --version
borg 1.4.0
Python version
borgmatic:/# python3 --version
Python 3.12.8
Database version (if applicable)
borgmatic:/# psql --version
psql (PostgreSQL) 17.2
borgmatic:/# mysql --version
mysql: Deprecated program name. It will be removed in a future release, use '/usr/bin/mariadb' instead
mysql from 11.4.4-MariaDB, client 15.2 for Linux (x86_64) using readline 5.1
borgmatic:/# mongodump --version
mongodump version: 100.9.4
git version: alpine
Go version: go1.23.3
os: linux
arch: amd64
compiler: gc
Operating system and version
Debian GNU/Linux 12 (bookworm)
Hi, thanks for filing this and providing all the details. Can I get a look at the full output of the following line?
I suspect that changes introduced in 1.9.5 (#418 in particular) may have broken database restoration in certain cases, e.g. when a pre-1.9.5 archive is restored with borgmatic 1.9.5. Seeing the output of that line would help me verify and diagnose that. Thanks!
No problem, the line is as follows:
Cannot restore data sources bookstack@bookstack_mysql-1 (mariadb_databases), coturn_db@synapse_postgres (postgresql_databases), gitea@gitea_postgres (postgresql_databases), keycloak@postgres_db_keycloak (postgresql_databases), mailcow@mailcowdockerized-mysql-mailcow-1 (mariadb_databases), synapse@synapse_postgres (postgresql_databases), wekan@wekandb (mongodb_databases), wikijs@wikijs_db (postgresql_databases) missing from borgmatic's configuration
Thanks! Also, while we're at it, can I get a look at the databases portion of your archive listing? E.g.,
borgmatic list --archive latest | grep _databases
No problem, here are the databases from my 'latest':
borgmatic:/# borgmatic list --archive latest | grep _databases
drwxr-xr-x root root 0 Thu, 2025-01-16 12:28:03 borgmatic/mariadb_databases
drwx------ root root 0 Thu, 2025-01-16 12:28:03 borgmatic/mariadb_databases/mailcowdockerized-mysql-mailcow-1:3306
-rw------- root root 153125 Thu, 2025-01-16 12:28:03 borgmatic/mariadb_databases/mailcowdockerized-mysql-mailcow-1:3306/mailcow
drwx------ root root 0 Thu, 2025-01-16 12:28:03 borgmatic/mariadb_databases/bookstack_mysql-1:3306
-rw------- root root 702566 Thu, 2025-01-16 12:28:03 borgmatic/mariadb_databases/bookstack_mysql-1:3306/bookstack
drwxr-xr-x root root 0 Thu, 2025-01-16 12:28:03 borgmatic/mongodb_databases
drwx------ root root 0 Thu, 2025-01-16 12:28:03 borgmatic/mongodb_databases/wekandb:27017
-rw------- root root 27056 Thu, 2025-01-16 12:28:03 borgmatic/mongodb_databases/wekandb:27017/wekan
drwxr-xr-x root root 0 Thu, 2025-01-16 12:28:03 borgmatic/postgresql_databases
drwx------ root root 0 Thu, 2025-01-16 12:28:03 borgmatic/postgresql_databases/gitea_postgres:5432
-rw------- root root 340985 Thu, 2025-01-16 12:28:03 borgmatic/postgresql_databases/gitea_postgres:5432/gitea
drwx------ root root 0 Thu, 2025-01-16 12:28:03 borgmatic/postgresql_databases/postgres_db_keycloak:5432
-rw------- root root 226959 Thu, 2025-01-16 12:28:03 borgmatic/postgresql_databases/postgres_db_keycloak:5432/keycloak
drwx------ root root 0 Thu, 2025-01-16 12:28:03 borgmatic/postgresql_databases/synapse_postgres:5432
-rw------- root root 10554 Thu, 2025-01-16 12:28:03 borgmatic/postgresql_databases/synapse_postgres:5432/coturn_db
-rw------- root root 5548375 Thu, 2025-01-16 12:28:03 borgmatic/postgresql_databases/synapse_postgres:5432/synapse
drwx------ root root 0 Thu, 2025-01-16 12:28:03 borgmatic/postgresql_databases/wikijs_db:5432
-rw------- root root 63410 Thu, 2025-01-16 12:28:03 borgmatic/postgresql_databases/wikijs_db:5432/wikijs
drwxr-xr-x root root 0 Thu, 2025-01-16 12:28:03 borgmatic/sqlite_databases
drwx------ root root 0 Thu, 2025-01-16 12:28:03 borgmatic/sqlite_databases/localhost
-rw------- root root 314978 Thu, 2025-01-16 12:28:03 borgmatic/sqlite_databases/localhost/db
And here are the ones from the archive I want to restore:
borgmatic:/# borgmatic list --archive borgmatic-2025-01-14T20:28:18.817617 | grep _databases
drwxr-xr-x root root 0 Tue, 2025-01-14 20:28:17 borgmatic/mariadb_databases
drwx------ root root 0 Tue, 2025-01-14 20:28:17 borgmatic/mariadb_databases/mailcowdockerized-mysql-mailcow-1
-rw------- root root 150140 Tue, 2025-01-14 20:28:17 borgmatic/mariadb_databases/mailcowdockerized-mysql-mailcow-1/mailcow
drwx------ root root 0 Tue, 2025-01-14 20:28:17 borgmatic/mariadb_databases/bookstack_mysql-1
-rw------- root root 700366 Tue, 2025-01-14 20:28:17 borgmatic/mariadb_databases/bookstack_mysql-1/bookstack
drwxr-xr-x root root 0 Tue, 2025-01-14 20:28:17 borgmatic/mongodb_databases
drwx------ root root 0 Tue, 2025-01-14 20:28:17 borgmatic/mongodb_databases/wekandb
-rw------- root root 1198989 Tue, 2025-01-14 20:28:17 borgmatic/mongodb_databases/wekandb/wekan
drwxr-xr-x root root 0 Tue, 2025-01-14 20:28:17 borgmatic/postgresql_databases
drwx------ root root 0 Tue, 2025-01-14 20:28:17 borgmatic/postgresql_databases/gitea_postgres
-rw------- root root 357638 Tue, 2025-01-14 20:28:17 borgmatic/postgresql_databases/gitea_postgres/gitea
drwx------ root root 0 Tue, 2025-01-14 20:28:17 borgmatic/postgresql_databases/postgres_db_keycloak
-rw------- root root 226959 Tue, 2025-01-14 20:28:17 borgmatic/postgresql_databases/postgres_db_keycloak/keycloak
drwx------ root root 0 Tue, 2025-01-14 20:28:17 borgmatic/postgresql_databases/synapse_postgres
-rw------- root root 10554 Tue, 2025-01-14 20:28:17 borgmatic/postgresql_databases/synapse_postgres/coturn_db
-rw------- root root 5552530 Tue, 2025-01-14 20:28:17 borgmatic/postgresql_databases/synapse_postgres/synapse
drwx------ root root 0 Tue, 2025-01-14 20:28:17 borgmatic/postgresql_databases/wikijs_db
-rw------- root root 87090 Tue, 2025-01-14 20:28:17 borgmatic/postgresql_databases/wikijs_db/wikijs
drwxr-xr-x root root 0 Tue, 2025-01-14 20:28:17 borgmatic/sqlite_databases
drwx------ root root 0 Tue, 2025-01-14 20:28:17 borgmatic/sqlite_databases/localhost
-rw------- root root 314978 Tue, 2025-01-14 20:28:17 borgmatic/sqlite_databases/localhost/db
Okay, yeah, I think I see what's going on. I'm guessing the archive you want to restore was created pre-borgmatic-1.9.5, before the configured port was encoded in the archive dump filename. And when you go to restore such a portless database dump, borgmatic isn't currently smart enough to match it against your configured databases, because those all have ports.
So as a temporary workaround, you could comment out the ports from your configured databases (since it looks like they are all using default ports), and then I bet you could restore the archive in question just fine. But that's not a permanent solution, so I think the work in this ticket is to make portless archive database dumps restore properly even when a (default) port is configured.
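The workaround above might look roughly like this in config.yaml. This is only a sketch: the database names and hostnames are taken from the error message earlier in the thread, and the keys assume borgmatic's standard per-database options.

```yaml
# Temporary workaround sketch: ports commented out so portless
# pre-1.9.5 dump names can match the configured databases.
# Adjust names/hostnames to your actual config.
postgresql_databases:
  - name: gitea
    hostname: gitea_postgres
    # port: 5432
  - name: keycloak
    hostname: postgres_db_keycloak
    # port: 5432
```

The same change would apply to the mariadb_databases and mongodb_databases entries.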
summary:
/etc/borgmatic.d/config.yaml: Loading configuration file
/etc/borgmatic.d/config.yaml: An error occurred
usb: Error running actions for repository
pg_restore: error: could not execute query: ERROR: unrecognized configuration parameter "transaction_timeout"
Command was: SET transaction_timeout = 0;
Command 'pg_restore --no-password --if-exists --exit-on-error --clean --dbname keycloak --host postgres_db_keycloak --username keycloak' returned non-zero exit status 1.
Need some help? https://torsion.org/borgmatic/#issues
I'm unfortunately still getting this after commenting out the ports, but it's a step forward. I'll be troubleshooting on my own as well, and I'll post any solution I find here.
Yeah, I don't think that error is something caused by borgmatic, at least not directly. Is it possible you have a version mismatch between the Postgres dump and the server? See https://stackoverflow.com/questions/79058325/why-does-pg-dump-include-transaction-timeout-if-psql-doesnt-understand-it for more info.
More specifically, my guess is that the version of the PostgreSQL client in the borgmatic container is different than the version of your PostgreSQL server...? And I'm also guessing that you didn't have a problem restoring database dumps manually because you didn't restore from within the borgmatic container...?
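To illustrate the mismatch described above: a PostgreSQL 17 client's pg_dump emits "SET transaction_timeout = 0;", a parameter that pre-17 servers don't recognize. For a plain-format SQL dump, that line can simply be stripped before restoring. Note this is a general workaround sketch, not borgmatic's own mechanism; borgmatic's dumps go through pg_restore, so matching the client and server versions is the real fix.

```shell
# Create a tiny stand-in for a v17 pg_dump plain-format output.
printf 'SET transaction_timeout = 0;\nCREATE TABLE example(id int);\n' > dump.sql

# Strip the parameter an older server would reject.
sed '/^SET transaction_timeout/d' dump.sql > dump.compat.sql

cat dump.compat.sql   # the SET line is gone; the rest is untouched
```

This only helps for plain SQL dumps; custom-format dumps can't be edited with sed.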
Hey, I got wrapped up in stuff, but yesterday I figured out that if I specify the image as 'image: ghcr.io/borgmatic-collective/borgmatic:1.9.4', I'm able to back up and restore to another PC. However, if I use 'image: ghcr.io/borgmatic-collective/borgmatic:1.9.5' on both, the restore fails. I'll run the 1.9.5 version on both PCs and grab the error logs for you really quick.
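For reference, the image pin described above would look something like this as a compose fragment (the service name is a placeholder):

```yaml
services:
  borgmatic:
    # Pinning an exact tag keeps the bundled client tools
    # (psql, mariadb, mongodump, ...) stable across pulls.
    image: ghcr.io/borgmatic-collective/borgmatic:1.9.4
```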
Yeah, I think the difference between 1.9.4 and 1.9.5 is that the latter introduced the regression you found in this ticket. Specifically, you can't restore a database with a (default) port configured if the dump is stored in the archive without a port.
Oh, also, in regards to the PostgreSQL error, one thing that happened with the 1.9.5 container image is that it was upgraded to a PostgreSQL 17 client! https://github.com/borgmatic-collective/docker-borgmatic/releases
So that could very well be the cause of the pg_restore error you're getting.
The fix for the original restore problem was just released in borgmatic 1.9.6! I'm pretty sure the remaining PostgreSQL issue isn't borgmatic-specific, so I'll close this ticket for now. However, I'd be happy to discuss it further, here or on a new ticket. You might also consider raising the issue with the docker-borgmatic project if it is indeed an issue with the container. Thanks!