Compare commits


109 Commits

Author SHA1 Message Date
Dan Helfman 8cec7c74d8 Add "--strip-components all" on the "extract" action to remove leading path components (#647). 2023-03-09 10:09:16 -08:00
Dan Helfman d3086788eb Document how to list database dumps in an archive. 2023-03-08 16:09:41 -08:00
Dan Helfman 8d860ea02c
Enhanced docs with info on fetching mysql database size
Merge pull request #46 from Jelle-SamsonIT/patch-3
2023-03-08 15:52:28 -08:00
Dan Helfman b343363bb8 Change the default action order to: "create", "prune", "compact", "check" (#304). 2023-03-08 14:05:06 -08:00
Dan Helfman 9db31bd1e9 Run any command-line actions in the order specified instead of using a fixed ordering (#304). 2023-03-08 13:19:41 -08:00
Dan Helfman d88bcc8be9 Add Healthchecks "log" state feature to NEWS. 2023-03-07 15:45:23 -08:00
Dan Helfman 332f7c4bb6 Add support for healthchecks "log" feature (#628).
Reviewed-on: borgmatic-collective/borgmatic#645
2023-03-07 22:21:30 +00:00
Dan Helfman 5d19d86e4a Add flake8-quotes to complain about incorrect quoting so I don't have to! 2023-03-07 14:08:35 -08:00
Soumik Dutta 044ae7869a fix tests
Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-03-08 03:30:12 +05:30
Dan Helfman 62ae82f2c0 Mention searching for files in the extract a backup guide. 2023-03-06 22:59:34 -08:00
Dan Helfman 66194b7304 Update dates in documentation examples. 2023-03-06 22:41:43 -08:00
Soumik Dutta 98e429594e added tests to make sure unsupported log states are detected
Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-03-06 20:31:00 +05:30
Soumik Dutta 4fcfddbe08 return early if unsupported state is passed
Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-03-06 19:58:57 +05:30
Soumik Dutta f442aeae9c fix logs_monitor_start_error()
Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-03-06 05:21:56 +05:30
Soumik Dutta e211863cba update test_borgmatic.py
Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-03-06 05:12:24 +05:30
Soumik Dutta 45256ae33f add test for healthchecks
Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-03-06 03:38:08 +05:30
Soumik Dutta 1573d68fe2 update schema.yaml description
also add monitor.State.LOG to cronitor.

Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-03-05 21:57:13 +05:30
Soumik Dutta 69f6695253 Add support for healthchecks "log" feature #628
Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-03-05 19:27:32 +05:30
Dan Helfman a7c055264d
Fix incorrect documentation TOC background by removing extra dark mode styles.
Merge pull request #52 from diivi/fix/remove-special-dark-mode-attributes
2023-03-04 16:18:04 -08:00
Divyansh Singh db18364a73 fix: remove extra dark mode styles 2023-03-05 03:16:46 +05:30
Dan Helfman 22498ebd4c In the documentation, mention what version of borgmatic introduced SQLite support. 2023-03-04 10:50:28 -08:00
Dan Helfman e1f02d9fa5 Add SQLite feature to NEWS and also integrations. 2023-03-04 09:59:16 -08:00
Dan Helfman 9ec220c600
Add SQLite database dump/restore hook (#295).
feat: add dump-restore support for sqlite databases
2023-03-04 09:47:21 -08:00
Divyansh Singh cf0275a3ed remove test path 2023-03-04 23:00:57 +05:30
Divyansh Singh c71eb60cd2 mock os.remove instead of actually removing a file 2023-03-04 13:08:30 +05:30
Divyansh Singh 675e54ba9f use os.remove and improve tests 2023-03-04 12:43:07 +05:30
Divyansh Singh 1793ad74bd add sqlite for e2e tests 2023-03-04 02:41:14 +05:30
Divyansh Singh 767a7d900b e2e tests schema update 2023-03-04 01:29:01 +05:30
Divyansh Singh 903507bd03 code review 2023-03-04 01:27:07 +05:30
Dan Helfman b6cf7d2adc Bump version for release. 2023-03-02 15:34:22 -08:00
Dan Helfman a071e02d20 With the "create" action and the "--list" ("--files") flag, only show excluded files at verbosity 2 (#620). 2023-03-02 15:33:42 -08:00
Divyansh Singh 3aa88085ed formatting fix 2023-03-03 00:01:52 +05:30
Divyansh Singh af1cc27988 feat: add dump-restore support for sqlite databases 2023-03-02 23:55:16 +05:30
Dan Helfman dbf8301c19 Add "checkpoint_volume" configuration option to create checkpoints every specified number of bytes. 2023-02-27 10:47:17 -08:00
Dan Helfman 2a306bef12 Fix tests. 2023-02-26 23:34:17 -08:00
Dan Helfman 2a36a2a312 Add "--repository" flag to the "rcreate" action. Add "--progress" flag to the "transfer" action. 2023-02-26 23:22:23 -08:00
Dan Helfman d7a07f0428 Support status character changes in Borg 2.0.0b5 when filtering out special files that cause Borg to hang. 2023-02-26 22:36:13 -08:00
Dan Helfman da321e180d Fix the "create" action with the "--dry-run" flag querying for databases when a PostgreSQL/MySQL "all" database is configured. 2023-02-26 22:15:12 -08:00
Dan Helfman c6582e1171 Internally support new Borg 2.0.0b5 "--filter" status characters / item flags for the "create" action. 2023-02-26 17:17:25 -08:00
Dan Helfman 9b83afe491 With the "create" action, only one of "--list" ("--files") and "--progress" flags can be used. 2023-02-26 17:05:56 -08:00
Dan Helfman 2814ac3642 Update Borg 2.0 documentation links. 2023-02-26 16:44:43 -08:00
Dan Helfman 8a9d5d93f5 Add ntfy authentication to NEWS. 2023-02-25 14:23:42 -08:00
Dan Helfman 783a6d3b45 Add authentication to the ntfy hook (#621).
Reviewed-on: borgmatic-collective/borgmatic#644
2023-02-25 22:04:37 +00:00
Tom Hubrecht 95575c3450 Add auth test for the ntfy hook 2023-02-25 20:04:39 +01:00
Tom Hubrecht 9b071ff92f Make the auth logic more explicit and warnings if necessary 2023-02-25 20:04:39 +01:00
Tom Hubrecht d80e716822 Add authentication to the ntfy hook 2023-02-24 17:35:53 +01:00
Dan Helfman 418ebc8843 Add MySQL database hook "add_drop_database" configuration option to control whether dumped MySQL databases get dropped right before restore (#642). 2023-02-20 15:32:47 -08:00
Dan Helfman f5a448c7c2 Fix for potential data loss (data not getting backed up) when dumping large "directory" format PostgreSQL/MongoDB databases (#643). 2023-02-20 15:18:51 -08:00
Dan Helfman 37ac542b31 Merge pull request 'setup: Add link to MacPorts package' (#641) from neverpanic/borgmatic:cal-docs-macports-port into master
Reviewed-on: borgmatic-collective/borgmatic#641
2023-02-15 17:31:03 +00:00
Clemens Lang 8c7d7e3e41 setup: Add link to MacPorts package 2023-02-15 10:47:59 +01:00
Dan Helfman b811f125b2 Clarify "checks" configuration documentation for older versions of borgmatic (#639). 2023-02-12 21:42:43 -08:00
Dan Helfman 061f3e7917 Remove related documentation links. 2023-01-26 16:12:01 -08:00
Dan Helfman 6055918907 Upgrade documentation image dependencies. 2023-01-26 16:11:41 -08:00
Dan Helfman 4a90e090ad Clarify NEWS on database "all" dump feature applying to MySQL as well. 2023-01-26 15:28:17 -08:00
Dan Helfman 301b29ee11 Bump version for release. 2023-01-26 15:17:19 -08:00
Dan Helfman c1eb210253 Fix code style flake issue. 2023-01-26 15:09:35 -08:00
Dan Helfman 30cca62d09 Add configuration options for database command customization (#630). 2023-01-26 14:59:17 -08:00
Dan Helfman 113c0e7616 Update documentation about changes to "all" database restores (#438, #560). 2023-01-26 10:53:58 -08:00
Dan Helfman 0e6b2c6773 Optionally dump "all" PostgreSQL databases to separate files instead of one combined dump file (#438, #560). 2023-01-25 23:31:07 -08:00
Dan Helfman 22c750b949 Mention "before_actions" command hook in soft failure documentation (#631). 2023-01-25 13:01:52 -08:00
Dan Helfman 504cce39a1 Add NEWS entry for #629. 2023-01-14 09:17:27 -08:00
Dan Helfman 6c4abb6803 Merge pull request 'Log warning for excluding special files only if list is not empty' (#629) from palto42/borgmatic:special_files_warn into master
Reviewed-on: borgmatic-collective/borgmatic#629
2023-01-14 17:15:01 +00:00
palto42 fd7ad86daa
conditional warning for excluding special files 2023-01-03 21:53:51 +01:00
Dan Helfman 6f3b23c79d Lowercase borgmatic in documentation. 2022-12-23 14:12:48 -08:00
Dan Helfman 4838f5e810 Add borgmatic minimum version to compact docs (#624).
Reviewed-on: borgmatic-collective/borgmatic#625
2022-12-23 22:11:45 +00:00
Macguire Rintoul 116f1ab989 add borgmatic minimum version to compact docs 2022-12-23 13:32:01 -08:00
Dan Helfman 5e15c9f2bc Fix traceback when include merging on ARM64 (#622). 2022-12-23 10:07:53 -08:00
Dan Helfman 442641f9f6 Update borgmatic social links. 2022-12-16 11:39:05 -08:00
Dan Helfman f67c544be6 Optionally dump "all" PostgreSQL databases to separate files instead of one combined dump file (#438, #560). 2022-12-15 22:59:42 -08:00
Dan Helfman 437fd4dbae Update developer contributing instructions as well. 2022-12-13 23:56:32 -08:00
Dan Helfman 36873252d6 Update developer instructions. 2022-12-13 23:44:27 -08:00
Dan Helfman 1ef82a27fa Clarify data/archives check implicit enabling. 2022-12-12 16:03:05 -08:00
Dan Helfman 6837dcbf42 Clarify documentation about transferring archives between related repositories. 2022-12-10 12:59:44 -08:00
Dan Helfman c657764367 Fix logs that interfere with JSON output by making warnings go to stderr instead of stdout (#602). 2022-12-02 12:12:10 -08:00
Dan Helfman f79286fc91 Bump version for release. 2022-11-27 09:00:40 -08:00
Dan Helfman 694d376d15 Clarify documentation about multiple repositories and separate configuration files (#613). 2022-11-21 13:33:01 -08:00
Dan Helfman ab4c08019c Upgrade pytest test dependency (security). 2022-11-18 11:13:51 -08:00
Dan Helfman fd39f54df7 Code formatting. 2022-11-18 08:35:01 -08:00
Dan Helfman ca7e18bb29
Override PostgreSQL dump/restore commands via configuration options (#311).
Merge pull request #49 from jpaniagualaconich/specify-pg-dump-restore-commands
2022-11-18 08:33:14 -08:00
Dan Helfman 6975a5b155 Fix "data" consistency check to support "check_last" and consistency "prefix" options (#611). 2022-11-17 10:19:48 -08:00
Dan Helfman b627d00595 More consistency checks documentation edits. 2022-11-14 15:13:47 -08:00
Dan Helfman 9bd8f1a6df Clarify consistency check configuration. 2022-11-14 14:58:42 -08:00
Javier Paniagua faf682ca35 specify pg dump/restore commands (#311) 2022-11-06 11:12:53 +01:00
Dan Helfman 6aeb74550d Clarify examples in include merging and deep merging documentation (#607). 2022-10-28 19:33:19 -07:00
Dan Helfman 89500df429 Fix traceback when a configuration section is present but lacking any options (#604). 2022-10-23 13:56:03 -07:00
Dan Helfman 82b072d0b7 Update documentation to mention using blake2 with "transfer" action. 2022-10-17 15:04:30 -07:00
Dan Helfman 018c0296fd Document that special file exclusion also excludes symlinks to special files (#596). 2022-10-15 10:14:46 -07:00
Dan Helfman 9c42e7e817 Fix regression in which "check" action errored on certain systems (#597, #598). 2022-10-14 16:19:26 -07:00
Dan Helfman 953277a066 Fix special file detection when broken symlinks are encountered (#596). 2022-10-14 09:41:08 -07:00
Dan Helfman e2002b5488 Bump version for release. 2022-10-12 10:59:54 -07:00
Dan Helfman c9742e1d04 Code formatting. 2022-10-12 10:52:32 -07:00
Dan Helfman 906da838ef Add missing break-lock action command-line help (#357). 2022-10-12 10:48:10 -07:00
Dan Helfman d7f1c10c8c To prevent Borg hangs, unconditionally delete stale named pipes before dumping databases (#360). 2022-10-12 10:26:09 -07:00
Dan Helfman e8e4d17168 Clean up changelog for the current dev release. 2022-10-06 22:06:03 -07:00
Dan Helfman a31ce337e9 Skip auto-exclusion of special files when user explicitly sets read_special to true (#587). 2022-10-06 11:07:43 -07:00
Dan Helfman 902730df46 Update sample systemd file to allow system idle (#589). 2022-10-05 10:20:25 -07:00
Dan Helfman c969c822ee Do not inhibit idle in borgmatic.service (#589).
Reviewed-on: borgmatic-collective/borgmatic#589
2022-10-05 17:14:19 +00:00
Dan Helfman c31702d092 Fix for potential data loss with "patterns_from". Also, display excluded files (#590). 2022-10-04 22:57:18 -07:00
Dan Helfman ba8fbe7a44 Add "break-lock" action for removing any repository and cache locks leftover from Borg aborting (#357). 2022-10-04 13:42:18 -07:00
Dan Helfman 2774c2e4c0 Add support for Borg 2's "--match-archives" flag (replaces "--glob-archives") (#591). 2022-10-03 22:50:37 -07:00
Dan Helfman ae036aebd7 When the "read_special" option is true or database hooks are enabled, auto-exclude special files for a "create" action to prevent Borg from hanging (#587). 2022-10-03 12:58:13 -07:00
LaserEyess 2e9f70d496 Do not inhibit idle in borgmatic.service
When backing up a machine with a monitor using logind to control idle
timeout and things like DPMS, borgmatic can block the screen from
turning on/off with systemd-inhibit. This is because by default
systemd-inhibit will block "idle:sleep:shutdown". Borgmatic does not
need to care about idle, only about suspend and shutdown. So, add an
explicit `--what` flag for what borgmatic should inhibit.

For more information see systemd-inhibit(1).
2022-10-01 09:33:38 -04:00
Dan Helfman 90be5b84b1 Fix changelog development version. 2022-09-20 14:02:48 -07:00
Dan Helfman 80e95f20a3 Add "borgmatic borg" documentation note about interactive prompts. 2022-09-20 14:01:47 -07:00
Dan Helfman ac7c7d4036 Warn when ignoring a configured "read_special" value of false, as true is needed when database hooks are enabled (#587). 2022-09-20 13:52:13 -07:00
Dan Helfman 858b0b9fbe Note version of borgmatic needed for "borgmatic borg" action (#586). 2022-09-13 09:05:18 -07:00
Dan Helfman 9cc043f60e Update "find" command in documentation to work on BSDs and not just Linux (#583). 2022-09-11 20:02:30 -07:00
Jelle @ Samson-IT 3720f22234
reworded and added 'all' caveat 2022-07-13 22:03:51 +02:00
Jelle @ Samson-IT 1fdec480d6
Added some info about fetching mysql database size 2022-07-13 13:29:45 +02:00
129 changed files with 6600 additions and 2249 deletions

1
.flake8 Normal file

@@ -0,0 +1 @@
select = Q0
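(Q0 is the error-code prefix used by the flake8-quotes plugin added in the commits above, so this one-line configuration selects its quote-style checks.)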

87
NEWS

@@ -1,3 +1,88 @@
1.7.9.dev0
* #295: Add a SQLite database dump/restore hook.
* #304: Change the default action order when no actions are specified on the command-line to:
"create", "prune", "compact", "check". If you'd like to retain the old ordering ("prune" and
"compact" first), then specify actions explicitly on the command-line.
* #304: Run any command-line actions in the order specified instead of using a fixed ordering.
* #628: Add a Healthchecks "log" state to send borgmatic logs to Healthchecks without signalling
success or failure.
* #647: Add "--strip-components all" feature on the "extract" action to remove leading path
components of files you extract. Must be used with the "--path" flag.
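As an illustration only, here's what using the new flag might look like; the archive name and path here are placeholders:

    borgmatic extract --archive latest --path home/example/notes --strip-components all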
1.7.8
* #620: With the "create" action and the "--list" ("--files") flag, only show excluded files at
verbosity 2.
* #621: Add optional authentication to the ntfy monitoring hook.
* With the "create" action, only one of "--list" ("--files") and "--progress" flags can be used.
This lines up with the new behavior in Borg 2.0.0b5.
* Internally support new Borg 2.0.0b5 "--filter" status characters / item flags for the "create"
action.
* Fix the "create" action with the "--dry-run" flag querying for databases when a PostgreSQL/MySQL
"all" database is configured. Now, these queries are skipped due to the dry run.
* Add "--repository" flag to the "rcreate" action to optionally select one configured repository to
create.
* Add "--progress" flag to the "transfer" action, new in Borg 2.0.0b5.
* Add "checkpoint_volume" configuration option to creates checkpoints every specified number of
bytes during a long-running backup, new in Borg 2.0.0b5.
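A minimal configuration sketch for this option; the byte count is illustrative:

    storage:
        checkpoint_volume: 1073741824  # Ask Borg to write a checkpoint roughly every 1 GiB.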
1.7.7
* #642: Add MySQL database hook "add_drop_database" configuration option to control whether dumped
MySQL databases get dropped right before restore (see the sketch after this version's entries).
* #643: Fix for potential data loss (data not getting backed up) when dumping large "directory"
format PostgreSQL/MongoDB databases. Prior to the fix, these dumps would not finish writing to
disk before Borg consumed them. Now, the dumping process completes before Borg starts. This only
applies to "directory" format databases; other formats still stream to Borg without using
temporary disk space.
* Fix MongoDB "directory" format to work with mongodump/mongorestore without error. Prior to this
fix, only the "archive" format worked.
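A sketch of the "add_drop_database" option mentioned above; the database name is a placeholder, and the option defaults to true:

    hooks:
        mysql_databases:
            - name: posts
              add_drop_database: false  # Don't drop the existing database right before restoring it.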
1.7.6
* #393, #438, #560: Optionally dump "all" PostgreSQL/MySQL databases to separate files instead of
one combined dump file, allowing more convenient restores of individual databases. You can enable
this by specifying the database dump "format" option when the database is named "all" (see the
sketch after this version's entries).
* #602: Fix logs that interfere with JSON output by making warnings go to stderr instead of stdout.
* #622: Fix traceback when include merging configuration files on ARM64.
* #629: Skip warning about excluded special files when no special files have been excluded.
* #630: Add configuration options for database command customization: "list_options",
"restore_options", and "analyze_options" for PostgreSQL, "restore_options" for MySQL, and
"restore_options" for MongoDB.
1.7.5
* #311: Override PostgreSQL dump/restore commands via configuration options (see the sketch after
this version's entries).
* #604: Fix traceback when a configuration section is present but lacking any options.
* #607: Clarify documentation examples for include merging and deep merging.
* #611: Fix "data" consistency check to support "check_last" and consistency "prefix" options.
* #613: Clarify documentation about multiple repositories and separate configuration files.
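A sketch of the #311 overrides, useful for example when PostgreSQL runs in a container; the commands and database name are placeholders:

    hooks:
        postgresql_databases:
            - name: users
              pg_dump_command: docker exec postgres_container pg_dump
              pg_restore_command: docker exec postgres_container pg_restore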
1.7.4
* #596: Fix special file detection erroring when broken symlinks are encountered.
* #597, #598: Fix regression in which "check" action errored on certain systems ("Cannot determine
Borg repository ID").
1.7.3
* #357: Add "break-lock" action for removing any repository and cache locks leftover from Borg
aborting.
* #360: To prevent Borg hangs, unconditionally delete stale named pipes before dumping databases.
* #587: When database hooks are enabled, auto-exclude special files from a "create" action to
prevent Borg from hanging. You can override/prevent this behavior by explicitly setting the
"read_special" option to true (see the sketch after this version's entries).
* #587: Warn when ignoring a configured "read_special" value of false, as true is needed when
database hooks are enabled.
* #589: Update sample systemd service file to allow system "idle" (e.g. a video monitor turning
off) while borgmatic is running.
* #590: Fix for potential data loss (data not getting backed up) when the "patterns_from" option
was used with "source_directories" (or the "~/.borgmatic" path existed, which got injected into
"source_directories" implicitly). The fix is for borgmatic to convert "source_directories" into
patterns whenever "patterns_from" is used, working around a Borg bug:
https://github.com/borgbackup/borg/issues/6994
* #590: In "borgmatic create --list" output, display which files get excluded from the backup due
to patterns or excludes.
* #591: Add support for Borg 2's "--match-archives" flag. This replaces "--glob-archives", which
borgmatic now treats as an alias for "--match-archives". But note that the two flags have
slightly different syntax. See the Borg 2 changelog for more information:
https://borgbackup.readthedocs.io/en/2.0.0b3/changes.html#version-2-0-0b3-2022-10-02
* Fix for "borgmatic --archive latest" not finding the latest archive when a verbosity is set.
 1.7.2
 * #577: Fix regression in which "borgmatic info --archive ..." showed repository info instead of
   archive info with Borg 1.
@@ -10,7 +95,7 @@
 * #574: Fix for potential data loss (data not getting backed up) when the "patterns" option was
   used with "source_directories" (or the "~/.borgmatic" path existed, which got injected into
   "source_directories" implicitly). The fix is for borgmatic to convert "source_directories" into
-  patterns whenever "patterns" is used, working around a potential Borg bug:
+  patterns whenever "patterns" is used, working around a Borg bug:
   https://github.com/borgbackup/borg/issues/6994
 1.7.0

README.md

@@ -67,6 +67,7 @@ borgmatic is powered by [Borg Backup](https://www.borgbackup.org/).
 <a href="https://www.mysql.com/"><img src="docs/static/mysql.png" alt="MySQL" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
 <a href="https://mariadb.com/"><img src="docs/static/mariadb.png" alt="MariaDB" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
 <a href="https://www.mongodb.com/"><img src="docs/static/mongodb.png" alt="MongoDB" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
+<a href="https://sqlite.org/"><img src="docs/static/sqlite.png" alt="SQLite" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
 <a href="https://healthchecks.io/"><img src="docs/static/healthchecks.png" alt="Healthchecks" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
 <a href="https://cronitor.io/"><img src="docs/static/cronitor.png" alt="Cronitor" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
 <a href="https://cronhub.io/"><img src="docs/static/cronhub.png" alt="Cronhub" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
@@ -104,23 +105,38 @@ offerings, but do not currently fund borgmatic development or hosting.
 ### Issues
 
-You've got issues? Or an idea for a feature enhancement? We've got an [issue
-tracker](https://projects.torsion.org/borgmatic-collective/borgmatic/issues). In order to
-create a new issue or comment on an issue, you'll need to [login
-first](https://projects.torsion.org/user/login). Note that you can login with
-an existing GitHub account if you prefer.
+Are you experiencing an issue with borgmatic? Or do you have an idea for a
+feature enhancement? Head on over to our [issue
+tracker](https://projects.torsion.org/borgmatic-collective/borgmatic/issues).
+In order to create a new issue or add a comment, you'll need to
+[register](https://projects.torsion.org/user/sign_up?invite_code=borgmatic)
+first. If you prefer to use an existing GitHub account, you can skip account
+creation and [login directly](https://projects.torsion.org/user/login).
 
-If you'd like to chat with borgmatic developers or users, head on over to the
-`#borgmatic` IRC channel on Libera Chat, either via <a
-href="https://web.libera.chat/#borgmatic">web chat</a> or a
-native <a href="ircs://irc.libera.chat:6697">IRC client</a>. If you
-don't get a response right away, please hang around a while—or file a ticket
-instead.
-
 Also see the [security
 policy](https://torsion.org/borgmatic/docs/security-policy/) for any security
 issues.
 
+### Social
+
+Check out the [Borg subreddit](https://www.reddit.com/r/BorgBackup/) for
+general Borg and borgmatic discussion and support.
+
+Also follow [borgmatic on Mastodon](https://fosstodon.org/@borgmatic).
+
+### Chat
+
+To chat with borgmatic developers or users, check out the `#borgmatic`
+IRC channel on Libera Chat, either via <a
+href="https://web.libera.chat/#borgmatic">web chat</a> or a native <a
+href="ircs://irc.libera.chat:6697">IRC client</a>. If you don't get a response
+right away, please hang around a while—or file a ticket instead.
+
+### Other
+
 Other questions or comments? Contact
 [witten@torsion.org](mailto:witten@torsion.org).
@@ -135,10 +151,14 @@ borgmatic is licensed under the GNU General Public License version 3 or any
 later version.
 
 If you'd like to contribute to borgmatic development, please feel free to
-submit a [Pull Request](https://projects.torsion.org/borgmatic-collective/borgmatic/pulls)
-or open an [issue](https://projects.torsion.org/borgmatic-collective/borgmatic/issues) first
-to discuss your idea. We also accept Pull Requests on GitHub, if that's more
-your thing. In general, contributions are very welcome. We don't bite!
+submit a [Pull
+Request](https://projects.torsion.org/borgmatic-collective/borgmatic/pulls) or
+open an
+[issue](https://projects.torsion.org/borgmatic-collective/borgmatic/issues) to
+discuss your idea. Note that you'll need to
+[register](https://projects.torsion.org/user/sign_up?invite_code=borgmatic)
+first. We also accept Pull Requests on GitHub, if that's more your thing. In
+general, contributions are very welcome. We don't bite!
 
 Also, please check out the [borgmatic development
 how-to](https://torsion.org/borgmatic/docs/how-to/develop-on-borgmatic/) for


36
borgmatic/actions/borg.py Normal file

@@ -0,0 +1,36 @@
import logging
import borgmatic.borg.borg
import borgmatic.borg.rlist
import borgmatic.config.validate
logger = logging.getLogger(__name__)
def run_borg(
repository, storage, local_borg_version, borg_arguments, local_path, remote_path,
):
'''
Run the "borg" action for the given repository.
'''
if borg_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, borg_arguments.repository
):
logger.info('{}: Running arbitrary Borg command'.format(repository))
archive_name = borgmatic.borg.rlist.resolve_archive_name(
repository,
borg_arguments.archive,
storage,
local_borg_version,
local_path,
remote_path,
)
borgmatic.borg.borg.run_arbitrary_borg(
repository,
storage,
local_borg_version,
options=borg_arguments.options,
archive=archive_name,
local_path=local_path,
remote_path=remote_path,
)

21
borgmatic/actions/break_lock.py Normal file

@@ -0,0 +1,21 @@
import logging
import borgmatic.borg.break_lock
import borgmatic.config.validate
logger = logging.getLogger(__name__)
def run_break_lock(
repository, storage, local_borg_version, break_lock_arguments, local_path, remote_path,
):
'''
Run the "break-lock" action for the given repository.
'''
if break_lock_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, break_lock_arguments.repository
):
logger.info(f'{repository}: Breaking repository and cache locks')
borgmatic.borg.break_lock.break_lock(
repository, storage, local_borg_version, local_path=local_path, remote_path=remote_path,
)

55
borgmatic/actions/check.py Normal file

@@ -0,0 +1,55 @@
import logging
import borgmatic.borg.check
import borgmatic.hooks.command
logger = logging.getLogger(__name__)
def run_check(
config_filename,
repository,
location,
storage,
consistency,
hooks,
hook_context,
local_borg_version,
check_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "check" action for the given repository.
'''
borgmatic.hooks.command.execute_hook(
hooks.get('before_check'),
hooks.get('umask'),
config_filename,
'pre-check',
global_arguments.dry_run,
**hook_context,
)
logger.info('{}: Running consistency checks'.format(repository))
borgmatic.borg.check.check_archives(
repository,
location,
storage,
consistency,
local_borg_version,
local_path=local_path,
remote_path=remote_path,
progress=check_arguments.progress,
repair=check_arguments.repair,
only_checks=check_arguments.only,
force=check_arguments.force,
)
borgmatic.hooks.command.execute_hook(
hooks.get('after_check'),
hooks.get('umask'),
config_filename,
'post-check',
global_arguments.dry_run,
**hook_context,
)

57
borgmatic/actions/compact.py Normal file

@@ -0,0 +1,57 @@
import logging
import borgmatic.borg.compact
import borgmatic.borg.feature
import borgmatic.hooks.command
logger = logging.getLogger(__name__)
def run_compact(
config_filename,
repository,
storage,
retention,
hooks,
hook_context,
local_borg_version,
compact_arguments,
global_arguments,
dry_run_label,
local_path,
remote_path,
):
'''
Run the "compact" action for the given repository.
'''
borgmatic.hooks.command.execute_hook(
hooks.get('before_compact'),
hooks.get('umask'),
config_filename,
'pre-compact',
global_arguments.dry_run,
**hook_context,
)
if borgmatic.borg.feature.available(borgmatic.borg.feature.Feature.COMPACT, local_borg_version):
logger.info('{}: Compacting segments{}'.format(repository, dry_run_label))
borgmatic.borg.compact.compact_segments(
global_arguments.dry_run,
repository,
storage,
local_borg_version,
local_path=local_path,
remote_path=remote_path,
progress=compact_arguments.progress,
cleanup_commits=compact_arguments.cleanup_commits,
threshold=compact_arguments.threshold,
)
else: # pragma: nocover
logger.info('{}: Skipping compact (only available/needed in Borg 1.2+)'.format(repository))
borgmatic.hooks.command.execute_hook(
hooks.get('after_compact'),
hooks.get('umask'),
config_filename,
'post-compact',
global_arguments.dry_run,
**hook_context,
)

90
borgmatic/actions/create.py Normal file

@@ -0,0 +1,90 @@
import json
import logging
import borgmatic.borg.create
import borgmatic.hooks.command
import borgmatic.hooks.dispatch
import borgmatic.hooks.dump
logger = logging.getLogger(__name__)
def run_create(
config_filename,
repository,
location,
storage,
hooks,
hook_context,
local_borg_version,
create_arguments,
global_arguments,
dry_run_label,
local_path,
remote_path,
):
'''
Run the "create" action for the given repository.
If create_arguments.json is True, yield the JSON output from creating the archive.
'''
borgmatic.hooks.command.execute_hook(
hooks.get('before_backup'),
hooks.get('umask'),
config_filename,
'pre-backup',
global_arguments.dry_run,
**hook_context,
)
logger.info('{}: Creating archive{}'.format(repository, dry_run_label))
borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
'remove_database_dumps',
hooks,
repository,
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
location,
global_arguments.dry_run,
)
active_dumps = borgmatic.hooks.dispatch.call_hooks(
'dump_databases',
hooks,
repository,
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
location,
global_arguments.dry_run,
)
stream_processes = [process for processes in active_dumps.values() for process in processes]
json_output = borgmatic.borg.create.create_archive(
global_arguments.dry_run,
repository,
location,
storage,
local_borg_version,
local_path=local_path,
remote_path=remote_path,
progress=create_arguments.progress,
stats=create_arguments.stats,
json=create_arguments.json,
list_files=create_arguments.list_files,
stream_processes=stream_processes,
)
if json_output: # pragma: nocover
yield json.loads(json_output)
borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
'remove_database_dumps',
hooks,
config_filename,
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
location,
global_arguments.dry_run,
)
borgmatic.hooks.command.execute_hook(
hooks.get('after_backup'),
hooks.get('umask'),
config_filename,
'post-backup',
global_arguments.dry_run,
**hook_context,
)

48
borgmatic/actions/export_tar.py Normal file

@@ -0,0 +1,48 @@
import logging
import borgmatic.borg.export_tar
import borgmatic.borg.rlist
import borgmatic.config.validate
logger = logging.getLogger(__name__)
def run_export_tar(
repository,
storage,
local_borg_version,
export_tar_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "export-tar" action for the given repository.
'''
if export_tar_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, export_tar_arguments.repository
):
logger.info(
'{}: Exporting archive {} as tar file'.format(repository, export_tar_arguments.archive)
)
borgmatic.borg.export_tar.export_tar_archive(
global_arguments.dry_run,
repository,
borgmatic.borg.rlist.resolve_archive_name(
repository,
export_tar_arguments.archive,
storage,
local_borg_version,
local_path,
remote_path,
),
export_tar_arguments.paths,
export_tar_arguments.destination,
storage,
local_borg_version,
local_path=local_path,
remote_path=remote_path,
tar_filter=export_tar_arguments.tar_filter,
list_files=export_tar_arguments.list_files,
strip_components=export_tar_arguments.strip_components,
)

67
borgmatic/actions/extract.py Normal file

@@ -0,0 +1,67 @@
import logging
import borgmatic.borg.extract
import borgmatic.borg.rlist
import borgmatic.config.validate
import borgmatic.hooks.command
logger = logging.getLogger(__name__)
def run_extract(
config_filename,
repository,
location,
storage,
hooks,
hook_context,
local_borg_version,
extract_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "extract" action for the given repository.
'''
borgmatic.hooks.command.execute_hook(
hooks.get('before_extract'),
hooks.get('umask'),
config_filename,
'pre-extract',
global_arguments.dry_run,
**hook_context,
)
if extract_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, extract_arguments.repository
):
logger.info('{}: Extracting archive {}'.format(repository, extract_arguments.archive))
borgmatic.borg.extract.extract_archive(
global_arguments.dry_run,
repository,
borgmatic.borg.rlist.resolve_archive_name(
repository,
extract_arguments.archive,
storage,
local_borg_version,
local_path,
remote_path,
),
extract_arguments.paths,
location,
storage,
local_borg_version,
local_path=local_path,
remote_path=remote_path,
destination_path=extract_arguments.destination,
strip_components=extract_arguments.strip_components,
progress=extract_arguments.progress,
)
borgmatic.hooks.command.execute_hook(
hooks.get('after_extract'),
hooks.get('umask'),
config_filename,
'post-extract',
global_arguments.dry_run,
**hook_context,
)

41
borgmatic/actions/info.py Normal file

@@ -0,0 +1,41 @@
import json
import logging
import borgmatic.borg.info
import borgmatic.borg.rlist
import borgmatic.config.validate
logger = logging.getLogger(__name__)
def run_info(
repository, storage, local_borg_version, info_arguments, local_path, remote_path,
):
'''
Run the "info" action for the given repository and archive.
If info_arguments.json is True, yield the JSON output from the info for the archive.
'''
if info_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, info_arguments.repository
):
if not info_arguments.json: # pragma: nocover
logger.answer(f'{repository}: Displaying archive summary information')
info_arguments.archive = borgmatic.borg.rlist.resolve_archive_name(
repository,
info_arguments.archive,
storage,
local_borg_version,
local_path,
remote_path,
)
json_output = borgmatic.borg.info.display_archives_info(
repository,
storage,
local_borg_version,
info_arguments=info_arguments,
local_path=local_path,
remote_path=remote_path,
)
if json_output: # pragma: nocover
yield json.loads(json_output)

43
borgmatic/actions/list.py Normal file

@@ -0,0 +1,43 @@
import json
import logging
import borgmatic.borg.list
import borgmatic.borg.rlist
import borgmatic.config.validate
logger = logging.getLogger(__name__)
def run_list(
repository, storage, local_borg_version, list_arguments, local_path, remote_path,
):
'''
Run the "list" action for the given repository and archive.
If list_arguments.json is True, yield the JSON output from listing the archive.
'''
if list_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, list_arguments.repository
):
if not list_arguments.json: # pragma: nocover
if list_arguments.find_paths:
logger.answer(f'{repository}: Searching archives')
elif not list_arguments.archive:
logger.answer(f'{repository}: Listing archives')
list_arguments.archive = borgmatic.borg.rlist.resolve_archive_name(
repository,
list_arguments.archive,
storage,
local_borg_version,
local_path,
remote_path,
)
json_output = borgmatic.borg.list.list_archive(
repository,
storage,
local_borg_version,
list_arguments=list_arguments,
local_path=local_path,
remote_path=remote_path,
)
if json_output: # pragma: nocover
yield json.loads(json_output)

42
borgmatic/actions/mount.py Normal file

@@ -0,0 +1,42 @@
import logging
import borgmatic.borg.mount
import borgmatic.borg.rlist
import borgmatic.config.validate
logger = logging.getLogger(__name__)
def run_mount(
repository, storage, local_borg_version, mount_arguments, local_path, remote_path,
):
'''
Run the "mount" action for the given repository.
'''
if mount_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, mount_arguments.repository
):
if mount_arguments.archive:
logger.info('{}: Mounting archive {}'.format(repository, mount_arguments.archive))
else: # pragma: nocover
logger.info('{}: Mounting repository'.format(repository))
borgmatic.borg.mount.mount_archive(
repository,
borgmatic.borg.rlist.resolve_archive_name(
repository,
mount_arguments.archive,
storage,
local_borg_version,
local_path,
remote_path,
),
mount_arguments.mount_point,
mount_arguments.paths,
mount_arguments.foreground,
mount_arguments.options,
storage,
local_borg_version,
local_path=local_path,
remote_path=remote_path,
)

53
borgmatic/actions/prune.py Normal file

@@ -0,0 +1,53 @@
import logging
import borgmatic.borg.prune
import borgmatic.hooks.command
logger = logging.getLogger(__name__)
def run_prune(
config_filename,
repository,
storage,
retention,
hooks,
hook_context,
local_borg_version,
prune_arguments,
global_arguments,
dry_run_label,
local_path,
remote_path,
):
'''
Run the "prune" action for the given repository.
'''
borgmatic.hooks.command.execute_hook(
hooks.get('before_prune'),
hooks.get('umask'),
config_filename,
'pre-prune',
global_arguments.dry_run,
**hook_context,
)
logger.info('{}: Pruning archives{}'.format(repository, dry_run_label))
borgmatic.borg.prune.prune_archives(
global_arguments.dry_run,
repository,
storage,
retention,
local_borg_version,
local_path=local_path,
remote_path=remote_path,
stats=prune_arguments.stats,
list_archives=prune_arguments.list_archives,
)
borgmatic.hooks.command.execute_hook(
hooks.get('after_prune'),
hooks.get('umask'),
config_filename,
'post-prune',
global_arguments.dry_run,
**hook_context,
)

40
borgmatic/actions/rcreate.py Normal file

@@ -0,0 +1,40 @@
import logging
import borgmatic.borg.rcreate
import borgmatic.config.validate
logger = logging.getLogger(__name__)
def run_rcreate(
repository,
storage,
local_borg_version,
rcreate_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "rcreate" action for the given repository.
'''
if rcreate_arguments.repository and not borgmatic.config.validate.repositories_match(
repository, rcreate_arguments.repository
):
return
logger.info('{}: Creating repository'.format(repository))
borgmatic.borg.rcreate.create_repository(
global_arguments.dry_run,
repository,
storage,
local_borg_version,
rcreate_arguments.encryption_mode,
rcreate_arguments.source_repository,
rcreate_arguments.copy_crypt_key,
rcreate_arguments.append_only,
rcreate_arguments.storage_quota,
rcreate_arguments.make_parent_dirs,
local_path=local_path,
remote_path=remote_path,
)

345
borgmatic/actions/restore.py Normal file

@@ -0,0 +1,345 @@
import copy
import logging
import os
import borgmatic.borg.extract
import borgmatic.borg.list
import borgmatic.borg.mount
import borgmatic.borg.rlist
import borgmatic.borg.state
import borgmatic.config.validate
import borgmatic.hooks.dispatch
import borgmatic.hooks.dump
logger = logging.getLogger(__name__)
UNSPECIFIED_HOOK = object()
def get_configured_database(
hooks, archive_database_names, hook_name, database_name, configuration_database_name=None
):
'''
Find the first database with the given hook name and database name in the configured hooks
dict and the given archive database names dict (from hook name to database names contained in
a particular backup archive). If UNSPECIFIED_HOOK is given as the hook name, search all database
hooks for the named database. If a configuration database name is given, use that instead of the
database name to lookup the database in the given hooks configuration.
Return the found database as a tuple of (found hook name, database configuration dict).
'''
if not configuration_database_name:
configuration_database_name = database_name
if hook_name == UNSPECIFIED_HOOK:
hooks_to_search = hooks
else:
hooks_to_search = {hook_name: hooks[hook_name]}
return next(
(
(name, hook_database)
for (name, hook) in hooks_to_search.items()
for hook_database in hook
if hook_database['name'] == configuration_database_name
and database_name in archive_database_names.get(name, [])
),
(None, None),
)
def get_configured_hook_name_and_database(hooks, database_name):
    '''
    Find the hook name and first database dict with the given database name in the configured hooks
    dict. This searches across all database hooks.
    '''
    # Search every configured database hook for the first database with this name.
    return next(
        (
            (hook_name, hook_database)
            for (hook_name, hook) in hooks.items()
            for hook_database in hook
            if hook_database['name'] == database_name
        ),
        (None, None),
    )
def restore_single_database(
repository,
location,
storage,
hooks,
local_borg_version,
global_arguments,
local_path,
remote_path,
archive_name,
hook_name,
database,
): # pragma: no cover
'''
Given (among other things) an archive name, a database hook name, and a configured database
configuration dict, restore that database from the archive.
'''
logger.info(f'{repository}: Restoring database {database["name"]}')
dump_pattern = borgmatic.hooks.dispatch.call_hooks(
'make_database_dump_pattern',
hooks,
repository,
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
location,
database['name'],
)[hook_name]
# Kick off a single database extract to stdout.
extract_process = borgmatic.borg.extract.extract_archive(
dry_run=global_arguments.dry_run,
repository=repository,
archive=archive_name,
paths=borgmatic.hooks.dump.convert_glob_patterns_to_borg_patterns([dump_pattern]),
location_config=location,
storage_config=storage,
local_borg_version=local_borg_version,
local_path=local_path,
remote_path=remote_path,
destination_path='/',
# A directory format dump isn't a single file, and therefore can't extract
# to stdout. In this case, the extract_process return value is None.
extract_to_stdout=bool(database.get('format') != 'directory'),
)
# Run a single database restore, consuming the extract stdout (if any).
borgmatic.hooks.dispatch.call_hooks(
'restore_database_dump',
{hook_name: [database]},
repository,
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
location,
global_arguments.dry_run,
extract_process,
)
def collect_archive_database_names(
repository, archive, location, storage, local_borg_version, local_path, remote_path,
):
'''
Given a local or remote repository path, a resolved archive name, a location configuration dict,
a storage configuration dict, the local Borg version, and local and remote Borg paths, query the
archive for the names of databases it contains and return them as a dict from hook name to a
sequence of database names.
'''
borgmatic_source_directory = os.path.expanduser(
location.get(
'borgmatic_source_directory', borgmatic.borg.state.DEFAULT_BORGMATIC_SOURCE_DIRECTORY
)
).lstrip('/')
parent_dump_path = os.path.expanduser(
borgmatic.hooks.dump.make_database_dump_path(borgmatic_source_directory, '*_databases/*/*')
)
dump_paths = borgmatic.borg.list.capture_archive_listing(
repository,
archive,
storage,
local_borg_version,
list_path=parent_dump_path,
local_path=local_path,
remote_path=remote_path,
)
# Determine the database names corresponding to the dumps found in the archive and
# add them to restore_names.
archive_database_names = {}
for dump_path in dump_paths:
try:
(hook_name, _, database_name) = dump_path.split(
borgmatic_source_directory + os.path.sep, 1
)[1].split(os.path.sep)[0:3]
except (ValueError, IndexError):
logger.warning(
f'{repository}: Ignoring invalid database dump path "{dump_path}" in archive {archive}'
)
else:
if database_name not in archive_database_names.get(hook_name, []):
archive_database_names.setdefault(hook_name, []).extend([database_name])
return archive_database_names
def find_databases_to_restore(requested_database_names, archive_database_names):
'''
Given a sequence of requested database names to restore and a dict of hook name to the names of
databases found in an archive, return an expanded sequence of database names to restore,
replacing "all" with actual database names as appropriate.
Raise ValueError if any of the requested database names cannot be found in the archive.
'''
# A map from database hook name to the database names to restore for that hook.
restore_names = (
{UNSPECIFIED_HOOK: requested_database_names}
if requested_database_names
else {UNSPECIFIED_HOOK: ['all']}
)
# If "all" is in restore_names, then replace it with the names of dumps found within the
# archive.
if 'all' in restore_names[UNSPECIFIED_HOOK]:
restore_names[UNSPECIFIED_HOOK].remove('all')
for (hook_name, database_names) in archive_database_names.items():
restore_names.setdefault(hook_name, []).extend(database_names)
# If a database is to be restored as part of "all", then remove it from restore names so
# it doesn't get restored twice.
for database_name in database_names:
if database_name in restore_names[UNSPECIFIED_HOOK]:
restore_names[UNSPECIFIED_HOOK].remove(database_name)
if not restore_names[UNSPECIFIED_HOOK]:
restore_names.pop(UNSPECIFIED_HOOK)
combined_restore_names = set(
name for database_names in restore_names.values() for name in database_names
)
combined_archive_database_names = set(
name for database_names in archive_database_names.values() for name in database_names
)
missing_names = sorted(set(combined_restore_names) - combined_archive_database_names)
if missing_names:
joined_names = ', '.join(f'"{name}"' for name in missing_names)
raise ValueError(
f"Cannot restore database{'s' if len(missing_names) > 1 else ''} {joined_names} missing from archive"
)
return restore_names
def ensure_databases_found(restore_names, remaining_restore_names, found_names):
'''
Given a dict from hook name to database names to restore, a dict from hook name to remaining
database names to restore, and a sequence of found (actually restored) database names, raise
ValueError if requested databases to restore were missing from the archive and/or configuration.
'''
combined_restore_names = set(
name
for database_names in tuple(restore_names.values())
+ tuple(remaining_restore_names.values())
for name in database_names
)
if not combined_restore_names and not found_names:
raise ValueError('No databases were found to restore')
missing_names = sorted(set(combined_restore_names) - set(found_names))
if missing_names:
joined_names = ', '.join(f'"{name}"' for name in missing_names)
raise ValueError(
f"Cannot restore database{'s' if len(missing_names) > 1 else ''} {joined_names} missing from borgmatic's configuration"
)
def run_restore(
repository,
location,
storage,
hooks,
local_borg_version,
restore_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "restore" action for the given repository, but only if the repository matches the
requested repository in restore arguments.
Raise ValueError if a configured database could not be found to restore.
'''
if restore_arguments.repository and not borgmatic.config.validate.repositories_match(
repository, restore_arguments.repository
):
return
logger.info(
'{}: Restoring databases from archive {}'.format(repository, restore_arguments.archive)
)
borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
'remove_database_dumps',
hooks,
repository,
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
location,
global_arguments.dry_run,
)
archive_name = borgmatic.borg.rlist.resolve_archive_name(
repository, restore_arguments.archive, storage, local_borg_version, local_path, remote_path,
)
archive_database_names = collect_archive_database_names(
repository, archive_name, location, storage, local_borg_version, local_path, remote_path,
)
restore_names = find_databases_to_restore(restore_arguments.databases, archive_database_names)
found_names = set()
remaining_restore_names = {}
for hook_name, database_names in restore_names.items():
for database_name in database_names:
found_hook_name, found_database = get_configured_database(
hooks, archive_database_names, hook_name, database_name
)
if not found_database:
remaining_restore_names.setdefault(found_hook_name or hook_name, []).append(
database_name
)
continue
found_names.add(database_name)
restore_single_database(
repository,
location,
storage,
hooks,
local_borg_version,
global_arguments,
local_path,
remote_path,
archive_name,
found_hook_name or hook_name,
found_database,
)
# For any databases that weren't found via exact matches in the hooks configuration, try to
# fall back to "all" entries.
for hook_name, database_names in remaining_restore_names.items():
for database_name in database_names:
found_hook_name, found_database = get_configured_database(
hooks, archive_database_names, hook_name, database_name, 'all'
)
if not found_database:
continue
found_names.add(database_name)
database = copy.copy(found_database)
database['name'] = database_name
restore_single_database(
repository,
location,
storage,
hooks,
local_borg_version,
global_arguments,
local_path,
remote_path,
archive_name,
found_hook_name or hook_name,
database,
)
borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
'remove_database_dumps',
hooks,
repository,
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
location,
global_arguments.dry_run,
)
ensure_databases_found(restore_names, remaining_restore_names, found_names)

32
borgmatic/actions/rinfo.py Normal file

@@ -0,0 +1,32 @@
import json
import logging
import borgmatic.borg.rinfo
import borgmatic.config.validate
logger = logging.getLogger(__name__)
def run_rinfo(
repository, storage, local_borg_version, rinfo_arguments, local_path, remote_path,
):
'''
Run the "rinfo" action for the given repository.
If rinfo_arguments.json is True, yield the JSON output from the info for the repository.
'''
if rinfo_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, rinfo_arguments.repository
):
if not rinfo_arguments.json: # pragma: nocover
logger.answer('{}: Displaying repository summary information'.format(repository))
json_output = borgmatic.borg.rinfo.display_repository_info(
repository,
storage,
local_borg_version,
rinfo_arguments=rinfo_arguments,
local_path=local_path,
remote_path=remote_path,
)
if json_output: # pragma: nocover
yield json.loads(json_output)

32
borgmatic/actions/rlist.py Normal file

@@ -0,0 +1,32 @@
import json
import logging
import borgmatic.borg.rlist
import borgmatic.config.validate
logger = logging.getLogger(__name__)
def run_rlist(
repository, storage, local_borg_version, rlist_arguments, local_path, remote_path,
):
'''
Run the "rlist" action for the given repository.
If rlist_arguments.json is True, yield the JSON output from listing the repository.
'''
if rlist_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, rlist_arguments.repository
):
if not rlist_arguments.json: # pragma: nocover
logger.answer('{}: Listing repository'.format(repository))
json_output = borgmatic.borg.rlist.list_repository(
repository,
storage,
local_borg_version,
rlist_arguments=rlist_arguments,
local_path=local_path,
remote_path=remote_path,
)
if json_output: # pragma: nocover
yield json.loads(json_output)

29
borgmatic/actions/transfer.py Normal file

@@ -0,0 +1,29 @@
import logging
import borgmatic.borg.transfer
logger = logging.getLogger(__name__)
def run_transfer(
repository,
storage,
local_borg_version,
transfer_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "transfer" action for the given repository.
'''
logger.info(f'{repository}: Transferring archives to repository')
borgmatic.borg.transfer.transfer_archives(
global_arguments.dry_run,
repository,
storage,
local_borg_version,
transfer_arguments,
local_path=local_path,
remote_path=remote_path,
)

borgmatic/borg/borg.py

@@ -1,5 +1,6 @@
 import logging
 
+import borgmatic.logger
 from borgmatic.borg import environment, flags
 from borgmatic.execute import execute_command
 
@@ -25,6 +26,7 @@ def run_arbitrary_borg(
     sequence of arbitrary command-line Borg options, and an optional archive name, run an arbitrary
     Borg command on the given repository/archive.
     '''
+    borgmatic.logger.add_custom_log_levels()
     lock_wait = storage_config.get('lock_wait', None)
 
     try:
@@ -60,7 +62,7 @@ def run_arbitrary_borg(
     return execute_command(
         full_command,
-        output_log_level=logging.WARNING,
+        output_log_level=logging.ANSWER,
         borg_local_path=local_path,
         extra_environment=environment.make_environment(storage_config),
     )

31
borgmatic/borg/break_lock.py Normal file

@@ -0,0 +1,31 @@
import logging
from borgmatic.borg import environment, flags
from borgmatic.execute import execute_command
logger = logging.getLogger(__name__)
def break_lock(
repository, storage_config, local_borg_version, local_path='borg', remote_path=None,
):
'''
Given a local or remote repository path, a storage configuration dict, the local Borg version,
and optional local and remote Borg paths, break any repository and cache locks leftover from Borg
aborting.
'''
umask = storage_config.get('umask', None)
lock_wait = storage_config.get('lock_wait', None)
full_command = (
(local_path, 'break-lock')
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--umask', str(umask)) if umask else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ flags.make_repository_flags(repository, local_borg_version)
)
borg_environment = environment.make_environment(storage_config)
execute_command(full_command, borg_local_path=local_path, extra_environment=borg_environment)

View File

@@ -5,7 +5,7 @@ import logging
 import os
 import pathlib
 
-from borgmatic.borg import environment, extract, flags, rinfo, state
+from borgmatic.borg import environment, extract, feature, flags, rinfo, state
 from borgmatic.execute import DO_NOT_CAPTURE, execute_command
 
 DEFAULT_CHECKS = (
@@ -139,16 +139,17 @@ def filter_checks_on_frequency(
         if datetime.datetime.now() < check_time + frequency_delta:
             remaining = check_time + frequency_delta - datetime.datetime.now()
             logger.info(
-                f"Skipping {check} check due to configured frequency; {remaining} until next check"
+                f'Skipping {check} check due to configured frequency; {remaining} until next check'
             )
             filtered_checks.remove(check)
 
     return tuple(filtered_checks)
 
 
-def make_check_flags(checks, check_last=None, prefix=None):
+def make_check_flags(local_borg_version, checks, check_last=None, prefix=None):
     '''
-    Given a parsed sequence of checks, transform it into tuple of command-line flags.
+    Given the local Borg version and a parsed sequence of checks, transform the checks into tuple of
+    command-line flags.
 
     For example, given parsed checks of:
@@ -163,28 +164,33 @@ def make_check_flags(checks, check_last=None, prefix=None):
     Additionally, if a check_last value is given and "archives" is in checks, then include a
     "--last" flag. And if a prefix value is given and "archives" is in checks, then include a
-    "--glob-archives" flag.
+    "--match-archives" flag.
     '''
-    if 'archives' in checks:
-        last_flags = ('--last', str(check_last)) if check_last else ()
-        glob_archives_flags = ('--glob-archives', f'{prefix}*') if prefix else ()
-    else:
-        last_flags = ()
-        glob_archives_flags = ()
-        if check_last:
-            logger.info('Ignoring check_last option, as "archives" is not in consistency checks')
-        if prefix:
-            logger.info(
-                'Ignoring consistency prefix option, as "archives" is not in consistency checks'
-            )
-
     if 'data' in checks:
         data_flags = ('--verify-data',)
         checks += ('archives',)
     else:
         data_flags = ()
 
-    common_flags = last_flags + glob_archives_flags + data_flags
+    if 'archives' in checks:
+        last_flags = ('--last', str(check_last)) if check_last else ()
+        if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version):
+            match_archives_flags = ('--match-archives', f'sh:{prefix}*') if prefix else ()
+        else:
+            match_archives_flags = ('--glob-archives', f'{prefix}*') if prefix else ()
+    else:
+        last_flags = ()
+        match_archives_flags = ()
+        if check_last:
+            logger.warning(
+                'Ignoring check_last option, as "archives" or "data" are not in consistency checks'
+            )
+        if prefix:
+            logger.warning(
+                'Ignoring consistency prefix option, as "archives" or "data" are not in consistency checks'
+            )
+
+    common_flags = last_flags + match_archives_flags + data_flags
 
     if {'repository', 'archives'}.issubset(set(checks)):
         return common_flags
@@ -298,7 +304,7 @@ def check_archives(
     full_command = (
         (local_path, 'check')
         + (('--repair',) if repair else ())
-        + make_check_flags(checks, check_last, prefix)
+        + make_check_flags(local_borg_version, checks, check_last, prefix)
        + (('--remote-path', remote_path) if remote_path else ())
        + (('--lock-wait', str(lock_wait)) if lock_wait else ())
        + verbosity_flags
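The version gate above recurs throughout this PR: prefer Borg 2's --match-archives (with an sh: shell-pattern prefix) when available, and fall back to --glob-archives otherwise. A self-contained sketch of the idea (not borgmatic's actual helper; the 2.0.0b3 cutoff comes from the feature table later in this diff):

    from pkg_resources import parse_version

    def archive_match_flags(local_borg_version, prefix):
        # Borg 2.0.0b3+ renames --glob-archives to --match-archives and expects
        # an explicit "sh:" prefix for shell-style patterns.
        if not prefix:
            return ()
        if parse_version(local_borg_version) >= parse_version('2.0.0b3'):
            return ('--match-archives', f'sh:{prefix}*')
        return ('--glob-archives', f'{prefix}*')

    assert archive_match_flags('1.2.3', 'host-') == ('--glob-archives', 'host-*')
    assert archive_match_flags('2.0.0b5', 'host-') == ('--match-archives', 'sh:host-*')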

View File

@@ -3,10 +3,17 @@ import itertools
 import logging
 import os
 import pathlib
+import stat
 import tempfile
 
+import borgmatic.logger
 from borgmatic.borg import environment, feature, flags, state
-from borgmatic.execute import DO_NOT_CAPTURE, execute_command, execute_command_with_processes
+from borgmatic.execute import (
+    DO_NOT_CAPTURE,
+    execute_command,
+    execute_command_and_capture_output,
+    execute_command_with_processes,
+)
 
 logger = logging.getLogger(__name__)
@@ -104,18 +111,23 @@ def deduplicate_directories(directory_devices, additional_directory_devices):
     return tuple(sorted(deduplicated))
 
 
-def write_pattern_file(patterns=None, sources=None):
+def write_pattern_file(patterns=None, sources=None, pattern_file=None):
     '''
     Given a sequence of patterns and an optional sequence of source directories, write them to a
     named temporary file (with the source directories as additional roots) and return the file.
+    If an optional open pattern file is given, overwrite it instead of making a new temporary file.
     Return None if no patterns are provided.
     '''
-    if not patterns:
+    if not patterns and not sources:
         return None
 
-    pattern_file = tempfile.NamedTemporaryFile('w')
+    if pattern_file is None:
+        pattern_file = tempfile.NamedTemporaryFile('w')
+    else:
+        pattern_file.seek(0)
+
     pattern_file.write(
-        '\n'.join(tuple(patterns) + tuple(f'R {source}' for source in (sources or [])))
+        '\n'.join(tuple(patterns or ()) + tuple(f'R {source}' for source in (sources or [])))
     )
     pattern_file.flush()
@@ -184,10 +196,31 @@ def make_exclude_flags(location_config, exclude_filename=None):
     )
 
 
+def make_list_filter_flags(local_borg_version, dry_run):
+    '''
+    Given the local Borg version and whether this is a dry run, return the corresponding flags for
+    passing to "--list --filter". The general idea is that excludes are shown for a dry run or when
+    the verbosity is debug.
+    '''
+    base_flags = 'AME'
+    show_excludes = logger.isEnabledFor(logging.DEBUG)
+
+    if feature.available(feature.Feature.EXCLUDED_FILES_MINUS, local_borg_version):
+        if show_excludes or dry_run:
+            return f'{base_flags}+-'
+        else:
+            return base_flags
+
+    if show_excludes:
+        return f'{base_flags}x-'
+    else:
+        return f'{base_flags}-'
+
+
 DEFAULT_ARCHIVE_NAME_FORMAT = '{hostname}-{now:%Y-%m-%dT%H:%M:%S.%f}'
 
 
-def borgmatic_source_directories(borgmatic_source_directory):
+def collect_borgmatic_source_directories(borgmatic_source_directory):
     '''
     Return a list of borgmatic-specific source directories used for state like database backups.
     '''
@@ -218,6 +251,61 @@ def pattern_root_directories(patterns=None):
     ]
 
 
+def special_file(path):
+    '''
+    Return whether the given path is a special file (character device, block device, or named pipe
+    / FIFO).
+    '''
+    try:
+        mode = os.stat(path).st_mode
+    except (FileNotFoundError, OSError):
+        return False
+
+    return stat.S_ISCHR(mode) or stat.S_ISBLK(mode) or stat.S_ISFIFO(mode)
+
+
+def any_parent_directories(path, candidate_parents):
+    '''
+    Return whether any of the given candidate parent directories are an actual parent of the given
+    path. This includes grandparents, etc.
+    '''
+    for parent in candidate_parents:
+        if pathlib.PurePosixPath(parent) in pathlib.PurePath(path).parents:
+            return True
+
+    return False
+
+
+def collect_special_file_paths(
+    create_command, local_path, working_directory, borg_environment, skip_directories
+):
+    '''
+    Given a Borg create command as a tuple, a local Borg path, a working directory, a dict of
+    environment variables to pass to Borg, and a sequence of parent directories to skip, collect
+    the paths for any special files (character devices, block devices, and named pipes / FIFOs)
+    that Borg would encounter during a create. These are all paths that could cause Borg to hang
+    if its --read-special flag is used.
+    '''
+    paths_output = execute_command_and_capture_output(
+        create_command + ('--dry-run', '--list'),
+        capture_stderr=True,
+        working_directory=working_directory,
+        extra_environment=borg_environment,
+    )
+
+    paths = tuple(
+        path_line.split(' ', 1)[1]
+        for path_line in paths_output.split('\n')
+        if path_line and (path_line.startswith('- ') or path_line.startswith('+ '))
+    )
+
+    return tuple(
+        path
+        for path in paths
+        if special_file(path) and not any_parent_directories(path, skip_directories)
+    )
+
+
 def create_archive(
     dry_run,
     repository,
@@ -239,11 +327,14 @@ def create_archive(
     If a sequence of stream processes is given (instances of subprocess.Popen), then execute the
     create command while also triggering the given processes to produce output.
     '''
+    borgmatic.logger.add_custom_log_levels()
+    borgmatic_source_directories = expand_directories(
+        collect_borgmatic_source_directories(location_config.get('borgmatic_source_directory'))
+    )
     sources = deduplicate_directories(
         map_directories_to_devices(
             expand_directories(
-                location_config.get('source_directories', [])
-                + borgmatic_source_directories(location_config.get('borgmatic_source_directory'))
+                tuple(location_config.get('source_directories', ())) + borgmatic_source_directories
             )
         ),
         additional_directory_devices=map_directories_to_devices(
@@ -251,20 +342,29 @@ def create_archive(
         ),
     )
 
+    ensure_files_readable(location_config.get('patterns_from'), location_config.get('exclude_from'))
+
     try:
         working_directory = os.path.expanduser(location_config.get('working_directory'))
     except TypeError:
         working_directory = None
-    pattern_file = write_pattern_file(location_config.get('patterns'), sources)
+
+    pattern_file = (
+        write_pattern_file(location_config.get('patterns'), sources)
+        if location_config.get('patterns') or location_config.get('patterns_from')
+        else None
+    )
     exclude_file = write_pattern_file(
         expand_home_directories(location_config.get('exclude_patterns'))
     )
     checkpoint_interval = storage_config.get('checkpoint_interval', None)
+    checkpoint_volume = storage_config.get('checkpoint_volume', None)
     chunker_params = storage_config.get('chunker_params', None)
     compression = storage_config.get('compression', None)
     upload_rate_limit = storage_config.get('upload_rate_limit', None)
     umask = storage_config.get('umask', None)
     lock_wait = storage_config.get('lock_wait', None)
+    list_filter_flags = make_list_filter_flags(local_borg_version, dry_run)
     files_cache = location_config.get('files_cache')
     archive_name_format = storage_config.get('archive_name_format', DEFAULT_ARCHIVE_NAME_FORMAT)
     extra_borg_options = storage_config.get('extra_borg_options', {}).get('create', '')
@@ -293,14 +393,18 @@ def create_archive(
         ('--remote-ratelimit', str(upload_rate_limit)) if upload_rate_limit else ()
     )
 
-    ensure_files_readable(location_config.get('patterns_from'), location_config.get('exclude_from'))
+    if stream_processes and location_config.get('read_special') is False:
+        logger.warning(
+            f'{repository}: Ignoring configured "read_special" value of false, as true is needed for database hooks.'
+        )
 
-    full_command = (
+    create_command = (
         tuple(local_path.split(' '))
         + ('create',)
         + make_pattern_flags(location_config, pattern_file.name if pattern_file else None)
         + make_exclude_flags(location_config, exclude_file.name if exclude_file else None)
         + (('--checkpoint-interval', str(checkpoint_interval)) if checkpoint_interval else ())
+        + (('--checkpoint-volume', str(checkpoint_volume)) if checkpoint_volume else ())
         + (('--chunker-params', chunker_params) if chunker_params else ())
         + (('--compression', compression) if compression else ())
         + upload_ratelimit_flags
@@ -313,19 +417,18 @@ def create_archive(
         + atime_flags
         + (('--noctime',) if location_config.get('ctime') is False else ())
         + (('--nobirthtime',) if location_config.get('birthtime') is False else ())
-        + (('--read-special',) if (location_config.get('read_special') or stream_processes) else ())
+        + (('--read-special',) if location_config.get('read_special') or stream_processes else ())
         + noflags_flags
        + (('--files-cache', files_cache) if files_cache else ())
        + (('--remote-path', remote_path) if remote_path else ())
        + (('--umask', str(umask)) if umask else ())
        + (('--lock-wait', str(lock_wait)) if lock_wait else ())
-        + (('--list', '--filter', 'AME-') if list_files and not json and not progress else ())
-        + (('--info',) if logger.getEffectiveLevel() == logging.INFO and not json else ())
-        + (('--stats',) if stats and not json and not dry_run else ())
-        + (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) and not json else ())
+        + (
+            ('--list', '--filter', list_filter_flags)
+            if list_files and not json and not progress
+            else ()
+        )
         + (('--dry-run',) if dry_run else ())
-        + (('--progress',) if progress else ())
-        + (('--json',) if json else ())
         + (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
         + flags.make_repository_archive_flags(repository, archive_name_format, local_borg_version)
         + (sources if not pattern_file else ())
@@ -333,8 +436,8 @@ def create_archive(
     if json:
         output_log_level = None
-    elif (stats or list_files) and logger.getEffectiveLevel() == logging.WARNING:
-        output_log_level = logging.WARNING
+    elif list_files or (stats and not dry_run):
+        output_log_level = logging.ANSWER
     else:
         output_log_level = logging.INFO
@@ -344,9 +447,41 @@ def create_archive(
     borg_environment = environment.make_environment(storage_config)
 
+    # If database hooks are enabled (as indicated by streaming processes), exclude files that might
+    # cause Borg to hang. But skip this if the user has explicitly set the "read_special" to True.
+    if stream_processes and not location_config.get('read_special'):
+        logger.debug(f'{repository}: Collecting special file paths')
+        special_file_paths = collect_special_file_paths(
+            create_command,
+            local_path,
+            working_directory,
+            borg_environment,
+            skip_directories=borgmatic_source_directories,
+        )
+
+        if special_file_paths:
+            logger.warning(
+                f'{repository}: Excluding special files to prevent Borg from hanging: {", ".join(special_file_paths)}'
+            )
+            exclude_file = write_pattern_file(
+                expand_home_directories(
+                    tuple(location_config.get('exclude_patterns') or ()) + special_file_paths
+                ),
+                pattern_file=exclude_file,
+            )
+            create_command += make_exclude_flags(location_config, exclude_file.name)
+
+    create_command += (
+        (('--info',) if logger.getEffectiveLevel() == logging.INFO and not json else ())
+        + (('--stats',) if stats and not json and not dry_run else ())
+        + (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) and not json else ())
+        + (('--progress',) if progress else ())
+        + (('--json',) if json else ())
+    )
+
     if stream_processes:
         return execute_command_with_processes(
-            full_command,
+            create_command,
             stream_processes,
             output_log_level,
             output_file,
@@ -354,12 +489,16 @@ def create_archive(
             working_directory=working_directory,
             extra_environment=borg_environment,
         )
-
-    return execute_command(
-        full_command,
-        output_log_level,
-        output_file,
-        borg_local_path=local_path,
-        working_directory=working_directory,
-        extra_environment=borg_environment,
-    )
+    elif output_log_level is None:
+        return execute_command_and_capture_output(
+            create_command, working_directory=working_directory, extra_environment=borg_environment,
+        )
+    else:
+        execute_command(
+            create_command,
+            output_log_level,
+            output_file,
+            borg_local_path=local_path,
+            working_directory=working_directory,
+            extra_environment=borg_environment,
+        )
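The special-file detection added above needs only the standard library; a runnable sketch of the same check:

    import os
    import stat

    def is_special_file(path):
        # Mirrors the PR's special_file(): True for character devices, block
        # devices, and named pipes (FIFOs); False for missing paths.
        try:
            mode = os.stat(path).st_mode
        except OSError:
            return False
        return stat.S_ISCHR(mode) or stat.S_ISBLK(mode) or stat.S_ISFIFO(mode)

    print(is_special_file('/dev/null'))      # True on typical Unix systems
    print(is_special_file('/etc/hostname'))  # False: a regular file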

View File

@@ -1,6 +1,7 @@
 import logging
 import os
 
+import borgmatic.logger
 from borgmatic.borg import environment, flags
 from borgmatic.execute import DO_NOT_CAPTURE, execute_command
@@ -30,6 +31,7 @@ def export_tar_archive(
     If the destination path is "-", then stream the output to stdout instead of to a file.
     '''
+    borgmatic.logger.add_custom_log_levels()
     umask = storage_config.get('umask', None)
     lock_wait = storage_config.get('lock_wait', None)
@@ -53,8 +55,8 @@ def export_tar_archive(
         + (tuple(paths) if paths else ())
     )
 
-    if list_files and logger.getEffectiveLevel() == logging.WARNING:
-        output_log_level = logging.WARNING
+    if list_files:
+        output_log_level = logging.ANSWER
     else:
         output_log_level = logging.INFO
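logging.ANSWER is not a stock Python log level; it is registered by borgmatic.logger.add_custom_log_levels(), whose implementation is not part of this diff. Registering such a level conventionally looks like the following sketch (the numeric value 25, between INFO at 20 and WARNING at 30, is an assumption):

    import logging

    ANSWER = 25  # assumed value between INFO (20) and WARNING (30)
    logging.addLevelName(ANSWER, 'ANSWER')

    def answer(self, message, *args, **kwargs):
        # Logger method for the custom level, so logger.answer(...) works.
        if self.isEnabledFor(ANSWER):
            self._log(ANSWER, message, args, **kwargs)

    logging.Logger.answer = answer

    logging.basicConfig(level=ANSWER)
    logging.getLogger(__name__).answer('Listing archive latest')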

View File

@@ -87,6 +87,13 @@ def extract_archive(
     else:
         numeric_ids_flags = ('--numeric-owner',) if location_config.get('numeric_ids') else ()
 
+    if strip_components == 'all':
+        if not paths:
+            raise ValueError('The --strip-components flag with "all" requires at least one --path')
+
+        # Calculate the maximum number of leading path components of the given paths.
+        strip_components = max(0, *(len(path.split(os.path.sep)) - 1 for path in paths))
+
     full_command = (
         (local_path, 'extract')
         + (('--remote-path', remote_path) if remote_path else ())
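A quick check of the leading-component arithmetic above, using plain str.split:

    import os

    paths = ['usr/local/bin/borg', 'usr/local/lib']

    # len(path.split(os.path.sep)) - 1 counts the components before each path's
    # final element; max(0, *...) guards against a single bare filename.
    strip_components = max(0, *(len(path.split(os.path.sep)) - 1 for path in paths))
    print(strip_components)  # 3, so "usr/local/bin/borg" extracts as just "borg"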

View File

@@ -13,6 +13,8 @@ class Feature(Enum):
     RCREATE = 7
     RLIST = 8
     RINFO = 9
+    MATCH_ARCHIVES = 10
+    EXCLUDED_FILES_MINUS = 11
 
 
 FEATURE_TO_MINIMUM_BORG_VERSION = {
@@ -25,6 +27,8 @@ FEATURE_TO_MINIMUM_BORG_VERSION = {
     Feature.RCREATE: parse_version('2.0.0a2'),  # borg rcreate
     Feature.RLIST: parse_version('2.0.0a2'),  # borg rlist
     Feature.RINFO: parse_version('2.0.0a2'),  # borg rinfo
+    Feature.MATCH_ARCHIVES: parse_version('2.0.0b3'),  # borg --match-archives
+    Feature.EXCLUDED_FILES_MINUS: parse_version('2.0.0b5'),  # --list --filter uses "-" for excludes
 }
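Callers gate flags on these new members via feature.available(), as seen throughout this PR; for example:

    from borgmatic.borg import feature

    # feature.available() compares the parsed Borg version against the table above.
    if feature.available(feature.Feature.MATCH_ARCHIVES, '2.0.0b5'):
        flags = ('--match-archives', 'sh:host-*')
    else:
        flags = ('--glob-archives', 'host-*')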

View File

@@ -1,7 +1,8 @@
 import logging
 
-from borgmatic.borg import environment, flags
-from borgmatic.execute import execute_command
+import borgmatic.logger
+from borgmatic.borg import environment, feature, flags
+from borgmatic.execute import execute_command, execute_command_and_capture_output
 
 logger = logging.getLogger(__name__)
@@ -19,6 +20,7 @@ def display_archives_info(
     arguments to the info action, display summary information for Borg archives in the repository or
     return JSON summary information.
     '''
+    borgmatic.logger.add_custom_log_levels()
     lock_wait = storage_config.get('lock_wait', None)
 
     full_command = (
@@ -36,7 +38,11 @@ def display_archives_info(
         + flags.make_flags('remote-path', remote_path)
         + flags.make_flags('lock-wait', lock_wait)
         + (
-            flags.make_flags('glob-archives', f'{info_arguments.prefix}*')
+            (
+                flags.make_flags('match-archives', f'sh:{info_arguments.prefix}*')
+                if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version)
+                else flags.make_flags('glob-archives', f'{info_arguments.prefix}*')
+            )
             if info_arguments.prefix
             else ()
         )
@@ -44,12 +50,21 @@ def display_archives_info(
             info_arguments, excludes=('repository', 'archive', 'prefix')
         )
         + flags.make_repository_flags(repository, local_borg_version)
-        + flags.make_flags('glob-archives', info_arguments.archive)
+        + (
+            flags.make_flags('match-archives', info_arguments.archive)
+            if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version)
+            else flags.make_flags('glob-archives', info_arguments.archive)
+        )
     )
 
-    return execute_command(
-        full_command,
-        output_log_level=None if info_arguments.json else logging.WARNING,
-        borg_local_path=local_path,
-        extra_environment=environment.make_environment(storage_config),
-    )
+    if info_arguments.json:
+        return execute_command_and_capture_output(
+            full_command, extra_environment=environment.make_environment(storage_config),
+        )
+    else:
+        execute_command(
+            full_command,
+            output_log_level=logging.ANSWER,
+            borg_local_path=local_path,
+            extra_environment=environment.make_environment(storage_config),
+        )

View File

@@ -3,13 +3,14 @@ import copy
 import logging
 import re
 
+import borgmatic.logger
 from borgmatic.borg import environment, feature, flags, rlist
-from borgmatic.execute import execute_command
+from borgmatic.execute import execute_command, execute_command_and_capture_output
 
 logger = logging.getLogger(__name__)
 
-ARCHIVE_FILTER_FLAGS_MOVED_TO_RLIST = ('prefix', 'glob_archives', 'sort_by', 'first', 'last')
+ARCHIVE_FILTER_FLAGS_MOVED_TO_RLIST = ('prefix', 'match_archives', 'sort_by', 'first', 'last')
 MAKE_FLAGS_EXCLUDES = (
     'repository',
     'archive',
@@ -84,6 +85,46 @@ def make_find_paths(find_paths):
     )
 
 
+def capture_archive_listing(
+    repository,
+    archive,
+    storage_config,
+    local_borg_version,
+    list_path=None,
+    local_path='borg',
+    remote_path=None,
+):
+    '''
+    Given a local or remote repository path, an archive name, a storage config dict, the local Borg
+    version, the archive path in which to list files, and local and remote Borg paths, capture the
+    output of listing that archive and return it as a list of file paths.
+    '''
+    borg_environment = environment.make_environment(storage_config)
+
+    return tuple(
+        execute_command_and_capture_output(
+            make_list_command(
+                repository,
+                storage_config,
+                local_borg_version,
+                argparse.Namespace(
+                    repository=repository,
+                    archive=archive,
+                    paths=[f'sh:{list_path}'],
+                    find_paths=None,
+                    json=None,
+                    format='{path}{NL}',
+                ),
+                local_path,
+                remote_path,
+            ),
+            extra_environment=borg_environment,
+        )
+        .strip('\n')
+        .split('\n')
+    )
+
+
 def list_archive(
     repository,
     storage_config,
@@ -99,6 +140,8 @@ def list_archive(
     list the files by searching across multiple archives. If neither find_paths nor archive name
     are given, instead list the archives in the given repository.
     '''
+    borgmatic.logger.add_custom_log_levels()
+
     if not list_arguments.archive and not list_arguments.find_paths:
         if feature.available(feature.Feature.RLIST, local_borg_version):
             logger.warning(
@@ -111,7 +154,7 @@ def list_archive(
             format=list_arguments.format,
             json=list_arguments.json,
             prefix=list_arguments.prefix,
-            glob_archives=list_arguments.glob_archives,
+            match_archives=list_arguments.match_archives,
             sort_by=list_arguments.sort_by,
             first=list_arguments.first,
             last=list_arguments.last,
@@ -143,7 +186,7 @@ def list_archive(
             format=None,
             json=None,
             prefix=list_arguments.prefix,
-            glob_archives=list_arguments.glob_archives,
+            match_archives=list_arguments.match_archives,
             sort_by=list_arguments.sort_by,
             first=list_arguments.first,
             last=list_arguments.last,
@@ -151,7 +194,7 @@ def list_archive(
 
         # Ask Borg to list archives. Capture its output for use below.
         archive_lines = tuple(
-            execute_command(
+            execute_command_and_capture_output(
                 rlist.make_rlist_command(
                     repository,
                     storage_config,
@@ -160,8 +203,6 @@ def list_archive(
                     local_path,
                     remote_path,
                 ),
-                output_log_level=None,
-                borg_local_path=local_path,
                 extra_environment=borg_environment,
             )
             .strip('\n')
@@ -172,7 +213,7 @@ def list_archive(
 
     # For each archive listed by Borg, run list on the contents of that archive.
     for archive in archive_lines:
-        logger.warning(f'{repository}: Listing archive {archive}')
+        logger.answer(f'{repository}: Listing archive {archive}')
 
         archive_arguments = copy.copy(list_arguments)
         archive_arguments.archive = archive
@@ -193,7 +234,7 @@ def list_archive(
 
             execute_command(
                 main_command,
-                output_log_level=logging.WARNING,
+                output_log_level=logging.ANSWER,
                 borg_local_path=local_path,
                 extra_environment=borg_environment,
             )
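A sketch of how the new capture_archive_listing() might be called, for instance to enumerate database dumps inside an archive (the repository path, archive name, and list path here are hypothetical):

    from borgmatic.borg import list as borg_list

    # Returns a tuple of file paths captured from "borg list --format {path}{NL}".
    dump_paths = borg_list.capture_archive_listing(
        repository='/mnt/backups/repo.borg',
        archive='host-2023-03-08T00:00:00',
        storage_config={},
        local_borg_version='1.2.3',
        list_path='.borgmatic/*_databases',
    )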

View File

@@ -39,7 +39,11 @@ def mount_archive(
         + (
             (
                 flags.make_repository_flags(repository, local_borg_version)
-                + ('--glob-archives', archive)
+                + (
+                    ('--match-archives', archive)
+                    if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version)
+                    else ('--glob-archives', archive)
+                )
             )
             if feature.available(feature.Feature.SEPARATE_REPOSITORY_ARCHIVE, local_borg_version)
             else (

View File

@@ -1,12 +1,13 @@
 import logging
 
-from borgmatic.borg import environment, flags
+import borgmatic.logger
+from borgmatic.borg import environment, feature, flags
 from borgmatic.execute import execute_command
 
 logger = logging.getLogger(__name__)
 
 
-def make_prune_flags(retention_config):
+def make_prune_flags(retention_config, local_borg_version):
     '''
     Given a retention config dict mapping from option name to value, transform it into an iterable
     of command-line name-value flag pairs.
@@ -24,8 +25,12 @@ def make_prune_flags(retention_config):
     '''
     config = retention_config.copy()
     prefix = config.pop('prefix', '{hostname}-')
+
     if prefix:
-        config['glob_archives'] = f'{prefix}*'
+        if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version):
+            config['match_archives'] = f'sh:{prefix}*'
+        else:
+            config['glob_archives'] = f'{prefix}*'
 
     return (
         ('--' + option_name.replace('_', '-'), str(value)) for option_name, value in config.items()
@@ -48,13 +53,18 @@ def prune_archives(
     retention config dict, prune Borg archives according to the retention policy specified in that
     configuration.
     '''
+    borgmatic.logger.add_custom_log_levels()
     umask = storage_config.get('umask', None)
     lock_wait = storage_config.get('lock_wait', None)
     extra_borg_options = storage_config.get('extra_borg_options', {}).get('prune', '')
 
     full_command = (
         (local_path, 'prune')
-        + tuple(element for pair in make_prune_flags(retention_config) for element in pair)
+        + tuple(
+            element
+            for pair in make_prune_flags(retention_config, local_borg_version)
+            for element in pair
+        )
         + (('--remote-path', remote_path) if remote_path else ())
         + (('--umask', str(umask)) if umask else ())
         + (('--lock-wait', str(lock_wait)) if lock_wait else ())
@@ -67,8 +77,8 @@ def prune_archives(
         + flags.make_repository_flags(repository, local_borg_version)
     )
 
-    if (stats or list_archives) and logger.getEffectiveLevel() == logging.WARNING:
-        output_log_level = logging.WARNING
+    if stats or list_archives:
+        output_log_level = logging.ANSWER
     else:
         output_log_level = logging.INFO
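With the new signature, the retention-to-flag mapping behaves roughly like this (a sketch; for Borg versions before the --match-archives cutoff, the prefix still maps to --glob-archives):

    from borgmatic.borg import prune

    retention_config = {'keep_daily': 7, 'prefix': 'host-'}

    # Flatten the generator of name-value pairs the same way prune_archives() does.
    flags = tuple(
        element
        for pair in prune.make_prune_flags(retention_config, '1.2.3')
        for element in pair
    )
    print(flags)  # ('--keep-daily', '7', '--glob-archives', 'host-*')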

View File

@@ -1,7 +1,8 @@
 import logging
 
+import borgmatic.logger
 from borgmatic.borg import environment, feature, flags
-from borgmatic.execute import execute_command
+from borgmatic.execute import execute_command, execute_command_and_capture_output
 
 logger = logging.getLogger(__name__)
@@ -19,6 +20,7 @@ def display_repository_info(
     arguments to the rinfo action, display summary information for the Borg repository or return
     JSON summary information.
     '''
+    borgmatic.logger.add_custom_log_levels()
     lock_wait = storage_config.get('lock_wait', None)
 
     full_command = (
@@ -44,9 +46,16 @@ def display_repository_info(
         + flags.make_repository_flags(repository, local_borg_version)
     )
 
-    return execute_command(
-        full_command,
-        output_log_level=None if rinfo_arguments.json else logging.WARNING,
-        borg_local_path=local_path,
-        extra_environment=environment.make_environment(storage_config),
-    )
+    extra_environment = environment.make_environment(storage_config)
+
+    if rinfo_arguments.json:
+        return execute_command_and_capture_output(
+            full_command, extra_environment=extra_environment,
+        )
+    else:
+        execute_command(
+            full_command,
+            output_log_level=logging.ANSWER,
+            borg_local_path=local_path,
+            extra_environment=extra_environment,
+        )

View File

@@ -1,7 +1,8 @@
 import logging
 
+import borgmatic.logger
 from borgmatic.borg import environment, feature, flags
-from borgmatic.execute import execute_command
+from borgmatic.execute import execute_command, execute_command_and_capture_output
 
 logger = logging.getLogger(__name__)
@@ -16,7 +17,7 @@ def resolve_archive_name(
 
     Raise ValueError if "latest" is given but there are no archives in the repository.
     '''
-    if archive != "latest":
+    if archive != 'latest':
         return archive
 
     lock_wait = storage_config.get('lock_wait', None)
@@ -26,8 +27,6 @@ def resolve_archive_name(
             local_path,
             'rlist' if feature.available(feature.Feature.RLIST, local_borg_version) else 'list',
         )
-        + (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
-        + (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
         + flags.make_flags('remote-path', remote_path)
         + flags.make_flags('lock-wait', lock_wait)
         + flags.make_flags('last', 1)
@@ -35,11 +34,8 @@ def resolve_archive_name(
         + flags.make_repository_flags(repository, local_borg_version)
     )
 
-    output = execute_command(
-        full_command,
-        output_log_level=None,
-        borg_local_path=local_path,
-        extra_environment=environment.make_environment(storage_config),
-    )
+    output = execute_command_and_capture_output(
+        full_command, extra_environment=environment.make_environment(storage_config),
+    )
     try:
         latest_archive = output.strip().splitlines()[-1]
@@ -87,7 +83,11 @@ def make_rlist_command(
         + flags.make_flags('remote-path', remote_path)
         + flags.make_flags('lock-wait', lock_wait)
         + (
-            flags.make_flags('glob-archives', f'{rlist_arguments.prefix}*')
+            (
+                flags.make_flags('match-archives', f'sh:{rlist_arguments.prefix}*')
+                if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version)
+                else flags.make_flags('glob-archives', f'{rlist_arguments.prefix}*')
+            )
             if rlist_arguments.prefix
             else ()
         )
@@ -109,18 +109,19 @@ def list_repository(
     arguments to the list action, and local and remote Borg paths, display the output of listing
     Borg archives in the given repository (or return JSON output).
     '''
+    borgmatic.logger.add_custom_log_levels()
     borg_environment = environment.make_environment(storage_config)
 
     main_command = make_rlist_command(
         repository, storage_config, local_borg_version, rlist_arguments, local_path, remote_path
     )
 
-    output = execute_command(
-        main_command,
-        output_log_level=None if rlist_arguments.json else logging.WARNING,
-        borg_local_path=local_path,
-        extra_environment=borg_environment,
-    )
-
     if rlist_arguments.json:
-        return output
+        return execute_command_and_capture_output(main_command, extra_environment=borg_environment,)
+    else:
+        execute_command(
+            main_command,
+            output_log_level=logging.ANSWER,
+            borg_local_path=local_path,
+            extra_environment=borg_environment,
+        )

View File

@@ -1,7 +1,8 @@
 import logging
 
+import borgmatic.logger
 from borgmatic.borg import environment, flags
-from borgmatic.execute import execute_command
+from borgmatic.execute import DO_NOT_CAPTURE, execute_command
 
 logger = logging.getLogger(__name__)
@@ -19,18 +20,23 @@ def transfer_archives(
     Given a dry-run flag, a local or remote repository path, a storage config dict, the local Borg
     version, and the arguments to the transfer action, transfer archives to the given repository.
     '''
+    borgmatic.logger.add_custom_log_levels()
+
     full_command = (
         (local_path, 'transfer')
         + (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
         + (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
         + flags.make_flags('remote-path', remote_path)
         + flags.make_flags('lock-wait', storage_config.get('lock_wait', None))
-        + flags.make_flags(
-            'glob-archives', transfer_arguments.glob_archives or transfer_arguments.archive
+        + (('--progress',) if transfer_arguments.progress else ())
+        + (
+            flags.make_flags(
+                'match-archives', transfer_arguments.match_archives or transfer_arguments.archive
+            )
         )
         + flags.make_flags_from_arguments(
             transfer_arguments,
-            excludes=('repository', 'source_repository', 'archive', 'glob_archives'),
+            excludes=('repository', 'source_repository', 'archive', 'match_archives'),
         )
         + flags.make_repository_flags(repository, local_borg_version)
         + flags.make_flags('other-repo', transfer_arguments.source_repository)
@@ -39,7 +45,8 @@ def transfer_archives(
 
     return execute_command(
         full_command,
-        output_log_level=logging.WARNING,
+        output_log_level=logging.ANSWER,
+        output_file=DO_NOT_CAPTURE if transfer_arguments.progress else None,
        borg_local_path=local_path,
        extra_environment=environment.make_environment(storage_config),
     )

View File

@@ -1,7 +1,7 @@
 import logging
 
 from borgmatic.borg import environment
-from borgmatic.execute import execute_command
+from borgmatic.execute import execute_command_and_capture_output
 
 logger = logging.getLogger(__name__)
@@ -18,11 +18,8 @@ def local_borg_version(storage_config, local_path='borg'):
         + (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
         + (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
     )
-    output = execute_command(
-        full_command,
-        output_log_level=None,
-        borg_local_path=local_path,
-        extra_environment=environment.make_environment(storage_config),
-    )
+    output = execute_command_and_capture_output(
+        full_command, extra_environment=environment.make_environment(storage_config),
+    )
 
     try:
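The parsing after the try: is cut off in this hunk, but "borg --version" prints output of the form "borg 1.2.3", so the extraction presumably reduces to something like this sketch:

    # Assumed shape of the captured output; the real parsing is not shown here.
    output = 'borg 1.2.3\n'
    version = output.split(' ')[1].strip()
    print(version)  # 1.2.3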

View File

@@ -19,6 +19,7 @@ SUBPARSER_ALIASES = {
     'rinfo': [],
     'info': ['-i'],
     'transfer': [],
+    'break-lock': [],
     'borg': [],
 }
@@ -45,11 +46,12 @@ def parse_subparser_arguments(unparsed_arguments, subparsers):
     if 'borg' in unparsed_arguments:
         subparsers = {'borg': subparsers['borg']}
 
-    for subparser_name, subparser in subparsers.items():
-        if subparser_name not in remaining_arguments:
-            continue
-
-        canonical_name = alias_to_subparser_name.get(subparser_name, subparser_name)
+    for argument in remaining_arguments:
+        canonical_name = alias_to_subparser_name.get(argument, argument)
+        subparser = subparsers.get(canonical_name)
+
+        if not subparser:
+            continue
 
         # If a parsed value happens to be the same as the name of a subparser, remove it from the
         # remaining arguments. This prevents, for instance, "check --only extract" from triggering
@@ -66,9 +68,9 @@ def parse_subparser_arguments(unparsed_arguments, subparsers):
 
         arguments[canonical_name] = parsed
 
-    # If no actions are explicitly requested, assume defaults: prune, compact, create, and check.
+    # If no actions are explicitly requested, assume defaults.
     if not arguments and '--help' not in unparsed_arguments and '-h' not in unparsed_arguments:
-        for subparser_name in ('prune', 'compact', 'create', 'check'):
+        for subparser_name in ('create', 'prune', 'compact', 'check'):
             subparser = subparsers[subparser_name]
             parsed, unused_remaining = subparser.parse_known_args(unparsed_arguments)
             arguments[subparser_name] = parsed
@@ -214,7 +216,7 @@ def make_parsers():
     top_level_parser = ArgumentParser(
         description='''
             Simple, configuration-driven backup software for servers and workstations. If none of
-            the action options are given, then borgmatic defaults to: prune, compact, create, and
+            the action options are given, then borgmatic defaults to: create, prune, compact, and
             check.
             ''',
         parents=[global_parser],
@@ -223,7 +225,7 @@ def make_parsers():
     subparsers = top_level_parser.add_subparsers(
         title='actions',
         metavar='',
-        help='Specify zero or more actions. Defaults to prune, compact, create, and check. Use --help with action for details:',
+        help='Specify zero or more actions. Defaults to create, prune, compact, and check. Use --help with action for details:',
     )
     rcreate_parser = subparsers.add_parser(
         'rcreate',
@@ -246,6 +248,10 @@ def make_parsers():
         metavar='KEY_REPOSITORY',
         help='Path to an existing Borg repository whose key material should be reused (Borg 2.x+ only)',
     )
+    rcreate_group.add_argument(
+        '--repository',
+        help='Path of the new repository to create (must be already specified in a borgmatic configuration file), defaults to the configured repository if there is only one',
+    )
     rcreate_group.add_argument(
         '--copy-crypt-key',
         action='store_true',
@@ -291,11 +297,18 @@ def make_parsers():
         '--upgrader',
         help='Upgrader type used to convert the transferred data, e.g. "From12To20" to upgrade data from Borg 1.2 to 2.0 format, defaults to no conversion',
     )
+    transfer_group.add_argument(
+        '--progress',
+        default=False,
+        action='store_true',
+        help='Display progress as each archive is transferred',
+    )
     transfer_group.add_argument(
         '-a',
+        '--match-archives',
         '--glob-archives',
-        metavar='GLOB',
-        help='Only transfer archives with names matching this glob',
+        metavar='PATTERN',
+        help='Only transfer archives with names matching this pattern',
     )
     transfer_group.add_argument(
         '--sort-by', metavar='KEYS', help='Comma-separated list of sorting keys'
@@ -335,8 +348,8 @@ def make_parsers():
     compact_parser = subparsers.add_parser(
         'compact',
         aliases=SUBPARSER_ALIASES['compact'],
-        help='Compact segments to free space (Borg 1.2+ only)',
-        description='Compact segments to free space (Borg 1.2+ only)',
+        help='Compact segments to free space (Borg 1.2+, borgmatic 1.5.23+ only)',
+        description='Compact segments to free space (Borg 1.2+, borgmatic 1.5.23+ only)',
         add_help=False,
     )
     compact_group = compact_parser.add_argument_group('compact arguments')
@@ -463,10 +476,9 @@ def make_parsers():
     )
     extract_group.add_argument(
         '--strip-components',
-        type=int,
+        type=lambda number: number if number == 'all' else int(number),
         metavar='NUMBER',
-        dest='strip_components',
-        help='Number of leading path components to remove from each extracted path. Skip paths with fewer elements',
+        help='Number of leading path components to remove from each extracted path or "all" to strip all leading path components. Skip paths with fewer elements',
     )
     extract_group.add_argument(
         '--progress',
@@ -599,7 +611,7 @@ def make_parsers():
         metavar='NAME',
         nargs='+',
         dest='databases',
-        help='Names of databases to restore from archive, defaults to all databases. Note that any databases to restore must be defined in borgmatic\'s configuration',
+        help="Names of databases to restore from archive, defaults to all databases. Note that any databases to restore must be defined in borgmatic's configuration",
     )
     restore_group.add_argument(
         '-h', '--help', action='help', help='Show this help message and exit'
@@ -627,7 +639,11 @@ def make_parsers():
         '-P', '--prefix', help='Only list archive names starting with this prefix'
     )
     rlist_group.add_argument(
-        '-a', '--glob-archives', metavar='GLOB', help='Only list archive names matching this glob'
+        '-a',
+        '--match-archives',
+        '--glob-archives',
+        metavar='PATTERN',
+        help='Only list archive names matching this pattern',
     )
     rlist_group.add_argument(
         '--sort-by', metavar='KEYS', help='Comma-separated list of sorting keys'
@@ -678,7 +694,11 @@ def make_parsers():
         '-P', '--prefix', help='Only list archive names starting with this prefix'
     )
     list_group.add_argument(
-        '-a', '--glob-archives', metavar='GLOB', help='Only list archive names matching this glob'
+        '-a',
+        '--match-archives',
+        '--glob-archives',
+        metavar='PATTERN',
+        help='Only list archive names matching this pattern',
    )
     list_group.add_argument(
         '--successful',
@@ -747,9 +767,10 @@ def make_parsers():
     )
     info_group.add_argument(
         '-a',
+        '--match-archives',
         '--glob-archives',
-        metavar='GLOB',
-        help='Only show info for archive names matching this glob',
+        metavar='PATTERN',
+        help='Only show info for archive names matching this pattern',
     )
     info_group.add_argument(
         '--sort-by', metavar='KEYS', help='Comma-separated list of sorting keys'
@@ -764,11 +785,27 @@ def make_parsers():
     )
     info_group.add_argument('-h', '--help', action='help', help='Show this help message and exit')
 
+    break_lock_parser = subparsers.add_parser(
+        'break-lock',
+        aliases=SUBPARSER_ALIASES['break-lock'],
+        help='Break the repository and cache locks left behind by Borg aborting',
+        description='Break Borg repository and cache locks left behind by Borg aborting',
+        add_help=False,
+    )
+    break_lock_group = break_lock_parser.add_argument_group('break-lock arguments')
+    break_lock_group.add_argument(
+        '--repository',
+        help='Path of repository to break the lock for, defaults to the configured repository if there is only one',
+    )
+    break_lock_group.add_argument(
+        '-h', '--help', action='help', help='Show this help message and exit'
+    )
+
     borg_parser = subparsers.add_parser(
         'borg',
         aliases=SUBPARSER_ALIASES['borg'],
         help='Run an arbitrary Borg command',
-        description='Run an arbitrary Borg command based on borgmatic\'s configuration',
+        description="Run an arbitrary Borg command based on borgmatic's configuration",
         add_help=False,
     )
     borg_group = borg_parser.add_argument_group('borg arguments')
@@ -806,6 +843,11 @@ def parse_arguments(*unparsed_arguments):
             'The --excludes flag has been replaced with exclude_patterns in configuration.'
         )
 
+    if 'create' in arguments and arguments['create'].list_files and arguments['create'].progress:
+        raise ValueError(
+            'With the create action, only one of --list (--files) and --progress flags can be used.'
+        )
+
     if (
         ('list' in arguments and 'rinfo' in arguments and arguments['list'].json)
         or ('list' in arguments and 'info' in arguments and arguments['list'].json)
@@ -816,7 +858,7 @@ def parse_arguments(*unparsed_arguments):
     if (
         'transfer' in arguments
         and arguments['transfer'].archive
-        and arguments['transfer'].glob_archives
+        and arguments['transfer'].match_archives
     ):
         raise ValueError(
             'With the transfer action, only one of --archive and --match-archives flags can be used.'
@@ -824,11 +866,11 @@ def parse_arguments(*unparsed_arguments):
     if 'info' in arguments and (
         (arguments['info'].archive and arguments['info'].prefix)
-        or (arguments['info'].archive and arguments['info'].glob_archives)
-        or (arguments['info'].prefix and arguments['info'].glob_archives)
+        or (arguments['info'].archive and arguments['info'].match_archives)
+        or (arguments['info'].prefix and arguments['info'].match_archives)
     ):
         raise ValueError(
             'With the info action, only one of --archive, --prefix, or --match-archives flags can be used.'
         )
 
     return arguments
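Because the rewritten loop walks remaining_arguments in the order given, actions now run in command-line order (#304). A sketch of the observable effect, assuming parse_arguments as defined in this file:

    from borgmatic.commands.arguments import parse_arguments

    # 'check' now runs before 'create' because it was specified first.
    arguments = parse_arguments('check', 'create')
    print([name for name in arguments if name != 'global'])  # ['check', 'create']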

View File

@@ -1,5 +1,4 @@
 import collections
-import copy
 import json
 import logging
 import os
@@ -11,28 +10,29 @@ from subprocess import CalledProcessError
 import colorama
 import pkg_resources
 
+import borgmatic.actions.borg
+import borgmatic.actions.break_lock
+import borgmatic.actions.check
+import borgmatic.actions.compact
+import borgmatic.actions.create
+import borgmatic.actions.export_tar
+import borgmatic.actions.extract
+import borgmatic.actions.info
+import borgmatic.actions.list
+import borgmatic.actions.mount
+import borgmatic.actions.prune
+import borgmatic.actions.rcreate
+import borgmatic.actions.restore
+import borgmatic.actions.rinfo
+import borgmatic.actions.rlist
+import borgmatic.actions.transfer
 import borgmatic.commands.completion
-from borgmatic.borg import borg as borg_borg
-from borgmatic.borg import check as borg_check
-from borgmatic.borg import compact as borg_compact
-from borgmatic.borg import create as borg_create
-from borgmatic.borg import export_tar as borg_export_tar
-from borgmatic.borg import extract as borg_extract
-from borgmatic.borg import feature as borg_feature
-from borgmatic.borg import info as borg_info
-from borgmatic.borg import list as borg_list
-from borgmatic.borg import mount as borg_mount
-from borgmatic.borg import prune as borg_prune
-from borgmatic.borg import rcreate as borg_rcreate
-from borgmatic.borg import rinfo as borg_rinfo
-from borgmatic.borg import rlist as borg_rlist
-from borgmatic.borg import transfer as borg_transfer
 from borgmatic.borg import umount as borg_umount
 from borgmatic.borg import version as borg_version
 from borgmatic.commands.arguments import parse_arguments
 from borgmatic.config import checks, collect, convert, validate
-from borgmatic.hooks import command, dispatch, dump, monitor
-from borgmatic.logger import configure_logging, should_do_markup
+from borgmatic.hooks import command, dispatch, monitor
+from borgmatic.logger import add_custom_log_levels, configure_logging, should_do_markup
 from borgmatic.signals import configure_signals
 from borgmatic.verbosity import verbosity_to_log_level
@@ -44,8 +44,8 @@ LEGACY_CONFIG_PATH = '/etc/borgmatic/config'
 def run_configuration(config_filename, config, arguments):
     '''
     Given a config filename, the corresponding parsed config dict, and command-line arguments as a
-    dict from subparser name to a namespace of parsed arguments, execute the defined prune, compact,
-    create, check, and/or other actions.
+    dict from subparser name to a namespace of parsed arguments, execute the defined create, prune,
+    compact, check, and/or other actions.
 
     Yield a combination of:
@@ -64,7 +64,7 @@ def run_configuration(config_filename, config, arguments):
     retry_wait = storage.get('retry_wait', 0)
     encountered_error = None
     error_repository = ''
-    using_primary_action = {'prune', 'compact', 'create', 'check'}.intersection(arguments)
+    using_primary_action = {'create', 'prune', 'compact', 'check'}.intersection(arguments)
     monitoring_log_level = verbosity_to_log_level(global_arguments.monitoring_verbosity)
 
     try:
@@ -152,6 +152,25 @@ def run_configuration(config_filename, config, arguments):
                 encountered_error = error
                 error_repository = repository_path
 
+    try:
+        if using_primary_action:
+            # Send logs to the monitor irrespective of error.
+            dispatch.call_hooks(
+                'ping_monitor',
+                hooks,
+                config_filename,
+                monitor.MONITOR_HOOK_NAMES,
+                monitor.State.LOG,
+                monitoring_log_level,
+                global_arguments.dry_run,
+            )
+    except (OSError, CalledProcessError) as error:
+        if command.considered_soft_failure(config_filename, error):
+            return
+
+        encountered_error = error
+        yield from log_error_records('{}: Error pinging monitor'.format(config_filename), error)
+
     if not encountered_error:
         try:
             if using_primary_action:
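For the Healthchecks hook, the new monitor.State.LOG ping presumably maps onto Healthchecks' documented /log endpoint, which records an event without flipping the check's up/down status. A minimal sketch of that idea (placeholder URL; this is not borgmatic's hook code):

    import requests

    ping_url = 'https://hc-ping.com/your-uuid-here'  # placeholder check URL

    # The /log endpoint logs an event without signaling success or failure.
    requests.post(f'{ping_url}/log', data='borgmatic: create/prune/compact/check ran')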
@@ -243,6 +262,7 @@ def run_actions(
     action or a hook. Raise ValueError if the arguments or configuration passed to action are
     invalid.
     '''
+    add_custom_log_levels()
     repository = os.path.expanduser(repository_path)
     global_arguments = arguments['global']
     dry_run_label = ' (dry run; not making any changes)' if global_arguments.dry_run else ''
@@ -261,497 +281,161 @@ def run_actions(
        **hook_context,
    )
-    if 'rcreate' in arguments:
-        logger.info('{}: Creating repository'.format(repository))
-        borg_rcreate.create_repository(
-            global_arguments.dry_run,
-            repository,
-            storage,
-            local_borg_version,
-            arguments['rcreate'].encryption_mode,
-            arguments['rcreate'].source_repository,
-            arguments['rcreate'].copy_crypt_key,
-            arguments['rcreate'].append_only,
-            arguments['rcreate'].storage_quota,
-            arguments['rcreate'].make_parent_dirs,
-            local_path=local_path,
-            remote_path=remote_path,
-        )
-    if 'transfer' in arguments:
-        logger.info(f'{repository}: Transferring archives to repository')
-        borg_transfer.transfer_archives(
-            global_arguments.dry_run,
-            repository,
-            storage,
-            local_borg_version,
-            transfer_arguments=arguments['transfer'],
-            local_path=local_path,
-            remote_path=remote_path,
-        )
-    if 'prune' in arguments:
-        command.execute_hook(
-            hooks.get('before_prune'),
-            hooks.get('umask'),
-            config_filename,
-            'pre-prune',
-            global_arguments.dry_run,
-            **hook_context,
-        )
-        logger.info('{}: Pruning archives{}'.format(repository, dry_run_label))
-        borg_prune.prune_archives(
-            global_arguments.dry_run,
-            repository,
-            storage,
-            retention,
-            local_borg_version,
-            local_path=local_path,
-            remote_path=remote_path,
-            stats=arguments['prune'].stats,
-            list_archives=arguments['prune'].list_archives,
-        )
-        command.execute_hook(
-            hooks.get('after_prune'),
-            hooks.get('umask'),
-            config_filename,
-            'post-prune',
-            global_arguments.dry_run,
-            **hook_context,
-        )
-    if 'compact' in arguments:
-        command.execute_hook(
-            hooks.get('before_compact'),
-            hooks.get('umask'),
-            config_filename,
-            'pre-compact',
-            global_arguments.dry_run,
-        )
-        if borg_feature.available(borg_feature.Feature.COMPACT, local_borg_version):
-            logger.info('{}: Compacting segments{}'.format(repository, dry_run_label))
-            borg_compact.compact_segments(
-                global_arguments.dry_run,
-                repository,
-                storage,
-                local_borg_version,
-                local_path=local_path,
-                remote_path=remote_path,
-                progress=arguments['compact'].progress,
-                cleanup_commits=arguments['compact'].cleanup_commits,
-                threshold=arguments['compact'].threshold,
-            )
-        else:  # pragma: nocover
-            logger.info(
-                '{}: Skipping compact (only available/needed in Borg 1.2+)'.format(repository)
-            )
-        command.execute_hook(
-            hooks.get('after_compact'),
-            hooks.get('umask'),
-            config_filename,
-            'post-compact',
-            global_arguments.dry_run,
-        )
-    if 'create' in arguments:
-        command.execute_hook(
-            hooks.get('before_backup'),
-            hooks.get('umask'),
-            config_filename,
-            'pre-backup',
-            global_arguments.dry_run,
-            **hook_context,
-        )
-        logger.info('{}: Creating archive{}'.format(repository, dry_run_label))
-        dispatch.call_hooks(
-            'remove_database_dumps',
-            hooks,
-            repository,
-            dump.DATABASE_HOOK_NAMES,
-            location,
-            global_arguments.dry_run,
-        )
-        active_dumps = dispatch.call_hooks(
-            'dump_databases',
-            hooks,
-            repository,
-            dump.DATABASE_HOOK_NAMES,
-            location,
-            global_arguments.dry_run,
-        )
-        stream_processes = [process for processes in active_dumps.values() for process in processes]
-        json_output = borg_create.create_archive(
-            global_arguments.dry_run,
-            repository,
-            location,
-            storage,
-            local_borg_version,
-            local_path=local_path,
-            remote_path=remote_path,
-            progress=arguments['create'].progress,
-            stats=arguments['create'].stats,
-            json=arguments['create'].json,
-            list_files=arguments['create'].list_files,
-            stream_processes=stream_processes,
-        )
-        if json_output:  # pragma: nocover
-            yield json.loads(json_output)
-        dispatch.call_hooks(
-            'remove_database_dumps',
-            hooks,
-            config_filename,
-            dump.DATABASE_HOOK_NAMES,
-            location,
-            global_arguments.dry_run,
-        )
-        command.execute_hook(
-            hooks.get('after_backup'),
-            hooks.get('umask'),
-            config_filename,
-            'post-backup',
-            global_arguments.dry_run,
-            **hook_context,
-        )
-    if 'check' in arguments and checks.repository_enabled_for_checks(repository, consistency):
-        command.execute_hook(
-            hooks.get('before_check'),
-            hooks.get('umask'),
-            config_filename,
-            'pre-check',
-            global_arguments.dry_run,
-            **hook_context,
-        )
-        logger.info('{}: Running consistency checks'.format(repository))
-        borg_check.check_archives(
-            repository,
-            location,
-            storage,
-            consistency,
-            local_borg_version,
-            local_path=local_path,
-            remote_path=remote_path,
-            progress=arguments['check'].progress,
-            repair=arguments['check'].repair,
-            only_checks=arguments['check'].only,
-            force=arguments['check'].force,
-        )
-        command.execute_hook(
-            hooks.get('after_check'),
-            hooks.get('umask'),
-            config_filename,
-            'post-check',
-            global_arguments.dry_run,
-            **hook_context,
-        )
-    if 'extract' in arguments:
-        command.execute_hook(
-            hooks.get('before_extract'),
-            hooks.get('umask'),
-            config_filename,
-            'pre-extract',
-            global_arguments.dry_run,
-            **hook_context,
-        )
-        if arguments['extract'].repository is None or validate.repositories_match(
-            repository, arguments['extract'].repository
-        ):
-            logger.info(
-                '{}: Extracting archive {}'.format(repository, arguments['extract'].archive)
-            )
-            borg_extract.extract_archive(
-                global_arguments.dry_run,
-                repository,
-                borg_rlist.resolve_archive_name(
-                    repository,
-                    arguments['extract'].archive,
-                    storage,
-                    local_borg_version,
-                    local_path,
-                    remote_path,
-                ),
-                arguments['extract'].paths,
-                location,
-                storage,
-                local_borg_version,
-                local_path=local_path,
-                remote_path=remote_path,
-                destination_path=arguments['extract'].destination,
-                strip_components=arguments['extract'].strip_components,
-                progress=arguments['extract'].progress,
-            )
-        command.execute_hook(
-            hooks.get('after_extract'),
-            hooks.get('umask'),
-            config_filename,
-            'post-extract',
-            global_arguments.dry_run,
-            **hook_context,
-        )
-    if 'export-tar' in arguments:
-        if arguments['export-tar'].repository is None or validate.repositories_match(
-            repository, arguments['export-tar'].repository
-        ):
-            logger.info(
-                '{}: Exporting archive {} as tar file'.format(
-                    repository, arguments['export-tar'].archive
-                )
-            )
-            borg_export_tar.export_tar_archive(
-                global_arguments.dry_run,
-                repository,
-                borg_rlist.resolve_archive_name(
-                    repository,
-                    arguments['export-tar'].archive,
-                    storage,
-                    local_borg_version,
-                    local_path,
-                    remote_path,
-                ),
-                arguments['export-tar'].paths,
-                arguments['export-tar'].destination,
-                storage,
-                local_borg_version,
-                local_path=local_path,
-                remote_path=remote_path,
-                tar_filter=arguments['export-tar'].tar_filter,
-                list_files=arguments['export-tar'].list_files,
-                strip_components=arguments['export-tar'].strip_components,
-            )
-    if 'mount' in arguments:
-        if arguments['mount'].repository is None or validate.repositories_match(
-            repository, arguments['mount'].repository
-        ):
-            if arguments['mount'].archive:
-                logger.info(
-                    '{}: Mounting archive {}'.format(repository, arguments['mount'].archive)
-                )
-            else:  # pragma: nocover
-                logger.info('{}: Mounting repository'.format(repository))
-            borg_mount.mount_archive(
-                repository,
-                borg_rlist.resolve_archive_name(
-                    repository,
-                    arguments['mount'].archive,
-                    storage,
-                    local_borg_version,
-                    local_path,
-                    remote_path,
-                ),
-                arguments['mount'].mount_point,
-                arguments['mount'].paths,
-                arguments['mount'].foreground,
-                arguments['mount'].options,
-                storage,
-                local_borg_version,
-                local_path=local_path,
-                remote_path=remote_path,
-            )
-    if 'restore' in arguments:  # pragma: nocover
-        if arguments['restore'].repository is None or validate.repositories_match(
-            repository, arguments['restore'].repository
-        ):
-            logger.info(
-                '{}: Restoring databases from archive {}'.format(
-                    repository, arguments['restore'].archive
-                )
-            )
-            dispatch.call_hooks(
-                'remove_database_dumps',
-                hooks,
-                repository,
-                dump.DATABASE_HOOK_NAMES,
-                location,
-                global_arguments.dry_run,
-            )
-            restore_names = arguments['restore'].databases or []
-            if 'all' in restore_names:
-                restore_names = []
-            archive_name = borg_rlist.resolve_archive_name(
-                repository,
-                arguments['restore'].archive,
-                storage,
-                local_borg_version,
-                local_path,
-                remote_path,
-            )
-            found_names = set()
-
-            for hook_name, per_hook_restore_databases in hooks.items():
-                if hook_name not in dump.DATABASE_HOOK_NAMES:
-                    continue
-
-                for restore_database in per_hook_restore_databases:
-                    database_name = restore_database['name']
-                    if restore_names and database_name not in restore_names:
-                        continue
-
-                    found_names.add(database_name)
-                    dump_pattern = dispatch.call_hooks(
-                        'make_database_dump_pattern',
-                        hooks,
-                        repository,
-                        dump.DATABASE_HOOK_NAMES,
-                        location,
-                        database_name,
-                    )[hook_name]
-
-                    # Kick off a single database extract to stdout.
-                    extract_process = borg_extract.extract_archive(
-                        dry_run=global_arguments.dry_run,
-                        repository=repository,
-                        archive=archive_name,
-                        paths=dump.convert_glob_patterns_to_borg_patterns([dump_pattern]),
-                        location_config=location,
-                        storage_config=storage,
-                        local_borg_version=local_borg_version,
-                        local_path=local_path,
-                        remote_path=remote_path,
-                        destination_path='/',
-                        # A directory format dump isn't a single file, and therefore can't extract
-                        # to stdout. In this case, the extract_process return value is None.
-                        extract_to_stdout=bool(restore_database.get('format') != 'directory'),
-                    )
-
-                    # Run a single database restore, consuming the extract stdout (if any).
-                    dispatch.call_hooks(
-                        'restore_database_dump',
-                        {hook_name: [restore_database]},
-                        repository,
-                        dump.DATABASE_HOOK_NAMES,
-                        location,
-                        global_arguments.dry_run,
-                        extract_process,
-                    )
-
-            dispatch.call_hooks(
-                'remove_database_dumps',
-                hooks,
-                repository,
-                dump.DATABASE_HOOK_NAMES,
-                location,
-                global_arguments.dry_run,
-            )
-
-            if not restore_names and not found_names:
-                raise ValueError('No databases were found to restore')
-
-            missing_names = sorted(set(restore_names) - found_names)
-            if missing_names:
-                raise ValueError(
-                    'Cannot restore database(s) {} missing from borgmatic\'s configuration'.format(
-                        ', '.join(missing_names)
-                    )
-                )
-    if 'rlist' in arguments:
-        if arguments['rlist'].repository is None or validate.repositories_match(
-            repository, arguments['rlist'].repository
-        ):
-            rlist_arguments = copy.copy(arguments['rlist'])
-            if not rlist_arguments.json:  # pragma: nocover
-                logger.warning('{}: Listing repository'.format(repository))
-            json_output = borg_rlist.list_repository(
-                repository,
-                storage,
-                local_borg_version,
-                rlist_arguments=rlist_arguments,
-                local_path=local_path,
-                remote_path=remote_path,
-            )
-            if json_output:  # pragma: nocover
-                yield json.loads(json_output)
-    if 'list' in arguments:
-        if arguments['list'].repository is None or validate.repositories_match(
-            repository, arguments['list'].repository
-        ):
-            list_arguments = copy.copy(arguments['list'])
-            if not list_arguments.json:  # pragma: nocover
-                if list_arguments.find_paths:
-                    logger.warning('{}: Searching archives'.format(repository))
-                elif not list_arguments.archive:
-                    logger.warning('{}: Listing archives'.format(repository))
-            list_arguments.archive = borg_rlist.resolve_archive_name(
-                repository,
-                list_arguments.archive,
-                storage,
-                local_borg_version,
-                local_path,
-                remote_path,
-            )
-            json_output = borg_list.list_archive(
-                repository,
-                storage,
-                local_borg_version,
-                list_arguments=list_arguments,
-                local_path=local_path,
-                remote_path=remote_path,
-            )
-            if json_output:  # pragma: nocover
-                yield json.loads(json_output)
-    if 'rinfo' in arguments:
-        if arguments['rinfo'].repository is None or validate.repositories_match(
-            repository, arguments['rinfo'].repository
-        ):
-            rinfo_arguments = copy.copy(arguments['rinfo'])
-            if not rinfo_arguments.json:  # pragma: nocover
-                logger.warning('{}: Displaying repository summary information'.format(repository))
-            json_output = borg_rinfo.display_repository_info(
-                repository,
-                storage,
-                local_borg_version,
-                rinfo_arguments=rinfo_arguments,
-                local_path=local_path,
-                remote_path=remote_path,
-            )
-            if json_output:  # pragma: nocover
-                yield json.loads(json_output)
-    if 'info' in arguments:
-        if arguments['info'].repository is None or validate.repositories_match(
-            repository, arguments['info'].repository
-        ):
-            info_arguments = copy.copy(arguments['info'])
-            if not info_arguments.json:  # pragma: nocover
-                logger.warning('{}: Displaying archive summary information'.format(repository))
-            info_arguments.archive = borg_rlist.resolve_archive_name(
-                repository,
-                info_arguments.archive,
-                storage,
-                local_borg_version,
-                local_path,
-                remote_path,
-            )
-            json_output = borg_info.display_archives_info(
-                repository,
-                storage,
-                local_borg_version,
-                info_arguments=info_arguments,
-                local_path=local_path,
-                remote_path=remote_path,
-            )
-            if json_output:  # pragma: nocover
-                yield json.loads(json_output)
-    if 'borg' in arguments:
-        if arguments['borg'].repository is None or validate.repositories_match(
-            repository, arguments['borg'].repository
-        ):
-            logger.warning('{}: Running arbitrary Borg command'.format(repository))
-            archive_name = borg_rlist.resolve_archive_name(
-                repository,
-                arguments['borg'].archive,
-                storage,
-                local_borg_version,
-                local_path,
-                remote_path,
-            )
-            borg_borg.run_arbitrary_borg(
-                repository,
-                storage,
-                local_borg_version,
-                options=arguments['borg'].options,
-                archive=archive_name,
-                local_path=local_path,
-                remote_path=remote_path,
-            )
+    for (action_name, action_arguments) in arguments.items():
+        if action_name == 'rcreate':
+            borgmatic.actions.rcreate.run_rcreate(
+                repository,
+                storage,
+                local_borg_version,
+                action_arguments,
+                global_arguments,
+                local_path,
+                remote_path,
+            )
+        elif action_name == 'transfer':
+            borgmatic.actions.transfer.run_transfer(
+                repository,
+                storage,
+                local_borg_version,
+                action_arguments,
+                global_arguments,
+                local_path,
+                remote_path,
+            )
+        elif action_name == 'create':
+            yield from borgmatic.actions.create.run_create(
+                config_filename,
+                repository,
+                location,
+                storage,
+                hooks,
+                hook_context,
+                local_borg_version,
+                action_arguments,
+                global_arguments,
+                dry_run_label,
+                local_path,
+                remote_path,
+            )
+        elif action_name == 'prune':
+            borgmatic.actions.prune.run_prune(
+                config_filename,
+                repository,
+                storage,
+                retention,
+                hooks,
+                hook_context,
+                local_borg_version,
+                action_arguments,
+                global_arguments,
+                dry_run_label,
+                local_path,
+                remote_path,
+            )
+        elif action_name == 'compact':
+            borgmatic.actions.compact.run_compact(
+                config_filename,
+                repository,
+                storage,
+                retention,
+                hooks,
+                hook_context,
+                local_borg_version,
+                action_arguments,
+                global_arguments,
+                dry_run_label,
+                local_path,
+                remote_path,
+            )
+        elif action_name == 'check':
+            if checks.repository_enabled_for_checks(repository, consistency):
+                borgmatic.actions.check.run_check(
+                    config_filename,
+                    repository,
+                    location,
+                    storage,
+                    consistency,
+                    hooks,
+                    hook_context,
+                    local_borg_version,
+                    action_arguments,
+                    global_arguments,
+                    local_path,
+                    remote_path,
+                )
+        elif action_name == 'extract':
+            borgmatic.actions.extract.run_extract(
+                config_filename,
+                repository,
+                location,
+                storage,
+                hooks,
+                hook_context,
+                local_borg_version,
+                action_arguments,
+                global_arguments,
+                local_path,
+                remote_path,
+            )
+        elif action_name == 'export-tar':
+            borgmatic.actions.export_tar.run_export_tar(
+                repository,
+                storage,
+                local_borg_version,
+                action_arguments,
+                global_arguments,
+                local_path,
+                remote_path,
+            )
+        elif action_name == 'mount':
+            borgmatic.actions.mount.run_mount(
+                repository,
+                storage,
+                local_borg_version,
+                arguments['mount'],
+                local_path,
+                remote_path,
+            )
+        elif action_name == 'restore':
+            borgmatic.actions.restore.run_restore(
+                repository,
+                location,
+                storage,
+                hooks,
+                local_borg_version,
+                action_arguments,
+                global_arguments,
+                local_path,
+                remote_path,
+            )
+        elif action_name == 'rlist':
+            yield from borgmatic.actions.rlist.run_rlist(
+                repository, storage, local_borg_version, action_arguments, local_path, remote_path,
+            )
+        elif action_name == 'list':
+            yield from borgmatic.actions.list.run_list(
+                repository, storage, local_borg_version, action_arguments, local_path, remote_path,
+            )
+        elif action_name == 'rinfo':
+            yield from borgmatic.actions.rinfo.run_rinfo(
+                repository, storage, local_borg_version, action_arguments, local_path, remote_path,
+            )
+        elif action_name == 'info':
+            yield from borgmatic.actions.info.run_info(
+                repository, storage, local_borg_version, action_arguments, local_path, remote_path,
+            )
+        elif action_name == 'break-lock':
+            borgmatic.actions.break_lock.run_break_lock(
+                repository,
+                storage,
+                local_borg_version,
+                arguments['break-lock'],
+                local_path,
+                remote_path,
+            )
+        elif action_name == 'borg':
+            borgmatic.actions.borg.run_borg(
+                repository, storage, local_borg_version, action_arguments, local_path, remote_path,
+            )

    command.execute_hook(
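The rewritten loop runs actions in the order the user typed them because `arguments` is a dict keyed by action name, and Python dicts preserve insertion order (guaranteed since Python 3.7). A toy illustration of why dispatching over `arguments.items()` respects command-line order (the action names and handlers below are made up):

```python
def run_actions(arguments):
    handlers = {
        'create': lambda: print('creating archive'),
        'prune': lambda: print('pruning archives'),
        'check': lambda: print('checking archives'),
    }
    for action_name, action_arguments in arguments.items():
        handler = handlers.get(action_name)  # Unknown names (e.g. 'global') just fall through.
        if handler:
            handler()

# Dicts preserve insertion order, so "borgmatic prune create" parses to a dict
# whose keys come back in exactly that order:
run_actions({'global': None, 'prune': None, 'create': None})
# -> pruning archives
# -> creating archive
```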


@@ -1,3 +1,4 @@
+import functools
import logging
import os
@@ -6,43 +7,17 @@ import ruamel.yaml
logger = logging.getLogger(__name__)

-class Yaml_with_loader_stream(ruamel.yaml.YAML):
-    '''
-    A derived class of ruamel.yaml.YAML that simply tacks the loaded stream (file object) onto the
-    loader class so that it's available anywhere that's passed a loader (in this case,
-    include_configuration() below).
-    '''
-
-    def get_constructor_parser(self, stream):
-        constructor, parser = super(Yaml_with_loader_stream, self).get_constructor_parser(stream)
-        constructor.loader.stream = stream
-        return constructor, parser
-
-def load_configuration(filename):
-    '''
-    Load the given configuration file and return its contents as a data structure of nested dicts
-    and lists.
-    Raise ruamel.yaml.error.YAMLError if something goes wrong parsing the YAML, or RecursionError
-    if there are too many recursive includes.
-    '''
-    yaml = Yaml_with_loader_stream(typ='safe')
-    yaml.Constructor = Include_constructor
-    return yaml.load(open(filename))
-
-def include_configuration(loader, filename_node):
+def include_configuration(loader, filename_node, include_directory):
    '''
-    Load the given YAML filename (ignoring the given loader so we can use our own) and return its
-    contents as a data structure of nested dicts and lists. If the filename is relative, probe for
-    it within 1. the current working directory and 2. the directory containing the YAML file doing
-    the including.
+    Given a ruamel.yaml.loader.Loader, a ruamel.yaml.serializer.ScalarNode containing the included
+    filename, and an include directory path to search for matching files, load the given YAML
+    filename (ignoring the given loader so we can use our own) and return its contents as a data
+    structure of nested dicts and lists. If the filename is relative, probe for it within 1. the
+    current working directory and 2. the given include directory.
    Raise FileNotFoundError if an included file was not found.
    '''
-    include_directories = [os.getcwd(), os.path.abspath(os.path.dirname(loader.stream.name))]
+    include_directories = [os.getcwd(), os.path.abspath(include_directory)]
    include_filename = os.path.expanduser(filename_node.value)
    if not os.path.isabs(include_filename):
@@ -62,6 +37,70 @@ def include_configuration(loader, filename_node):
    return load_configuration(include_filename)

+class Include_constructor(ruamel.yaml.SafeConstructor):
+    '''
+    A YAML "constructor" (a ruamel.yaml concept) that supports a custom "!include" tag for including
+    separate YAML configuration files. Example syntax: `retention: !include common.yaml`
+    '''
+
+    def __init__(self, preserve_quotes=None, loader=None, include_directory=None):
+        super(Include_constructor, self).__init__(preserve_quotes, loader)
+        self.add_constructor(
+            '!include',
+            functools.partial(include_configuration, include_directory=include_directory),
+        )
+
+    def flatten_mapping(self, node):
+        '''
+        Support the special case of deep merging included configuration into an existing mapping
+        using the YAML '<<' merge key. Example syntax:
+
+        ```
+        retention:
+            keep_daily: 1
+            <<: !include common.yaml
+        ```
+
+        These includes are deep merged into the current configuration file. For instance, in this
+        example, any "retention" options in common.yaml will get merged into the "retention" section
+        in the example configuration file.
+        '''
+        representer = ruamel.yaml.representer.SafeRepresenter()
+
+        for index, (key_node, value_node) in enumerate(node.value):
+            if key_node.tag == u'tag:yaml.org,2002:merge' and value_node.tag == '!include':
+                included_value = representer.represent_data(self.construct_object(value_node))
+                node.value[index] = (key_node, included_value)
+
+        super(Include_constructor, self).flatten_mapping(node)
+        node.value = deep_merge_nodes(node.value)
+
+def load_configuration(filename):
+    '''
+    Load the given configuration file and return its contents as a data structure of nested dicts
+    and lists.
+    Raise ruamel.yaml.error.YAMLError if something goes wrong parsing the YAML, or RecursionError
+    if there are too many recursive includes.
+    '''
+    # Use an embedded derived class for the include constructor so as to capture the filename
+    # value. (functools.partial doesn't work for this use case because yaml.Constructor has to be
+    # an actual class.)
+    class Include_constructor_with_include_directory(Include_constructor):
+        def __init__(self, preserve_quotes=None, loader=None):
+            super(Include_constructor_with_include_directory, self).__init__(
+                preserve_quotes, loader, include_directory=os.path.dirname(filename)
+            )
+
+    yaml = ruamel.yaml.YAML(typ='safe')
+    yaml.Constructor = Include_constructor_with_include_directory
+
+    return yaml.load(open(filename))

DELETED_NODE = object()
@@ -175,41 +214,3 @@ def deep_merge_nodes(nodes):
    return [
        replaced_nodes.get(node, node) for node in nodes if replaced_nodes.get(node) != DELETED_NODE
    ]
-
-class Include_constructor(ruamel.yaml.SafeConstructor):
-    '''
-    A YAML "constructor" (a ruamel.yaml concept) that supports a custom "!include" tag for including
-    separate YAML configuration files. Example syntax: `retention: !include common.yaml`
-    '''
-
-    def __init__(self, preserve_quotes=None, loader=None):
-        super(Include_constructor, self).__init__(preserve_quotes, loader)
-        self.add_constructor('!include', include_configuration)
-
-    def flatten_mapping(self, node):
-        '''
-        Support the special case of deep merging included configuration into an existing mapping
-        using the YAML '<<' merge key. Example syntax:
-
-        ```
-        retention:
-            keep_daily: 1
-            <<: !include common.yaml
-        ```
-
-        These includes are deep merged into the current configuration file. For instance, in this
-        example, any "retention" options in common.yaml will get merged into the "retention" section
-        in the example configuration file.
-        '''
-        representer = ruamel.yaml.representer.SafeRepresenter()
-
-        for index, (key_node, value_node) in enumerate(node.value):
-            if key_node.tag == u'tag:yaml.org,2002:merge' and value_node.tag == '!include':
-                included_value = representer.represent_data(self.construct_object(value_node))
-                node.value[index] = (key_node, included_value)
-
-        super(Include_constructor, self).flatten_mapping(node)
-        node.value = deep_merge_nodes(node.value)
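To see the `!include` plumbing in isolation, here's a minimal, self-contained sketch of registering a custom include constructor with ruamel.yaml — the same idea as `Include_constructor` above, minus the merge-key handling and the multi-directory probing (the `common.yaml` path is illustrative):

```python
import os
import ruamel.yaml

def include_configuration(loader, filename_node, include_directory):
    # Load the included file with a plain safe loader; the real implementation
    # probes several directories and recurses through nested includes.
    yaml = ruamel.yaml.YAML(typ='safe')
    return yaml.load(open(os.path.join(include_directory, filename_node.value)))

class Include_constructor(ruamel.yaml.SafeConstructor):
    def __init__(self, preserve_quotes=None, loader=None):
        super(Include_constructor, self).__init__(preserve_quotes, loader)
        self.add_constructor(
            '!include',
            lambda loader, node: include_configuration(loader, node, include_directory='.'),
        )

yaml = ruamel.yaml.YAML(typ='safe')
yaml.Constructor = Include_constructor
# Given a common.yaml containing "keep_daily: 1", this parses to
# {'retention': {'keep_daily': 1}}:
config = yaml.load('retention: !include common.yaml')
print(config)
```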


@@ -8,49 +8,53 @@ def normalize(config_filename, config):
    message warnings produced based on the normalization performed.
    '''
    logs = []
+    location = config.get('location') or {}
+    storage = config.get('storage') or {}
+    consistency = config.get('consistency') or {}
+    hooks = config.get('hooks') or {}

    # Upgrade exclude_if_present from a string to a list.
-    exclude_if_present = config.get('location', {}).get('exclude_if_present')
+    exclude_if_present = location.get('exclude_if_present')
    if isinstance(exclude_if_present, str):
        config['location']['exclude_if_present'] = [exclude_if_present]

    # Upgrade various monitoring hooks from a string to a dict.
-    healthchecks = config.get('hooks', {}).get('healthchecks')
+    healthchecks = hooks.get('healthchecks')
    if isinstance(healthchecks, str):
        config['hooks']['healthchecks'] = {'ping_url': healthchecks}

-    cronitor = config.get('hooks', {}).get('cronitor')
+    cronitor = hooks.get('cronitor')
    if isinstance(cronitor, str):
        config['hooks']['cronitor'] = {'ping_url': cronitor}

-    pagerduty = config.get('hooks', {}).get('pagerduty')
+    pagerduty = hooks.get('pagerduty')
    if isinstance(pagerduty, str):
        config['hooks']['pagerduty'] = {'integration_key': pagerduty}

-    cronhub = config.get('hooks', {}).get('cronhub')
+    cronhub = hooks.get('cronhub')
    if isinstance(cronhub, str):
        config['hooks']['cronhub'] = {'ping_url': cronhub}

    # Upgrade consistency checks from a list of strings to a list of dicts.
-    checks = config.get('consistency', {}).get('checks')
+    checks = consistency.get('checks')
    if isinstance(checks, list) and len(checks) and isinstance(checks[0], str):
        config['consistency']['checks'] = [{'name': check_type} for check_type in checks]

    # Rename various configuration options.
-    numeric_owner = config.get('location', {}).pop('numeric_owner', None)
+    numeric_owner = location.pop('numeric_owner', None)
    if numeric_owner is not None:
        config['location']['numeric_ids'] = numeric_owner

-    bsd_flags = config.get('location', {}).pop('bsd_flags', None)
+    bsd_flags = location.pop('bsd_flags', None)
    if bsd_flags is not None:
        config['location']['flags'] = bsd_flags

-    remote_rate_limit = config.get('storage', {}).pop('remote_rate_limit', None)
+    remote_rate_limit = storage.pop('remote_rate_limit', None)
    if remote_rate_limit is not None:
        config['storage']['upload_rate_limit'] = remote_rate_limit

    # Upgrade remote repositories to ssh:// syntax, required in Borg 2.
-    repositories = config.get('location', {}).get('repositories')
+    repositories = location.get('repositories')
    if repositories:
        config['location']['repositories'] = []
        for repository in repositories:
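The pattern throughout normalize() is the same: read an old-style value, and if it has the legacy shape, rewrite it in place to the new shape. A condensed, standalone sketch of that upgrade idiom (not borgmatic's actual function):

```python
def normalize(config):
    hooks = config.get('hooks') or {}
    location = config.get('location') or {}

    # Upgrade a bare ping URL string to the newer dict form.
    healthchecks = hooks.get('healthchecks')
    if isinstance(healthchecks, str):
        config['hooks']['healthchecks'] = {'ping_url': healthchecks}

    # Rename a deprecated option, preserving its value.
    numeric_owner = location.pop('numeric_owner', None)
    if numeric_owner is not None:
        config['location']['numeric_ids'] = numeric_owner

config = {
    'hooks': {'healthchecks': 'https://hc-ping.com/uuid'},
    'location': {'numeric_owner': True},
}
normalize(config)
# config is now:
# {'hooks': {'healthchecks': {'ping_url': 'https://hc-ping.com/uuid'}},
#  'location': {'numeric_ids': True}}
```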


@@ -70,8 +70,8 @@ def parse_overrides(raw_overrides):

def apply_overrides(config, raw_overrides):
    '''
-    Given a sequence of configuration file override strings in the form of "section.option=value"
-    and a configuration dict, parse each override and set it the configuration dict.
+    Given a configuration dict and a sequence of configuration file override strings in the form of
+    "section.option=value", parse each override and set it in the configuration dict.
    '''
    overrides = parse_overrides(raw_overrides)
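For reference, a minimal sketch of what parsing "section.option=value" overrides can look like: split each string on the first "=" and the first ".", then walk the config dict. This is an illustration, not borgmatic's exact parser, though YAML-parsing the value so it comes out typed is the same idea:

```python
import ruamel.yaml

def parse_overrides(raw_overrides):
    # Each override looks like "section.option=value".
    yaml = ruamel.yaml.YAML(typ='safe')
    parsed = []
    for raw_override in raw_overrides:
        raw_keys, value = raw_override.split('=', 1)
        section, option = raw_keys.split('.', 1)
        # YAML-parse the value so "1", "true", and "[a, b]" become typed values.
        parsed.append((section, option, yaml.load(value)))
    return parsed

def apply_overrides(config, raw_overrides):
    for section, option, value in parse_overrides(raw_overrides):
        config.setdefault(section, {})[option] = value

config = {'retention': {'keep_daily': 7}}
apply_overrides(config, ['retention.keep_daily=1', 'storage.compression=lz4'])
# config == {'retention': {'keep_daily': 1}, 'storage': {'compression': 'lz4'}}
```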


@@ -240,6 +240,16 @@ properties:
            for details. Defaults to checkpoints every 1800 seconds (30
            minutes).
        example: 1800
+    checkpoint_volume:
+        type: integer
+        description: |
+            Number of backed up bytes between each checkpoint during a
+            long-running backup. Only supported with Borg 2+. See
+            https://borgbackup.readthedocs.io/en/stable/faq.html
+            for details. Defaults to only time-based checkpointing (see
+            "checkpoint_interval") instead of volume-based
+            checkpointing.
+        example: 1048576
    chunker_params:
        type: string
        description: |
@@ -359,6 +369,11 @@ properties:
        description: |
            Extra command-line options to pass to "borg init".
        example: "--extra-option"
+    create:
+        type: string
+        description: |
+            Extra command-line options to pass to "borg create".
+        example: "--extra-option"
    prune:
        type: string
        description: |
@@ -369,11 +384,6 @@ properties:
        description: |
            Extra command-line options to pass to "borg compact".
        example: "--extra-option"
-    create:
-        type: string
-        description: |
-            Extra command-line options to pass to "borg create".
-        example: "--extra-option"
    check:
        type: string
        description: |
@@ -653,11 +663,11 @@ properties:
        type: string
        description: |
            List of one or more shell commands or scripts to execute
-            when an exception occurs during a "prune", "compact",
-            "create", or "check" action or an associated before/after
+            when an exception occurs during a "create", "prune",
+            "compact", or "check" action or an associated before/after
            hook.
        example:
-            - echo "Error during prune/compact/create/check."
+            - echo "Error during create/prune/compact/check."
    before_everything:
        type: array
        items:
@@ -691,10 +701,13 @@ properties:
            type: string
            description: |
                Database name (required if using this hook). Or
-                "all" to dump all databases on the host. Note
-                that using this database hook implicitly enables
-                both read_special and one_file_system (see
-                above) to support dump and restore streaming.
+                "all" to dump all databases on the host. (Also
+                set the "format" to dump each database to a
+                separate file instead of one combined file.)
+                Note that using this database hook implicitly
+                enables both read_special and one_file_system
+                (see above) to support dump and restore
+                streaming.
            example: users
        hostname:
            type: string
@@ -729,9 +742,14 @@ properties:
            description: |
                Database dump output format. One of "plain",
                "custom", "directory", or "tar". Defaults to
-                "custom" (unlike raw pg_dump). See pg_dump
-                documentation for details. Note that format is
-                ignored when the database name is "all".
+                "custom" (unlike raw pg_dump) for a single
+                database. Or, when database name is "all" and
+                format is blank, dumps all databases to a single
+                file. But if a format is specified with an "all"
+                database name, dumps each database to a separate
+                file of that format, allowing more convenient
+                restores of individual databases. See the
+                pg_dump documentation for more about formats.
            example: directory
        ssl_mode:
            type: string
@@ -764,6 +782,32 @@ properties:
            description: |
                Path to a certificate revocation list.
            example: "/root/.postgresql/root.crl"
+        pg_dump_command:
+            type: string
+            description: |
+                Command to use instead of "pg_dump" or
+                "pg_dumpall". This can be used to run a specific
+                pg_dump version (e.g., one inside a running
+                docker container). Defaults to "pg_dump" for
+                single database dump or "pg_dumpall" to dump
+                all databases.
+            example: docker exec my_pg_container pg_dump
+        pg_restore_command:
+            type: string
+            description: |
+                Command to use instead of "pg_restore". This
+                can be used to run a specific pg_restore
+                version (e.g., one inside a running docker
+                container). Defaults to "pg_restore".
+            example: docker exec my_pg_container pg_restore
+        psql_command:
+            type: string
+            description: |
+                Command to use instead of "psql". This can be
+                used to run a specific psql version (e.g.,
+                one inside a running docker container).
+                Defaults to "psql".
+            example: docker exec my_pg_container psql
        options:
            type: string
            description: |
@@ -772,6 +816,30 @@ properties:
                any validation on them. See pg_dump
                documentation for details.
            example: --role=someone
+        list_options:
+            type: string
+            description: |
+                Additional psql options to pass directly to the
+                psql command that lists available databases,
+                without performing any validation on them. See
+                psql documentation for details.
+            example: --role=someone
+        restore_options:
+            type: string
+            description: |
+                Additional pg_restore/psql options to pass
+                directly to the restore command, without
+                performing any validation on them. See
+                pg_restore/psql documentation for details.
+            example: --role=someone
+        analyze_options:
+            type: string
+            description: |
+                Additional psql options to pass directly to the
+                analyze command run after a restore, without
+                performing any validation on them. See psql
+                documentation for details.
+            example: --role=someone
    description: |
        List of one or more PostgreSQL databases to dump before
        creating a backup, run once per configuration file. The
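The interplay described above — "all" with no format dumps everything to one pg_dumpall file, while "all" plus a format dumps each database separately — boils down to a command-selection branch. A simplified sketch of that decision (not borgmatic's actual hook code; `list_database_names` and the command shapes are illustrative assumptions):

```python
def build_postgres_dump_plan(database):
    '''Return (command, database_name) pairs for one configured database entry.'''
    name = database['name']
    dump_format = database.get('format')

    if name == 'all' and not dump_format:
        # One combined dump of every database on the host, via pg_dumpall.
        command = database.get('pg_dump_command') or 'pg_dumpall'
        return [((command, '--clean'), 'all')]

    # Otherwise dump each database separately in the requested format, which is
    # what makes restoring a single database convenient.
    names = list_database_names(database) if name == 'all' else [name]
    command = database.get('pg_dump_command') or 'pg_dump'
    return [
        ((command, '--format', dump_format or 'custom', database_name), database_name)
        for database_name in names
    ]

def list_database_names(database):
    # Hypothetical stand-in for querying psql for all database names on the host.
    return ['users', 'orders']

print(build_postgres_dump_plan({'name': 'all', 'format': 'custom'}))
```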
@@ -821,14 +889,26 @@ properties:
                configured to trust the configured username
                without a password.
            example: trustsome1
-        list_options:
+        format:
            type: string
+            enum: ['sql']
            description: |
-                Additional mysql options to pass directly to
-                the mysql command that lists available
-                databases, without performing any validation on
-                them. See mysql documentation for details.
-            example: --defaults-extra-file=my.cnf
+                Database dump output format. Currently only "sql"
+                is supported. Defaults to "sql" for a single
+                database. Or, when database name is "all" and
+                format is blank, dumps all databases to a single
+                file. But if a format is specified with an "all"
+                database name, dumps each database to a separate
+                file of that format, allowing more convenient
+                restores of individual databases.
+            example: sql
+        add_drop_database:
+            type: boolean
+            description: |
+                Use the "--add-drop-database" flag with
+                mysqldump, causing the database to be dropped
+                right before restore. Defaults to true.
+            example: false
        options:
            type: string
            description: |
@@ -837,6 +917,22 @@ properties:
                validation on them. See mysqldump documentation
                for details.
            example: --skip-comments
+        list_options:
+            type: string
+            description: |
+                Additional mysql options to pass directly to
+                the mysql command that lists available
+                databases, without performing any validation on
+                them. See mysql documentation for details.
+            example: --defaults-extra-file=my.cnf
+        restore_options:
+            type: string
+            description: |
+                Additional mysql options to pass directly to
+                the mysql command that restores database dumps,
+                without performing any validation on them. See
+                mysql documentation for details.
+            example: --defaults-extra-file=my.cnf
    description: |
        List of one or more MySQL/MariaDB databases to dump before
        creating a backup, run once per configuration file. The
@@ -845,6 +941,31 @@ properties:
        mysqldump/mysql commands (from either MySQL or MariaDB). See
        https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html or
        https://mariadb.com/kb/en/library/mysqldump/ for details.
+    sqlite_databases:
+        type: array
+        items:
+            type: object
+            required: ['path', 'name']
+            additionalProperties: false
+            properties:
+                name:
+                    type: string
+                    description: |
+                        This is used to tag the database dump file
+                        with a name. It is not the path to the database
+                        file itself. The name "all" has no special
+                        meaning for SQLite databases.
+                    example: users
+                path:
+                    type: string
+                    description: |
+                        Path to the SQLite database file to dump. If
+                        relative, it is relative to the current working
+                        directory. Note that using this
+                        database hook implicitly enables both
+                        read_special and one_file_system (see above) to
+                        support dump and restore streaming.
+                    example: /var/lib/sqlite/users.db
    mongodb_databases:
        type: array
        items:
@@ -909,7 +1030,15 @@ properties:
                directly to the dump command, without performing
                any validation on them. See mongodump
                documentation for details.
-            example: --role=someone
+            example: --dumpDbUsersAndRoles
+        restore_options:
+            type: string
+            description: |
+                Additional mongorestore options to pass
+                directly to the restore command, without
+                performing any validation on them. See
+                mongorestore documentation for details.
+            example: --restoreDbUsersAndRoles
    description: |
        List of one or more MongoDB databases to dump before
        creating a backup, run once per configuration file. The
@@ -935,6 +1064,16 @@ properties:
            description: |
                The address of your self-hosted ntfy.sh instance.
            example: https://ntfy.your-domain.com
+        username:
+            type: string
+            description: |
+                The username used for authentication.
+            example: testuser
+        password:
+            type: string
+            description: |
+                The password used for authentication.
+            example: fakepassword
        start:
            type: object
            properties:
@@ -1029,7 +1168,7 @@ properties:
            type: string
            description: |
                Healthchecks ping URL or UUID to notify when a
-                backup begins, ends, or errors.
+                backup begins, ends, errors, or just to send logs.
            example: https://hc-ping.com/your-uuid-here
        verify_tls:
            type: boolean
@@ -1041,7 +1180,8 @@ properties:
            type: boolean
            description: |
                Send borgmatic logs to Healthchecks as part of the
-                "finish" state. Defaults to true.
+                "finish", "fail", and "log" states. Defaults to
+                true.
            example: false
        ping_body_limit:
            type: integer
@@ -1060,10 +1200,11 @@ properties:
                - start
                - finish
                - fail
+                - log
            uniqueItems: true
            description: |
                List of one or more monitoring states to ping for:
-                "start", "finish", and/or "fail". Defaults to
+                "start", "finish", "fail", and/or "log". Defaults to
                pinging for all states.
            example:
                - finish


@@ -186,5 +186,5 @@ def guard_single_repository_selected(repository, configurations):
    if count != 1:
        raise ValueError(
-            'Can\'t determine which repository to use. Use --repository to disambiguate'
+            "Can't determine which repository to use. Use --repository to disambiguate"
        )


@@ -49,7 +49,8 @@ def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path):
    '''
    Given a sequence of subprocess.Popen() instances for multiple processes, log the output for each
    process with the requested log level. Additionally, raise a CalledProcessError if a process
-    exits with an error (or a warning for exit code 1, if that process matches the Borg local path).
+    exits with an error (or a warning for exit code 1, if that process does not match the Borg local
+    path).

    If output log level is None, then instead of logging, capture output for each process and return
    it as a dict from the process to its output.
@@ -147,7 +148,7 @@ def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path):
    }

-def log_command(full_command, input_file, output_file):
+def log_command(full_command, input_file=None, output_file=None):
    '''
    Log the given command (a sequence of command/argument strings), along with its input/output file
    paths.
@@ -178,15 +179,14 @@ def execute_command(
):
    '''
-    Execute the given command (a sequence of command/argument strings) and log its output at the
-    given log level. If output log level is None, instead capture and return the output. (Implies
-    run_to_completion.) If an open output file object is given, then write stdout to the file and
-    only log stderr (but only if an output log level is set). If an open input file object is given,
-    then read stdin from the file. If shell is True, execute the command within a shell. If an extra
-    environment dict is given, then use it to augment the current environment, and pass the result
-    into the command. If a working directory is given, use that as the present working directory
-    when running the command. If a Borg local path is given, and the command matches it (regardless
-    of arguments), treat exit code 1 as a warning instead of an error. If run to completion is
-    False, then return the process for the command without executing it to completion.
+    Execute the given command (a sequence of command/argument strings) and log its output at the
+    given log level. If an open output file object is given, then write stdout to the file and only
+    log stderr. If an open input file object is given, then read stdin from the file. If shell is
+    True, execute the command within a shell. If an extra environment dict is given, then use it to
+    augment the current environment, and pass the result into the command. If a working directory is
+    given, use that as the present working directory when running the command. If a Borg local path
+    is given, and the command matches it (regardless of arguments), treat exit code 1 as a warning
+    instead of an error. If run to completion is False, then return the process for the command
+    without executing it to completion.

    Raise subprocesses.CalledProcessError if an error occurs while running the command.
    '''
@@ -195,12 +195,6 @@ def execute_command(
    do_not_capture = bool(output_file is DO_NOT_CAPTURE)
    command = ' '.join(full_command) if shell else full_command

-    if output_log_level is None:
-        output = subprocess.check_output(
-            command, shell=shell, env=environment, cwd=working_directory
-        )
-        return output.decode() if output is not None else None

    process = subprocess.Popen(
        command,
        stdin=input_file,
@@ -218,6 +212,33 @@ def execute_command(
    )

+def execute_command_and_capture_output(
+    full_command, capture_stderr=False, shell=False, extra_environment=None, working_directory=None,
+):
+    '''
+    Execute the given command (a sequence of command/argument strings), capturing and returning its
+    output (stdout). If capture stderr is True, then capture and return stderr in addition to
+    stdout. If shell is True, execute the command within a shell. If an extra environment dict is
+    given, then use it to augment the current environment, and pass the result into the command. If
+    a working directory is given, use that as the present working directory when running the
+    command.
+
+    Raise subprocesses.CalledProcessError if an error occurs while running the command.
+    '''
+    log_command(full_command)
+    environment = {**os.environ, **extra_environment} if extra_environment else None
+    command = ' '.join(full_command) if shell else full_command
+
+    output = subprocess.check_output(
+        command,
+        stderr=subprocess.STDOUT if capture_stderr else None,
+        shell=shell,
+        env=environment,
+        cwd=working_directory,
+    )
+
+    return output.decode() if output is not None else None

def execute_command_with_processes(
    full_command,
    processes,
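The new helper separates "run and capture" from execute_command()'s log-streaming path. Usage is then straightforward; the MySQL hook further down in this diff uses exactly this call to list databases. A short example (the command and environment values are illustrative):

```python
from borgmatic.execute import execute_command_and_capture_output

# Capture "show schemas" output as a string instead of logging it line by line.
output = execute_command_and_capture_output(
    ('mysql', '--skip-column-names', '--batch', '--execute', 'show schemas'),
    extra_environment={'MYSQL_PWD': 'example-password'},
)
database_names = [line.strip() for line in output.splitlines() if line.strip()]
```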


@@ -27,6 +27,12 @@ def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run):
    Ping the configured Cronhub URL, modified with the monitor.State. Use the given configuration
    filename in any log entries. If this is a dry run, then don't actually ping anything.
    '''
+    if state not in MONITOR_STATE_TO_CRONHUB:
+        logger.debug(
+            f'{config_filename}: Ignoring unsupported monitoring {state.name.lower()} in Cronhub hook'
+        )
+        return
+
    dry_run_label = ' (dry run; not actually pinging)' if dry_run else ''
    formatted_state = '/{}/'.format(MONITOR_STATE_TO_CRONHUB[state])
    ping_url = (


@@ -27,6 +27,12 @@ def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run):
    Ping the configured Cronitor URL, modified with the monitor.State. Use the given configuration
    filename in any log entries. If this is a dry run, then don't actually ping anything.
    '''
+    if state not in MONITOR_STATE_TO_CRONITOR:
+        logger.debug(
+            f'{config_filename}: Ignoring unsupported monitoring {state.name.lower()} in Cronitor hook'
+        )
+        return
+
    dry_run_label = ' (dry run; not actually pinging)' if dry_run else ''
    ping_url = '{}/{}'.format(hook_config['ping_url'], MONITOR_STATE_TO_CRONITOR[state])


@@ -9,6 +9,7 @@ from borgmatic.hooks import (
    ntfy,
    pagerduty,
    postgresql,
+    sqlite,
)

logger = logging.getLogger(__name__)
@@ -22,6 +23,7 @@ HOOK_NAME_TO_MODULE = {
    'ntfy': ntfy,
    'pagerduty': pagerduty,
    'postgresql_databases': postgresql,
+    'sqlite_databases': sqlite,
}
@@ -29,19 +31,14 @@ def call_hook(function_name, hooks, log_prefix, hook_name, *args, **kwargs):
    '''
    Given the hooks configuration dict and a prefix to use in log entries, call the requested
    function of the Python module corresponding to the given hook name. Supply that call with the
-    configuration for this hook, the log prefix, and any given args and kwargs. Return any return
-    value.
-
-    If the hook name is not present in the hooks configuration, then bail without calling anything.
+    configuration for this hook (if any), the log prefix, and any given args and kwargs. Return any
+    return value.

    Raise ValueError if the hook name is unknown.
    Raise AttributeError if the function name is not found in the module.
    Raise anything else that the called function raises.
    '''
-    config = hooks.get(hook_name)
-    if not config:
-        logger.debug('{}: No {} hook configured.'.format(log_prefix, hook_name))
-        return
+    config = hooks.get(hook_name, {})

    try:
        module = HOOK_NAME_TO_MODULE[hook_name]
@@ -59,7 +56,7 @@ def call_hooks(function_name, hooks, log_prefix, hook_names, *args, **kwargs):
    configuration for that hook, the log prefix, and any given args and kwargs. Collect any return
    values into a dict from hook name to return value.

-    If the hook name is not present in the hooks configuration, then don't call the function for it,
+    If the hook name is not present in the hooks configuration, then don't call the function for it
    and omit it from the return values.

    Raise ValueError if the hook name is unknown.
@@ -71,3 +68,19 @@ def call_hooks(function_name, hooks, log_prefix, hook_names, *args, **kwargs):
        for hook_name in hook_names
        if hooks.get(hook_name)
    }

+def call_hooks_even_if_unconfigured(function_name, hooks, log_prefix, hook_names, *args, **kwargs):
+    '''
+    Given the hooks configuration dict and a prefix to use in log entries, call the requested
+    function of the Python module corresponding to each given hook name. Supply each call with the
+    configuration for that hook, the log prefix, and any given args and kwargs. Collect any return
+    values into a dict from hook name to return value.
+
+    Raise AttributeError if the function name is not found in the module.
+    Raise anything else that a called function raises. An error stops calls to subsequent functions.
+    '''
+    return {
+        hook_name: call_hook(function_name, hooks, log_prefix, hook_name, *args, **kwargs)
+        for hook_name in hook_names
+    }
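The difference from call_hooks() is that functions now run even for hooks with no configuration, which is what lets monitoring pings fire unconditionally. A self-contained sketch of the two behaviors side by side (the stand-in hook modules below are fabricated; the real dict maps names to borgmatic.hooks.* modules, and the real functions also take a log prefix):

```python
from types import SimpleNamespace

# Stand-ins for real hook modules.
HOOK_NAME_TO_MODULE = {
    'healthchecks': SimpleNamespace(ping_monitor=lambda config, state: f'pinged with {config}'),
    'cronitor': SimpleNamespace(ping_monitor=lambda config, state: f'pinged with {config}'),
}

def call_hook(function_name, hooks, hook_name, *args):
    config = hooks.get(hook_name, {})  # Missing config becomes {} instead of a skip.
    return getattr(HOOK_NAME_TO_MODULE[hook_name], function_name)(config, *args)

def call_hooks(function_name, hooks, hook_names, *args):
    # Original behavior: only call hooks that are actually configured.
    return {
        hook_name: call_hook(function_name, hooks, hook_name, *args)
        for hook_name in hook_names
        if hooks.get(hook_name)
    }

def call_hooks_even_if_unconfigured(function_name, hooks, hook_names, *args):
    # New behavior: call every hook, configured or not.
    return {
        hook_name: call_hook(function_name, hooks, hook_name, *args)
        for hook_name in hook_names
    }

hooks = {'healthchecks': {'ping_url': 'https://hc-ping.com/uuid'}}
print(call_hooks('ping_monitor', hooks, ('healthchecks', 'cronitor'), 'log'))
print(call_hooks_even_if_unconfigured('ping_monitor', hooks, ('healthchecks', 'cronitor'), 'log'))
```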


@@ -6,7 +6,12 @@ from borgmatic.borg.state import DEFAULT_BORGMATIC_SOURCE_DIRECTORY
logger = logging.getLogger(__name__)

-DATABASE_HOOK_NAMES = ('postgresql_databases', 'mysql_databases', 'mongodb_databases')
+DATABASE_HOOK_NAMES = (
+    'postgresql_databases',
+    'mysql_databases',
+    'mongodb_databases',
+    'sqlite_databases',
+)

def make_database_dump_path(borgmatic_source_directory, database_hook_name):
@@ -55,7 +60,7 @@ def remove_database_dumps(dump_path, database_type_name, log_prefix, dry_run):
    '''
    dry_run_label = ' (dry run; not actually removing anything)' if dry_run else ''

-    logger.info(
+    logger.debug(
        '{}: Removing {} database dumps{}'.format(log_prefix, database_type_name, dry_run_label)
    )


@@ -10,6 +10,7 @@ MONITOR_STATE_TO_HEALTHCHECKS = {
    monitor.State.START: 'start',
    monitor.State.FINISH: None,  # Healthchecks doesn't append to the URL for the finished state.
    monitor.State.FAIL: 'fail',
+    monitor.State.LOG: 'log',
}

PAYLOAD_TRUNCATION_INDICATOR = '...\n'
@@ -117,7 +118,7 @@ def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run):
    )
    logger.debug('{}: Using Healthchecks ping URL {}'.format(config_filename, ping_url))

-    if state in (monitor.State.FINISH, monitor.State.FAIL):
+    if state in (monitor.State.FINISH, monitor.State.FAIL, monitor.State.LOG):
        payload = format_buffered_logs_for_payload()
    else:
        payload = ''
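With the new mapping entry, a "log" ping hits the /log endpoint and, like "finish" and "fail", carries the buffered borgmatic logs as the request body. A rough sketch of the resulting request, per the Healthchecks ping API (the use of the requests library mirrors this hook; the helper and payload are simplified assumptions):

```python
import requests  # Assumption: the healthchecks hook posts pings via requests.

def ping_healthchecks(ping_url, state, payload=''):
    # Healthchecks appends '/start', '/fail', or '/log' to the base ping URL;
    # 'finish' uses the bare URL.
    suffix = {'start': 'start', 'finish': None, 'fail': 'fail', 'log': 'log'}[state]
    url = f'{ping_url}/{suffix}' if suffix else ping_url
    requests.post(url, data=payload.encode('utf-8'))

ping_healthchecks('https://hc-ping.com/your-uuid-here', 'log', payload='borgmatic log lines...')
```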


@@ -45,13 +45,14 @@ def dump_databases(databases, log_prefix, location_config, dry_run):
        if dry_run:
            continue

-        command = build_dump_command(database, dump_filename, dump_format)

        if dump_format == 'directory':
            dump.create_parent_directory_for_dump(dump_filename)
-            execute_command(command, shell=True)
        else:
            dump.create_named_pipe_for_dump(dump_filename)
-            processes.append(execute_command(command, shell=True, run_to_completion=False))

+        command = build_dump_command(database, dump_filename, dump_format)
+        processes.append(execute_command(command, shell=True, run_to_completion=False))

    return processes
@@ -61,9 +62,9 @@ def build_dump_command(database, dump_filename, dump_format):
    Return the mongodump command from a single database configuration.
    '''
    all_databases = database['name'] == 'all'
-    command = ['mongodump', '--archive']
+    command = ['mongodump']

    if dump_format == 'directory':
-        command.append(dump_filename)
+        command.extend(('--out', dump_filename))

    if 'hostname' in database:
        command.extend(('--host', database['hostname']))
    if 'port' in database:
@@ -79,7 +80,7 @@ def build_dump_command(database, dump_filename, dump_format):
    if 'options' in database:
        command.extend(database['options'].split(' '))
    if dump_format != 'directory':
-        command.extend(('>', dump_filename))
+        command.extend(('--archive', '>', dump_filename))

    return command
@@ -145,9 +146,11 @@ def build_restore_command(extract_process, database, dump_filename):
    '''
    Return the mongorestore command from a single database configuration.
    '''
-    command = ['mongorestore', '--archive']
-    if not extract_process:
-        command.append(dump_filename)
+    command = ['mongorestore']
+    if extract_process:
+        command.append('--archive')
+    else:
+        command.extend(('--dir', dump_filename))

    if database['name'] != 'all':
        command.extend(('--drop', '--db', database['name']))
    if 'hostname' in database:
@@ -160,4 +163,6 @@ def build_restore_command(extract_process, database, dump_filename):
        command.extend(('--password', database['password']))
    if 'authentication_database' in database:
        command.extend(('--authenticationDatabase', database['authentication_database']))
+    if 'restore_options' in database:
+        command.extend(database['restore_options'].split(' '))

    return command
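The effect of the build_dump_command() change is easiest to see as the concrete commands it produces for each format. A minimal sketch mirroring just the branch above (paths and names are illustrative):

```python
def build_dump_command(database, dump_filename, dump_format):
    # Directory dumps go through --out (mongodump writes the dump tree itself);
    # everything else streams an --archive through shell redirection to a named pipe.
    command = ['mongodump']
    if dump_format == 'directory':
        command.extend(('--out', dump_filename))
    if database['name'] != 'all':
        command.extend(('--db', database['name']))
    if dump_format != 'directory':
        command.extend(('--archive', '>', dump_filename))
    return command

print(build_dump_command({'name': 'users'}, '/dumps/users', 'directory'))
# ['mongodump', '--out', '/dumps/users', '--db', 'users']
print(build_dump_command({'name': 'users'}, '/dumps/users', None))
# ['mongodump', '--db', 'users', '--archive', '>', '/dumps/users']
```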


@@ -7,3 +7,4 @@ class State(Enum):
    START = 1
    FINISH = 2
    FAIL = 3
+    LOG = 4


@ -1,6 +1,12 @@
import copy
import logging import logging
import os
from borgmatic.execute import execute_command, execute_command_with_processes from borgmatic.execute import (
execute_command,
execute_command_and_capture_output,
execute_command_with_processes,
)
from borgmatic.hooks import dump from borgmatic.hooks import dump
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
@ -18,16 +24,16 @@ def make_dump_path(location_config): # pragma: no cover
SYSTEM_DATABASE_NAMES = ('information_schema', 'mysql', 'performance_schema', 'sys') SYSTEM_DATABASE_NAMES = ('information_schema', 'mysql', 'performance_schema', 'sys')
def database_names_to_dump(database, extra_environment, log_prefix, dry_run_label): def database_names_to_dump(database, extra_environment, log_prefix, dry_run):
''' '''
Given a requested database name, return the corresponding sequence of database names to dump. Given a requested database config, return the corresponding sequence of database names to dump.
In the case of "all", query for the names of databases on the configured host and return them, In the case of "all", query for the names of databases on the configured host and return them,
excluding any system databases that will cause problems during restore. excluding any system databases that will cause problems during restore.
''' '''
requested_name = database['name'] if database['name'] != 'all':
return (database['name'],)
if requested_name != 'all': if dry_run:
return (requested_name,) return ()
show_command = ( show_command = (
('mysql',) ('mysql',)
@ -39,11 +45,9 @@ def database_names_to_dump(database, extra_environment, log_prefix, dry_run_labe
+ ('--skip-column-names', '--batch') + ('--skip-column-names', '--batch')
+ ('--execute', 'show schemas') + ('--execute', 'show schemas')
) )
logger.debug( logger.debug(f'{log_prefix}: Querying for "all" MySQL databases to dump')
'{}: Querying for "all" MySQL databases to dump{}'.format(log_prefix, dry_run_label) show_output = execute_command_and_capture_output(
) show_command, extra_environment=extra_environment
show_output = execute_command(
show_command, output_log_level=None, extra_environment=extra_environment
) )
return tuple( return tuple(
@ -53,6 +57,55 @@ def database_names_to_dump(database, extra_environment, log_prefix, dry_run_labe
) )
def execute_dump_command(
database, log_prefix, dump_path, database_names, extra_environment, dry_run, dry_run_label
):
'''
Kick off a dump for the given MySQL/MariaDB database (provided as a configuration dict) to a
named pipe constructed from the given dump path and database names. Use the given log prefix in
any log entries.
Return a subprocess.Popen instance for the dump process ready to spew to a named pipe. But if
this is a dry run, then don't actually dump anything and return None.
'''
database_name = database['name']
dump_filename = dump.make_database_dump_filename(
dump_path, database['name'], database.get('hostname')
)
if os.path.exists(dump_filename):
logger.warning(
f'{log_prefix}: Skipping duplicate dump of MySQL database "{database_name}" to {dump_filename}'
)
return None
dump_command = (
('mysqldump',)
+ (tuple(database['options'].split(' ')) if 'options' in database else ())
+ (('--add-drop-database',) if database.get('add_drop_database', True) else ())
+ (('--host', database['hostname']) if 'hostname' in database else ())
+ (('--port', str(database['port'])) if 'port' in database else ())
+ (('--protocol', 'tcp') if 'hostname' in database or 'port' in database else ())
+ (('--user', database['username']) if 'username' in database else ())
+ ('--databases',)
+ database_names
# Use shell redirection rather than execute_command(output_file=open(...)) to prevent
# the open() call on a named pipe from hanging the main borgmatic process.
+ ('>', dump_filename)
)
logger.debug(
f'{log_prefix}: Dumping MySQL database "{database_name}" to {dump_filename}{dry_run_label}'
)
if dry_run:
return None
dump.create_named_pipe_for_dump(dump_filename)
return execute_command(
dump_command, shell=True, extra_environment=extra_environment, run_to_completion=False,
)
def dump_databases(databases, log_prefix, location_config, dry_run):
    '''
    Dump the given MySQL/MariaDB databases to a named pipe. The databases are supplied as a sequence
@@ -69,52 +122,47 @@ def dump_databases(databases, log_prefix, location_config, dry_run):
    logger.info('{}: Dumping MySQL databases{}'.format(log_prefix, dry_run_label))

    for database in databases:
        dump_path = make_dump_path(location_config)
        extra_environment = {'MYSQL_PWD': database['password']} if 'password' in database else None
        dump_database_names = database_names_to_dump(
            database, extra_environment, log_prefix, dry_run
        )

        if not dump_database_names:
            if dry_run:
                continue

            raise ValueError('Cannot find any MySQL databases to dump.')

        if database['name'] == 'all' and database.get('format'):
            for dump_name in dump_database_names:
                renamed_database = copy.copy(database)
                renamed_database['name'] = dump_name
                processes.append(
                    execute_dump_command(
                        renamed_database,
                        log_prefix,
                        dump_path,
                        (dump_name,),
                        extra_environment,
                        dry_run,
                        dry_run_label,
                    )
                )
        else:
            processes.append(
                execute_dump_command(
                    database,
                    log_prefix,
                    dump_path,
                    dump_database_names,
                    extra_environment,
                    dry_run,
                    dry_run_label,
                )
            )

    return [process for process in processes if process]
def remove_database_dumps(databases, log_prefix, location_config, dry_run):  # pragma: no cover
@@ -153,6 +201,7 @@ def restore_database_dump(database_config, log_prefix, location_config, dry_run, extract_process):
    database = database_config[0]
    restore_command = (
        ('mysql', '--batch')
        + (tuple(database['restore_options'].split(' ')) if 'restore_options' in database else ())
        + (('--host', database['hostname']) if 'hostname' in database else ())
        + (('--port', str(database['port'])) if 'port' in database else ())
        + (('--protocol', 'tcp') if 'hostname' in database or 'port' in database else ())

View File

@@ -2,16 +2,8 @@ import logging
import requests

logger = logging.getLogger(__name__)
def initialize_monitor(
    ping_url, config_filename, monitoring_log_level, dry_run
@@ -56,14 +48,30 @@ def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run):
        'X-Tags': state_config.get('tags'),
    }
    username = hook_config.get('username')
    password = hook_config.get('password')

    auth = None
    if username and password:
        auth = requests.auth.HTTPBasicAuth(username, password)
        logger.info(f'{config_filename}: Using basic auth with user {username} for ntfy')
    elif username:
        logger.warning(
            f'{config_filename}: Password missing for ntfy authentication, defaulting to no auth'
        )
    elif password:
        logger.warning(
            f'{config_filename}: Username missing for ntfy authentication, defaulting to no auth'
        )
    if not dry_run:
        logging.getLogger('urllib3').setLevel(logging.ERROR)
        try:
            response = requests.post(f'{base_url}/{topic}', headers=headers, auth=auth)
            if not response.ok:
                response.raise_for_status()
        except requests.exceptions.RequestException as error:
            logger.warning(f'{config_filename}: ntfy error: {error}')
def destroy_monitor(

View File

@ -1,6 +1,12 @@
import csv
import logging import logging
import os
from borgmatic.execute import execute_command, execute_command_with_processes from borgmatic.execute import (
execute_command,
execute_command_and_capture_output,
execute_command_with_processes,
)
from borgmatic.hooks import dump from borgmatic.hooks import dump
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
@@ -34,6 +40,44 @@ def make_extra_environment(database):
    return extra
EXCLUDED_DATABASE_NAMES = ('template0', 'template1')
def database_names_to_dump(database, extra_environment, log_prefix, dry_run):
'''
Given a requested database config, return the corresponding sequence of database names to dump.
In the case of "all" when a database format is given, query for the names of databases on the
configured host and return them. For "all" without a database format, just return a sequence
containing "all".
'''
requested_name = database['name']
if requested_name != 'all':
return (requested_name,)
if not database.get('format'):
return ('all',)
if dry_run:
return ()
list_command = (
('psql', '--list', '--no-password', '--csv', '--tuples-only')
+ (('--host', database['hostname']) if 'hostname' in database else ())
+ (('--port', str(database['port'])) if 'port' in database else ())
+ (('--username', database['username']) if 'username' in database else ())
+ (tuple(database['list_options'].split(' ')) if 'list_options' in database else ())
)
logger.debug(f'{log_prefix}: Querying for "all" PostgreSQL databases to dump')
list_output = execute_command_and_capture_output(
list_command, extra_environment=extra_environment
)
return tuple(
row[0]
for row in csv.reader(list_output.splitlines(), delimiter=',', quotechar='"')
if row[0] not in EXCLUDED_DATABASE_NAMES
)
def dump_databases(databases, log_prefix, location_config, dry_run):
    '''
    Dump the given PostgreSQL databases to a named pipe. The databases are supplied as a sequence of
@@ -43,6 +87,8 @@ def dump_databases(databases, log_prefix, location_config, dry_run):
    Return a sequence of subprocess.Popen instances for the dump processes ready to spew to a named
    pipe. But if this is a dry run, then don't actually dump anything and return an empty sequence.

    Raise ValueError if the databases to dump cannot be determined.
    '''
    dry_run_label = ' (dry run; not actually dumping anything)' if dry_run else ''
    processes = []
@@ -50,51 +96,67 @@ def dump_databases(databases, log_prefix, location_config, dry_run):
    logger.info('{}: Dumping PostgreSQL databases{}'.format(log_prefix, dry_run_label))

    for database in databases:
        extra_environment = make_extra_environment(database)
        dump_path = make_dump_path(location_config)
        dump_database_names = database_names_to_dump(
            database, extra_environment, log_prefix, dry_run
        )

        if not dump_database_names:
            if dry_run:
                continue

            raise ValueError('Cannot find any PostgreSQL databases to dump.')

        for database_name in dump_database_names:
            dump_format = database.get('format', None if database_name == 'all' else 'custom')
            default_dump_command = 'pg_dumpall' if database_name == 'all' else 'pg_dump'
            dump_command = database.get('pg_dump_command') or default_dump_command
            dump_filename = dump.make_database_dump_filename(
                dump_path, database_name, database.get('hostname')
            )
            if os.path.exists(dump_filename):
                logger.warning(
                    f'{log_prefix}: Skipping duplicate dump of PostgreSQL database "{database_name}" to {dump_filename}'
                )
                continue

            command = (
                (dump_command, '--no-password', '--clean', '--if-exists',)
                + (('--host', database['hostname']) if 'hostname' in database else ())
                + (('--port', str(database['port'])) if 'port' in database else ())
                + (('--username', database['username']) if 'username' in database else ())
                + (('--format', dump_format) if dump_format else ())
                + (('--file', dump_filename) if dump_format == 'directory' else ())
                + (tuple(database['options'].split(' ')) if 'options' in database else ())
                + (() if database_name == 'all' else (database_name,))
                # Use shell redirection rather than the --file flag to sidestep synchronization issues
                # when pg_dump/pg_dumpall tries to write to a named pipe. But for the directory dump
                # format in particular, a named destination is required, and redirection doesn't work.
                + (('>', dump_filename) if dump_format != 'directory' else ())
            )

            logger.debug(
                f'{log_prefix}: Dumping PostgreSQL database "{database_name}" to {dump_filename}{dry_run_label}'
            )
            if dry_run:
                continue

            if dump_format == 'directory':
                dump.create_parent_directory_for_dump(dump_filename)
                execute_command(
                    command, shell=True, extra_environment=extra_environment,
                )
            else:
                dump.create_named_pipe_for_dump(dump_filename)
                processes.append(
                    execute_command(
                        command,
                        shell=True,
                        extra_environment=extra_environment,
                        run_to_completion=False,
                    )
                )

    return processes
@@ -140,16 +202,19 @@ def restore_database_dump(database_config, log_prefix, location_config, dry_run, extract_process):
    dump_filename = dump.make_database_dump_filename(
        make_dump_path(location_config), database['name'], database.get('hostname')
    )
    psql_command = database.get('psql_command') or 'psql'
    analyze_command = (
        (psql_command, '--no-password', '--quiet')
        + (('--host', database['hostname']) if 'hostname' in database else ())
        + (('--port', str(database['port'])) if 'port' in database else ())
        + (('--username', database['username']) if 'username' in database else ())
        + (('--dbname', database['name']) if not all_databases else ())
        + (tuple(database['analyze_options'].split(' ')) if 'analyze_options' in database else ())
        + ('--command', 'ANALYZE')
    )
    pg_restore_command = database.get('pg_restore_command') or 'pg_restore'
    restore_command = (
        (psql_command if all_databases else pg_restore_command, '--no-password')
        + (
            ('--if-exists', '--exit-on-error', '--clean', '--dbname', database['name'])
            if not all_databases
@@ -158,6 +223,7 @@ def restore_database_dump(database_config, log_prefix, location_config, dry_run, extract_process):
        + (('--host', database['hostname']) if 'hostname' in database else ())
        + (('--port', str(database['port'])) if 'port' in database else ())
        + (('--username', database['username']) if 'username' in database else ())
        + (tuple(database['restore_options'].split(' ')) if 'restore_options' in database else ())
        + (() if extract_process else (dump_filename,))
    )
    extra_environment = make_extra_environment(database)

borgmatic/hooks/sqlite.py Normal file
View File

@@ -0,0 +1,125 @@
import logging
import os
from borgmatic.execute import execute_command, execute_command_with_processes
from borgmatic.hooks import dump
logger = logging.getLogger(__name__)
def make_dump_path(location_config): # pragma: no cover
'''
Make the dump path from the given location configuration and the name of this hook.
'''
return dump.make_database_dump_path(
location_config.get('borgmatic_source_directory'), 'sqlite_databases'
)
def dump_databases(databases, log_prefix, location_config, dry_run):
'''
Dump the given SQLite3 databases to a file. The databases are supplied as a sequence of
configuration dicts, as per the configuration schema. Use the given log prefix in any log
entries. Use the given location configuration dict to construct the destination path. If this
is a dry run, then don't actually dump anything.
'''
dry_run_label = ' (dry run; not actually dumping anything)' if dry_run else ''
processes = []
logger.info('{}: Dumping SQLite databases{}'.format(log_prefix, dry_run_label))
for database in databases:
database_path = database['path']
if database['name'] == 'all':
logger.warning('The "all" database name has no meaning for SQLite3 databases')
        if not os.path.exists(database_path):
            logger.warning(
                f'{log_prefix}: No SQLite database at {database_path}; an empty database will be created and dumped'
            )
dump_path = make_dump_path(location_config)
dump_filename = dump.make_database_dump_filename(dump_path, database['name'])
if os.path.exists(dump_filename):
logger.warning(
f'{log_prefix}: Skipping duplicate dump of SQLite database at {database_path} to {dump_filename}'
)
continue
command = (
'sqlite3',
database_path,
'.dump',
'>',
dump_filename,
)
logger.debug(
f'{log_prefix}: Dumping SQLite database at {database_path} to {dump_filename}{dry_run_label}'
)
if dry_run:
continue
dump.create_parent_directory_for_dump(dump_filename)
processes.append(execute_command(command, shell=True, run_to_completion=False))
return processes
def remove_database_dumps(databases, log_prefix, location_config, dry_run): # pragma: no cover
'''
Remove the given SQLite3 database dumps from the filesystem. The databases are supplied as a
sequence of configuration dicts, as per the configuration schema. Use the given log prefix in
any log entries. Use the given location configuration dict to construct the destination path.
If this is a dry run, then don't actually remove anything.
'''
dump.remove_database_dumps(make_dump_path(location_config), 'SQLite', log_prefix, dry_run)
def make_database_dump_pattern(
databases, log_prefix, location_config, name=None
): # pragma: no cover
'''
Make a pattern that matches the given SQLite3 databases. The databases are supplied as a
sequence of configuration dicts, as per the configuration schema.
'''
return dump.make_database_dump_filename(make_dump_path(location_config), name)
def restore_database_dump(database_config, log_prefix, location_config, dry_run, extract_process):
'''
Restore the given SQLite3 database from an extract stream. The database is supplied as a
one-element sequence containing a dict describing the database, as per the configuration schema.
Use the given log prefix in any log entries. If this is a dry run, then don't actually restore
anything. Trigger the given active extract process (an instance of subprocess.Popen) to produce
output to consume.
'''
dry_run_label = ' (dry run; not actually restoring anything)' if dry_run else ''
if len(database_config) != 1:
raise ValueError('The database configuration value is invalid')
database_path = database_config[0]['path']
logger.debug(f'{log_prefix}: Restoring SQLite database at {database_path}{dry_run_label}')
if dry_run:
return
try:
os.remove(database_path)
logger.warning(f'{log_prefix}: Removed existing SQLite database at {database_path}')
except FileNotFoundError: # pragma: no cover
pass
restore_command = (
'sqlite3',
database_path,
)
# Don't give Borg local path so as to error on warnings, as "borg extract" only gives a warning
# if the restore paths don't exist in the archive.
execute_command_with_processes(
restore_command,
[extract_process],
output_log_level=logging.DEBUG,
input_file=extract_process.stdout,
)

View File

@@ -85,18 +85,19 @@ class Multi_stream_handler(logging.Handler):
            handler.setLevel(level)


class Console_color_formatter(logging.Formatter):
    def format(self, record):
        add_custom_log_levels()
        color = {
            logging.CRITICAL: colorama.Fore.RED,
            logging.ERROR: colorama.Fore.RED,
            logging.WARN: colorama.Fore.YELLOW,
            logging.ANSWER: colorama.Fore.MAGENTA,
            logging.INFO: colorama.Fore.GREEN,
            logging.DEBUG: colorama.Fore.CYAN,
        }.get(record.levelno)

        return color_text(color, record.msg)
@@ -110,6 +111,45 @@ def color_text(color, message):
    return '{}{}{}'.format(color, message, colorama.Style.RESET_ALL)
def add_logging_level(level_name, level_number):
    '''
    Globally add a custom logging level based on the given (all uppercase) level name and number.
    Do this idempotently.

    Inspired by https://stackoverflow.com/questions/2183233/how-to-add-a-custom-loglevel-to-pythons-logging-facility/35804945#35804945
    '''
    method_name = level_name.lower()

    if not hasattr(logging, level_name):
        logging.addLevelName(level_number, level_name)
        setattr(logging, level_name, level_number)

    if not hasattr(logging.getLoggerClass(), method_name):

        def log_for_level(self, message, *args, **kwargs):  # pragma: no cover
            if self.isEnabledFor(level_number):
                self._log(level_number, message, args, **kwargs)

        setattr(logging.getLoggerClass(), method_name, log_for_level)

    if not hasattr(logging, method_name):

        def log_to_root(message, *args, **kwargs):  # pragma: no cover
            logging.log(level_number, message, *args, **kwargs)

        setattr(logging, method_name, log_to_root)
ANSWER = logging.WARN - 5
def add_custom_log_levels(): # pragma: no cover
'''
Add a custom log level between WARN and INFO for user-requested answers.
'''
add_logging_level('ANSWER', ANSWER)
def configure_logging(
    console_log_level,
    syslog_log_level=None,
@@ -130,6 +170,8 @@ def configure_logging(
    if monitoring_log_level is None:
        monitoring_log_level = console_log_level

    add_custom_log_levels()

    # Log certain log levels to console stderr and others to stdout. This supports use cases like
    # grepping (non-error) output.
    console_error_handler = logging.StreamHandler(sys.stderr)
@@ -138,7 +180,8 @@ def configure_logging(
        {
            logging.CRITICAL: console_error_handler,
            logging.ERROR: console_error_handler,
            logging.WARN: console_error_handler,
            logging.ANSWER: console_standard_handler,
            logging.INFO: console_standard_handler,
            logging.DEBUG: console_standard_handler,
        }

View File

@@ -1,7 +1,9 @@
import logging

import borgmatic.logger

VERBOSITY_ERROR = -1
VERBOSITY_ANSWER = 0
VERBOSITY_SOME = 1
VERBOSITY_LOTS = 2
@@ -10,9 +12,11 @@ def verbosity_to_log_level(verbosity):
    '''
    Given a borgmatic verbosity value, return the corresponding Python log level.
    '''
    borgmatic.logger.add_custom_log_levels()

    return {
        VERBOSITY_ERROR: logging.ERROR,
        VERBOSITY_ANSWER: logging.ANSWER,
        VERBOSITY_SOME: logging.INFO,
        VERBOSITY_LOTS: logging.DEBUG,
    }.get(verbosity, logging.WARNING)

View File

@ -1,14 +1,14 @@
FROM alpine:3.16.0 as borgmatic FROM alpine:3.17.1 as borgmatic
COPY . /app COPY . /app
RUN apk add --no-cache py3-pip py3-ruamel.yaml py3-ruamel.yaml.clib RUN apk add --no-cache py3-pip py3-ruamel.yaml py3-ruamel.yaml.clib
RUN pip install --no-cache /app && generate-borgmatic-config && chmod +r /etc/borgmatic/config.yaml RUN pip install --no-cache /app && generate-borgmatic-config && chmod +r /etc/borgmatic/config.yaml
RUN borgmatic --help > /command-line.txt \ RUN borgmatic --help > /command-line.txt \
&& for action in rcreate transfer prune compact create check extract export-tar mount umount restore rlist list rinfo info borg; do \ && for action in rcreate transfer create prune compact check extract export-tar mount umount restore rlist list rinfo info break-lock borg; do \
echo -e "\n--------------------------------------------------------------------------------\n" >> /command-line.txt \ echo -e "\n--------------------------------------------------------------------------------\n" >> /command-line.txt \
&& borgmatic "$action" --help >> /command-line.txt; done && borgmatic "$action" --help >> /command-line.txt; done
FROM node:18.4.0-alpine as html FROM node:19.5.0-alpine as html
ARG ENVIRONMENT=production ARG ENVIRONMENT=production
@ -27,7 +27,7 @@ COPY . /source
RUN NODE_ENV=${ENVIRONMENT} npx eleventy --input=/source/docs --output=/output/docs \ RUN NODE_ENV=${ENVIRONMENT} npx eleventy --input=/source/docs --output=/output/docs \
&& mv /output/docs/index.html /output/index.html && mv /output/docs/index.html /output/index.html
FROM nginx:1.22.0-alpine FROM nginx:1.22.1-alpine
COPY --from=html /output /usr/share/nginx/html COPY --from=html /output /usr/share/nginx/html
COPY --from=borgmatic /etc/borgmatic/config.yaml /usr/share/nginx/html/docs/reference/config.yaml COPY --from=borgmatic /etc/borgmatic/config.yaml /usr/share/nginx/html/docs/reference/config.yaml

View File

@@ -63,11 +63,6 @@
        top: -2px;
        bottom: 2px;
    }
}

a.buzzword {
    text-decoration: underline;
@@ -91,26 +86,9 @@ a.buzzword
.buzzword {
    background-color: #f7f7f7;
}

.inlinelist .inlinelist-item {
    background-color: #e9e9e9;
}

.inlinelist .inlinelist-item:hover,
.inlinelist .inlinelist-item:focus,
.buzzword-list li:hover,
@@ -217,12 +195,6 @@ main p a.buzzword
    height: 1.75em;
    font-weight: 600;
}

h1 .numberflag,
h2 .numberflag,
h3 .numberflag,
@@ -244,11 +216,6 @@ h2 .numberflag:after
    background-color: #fff;
    width: calc(100% + 0.4em); /* 16px /40 */
}

/* Super featured list on home page */
.list-superfeatured .avatar {

View File

@@ -12,16 +12,6 @@
    line-height: 1.285714285714; /* 18px /14 */
    font-family: system-ui, -apple-system, sans-serif;
}

table .minilink {
    margin-top: 6px;
}
@@ -32,12 +22,6 @@ table .minilink
.minilink[href]:focus {
    background-color: #bbb;
}

pre + .minilink {
    color: #fff;
    border-radius: 0 0 0.2857142857143em 0.2857142857143em; /* 4px /14 */
@@ -74,11 +58,6 @@ h4 .minilink
    text-transform: none;
    box-shadow: 0 0 0 1px rgba(0,0,0,0.3);
}

.minilink-addedin:not(:first-child) {
    margin-left: .5em;
}

View File

@@ -79,22 +79,11 @@
    border-bottom: 1px solid #ddd;
    margin-bottom: 0.25em; /* 4px /16 */
}

/* Active links */
.elv-toc-list li.elv-toc-active > a {
    background-color: #dff7ff;
}

.elv-toc-list ul .elv-toc-active > a:after {
    content: "";
}

View File

@@ -285,11 +285,6 @@ footer.elv-layout
.elv-hero {
    background-color: #222;
}

.elv-hero img,
.elv-hero svg {
    width: 42.95774646vh;

View File

@@ -68,6 +68,9 @@ borgmatic.
borgmatic logs the soft failure, skips all further actions in that
configuration file, and proceeds onward to any other borgmatic configuration
files you may have.

Note that `before_backup` only runs on the `create` action. See below about
optionally using `before_actions` instead.

You can imagine a similar check for the sometimes-online server case:
@@ -93,6 +96,12 @@ hooks:

(Writing the battery script is left as an exercise to the reader.)
<span class="minilink minilink-addedin">New in version 1.7.0</span> The
`before_actions` and `after_actions` hooks run before/after all the actions
(like `create`, `prune`, etc.) for each repository. So if you'd like your soft
failure command hook to run regardless of action, consider using
`before_actions` instead of `before_backup`.
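
For instance, here's a minimal sketch of that approach, assuming a mounted-filesystem check like the one above and borgmatic's soft-failure convention of exiting with status 75:

```yaml
hooks:
    before_actions:
        # Exit status 75 (EX_TEMPFAIL) signals a soft failure, so borgmatic
        # skips this configuration file instead of erroring.
        - findmnt /mnt/backup > /dev/null || exit 75
```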
## Caveats and details

View File

@@ -15,8 +15,7 @@ consistent snapshot that is more suited for backups.
Fortunately, borgmatic includes built-in support for creating database dumps
prior to running backups. For example, here is everything you need to dump and
back up a couple of local PostgreSQL databases and a MySQL/MariaDB database.
```yaml
hooks:
    postgresql_databases:
        - name: users
        - name: orders
    mysql_databases:
        - name: posts
```

<span class="minilink minilink-addedin">New in version 1.5.22</span> You can
also dump MongoDB databases. For example:

```yaml
hooks:
    mongodb_databases:
        - name: messages
```
<span class="minilink minilink-addedin">New in version 1.7.9</span>
Additionally, you can dump SQLite databases. For example:

```yaml
hooks:
    sqlite_databases:
        - name: mydb
          path: /var/lib/sqlite3/mydb.sqlite
```
As part of each backup, borgmatic streams a database dump for each configured
database directly to Borg, so it's included in the backup without consuming
additional disk space. (The exceptions are the PostgreSQL/MongoDB "directory"
@@ -74,8 +90,19 @@ hooks:
          password: trustsome1
          authentication_database: mongousers
          options: "--ssl"
    sqlite_databases:
        - name: mydb
          path: /var/lib/sqlite3/mydb.sqlite
```
See your [borgmatic configuration
file](https://torsion.org/borgmatic/docs/reference/configuration/) for
additional customization of the options passed to database commands (when
listing databases, restoring databases, etc.).
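
For instance, here's a sketch of such customization using the `options`,
`list_options`, and `restore_options` settings that the hooks read from
configuration (the flag values here are placeholders, not recommendations):

```yaml
hooks:
    postgresql_databases:
        - name: users
          options: "--role=someone"
          list_options: "--quiet"
          restore_options: "--role=someone"
```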
### All databases

If you want to dump all databases on a host, use `all` for the database name:

```yaml
hooks:
    postgresql_databases:
        - name: all
```

Note that you may need to use a `username` of the `postgres` superuser for
this to work with PostgreSQL.
The SQLite hook in particular does not consider "all" a special database name.

<span class="minilink minilink-addedin">New in version 1.7.6</span> With
PostgreSQL and MySQL, you can optionally dump "all" databases to separate
files instead of one combined dump file, allowing more convenient restores of
individual databases. Enable this by specifying your desired database dump
`format`:
```yaml
hooks:
    postgresql_databases:
        - name: all
          format: custom
    mysql_databases:
        - name: all
          format: sql
```
### No source directories

<span class="minilink minilink-addedin">New in version 1.7.1</span> If you
would like to back up databases only and not source directories, you can omit
`source_directories` entirely.

In older versions of borgmatic, instead specify an empty `source_directories`
value, as it is a mandatory option prior to version 1.7.1:
```yaml
location:
    source_directories: []

hooks:
    postgresql_databases:
        - name: all
```
### External passwords

@@ -126,11 +175,11 @@ bring back any missing configuration files in order to restore a database.
## Supported databases

As of now, borgmatic supports PostgreSQL, MySQL/MariaDB, MongoDB, and SQLite
databases directly. But see below about general-purpose preparation and
cleanup hooks as a work-around with other database systems. Also, please [file
a ticket](https://torsion.org/borgmatic/#issues) for additional database
systems that you'd like supported.
## Database restoration

@@ -148,15 +197,15 @@ borgmatic rlist
That should yield output looking something like:

```text
host-2023-01-01T04:05:06.070809 Tue, 2023-01-01 04:05:06 [...]
host-2023-01-02T04:06:07.080910 Wed, 2023-01-02 04:06:07 [...]
```

Assuming that you want to restore all database dumps from the archive with the
most up-to-date files and therefore the latest timestamp, run a command like:

```bash
borgmatic restore --archive host-2023-01-02T04:06:07.080910
```

(No borgmatic `restore` action? Upgrade borgmatic!)
@@ -185,7 +234,7 @@ But if you have multiple repositories configured, then you'll need to specify
the repository path containing the archive to restore. Here's an example:

```bash
borgmatic restore --repository repo.borg --archive host-2023-...
```

### Restore particular databases

@@ -195,9 +244,39 @@ restore one of them, use the `--database` flag to select one or more
databases. For instance:

```bash
borgmatic restore --archive host-2023-... --database users
```
<span class="minilink minilink-addedin">New in version 1.7.6</span> You can
also restore individual databases even if you dumped them as "all"—as long as
you dumped them into separate files via use of the "format" option. See above
for more information.
### Restore all databases
To restore all databases:
```bash
borgmatic restore --archive host-2023-... --database all
```
Or omit the `--database` flag entirely:
```bash
borgmatic restore --archive host-2023-...
```
Prior to borgmatic version 1.7.6, this restores a combined "all" database
dump from the archive.
<span class="minilink minilink-addedin">New in version 1.7.6</span> Restoring
"all" databases restores each database found in the selected archive. That
includes any combined dump file named "all" and any other individual database
dumps found in the archive.
### Limitations

There are a few important limitations with borgmatic's current database
@@ -215,8 +294,13 @@ databases that share the exact same name on different hosts.
setting to support dump and restore streaming, you'll need to ensure that any
special files are excluded from backups (named pipes, block devices,
character devices, and sockets) to prevent hanging. Try a command like
`find /your/source/path -type b -or -type c -or -type p -or -type s` to find
such files. Common directories to exclude are `/dev` and `/run`, but that may
not be exhaustive. <span class="minilink minilink-addedin">New in version
1.7.3</span> When database hooks are enabled, borgmatic automatically excludes
special files that may cause Borg to hang, so you no longer need to manually
exclude them. (This includes symlinks with special files as a destination.) You
can override/prevent this behavior by explicitly setting `read_special` to true.
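
For example, a sketch of explicitly re-enabling `read_special` while excluding
the common special-file directories mentioned above:

```yaml
location:
    read_special: true
    exclude_patterns:
        - /dev
        - /run
```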
### Manual restoration

@@ -232,7 +316,10 @@ user and you're extracting to `/tmp`, then the dump will be in
`/tmp/root/.borgmatic`.

After extraction, you can manually restore the dump file using native database
commands like `pg_restore`, `mysql`, `mongorestore`, `sqlite3`, or similar.
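
For example, a manual PostgreSQL restore might look something like this
sketch, assuming a "users" database dumped by the root user and extracted to
`/tmp` (the exact dump path layout may differ on your system):

```bash
pg_restore --no-password --if-exists --clean --dbname users \
    /tmp/root/.borgmatic/postgresql_databases/localhost/users
```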
Also see the documentation on [listing database
dumps](https://torsion.org/borgmatic/docs/how-to/inspect-your-backups/#listing-database-dumps).
## Preparation and cleanup hooks
@@ -272,3 +359,7 @@ Alternatively, if excluding special files is too onerous, you can create two
separate borgmatic configuration files—one for your source files and a
separate one for backing up databases. That way, the database `read_special`
option will not be active when backing up special files.
<span class="minilink minilink-addedin">New in version 1.7.3</span> See
Limitations above about borgmatic's automatic exclusion of special files to
prevent Borg hangs.

View File

@@ -9,44 +9,56 @@ eleventyNavigation:
Borg itself is great for efficiently de-duplicating data across successive
backup archives, even when dealing with very large repositories. But you may
find that while borgmatic's default actions of `create`, `prune`, `compact`,
and `check` work well on small repositories, they're not so great on larger
ones. That's because running the default pruning, compact, and consistency
checks takes a long time on large repositories.
<span class="minilink minilink-addedin">Prior to version 1.7.9</span> The
default action ordering was `prune`, `compact`, `create`, and `check`.
### A la carte actions

If you find yourself wanting to customize the actions, you have some options.
First, you can run borgmatic's `prune`, `compact`, `create`, or `check`
actions separately. For instance, the following optional actions are
available (among others):

```bash
borgmatic create
borgmatic prune
borgmatic compact
borgmatic check
```
You can run borgmatic with only one of these actions provided, or you can mix
and match any number of them in a single borgmatic run. This supports
approaches like skipping certain actions while running others. For instance,
this skips `prune` and `compact` and only runs `create` and `check`:

```bash
borgmatic create check
```
<span class="minilink minilink-addedin">New in version 1.7.9</span> borgmatic
now respects your specified command-line action order, running actions in the
order you specify. In previous versions, borgmatic ran your specified actions
in a fixed ordering regardless of the order they appeared on the command-line.
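
For example, under version 1.7.9's ordering behavior, this runs `check` before
`create`, in exactly that order:

```bash
borgmatic check create
```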
But instead of running actions together, another option is to run backups with
`create` on a frequent schedule (e.g. with `borgmatic create` called from one
cron job), while only running expensive consistency checks with `check` on a
much less frequent basis (e.g. with `borgmatic check` called from a separate
cron job).
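
As a sketch, such a split schedule might look like this in a cron file (the
times here are hypothetical):

```text
# Hourly backups, weekly consistency checks.
0 * * * * root borgmatic create
30 3 * * 0 root borgmatic check
```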
### Consistency check configuration

Another option is to customize your consistency checks. By default, if you
omit consistency checks from configuration, borgmatic runs full-repository
checks (`repository`) and per-archive checks (`archives`) within each
repository, no more than once a month. This is equivalent to what `borg check`
does if run without options.
But if you find that archive checks are too slow, for example, you can
configure borgmatic to run repository checks only. Configure this in the
@@ -58,14 +70,25 @@ consistency:
        - name: repository
```
<span class="minilink minilink-addedin">Prior to version 1.6.2</span> `checks`
was a plain list of strings without the `name:` part. For example:

```yaml
consistency:
    checks:
        - repository
```
Here are the available checks from fastest to slowest:

* `repository`: Checks the consistency of the repository itself.
* `archives`: Checks all of the archives in the repository.
* `extract`: Performs an extraction dry-run of the most recent archive.
* `data`: Verifies the data integrity of all archives contents, decrypting and decompressing all data.

Note that the `data` check is a more thorough version of the `archives` check,
so enabling the `data` check implicitly enables the `archives` check as well.
See [Borg's check
documentation](https://borgbackup.readthedocs.io/en/stable/usage/check.html)
@@ -120,7 +143,16 @@ consistency:
        - name: disabled
```

<span class="minilink minilink-addedin">Prior to version 1.6.2</span> `checks`
was a plain list of strings without the `name:` part. For instance:

```yaml
consistency:
    checks:
        - disabled
```

If you have multiple repositories in your borgmatic configuration file,
you can keep running consistency checks, but only against a subset of the
repositories:
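
A sketch of that configuration, assuming the `check_repositories` option
takes a list of repository paths from your `location` section:

```yaml
consistency:
    check_repositories:
        - /path/of/repository_to_check.borg
```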

View File

@@ -20,15 +20,15 @@ borgmatic rlist
That should yield output looking something like:

```text
host-2023-01-01T04:05:06.070809 Tue, 2023-01-01 04:05:06 [...]
host-2023-01-02T04:06:07.080910 Wed, 2023-01-02 04:06:07 [...]
```

Assuming that you want to extract the archive with the most up-to-date files
and therefore the latest timestamp, run a command like:

```bash
borgmatic extract --archive host-2023-01-02T04:06:07.080910
```

(No borgmatic `extract` action? Upgrade borgmatic!)
@@ -54,7 +54,7 @@ But if you have multiple repositories configured, then you'll need to specify
the repository path containing the archive to extract. Here's an example:

```bash
borgmatic extract --repository repo.borg --archive host-2023-...
```

## Extract particular files

@@ -74,6 +74,13 @@ run the `extract` command above, borgmatic will extract `/var/path/1` and
`/var/path/2`.
### Searching for files
If you're not sure which archive contains the files you're looking for, you
can [search across
archives](https://torsion.org/borgmatic/docs/how-to/inspect-your-backups/#searching-for-a-file).
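
For instance, a search across archives might look like this:

```bash
borgmatic list --find foo.txt
```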
## Extract to a particular destination

By default, borgmatic extracts files into the current directory. To instead

View File

@@ -84,13 +84,26 @@ be a [Borg
pattern](https://borgbackup.readthedocs.io/en/stable/usage/help.html#borg-patterns).

To limit the archives searched, use the standard `list` parameters for
filtering archives such as `--last`, `--archive`, `--match-archives`, etc. For
example, to search only the last five archives:

```bash
borgmatic list --find foo.txt --last 5
```
## Listing database dumps
If you have enabled borgmatic's [database
hooks](https://torsion.org/borgmatic/docs/how-to/backup-your-databases/), you
can list backed up database dumps via borgmatic. For example:
```bash
borgmatic list --archive latest --find .borgmatic/*_databases
```
This gives you a listing of all database dump files contained in the latest
archive, complete with file sizes.
## Logging

View File

@@ -42,3 +42,13 @@ potentially across providers.
See [Borg repository URLs
documentation](https://borgbackup.readthedocs.io/en/stable/usage/general.html#repository-urls)
for more information on how to specify local and remote repository paths.
### Different options per repository
What if you want borgmatic to back up to multiple repositories—while also
setting different options for each one? In that case, you'll need to use
[a separate borgmatic configuration file for each
repository](https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/)
instead of the multiple repositories in one configuration file as described
above. That's because all of the repositories in a particular configuration
file get the same options applied.
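
As a sketch, that could mean a layout like the following (the file names here
are hypothetical), with each file configuring a single repository and its own
options:

```text
/etc/borgmatic.d/app.yaml      # one repository and its options
/etc/borgmatic.d/offsite.yaml  # another repository, different options
```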

View File

@@ -106,11 +106,60 @@ But if you do want to merge in a YAML key *and* its values, keep reading!
## Include merging

If you need to get even fancier and merge in common configuration options, you
can perform a YAML merge of included configuration using the YAML `<<` key.
For instance, here's an example of a main configuration file that pulls in
retention and consistency options via a single include:
```yaml
<<: !include /etc/borgmatic/common.yaml

location:
   ...
```
This is what `common.yaml` might look like:
```yaml
retention:
    keep_hourly: 24
    keep_daily: 7

consistency:
    checks:
        - name: repository
```
Once this include gets merged in, the resulting configuration would have all
of the `location` options from the original configuration file *and* the
`retention` and `consistency` options from the include.
Prior to borgmatic version 1.6.0, when there's a section collision between the
local file and the merged include, the local file's section takes precedence.
So if the `retention` section appears in both the local file and the include
file, the included `retention` is ignored in favor of the local `retention`.
But see below about deep merge in version 1.6.0+.
Note that this `<<` include merging syntax is only for merging in mappings
(configuration options and their values). But if you'd like to include a
single value directly, please see the section above about standard includes.
Additionally, there is a limitation preventing multiple `<<` include merges
per section. So for instance, that means you can do one `<<` merge at the
global level, another `<<` within each configuration section, etc. (This is a
YAML limitation.)
### Deep merge
<span class="minilink minilink-addedin">New in version 1.6.0</span> borgmatic
performs a deep merge of merged include files, meaning that values are merged
at all levels in the two configuration files. This allows you to include
common configuration—up to full borgmatic configuration files—while overriding
only the parts you want to customize.
For instance, here's an example of a main configuration file that pulls in two
retention options via an include and then overrides one of them locally:
```yaml
<<: !include /etc/borgmatic/common.yaml

retention:
    keep_daily: 5
```

@@ -136,24 +185,8 @@ Once this include gets merged in, the resulting configuration would have a
When there's an option collision between the local file and the merged
include, the local file's option takes precedence.

<span class="minilink minilink-addedin">New in version 1.6.1</span> Colliding
list values are appended together.
## Configuration overrides

View File

@@ -83,7 +83,7 @@ tests](https://torsion.org/borgmatic/docs/how-to/extract-a-backup/).
## Error hooks

When an error occurs during a `create`, `prune`, `compact`, or `check` action,
borgmatic can run configurable shell commands to fire off custom error
notifications or take other actions, so you can get alerted as soon as
something goes wrong. Here's a not-so-useful example:
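
A sketch of such a hook (the `echo` is merely illustrative):

```yaml
hooks:
    on_error:
        - echo "Error while creating a backup or running a backup hook."
```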
@@ -116,8 +116,8 @@ the repository. Here's the full set of supported variables you can use here:
* `output`: output of the command that failed (may be blank if an error
  occurred without running a command)

Note that borgmatic runs the `on_error` hooks only for `create`, `prune`,
`compact`, or `check` actions or hooks in which an error occurs, and not other
actions. borgmatic does not run `on_error` hooks if an error occurs within a
`before_everything` or `after_everything` hook. For more about hooks, see the
[borgmatic hooks
@@ -144,7 +144,7 @@ With this hook in place, borgmatic pings your Healthchecks project when a
backup begins, ends, or errors. Specifically, after the <a
href="https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/">`before_backup`
hooks</a> run, borgmatic lets Healthchecks know that it has started if any of
the `create`, `prune`, `compact`, or `check` actions are run.
Then, if the actions complete successfully, borgmatic notifies Healthchecks of Then, if the actions complete successfully, borgmatic notifies Healthchecks of
the success after the `after_backup` hooks run, and includes borgmatic logs in the success after the `after_backup` hooks run, and includes borgmatic logs in
@@ -154,8 +154,8 @@ in the Healthchecks UI, although be aware that Healthchecks currently has a

 If an error occurs during any action or hook, borgmatic notifies Healthchecks
 after the `on_error` hooks run, also tacking on logs including the error
-itself. But the logs are only included for errors that occur when a `prune`,
-`compact`, `create`, or `check` action is run.
+itself. But the logs are only included for errors that occur when a `create`,
+`prune`, `compact`, or `check` action is run.

 You can customize the verbosity of the logs that are sent to Healthchecks with
 borgmatic's `--monitoring-verbosity` flag. The `--list` and `--stats` flags
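
The hook these hunks describe is configured with a single ping URL; a minimal
sketch for borgmatic 1.7.x (the check UUID is hypothetical):

```yaml
hooks:
    healthchecks: https://hc-ping.com/your-check-uuid
```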


@@ -30,8 +30,8 @@ based on your borgmatic configuration files or command-line arguments:

 ### borg action

-The way you run Borg with borgmatic is via the `borg` action. Here's a simple
-example:
+<span class="minilink minilink-addedin">New in version 1.5.15</span> The way
+you run Borg with borgmatic is via the `borg` action. Here's a simple example:

 ```bash
 borgmatic borg break-lock
 ```
@@ -87,6 +87,9 @@ borgmatic's `borg` action is not without limitations:

   borgmatic action. In this case, only the Borg command is run.
 * Unlike normal borgmatic actions that support JSON, the `borg` action will
   not disable certain borgmatic logs to avoid interfering with JSON output.
+* Unlike other borgmatic actions, the `borg` action captures (and logs) all
+  output, so interactive prompts or flags like `--progress` will not work as
+  expected.

 In general, this `borgmatic borg` feature should be considered an escape
 valve—a feature of second resort. In the long run, it's preferable to wrap
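
As a concrete instance of this escape valve, everything after `borg` is
handed through to Borg itself against the configured repository (the flags
here are illustrative):

```bash
# Run an arbitrary Borg command; borgmatic fills in the configured repository.
borgmatic borg list --short
```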


@@ -93,6 +93,7 @@ installing borgmatic:

 * [OpenBSD](http://ports.su/sysutils/borgmatic)
 * [openSUSE](https://software.opensuse.org/package/borgmatic)
 * [macOS (via Homebrew)](https://formulae.brew.sh/formula/borgmatic)
+* [macOS (via MacPorts)](https://ports.macports.org/port/borgmatic/)
 * [Ansible role](https://github.com/borgbase/ansible-role-borgbackup)
 * [virtualenv](https://virtualenv.pypa.io/en/stable/)
@@ -257,9 +258,9 @@ See `borgmatic --help` and `borgmatic create --help` for more information.

 If you omit `create` and other actions, borgmatic runs through a set of
 default actions: `prune` any old backups as per the configured retention
-policy, `compact` segments to free up space (with Borg 1.2+), `create` a
-backup, *and* `check` backups for consistency problems due to things like file
-damage. For instance:
+policy, `compact` segments to free up space (with Borg 1.2+, borgmatic
+1.5.23+), `create` a backup, *and* `check` backups for consistency problems
+due to things like file damage. For instance:

 ```bash
 sudo borgmatic --verbosity 1 --list --stats

@@ -160,17 +160,31 @@ Then, run the `rcreate` action (formerly `init`) to create that new Borg 2

 repository:

 ```bash
-borgmatic rcreate --verbosity 1 --encryption repokey-aes-ocb \
+borgmatic rcreate --verbosity 1 --encryption repokey-blake2-aes-ocb \
     --source-repository original.borg --repository upgraded.borg
 ```

 (Note that `repokey-chacha20-poly1305` may be faster than `repokey-aes-ocb` on
 certain platforms like ARM64.)

 This creates an empty repository and doesn't actually transfer any data yet.
 The `--source-repository` flag is necessary to reuse key material from your
 Borg 1 repository so that the subsequent data transfer can work.

+The `--encryption` value above selects the same chunk ID algorithm (`blake2`)
+commonly used in Borg 1, thereby making deduplication work across transferred
+archives and new archives.
+
+If you get an error about "You must keep the same ID hash" from Borg, that
+means the encryption value you specified doesn't correspond to your source
+repository's chunk ID algorithm. In that case, try not using `blake2`:
+
+```bash
+borgmatic rcreate --verbosity 1 --encryption repokey-aes-ocb \
+    --source-repository original.borg --repository upgraded.borg
+```
+
+Read about [Borg encryption
+modes](https://borgbackup.readthedocs.io/en/2.0.0b5/usage/rcreate.html#encryption-mode-tldr)
+for more details.

 To transfer data from your original Borg 1 repository to your newly created
 Borg 2 repository:
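
The transfer commands themselves fall outside this hunk; per the borgmatic
upgrade docs they follow this shape (a sketch; verify the `--upgrader` name
against your installed version):

```bash
# First, a dry run to see what would be transferred.
borgmatic transfer --verbosity 1 --upgrader From12To20 \
    --source-repository original.borg --repository upgraded.borg --dry-run

# Then the actual transfer.
borgmatic transfer --verbosity 1 --upgrader From12To20 \
    --source-repository original.borg --repository upgraded.borg

# Finally, another dry run to confirm everything made it over.
borgmatic transfer --verbosity 1 --upgrader From12To20 \
    --source-repository original.borg --repository upgraded.borg --dry-run
```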
@@ -189,9 +203,9 @@ might take a while), and the final command with `--dry-run` again provides

 confirmation of success—or tells you if something hasn't been transferred yet.

 Note that by omitting the `--upgrader` flag, you can also do archive transfers
-between Borg 2 repositories without upgrading, even down to individual
+between related Borg 2 repositories without upgrading, even down to individual
 archives. For more on that functionality, see the [Borg transfer
-documentation](https://borgbackup.readthedocs.io/en/2.0.0b1/usage/transfer.html).
+documentation](https://borgbackup.readthedocs.io/en/2.0.0b5/usage/transfer.html).

 That's it! Now you can use your new Borg 2 repository as normal with
 borgmatic. If you've got multiple repositories, repeat the above process for


@@ -13,9 +13,3 @@ each action sub-command:

 ```
 {% include borgmatic/command-line.txt %}
 ```
-
-## Related documentation
-
-* [Set up backups with borgmatic](https://torsion.org/borgmatic/docs/how-to/set-up-backups/)
-* [borgmatic configuration reference](https://torsion.org/borgmatic/docs/reference/configuration/)


@@ -15,9 +15,3 @@ Here is a full sample borgmatic configuration file including all available options:

 Note that you can also [download this configuration
 file](https://torsion.org/borgmatic/docs/reference/config.yaml) for use locally.
-
-## Related documentation
-
-* [Set up backups with borgmatic](https://torsion.org/borgmatic/docs/how-to/set-up-backups/)
-* [borgmatic command-line reference](https://torsion.org/borgmatic/docs/reference/command-line/)

BIN docs/static/sqlite.png — new binary file (4.6 KiB), not shown.


@@ -61,4 +61,4 @@ LogRateLimitIntervalSec=0

 # Delay start to prevent backups running during boot. Note that systemd-inhibit requires dbus and
 # dbus-user-session to be installed.
 ExecStartPre=sleep 1m
-ExecStart=systemd-inhibit --who="borgmatic" --why="Prevent interrupting scheduled backup" /root/.local/bin/borgmatic --verbosity -1 --syslog-verbosity 1
+ExecStart=systemd-inhibit --who="borgmatic" --what="sleep:shutdown" --why="Prevent interrupting scheduled backup" /root/.local/bin/borgmatic --verbosity -1 --syslog-verbosity 1


@@ -53,6 +53,7 @@ for sub_command in prune create check list info; do

     | grep -v '^--first' \
     | grep -v '^--format' \
     | grep -v '^--glob-archives' \
+    | grep -v '^--match-archives' \
     | grep -v '^--last' \
     | grep -v '^--format' \
     | grep -v '^--patterns-from' \


@@ -11,7 +11,7 @@

 set -e

 apk add --no-cache python3 py3-pip borgbackup postgresql-client mariadb-client mongodb-tools \
-    py3-ruamel.yaml py3-ruamel.yaml.clib bash
+    py3-ruamel.yaml py3-ruamel.yaml.clib bash sqlite

 # If certain dependencies of black are available in this version of Alpine, install them.
 apk add --no-cache py3-typed-ast py3-regex || true
 python3 -m pip install --no-cache --upgrade pip==22.2.2 setuptools==64.0.1


@@ -10,6 +10,8 @@ filterwarnings =

 [flake8]
 ignore = E501,W503
 exclude = *.*/*
+multiline-quotes = '''
+docstring-quotes = '''

 [tool:isort]
 force_single_line = False


@@ -1,6 +1,6 @@

 from setuptools import find_packages, setup

-VERSION = '1.7.2'
+VERSION = '1.7.9.dev0'

 setup(


@@ -5,6 +5,7 @@ click==7.1.2; python_version >= '3.8'

 colorama==0.4.4
 coverage==5.3
 flake8==4.0.1
+flake8-quotes==3.3.2
 flexmock==0.10.4
 isort==5.9.1
 mccabe==0.6.1

@@ -14,8 +15,8 @@ py==1.10.0

 pycodestyle==2.8.0
 pyflakes==2.4.0
 jsonschema==3.2.0
-pytest==6.2.5
-pytest-cov==3.0.0
+pytest==7.2.0
+pytest-cov==4.0.0
 regex; python_version >= '3.8'
 requests==2.25.0
 ruamel.yaml>0.15.0,<0.18.0


@@ -9,20 +9,25 @@ import pytest

 def write_configuration(
-    config_path, repository_path, borgmatic_source_directory, postgresql_dump_format='custom'
+    source_directory,
+    config_path,
+    repository_path,
+    borgmatic_source_directory,
+    postgresql_dump_format='custom',
+    mongodb_dump_format='archive',
 ):
     '''
     Write out borgmatic configuration into a file at the config path. Set the options so as to work
     for testing. This includes injecting the given repository path, borgmatic source directory for
     storing database dumps, dump format (for PostgreSQL), and encryption passphrase.
     '''
-    config = '''
+    config = f'''
 location:
     source_directories:
-        - {}
+        - {source_directory}
     repositories:
-        - {}
+        - {repository_path}
-    borgmatic_source_directory: {}
+    borgmatic_source_directory: {borgmatic_source_directory}

 storage:
     encryption_passphrase: "test"

@@ -33,11 +38,16 @@ hooks:

           hostname: postgresql
           username: postgres
           password: test
-          format: {}
+          format: {postgresql_dump_format}
         - name: all
           hostname: postgresql
           username: postgres
           password: test
+        - name: all
+          format: custom
+          hostname: postgresql
+          username: postgres
+          password: test
     mysql_databases:
         - name: test
           hostname: mysql

@@ -47,19 +57,26 @@ hooks:

           hostname: mysql
           username: root
           password: test
+        - name: all
+          format: sql
+          hostname: mysql
+          username: root
+          password: test
     mongodb_databases:
         - name: test
          hostname: mongodb
           username: root
           password: test
           authentication_database: admin
+          format: {mongodb_dump_format}
         - name: all
           hostname: mongodb
           username: root
           password: test
-'''.format(
-        config_path, repository_path, borgmatic_source_directory, postgresql_dump_format
-    )
+    sqlite_databases:
+        - name: sqlite_test
+          path: /tmp/sqlite_test.db
+'''

     with open(config_path, 'w') as config_file:
         config_file.write(config)

@@ -71,11 +88,16 @@ def test_database_dump_and_restore():

     repository_path = os.path.join(temporary_directory, 'test.borg')
     borgmatic_source_directory = os.path.join(temporary_directory, '.borgmatic')

+    # Write out a special file to ensure that it gets properly excluded and Borg doesn't hang on it.
+    os.mkfifo(os.path.join(temporary_directory, 'special_file'))
+
     original_working_directory = os.getcwd()

     try:
         config_path = os.path.join(temporary_directory, 'test.yaml')
-        write_configuration(config_path, repository_path, borgmatic_source_directory)
+        write_configuration(
+            temporary_directory, config_path, repository_path, borgmatic_source_directory
+        )

         subprocess.check_call(
             ['borgmatic', '-v', '2', '--config', config_path, 'init', '--encryption', 'repokey']

@@ -114,10 +136,12 @@ def test_database_dump_and_restore_with_directory_format():

     try:
         config_path = os.path.join(temporary_directory, 'test.yaml')
         write_configuration(
+            temporary_directory,
             config_path,
             repository_path,
             borgmatic_source_directory,
             postgresql_dump_format='directory',
+            mongodb_dump_format='directory',
         )

         subprocess.check_call(

@@ -146,7 +170,9 @@ def test_database_dump_with_error_causes_borgmatic_to_exit():

     try:
         config_path = os.path.join(temporary_directory, 'test.yaml')
-        write_configuration(config_path, repository_path, borgmatic_source_directory)
+        write_configuration(
+            temporary_directory, config_path, repository_path, borgmatic_source_directory
+        )

         subprocess.check_call(
             ['borgmatic', '-v', '2', '--config', config_path, 'init', '--encryption', 'repokey']
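
The new `sqlite_databases` hook exercised by this test takes a database name
plus a filesystem path, as in this configuration sketch (the name and path
here are hypothetical):

```yaml
hooks:
    sqlite_databases:
        - name: users
          path: /var/lib/myapp/users.db
```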


@@ -422,6 +422,13 @@ def test_parse_arguments_with_list_flag_but_no_relevant_action_raises_value_error():

         module.parse_arguments('--list', 'rcreate')


+def test_parse_arguments_disallows_list_with_progress_for_create_action():
+    flexmock(module.collect).should_receive('get_default_config_paths').and_return(['default'])
+
+    with pytest.raises(ValueError):
+        module.parse_arguments('create', '--list', '--progress')
+
+
 def test_parse_arguments_allows_json_with_list_or_info():
     flexmock(module.collect).should_receive('get_default_config_paths').and_return(['default'])

@@ -450,7 +457,7 @@ def test_parse_arguments_disallows_json_with_both_rinfo_and_info():

         module.parse_arguments('rinfo', 'info', '--json')


-def test_parse_arguments_disallows_transfer_with_both_archive_and_glob_archives():
+def test_parse_arguments_disallows_transfer_with_both_archive_and_match_archives():
     flexmock(module.collect).should_receive('get_default_config_paths').and_return(['default'])

     with pytest.raises(ValueError):

@@ -460,16 +467,16 @@ def test_parse_arguments_disallows_transfer_with_both_archive_and_glob_archives():

             'source.borg',
             '--archive',
             'foo',
-            '--glob-archives',
-            '*bar',
+            '--match-archives',
+            'sh:*bar',
         )


-def test_parse_arguments_disallows_info_with_both_archive_and_glob_archives():
+def test_parse_arguments_disallows_info_with_both_archive_and_match_archives():
     flexmock(module.collect).should_receive('get_default_config_paths').and_return(['default'])

     with pytest.raises(ValueError):
-        module.parse_arguments('info', '--archive', 'foo', '--glob-archives', '*bar')
+        module.parse_arguments('info', '--archive', 'foo', '--match-archives', 'sh:*bar')

@@ -479,11 +486,11 @@ def test_parse_arguments_disallows_info_with_both_archive_and_prefix():

         module.parse_arguments('info', '--archive', 'foo', '--prefix', 'bar')


-def test_parse_arguments_disallows_info_with_both_prefix_and_glob_archives():
+def test_parse_arguments_disallows_info_with_both_prefix_and_match_archives():
     flexmock(module.collect).should_receive('get_default_config_paths').and_return(['default'])

     with pytest.raises(ValueError):
-        module.parse_arguments('info', '--prefix', 'foo', '--glob-archives', '*bar')
+        module.parse_arguments('info', '--prefix', 'foo', '--match-archives', 'sh:*bar')


 def test_parse_arguments_check_only_extract_does_not_raise_extract_subparser_error():
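
The renamed flag covered by these tests takes Borg 2-style patterns such as
`sh:` shell globs; a hypothetical invocation:

```bash
# Show info only for archives whose names match the shell-style pattern.
borgmatic info --match-archives 'sh:myhost-*'
```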



@@ -0,0 +1,22 @@
from flexmock import flexmock
from borgmatic.actions import borg as module
def test_run_borg_does_not_raise():
flexmock(module.logger).answer = lambda message: None
flexmock(module.borgmatic.config.validate).should_receive('repositories_match').and_return(True)
flexmock(module.borgmatic.borg.rlist).should_receive('resolve_archive_name').and_return(
flexmock()
)
flexmock(module.borgmatic.borg.borg).should_receive('run_arbitrary_borg')
borg_arguments = flexmock(repository=flexmock(), archive=flexmock(), options=flexmock())
module.run_borg(
repository='repo',
storage={},
local_borg_version=None,
borg_arguments=borg_arguments,
local_path=None,
remote_path=None,
)


@@ -0,0 +1,19 @@
from flexmock import flexmock
from borgmatic.actions import break_lock as module
def test_run_break_lock_does_not_raise():
flexmock(module.logger).answer = lambda message: None
flexmock(module.borgmatic.config.validate).should_receive('repositories_match').and_return(True)
flexmock(module.borgmatic.borg.break_lock).should_receive('break_lock')
break_lock_arguments = flexmock(repository=flexmock())
module.run_break_lock(
repository='repo',
storage={},
local_borg_version=None,
break_lock_arguments=break_lock_arguments,
local_path=None,
remote_path=None,
)


@@ -0,0 +1,31 @@
from flexmock import flexmock
from borgmatic.actions import check as module
def test_run_check_calls_hooks():
flexmock(module.logger).answer = lambda message: None
flexmock(module.borgmatic.config.checks).should_receive(
'repository_enabled_for_checks'
).and_return(True)
flexmock(module.borgmatic.borg.check).should_receive('check_archives')
flexmock(module.borgmatic.hooks.command).should_receive('execute_hook').times(2)
check_arguments = flexmock(
progress=flexmock(), repair=flexmock(), only=flexmock(), force=flexmock()
)
global_arguments = flexmock(monitoring_verbosity=1, dry_run=False)
module.run_check(
config_filename='test.yaml',
repository='repo',
location={'repositories': ['repo']},
storage={},
consistency={},
hooks={},
hook_context={},
local_borg_version=None,
check_arguments=check_arguments,
global_arguments=global_arguments,
local_path=None,
remote_path=None,
)


@@ -0,0 +1,29 @@
from flexmock import flexmock
from borgmatic.actions import compact as module
def test_compact_actions_calls_hooks():
flexmock(module.logger).answer = lambda message: None
flexmock(module.borgmatic.borg.feature).should_receive('available').and_return(True)
flexmock(module.borgmatic.borg.compact).should_receive('compact_segments')
flexmock(module.borgmatic.hooks.command).should_receive('execute_hook').times(2)
compact_arguments = flexmock(
progress=flexmock(), cleanup_commits=flexmock(), threshold=flexmock()
)
global_arguments = flexmock(monitoring_verbosity=1, dry_run=False)
module.run_compact(
config_filename='test.yaml',
repository='repo',
storage={},
retention={},
hooks={},
hook_context={},
local_borg_version=None,
compact_arguments=compact_arguments,
global_arguments=global_arguments,
dry_run_label='',
local_path=None,
remote_path=None,
)


@@ -0,0 +1,34 @@
from flexmock import flexmock
from borgmatic.actions import create as module
def test_run_create_executes_and_calls_hooks():
flexmock(module.logger).answer = lambda message: None
flexmock(module.borgmatic.borg.create).should_receive('create_archive')
flexmock(module.borgmatic.hooks.command).should_receive('execute_hook').times(2)
flexmock(module.borgmatic.hooks.dispatch).should_receive('call_hooks').and_return({})
flexmock(module.borgmatic.hooks.dispatch).should_receive(
'call_hooks_even_if_unconfigured'
).and_return({})
create_arguments = flexmock(
progress=flexmock(), stats=flexmock(), json=flexmock(), list_files=flexmock()
)
global_arguments = flexmock(monitoring_verbosity=1, dry_run=False)
list(
module.run_create(
config_filename='test.yaml',
repository='repo',
location={},
storage={},
hooks={},
hook_context={},
local_borg_version=None,
create_arguments=create_arguments,
global_arguments=global_arguments,
dry_run_label='',
local_path=None,
remote_path=None,
)
)


@@ -0,0 +1,29 @@
from flexmock import flexmock
from borgmatic.actions import export_tar as module
def test_run_export_tar_does_not_raise():
flexmock(module.logger).answer = lambda message: None
flexmock(module.borgmatic.config.validate).should_receive('repositories_match').and_return(True)
flexmock(module.borgmatic.borg.export_tar).should_receive('export_tar_archive')
export_tar_arguments = flexmock(
repository=flexmock(),
archive=flexmock(),
paths=flexmock(),
destination=flexmock(),
tar_filter=flexmock(),
list_files=flexmock(),
strip_components=flexmock(),
)
global_arguments = flexmock(monitoring_verbosity=1, dry_run=False)
module.run_export_tar(
repository='repo',
storage={},
local_borg_version=None,
export_tar_arguments=export_tar_arguments,
global_arguments=global_arguments,
local_path=None,
remote_path=None,
)


@@ -0,0 +1,33 @@
from flexmock import flexmock
from borgmatic.actions import extract as module
def test_run_extract_calls_hooks():
flexmock(module.logger).answer = lambda message: None
flexmock(module.borgmatic.config.validate).should_receive('repositories_match').and_return(True)
flexmock(module.borgmatic.borg.extract).should_receive('extract_archive')
flexmock(module.borgmatic.hooks.command).should_receive('execute_hook').times(2)
extract_arguments = flexmock(
paths=flexmock(),
progress=flexmock(),
destination=flexmock(),
strip_components=flexmock(),
archive=flexmock(),
repository='repo',
)
global_arguments = flexmock(monitoring_verbosity=1, dry_run=False)
module.run_extract(
config_filename='test.yaml',
repository='repo',
location={'repositories': ['repo']},
storage={},
hooks={},
hook_context={},
local_borg_version=None,
extract_arguments=extract_arguments,
global_arguments=global_arguments,
local_path=None,
remote_path=None,
)


@@ -0,0 +1,24 @@
from flexmock import flexmock
from borgmatic.actions import info as module
def test_run_info_does_not_raise():
flexmock(module.logger).answer = lambda message: None
flexmock(module.borgmatic.config.validate).should_receive('repositories_match').and_return(True)
flexmock(module.borgmatic.borg.rlist).should_receive('resolve_archive_name').and_return(
flexmock()
)
flexmock(module.borgmatic.borg.info).should_receive('display_archives_info')
info_arguments = flexmock(repository=flexmock(), archive=flexmock(), json=flexmock())
list(
module.run_info(
repository='repo',
storage={},
local_borg_version=None,
info_arguments=info_arguments,
local_path=None,
remote_path=None,
)
)


@@ -0,0 +1,24 @@
from flexmock import flexmock
from borgmatic.actions import list as module
def test_run_list_does_not_raise():
flexmock(module.logger).answer = lambda message: None
flexmock(module.borgmatic.config.validate).should_receive('repositories_match').and_return(True)
flexmock(module.borgmatic.borg.rlist).should_receive('resolve_archive_name').and_return(
flexmock()
)
flexmock(module.borgmatic.borg.list).should_receive('list_archive')
list_arguments = flexmock(repository=flexmock(), archive=flexmock(), json=flexmock())
list(
module.run_list(
repository='repo',
storage={},
local_borg_version=None,
list_arguments=list_arguments,
local_path=None,
remote_path=None,
)
)


@@ -0,0 +1,26 @@
from flexmock import flexmock
from borgmatic.actions import mount as module
def test_run_mount_does_not_raise():
flexmock(module.logger).answer = lambda message: None
flexmock(module.borgmatic.config.validate).should_receive('repositories_match').and_return(True)
flexmock(module.borgmatic.borg.mount).should_receive('mount_archive')
mount_arguments = flexmock(
repository=flexmock(),
archive=flexmock(),
mount_point=flexmock(),
paths=flexmock(),
foreground=flexmock(),
options=flexmock(),
)
module.run_mount(
repository='repo',
storage={},
local_borg_version=None,
mount_arguments=mount_arguments,
local_path=None,
remote_path=None,
)


@@ -0,0 +1,26 @@
from flexmock import flexmock
from borgmatic.actions import prune as module
def test_run_prune_calls_hooks():
flexmock(module.logger).answer = lambda message: None
flexmock(module.borgmatic.borg.prune).should_receive('prune_archives')
flexmock(module.borgmatic.hooks.command).should_receive('execute_hook').times(2)
prune_arguments = flexmock(stats=flexmock(), list_archives=flexmock())
global_arguments = flexmock(monitoring_verbosity=1, dry_run=False)
module.run_prune(
config_filename='test.yaml',
repository='repo',
storage={},
retention={},
hooks={},
hook_context={},
local_borg_version=None,
prune_arguments=prune_arguments,
global_arguments=global_arguments,
dry_run_label='',
local_path=None,
remote_path=None,
)


@@ -0,0 +1,55 @@
from flexmock import flexmock
from borgmatic.actions import rcreate as module
def test_run_rcreate_does_not_raise():
flexmock(module.logger).answer = lambda message: None
flexmock(module.borgmatic.config.validate).should_receive('repositories_match').and_return(True)
flexmock(module.borgmatic.borg.rcreate).should_receive('create_repository')
arguments = flexmock(
encryption_mode=flexmock(),
source_repository=flexmock(),
repository=flexmock(),
copy_crypt_key=flexmock(),
append_only=flexmock(),
storage_quota=flexmock(),
make_parent_dirs=flexmock(),
)
module.run_rcreate(
repository='repo',
storage={},
local_borg_version=None,
rcreate_arguments=arguments,
global_arguments=flexmock(dry_run=False),
local_path=None,
remote_path=None,
)
def test_run_rcreate_bails_if_repository_does_not_match():
flexmock(module.logger).answer = lambda message: None
flexmock(module.borgmatic.config.validate).should_receive('repositories_match').and_return(
False
)
flexmock(module.borgmatic.borg.rcreate).should_receive('create_repository').never()
arguments = flexmock(
encryption_mode=flexmock(),
source_repository=flexmock(),
repository=flexmock(),
copy_crypt_key=flexmock(),
append_only=flexmock(),
storage_quota=flexmock(),
make_parent_dirs=flexmock(),
)
module.run_rcreate(
repository='repo',
storage={},
local_borg_version=None,
rcreate_arguments=arguments,
global_arguments=flexmock(dry_run=False),
local_path=None,
remote_path=None,
)


@@ -0,0 +1,495 @@
import pytest
from flexmock import flexmock
import borgmatic.actions.restore as module
def test_get_configured_database_matches_database_by_name():
assert module.get_configured_database(
hooks={
'other_databases': [{'name': 'other'}],
'postgresql_databases': [{'name': 'foo'}, {'name': 'bar'}],
},
archive_database_names={'postgresql_databases': ['other', 'foo', 'bar']},
hook_name='postgresql_databases',
database_name='bar',
) == ('postgresql_databases', {'name': 'bar'})
def test_get_configured_database_matches_nothing_when_database_name_not_configured():
assert module.get_configured_database(
hooks={'postgresql_databases': [{'name': 'foo'}, {'name': 'bar'}]},
archive_database_names={'postgresql_databases': ['foo']},
hook_name='postgresql_databases',
database_name='quux',
) == (None, None)
def test_get_configured_database_matches_nothing_when_database_name_not_in_archive():
assert module.get_configured_database(
hooks={'postgresql_databases': [{'name': 'foo'}, {'name': 'bar'}]},
archive_database_names={'postgresql_databases': ['bar']},
hook_name='postgresql_databases',
database_name='foo',
) == (None, None)
def test_get_configured_database_matches_database_by_configuration_database_name():
assert module.get_configured_database(
hooks={'postgresql_databases': [{'name': 'all'}, {'name': 'bar'}]},
archive_database_names={'postgresql_databases': ['foo']},
hook_name='postgresql_databases',
database_name='foo',
configuration_database_name='all',
) == ('postgresql_databases', {'name': 'all'})
def test_get_configured_database_with_unspecified_hook_matches_database_by_name():
assert module.get_configured_database(
hooks={
'other_databases': [{'name': 'other'}],
'postgresql_databases': [{'name': 'foo'}, {'name': 'bar'}],
},
archive_database_names={'postgresql_databases': ['other', 'foo', 'bar']},
hook_name=module.UNSPECIFIED_HOOK,
database_name='bar',
) == ('postgresql_databases', {'name': 'bar'})
def test_collect_archive_database_names_parses_archive_paths():
flexmock(module.borgmatic.hooks.dump).should_receive('make_database_dump_path').and_return('')
flexmock(module.borgmatic.borg.list).should_receive('capture_archive_listing').and_return(
[
'.borgmatic/postgresql_databases/localhost/foo',
'.borgmatic/postgresql_databases/localhost/bar',
'.borgmatic/mysql_databases/localhost/quux',
]
)
archive_database_names = module.collect_archive_database_names(
repository='repo',
archive='archive',
location={'borgmatic_source_directory': '.borgmatic'},
storage=flexmock(),
local_borg_version=flexmock(),
local_path=flexmock(),
remote_path=flexmock(),
)
assert archive_database_names == {
'postgresql_databases': ['foo', 'bar'],
'mysql_databases': ['quux'],
}
def test_collect_archive_database_names_parses_directory_format_archive_paths():
flexmock(module.borgmatic.hooks.dump).should_receive('make_database_dump_path').and_return('')
flexmock(module.borgmatic.borg.list).should_receive('capture_archive_listing').and_return(
[
'.borgmatic/postgresql_databases/localhost/foo/table1',
'.borgmatic/postgresql_databases/localhost/foo/table2',
]
)
archive_database_names = module.collect_archive_database_names(
repository='repo',
archive='archive',
location={'borgmatic_source_directory': '.borgmatic'},
storage=flexmock(),
local_borg_version=flexmock(),
local_path=flexmock(),
remote_path=flexmock(),
)
assert archive_database_names == {
'postgresql_databases': ['foo'],
}
def test_collect_archive_database_names_skips_bad_archive_paths():
flexmock(module.borgmatic.hooks.dump).should_receive('make_database_dump_path').and_return('')
flexmock(module.borgmatic.borg.list).should_receive('capture_archive_listing').and_return(
['.borgmatic/postgresql_databases/localhost/foo', '.borgmatic/invalid', 'invalid/as/well']
)
archive_database_names = module.collect_archive_database_names(
repository='repo',
archive='archive',
location={'borgmatic_source_directory': '.borgmatic'},
storage=flexmock(),
local_borg_version=flexmock(),
local_path=flexmock(),
remote_path=flexmock(),
)
assert archive_database_names == {
'postgresql_databases': ['foo'],
}
def test_find_databases_to_restore_passes_through_requested_names_found_in_archive():
restore_names = module.find_databases_to_restore(
requested_database_names=['foo', 'bar'],
archive_database_names={'postresql_databases': ['foo', 'bar', 'baz']},
)
assert restore_names == {module.UNSPECIFIED_HOOK: ['foo', 'bar']}
def test_find_databases_to_restore_raises_for_requested_names_missing_from_archive():
with pytest.raises(ValueError):
module.find_databases_to_restore(
requested_database_names=['foo', 'bar'],
archive_database_names={'postresql_databases': ['foo']},
)
def test_find_databases_to_restore_without_requested_names_finds_all_archive_databases():
archive_database_names = {'postresql_databases': ['foo', 'bar']}
restore_names = module.find_databases_to_restore(
requested_database_names=[], archive_database_names=archive_database_names,
)
assert restore_names == archive_database_names
def test_find_databases_to_restore_with_all_in_requested_names_finds_all_archive_databases():
archive_database_names = {'postresql_databases': ['foo', 'bar']}
restore_names = module.find_databases_to_restore(
requested_database_names=['all'], archive_database_names=archive_database_names,
)
assert restore_names == archive_database_names
def test_find_databases_to_restore_with_all_in_requested_names_plus_additional_requested_names_omits_duplicates():
archive_database_names = {'postresql_databases': ['foo', 'bar']}
restore_names = module.find_databases_to_restore(
requested_database_names=['all', 'foo', 'bar'],
archive_database_names=archive_database_names,
)
assert restore_names == archive_database_names
def test_find_databases_to_restore_raises_for_all_in_requested_names_and_requested_named_missing_from_archives():
with pytest.raises(ValueError):
module.find_databases_to_restore(
requested_database_names=['all', 'foo', 'bar'],
archive_database_names={'postresql_databases': ['foo']},
)
def test_ensure_databases_found_with_all_databases_found_does_not_raise():
module.ensure_databases_found(
restore_names={'postgresql_databases': ['foo']},
remaining_restore_names={'postgresql_databases': ['bar']},
found_names=['foo', 'bar'],
)
def test_ensure_databases_found_with_no_databases_raises():
with pytest.raises(ValueError):
module.ensure_databases_found(
restore_names={'postgresql_databases': []}, remaining_restore_names={}, found_names=[],
)
def test_ensure_databases_found_with_missing_databases_raises():
with pytest.raises(ValueError):
module.ensure_databases_found(
restore_names={'postgresql_databases': ['foo']},
remaining_restore_names={'postgresql_databases': ['bar']},
found_names=['foo'],
)
def test_run_restore_restores_each_database():
restore_names = {
'postgresql_databases': ['foo', 'bar'],
}
flexmock(module.borgmatic.config.validate).should_receive('repositories_match').and_return(True)
flexmock(module.borgmatic.hooks.dispatch).should_receive('call_hooks_even_if_unconfigured')
flexmock(module.borgmatic.borg.rlist).should_receive('resolve_archive_name').and_return(
flexmock()
)
flexmock(module).should_receive('collect_archive_database_names').and_return(flexmock())
flexmock(module).should_receive('find_databases_to_restore').and_return(restore_names)
flexmock(module).should_receive('get_configured_database').and_return(
('postgresql_databases', {'name': 'foo'})
).and_return(('postgresql_databases', {'name': 'bar'}))
flexmock(module).should_receive('restore_single_database').with_args(
repository=object,
location=object,
storage=object,
hooks=object,
local_borg_version=object,
global_arguments=object,
local_path=object,
remote_path=object,
archive_name=object,
hook_name='postgresql_databases',
database={'name': 'foo'},
).once()
flexmock(module).should_receive('restore_single_database').with_args(
repository=object,
location=object,
storage=object,
hooks=object,
local_borg_version=object,
global_arguments=object,
local_path=object,
remote_path=object,
archive_name=object,
hook_name='postgresql_databases',
database={'name': 'bar'},
).once()
flexmock(module).should_receive('ensure_databases_found')
module.run_restore(
repository='repo',
location=flexmock(),
storage=flexmock(),
hooks=flexmock(),
local_borg_version=flexmock(),
restore_arguments=flexmock(repository='repo', archive='archive', databases=flexmock()),
global_arguments=flexmock(dry_run=False),
local_path=flexmock(),
remote_path=flexmock(),
)
def test_run_restore_bails_for_non_matching_repository():
flexmock(module.borgmatic.config.validate).should_receive('repositories_match').and_return(
False
)
flexmock(module.borgmatic.hooks.dispatch).should_receive(
'call_hooks_even_if_unconfigured'
).never()
flexmock(module).should_receive('restore_single_database').never()
module.run_restore(
repository='repo',
location=flexmock(),
storage=flexmock(),
hooks=flexmock(),
local_borg_version=flexmock(),
restore_arguments=flexmock(repository='repo', archive='archive', databases=flexmock()),
global_arguments=flexmock(dry_run=False),
local_path=flexmock(),
remote_path=flexmock(),
)
def test_run_restore_restores_database_configured_with_all_name():
restore_names = {
'postgresql_databases': ['foo', 'bar'],
}
flexmock(module.borgmatic.config.validate).should_receive('repositories_match').and_return(True)
flexmock(module.borgmatic.hooks.dispatch).should_receive('call_hooks_even_if_unconfigured')
flexmock(module.borgmatic.borg.rlist).should_receive('resolve_archive_name').and_return(
flexmock()
)
flexmock(module).should_receive('collect_archive_database_names').and_return(flexmock())
flexmock(module).should_receive('find_databases_to_restore').and_return(restore_names)
flexmock(module).should_receive('get_configured_database').with_args(
hooks=object,
archive_database_names=object,
hook_name='postgresql_databases',
database_name='foo',
).and_return(('postgresql_databases', {'name': 'foo'}))
flexmock(module).should_receive('get_configured_database').with_args(
hooks=object,
archive_database_names=object,
hook_name='postgresql_databases',
database_name='bar',
).and_return((None, None))
flexmock(module).should_receive('get_configured_database').with_args(
hooks=object,
archive_database_names=object,
hook_name='postgresql_databases',
database_name='bar',
configuration_database_name='all',
).and_return(('postgresql_databases', {'name': 'bar'}))
flexmock(module).should_receive('restore_single_database').with_args(
repository=object,
location=object,
storage=object,
hooks=object,
local_borg_version=object,
global_arguments=object,
local_path=object,
remote_path=object,
archive_name=object,
hook_name='postgresql_databases',
database={'name': 'foo'},
).once()
flexmock(module).should_receive('restore_single_database').with_args(
repository=object,
location=object,
storage=object,
hooks=object,
local_borg_version=object,
global_arguments=object,
local_path=object,
remote_path=object,
archive_name=object,
hook_name='postgresql_databases',
database={'name': 'bar'},
).once()
flexmock(module).should_receive('ensure_databases_found')
module.run_restore(
repository='repo',
location=flexmock(),
storage=flexmock(),
hooks=flexmock(),
local_borg_version=flexmock(),
restore_arguments=flexmock(repository='repo', archive='archive', databases=flexmock()),
global_arguments=flexmock(dry_run=False),
local_path=flexmock(),
remote_path=flexmock(),
)
def test_run_restore_skips_missing_database():
restore_names = {
'postgresql_databases': ['foo', 'bar'],
}
flexmock(module.borgmatic.config.validate).should_receive('repositories_match').and_return(True)
flexmock(module.borgmatic.hooks.dispatch).should_receive('call_hooks_even_if_unconfigured')
flexmock(module.borgmatic.borg.rlist).should_receive('resolve_archive_name').and_return(
flexmock()
)
flexmock(module).should_receive('collect_archive_database_names').and_return(flexmock())
flexmock(module).should_receive('find_databases_to_restore').and_return(restore_names)
flexmock(module).should_receive('get_configured_database').with_args(
hooks=object,
archive_database_names=object,
hook_name='postgresql_databases',
database_name='foo',
).and_return(('postgresql_databases', {'name': 'foo'}))
flexmock(module).should_receive('get_configured_database').with_args(
hooks=object,
archive_database_names=object,
hook_name='postgresql_databases',
database_name='bar',
).and_return((None, None))
flexmock(module).should_receive('get_configured_database').with_args(
hooks=object,
archive_database_names=object,
hook_name='postgresql_databases',
database_name='bar',
configuration_database_name='all',
).and_return((None, None))
flexmock(module).should_receive('restore_single_database').with_args(
repository=object,
location=object,
storage=object,
hooks=object,
local_borg_version=object,
global_arguments=object,
local_path=object,
remote_path=object,
archive_name=object,
hook_name='postgresql_databases',
database={'name': 'foo'},
).once()
flexmock(module).should_receive('restore_single_database').with_args(
repository=object,
location=object,
storage=object,
hooks=object,
local_borg_version=object,
global_arguments=object,
local_path=object,
remote_path=object,
archive_name=object,
hook_name='postgresql_databases',
database={'name': 'bar'},
).never()
flexmock(module).should_receive('ensure_databases_found')
module.run_restore(
repository='repo',
location=flexmock(),
storage=flexmock(),
hooks=flexmock(),
local_borg_version=flexmock(),
restore_arguments=flexmock(repository='repo', archive='archive', databases=flexmock()),
global_arguments=flexmock(dry_run=False),
local_path=flexmock(),
remote_path=flexmock(),
)
def test_run_restore_restores_databases_from_different_hooks():
restore_names = {
'postgresql_databases': ['foo'],
'mysql_databases': ['bar'],
}
flexmock(module.borgmatic.config.validate).should_receive('repositories_match').and_return(True)
flexmock(module.borgmatic.hooks.dispatch).should_receive('call_hooks_even_if_unconfigured')
flexmock(module.borgmatic.borg.rlist).should_receive('resolve_archive_name').and_return(
flexmock()
)
flexmock(module).should_receive('collect_archive_database_names').and_return(flexmock())
flexmock(module).should_receive('find_databases_to_restore').and_return(restore_names)
flexmock(module).should_receive('get_configured_database').with_args(
hooks=object,
archive_database_names=object,
hook_name='postgresql_databases',
database_name='foo',
).and_return(('postgresql_databases', {'name': 'foo'}))
flexmock(module).should_receive('get_configured_database').with_args(
hooks=object,
archive_database_names=object,
hook_name='mysql_databases',
database_name='bar',
).and_return(('mysql_databases', {'name': 'bar'}))
flexmock(module).should_receive('restore_single_database').with_args(
repository=object,
location=object,
storage=object,
hooks=object,
local_borg_version=object,
global_arguments=object,
local_path=object,
remote_path=object,
archive_name=object,
hook_name='postgresql_databases',
database={'name': 'foo'},
).once()
flexmock(module).should_receive('restore_single_database').with_args(
repository=object,
location=object,
storage=object,
hooks=object,
local_borg_version=object,
global_arguments=object,
local_path=object,
remote_path=object,
archive_name=object,
hook_name='mysql_databases',
database={'name': 'bar'},
).once()
flexmock(module).should_receive('ensure_databases_found')
module.run_restore(
repository='repo',
location=flexmock(),
storage=flexmock(),
hooks=flexmock(),
local_borg_version=flexmock(),
restore_arguments=flexmock(repository='repo', archive='archive', databases=flexmock()),
global_arguments=flexmock(dry_run=False),
local_path=flexmock(),
remote_path=flexmock(),
)


@@ -0,0 +1,21 @@
from flexmock import flexmock
from borgmatic.actions import rinfo as module
def test_run_rinfo_does_not_raise():
flexmock(module.logger).answer = lambda message: None
flexmock(module.borgmatic.config.validate).should_receive('repositories_match').and_return(True)
flexmock(module.borgmatic.borg.rinfo).should_receive('display_repository_info')
rinfo_arguments = flexmock(repository=flexmock(), json=flexmock())
list(
module.run_rinfo(
repository='repo',
storage={},
local_borg_version=None,
rinfo_arguments=rinfo_arguments,
local_path=None,
remote_path=None,
)
)


@@ -0,0 +1,21 @@
from flexmock import flexmock
from borgmatic.actions import rlist as module
def test_run_rlist_does_not_raise():
flexmock(module.logger).answer = lambda message: None
flexmock(module.borgmatic.config.validate).should_receive('repositories_match').and_return(True)
flexmock(module.borgmatic.borg.rlist).should_receive('list_repository')
rlist_arguments = flexmock(repository=flexmock(), json=flexmock())
list(
module.run_rlist(
repository='repo',
storage={},
local_borg_version=None,
rlist_arguments=rlist_arguments,
local_path=None,
remote_path=None,
)
)


@@ -0,0 +1,20 @@
from flexmock import flexmock
from borgmatic.actions import transfer as module
def test_run_transfer_does_not_raise():
flexmock(module.logger).answer = lambda message: None
flexmock(module.borgmatic.borg.transfer).should_receive('transfer_archives')
transfer_arguments = flexmock()
global_arguments = flexmock(monitoring_verbosity=1, dry_run=False)
module.run_transfer(
repository='repo',
storage={},
local_borg_version=None,
transfer_arguments=transfer_arguments,
global_arguments=global_arguments,
local_path=None,
remote_path=None,
)

Some files were not shown because too many files have changed in this diff.