Compare commits

...

186 Commits

Author SHA1 Message Date
929d343214 Add CLI flags for every config option and add config options for many action flags (#303).
All checks were successful
build / test (push) Successful in 6m25s
build / docs (push) Successful in 1m12s
Reviewed-on: #1040
2025-04-03 23:48:49 +00:00
9ea55d9aa3 Add a documentation note about a limitation: You can't pass flags as values to flags (#303). 2025-04-03 16:38:17 -07:00
3eabda45f2 If a boolean option name already starts with "no_", don't add a "--no-no-..." CLI flag (#303). 2025-04-03 16:21:22 -07:00
09212961a4 Add action "--help" note about running compact after recreate (#1053).
All checks were successful
build / test (push) Successful in 5m49s
build / docs (push) Successful in 1m4s
2025-04-03 12:55:26 -07:00
3f25f3f0ff Merge branch 'main' into config-command-line. 2025-04-03 11:47:29 -07:00
e8542f3613 Fix KeePassXC error when "keepassxc:" option is not present, add new options to NEWS (#1047).
All checks were successful
build / test (push) Successful in 5m51s
build / docs (push) Successful in 1m13s
2025-04-03 11:41:58 -07:00
9407f24674 Fix setting of "--checks" on the command-line (#303). 2025-04-03 11:28:32 -07:00
1c9d25b892 Add "key-file" and "yubikey" options to KeePassXC credential hook (#1047).
Some checks failed
build / test (push) Failing after 5m52s
build / docs (push) Has been skipped
Reviewed-on: #1049
2025-04-03 18:28:08 +00:00
248999c23e Final 2025-04-03 17:10:52 +00:00
d0a5aa63be Add a TL;DR to NEWS since 2.0.0 is such a huge release and ain't nobody got time for reading a huge changelog. 2025-04-03 09:24:47 -07:00
d2c3ed26a9 Make a CLI flag for any config option that's a list of scalars (#303). 2025-04-02 23:15:21 -07:00
bbf6f27715 For boolean configuration options, add separate "--foo" and "--no-foo" CLI flags (#303). 2025-04-02 17:08:04 -07:00
9301ab13cc Merge branch 'main' into config-command-line. 2025-04-02 09:55:33 -07:00
d5d04b89dc Add configuration filename to "Successfully ran configuration file" log message (#1051).
All checks were successful
build / test (push) Successful in 10m4s
build / docs (push) Successful in 1m14s
2025-04-02 09:50:31 -07:00
364200c65a Fix incorrect matching of non-zero array index flags with dashed names (#303). 2025-04-02 09:37:52 -07:00
4e55547235 Command Restructuring 2025-04-02 15:35:12 +00:00
96ec66de79 Applied changes 2025-04-02 10:50:25 +00:00
7a0c56878b Applied changes 2025-04-02 10:47:35 +00:00
4065c5d0f7 Fix use of dashed command-line flags like "--repositories[2].append-only" generated from configuration (#303). 2025-04-01 23:04:53 -07:00
affe7cdc1b Expose propertyless YAML objects from configuration (e.g. "constants") as command-line flags (#303). 2025-04-01 21:05:44 -07:00
017cbae4f9 Fix for the example not showing up in generated config for empty YAML objects (#303). 2025-04-01 19:44:47 -07:00
e96db2e100 Fix "progress" option with the "transfer" action (#303). 2025-04-01 19:43:56 -07:00
af97b95e2b Merge branch 'main' into config-command-line. 2025-04-01 12:09:54 -07:00
6a61259f1a Fix a failure in the "spot" check when the archive contains a symlink (#1050).
All checks were successful
build / test (push) Successful in 10m19s
build / docs (push) Successful in 1m14s
2025-04-01 11:49:47 -07:00
5490a83d77 Merge branch 'main' into config-command-line. 2025-03-31 17:13:20 -07:00
8c907bb5a3 Fix broken "recreate" action with Borg 1.4 (#610).
All checks were successful
build / test (push) Successful in 9m49s
build / docs (push) Successful in 1m14s
2025-03-31 17:11:37 -07:00
f166111b9b Fix new "repositories:" sub-options ("append_only", "make_parent_directories", etc.) (#303). 2025-03-31 15:26:24 -07:00
10fb02c40a Fix bootstrap --progress flag (#303). 2025-03-31 13:33:39 -07:00
cf477bdc1c Fix broken list_details, progress, and statistics options (#303). 2025-03-31 11:33:56 -07:00
6f07402407 Fix end-to-end tests and don't stat() directories that don't exist (#1048).
All checks were successful
build / test (push) Successful in 5m52s
build / docs (push) Successful in 55s
2025-03-30 19:04:36 -07:00
ab01e97a5e Fix a "no such file or directory" error in ZFS, Btrfs, and LVM hooks with nested directories that reside on separate devices/filesystems (#1048).
Some checks failed
build / test (push) Failing after 5m40s
build / docs (push) Has been skipped
2025-03-30 14:55:54 -07:00
92ebc77597 2nd Draft 2025-03-30 16:19:56 +00:00
863c954144 added schema.yaml 2025-03-30 15:57:42 +00:00
f7e4d38762 First Draft 2025-03-30 14:02:56 +00:00
de4d7af507 Merge branch 'main' into config-command-line. 2025-03-29 22:52:40 -07:00
5cea1e1b72 Fix flake error (#262).
All checks were successful
build / test (push) Successful in 5m52s
build / docs (push) Successful in 1m15s
2025-03-29 22:52:17 -07:00
fd8c11eb0a Add documentation for "native" command-line overrides without --override (#303). 2025-03-29 21:59:47 -07:00
92de539bf9 Merge branch 'main' into config-command-line. 2025-03-29 19:55:03 -07:00
5716e61f8f Code formatting (#262).
Some checks failed
build / test (push) Failing after 1m49s
build / docs (push) Has been skipped
2025-03-29 19:54:40 -07:00
3e05eeb4de Merge branch 'main' into config-command-line. 2025-03-29 19:03:29 -07:00
65d1b9235d Add "default_actions" to NEWS (#262).
Some checks failed
build / test (push) Failing after 1m43s
build / docs (push) Has been skipped
2025-03-29 19:02:11 -07:00
cffb8e88da Merge branch 'main' of ssh://projects.torsion.org:3022/borgmatic-collective/borgmatic into config-command-line 2025-03-29 18:58:12 -07:00
a8362f2618 borgmatic without arguments/parameters should show usage help instead of starting a backup (#262).
Some checks failed
build / test (push) Failing after 1m42s
build / docs (push) Has been skipped
Reviewed-on: #1046
2025-03-30 01:57:11 +00:00
36265eea7d Docs update 2025-03-30 01:34:30 +00:00
8101e5c56f Add "list_details" config option support to new "recreate" action (#303). 2025-03-29 15:24:37 -07:00
c7feb16ab5 Merge branch 'main' into config-command-line. 2025-03-29 15:16:29 -07:00
da324ebeb7 Add "recreate" action to NEWS and docs (#610).
All checks were successful
build / test (push) Successful in 5m48s
build / docs (push) Successful in 1m15s
2025-03-29 15:15:36 -07:00
59f9d56aae Add a recreate action (#1030).
Some checks failed
build / docs (push) Has been cancelled
build / test (push) Has been cancelled
Reviewed-on: #1030
2025-03-29 22:07:52 +00:00
Vandal
dbf2e78f62 help changes 2025-03-30 03:05:46 +05:30
f6929f8891 Add last couple of missing tests after audit (#303). 2025-03-29 14:26:54 -07:00
Vandal
2716d9d0b0 add to schema 2025-03-29 23:25:50 +05:30
668f767bfc Adding some missing tests and fixing related flag vs. config logic (#303). 2025-03-28 23:11:15 -07:00
0182dbd914 Added 2 new unit tests and updated docs 2025-03-29 03:43:58 +00:00
1c27e0dadc Add an end-to-end test for command-line flags of configuration options (#303). 2025-03-28 13:46:58 -07:00
Vandal
8b3a682edf add tests and minor fixes 2025-03-29 01:26:20 +05:30
975a6e4540 Add additional tests for complete coverage (#303). 2025-03-28 11:37:48 -07:00
Vandal
7020f0530a update existing tests 2025-03-28 22:22:19 +05:30
5bf2f546b9 More automated tests (#303). 2025-03-27 21:01:56 -07:00
b4c558d013 Add tests for CLI arguments from schema logic (#303). 2025-03-27 16:49:14 -07:00
79bf641668 Set the action type when cloning an argument for a list index flag (#303). 2025-03-27 12:42:49 -07:00
50beb334dc Add tests for adding array element arguments and fix the code under test (#303). 2025-03-27 11:07:25 -07:00
Vandal
26fd41da92 add rest of flags 2025-03-27 22:18:34 +05:30
088da19012 Added Unit Tests 2025-03-27 11:26:56 +00:00
4c6674e0ad Merge branch 'main' into config-command-line. 2025-03-26 22:14:36 -07:00
486bec698d Add "key import" to reference documentation (#345).
All checks were successful
build / test (push) Successful in 9m59s
build / docs (push) Successful in 1m14s
2025-03-26 22:13:30 -07:00
7a766c717e 2nd Draft 2025-03-27 02:55:16 +00:00
520fb78a00 Clarify Btrfs documentation: borgmatic expects subvolume mount points in "source_directories" (#1043).
All checks were successful
build / test (push) Successful in 5m49s
build / docs (push) Successful in 59s
2025-03-26 11:39:16 -07:00
Vandal
acc2814f11 add archive timestamp filter 2025-03-26 23:39:06 +05:30
996b037946 1st 2025-03-26 17:39:10 +00:00
Vandal
9356924418 add archive options 2025-03-26 22:30:11 +05:30
79e4e089ee Fix typo in NEWS (#1044).
All checks were successful
build / test (push) Successful in 5m50s
build / docs (push) Successful in 1m0s
2025-03-26 09:57:53 -07:00
d2714cb706 Fix an error in the systemd credential hook when the credential name contains a "." character (#1044).
Some checks failed
build / test (push) Failing after 1m48s
build / docs (push) Has been skipped
2025-03-26 09:53:52 -07:00
5a0430b9c8 Merge branch 'main' into config-command-line. 2025-03-25 22:39:51 -07:00
23efbb8df3 Fix line wrapping / code style (#837).
All checks were successful
build / test (push) Successful in 8m7s
build / docs (push) Successful in 1m12s
2025-03-25 22:31:50 -07:00
9e694e4df9 Add MongoDB custom command options to NEWS (#837).
Some checks failed
build / docs (push) Has been cancelled
build / test (push) Has been cancelled
2025-03-25 22:28:14 -07:00
76f7c53a1c Add custom command options for MongoDB hook (#837).
Some checks failed
build / docs (push) Has been cancelled
build / test (push) Has been cancelled
Reviewed-on: #1041
2025-03-26 05:27:03 +00:00
Vandal
203e84b91f hotfix 2025-03-25 21:57:06 +05:30
Vandal
ea5a2d8a46 add tests for the flags 2025-03-25 20:39:02 +05:30
Vandal
a8726c408a add tests 2025-03-25 19:35:15 +05:30
Vandal
3542673446 add test recreate with skip action 2025-03-25 11:36:06 +05:30
532a97623c Added test_build_restore_command_prevents_shell_injection() 2025-03-25 04:50:45 +00:00
e1fdfe4c2f Add credential hook directory expansion to NEWS (#422).
All checks were successful
build / test (push) Successful in 8m40s
build / docs (push) Successful in 1m15s
2025-03-24 13:00:38 -07:00
83a56a3fef Add directory expansion for file-based and KeyPassXC credential hooks (#1042).
Some checks failed
build / docs (push) Blocked by required conditions
build / test (push) Has been cancelled
Reviewed-on: #1042
2025-03-24 19:57:18 +00:00
Vandal
b60cf2449a add recreate to schema 2025-03-25 00:48:27 +05:30
Vandal
e7f14bca87 add tests and requested changes 2025-03-25 00:16:20 +05:30
Nish_
4bca7bb198 add directory expansion for file-based and KeyPassXC credentials
Signed-off-by: Nish_ <120EE0980@nitrkl.ac.in>
2025-03-24 21:04:55 +05:30
Vandal
fa3b140590 add patterns 2025-03-24 12:09:08 +05:30
Vandal
a1d2f7f221 add path 2025-03-24 11:51:33 +05:30
6a470be924 Made some changes in test file 2025-03-24 03:53:42 +00:00
d651813601 Custom command options for MongoDB hook #837 2025-03-24 03:39:26 +00:00
65b1d8e8b2 Clarify NEWS items (#303). 2025-03-23 19:13:07 -07:00
16a1121649 Get existing end-to-end tests passing (#303). 2025-03-23 18:45:49 -07:00
423627e67b Get existing unit/integration tests passing (#303). 2025-03-23 17:00:04 -07:00
9f7c71265e Add Bash completion for completing flags like "--foo[3].bar". 2025-03-23 16:32:31 -07:00
ba75958a2f Fix missing argument descriptions (#303). 2025-03-23 11:26:49 -07:00
57721937a3 Factor out schema type comparison in config generation and get several tests passing (#303). 2025-03-23 11:24:36 -07:00
f222bf2c1a Organizational refactoring (#303). 2025-03-22 22:52:23 -07:00
dc9da3832d Bold "not yet released" in docs to prevent confusion (#303). 2025-03-22 14:03:44 -07:00
f8eda92379 Code formatting (#303). 2025-03-22 14:01:39 -07:00
cc14421460 Fix list examples in generated configuration. 2025-03-22 13:58:42 -07:00
Vandal
a750d58a2d add recreate action 2025-03-22 21:18:28 +05:30
2045706faa merge upstream 2025-03-22 13:00:07 +00:00
976fb8f343 Add "compact_threshold" option, overridden by "compact --threshold" flag (#303). 2025-03-21 22:44:49 -07:00
5246a10b99 Merge branch 'main' into config-command-line. 2025-03-21 15:44:12 -07:00
524ec6b3cb Add "extract" action fix to NEWS (#1037).
All checks were successful
build / test (push) Successful in 8m11s
build / docs (push) Successful in 1m22s
2025-03-21 15:43:05 -07:00
6f1c77bc7d Merge branch 'main' of ssh://projects.torsion.org:3022/borgmatic-collective/borgmatic into config-command-line 2025-03-21 15:40:27 -07:00
7904ffb641 Fix extracting from remote repositories with working_directory defined (#1037).
Some checks failed
build / docs (push) Blocked by required conditions
build / test (push) Has been cancelled
Reviewed-on: #1038
Reviewed-by: Dan Helfman <witten@torsion.org>
2025-03-21 22:40:18 +00:00
cd5ba81748 Fix docs: Crontabs aren't executable (#1039).
All checks were successful
build / test (push) Successful in 5m59s
build / docs (push) Successful in 59s
Reviewed-on: #1039
2025-03-21 21:32:38 +00:00
5c11052b8c Merge branch 'main' into config-command-line 2025-03-21 14:30:39 -07:00
514ade6609 Fix inconsistent quotes in one documentation file (#790).
Some checks failed
build / docs (push) Blocked by required conditions
build / test (push) Has been cancelled
2025-03-21 14:27:40 -07:00
201469e2c2 Add "key import" action to NEWS (#345).
Some checks failed
build / docs (push) Blocked by required conditions
build / test (push) Has been cancelled
2025-03-21 14:26:01 -07:00
9ac2a2e286 Add key import action to import a copy of repository key from backup (#345).
Some checks failed
build / test (push) Failing after 1m41s
build / docs (push) Has been skipped
Reviewed-on: #1036
Reviewed-by: Dan Helfman <witten@torsion.org>
2025-03-21 21:22:50 +00:00
Benjamin Bock
a16d138afc Crontabs aren't executable 2025-03-21 21:58:02 +01:00
Benjamin Bock
81a3a99578 Fix extracting from remote repositories with working_directory defined 2025-03-21 21:34:46 +01:00
f3cc3b1b65 Merge branch 'main' into config-command-line 2025-03-21 11:10:19 -07:00
587d31de7c Run all command hooks respecting the "working_directory" option if configured (#790).
All checks were successful
build / test (push) Successful in 10m15s
build / docs (push) Successful in 1m14s
2025-03-21 10:53:06 -07:00
cbfc0bead1 Exclude --match-archives from global flags since it already exists on several actions (#303). 2025-03-21 09:56:42 -07:00
Nish_
8aaa5ba8a6 minor changes
Signed-off-by: Nish_ <120EE0980@nitrkl.ac.in>
2025-03-21 19:26:12 +05:30
7d989f727d Don't auto-add CLI flags for configuration options that already have per-action CLI flags (#303). 2025-03-20 12:23:00 -07:00
Nish_
5525b467ef add key import command
Signed-off-by: Nish_ <120EE0980@nitrkl.ac.in>
2025-03-21 00:47:45 +05:30
89c98de122 Merge branch 'main' into config-command-line. 2025-03-20 11:37:04 -07:00
c2409d9968 Remove the "dump_data_sources" command hook, as it doesn't really solve the use case and works differently than all the other command hooks (#790).
All checks were successful
build / test (push) Successful in 5m47s
build / docs (push) Successful in 1m6s
2025-03-20 11:13:37 -07:00
624a7de622 Document "after" command hooks running in case of error and make sure that happens in case of "before" hook error (#790).
All checks were successful
build / test (push) Successful in 10m16s
build / docs (push) Successful in 1m22s
2025-03-20 10:57:39 -07:00
3119c924b4 In configuration option descriptions, remove mention of corresponding CLI flags because it looks dumb on the command-line help (#303). 2025-03-19 23:08:26 -07:00
ed6022d4a9 Add "list" option to configuration, corresponding to "--list" (#303). 2025-03-19 23:05:38 -07:00
3e21cdb579 Add "stats" option to configuration (#303). 2025-03-19 19:43:04 -07:00
d02d31f445 Use schema defaults instead of a flag name whitelist to make valueless boolean flags (#303). 2025-03-19 11:37:17 -07:00
1097a6576f Add "progress" option to configuration (#303). 2025-03-19 11:06:36 -07:00
63b0c69794 Add additional options under "repositories:" for parity with repo-create #303. 2025-03-18 20:54:14 -07:00
Vandal
4e2805918d update borg/recreate.py 2025-03-18 23:19:33 +05:30
711f5fa6cb UX nicety to make default-false boolean options into valueless CLI flags (#303). 2025-03-17 22:58:25 -07:00
93e7da823c Add an encryption option to repositories (#303). 2025-03-17 22:24:01 -07:00
903308864c Factor out schema type parsing (#303). 2025-03-17 10:46:02 -07:00
d75c8609c5 Merge branch 'main' into config-command-line 2025-03-17 10:34:20 -07:00
c926f0bd5d Clarify documentation for dump_data_sources command hook (#790).
All checks were successful
build / test (push) Successful in 10m21s
build / docs (push) Successful in 1m14s
2025-03-17 10:31:34 -07:00
7b14e8c7f2 Add feature to NEWS (#303). 2025-03-17 10:17:04 -07:00
87b9ad5aea Code formatting (#303). 2025-03-17 10:02:25 -07:00
eca78fbc2c Support setting whole lists and dicts from the command-line (#303). 2025-03-17 09:57:25 -07:00
Vandal
6adb0fd44c add borg recreate 2025-03-17 22:24:53 +05:30
05900c188f Expand docstrings (#303). 2025-03-15 22:58:39 -07:00
1d5713c4c5 Updated outdated schema comment referencing ~/.borgmatic path (#836).
All checks were successful
build / test (push) Successful in 6m7s
build / docs (push) Successful in 1m13s
2025-03-15 21:42:45 -07:00
f9612cc685 Add SQLite custom command option to NEWS (#836). 2025-03-15 21:37:23 -07:00
5742a1a2d9 Add custom command option for SQLite hook (#836).
Some checks failed
build / docs (push) Blocked by required conditions
build / test (push) Has been cancelled
Reviewed-on: #1027
2025-03-16 04:34:15 +00:00
Nish_
c84815bfb0 add custom dump and restore commands for sqlite hook
Signed-off-by: Nish_ <120EE0980@nitrkl.ac.in>
2025-03-16 09:07:49 +05:30
e1ff51ff1e Merge branch 'main' into config-command-line. 2025-03-15 10:03:59 -07:00
1c92d84e09 Add Borg 2 "prune --stats" flag change to NEWS (#1010).
All checks were successful
build / test (push) Successful in 9m59s
build / docs (push) Successful in 1m33s
2025-03-15 10:02:47 -07:00
1d94fb501f Conditionally pass --stats to prune based on Borg version (#1010).
Some checks failed
build / docs (push) Blocked by required conditions
build / test (push) Has been cancelled
Reviewed-on: #1026
2025-03-15 16:59:50 +00:00
92279d3c71 Initial work on command-line flags for all configuration (#303). 2025-03-14 22:59:43 -07:00
Nish_
1b4c94ad1e Add feature toggle to pass --stats to prune on Borg 1, but not Borg 2
Signed-off-by: Nish_ <120EE0980@nitrkl.ac.in>
2025-03-15 09:56:14 +05:30
901e668c76 Document a database use case involving a temporary database client container (#1020).
All checks were successful
build / test (push) Successful in 7m37s
build / docs (push) Successful in 1m30s
2025-03-12 17:10:35 -07:00
bcb224a243 Claim another implemented ticket in NEWS (#821).
All checks were successful
build / test (push) Successful in 7m35s
build / docs (push) Successful in 1m25s
2025-03-12 14:31:13 -07:00
6b6e1e0336 Make the "configuration" command hook support "error" hooks and also pinging monitoring on failure (#790).
All checks were successful
build / test (push) Successful in 12m18s
build / docs (push) Successful in 1m53s
2025-03-12 14:13:29 -07:00
f5c9bc4fa9 Add a "not yet released" note on 2.0.0 in docs (#790).
All checks were successful
build / test (push) Successful in 7m15s
build / docs (push) Successful in 1m35s
2025-03-11 16:46:07 -07:00
cdd0e6f052 Fix incorrect kwarg in LVM hook (#790).
All checks were successful
build / test (push) Successful in 7m3s
build / docs (push) Successful in 1m36s
2025-03-11 14:42:25 -07:00
7bdbadbac2 Deprecate all "before_*", "after_*" and "on_error" command hooks in favor of more flexible "commands:" (#790).
Some checks failed
build / test (push) Failing after 15m7s
build / docs (push) Has been skipped
Reviewed-on: #1019
2025-03-11 21:22:33 +00:00
d3413e0907 Documentation clarification (#1019). 2025-03-11 14:20:42 -07:00
8a20ee7304 Fix typo in documentation (#1019). 2025-03-11 14:08:53 -07:00
325f53c286 Context tweaks + mention configuration upgrade in command hook documentation (#1019). 2025-03-11 14:07:06 -07:00
b4d24798bf More command hook documentation updates (#1019). 2025-03-11 13:03:58 -07:00
7965eb9de3 Correctly handle errors in command hooks (#1019). 2025-03-11 11:36:28 -07:00
8817364e6d Documentation on command hooks (#1019). 2025-03-10 22:38:48 -07:00
965740c778 Update version of command hooks since they didn't get released in 1.9.14 (#1019). 2025-03-10 10:37:09 -07:00
2a0319f02f Merge branch 'main' into unified-command-hooks. 2025-03-10 10:35:36 -07:00
fbdb09b87d Bump version for release.
All checks were successful
build / test (push) Successful in 6m42s
build / docs (push) Successful in 1m19s
2025-03-10 10:17:36 -07:00
bec5a0c0ca Fix end-to-end tests for Btrfs (#1023).
All checks were successful
build / test (push) Successful in 6m50s
build / docs (push) Successful in 1m38s
2025-03-10 10:15:23 -07:00
4ee7f72696 Fix an error in the Btrfs hook when attempting to snapshot a read-only subvolume (#1023).
Some checks failed
build / test (push) Failing after 6m54s
build / docs (push) Has been skipped
2025-03-09 23:04:55 -07:00
9941d7dc57 More docs and command hook context tweaks (#1019). 2025-03-09 17:01:46 -07:00
ec88bb2e9c Merge branch 'main' into unified-command-hooks. 2025-03-09 13:37:17 -07:00
68b6d01071 Fix a regression in which the "exclude_patterns" option didn't expand "~" (#1021).
All checks were successful
build / test (push) Successful in 7m11s
build / docs (push) Successful in 1m31s
2025-03-09 13:35:22 -07:00
b52339652f Initial command hooks documentation work (#1019). 2025-03-09 09:57:13 -07:00
4fd22b2df0 Merge branch 'main' into unified-command-hooks. 2025-03-08 21:02:04 -08:00
86b138e73b Clarify command hook documentation.
All checks were successful
build / test (push) Successful in 11m29s
build / docs (push) Successful in 1m44s
2025-03-08 21:00:58 -08:00
5ab766b51c Add a few more missing tests (#1019). 2025-03-08 20:55:13 -08:00
45c114973c Add missing test coverage for new/changed code (#1019). 2025-03-08 18:31:16 -08:00
6a96a78cf1 Fix existing tests (#1019). 2025-03-07 22:58:25 -08:00
e06c6740f2 Switch to context manager for running "dump_data_sources" before/after hooks (#790). 2025-03-07 10:33:39 -08:00
10bd1c7b41 Remove restore_data_source_dump as a command hook for now (#790). 2025-03-06 22:53:19 -08:00
d4f48a3a9e Initial work on unified command hooks (#790). 2025-03-06 11:23:24 -08:00
c76a108422 Link to Zabbix documentation from NEWS. 2025-03-06 10:37:00 -08:00
eb5dc128bf Fix incorrect test name (#1017).
All checks were successful
build / test (push) Successful in 7m10s
build / docs (push) Successful in 1m32s
2025-03-06 10:34:28 -08:00
1d486d024b Fix a regression in which some MariaDB/MySQL passwords were not escaped correctly (#1017).
Some checks failed
build / docs (push) Blocked by required conditions
build / test (push) Has been cancelled
2025-03-06 10:32:38 -08:00
5a8f27d75c Add single quotes around the MariaDB password (#1017).
All checks were successful
build / test (push) Successful in 11m51s
build / docs (push) Successful in 1m41s
Reviewed-on: #1017
2025-03-06 18:01:43 +00:00
a926b413bc Updating automated test, and fixing linting errors. 2025-03-06 09:00:33 -03:30
18ffd96d62 Add single quotes around the password.
When the DB password uses some special characters, the
defaults-extra-file can be incorrect. In the case of a password with
the # symbol, anything after that is considered a comment. The single
quotes around the password rectify this.
2025-03-05 22:51:41 -03:30
c0135864c2 With the PagerDuty monitoring hook, send borgmatic logs to PagerDuty so they show up in the incident UI (#409).
All checks were successful
build / test (push) Successful in 10m48s
build / docs (push) Successful in 2m50s
2025-03-04 08:55:09 -08:00
ddfd3c6ca1 Clarify Zabbix monitoring hook documentation about creating items (#936).
All checks were successful
build / test (push) Successful in 7m54s
build / docs (push) Successful in 1m40s
2025-03-03 16:02:22 -08:00
118 changed files with 8658 additions and 1876 deletions

NEWS

@@ -1,3 +1,55 @@
2.0.0.dev0
* TL;DR: More flexible, completely revamped command hooks. All config options settable on the
command-line. Config option defaults for many command-line flags. New "key import" and "recreate"
actions. Almost everything is backwards compatible.
* #262: Add a "default_actions" option that supports disabling default actions when borgmatic is
run without any command-line arguments.
* #303: Deprecate the "--override" flag in favor of direct command-line flags for every borgmatic
configuration option. See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/#configuration-overrides
* #303: Add configuration options that serve as defaults for some (but not all) command-line
action flags. For example, each entry in "repositories:" now has an "encryption" option that
applies to the "repo-create" action, serving as a default for the "--encryption" flag. See the
documentation for more information: https://torsion.org/borgmatic/docs/reference/configuration/
* #345: Add a "key import" action to import a repository key from backup.
* #422: Add home directory expansion to file-based and KeePassXC credential hooks.
* #610: Add a "recreate" action for recreating archives, for instance for retroactively excluding
particular files from existing archives.
* #790, #821: Deprecate all "before_*", "after_*" and "on_error" command hooks in favor of more
flexible "commands:". See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/
* #790: BREAKING: For both new and deprecated command hooks, run a configured "after" hook even if
an error occurs first. This allows you to perform cleanup steps that correspond to "before"
preparation commands—even when something goes wrong.
* #790: BREAKING: Run all command hooks (both new and deprecated) respecting the
"working_directory" option if configured, meaning that hook commands are run in that directory.
* #836: Add a custom command option for the SQLite hook.
* #837: Add custom command options for the MongoDB hook.
* #1010: When using Borg 2, don't pass the "--stats" flag to "borg prune".
* #1020: Document a database use case involving a temporary database client container:
https://torsion.org/borgmatic/docs/how-to/backup-your-databases/#containers
* #1037: Fix an error with the "extract" action when both a remote repository and a
"working_directory" are used.
* #1044: Fix an error in the systemd credential hook when the credential name contains a "."
character.
* #1047: Add "key-file" and "yubikey" options to the KeePassXC credential hook.
* #1048: Fix a "no such file or directory" error in ZFS, Btrfs, and LVM hooks with nested
directories that reside on separate devices/filesystems.
* #1050: Fix a failure in the "spot" check when the archive contains a symlink.
* #1051: Add configuration filename to the "Successfully ran configuration file" log message.
1.9.14
* #409: With the PagerDuty monitoring hook, send borgmatic logs to PagerDuty so they show up in the
incident UI. See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#pagerduty-hook
* #936: Clarify Zabbix monitoring hook documentation about creating items:
https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#zabbix-hook
* #1017: Fix a regression in which some MariaDB/MySQL passwords were not escaped correctly.
* #1021: Fix a regression in which the "exclude_patterns" option didn't expand "~" (the user's
home directory). This fix means that all "patterns" and "patterns_from" also now expand "~".
* #1023: Fix an error in the Btrfs hook when attempting to snapshot a read-only subvolume. Now,
read-only subvolumes are ignored since Btrfs can't actually snapshot them.
1.9.13
* #975: Add a "compression" option to the PostgreSQL database hook.
* #1001: Fix a ZFS error during snapshot cleanup.
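The boolean-flag convention from the #303 entries above ("--foo"/"--no-foo" pairs, and no "--no-no-..." flag for options already named "no_*") can be sketched with argparse. This is an illustrative sketch, not borgmatic's actual implementation; the helper and option names are made up.

```python
import argparse

def add_boolean_flags(parser, name, default):
    '''Add "--foo"/"--no-foo" style flags for a boolean config option named
    with underscores. If the option name already starts with "no_", add only a
    single "--no-..." flag so we never generate "--no-no-...".'''
    dashed = name.replace('_', '-')

    if name.startswith('no_'):
        parser.add_argument(f'--{dashed}', action='store_true', default=default)
        return

    group = parser.add_mutually_exclusive_group()
    group.add_argument(f'--{dashed}', dest=name, action='store_true', default=default)
    group.add_argument(f'--no-{dashed}', dest=name, action='store_false')

parser = argparse.ArgumentParser()
add_boolean_flags(parser, 'progress', default=False)
add_boolean_flags(parser, 'no_cache', default=False)

print(parser.parse_args(['--progress']).progress)      # True
print(parser.parse_args(['--no-progress']).progress)   # False
print(parser.parse_args(['--no-cache']).no_cache)      # True
```

Because both flags share one `dest`, whichever of `--progress`/`--no-progress` appears last on the command line wins, matching the usual override semantics for generated flags.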


@@ -170,7 +170,7 @@ def filter_checks_on_frequency(
         if calendar.day_name[datetime_now().weekday()] not in days:
             logger.info(
-                f"Skipping {check} check due to day of the week; check only runs on {'/'.join(days)} (use --force to check anyway)"
+                f"Skipping {check} check due to day of the week; check only runs on {'/'.join(day.title() for day in days)} (use --force to check anyway)"
             )
             filtered_checks.remove(check)
             continue
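The changed log line above title-cases each configured day name before joining. A minimal sketch of the difference (the configured days are illustrative):

```python
import calendar
import datetime

days = ['monday', 'friday']  # hypothetical configured check days

# Old message joined the lowercase names; the new one title-cases them first:
message = f"check only runs on {'/'.join(day.title() for day in days)}"
print(message)  # check only runs on Monday/Friday

# The surrounding skip decision compares today's weekday name (via
# calendar.day_name) against the configured lowercase day names:
now = datetime.datetime.now()
runs_today = calendar.day_name[now.weekday()].lower() in days
```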
@@ -372,7 +372,7 @@ def collect_spot_check_source_paths(
         borgmatic.borg.create.make_base_create_command(
             dry_run=True,
             repository_path=repository['path'],
-            config=config,
+            config=dict(config, list_details=True),
             patterns=borgmatic.actions.create.process_patterns(
                 borgmatic.actions.create.collect_patterns(config),
                 working_directory,
@@ -382,7 +382,6 @@ def collect_spot_check_source_paths(
             borgmatic_runtime_directory=borgmatic_runtime_directory,
             local_path=local_path,
             remote_path=remote_path,
-            list_files=True,
             stream_processes=stream_processes,
         )
     )
@@ -483,10 +482,12 @@ def compare_spot_check_hashes(
     )
     source_sample_paths = tuple(random.sample(source_paths, sample_count))
     working_directory = borgmatic.config.paths.get_working_directory(config)
-    existing_source_sample_paths = {
+    hashable_source_sample_path = {
         source_path
         for source_path in source_sample_paths
-        if os.path.exists(os.path.join(working_directory or '', source_path))
+        for full_source_path in (os.path.join(working_directory or '', source_path),)
+        if os.path.exists(full_source_path)
+        if not os.path.islink(full_source_path)
     }
     logger.debug(
         f'Sampling {sample_count} source paths (~{spot_check_config["data_sample_percentage"]}%) for spot check'
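The rewritten set comprehension keeps only sample paths that both exist and are not filesystem links (the fix for #1050, where the "spot" check failed on archives containing symlinks). A self-contained sketch of that filtering logic, with fabricated file names:

```python
import os
import tempfile

# Recreate the filtering from compare_spot_check_hashes(): keep only sample
# paths that exist and are not symlinks, since Borg records empty hashes for
# links. The single-element "for full_source_path in (...)" clause is a trick
# to bind a local name inside a comprehension.
with tempfile.TemporaryDirectory() as working_directory:
    open(os.path.join(working_directory, 'regular.txt'), 'w').close()
    os.symlink('regular.txt', os.path.join(working_directory, 'link.txt'))
    source_sample_paths = ('regular.txt', 'link.txt', 'missing.txt')

    hashable_source_sample_path = {
        source_path
        for source_path in source_sample_paths
        for full_source_path in (os.path.join(working_directory or '', source_path),)
        if os.path.exists(full_source_path)
        if not os.path.islink(full_source_path)
    }

print(hashable_source_sample_path)  # {'regular.txt'}
```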
@@ -509,7 +510,7 @@ def compare_spot_check_hashes(
         hash_output = borgmatic.execute.execute_command_and_capture_output(
             (spot_check_config.get('xxh64sum_command', 'xxh64sum'),)
             + tuple(
-                path for path in source_sample_paths_subset if path in existing_source_sample_paths
+                path for path in source_sample_paths_subset if path in hashable_source_sample_path
             ),
             working_directory=working_directory,
         )
@@ -517,11 +518,13 @@ def compare_spot_check_hashes(
         source_hashes.update(
             **dict(
                 (reversed(line.split(' ', 1)) for line in hash_output.splitlines()),
-                # Represent non-existent files as having empty hashes so the comparison below still works.
+                # Represent non-existent files as having empty hashes so the comparison below still
+                # works. Same thing for filesystem links, since Borg produces empty archive hashes
+                # for them.
                 **{
                     path: ''
                     for path in source_sample_paths_subset
-                    if path not in existing_source_sample_paths
+                    if path not in hashable_source_sample_path
                 },
             )
         )
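The dict construction in this hunk parses `xxh64sum`-style output ("<hash> <path>" per line) into a path-to-hash mapping, flipping each line's two fields with `reversed(line.split(' ', 1))` and giving unhashable paths empty hashes. A sketch with fabricated hash output:

```python
# Fabricated xxh64sum-style output and path sets, illustrating the dict
# construction above. reversed(line.split(' ', 1)) turns "<hash> <path>" into
# a (path, hash) pair; paths that weren't hashed (missing files, links) get
# empty hashes so a later comparison against archive hashes still works.
hash_output = 'deadbeefdeadbeef regular.txt\ncafebabecafebabe other.txt'
source_sample_paths_subset = ('regular.txt', 'other.txt', 'link.txt')
hashable_source_sample_path = {'regular.txt', 'other.txt'}

source_hashes = dict(
    (reversed(line.split(' ', 1)) for line in hash_output.splitlines()),
    **{
        path: ''
        for path in source_sample_paths_subset
        if path not in hashable_source_sample_path
    },
)
print(source_hashes)
```

`dict()` accepts any iterable of two-item iterables, so the `reversed` iterators work directly as key/value pairs.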
@@ -682,7 +685,6 @@ def run_check(
     config_filename,
     repository,
     config,
-    hook_context,
     local_borg_version,
     check_arguments,
     global_arguments,
@@ -699,15 +701,6 @@ def run_check(
     ):
         return

-    borgmatic.hooks.command.execute_hook(
-        config.get('before_check'),
-        config.get('umask'),
-        config_filename,
-        'pre-check',
-        global_arguments.dry_run,
-        **hook_context,
-    )
-
     logger.info('Running consistency checks')
     repository_id = borgmatic.borg.check.get_repository_id(
@@ -772,12 +765,3 @@ def run_check(
         borgmatic_runtime_directory,
     )
     write_check_time(make_check_time_path(config, repository_id, 'spot'))
-
-    borgmatic.hooks.command.execute_hook(
-        config.get('after_check'),
-        config.get('umask'),
-        config_filename,
-        'post-check',
-        global_arguments.dry_run,
-        **hook_context,
-    )


@@ -12,7 +12,6 @@ def run_compact(
     config_filename,
     repository,
     config,
-    hook_context,
     local_borg_version,
     compact_arguments,
     global_arguments,
@@ -28,14 +27,6 @@ def run_compact(
     ):
         return

-    borgmatic.hooks.command.execute_hook(
-        config.get('before_compact'),
-        config.get('umask'),
-        config_filename,
-        'pre-compact',
-        global_arguments.dry_run,
-        **hook_context,
-    )
-
     if borgmatic.borg.feature.available(borgmatic.borg.feature.Feature.COMPACT, local_borg_version):
         logger.info(f'Compacting segments{dry_run_label}')
         borgmatic.borg.compact.compact_segments(
@@ -46,18 +37,7 @@ def run_compact(
             global_arguments,
             local_path=local_path,
             remote_path=remote_path,
-            progress=compact_arguments.progress,
-            cleanup_commits=compact_arguments.cleanup_commits,
-            threshold=compact_arguments.threshold,
         )
     else:  # pragma: nocover
         logger.info('Skipping compact (only available/needed in Borg 1.2+)')
-
-    borgmatic.hooks.command.execute_hook(
-        config.get('after_compact'),
-        config.get('umask'),
-        config_filename,
-        'post-compact',
-        global_arguments.dry_run,
-        **hook_context,
-    )


@@ -119,7 +119,9 @@ def run_bootstrap(bootstrap_arguments, global_arguments, local_borg_version):
bootstrap_arguments.repository,
archive_name,
[config_path.lstrip(os.path.sep) for config_path in manifest_config_paths],
config,
# Only add progress here and not the extract_archive() call above, because progress
# conflicts with extract_to_stdout.
dict(config, progress=bootstrap_arguments.progress or False),
local_borg_version,
global_arguments,
local_path=bootstrap_arguments.local_path,
@@ -127,5 +129,4 @@ def run_bootstrap(bootstrap_arguments, global_arguments, local_borg_version):
extract_to_stdout=False,
destination_path=bootstrap_arguments.destination,
strip_components=bootstrap_arguments.strip_components,
progress=bootstrap_arguments.progress,
)


@@ -130,8 +130,11 @@ def expand_directory(directory, working_directory):
def expand_patterns(patterns, working_directory=None, skip_paths=None):
'''
Given a sequence of borgmatic.borg.pattern.Pattern instances and an optional working directory,
expand tildes and globs in each root pattern. Return all the resulting patterns (not just the
root patterns) as a tuple.
expand tildes and globs in each root pattern and expand just tildes in each non-root pattern.
The idea is that non-root patterns may be regular expressions or other pattern styles containing
"*" that borgmatic should not expand as a shell glob.
Return all the resulting patterns as a tuple.
If a set of paths are given to skip, then don't expand any patterns matching them.
'''
@@ -153,7 +156,15 @@ def expand_patterns(patterns, working_directory=None, skip_paths=None):
)
if pattern.type == borgmatic.borg.pattern.Pattern_type.ROOT
and pattern.path not in (skip_paths or ())
else (pattern,)
else (
borgmatic.borg.pattern.Pattern(
os.path.expanduser(pattern.path),
pattern.type,
pattern.style,
pattern.device,
pattern.source,
),
)
)
for pattern in patterns
)
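The revised behavior for non-root patterns described in the docstring above can be sketched as follows. The function name here is illustrative, not borgmatic's API:

```python
import os.path

def expand_non_root_pattern_path(path):
    # Non-root patterns (excludes, regular expressions, other pattern styles)
    # only get tilde expansion; a "*" in them may be regex or fnmatch syntax
    # that must not be expanded as a shell glob.
    return os.path.expanduser(path)
```

Root patterns, by contrast, still get both tilde and glob expansion, since they name real filesystem paths to back up.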
@@ -261,7 +272,6 @@ def run_create(
repository,
config,
config_paths,
hook_context,
local_borg_version,
create_arguments,
global_arguments,
@@ -279,14 +289,15 @@ def run_create(
):
return
borgmatic.hooks.command.execute_hook(
config.get('before_backup'),
config.get('umask'),
config_filename,
'pre-backup',
global_arguments.dry_run,
**hook_context,
)
if config.get('list_details') and config.get('progress'):
raise ValueError(
'With the create action, only one of --list/--files/list_details and --progress/progress can be used.'
)
if config.get('list_details') and create_arguments.json:
raise ValueError(
'With the create action, only one of --list/--files/list_details and --json can be used.'
)
logger.info(f'Creating archive{dry_run_label}')
working_directory = borgmatic.config.paths.get_working_directory(config)
@@ -326,10 +337,7 @@ def run_create(
borgmatic_runtime_directory,
local_path=local_path,
remote_path=remote_path,
progress=create_arguments.progress,
stats=create_arguments.stats,
json=create_arguments.json,
list_files=create_arguments.list_files,
stream_processes=stream_processes,
)
@@ -343,12 +351,3 @@ def run_create(
borgmatic_runtime_directory,
global_arguments.dry_run,
)
borgmatic.hooks.command.execute_hook(
config.get('after_backup'),
config.get('umask'),
config_filename,
'post-backup',
global_arguments.dry_run,
**hook_context,
)


@@ -43,6 +43,5 @@ def run_export_tar(
local_path=local_path,
remote_path=remote_path,
tar_filter=export_tar_arguments.tar_filter,
list_files=export_tar_arguments.list_files,
strip_components=export_tar_arguments.strip_components,
)


@@ -12,7 +12,6 @@ def run_extract(
config_filename,
repository,
config,
hook_context,
local_borg_version,
extract_arguments,
global_arguments,
@@ -22,14 +21,6 @@ def run_extract(
'''
Run the "extract" action for the given repository.
'''
borgmatic.hooks.command.execute_hook(
config.get('before_extract'),
config.get('umask'),
config_filename,
'pre-extract',
global_arguments.dry_run,
**hook_context,
)
if extract_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, extract_arguments.repository
):
@@ -54,13 +45,4 @@ def run_extract(
remote_path=remote_path,
destination_path=extract_arguments.destination,
strip_components=extract_arguments.strip_components,
progress=extract_arguments.progress,
)
borgmatic.hooks.command.execute_hook(
config.get('after_extract'),
config.get('umask'),
config_filename,
'post-extract',
global_arguments.dry_run,
**hook_context,
)


@@ -0,0 +1,33 @@
import logging
import borgmatic.borg.import_key
import borgmatic.config.validate
logger = logging.getLogger(__name__)
def run_import_key(
repository,
config,
local_borg_version,
import_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "key import" action for the given repository.
'''
if import_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, import_arguments.repository
):
logger.info('Importing repository key')
borgmatic.borg.import_key.import_key(
repository['path'],
config,
local_borg_version,
import_arguments,
global_arguments,
local_path=local_path,
remote_path=remote_path,
)


@@ -11,7 +11,6 @@ def run_prune(
config_filename,
repository,
config,
hook_context,
local_borg_version,
prune_arguments,
global_arguments,
@@ -27,14 +26,6 @@ def run_prune(
):
return
borgmatic.hooks.command.execute_hook(
config.get('before_prune'),
config.get('umask'),
config_filename,
'pre-prune',
global_arguments.dry_run,
**hook_context,
)
logger.info(f'Pruning archives{dry_run_label}')
borgmatic.borg.prune.prune_archives(
global_arguments.dry_run,
@@ -46,11 +37,3 @@ def run_prune(
local_path=local_path,
remote_path=remote_path,
)
borgmatic.hooks.command.execute_hook(
config.get('after_prune'),
config.get('umask'),
config_filename,
'post-prune',
global_arguments.dry_run,
**hook_context,
)


@@ -0,0 +1,53 @@
import logging
import borgmatic.borg.recreate
import borgmatic.config.validate
from borgmatic.actions.create import collect_patterns, process_patterns
logger = logging.getLogger(__name__)
def run_recreate(
repository,
config,
local_borg_version,
recreate_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "recreate" action for the given repository.
'''
if recreate_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, recreate_arguments.repository
):
if recreate_arguments.archive:
logger.answer(f'Recreating archive {recreate_arguments.archive}')
else:
logger.answer('Recreating repository')
# Collect and process patterns.
processed_patterns = process_patterns(
collect_patterns(config), borgmatic.config.paths.get_working_directory(config)
)
borgmatic.borg.recreate.recreate_archive(
repository['path'],
borgmatic.borg.repo_list.resolve_archive_name(
repository['path'],
recreate_arguments.archive,
config,
local_borg_version,
global_arguments,
local_path,
remote_path,
),
config,
local_borg_version,
recreate_arguments,
global_arguments,
local_path=local_path,
remote_path=remote_path,
patterns=processed_patterns,
)


@@ -24,18 +24,38 @@ def run_repo_create(
return
logger.info('Creating repository')
encryption_mode = repo_create_arguments.encryption_mode or repository.get('encryption')
if not encryption_mode:
raise ValueError(
'With the repo-create action, either the --encryption flag or the repository encryption option is required.'
)
borgmatic.borg.repo_create.create_repository(
global_arguments.dry_run,
repository['path'],
config,
local_borg_version,
global_arguments,
repo_create_arguments.encryption_mode,
encryption_mode,
repo_create_arguments.source_repository,
repo_create_arguments.copy_crypt_key,
repo_create_arguments.append_only,
repo_create_arguments.storage_quota,
repo_create_arguments.make_parent_dirs,
(
repository.get('append_only')
if repo_create_arguments.append_only is None
else repo_create_arguments.append_only
),
(
repository.get('storage_quota')
if repo_create_arguments.storage_quota is None
else repo_create_arguments.storage_quota
),
(
repository.get('make_parent_directories')
if repo_create_arguments.make_parent_directories is None
else repo_create_arguments.make_parent_directories
),
local_path=local_path,
remote_path=remote_path,
)
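The fallback pattern introduced in this hunk, where an unset command-line flag (None) defers to the per-repository configuration option while an explicitly passed value (even False) still wins, can be sketched with a hypothetical helper:

```python
def resolve_repository_option(argument_value, repository, option_name):
    # An explicitly passed command-line value wins, even a falsey one like
    # False; only an unset flag (None) falls back to the per-repository
    # configuration option.
    return repository.get(option_name) if argument_value is None else argument_value
```

This is why the new conditionals test `is None` rather than truthiness: `--no-append-only` style flags produce False, which must override a configured True.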


@@ -17,7 +17,13 @@ def run_transfer(
'''
Run the "transfer" action for the given repository.
'''
if transfer_arguments.archive and config.get('match_archives'):
raise ValueError(
'With the transfer action, only one of --archive and --match-archives/match_archives can be used.'
)
logger.info('Transferring archives to repository')
borgmatic.borg.transfer.transfer_archives(
global_arguments.dry_run,
repository['path'],


@@ -32,7 +32,7 @@ def make_archive_filter_flags(local_borg_version, config, checks, check_argument
if prefix
else (
flags.make_match_archives_flags(
check_arguments.match_archives or config.get('match_archives'),
config.get('match_archives'),
config.get('archive_name_format'),
local_borg_version,
)
@@ -170,7 +170,7 @@ def check_archives(
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ verbosity_flags
+ (('--progress',) if check_arguments.progress else ())
+ (('--progress',) if config.get('progress') else ())
+ (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
+ flags.make_repository_flags(repository_path, local_borg_version)
)
@@ -180,7 +180,7 @@ def check_archives(
# The Borg repair option triggers an interactive prompt, which won't work when output is
# captured. And progress messes with the terminal directly.
output_file=(
DO_NOT_CAPTURE if check_arguments.repair or check_arguments.progress else None
DO_NOT_CAPTURE if check_arguments.repair or config.get('progress') else None
),
environment=environment.make_environment(config),
working_directory=working_directory,


@@ -15,9 +15,7 @@ def compact_segments(
global_arguments,
local_path='borg',
remote_path=None,
progress=False,
cleanup_commits=False,
threshold=None,
):
'''
Given dry-run flag, a local or remote repository path, a configuration dict, and the local Borg
@@ -26,6 +24,7 @@ def compact_segments(
umask = config.get('umask', None)
lock_wait = config.get('lock_wait', None)
extra_borg_options = config.get('extra_borg_options', {}).get('compact', '')
threshold = config.get('compact_threshold')
full_command = (
(local_path, 'compact')
@@ -33,7 +32,7 @@ def compact_segments(
+ (('--umask', str(umask)) if umask else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (('--progress',) if progress else ())
+ (('--progress',) if config.get('progress') else ())
+ (('--cleanup-commits',) if cleanup_commits else ())
+ (('--threshold', str(threshold)) if threshold else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())


@@ -196,7 +196,7 @@ def check_all_root_patterns_exist(patterns):
if missing_paths:
raise ValueError(
f"Source directories / root pattern paths do not exist: {', '.join(missing_paths)}"
f"Source directories or root pattern paths do not exist: {', '.join(missing_paths)}"
)
@@ -213,9 +213,7 @@ def make_base_create_command(
borgmatic_runtime_directory,
local_path='borg',
remote_path=None,
progress=False,
json=False,
list_files=False,
stream_processes=None,
):
'''
@@ -293,7 +291,7 @@ def make_base_create_command(
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (
('--list', '--filter', list_filter_flags)
if list_files and not json and not progress
if config.get('list_details') and not json and not config.get('progress')
else ()
)
+ (('--dry-run',) if dry_run else ())
@@ -361,10 +359,7 @@ def create_archive(
borgmatic_runtime_directory,
local_path='borg',
remote_path=None,
progress=False,
stats=False,
json=False,
list_files=False,
stream_processes=None,
):
'''
@@ -389,28 +384,26 @@ def create_archive(
borgmatic_runtime_directory,
local_path,
remote_path,
progress,
json,
list_files,
stream_processes,
)
if json:
output_log_level = None
elif list_files or (stats and not dry_run):
elif config.get('list_details') or (config.get('statistics') and not dry_run):
output_log_level = logging.ANSWER
else:
output_log_level = logging.INFO
# The progress output isn't compatible with captured and logged output, as progress messes with
# the terminal directly.
output_file = DO_NOT_CAPTURE if progress else None
output_file = DO_NOT_CAPTURE if config.get('progress') else None
create_flags += (
(('--info',) if logger.getEffectiveLevel() == logging.INFO and not json else ())
+ (('--stats',) if stats and not json and not dry_run else ())
+ (('--stats',) if config.get('statistics') and not json and not dry_run else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) and not json else ())
+ (('--progress',) if progress else ())
+ (('--progress',) if config.get('progress') else ())
+ (('--json',) if json else ())
)
borg_exit_codes = config.get('borg_exit_codes')


@@ -34,7 +34,7 @@ def make_delete_command(
+ borgmatic.borg.flags.make_flags('umask', config.get('umask'))
+ borgmatic.borg.flags.make_flags('log-json', global_arguments.log_json)
+ borgmatic.borg.flags.make_flags('lock-wait', config.get('lock_wait'))
+ borgmatic.borg.flags.make_flags('list', delete_arguments.list_archives)
+ borgmatic.borg.flags.make_flags('list', config.get('list_details'))
+ (
(('--force',) + (('--force',) if delete_arguments.force >= 2 else ()))
if delete_arguments.force
@@ -48,9 +48,17 @@ def make_delete_command(
local_borg_version=local_borg_version,
default_archive_name_format='*',
)
+ (('--stats',) if config.get('statistics') else ())
+ borgmatic.borg.flags.make_flags_from_arguments(
delete_arguments,
excludes=('list_archives', 'force', 'match_archives', 'archive', 'repository'),
excludes=(
'list_details',
'statistics',
'force',
'match_archives',
'archive',
'repository',
),
)
+ borgmatic.borg.flags.make_repository_flags(repository['path'], local_borg_version)
)
@@ -98,7 +106,7 @@ def delete_archives(
repo_delete_arguments = argparse.Namespace(
repository=repository['path'],
list_archives=delete_arguments.list_archives,
list_details=delete_arguments.list_details,
force=delete_arguments.force,
cache_only=delete_arguments.cache_only,
keep_security_info=delete_arguments.keep_security_info,


@@ -20,7 +20,6 @@ def export_tar_archive(
local_path='borg',
remote_path=None,
tar_filter=None,
list_files=False,
strip_components=None,
):
'''
@@ -43,7 +42,7 @@ def export_tar_archive(
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--list',) if list_files else ())
+ (('--list',) if config.get('list_details') else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ (('--dry-run',) if dry_run else ())
+ (('--tar-filter', tar_filter) if tar_filter else ())
@@ -57,7 +56,7 @@ def export_tar_archive(
+ (tuple(paths) if paths else ())
)
if list_files:
if config.get('list_details'):
output_log_level = logging.ANSWER
else:
output_log_level = logging.INFO


@@ -77,7 +77,6 @@ def extract_archive(
remote_path=None,
destination_path=None,
strip_components=None,
progress=False,
extract_to_stdout=False,
):
'''
@@ -92,8 +91,8 @@ def extract_archive(
umask = config.get('umask', None)
lock_wait = config.get('lock_wait', None)
if progress and extract_to_stdout:
raise ValueError('progress and extract_to_stdout cannot both be set')
if config.get('progress') and extract_to_stdout:
raise ValueError('progress and extract to stdout cannot both be set')
if feature.available(feature.Feature.NUMERIC_IDS, local_borg_version):
numeric_ids_flags = ('--numeric-ids',) if config.get('numeric_ids') else ()
@@ -128,15 +127,13 @@ def extract_archive(
+ (('--debug', '--list', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ (('--dry-run',) if dry_run else ())
+ (('--strip-components', str(strip_components)) if strip_components else ())
+ (('--progress',) if progress else ())
+ (('--progress',) if config.get('progress') else ())
+ (('--stdout',) if extract_to_stdout else ())
+ flags.make_repository_archive_flags(
# Make the repository path absolute so the destination directory used below via changing
# the working directory doesn't prevent Borg from finding the repo. But also apply the
# user's configured working directory (if any) to the repo path.
borgmatic.config.validate.normalize_repository_path(
os.path.join(working_directory or '', repository)
),
borgmatic.config.validate.normalize_repository_path(repository, working_directory),
archive,
local_borg_version,
)
@@ -150,7 +147,7 @@ def extract_archive(
# The progress output isn't compatible with captured and logged output, as progress messes with
# the terminal directly.
if progress:
if config.get('progress'):
return execute_command(
full_command,
output_file=DO_NOT_CAPTURE,


@@ -17,6 +17,7 @@ class Feature(Enum):
MATCH_ARCHIVES = 11
EXCLUDED_FILES_MINUS = 12
ARCHIVE_SERIES = 13
NO_PRUNE_STATS = 14
FEATURE_TO_MINIMUM_BORG_VERSION = {
@@ -33,6 +34,7 @@ FEATURE_TO_MINIMUM_BORG_VERSION = {
Feature.MATCH_ARCHIVES: parse('2.0.0b3'), # borg --match-archives
Feature.EXCLUDED_FILES_MINUS: parse('2.0.0b5'), # --list --filter uses "-" for excludes
Feature.ARCHIVE_SERIES: parse('2.0.0b11'), # identically named archives form a series
Feature.NO_PRUNE_STATS: parse('2.0.0b10'), # prune --stats is not available
}


@@ -0,0 +1,70 @@
import logging
import os
import borgmatic.config.paths
import borgmatic.logger
from borgmatic.borg import environment, flags
from borgmatic.execute import DO_NOT_CAPTURE, execute_command
logger = logging.getLogger(__name__)
def import_key(
repository_path,
config,
local_borg_version,
import_arguments,
global_arguments,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, a configuration dict, the local Borg version, import
arguments, and optional local and remote Borg paths, import the repository key from the
path indicated in the import arguments.
If the path is empty or "-", then read the key from stdin.
Raise ValueError if the path is given and it does not exist.
'''
umask = config.get('umask', None)
lock_wait = config.get('lock_wait', None)
working_directory = borgmatic.config.paths.get_working_directory(config)
if import_arguments.path and import_arguments.path != '-':
if not os.path.exists(os.path.join(working_directory or '', import_arguments.path)):
raise ValueError(f'Path {import_arguments.path} does not exist. Aborting.')
input_file = None
else:
input_file = DO_NOT_CAPTURE
full_command = (
(local_path, 'key', 'import')
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--umask', str(umask)) if umask else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ flags.make_flags('paper', import_arguments.paper)
+ flags.make_repository_flags(
repository_path,
local_borg_version,
)
+ ((import_arguments.path,) if input_file is None else ())
)
if global_arguments.dry_run:
logger.info('Skipping key import (dry run)')
return
execute_command(
full_command,
input_file=input_file,
output_log_level=logging.INFO,
environment=environment.make_environment(config),
working_directory=working_directory,
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),
)
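The stdin convention in import_key() can be sketched standalone. The sentinel and helper below are illustrative stand-ins for borgmatic's DO_NOT_CAPTURE sentinel and the inline logic above, not its actual API:

```python
import os

# Illustrative stand-in for borgmatic.execute.DO_NOT_CAPTURE.
DO_NOT_CAPTURE = object()

def select_key_input(path, working_directory=None):
    # An empty path or "-" means the key is read from stdin, so the command
    # gets attached to the terminal instead of having its input captured.
    # Otherwise the path must exist (relative to any working directory).
    if path and path != '-':
        if not os.path.exists(os.path.join(working_directory or '', path)):
            raise ValueError(f'Path {path} does not exist. Aborting.')

        return None

    return DO_NOT_CAPTURE
```

Note that when stdin is used, the path is also omitted from the Borg command line, matching the `((import_arguments.path,) if input_file is None else ())` clause above.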


@@ -48,9 +48,7 @@ def make_info_command(
if info_arguments.prefix
else (
flags.make_match_archives_flags(
info_arguments.match_archives
or info_arguments.archive
or config.get('match_archives'),
info_arguments.archive or config.get('match_archives'),
config.get('archive_name_format'),
local_borg_version,
)


@@ -41,7 +41,7 @@ def make_prune_flags(config, prune_arguments, local_borg_version):
if prefix
else (
flags.make_match_archives_flags(
prune_arguments.match_archives or config.get('match_archives'),
config.get('match_archives'),
config.get('archive_name_format'),
local_borg_version,
)
@@ -75,20 +75,26 @@ def prune_archives(
+ (('--umask', str(umask)) if umask else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (('--stats',) if prune_arguments.stats and not dry_run else ())
+ (
('--stats',)
if config.get('statistics')
and not dry_run
and not feature.available(feature.Feature.NO_PRUNE_STATS, local_borg_version)
else ()
)
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ flags.make_flags_from_arguments(
prune_arguments,
excludes=('repository', 'match_archives', 'stats', 'list_archives'),
excludes=('repository', 'match_archives', 'statistics', 'list_details'),
)
+ (('--list',) if prune_arguments.list_archives else ())
+ (('--list',) if config.get('list_details') else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ (('--dry-run',) if dry_run else ())
+ (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
+ flags.make_repository_flags(repository_path, local_borg_version)
)
if prune_arguments.stats or prune_arguments.list_archives:
if config.get('statistics') or config.get('list_details'):
output_log_level = logging.ANSWER
else:
output_log_level = logging.INFO

borgmatic/borg/recreate.py (new file)

@@ -0,0 +1,103 @@
import logging
import shlex
import borgmatic.borg.environment
import borgmatic.borg.feature
import borgmatic.config.paths
import borgmatic.execute
from borgmatic.borg import flags
from borgmatic.borg.create import make_exclude_flags, make_list_filter_flags, write_patterns_file
logger = logging.getLogger(__name__)
def recreate_archive(
repository,
archive,
config,
local_borg_version,
recreate_arguments,
global_arguments,
local_path,
remote_path=None,
patterns=None,
):
'''
Given a local or remote repository path, an archive name, a configuration dict, the local Borg
version string, an argparse.Namespace of recreate arguments, an argparse.Namespace of global
arguments, and optional local and remote Borg paths, execute the recreate command with those
arguments.
'''
lock_wait = config.get('lock_wait', None)
exclude_flags = make_exclude_flags(config)
compression = config.get('compression', None)
chunker_params = config.get('chunker_params', None)
# Available recompress modes: "if-different", "always", and "never" (the default).
recompress = config.get('recompress', None)
# Write patterns to a temporary file and use that file with --patterns-from.
patterns_file = write_patterns_file(
patterns, borgmatic.config.paths.get_working_directory(config)
)
recreate_command = (
(local_path, 'recreate')
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait is not None else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ (('--patterns-from', patterns_file.name) if patterns_file else ())
+ (
(
'--list',
'--filter',
make_list_filter_flags(local_borg_version, global_arguments.dry_run),
)
if config.get('list_details')
else ()
)
# Flag --target works only for a single archive.
+ (('--target', recreate_arguments.target) if recreate_arguments.target and archive else ())
+ (
('--comment', shlex.quote(recreate_arguments.comment))
if recreate_arguments.comment
else ()
)
+ (('--timestamp', recreate_arguments.timestamp) if recreate_arguments.timestamp else ())
+ (('--compression', compression) if compression else ())
+ (('--chunker-params', chunker_params) if chunker_params else ())
+ (('--recompress', recompress) if recompress else ())
+ exclude_flags
+ (
(
flags.make_repository_flags(repository, local_borg_version)
+ flags.make_match_archives_flags(
archive or config.get('match_archives'),
config.get('archive_name_format'),
local_borg_version,
)
)
if borgmatic.borg.feature.available(
borgmatic.borg.feature.Feature.SEPARATE_REPOSITORY_ARCHIVE, local_borg_version
)
else (
flags.make_repository_archive_flags(repository, archive, local_borg_version)
if archive
else flags.make_repository_flags(repository, local_borg_version)
)
)
)
if global_arguments.dry_run:
logger.info('Skipping the archive recreation (dry run)')
return
borgmatic.execute.execute_command(
full_command=recreate_command,
output_log_level=logging.INFO,
environment=borgmatic.borg.environment.make_environment(config),
working_directory=borgmatic.config.paths.get_working_directory(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),
)
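The tuple-concatenation style used above to assemble the recreate command, where each optional flag group contributes an empty tuple when its condition is false, can be sketched with a simplified stand-in (not the full command):

```python
def make_command(local_path='borg', dry_run=False, lock_wait=None, verbose=False):
    # Each parenthesized conditional yields either a tuple of flags or an
    # empty tuple; "+" on tuples makes the empty groups vanish, so the
    # result is exactly the flags whose conditions held, in order.
    return (
        (local_path, 'recreate')
        + (('--lock-wait', str(lock_wait)) if lock_wait is not None else ())
        + (('--info',) if verbose else ())
        + (('--dry-run',) if dry_run else ())
    )
```

The `lock_wait is not None` check (rather than truthiness) matters for values like 0, mirroring the `if lock_wait is not None` clause in the hunk.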


@@ -24,7 +24,7 @@ def create_repository(
copy_crypt_key=False,
append_only=None,
storage_quota=None,
make_parent_dirs=False,
make_parent_directories=False,
local_path='borg',
remote_path=None,
):
@@ -79,7 +79,7 @@ def create_repository(
+ (('--copy-crypt-key',) if copy_crypt_key else ())
+ (('--append-only',) if append_only else ())
+ (('--storage-quota', storage_quota) if storage_quota else ())
+ (('--make-parent-dirs',) if make_parent_dirs else ())
+ (('--make-parent-dirs',) if make_parent_directories else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug',) if logger.isEnabledFor(logging.DEBUG) else ())
+ (('--log-json',) if global_arguments.log_json else ())


@@ -39,14 +39,14 @@ def make_repo_delete_command(
+ borgmatic.borg.flags.make_flags('umask', config.get('umask'))
+ borgmatic.borg.flags.make_flags('log-json', global_arguments.log_json)
+ borgmatic.borg.flags.make_flags('lock-wait', config.get('lock_wait'))
+ borgmatic.borg.flags.make_flags('list', repo_delete_arguments.list_archives)
+ borgmatic.borg.flags.make_flags('list', config.get('list_details'))
+ (
(('--force',) + (('--force',) if repo_delete_arguments.force >= 2 else ()))
if repo_delete_arguments.force
else ()
)
+ borgmatic.borg.flags.make_flags_from_arguments(
repo_delete_arguments, excludes=('list_archives', 'force', 'repository')
repo_delete_arguments, excludes=('list_details', 'force', 'repository')
)
+ borgmatic.borg.flags.make_repository_flags(repository['path'], local_borg_version)
)


@@ -113,7 +113,7 @@ def make_repo_list_command(
if repo_list_arguments.prefix
else (
flags.make_match_archives_flags(
repo_list_arguments.match_archives or config.get('match_archives'),
config.get('match_archives'),
config.get('archive_name_format'),
local_borg_version,
)


@@ -32,17 +32,22 @@ def transfer_archives(
+ flags.make_flags('remote-path', remote_path)
+ flags.make_flags('umask', config.get('umask'))
+ flags.make_flags('log-json', global_arguments.log_json)
+ flags.make_flags('lock-wait', config.get('lock_wait', None))
+ flags.make_flags('lock-wait', config.get('lock_wait'))
+ flags.make_flags('progress', config.get('progress'))
+ (
flags.make_flags_from_arguments(
transfer_arguments,
excludes=('repository', 'source_repository', 'archive', 'match_archives'),
excludes=(
'repository',
'source_repository',
'archive',
'match_archives',
'progress',
),
)
or (
flags.make_match_archives_flags(
transfer_arguments.match_archives
or transfer_arguments.archive
or config.get('match_archives'),
transfer_arguments.archive or config.get('match_archives'),
config.get('archive_name_format'),
local_borg_version,
)
@@ -56,7 +61,7 @@ def transfer_archives(
return execute_command(
full_command,
output_log_level=logging.ANSWER,
output_file=DO_NOT_CAPTURE if transfer_arguments.progress else None,
output_file=DO_NOT_CAPTURE if config.get('progress') else None,
environment=environment.make_environment(config),
working_directory=borgmatic.config.paths.get_working_directory(config),
borg_local_path=local_path,


@@ -1,8 +1,13 @@
import collections
import io
import itertools
import re
import sys
from argparse import ArgumentParser
import ruamel.yaml
import borgmatic.config.schema
from borgmatic.config import collect
ACTION_ALIASES = {
@@ -27,6 +32,7 @@ ACTION_ALIASES = {
'break-lock': [],
'key': [],
'borg': [],
'recreate': [],
}
@@ -63,9 +69,9 @@ def get_subactions_for_actions(action_parsers):
def omit_values_colliding_with_action_names(unparsed_arguments, parsed_arguments):
'''
Given a sequence of string arguments and a dict from action name to parsed argparse.Namespace
arguments, return the string arguments with any values omitted that happen to be the same as
the name of a borgmatic action.
Given unparsed arguments as a sequence of strings and a dict from action name to parsed
argparse.Namespace arguments, return the string arguments with any values omitted that happen to
be the same as the name of a borgmatic action.
This prevents, for instance, "check --only extract" from triggering the "extract" action.
'''
@@ -282,17 +288,270 @@ def parse_arguments_for_actions(unparsed_arguments, action_parsers, global_parse
)
def make_parsers():
OMITTED_FLAG_NAMES = {'match-archives', 'progress', 'statistics', 'list-details'}
def make_argument_description(schema, flag_name):
'''
Build a global arguments parser, individual action parsers, and a combined parser containing
both. Return them as a tuple. The global parser is useful for parsing just global arguments
while ignoring actions, and the combined parser is handy for displaying help that includes
everything: global flags, a list of actions, etc.
Given a configuration schema dict and a flag name for it, extend the schema's description with
an example or additional information as appropriate based on its type. Return the updated
description for use in a command-line argument.
'''
description = schema.get('description')
schema_type = schema.get('type')
example = schema.get('example')
pieces = [description] if description else []
if '[0]' in flag_name:
pieces.append(
' To specify a different list element, replace the "[0]" with another array index ("[1]", "[2]", etc.).'
)
if example and schema_type in ('array', 'object'):
example_buffer = io.StringIO()
yaml = ruamel.yaml.YAML(typ='safe')
yaml.default_flow_style = True
yaml.dump(example, example_buffer)
pieces.append(f'Example value: "{example_buffer.getvalue().strip()}"')
return ' '.join(pieces).replace('%', '%%')
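A simplified sketch of the "[0]" hint and "%" escaping in make_argument_description(), omitting the YAML example rendering and taking the description directly as a string:

```python
def make_argument_description(description, flag_name):
    # Collect description pieces, then append the array-index hint when the
    # flag addresses the first element of a list option.
    pieces = [description] if description else []

    if '[0]' in flag_name:
        pieces.append(
            'To specify a different list element, replace the "[0]" with '
            'another array index ("[1]", "[2]", etc.).'
        )

    # Escape "%" for argparse, which treats it as a format character in help text.
    return ' '.join(pieces).replace('%', '%%')
```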
def add_array_element_arguments(arguments_group, unparsed_arguments, flag_name):
r'''
Given an argparse._ArgumentGroup instance, a sequence of unparsed argument strings, and a dotted
flag name, add command-line array element flags that correspond to the given unparsed arguments.
Here's the background. We want to support flags that can have arbitrary indices like:
--foo.bar[1].baz
But argparse doesn't support that natively because the index can be an arbitrary number. We
won't let that stop us though, will we?
If the current flag name has an array component in it (e.g. a name with "[0]"), then make a
pattern that would match the flag name regardless of the number that's in it. The idea is that
we want to look for unparsed arguments that appear like the flag name, but instead of "[0]" they
have, say, "[1]" or "[123]".
Next, we check each unparsed argument against that pattern. If one of them matches, add an
argument flag for it to the argument parser group. Example:
Let's say flag_name is:
--foo.bar[0].baz
... then the regular expression pattern will be:
^--foo\.bar\[\d+\]\.baz
... and, if that matches an unparsed argument of:
--foo.bar[1].baz
... then an argument flag will get added equal to that unparsed argument. And so the unparsed
argument will match it when parsing is performed! In this manner, we're using the actual user
CLI input to inform what exact flags we support.
'''
if '[0]' not in flag_name or not unparsed_arguments or '--help' in unparsed_arguments:
return
pattern = re.compile(fr'^--{flag_name.replace("[0]", r"\[\d+\]").replace(".", r"\.")}$')
try:
# Find an existing list index flag (and its action) corresponding to the given flag name.
(argument_action, existing_flag_name) = next(
(action, action_flag_name)
for action in arguments_group._group_actions
for action_flag_name in action.option_strings
if pattern.match(action_flag_name)
if f'--{flag_name}'.startswith(action_flag_name)
)
# Based on the type of the action (e.g. argparse._StoreTrueAction), look up the corresponding
# action registry name (e.g., "store_true") to pass to add_argument(action=...) below.
action_registry_name = next(
registry_name
for registry_name, action_type in arguments_group._registries['action'].items()
        # Not using isinstance() here because we only want an exact match, not parent classes.
if type(argument_action) is action_type
)
except StopIteration:
return
for unparsed in unparsed_arguments:
unparsed_flag_name = unparsed.split('=', 1)[0]
destination_name = unparsed_flag_name.lstrip('-').replace('-', '_')
if not pattern.match(unparsed_flag_name) or unparsed_flag_name == existing_flag_name:
continue
if action_registry_name in ('store_true', 'store_false'):
arguments_group.add_argument(
unparsed_flag_name,
action=action_registry_name,
default=argument_action.default,
dest=destination_name,
required=argument_action.nargs,
)
else:
arguments_group.add_argument(
unparsed_flag_name,
action=action_registry_name,
choices=argument_action.choices,
default=argument_action.default,
dest=destination_name,
nargs=argument_action.nargs,
required=argument_action.nargs,
type=argument_action.type,
)
def add_arguments_from_schema(arguments_group, schema, unparsed_arguments, names=None):
'''
Given an argparse._ArgumentGroup instance, a configuration schema dict, and a sequence of
unparsed argument strings, convert the entire schema into corresponding command-line flags and
add them to the arguments group.
For instance, given a schema of:
{
'type': 'object',
'properties': {
'foo': {
'type': 'object',
'properties': {
'bar': {'type': 'integer'}
}
}
}
}
... the following flag will be added to the arguments group:
--foo.bar
If "foo" is instead an array of objects, both of the following will get added:
--foo
--foo[0].bar
And if names are also passed in, they are considered to be the name components of an option
(e.g. "foo" and "bar") and are used to construct a resulting flag.
Bail if the schema is not a dict.
'''
if names is None:
names = ()
if not isinstance(schema, dict):
return
schema_type = schema.get('type')
# If this option has multiple types, just use the first one (that isn't "null").
if isinstance(schema_type, list):
try:
schema_type = next(single_type for single_type in schema_type if single_type != 'null')
except StopIteration:
raise ValueError(f'Unknown type in configuration schema: {schema_type}')
# If this is an "object" type, recurse for each child option ("property").
if schema_type == 'object':
properties = schema.get('properties')
# If there are child properties, recurse for each one. But if there are no child properties,
# fall through so that a flag gets added below for the (empty) object.
if properties:
for name, child in properties.items():
add_arguments_from_schema(
arguments_group, child, unparsed_arguments, names + (name,)
)
return
# If this is an "array" type, recurse for each items type child option. Don't return yet so that
# a flag also gets added below for the array itself.
if schema_type == 'array':
items = schema.get('items', {})
properties = borgmatic.config.schema.get_properties(items)
if properties:
for name, child in properties.items():
add_arguments_from_schema(
arguments_group,
child,
unparsed_arguments,
names[:-1] + (f'{names[-1]}[0]',) + (name,),
)
# If there aren't any children, then this is an array of scalars. Recurse accordingly.
else:
add_arguments_from_schema(
arguments_group, items, unparsed_arguments, names[:-1] + (f'{names[-1]}[0]',)
)
flag_name = '.'.join(names).replace('_', '-')
# Certain options already have corresponding flags on individual actions (like "create
# --progress"), so don't bother adding them to the global flags.
if not flag_name or flag_name in OMITTED_FLAG_NAMES:
return
metavar = names[-1].upper()
description = make_argument_description(schema, flag_name)
# The object=str and array=str given here is to support specifying an object or an array as a
# YAML string on the command-line.
argument_type = borgmatic.config.schema.parse_type(schema_type, object=str, array=str)
# As a UX nicety, add separate true and false flags for boolean options.
if schema_type == 'boolean':
arguments_group.add_argument(
f'--{flag_name}',
action='store_true',
default=None,
help=description,
)
if names[-1].startswith('no_'):
no_flag_name = '.'.join(names[:-1] + (names[-1][len('no_') :],)).replace('_', '-')
else:
no_flag_name = '.'.join(names[:-1] + ('no-' + names[-1],)).replace('_', '-')
arguments_group.add_argument(
f'--{no_flag_name}',
dest=flag_name.replace('-', '_'),
action='store_false',
default=None,
help=f'Set the --{flag_name} value to false.',
)
else:
arguments_group.add_argument(
f'--{flag_name}',
type=argument_type,
metavar=metavar,
help=description,
)
add_array_element_arguments(arguments_group, unparsed_arguments, flag_name)
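As a minimal demonstration (not borgmatic itself), argparse accepts dotted flag names like the ones generated above; the dot survives into the destination name, so the value is read back with getattr():

```python
import argparse

# Hypothetical dotted flag; argparse doesn't require the dest to be a
# valid Python identifier, it just stores it on the Namespace.
parser = argparse.ArgumentParser()
parser.add_argument('--foo.bar', dest='foo.bar', type=int, metavar='BAR')

arguments = parser.parse_args(['--foo.bar', '3'])
assert getattr(arguments, 'foo.bar') == 3
```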
def make_parsers(schema, unparsed_arguments):
'''
Given a configuration schema dict and unparsed arguments as a sequence of strings, build a
global arguments parser, individual action parsers, and a combined parser containing both.
Return them as a tuple. The global parser is useful for parsing just global arguments while
ignoring actions, and the combined parser is handy for displaying help that includes everything:
global flags, a list of actions, etc.
'''
config_paths = collect.get_default_config_paths(expand_home=True)
unexpanded_config_paths = collect.get_default_config_paths(expand_home=False)
global_parser = ArgumentParser(add_help=False)
# Using allow_abbrev=False here prevents the global parser from erroring about "ambiguous"
# options like --encryption. Such options are intended for an action parser rather than the
# global parser, and so we don't want to error on them here.
global_parser = ArgumentParser(allow_abbrev=False, add_help=False)
global_group = global_parser.add_argument_group('global arguments')
global_group.add_argument(
@@ -309,9 +568,6 @@ def make_parsers():
action='store_true',
help='Go through the motions, but do not actually write to any repositories',
)
global_group.add_argument(
'-nc', '--no-color', dest='no_color', action='store_true', help='Disable colored output'
)
global_group.add_argument(
'-v',
'--verbosity',
@@ -388,6 +644,7 @@ def make_parsers():
action='store_true',
help='Display installed version number of borgmatic and exit',
)
add_arguments_from_schema(global_group, schema, unparsed_arguments)
global_plus_action_parser = ArgumentParser(
description='''
@@ -415,7 +672,6 @@ def make_parsers():
'--encryption',
dest='encryption_mode',
help='Borg repository encryption mode',
required=True,
)
repo_create_group.add_argument(
'--source-repository',
@@ -434,6 +690,7 @@ def make_parsers():
)
repo_create_group.add_argument(
'--append-only',
default=None,
action='store_true',
help='Create an append-only repository',
)
@@ -443,6 +700,8 @@ def make_parsers():
)
repo_create_group.add_argument(
'--make-parent-dirs',
dest='make_parent_directories',
default=None,
action='store_true',
help='Create any missing parent directories of the repository directory',
)
@@ -477,7 +736,7 @@ def make_parsers():
)
transfer_group.add_argument(
'--progress',
default=False,
default=None,
action='store_true',
help='Display progress as each archive is transferred',
)
@@ -544,13 +803,17 @@ def make_parsers():
)
prune_group.add_argument(
'--stats',
dest='stats',
default=False,
dest='statistics',
default=None,
action='store_true',
help='Display statistics of the pruned archive',
help='Display statistics of the pruned archive [Borg 1 only]',
)
prune_group.add_argument(
'--list', dest='list_archives', action='store_true', help='List archives kept/pruned'
'--list',
dest='list_details',
default=None,
action='store_true',
help='List archives kept/pruned',
)
prune_group.add_argument(
'--oldest',
@@ -588,8 +851,7 @@ def make_parsers():
)
compact_group.add_argument(
'--progress',
dest='progress',
default=False,
default=None,
action='store_true',
help='Display progress as each segment is compacted',
)
@@ -603,7 +865,7 @@ def make_parsers():
compact_group.add_argument(
'--threshold',
type=int,
dest='threshold',
dest='compact_threshold',
help='Minimum saved space percentage threshold for compacting a segment, defaults to 10',
)
compact_group.add_argument(
@@ -624,20 +886,24 @@ def make_parsers():
)
create_group.add_argument(
'--progress',
dest='progress',
default=False,
default=None,
action='store_true',
help='Display progress for each file as it is backed up',
)
create_group.add_argument(
'--stats',
dest='stats',
default=False,
dest='statistics',
default=None,
action='store_true',
help='Display statistics of archive',
)
create_group.add_argument(
'--list', '--files', dest='list_files', action='store_true', help='Show per-file details'
'--list',
'--files',
dest='list_details',
default=None,
action='store_true',
help='Show per-file details',
)
create_group.add_argument(
'--json', dest='json', default=False, action='store_true', help='Output results as JSON'
@@ -658,8 +924,7 @@ def make_parsers():
)
check_group.add_argument(
'--progress',
dest='progress',
default=False,
default=None,
action='store_true',
help='Display progress for each file as it is checked',
)
@@ -716,12 +981,15 @@ def make_parsers():
)
delete_group.add_argument(
'--list',
dest='list_archives',
dest='list_details',
default=None,
action='store_true',
help='Show details for the deleted archives',
)
delete_group.add_argument(
'--stats',
dest='statistics',
default=None,
action='store_true',
help='Display statistics for the deleted archives',
)
@@ -826,8 +1094,7 @@ def make_parsers():
)
extract_group.add_argument(
'--progress',
dest='progress',
default=False,
default=None,
action='store_true',
help='Display progress for each file as it is extracted',
)
@@ -902,8 +1169,7 @@ def make_parsers():
)
config_bootstrap_group.add_argument(
'--progress',
dest='progress',
default=False,
default=None,
action='store_true',
help='Display progress for each file as it is extracted',
)
@@ -996,7 +1262,12 @@ def make_parsers():
'--tar-filter', help='Name of filter program to pipe data through'
)
export_tar_group.add_argument(
'--list', '--files', dest='list_files', action='store_true', help='Show per-file details'
'--list',
'--files',
dest='list_details',
default=None,
action='store_true',
help='Show per-file details',
)
export_tar_group.add_argument(
'--strip-components',
@@ -1107,7 +1378,8 @@ def make_parsers():
)
repo_delete_group.add_argument(
'--list',
dest='list_archives',
dest='list_details',
default=None,
action='store_true',
help='Show details for the archives in the given repository',
)
@@ -1479,6 +1751,31 @@ def make_parsers():
'-h', '--help', action='help', help='Show this help message and exit'
)
key_import_parser = key_parsers.add_parser(
'import',
help='Import a copy of the repository key from backup',
description='Import a copy of the repository key from backup',
add_help=False,
)
key_import_group = key_import_parser.add_argument_group('key import arguments')
key_import_group.add_argument(
'--paper',
action='store_true',
help='Import interactively from a backup done with --paper',
)
key_import_group.add_argument(
'--repository',
help='Path of repository to import the key from, defaults to the configured repository if there is only one, quoted globs supported',
)
key_import_group.add_argument(
'--path',
metavar='PATH',
help='Path to import the key from backup, defaults to stdin',
)
key_import_group.add_argument(
'-h', '--help', action='help', help='Show this help message and exit'
)
key_change_passphrase_parser = key_parsers.add_parser(
'change-passphrase',
help='Change the passphrase protecting the repository key',
@@ -1496,6 +1793,56 @@ def make_parsers():
'-h', '--help', action='help', help='Show this help message and exit'
)
recreate_parser = action_parsers.add_parser(
'recreate',
aliases=ACTION_ALIASES['recreate'],
help='Recreate an archive in a repository (with Borg 1.2+, you must run compact afterwards to actually free space)',
description='Recreate an archive in a repository (with Borg 1.2+, you must run compact afterwards to actually free space)',
add_help=False,
)
recreate_group = recreate_parser.add_argument_group('recreate arguments')
recreate_group.add_argument(
'--repository',
help='Path of repository containing archive to recreate, defaults to the configured repository if there is only one, quoted globs supported',
)
recreate_group.add_argument(
'--archive',
help='Archive name, hash, or series to recreate',
)
recreate_group.add_argument(
'--list',
dest='list_details',
default=None,
action='store_true',
help='Show per-file details',
)
recreate_group.add_argument(
'--target',
metavar='TARGET',
help='Create a new archive from the specified archive (via --archive), without replacing it',
)
recreate_group.add_argument(
'--comment',
metavar='COMMENT',
help='Add a comment text to the archive or, if an archive is not provided, to all matching archives',
)
recreate_group.add_argument(
'--timestamp',
metavar='TIMESTAMP',
help='Manually override the archive creation date/time (UTC)',
)
recreate_group.add_argument(
'-a',
'--match-archives',
'--glob-archives',
dest='match_archives',
metavar='PATTERN',
help='Only consider archive names, hashes, or series matching this pattern [Borg 2.x+ only]',
)
recreate_group.add_argument(
'-h', '--help', action='help', help='Show this help message and exit'
)
borg_parser = action_parsers.add_parser(
'borg',
aliases=ACTION_ALIASES['borg'],
@@ -1523,15 +1870,18 @@ def make_parsers():
return global_parser, action_parsers, global_plus_action_parser
def parse_arguments(*unparsed_arguments):
def parse_arguments(schema, *unparsed_arguments):
'''
Given command-line arguments with which this script was invoked, parse the arguments and return
them as a dict mapping from action name (or "global") to an argparse.Namespace instance.
Given a configuration schema dict and the command-line arguments with which this script was
invoked and unparsed arguments as a sequence of strings, parse the arguments and return them as
a dict mapping from action name (or "global") to an argparse.Namespace instance.
Raise ValueError if the arguments cannot be parsed.
Raise SystemExit with an error code of 0 if "--help" was requested.
'''
global_parser, action_parsers, global_plus_action_parser = make_parsers()
global_parser, action_parsers, global_plus_action_parser = make_parsers(
schema, unparsed_arguments
)
arguments, remaining_action_arguments = parse_arguments_for_actions(
unparsed_arguments, action_parsers.choices, global_parser
)
@@ -1559,15 +1909,6 @@ def parse_arguments(*unparsed_arguments):
f"Unrecognized argument{'s' if len(unknown_arguments) > 1 else ''}: {' '.join(unknown_arguments)}"
)
if 'create' in arguments and arguments['create'].list_files and arguments['create'].progress:
raise ValueError(
'With the create action, only one of --list (--files) and --progress flags can be used.'
)
if 'create' in arguments and arguments['create'].list_files and arguments['create'].json:
raise ValueError(
'With the create action, only one of --list (--files) and --json flags can be used.'
)
if (
('list' in arguments and 'repo-info' in arguments and arguments['list'].json)
or ('list' in arguments and 'info' in arguments and arguments['list'].json)
@@ -1575,15 +1916,6 @@ def parse_arguments(*unparsed_arguments):
):
raise ValueError('With the --json flag, multiple actions cannot be used together.')
if (
'transfer' in arguments
and arguments['transfer'].archive
and arguments['transfer'].match_archives
):
raise ValueError(
'With the transfer action, only one of --archive and --match-archives flags can be used.'
)
if 'list' in arguments and (arguments['list'].prefix and arguments['list'].match_archives):
raise ValueError(
'With the list action, only one of --prefix or --match-archives flags can be used.'

File diff suppressed because it is too large.

View File

@@ -1,5 +1,7 @@
import borgmatic.commands.arguments
import borgmatic.commands.completion.actions
import borgmatic.commands.completion.flag
import borgmatic.config.validate
def parser_flags(parser):
@@ -7,7 +9,12 @@ def parser_flags(parser):
Given an argparse.ArgumentParser instance, return its argument flags in a space-separated
string.
'''
return ' '.join(option for action in parser._actions for option in action.option_strings)
return ' '.join(
flag_variant
for action in parser._actions
for flag_name in action.option_strings
for flag_variant in borgmatic.commands.completion.flag.variants(flag_name)
)
def bash_completion():
@@ -19,7 +26,10 @@ def bash_completion():
unused_global_parser,
action_parsers,
global_plus_action_parser,
) = borgmatic.commands.arguments.make_parsers()
) = borgmatic.commands.arguments.make_parsers(
schema=borgmatic.config.validate.load_schema(borgmatic.config.validate.schema_filename()),
unparsed_arguments=(),
)
global_flags = parser_flags(global_plus_action_parser)
# Avert your eyes.

View File

@@ -4,6 +4,7 @@ from textwrap import dedent
import borgmatic.commands.arguments
import borgmatic.commands.completion.actions
import borgmatic.config.validate
def has_file_options(action: Action):
@@ -26,9 +27,11 @@ def has_choice_options(action: Action):
def has_unknown_required_param_options(action: Action):
'''
A catch-all for options that take a required parameter, but we don't know what the parameter is.
This should be used last. These are actions that take something like a glob, a list of numbers, or a string.
This should be used last. These are actions that take something like a glob, a list of numbers,
or a string.
Actions that match this pattern should not show the normal arguments, because those are unlikely to be valid.
Actions that match this pattern should not show the normal arguments, because those are unlikely
to be valid.
'''
return (
action.required is True
@@ -52,9 +55,9 @@ def has_exact_options(action: Action):
def exact_options_completion(action: Action):
'''
Given an argparse.Action instance, return a completion invocation that forces file completions, options completion,
or just that some value follow the action, if the action takes such an argument and was the last action on the
command line prior to the cursor.
Given an argparse.Action instance, return a completion invocation that forces file completions,
options completion, or just that some value follow the action, if the action takes such an
argument and was the last action on the command line prior to the cursor.
Otherwise, return an empty string.
'''
@@ -80,8 +83,9 @@ def exact_options_completion(action: Action):
def dedent_strip_as_tuple(string: str):
'''
Dedent a string, then strip it to avoid requiring your first line to have content, then return a tuple of the string.
Makes it easier to write multiline strings for completions when you join them with a tuple.
Dedent a string, then strip it to avoid requiring your first line to have content, then return a
tuple of the string. Makes it easier to write multiline strings for completions when you join
them with a tuple.
'''
return (dedent(string).strip('\n'),)
@@ -95,7 +99,10 @@ def fish_completion():
unused_global_parser,
action_parsers,
global_plus_action_parser,
) = borgmatic.commands.arguments.make_parsers()
) = borgmatic.commands.arguments.make_parsers(
schema=borgmatic.config.validate.load_schema(borgmatic.config.validate.schema_filename()),
unparsed_arguments=(),
)
all_action_parsers = ' '.join(action for action in action_parsers.choices.keys())

View File

@@ -0,0 +1,13 @@
def variants(flag_name):
'''
Given a flag name as a string, yield it and any variations that should be complete-able as well.
For instance, for a string like "--foo[0].bar", yield "--foo[0].bar", "--foo[1].bar", ...,
"--foo[9].bar".
'''
if '[0]' in flag_name:
for index in range(0, 10):
yield flag_name.replace('[0]', f'[{index}]')
return
yield flag_name
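Exercised standalone, the generator above behaves like this (re-sketched here so the example is runnable on its own):

```python
# Mirrors the variants() generator above: expand a "[0]" placeholder into
# indices 0 through 9, or yield the flag name unchanged.
def variants(flag_name):
    if '[0]' in flag_name:
        for index in range(0, 10):
            yield flag_name.replace('[0]', f'[{index}]')
        return
    yield flag_name

assert list(variants('--foo[0].bar'))[:3] == ['--foo[0].bar', '--foo[1].bar', '--foo[2].bar']
assert list(variants('--verbosity')) == ['--verbosity']
```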

View File

@@ -0,0 +1,176 @@
import io
import re
import ruamel.yaml
import borgmatic.config.schema
LIST_INDEX_KEY_PATTERN = re.compile(r'^(?P<list_name>[a-zA-Z_-]+)\[(?P<index>\d+)\]$')
def set_values(config, keys, value):
'''
Given a configuration dict, a sequence of parsed key strings, and a string value, descend into
the configuration hierarchy based on the given keys and set the value into the right place.
For example, consider these keys:
('foo', 'bar', 'baz')
This looks up "foo" in the given configuration dict. And within that, it looks up "bar". And
then within that, it looks up "baz" and sets it to the given value. Another example:
('mylist[0]', 'foo')
This looks for the zeroth element of "mylist" in the given configuration. And within that, it
looks up "foo" and sets it to the given value.
'''
if not keys:
return
first_key = keys[0]
# Support "mylist[0]" list index syntax.
match = LIST_INDEX_KEY_PATTERN.match(first_key)
if match:
list_key = match.group('list_name')
list_index = int(match.group('index'))
try:
if len(keys) == 1:
config[list_key][list_index] = value
return
if list_key not in config:
config[list_key] = []
set_values(config[list_key][list_index], keys[1:], value)
except (IndexError, KeyError):
raise ValueError(f'Argument list index {first_key} is out of range')
return
if len(keys) == 1:
config[first_key] = value
return
if first_key not in config:
config[first_key] = {}
set_values(config[first_key], keys[1:], value)
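The dict half of this descent can be sketched standalone (simplified: without the list-index handling above; `set_nested` is an illustrative name, not borgmatic's):

```python
# Walk nested dicts key by key, creating intermediate dicts as needed,
# and set the value at the final leaf key.
def set_nested(config, keys, value):
    *parents, leaf = keys
    for key in parents:
        config = config.setdefault(key, {})
    config[leaf] = value

config = {}
set_nested(config, ('foo', 'bar', 'baz'), 'value1')
assert config == {'foo': {'bar': {'baz': 'value1'}}}
```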
def type_for_option(schema, option_keys):
'''
Given a configuration schema dict and a sequence of keys identifying a potentially nested
option, e.g. ('extra_borg_options', 'create'), return the schema type of that option as a
string.
Return None if the option or its type cannot be found in the schema.
'''
option_schema = schema
for key in option_keys:
# Support "name[0]"-style list index syntax.
match = LIST_INDEX_KEY_PATTERN.match(key)
properties = borgmatic.config.schema.get_properties(option_schema)
try:
if match:
option_schema = properties[match.group('list_name')]['items']
else:
option_schema = properties[key]
except KeyError:
return None
try:
return option_schema['type']
except KeyError:
return None
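The schema walk can be sketched standalone (simplified: no "[0]" index or "oneOf" handling, and the schema here is hypothetical):

```python
# Descend through nested "properties" by key; None if any key is missing.
schema = {
    'type': 'object',
    'properties': {
        'extra_borg_options': {
            'type': 'object',
            'properties': {'create': {'type': 'string'}},
        },
    },
}

def type_for(schema, option_keys):
    for key in option_keys:
        try:
            schema = schema['properties'][key]
        except KeyError:
            return None
    return schema.get('type')

assert type_for(schema, ('extra_borg_options', 'create')) == 'string'
assert type_for(schema, ('missing',)) is None
```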
def convert_value_type(value, option_type):
'''
Given a string value and its schema type as a string, determine its logical type (string,
boolean, integer, etc.), and return it converted to that type.
If the destination option type is a string, then leave the value as-is so that special
characters in it don't get interpreted as YAML during conversion.
And if the source value isn't a string, return it as-is.
Raise ruamel.yaml.error.YAMLError if there's a parse issue with the YAML.
Raise ValueError if the parsed value doesn't match the option type.
'''
if not isinstance(value, str):
return value
if option_type == 'string':
return value
try:
parsed_value = ruamel.yaml.YAML(typ='safe').load(io.StringIO(value))
except ruamel.yaml.error.YAMLError as error:
raise ValueError(f'Argument value "{value}" is invalid: {error.problem}')
if not isinstance(parsed_value, borgmatic.config.schema.parse_type(option_type)):
raise ValueError(f'Argument value "{value}" is not of the expected type: {option_type}')
return parsed_value
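An illustrative analogue of this conversion, with the stdlib json parser standing in for ruamel.yaml so the sketch has no third-party dependencies (and Python types standing in for schema type names):

```python
import json

# Parse a command-line string, then verify the parsed value matches the
# expected type -- leaving strings untouched, as above.
def convert(value, expected_type):
    if expected_type is str:
        # Leave strings as-is so special characters aren't reinterpreted.
        return value
    parsed = json.loads(value)
    if not isinstance(parsed, expected_type):
        raise ValueError(f'Value "{value}" is not of the expected type')
    return parsed

assert convert('3', int) == 3
assert convert('true', bool) is True
assert convert('[1, 2]', list) == [1, 2]
assert convert('true', str) == 'true'
```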
def prepare_arguments_for_config(global_arguments, schema):
'''
Given global arguments as an argparse.Namespace and a configuration schema dict, parse each
argument that corresponds to an option in the schema and return a sequence of tuples (keys,
values) for that option, where keys is a sequence of strings. For instance, given the following
arguments:
argparse.Namespace(**{'my_option.sub_option': 'value1', 'other_option': 'value2'})
... return this:
(
(('my_option', 'sub_option'), 'value1'),
(('other_option',), 'value2'),
)
'''
prepared_values = []
for argument_name, value in global_arguments.__dict__.items():
if value is None:
continue
keys = tuple(argument_name.split('.'))
option_type = type_for_option(schema, keys)
# The argument doesn't correspond to any option in the schema, so ignore it. It's
# probably a flag that borgmatic has on the command-line but not in configuration.
if option_type is None:
continue
prepared_values.append(
(
keys,
convert_value_type(value, option_type),
)
)
return tuple(prepared_values)
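The dotted-name-to-keys step above, standalone (option names here are the hypothetical ones from the docstring):

```python
# Split each dotted argument name into a tuple of keys, pairing it with
# its value, as prepare_arguments_for_config() does before conversion.
arguments = {'my_option.sub_option': 'value1', 'other_option': 'value2'}
prepared = tuple(
    (tuple(name.split('.')), value) for name, value in arguments.items()
)

assert prepared == (
    (('my_option', 'sub_option'), 'value1'),
    (('other_option',), 'value2'),
)
```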
def apply_arguments_to_config(config, schema, arguments):
'''
Given a configuration dict, a corresponding configuration schema dict, and arguments as a dict
from action name to argparse.Namespace, set those given argument values into their corresponding
configuration options in the configuration dict.
    This supports argument flags of the form "--foo.bar.baz" where each dotted component is a nested
configuration object. Additionally, flags like "--foo.bar[0].baz" are supported to update a list
element in the configuration.
'''
for action_arguments in arguments.values():
for keys, value in prepare_arguments_for_config(action_arguments, schema):
set_values(config, keys, value)

View File

@@ -5,6 +5,7 @@ import re
import ruamel.yaml
import borgmatic.config.schema
from borgmatic.config import load, normalize
INDENT = 4
@@ -21,46 +22,59 @@ def insert_newline_before_comment(config, field_name):
)
def get_properties(schema):
'''
Given a schema dict, return its properties. But if it's got sub-schemas with multiple different
    potential properties, return their merged properties instead.
'''
if 'oneOf' in schema:
return dict(
collections.ChainMap(*[sub_schema['properties'] for sub_schema in schema['oneOf']])
)
return schema['properties']
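The "oneOf" merge above relies on collections.ChainMap, where earlier sub-schemas take precedence on duplicate property names. Standalone, with hypothetical sub-schemas:

```python
import collections

# Merge the "properties" of multiple sub-schemas; the first occurrence of
# a property name wins.
sub_schemas = [
    {'properties': {'foo': {'type': 'string'}}},
    {'properties': {'bar': {'type': 'integer'}}},
]
merged = dict(collections.ChainMap(*[sub['properties'] for sub in sub_schemas]))

assert merged == {'foo': {'type': 'string'}, 'bar': {'type': 'integer'}}
```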
SCALAR_SCHEMA_TYPES = {'string', 'boolean', 'integer', 'number'}
def schema_to_sample_configuration(schema, level=0, parent_is_sequence=False):
def schema_to_sample_configuration(schema, source_config=None, level=0, parent_is_sequence=False):
'''
Given a loaded configuration schema, generate and return sample config for it. Include comments
for each option based on the schema "description".
Given a loaded configuration schema and a source configuration, generate and return sample
config for the schema. Include comments for each option based on the schema "description".
If a source config is given, walk it alongside the given schema so that both can be taken into
account when commenting out particular options in add_comments_to_configuration_object().
'''
schema_type = schema.get('type')
example = schema.get('example')
if example is not None:
return example
if schema_type == 'array' or (isinstance(schema_type, list) and 'array' in schema_type):
if borgmatic.config.schema.compare_types(schema_type, {'array'}):
config = ruamel.yaml.comments.CommentedSeq(
[schema_to_sample_configuration(schema['items'], level, parent_is_sequence=True)]
example
if borgmatic.config.schema.compare_types(
schema['items'].get('type'), SCALAR_SCHEMA_TYPES
)
else [
schema_to_sample_configuration(
schema['items'], source_config, level, parent_is_sequence=True
)
]
)
add_comments_to_configuration_sequence(config, schema, indent=(level * INDENT))
elif schema_type == 'object' or (isinstance(schema_type, list) and 'object' in schema_type):
config = ruamel.yaml.comments.CommentedMap(
[
(field_name, schema_to_sample_configuration(sub_schema, level + 1))
for field_name, sub_schema in get_properties(schema).items()
]
elif borgmatic.config.schema.compare_types(schema_type, {'object'}):
if source_config and isinstance(source_config, list) and isinstance(source_config[0], dict):
source_config = dict(collections.ChainMap(*source_config))
config = (
ruamel.yaml.comments.CommentedMap(
[
(
field_name,
schema_to_sample_configuration(
sub_schema, (source_config or {}).get(field_name, {}), level + 1
),
)
for field_name, sub_schema in borgmatic.config.schema.get_properties(
schema
).items()
]
)
or example
)
indent = (level * INDENT) + (SEQUENCE_INDENT if parent_is_sequence else 0)
add_comments_to_configuration_object(
config, schema, indent=indent, skip_first=parent_is_sequence
config, schema, source_config, indent=indent, skip_first=parent_is_sequence
)
elif borgmatic.config.schema.compare_types(schema_type, SCALAR_SCHEMA_TYPES, match=all):
return example
else:
raise ValueError(f'Schema at level {level} is unsupported: {schema}')
@@ -165,7 +179,7 @@ def add_comments_to_configuration_sequence(config, schema, indent=0):
return
for field_name in config[0].keys():
field_schema = get_properties(schema['items']).get(field_name, {})
field_schema = borgmatic.config.schema.get_properties(schema['items']).get(field_name, {})
description = field_schema.get('description')
# No description to use? Skip it.
@@ -179,26 +193,35 @@ def add_comments_to_configuration_sequence(config, schema, indent=0):
return
REQUIRED_KEYS = {'source_directories', 'repositories', 'keep_daily'}
DEFAULT_KEYS = {'source_directories', 'repositories', 'keep_daily'}
COMMENTED_OUT_SENTINEL = 'COMMENT_OUT'
def add_comments_to_configuration_object(config, schema, indent=0, skip_first=False):
def add_comments_to_configuration_object(
config, schema, source_config=None, indent=0, skip_first=False
):
'''
Using descriptions from a schema as a source, add those descriptions as comments to the given
config mapping, before each field. Indent the comment the given number of characters.
configuration dict, putting them before each field. Indent the comment the given number of
characters.
    Also add a sentinel for commenting out options that are neither in DEFAULT_KEYS nor in the given
source configuration dict. The idea is that any options used in the source configuration should
stay active in the generated configuration.
'''
for index, field_name in enumerate(config.keys()):
if skip_first and index == 0:
continue
field_schema = get_properties(schema).get(field_name, {})
field_schema = borgmatic.config.schema.get_properties(schema).get(field_name, {})
description = field_schema.get('description', '').strip()
# If this is an optional key, add an indicator to the comment flagging it to be commented
# If this isn't a default key, add an indicator to the comment flagging it to be commented
# out from the sample configuration. This sentinel is consumed by downstream processing that
# does the actual commenting out.
if field_name not in REQUIRED_KEYS:
if field_name not in DEFAULT_KEYS and (
source_config is None or field_name not in source_config
):
description = (
'\n'.join((description, COMMENTED_OUT_SENTINEL))
if description
@@ -218,21 +241,6 @@ def add_comments_to_configuration_object(config, schema, indent=0, skip_first=Fa
RUAMEL_YAML_COMMENTS_INDEX = 1
def remove_commented_out_sentinel(config, field_name):
'''
Given a configuration CommentedMap and a top-level field name in it, remove any "commented out"
sentinel found at the end of its YAML comments. This prevents the given field name from getting
commented out by downstream processing that consumes the sentinel.
'''
try:
last_comment_value = config.ca.items[field_name][RUAMEL_YAML_COMMENTS_INDEX][-1].value
except KeyError:
return
if last_comment_value == f'# {COMMENTED_OUT_SENTINEL}\n':
config.ca.items[field_name][RUAMEL_YAML_COMMENTS_INDEX].pop()
def merge_source_configuration_into_destination(destination_config, source_config):
'''
Deep merge the given source configuration dict into the destination configuration CommentedMap,
@@ -247,12 +255,6 @@ def merge_source_configuration_into_destination(destination_config, source_confi
return source_config
for field_name, source_value in source_config.items():
# Since this key/value is from the source configuration, leave it uncommented and remove any
# sentinel that would cause it to get commented out.
remove_commented_out_sentinel(
ruamel.yaml.comments.CommentedMap(destination_config), field_name
)
# This is a mapping. Recurse for this key/value.
if isinstance(source_value, collections.abc.Mapping):
destination_config[field_name] = merge_source_configuration_into_destination(
@@ -298,7 +300,7 @@ def generate_sample_configuration(
normalize.normalize(source_filename, source_config)
destination_config = merge_source_configuration_into_destination(
schema_to_sample_configuration(schema), source_config
schema_to_sample_configuration(schema, source_config), source_config
)
if dry_run:

View File

@@ -58,6 +58,90 @@ def normalize_sections(config_filename, config):
return []
def make_command_hook_deprecation_log(config_filename, option_name): # pragma: no cover
'''
Given a configuration filename and the name of a configuration option, return a deprecation
warning log for it.
'''
return logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: {option_name} is deprecated and support will be removed from a future release. Use commands: instead.',
)
)
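The warning above is assembled with the standard-library `logging.makeLogRecord` helper. A minimal standalone sketch, using a hypothetical filename and option name rather than real values from a configuration:

```python
import logging

# Minimal sketch of the deprecation record construction above, with
# hypothetical values for the configuration filename and option name.
record = logging.makeLogRecord(
    dict(
        levelno=logging.WARNING,
        levelname='WARNING',
        msg='config.yaml: before_backup is deprecated and support will be '
        'removed from a future release. Use commands: instead.',
    )
)

# The resulting record carries the warning level and message verbatim.
```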
def normalize_commands(config_filename, config):
'''
Given a configuration filename and a configuration dict, transform any "before_*"- and
"after_*"-style command hooks into "commands:".
'''
logs = []
# Normalize "before_actions" and "after_actions".
for preposition in ('before', 'after'):
option_name = f'{preposition}_actions'
commands = config.pop(option_name, None)
if commands:
logs.append(make_command_hook_deprecation_log(config_filename, option_name))
config.setdefault('commands', []).append(
{
preposition: 'repository',
'run': commands,
}
)
# Normalize "before_backup", "before_prune", "after_backup", "after_prune", etc.
for action_name in ('create', 'prune', 'compact', 'check', 'extract'):
for preposition in ('before', 'after'):
option_name = f'{preposition}_{"backup" if action_name == "create" else action_name}'
commands = config.pop(option_name, None)
if not commands:
continue
logs.append(make_command_hook_deprecation_log(config_filename, option_name))
config.setdefault('commands', []).append(
{
preposition: 'action',
'when': [action_name],
'run': commands,
}
)
# Normalize "on_error".
commands = config.pop('on_error', None)
if commands:
logs.append(make_command_hook_deprecation_log(config_filename, 'on_error'))
config.setdefault('commands', []).append(
{
'after': 'error',
'when': ['create', 'prune', 'compact', 'check'],
'run': commands,
}
)
# Normalize "before_everything" and "after_everything".
for preposition in ('before', 'after'):
option_name = f'{preposition}_everything'
commands = config.pop(option_name, None)
if commands:
logs.append(make_command_hook_deprecation_log(config_filename, option_name))
config.setdefault('commands', []).append(
{
preposition: 'everything',
'when': ['create'],
'run': commands,
}
)
return logs
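As a concrete illustration of the normalization above, here is a standalone sketch (not borgmatic's actual function) of how a legacy `before_backup` option is re-expressed as a `commands:` entry:

```python
# Standalone sketch of the "before_backup" normalization above: the
# legacy option is popped and rewritten as a "commands:" hook.
config = {'before_backup': ['echo Starting a backup.']}

commands = config.pop('before_backup', None)
if commands:
    config.setdefault('commands', []).append(
        {
            'before': 'action',
            'when': ['create'],
            'run': commands,
        }
    )

# config now contains only the equivalent "commands:" entry.
```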
def normalize(config_filename, config):
'''
Given a configuration filename and a configuration dict of its loaded contents, apply particular
@@ -67,6 +151,7 @@ def normalize(config_filename, config):
Raise ValueError if the configuration cannot be normalized.
'''
logs = normalize_sections(config_filename, config)
logs += normalize_commands(config_filename, config)
if config.get('borgmatic_source_directory'):
logs.append(
@@ -241,7 +326,11 @@ def normalize(config_filename, config):
config['repositories'] = []
for repository_dict in repositories:
repository_path = repository_dict['path']
repository_path = repository_dict.get('path')
if repository_path is None:
continue
if '~' in repository_path:
logs.append(
logging.makeLogRecord(

View File

@@ -1,7 +1,10 @@
import io
import logging
import ruamel.yaml
logger = logging.getLogger(__name__)
def set_values(config, keys, value):
'''
@@ -134,6 +137,11 @@ def apply_overrides(config, schema, raw_overrides):
'''
overrides = parse_overrides(raw_overrides, schema)
if overrides:
logger.warning(
"The --override flag is deprecated and will be removed from a future release. Instead, use a command-line flag corresponding to the configuration option you'd like to set."
)
for keys, value in overrides:
set_values(config, keys, value)
set_values(config, strip_section_names(keys), value)

View File

@@ -134,7 +134,7 @@ class Runtime_directory:
'''
return self.runtime_path
def __exit__(self, exception, value, traceback):
def __exit__(self, exception_type, exception, traceback):
'''
Delete any temporary directory that was created as part of initialization.
'''
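The renamed parameters follow the context-manager protocol, in which Python passes the exception type, the exception instance, and the traceback to `__exit__`. A minimal standalone illustration:

```python
# Minimal illustration of the __exit__ signature: Python passes the
# exception type, instance, and traceback (all None if no exception).
captured = {}

class Demo:
    def __enter__(self):
        return self

    def __exit__(self, exception_type, exception, traceback):
        captured['type'] = exception_type
        return True  # suppress the exception

with Demo():
    raise ValueError('boom')

# captured['type'] is the exception's type, not its instance.
```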

View File

@@ -0,0 +1,72 @@
import decimal
import itertools
def get_properties(schema):
'''
Given a schema dict, return its properties. But if it's got sub-schemas with multiple different
potential properties, return their merged properties instead (interleaved so the first
properties of each sub-schema come first). The idea is that the user should see all possible
options even if they're not all possible together.
'''
if 'oneOf' in schema:
return dict(
item
for item in itertools.chain(
*itertools.zip_longest(
*[sub_schema['properties'].items() for sub_schema in schema['oneOf']]
)
)
if item is not None
)
return schema.get('properties', {})
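To see the interleaving in isolation, here is a standalone sketch using a made-up two-variant schema (the property names are illustrative only):

```python
import itertools

# Standalone sketch of the "oneOf" merging above, with a made-up
# schema of two sub-schemas holding two properties each.
schema = {
    'oneOf': [
        {'properties': {'before': 1, 'run': 2}},
        {'properties': {'after': 3, 'when': 4}},
    ]
}

merged = dict(
    item
    for item in itertools.chain(
        *itertools.zip_longest(
            *[sub_schema['properties'].items() for sub_schema in schema['oneOf']]
        )
    )
    if item is not None
)

# Properties are interleaved: the first property of each sub-schema
# comes before the second property of any sub-schema.
```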
SCHEMA_TYPE_TO_PYTHON_TYPE = {
'array': list,
'boolean': bool,
'integer': int,
'number': decimal.Decimal,
'object': dict,
'string': str,
}
def parse_type(schema_type, **overrides):
'''
Given a schema type as a string, return the corresponding Python type.
If any overrides are given in the form of a schema type string to a Python type, then override
the default type mapping with them.
Raise ValueError if the schema type is unknown.
'''
try:
return dict(
SCHEMA_TYPE_TO_PYTHON_TYPE,
**overrides,
)[schema_type]
except KeyError:
raise ValueError(f'Unknown type in configuration schema: {schema_type}')
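For instance, a standalone restatement of the mapping and lookup above, including the keyword-argument override behavior:

```python
import decimal

# Standalone restatement of the schema-type lookup above.
SCHEMA_TYPE_TO_PYTHON_TYPE = {
    'array': list,
    'boolean': bool,
    'integer': int,
    'number': decimal.Decimal,
    'object': dict,
    'string': str,
}

def parse_type(schema_type, **overrides):
    try:
        # Overrides shadow the default mapping for matching keys.
        return dict(SCHEMA_TYPE_TO_PYTHON_TYPE, **overrides)[schema_type]
    except KeyError:
        raise ValueError(f'Unknown type in configuration schema: {schema_type}')
```

Here `parse_type('integer')` yields `int`, while `parse_type('number', number=float)` swaps in `float` for the default `decimal.Decimal`.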
def compare_types(schema_type, target_types, match=any):
'''
Given a schema type as a string or a list of strings (representing multiple types) and a set of
target type strings, return whether the schema type matches the target types.
If the schema type is a list of strings, use the given match function (such as any or all) to
compare elements. For instance, if match is given as all, then every element of the schema_type
list must be in the target types.
'''
if isinstance(schema_type, list):
if match(element_schema_type in target_types for element_schema_type in schema_type):
return True
return False
if schema_type in target_types:
return True
return False
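A condensed restatement with example calls (the early returns above collapse to a single `match` expression):

```python
# Condensed restatement of compare_types, with example calls showing
# the difference between the any (default) and all match functions.
def compare_types(schema_type, target_types, match=any):
    if isinstance(schema_type, list):
        return match(element in target_types for element in schema_type)
    return schema_type in target_types

assert compare_types('boolean', {'boolean', 'null'})
assert compare_types(['integer', 'string'], {'integer'})           # any element matches
assert not compare_types(['integer', 'string'], {'integer'}, all)  # not every element matches
```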

View File

@@ -33,13 +33,47 @@ properties:
type: object
required:
- path
additionalProperties: false
properties:
path:
type: string
example: ssh://user@backupserver/./{fqdn}
description: The local path or Borg URL of the repository.
example: ssh://user@backupserver/./sourcehostname.borg
label:
type: string
description: |
An optional label for the repository, used in logging
and to make selecting the repository easier on the
command-line.
example: backupserver
encryption:
type: string
description: |
The encryption mode with which to create the repository,
only used for the repo-create action. To see the
available encryption modes, run "borg init --help" with
Borg 1 or "borg repo-create --help" with Borg 2.
example: repokey-blake2
append_only:
type: boolean
description: |
Whether the repository should be created append-only,
only used for the repo-create action. Defaults to false.
example: true
storage_quota:
type: string
description: |
The storage quota with which to create the repository,
only used for the repo-create action. Defaults to no
quota.
example: 5G
make_parent_directories:
type: boolean
description: |
Whether any missing parent directories of the repository
path should be created, only used for the repo-create
action. Defaults to false.
example: true
description: |
A required list of local or remote repositories with paths and
optional labels (which can be used with the --repository flag to
@@ -48,8 +82,7 @@ properties:
output of "borg help placeholders" for details. See ssh_command for
SSH options like identity file or port. If systemd service is used,
then add local repository paths in the systemd service file to the
ReadWritePaths list. Prior to borgmatic 1.7.10, repositories was a
list of plain path strings.
ReadWritePaths list.
example:
- path: ssh://user@backupserver/./sourcehostname.borg
label: backupserver
@@ -99,13 +132,13 @@ properties:
used when backing up special devices such as /dev/zero. Defaults to
false. But when a database hook is used, the setting here is ignored
and read_special is considered true.
example: false
example: true
flags:
type: boolean
description: |
Record filesystem flags (e.g. NODUMP, IMMUTABLE) in archive.
Defaults to true.
example: true
example: false
files_cache:
type: string
description: |
@@ -284,6 +317,22 @@ properties:
http://borgbackup.readthedocs.io/en/stable/usage/create.html for
details. Defaults to "lz4".
example: lz4
recompress:
type: string
enum: ['if-different', 'always', 'never']
description: |
Mode for recompressing data chunks according to MODE.
Possible modes are:
* "if-different": Recompress if the current compression
is with a different compression algorithm.
* "always": Recompress even if the current compression
is with the same compression algorithm. Use this to change
the compression level.
* "never": Do not recompress. Use this option to explicitly
prevent recompression.
See https://borgbackup.readthedocs.io/en/stable/usage/recreate.html
for details. Defaults to "never".
example: if-different
upload_rate_limit:
type: integer
description: |
@@ -426,19 +475,19 @@ properties:
type: boolean
description: |
Bypass Borg error about a repository that has been moved. Defaults
to not bypassing.
to false.
example: true
unknown_unencrypted_repo_access_is_ok:
type: boolean
description: |
Bypass Borg error about a previously unknown unencrypted repository.
Defaults to not bypassing.
Defaults to false.
example: true
check_i_know_what_i_am_doing:
type: boolean
description: |
Bypass Borg confirmation about check with repair option. Defaults to
an interactive prompt from Borg.
false and an interactive prompt from Borg.
example: true
extra_borg_options:
type: object
@@ -518,6 +567,12 @@ properties:
not specified, borgmatic defaults to matching archives based on the
archive_name_format (see above).
example: sourcehostname
compact_threshold:
type: integer
description: |
Minimum saved space percentage threshold for compacting a segment,
defaults to 10.
example: 20
checks:
type: array
items:
@@ -733,6 +788,10 @@ properties:
List of one or more consistency checks to run on a periodic basis
(if "frequency" is set) or every time borgmatic runs checks (if
"frequency" is omitted).
example:
- name: archives
frequency: 2 weeks
- name: repository
check_repositories:
type: array
items:
@@ -754,9 +813,29 @@ properties:
color:
type: boolean
description: |
Apply color to console output. Can be overridden with --no-color
command-line flag. Defaults to true.
Apply color to console output. Defaults to true.
example: false
progress:
type: boolean
description: |
Display progress as each file or archive is processed when running
supported actions. Corresponds to the "--progress" flag on those
actions. Defaults to false.
example: true
statistics:
type: boolean
description: |
Display statistics for an archive when running supported actions.
Corresponds to the "--stats" flag on those actions. Defaults to
false.
example: true
list_details:
type: boolean
description: |
Display details for each file or archive as it is processed when
running supported actions. Corresponds to the "--list" flag on those
actions. Defaults to false.
example: true
skip_actions:
type: array
items:
@@ -767,6 +846,7 @@ properties:
- prune
- compact
- create
- recreate
- check
- delete
- extract
@@ -796,8 +876,9 @@ properties:
items:
type: string
description: |
List of one or more shell commands or scripts to execute before all
the actions for each repository.
Deprecated. Use "commands:" instead. List of one or more shell
commands or scripts to execute before all the actions for each
repository.
example:
- "echo Starting actions."
before_backup:
@@ -805,8 +886,9 @@ properties:
items:
type: string
description: |
List of one or more shell commands or scripts to execute before
creating a backup, run once per repository.
Deprecated. Use "commands:" instead. List of one or more shell
commands or scripts to execute before creating a backup, run once
per repository.
example:
- "echo Starting a backup."
before_prune:
@@ -814,8 +896,9 @@ properties:
items:
type: string
description: |
List of one or more shell commands or scripts to execute before
pruning, run once per repository.
Deprecated. Use "commands:" instead. List of one or more shell
commands or scripts to execute before pruning, run once per
repository.
example:
- "echo Starting pruning."
before_compact:
@@ -823,8 +906,9 @@ properties:
items:
type: string
description: |
List of one or more shell commands or scripts to execute before
compaction, run once per repository.
Deprecated. Use "commands:" instead. List of one or more shell
commands or scripts to execute before compaction, run once per
repository.
example:
- "echo Starting compaction."
before_check:
@@ -832,8 +916,9 @@ properties:
items:
type: string
description: |
List of one or more shell commands or scripts to execute before
consistency checks, run once per repository.
Deprecated. Use "commands:" instead. List of one or more shell
commands or scripts to execute before consistency checks, run once
per repository.
example:
- "echo Starting checks."
before_extract:
@@ -841,8 +926,9 @@ properties:
items:
type: string
description: |
List of one or more shell commands or scripts to execute before
extracting a backup, run once per repository.
Deprecated. Use "commands:" instead. List of one or more shell
commands or scripts to execute before extracting a backup, run once
per repository.
example:
- "echo Starting extracting."
after_backup:
@@ -850,8 +936,9 @@ properties:
items:
type: string
description: |
List of one or more shell commands or scripts to execute after
creating a backup, run once per repository.
Deprecated. Use "commands:" instead. List of one or more shell
commands or scripts to execute after creating a backup, run once per
repository.
example:
- "echo Finished a backup."
after_compact:
@@ -859,8 +946,9 @@ properties:
items:
type: string
description: |
List of one or more shell commands or scripts to execute after
compaction, run once per repository.
Deprecated. Use "commands:" instead. List of one or more shell
commands or scripts to execute after compaction, run once per
repository.
example:
- "echo Finished compaction."
after_prune:
@@ -868,8 +956,9 @@ properties:
items:
type: string
description: |
List of one or more shell commands or scripts to execute after
pruning, run once per repository.
Deprecated. Use "commands:" instead. List of one or more shell
commands or scripts to execute after pruning, run once per
repository.
example:
- "echo Finished pruning."
after_check:
@@ -877,8 +966,9 @@ properties:
items:
type: string
description: |
List of one or more shell commands or scripts to execute after
consistency checks, run once per repository.
Deprecated. Use "commands:" instead. List of one or more shell
commands or scripts to execute after consistency checks, run once
per repository.
example:
- "echo Finished checks."
after_extract:
@@ -886,8 +976,9 @@ properties:
items:
type: string
description: |
List of one or more shell commands or scripts to execute after
extracting a backup, run once per repository.
Deprecated. Use "commands:" instead. List of one or more shell
commands or scripts to execute after extracting a backup, run once
per repository.
example:
- "echo Finished extracting."
after_actions:
@@ -895,8 +986,9 @@ properties:
items:
type: string
description: |
List of one or more shell commands or scripts to execute after all
actions for each repository.
Deprecated. Use "commands:" instead. List of one or more shell
commands or scripts to execute after all actions for each
repository.
example:
- "echo Finished actions."
on_error:
@@ -904,9 +996,10 @@ properties:
items:
type: string
description: |
List of one or more shell commands or scripts to execute when an
exception occurs during a "create", "prune", "compact", or "check"
action or an associated before/after hook.
Deprecated. Use "commands:" instead. List of one or more shell
commands or scripts to execute when an exception occurs during a
"create", "prune", "compact", or "check" action or an associated
before/after hook.
example:
- "echo Error during create/prune/compact/check."
before_everything:
@@ -914,10 +1007,10 @@ properties:
items:
type: string
description: |
List of one or more shell commands or scripts to execute before
running all actions (if one of them is "create"). These are
collected from all configuration files and then run once before all
of them (prior to all actions).
Deprecated. Use "commands:" instead. List of one or more shell
commands or scripts to execute before running all actions (if one of
them is "create"). These are collected from all configuration files
and then run once before all of them (prior to all actions).
example:
- "echo Starting actions."
after_everything:
@@ -925,14 +1018,157 @@ properties:
items:
type: string
description: |
List of one or more shell commands or scripts to execute after
running all actions (if one of them is "create"). These are
collected from all configuration files and then run once after all
of them (after any action).
Deprecated. Use "commands:" instead. List of one or more shell
commands or scripts to execute after running all actions (if one of
them is "create"). These are collected from all configuration files
and then run once after all of them (after any action).
example:
- "echo Completed actions."
commands:
type: array
items:
type: object
oneOf:
- required: [before, run]
additionalProperties: false
properties:
before:
type: string
enum:
- action
- repository
- configuration
- everything
description: |
Name for the point in borgmatic's execution that
the commands should be run before (required if
"after" isn't set):
* "action" runs before each action for each
repository.
* "repository" runs before all actions for each
repository.
* "configuration" runs before all actions and
repositories in the current configuration file.
* "everything" runs before all configuration
files.
example: action
when:
type: array
items:
type: string
enum:
- repo-create
- transfer
- prune
- compact
- create
- recreate
- check
- delete
- extract
- config
- export-tar
- mount
- umount
- repo-delete
- restore
- repo-list
- list
- repo-info
- info
- break-lock
- key
- borg
description: |
List of actions for which the commands will be
run. Defaults to running for all actions.
example: [create, prune, compact, check]
run:
type: array
items:
type: string
description: |
List of one or more shell commands or scripts to
run when this command hook is triggered. Required.
example:
- "echo Doing stuff."
- required: [after, run]
additionalProperties: false
properties:
after:
type: string
enum:
- action
- repository
- configuration
- everything
- error
description: |
Name for the point in borgmatic's execution that
the commands should be run after (required if
"before" isn't set):
* "action" runs after each action for each
repository.
* "repository" runs after all actions for each
repository.
* "configuration" runs after all actions and
repositories in the current configuration file.
* "everything" runs after all configuration
files.
* "error" runs after an error occurs.
example: action
when:
type: array
items:
type: string
enum:
- repo-create
- transfer
- prune
- compact
- create
- recreate
- check
- delete
- extract
- config
- export-tar
- mount
- umount
- repo-delete
- restore
- repo-list
- list
- repo-info
- info
- break-lock
- key
- borg
description: |
Only trigger the hook when borgmatic is run with
particular actions listed here. Defaults to
running for all actions.
example: [create, prune, compact, check]
run:
type: array
items:
type: string
description: |
List of one or more shell commands or scripts to
run when this command hook is triggered. Required.
example:
- "echo Doing stuff."
description: |
List of one or more command hooks to execute, triggered at
particular points during borgmatic's execution. For each command
hook, specify one of "before" or "after", not both.
example:
- before: action
when: [create]
run: [echo Backing up.]
bootstrap:
type: object
additionalProperties: false
properties:
store_config_files:
type: boolean
@@ -1088,11 +1324,11 @@ properties:
Command to use instead of "pg_dump" or "pg_dumpall".
This can be used to run a specific pg_dump version
(e.g., one inside a running container). If you run it
from within a container, make sure to mount your
host's ".borgmatic" folder into the container using
the same directory structure. Defaults to "pg_dump"
for single database dump or "pg_dumpall" to dump all
databases.
from within a container, make sure to mount the path in
the "user_runtime_directory" option from the host into
the container at the same location. Defaults to
"pg_dump" for single database dump or "pg_dumpall" to
dump all databases.
example: docker exec my_pg_container pg_dump
pg_restore_command:
type: string
@@ -1145,6 +1381,9 @@ properties:
https://www.postgresql.org/docs/current/app-pgdump.html and
https://www.postgresql.org/docs/current/libpq-ssl.html for
details.
example:
- name: users
hostname: database.example.org
mariadb_databases:
type: array
items:
@@ -1229,10 +1468,11 @@ properties:
description: |
Command to use instead of "mariadb-dump". This can be
used to run a specific mariadb_dump version (e.g., one
inside a running container). If you run it from within
a container, make sure to mount your host's
".borgmatic" folder into the container using the same
directory structure. Defaults to "mariadb-dump".
inside a running container). If you run it from within a
container, make sure to mount the path in the
"user_runtime_directory" option from the host into the
container at the same location. Defaults to
"mariadb-dump".
example: docker exec mariadb_container mariadb-dump
mariadb_command:
type: string
@@ -1289,6 +1529,9 @@ properties:
added to your source directories at runtime and streamed directly
to Borg. Requires mariadb-dump/mariadb commands. See
https://mariadb.com/kb/en/library/mysqldump/ for details.
example:
- name: users
hostname: database.example.org
mysql_databases:
type: array
items:
@@ -1371,12 +1614,12 @@ properties:
mysql_dump_command:
type: string
description: |
Command to use instead of "mysqldump". This can be
used to run a specific mysql_dump version (e.g., one
inside a running container). If you run it from within
a container, make sure to mount your host's
".borgmatic" folder into the container using the same
directory structure. Defaults to "mysqldump".
Command to use instead of "mysqldump". This can be used
to run a specific mysql_dump version (e.g., one inside a
running container). If you run it from within a
container, make sure to mount the path in the
"user_runtime_directory" option from the host into the
container at the same location. Defaults to "mysqldump".
example: docker exec mysql_container mysqldump
mysql_command:
type: string
@@ -1434,6 +1677,9 @@ properties:
to Borg. Requires mysqldump/mysql commands. See
https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html for
details.
example:
- name: users
hostname: database.example.org
sqlite_databases:
type: array
items:
@@ -1463,6 +1709,33 @@ properties:
Path to the SQLite database file to restore to. Defaults
to the "path" option.
example: /var/lib/sqlite/users.db
sqlite_command:
type: string
description: |
Command to use instead of "sqlite3". This can be used to
run a specific sqlite3 version (e.g., one inside a
running container). If you run it from within a
container, make sure to mount the path in the
"user_runtime_directory" option from the host into the
container at the same location. Defaults to "sqlite3".
example: docker exec sqlite_container sqlite3
sqlite_restore_command:
type: string
description: |
Command to run when restoring a database instead
of "sqlite3". This can be used to run a specific
sqlite3 version (e.g., one inside a running container).
Defaults to "sqlite3".
example: docker exec sqlite_container sqlite3
description: |
List of one or more SQLite databases to dump before creating a
backup, run once per configuration file. The database dumps are
added to your source directories at runtime and streamed directly to
Borg. Requires the sqlite3 command. See https://sqlite.org/cli.html
for details.
example:
- name: users
path: /var/lib/db.sqlite
mongodb_databases:
type: array
items:
@@ -1558,6 +1831,25 @@ properties:
dump command, without performing any validation on them.
See mongorestore documentation for details.
example: --restoreDbUsersAndRoles
mongodump_command:
type: string
description: |
Command to use instead of "mongodump". This can be used
to run a specific mongodump version (e.g., one inside a
running container). If you run it from within a
container, make sure to mount the path in the
"user_runtime_directory" option from the host into the
container at the same location. Defaults to
"mongodump".
example: docker exec mongodb_container mongodump
mongorestore_command:
type: string
description: |
Command to run when restoring a database instead of
"mongorestore". This can be used to run a specific
mongorestore version (e.g., one inside a running
container). Defaults to "mongorestore".
example: docker exec mongodb_container mongorestore
description: |
List of one or more MongoDB databases to dump before creating a
backup, run once per configuration file. The database dumps are
@@ -1565,6 +1857,9 @@ properties:
to Borg. Requires mongodump/mongorestore commands. See
https://docs.mongodb.com/database-tools/mongodump/ and
https://docs.mongodb.com/database-tools/mongorestore/ for details.
example:
- name: users
hostname: database.example.org
ntfy:
type: object
required: ['topic']
@@ -1601,6 +1896,7 @@ properties:
example: tk_AgQdq7mVBoFD37zQVN29RhuMzNIz2
start:
type: object
additionalProperties: false
properties:
title:
type: string
@@ -1624,6 +1920,7 @@ properties:
example: incoming_envelope
finish:
type: object
additionalProperties: false
properties:
title:
type: string
@@ -1647,6 +1944,7 @@ properties:
example: incoming_envelope
fail:
type: object
additionalProperties: false
properties:
title:
type: string
@@ -1705,6 +2003,7 @@ properties:
example: hwRwoWsXMBWwgrSecfa9EfPey55WSN
start:
type: object
additionalProperties: false
properties:
message:
type: string
@@ -1744,8 +2043,8 @@ properties:
type: boolean
description: |
Set to True to enable HTML parsing of the message.
Set to False for plain text.
example: True
Set to false for plain text.
example: true
sound:
type: string
description: |
@@ -1780,6 +2079,7 @@ properties:
example: Pushover Link
finish:
type: object
additionalProperties: false
properties:
message:
type: string
@@ -1819,8 +2119,8 @@ properties:
type: boolean
description: |
Set to True to enable HTML parsing of the message.
Set to False for plain text.
example: True
Set to false for plain text.
example: true
sound:
type: string
description: |
@@ -1855,6 +2155,7 @@ properties:
example: Pushover Link
fail:
type: object
additionalProperties: false
properties:
message:
type: string
@@ -1894,8 +2195,8 @@ properties:
type: boolean
description: |
Set to True to enable HTML parsing of the message.
Set to False for plain text.
example: True
Set to false for plain text.
example: true
sound:
type: string
description: |
@@ -1994,6 +2295,7 @@ properties:
example: fakekey
start:
type: object
additionalProperties: false
properties:
value:
type: ["integer", "string"]
@@ -2002,6 +2304,7 @@ properties:
example: STARTED
finish:
type: object
additionalProperties: false
properties:
value:
type: ["integer", "string"]
@@ -2010,6 +2313,7 @@ properties:
example: FINISH
fail:
type: object
additionalProperties: false
properties:
value:
type: ["integer", "string"]
@@ -2041,15 +2345,20 @@ properties:
type: array
items:
type: object
additionalProperties: false
required:
- url
- label
properties:
url:
type: string
description: URL of this Apprise service.
example: "gotify://hostname/token"
label:
type: string
description: |
Label used in borgmatic logs for this Apprise
service.
example: gotify
description: |
A list of Apprise services to publish to with URLs and
@@ -2064,7 +2373,7 @@ properties:
send_logs:
type: boolean
description: |
Send borgmatic logs to Apprise services as part the
Send borgmatic logs to Apprise services as part of the
"finish", "fail", and "log" states. Defaults to true.
example: false
logs_size_limit:
@@ -2077,6 +2386,7 @@ properties:
start:
type: object
required: ['body']
additionalProperties: false
properties:
title:
type: string
@@ -2092,6 +2402,7 @@ properties:
finish:
type: object
required: ['body']
additionalProperties: false
properties:
title:
type: string
@@ -2107,6 +2418,7 @@ properties:
fail:
type: object
required: ['body']
additionalProperties: false
properties:
title:
type: string
@@ -2122,6 +2434,7 @@ properties:
log:
type: object
required: ['body']
additionalProperties: false
properties:
title:
type: string
@@ -2175,7 +2488,7 @@ properties:
send_logs:
type: boolean
description: |
Send borgmatic logs to Healthchecks as part the "finish",
Send borgmatic logs to Healthchecks as part of the "finish",
"fail", and "log" states. Defaults to true.
example: false
ping_body_limit:
@@ -2279,6 +2592,12 @@ properties:
PagerDuty integration key used to notify PagerDuty when a
backup errors. Supports the "{credential ...}" syntax.
example: a177cad45bd374409f78906a810a3074
send_logs:
type: boolean
description: |
Send borgmatic logs to PagerDuty when a backup errors.
Defaults to true.
example: false
description: |
Configuration for a monitoring integration with PagerDuty. Create an
account at https://www.pagerduty.com if you'd like to use this
@@ -2471,5 +2790,27 @@ properties:
description: |
Command to use instead of "keepassxc-cli".
example: /usr/local/bin/keepassxc-cli
key_file:
type: string
description: |
Path to a key file for unlocking the KeePassXC database.
example: /path/to/keyfile
yubikey:
type: string
description: |
YubiKey slot and optional serial number used to access the
KeePassXC database. The format is "<slot[:serial]>", where:
* <slot> is the YubiKey slot number (e.g., `1` or `2`).
* <serial> (optional) is the YubiKey's serial number (e.g.,
`7370001`).
example: "1:7370001"
description: |
Configuration for integration with the KeePassXC password manager.
default_actions:
type: boolean
description: |
Whether to apply default actions (e.g., backup) when no arguments
are supplied to the borgmatic command. If set to true, borgmatic
triggers the default actions (create, prune, compact and check). If
set to false, borgmatic displays the help message instead.
example: true

View File

@@ -4,7 +4,7 @@ import os
import jsonschema
import ruamel.yaml
import borgmatic.config
import borgmatic.config.arguments
from borgmatic.config import constants, environment, load, normalize, override
@@ -21,6 +21,18 @@ def schema_filename():
return schema_path
def load_schema(schema_path): # pragma: no cover
'''
Given a schema filename path, load the schema and return it as a dict.
Raise Validation_error if the schema could not be parsed.
'''
try:
return load.load_configuration(schema_path)
except (ruamel.yaml.error.YAMLError, RecursionError) as error:
raise Validation_error(schema_path, (str(error),))
def format_json_error_path_element(path_element):
'''
Given a path element into a JSON data structure, format it for display as a string.
@@ -84,13 +96,17 @@ def apply_logical_validation(config_filename, parsed_configuration):
)
def parse_configuration(config_filename, schema_filename, overrides=None, resolve_env=True):
def parse_configuration(
config_filename, schema_filename, arguments, overrides=None, resolve_env=True
):
'''
Given the path to a config filename in YAML format, the path to a schema filename in a YAML
rendition of JSON Schema format, a sequence of configuration file override strings in the form
of "option.suboption=value", and whether to resolve environment variables, return the parsed
configuration as a data structure of nested dicts and lists corresponding to the schema. Example
return value:
rendition of JSON Schema format, arguments as dict from action name to argparse.Namespace, a
sequence of configuration file override strings in the form of "option.suboption=value", and
whether to resolve environment variables, return the parsed configuration as a data structure of
nested dicts and lists corresponding to the schema.
Example return value:
{
'source_directories': ['/home', '/etc'],
@@ -113,6 +129,7 @@ def parse_configuration(config_filename, schema_filename, overrides=None, resolv
except (ruamel.yaml.error.YAMLError, RecursionError) as error:
raise Validation_error(config_filename, (str(error),))
borgmatic.config.arguments.apply_arguments_to_config(config, schema, arguments)
override.apply_overrides(config, schema, overrides)
constants.apply_constants(config, config.get('constants') if config else {})
@@ -138,16 +155,22 @@ def parse_configuration(config_filename, schema_filename, overrides=None, resolv
return config, config_paths, logs
def normalize_repository_path(repository):
def normalize_repository_path(repository, base=None):
'''
Given a repository path, return the absolute path of it (for local repositories).
Optionally, resolve relative paths against a base path, e.g. the configured working directory.
'''
# A colon in the repository could mean that it's either a file:// URL or a remote repository.
# If it's a remote repository, we don't want to normalize it. If it's a file:// URL, we do.
if ':' not in repository:
return os.path.abspath(repository)
return (
os.path.abspath(os.path.join(base, repository)) if base else os.path.abspath(repository)
)
elif repository.startswith('file://'):
return os.path.abspath(repository.partition('file://')[-1])
local_path = repository.partition('file://')[-1]
return (
os.path.abspath(os.path.join(base, local_path)) if base else os.path.abspath(local_path)
)
else:
return repository
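The path-normalization rules above can be exercised as a standalone sketch (a simplified re-implementation for illustration, not borgmatic's actual module):

```python
import os

def normalize_repository_path(repository, base=None):
    # No colon: a plain local path. Resolve it, optionally against a base directory.
    if ':' not in repository:
        return os.path.abspath(os.path.join(base, repository)) if base else os.path.abspath(repository)
    # A file:// URL is still local; strip the scheme and resolve the remainder.
    if repository.startswith('file://'):
        local_path = repository.partition('file://')[-1]
        return os.path.abspath(os.path.join(base, local_path)) if base else os.path.abspath(local_path)
    # Anything else containing a colon (e.g. ssh://user@host/repo) is remote; leave it untouched.
    return repository
```

With a base of `/backups`, a relative `repo` resolves to `/backups/repo`, while remote repository URLs pass through unchanged.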

View File

@@ -2,9 +2,11 @@ import logging
import os
import re
import shlex
import subprocess
import sys
import borgmatic.execute
import borgmatic.logger
logger = logging.getLogger(__name__)
@@ -44,54 +46,184 @@ def make_environment(current_environment, sys_module=sys):
return environment
def execute_hook(commands, umask, config_filename, description, dry_run, **context):
def filter_hooks(command_hooks, before=None, after=None, hook_name=None, action_names=None):
'''
Given a list of hook commands to execute, a umask to execute with (or None), a config filename,
a hook description, and whether this is a dry run, run the given commands. Or, don't run them
if this is a dry run.
Given a sequence of command hook dicts from configuration and one or more filters (before name,
after name, calling hook name, or a sequence of action names), filter down the command hooks to
just the ones that match the given filters.
'''
return tuple(
hook_config
for hook_config in command_hooks or ()
for config_action_names in (hook_config.get('when'),)
if before is None or hook_config.get('before') == before
if after is None or hook_config.get('after') == after
if action_names is None
or config_action_names is None
or set(config_action_names or ()).intersection(set(action_names))
)
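Because the generator expression above filters purely on dict keys, its behavior is easy to demonstrate standalone. This sketch mirrors the snippet shown (the `hook_name` parameter is accepted but, as in the snippet, not applied), using hypothetical hook dicts:

```python
def filter_hooks(command_hooks, before=None, after=None, hook_name=None, action_names=None):
    # Keep only the hooks matching every given filter; a hook without a "when"
    # list matches any action.
    return tuple(
        hook_config
        for hook_config in command_hooks or ()
        for config_action_names in (hook_config.get('when'),)
        if before is None or hook_config.get('before') == before
        if after is None or hook_config.get('after') == after
        if action_names is None
        or config_action_names is None
        or set(config_action_names or ()).intersection(set(action_names))
    )

# Hypothetical configured hooks for illustration.
hooks = (
    {'before': 'action', 'when': ['create'], 'run': ['echo before-create']},
    {'after': 'action', 'when': ['prune'], 'run': ['echo after-prune']},
    {'after': 'error', 'run': ['echo failed']},  # No "when": matches any action.
)
```

Filtering with `before='action', action_names=('create',)` selects only the first hook; `after='error'` selects only the last.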
def execute_hooks(command_hooks, umask, working_directory, dry_run, **context):
'''
Given a sequence of command hook dicts from configuration, a umask to execute with (or None), a
working directory to execute with, and whether this is a dry run, run the commands for each
hook. Or don't run them if this is a dry run.
The context contains optional values interpolated by name into the hook commands.
Raise ValueError if the umask cannot be parsed.
Raise ValueError if the umask cannot be parsed or a hook is invalid.
Raise subprocess.CalledProcessError if an error occurs in a hook.
'''
if not commands:
logger.debug(f'No commands to run for {description} hook')
return
borgmatic.logger.add_custom_log_levels()
dry_run_label = ' (dry run; not actually running hooks)' if dry_run else ''
context['configuration_filename'] = config_filename
commands = [interpolate_context(description, command, context) for command in commands]
for hook_config in command_hooks:
commands = hook_config.get('run')
if len(commands) == 1:
logger.info(f'Running command for {description} hook{dry_run_label}')
else:
logger.info(
f'Running {len(commands)} commands for {description} hook{dry_run_label}',
)
if 'before' in hook_config:
description = f'before {hook_config.get("before")}'
elif 'after' in hook_config:
description = f'after {hook_config.get("after")}'
else:
raise ValueError(f'Invalid hook configuration: {hook_config}')
if umask:
parsed_umask = int(str(umask), 8)
logger.debug(f'Set hook umask to {oct(parsed_umask)}')
original_umask = os.umask(parsed_umask)
else:
original_umask = None
if not commands:
logger.debug(f'No commands to run for {description} hook')
continue
try:
for command in commands:
if dry_run:
continue
commands = [interpolate_context(description, command, context) for command in commands]
borgmatic.execute.execute_command(
[command],
output_log_level=(logging.ERROR if description == 'on-error' else logging.WARNING),
shell=True,
environment=make_environment(os.environ),
if len(commands) == 1:
logger.info(f'Running {description} command hook{dry_run_label}')
else:
logger.info(
f'Running {len(commands)} commands for {description} hook{dry_run_label}',
)
finally:
if original_umask:
os.umask(original_umask)
if umask:
parsed_umask = int(str(umask), 8)
logger.debug(f'Setting hook umask to {oct(parsed_umask)}')
original_umask = os.umask(parsed_umask)
else:
original_umask = None
try:
for command in commands:
if dry_run:
continue
borgmatic.execute.execute_command(
[command],
output_log_level=(
logging.ERROR if hook_config.get('after') == 'error' else logging.ANSWER
),
shell=True,
environment=make_environment(os.environ),
working_directory=working_directory,
)
finally:
if original_umask:
os.umask(original_umask)
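The umask handling above parses the configured value as octal before applying it, which is why a configured `77` or `'077'` both mean "owner-only permissions". A minimal sketch of that parsing step:

```python
def parse_umask(umask):
    # Interpret the configured umask as base-8, whether it arrives as an int or
    # a string; invalid octal digits raise ValueError, matching the docstring above.
    return int(str(umask), 8)
```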
class Before_after_hooks:
'''
A Python context manager for executing command hooks both before and after the wrapped code.
Example use as a context manager:
with borgmatic.hooks.command.Before_after_hooks(
command_hooks=config.get('commands'),
before_after='do_stuff',
umask=config.get('umask'),
dry_run=dry_run,
hook_name='myhook',
):
do()
some()
stuff()
With that context manager in place, "before" command hooks execute before the wrapped code runs,
and "after" command hooks execute after the wrapped code completes.
'''
def __init__(
self,
command_hooks,
before_after,
umask,
working_directory,
dry_run,
hook_name=None,
action_names=None,
**context,
):
'''
Given a sequence of command hook configuration dicts, the before/after name, a umask to run
commands with, a working directory to run commands with, a dry run flag, the name of the
calling hook, a sequence of action names, and any context for the executed commands, save
those data points for use below.
'''
self.command_hooks = command_hooks
self.before_after = before_after
self.umask = umask
self.working_directory = working_directory
self.dry_run = dry_run
self.hook_name = hook_name
self.action_names = action_names
self.context = context
def __enter__(self):
'''
Run the configured "before" command hooks that match the initialized data points.
'''
try:
execute_hooks(
borgmatic.hooks.command.filter_hooks(
self.command_hooks,
before=self.before_after,
hook_name=self.hook_name,
action_names=self.action_names,
),
self.umask,
self.working_directory,
self.dry_run,
**self.context,
)
except (OSError, subprocess.CalledProcessError) as error:
if considered_soft_failure(error):
return
# Trigger the after hook manually, since raising here will prevent it from being run
# otherwise.
self.__exit__(None, None, None)
raise ValueError(f'Error running before {self.before_after} hook: {error}')
def __exit__(self, exception_type, exception, traceback):
'''
Run the configured "after" command hooks that match the initialized data points.
'''
try:
execute_hooks(
borgmatic.hooks.command.filter_hooks(
self.command_hooks,
after=self.before_after,
hook_name=self.hook_name,
action_names=self.action_names,
),
self.umask,
self.working_directory,
self.dry_run,
**self.context,
)
except (OSError, subprocess.CalledProcessError) as error:
if considered_soft_failure(error):
return
raise ValueError(f'Error running after {self.before_after} hook: {error}')
def considered_soft_failure(error):

View File

@@ -19,9 +19,11 @@ def load_credential(hook_config, config, credential_parameters):
raise ValueError(f'Cannot load invalid credential: "{name}"')
expanded_credential_path = os.path.expanduser(credential_path)
try:
with open(
os.path.join(config.get('working_directory', ''), credential_path)
os.path.join(config.get('working_directory', ''), expanded_credential_path)
) as credential_file:
return credential_file.read().rstrip(os.linesep)
except (FileNotFoundError, OSError) as error:

View File

@@ -11,32 +11,35 @@ def load_credential(hook_config, config, credential_parameters):
'''
Given the hook configuration dict, the configuration dict, and a credential parameters tuple
containing a KeePassXC database path and an attribute name to load, run keepassxc-cli to fetch
the corresponidng KeePassXC credential and return it.
the corresponding KeePassXC credential and return it.
Raise ValueError if keepassxc-cli can't retrieve the credential.
'''
try:
(database_path, attribute_name) = credential_parameters
except ValueError:
path_and_name = ' '.join(credential_parameters)
raise ValueError(f'Invalid KeePassXC credential parameters: {credential_parameters}')
raise ValueError(
f'Cannot load credential with invalid KeePassXC database path and attribute name: "{path_and_name}"'
)
expanded_database_path = os.path.expanduser(database_path)
if not os.path.exists(database_path):
raise ValueError(
f'Cannot load credential because KeePassXC database path does not exist: {database_path}'
)
if not os.path.exists(expanded_database_path):
raise ValueError(f'KeePassXC database path does not exist: {database_path}')
return borgmatic.execute.execute_command_and_capture_output(
# Build the keepassxc-cli command.
command = (
tuple(shlex.split((hook_config or {}).get('keepassxc_cli_command', 'keepassxc-cli')))
+ ('show', '--show-protected', '--attributes', 'Password')
+ (
'show',
'--show-protected',
'--attributes',
'Password',
database_path,
attribute_name,
('--key-file', hook_config['key_file'])
if hook_config and hook_config.get('key_file')
else ()
)
).rstrip(os.linesep)
+ (
('--yubikey', hook_config['yubikey'])
if hook_config and hook_config.get('yubikey')
else ()
)
+ (expanded_database_path, attribute_name) # Ensure database and entry are last.
)
return borgmatic.execute.execute_command_and_capture_output(command).rstrip(os.linesep)
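The command construction above appends `--key-file` and `--yubikey` only when configured: each optional flag is an empty tuple when absent, so the concatenation simply skips it. A standalone sketch of that pattern (taking a hypothetical `hook_config` dict):

```python
import shlex

def build_keepassxc_command(hook_config, database_path, attribute_name):
    # Simplified sketch of the command built above: optional flags collapse to
    # empty tuples when unconfigured, and the database/entry arguments go last.
    return (
        tuple(shlex.split((hook_config or {}).get('keepassxc_cli_command', 'keepassxc-cli')))
        + ('show', '--show-protected', '--attributes', 'Password')
        + (('--key-file', hook_config['key_file']) if hook_config and hook_config.get('key_file') else ())
        + (('--yubikey', hook_config['yubikey']) if hook_config and hook_config.get('yubikey') else ())
        + (database_path, attribute_name)
    )
```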

View File

@@ -5,7 +5,7 @@ import re
logger = logging.getLogger(__name__)
CREDENTIAL_NAME_PATTERN = re.compile(r'^\w+$')
CREDENTIAL_NAME_PATTERN = re.compile(r'^[\w.-]+$')
def load_credential(hook_config, config, credential_parameters):
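The widened pattern accepts dots and dashes in credential names in addition to word characters, while still rejecting anything that could escape into a path:

```python
import re

# The updated pattern: word characters plus "." and "-" (the old pattern was r'^\w+$').
CREDENTIAL_NAME_PATTERN = re.compile(r'^[\w.-]+$')
```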

View File

@@ -48,6 +48,47 @@ def get_subvolume_mount_points(findmnt_command):
Subvolume = collections.namedtuple('Subvolume', ('path', 'contained_patterns'), defaults=((),))
def get_subvolume_property(btrfs_command, subvolume_path, property_name):
'''
Given a Btrfs command to run, a subvolume path, and a property name, get the property's value
for that subvolume and return it, converting the strings "true" and "false" to booleans.
'''
output = borgmatic.execute.execute_command_and_capture_output(
tuple(btrfs_command.split(' '))
+ (
'property',
'get',
'-t', # Type.
'subvol',
subvolume_path,
property_name,
),
)
try:
value = output.strip().split('=')[1]
except IndexError:
raise ValueError(f'Invalid {btrfs_command} property output')
return {
'true': True,
'false': False,
}.get(value, value)
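The value parsing above splits the `btrfs property get` output on "=" and maps the strings "true"/"false" to booleans. Sketched standalone against sample output strings (no live `btrfs` call):

```python
def parse_property_output(output):
    # e.g. "ro=true" -> True, "ro=false" -> False, "label=backups" -> "backups".
    try:
        value = output.strip().split('=')[1]
    except IndexError:
        raise ValueError('Invalid btrfs property output')
    return {'true': True, 'false': False}.get(value, value)
```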
def omit_read_only_subvolume_mount_points(btrfs_command, subvolume_paths):
'''
Given a Btrfs command to run and a sequence of Btrfs subvolume mount points, filter them down to
just those that are read-write. The idea is that Btrfs can't actually snapshot a read-only
subvolume, so we should just ignore them.
'''
retained_subvolume_paths = []
for subvolume_path in subvolume_paths:
if get_subvolume_property(btrfs_command, subvolume_path, 'ro'):
logger.debug(f'Ignoring Btrfs subvolume {subvolume_path} because it is read-only')
else:
retained_subvolume_paths.append(subvolume_path)
return tuple(retained_subvolume_paths)
def get_subvolumes(btrfs_command, findmnt_command, patterns=None):
'''
Given a Btrfs command to run and a sequence of configured patterns, find the intersection
@@ -67,7 +108,11 @@ def get_subvolumes(btrfs_command, findmnt_command, patterns=None):
# backup. Sort the subvolumes from longest to shortest mount points, so longer mount points get
# a whack at the candidate pattern piñata before their parents do. (Patterns are consumed during
# this process, so no two subvolumes end up with the same contained patterns.)
for mount_point in reversed(get_subvolume_mount_points(findmnt_command)):
for mount_point in reversed(
omit_read_only_subvolume_mount_points(
btrfs_command, get_subvolume_mount_points(findmnt_command)
)
):
subvolumes.extend(
Subvolume(mount_point, contained_patterns)
for contained_patterns in (

View File

@@ -65,10 +65,12 @@ def make_defaults_file_options(username=None, password=None, defaults_extra_file
Do not use the returned value for multiple different command invocations. That will not work
because each pipe is "used up" once read.
'''
escaped_password = None if password is None else password.replace('\\', '\\\\')
values = '\n'.join(
(
(f'user={username}' if username is not None else ''),
(f'password={password}' if password is not None else ''),
(f'password="{escaped_password}"' if escaped_password is not None else ''),
)
).strip()
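The change above doubles backslashes and quotes the password so MySQL's option-file parser reads it literally. The value construction can be sketched as:

```python
def make_defaults_file_values(username=None, password=None):
    # Per MySQL option-file rules, backslashes must be escaped and the password
    # quoted so special characters survive parsing.
    escaped_password = None if password is None else password.replace('\\', '\\\\')
    return '\n'.join(
        (
            (f'user={username}' if username is not None else ''),
            (f'password="{escaped_password}"' if escaped_password is not None else ''),
        )
    ).strip()
```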

View File

@@ -53,6 +53,7 @@ def dump_data_sources(
logger.info(f'Dumping MongoDB databases{dry_run_label}')
processes = []
for database in databases:
name = database['name']
dump_filename = dump.make_data_source_dump_filename(
@@ -113,14 +114,17 @@ def make_password_config_file(password):
def build_dump_command(database, config, dump_filename, dump_format):
'''
Return the mongodump command from a single database configuration.
Return the custom mongodump_command from a single database configuration.
'''
all_databases = database['name'] == 'all'
password = borgmatic.hooks.credential.parse.resolve_credential(database.get('password'), config)
dump_command = tuple(
shlex.quote(part) for part in shlex.split(database.get('mongodump_command') or 'mongodump')
)
return (
('mongodump',)
dump_command
+ (('--out', shlex.quote(dump_filename)) if dump_format == 'directory' else ())
+ (('--host', shlex.quote(database['hostname'])) if 'hostname' in database else ())
+ (('--port', shlex.quote(str(database['port']))) if 'port' in database else ())
@@ -229,7 +233,7 @@ def restore_data_source_dump(
def build_restore_command(extract_process, database, config, dump_filename, connection_params):
'''
Return the mongorestore command from a single database configuration.
Return the custom mongorestore_command from a single database configuration.
'''
hostname = connection_params['hostname'] or database.get(
'restore_hostname', database.get('hostname')
@@ -250,7 +254,10 @@ def build_restore_command(extract_process, database, config, dump_filename, conn
config,
)
command = ['mongorestore']
command = list(
shlex.quote(part)
for part in shlex.split(database.get('mongorestore_command') or 'mongorestore')
)
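Both dump and restore now split a user-configurable command string with shlex and quote each part, instead of hard-coding `mongodump`/`mongorestore`. A sketch of that pattern for the dump side (with a hypothetical `database` dict):

```python
import shlex

def build_dump_command_prefix(database):
    # Split the configured command (e.g. "docker exec mongo mongodump") into
    # parts, quoting each one; fall back to plain "mongodump" when unconfigured.
    return tuple(
        shlex.quote(part)
        for part in shlex.split(database.get('mongodump_command') or 'mongodump')
    )
```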
if extract_process:
command.append('--archive')
else:

View File

@@ -1,3 +1,4 @@
import os
import pathlib
IS_A_HOOK = False
@@ -11,6 +12,10 @@ def get_contained_patterns(parent_directory, candidate_patterns):
paths, but there's a parent directory (logical volume, dataset, subvolume, etc.) at /var, then
/var is what we want to snapshot.
If a parent directory and a candidate pattern are on different devices, skip the pattern. That's
because any snapshot of a parent directory won't actually include "contained" directories if
they reside on separate devices.
For this function to work, a candidate pattern path can't have any globs or other non-literal
characters in the initial portion of the path that matches the parent directory. For instance, a
parent directory of /var would match a candidate pattern path of /var/log/*/data, but not a
@@ -27,6 +32,8 @@ def get_contained_patterns(parent_directory, candidate_patterns):
if not candidate_patterns:
return ()
parent_device = os.stat(parent_directory).st_dev if os.path.exists(parent_directory) else None
contained_patterns = tuple(
candidate
for candidate in candidate_patterns
@@ -35,6 +42,7 @@ def get_contained_patterns(parent_directory, candidate_patterns):
pathlib.PurePath(parent_directory) == candidate_path
or pathlib.PurePath(parent_directory) in candidate_path.parents
)
if candidate.device == parent_device
)
candidate_patterns -= set(contained_patterns)

View File

@@ -71,13 +71,16 @@ def dump_data_sources(
)
continue
command = (
'sqlite3',
sqlite_command = tuple(
shlex.quote(part) for part in shlex.split(database.get('sqlite_command') or 'sqlite3')
)
command = sqlite_command + (
shlex.quote(database_path),
'.dump',
'>',
shlex.quote(dump_filename),
)
logger.debug(
f'Dumping SQLite database at {database_path} to {dump_filename}{dry_run_label}'
)
@@ -160,11 +163,11 @@ def restore_data_source_dump(
except FileNotFoundError: # pragma: no cover
pass
restore_command = (
'sqlite3',
database_path,
sqlite_restore_command = tuple(
shlex.quote(part)
for part in shlex.split(data_source.get('sqlite_restore_command') or 'sqlite3')
)
restore_command = sqlite_restore_command + (shlex.quote(database_path),)
# Don't give Borg local path so as to error on warnings, as "borg extract" only gives a warning
# if the restore paths don't exist in the archive.
execute_command_with_processes(

View File

@@ -3,6 +3,7 @@ import importlib
import logging
import pkgutil
import borgmatic.hooks.command
import borgmatic.hooks.credential
import borgmatic.hooks.data_source
import borgmatic.hooks.monitoring

View File

@@ -28,7 +28,7 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_lev
filename in any log entries. If this is a dry run, then don't actually ping anything.
'''
if state not in MONITOR_STATE_TO_CRONHUB:
logger.debug(f'Ignoring unsupported monitoring {state.name.lower()} in Cronhub hook')
logger.debug(f'Ignoring unsupported monitoring state {state.name.lower()} in Cronhub hook')
return
dry_run_label = ' (dry run; not actually pinging)' if dry_run else ''

View File

@@ -28,7 +28,7 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_lev
filename in any log entries. If this is a dry run, then don't actually ping anything.
'''
if state not in MONITOR_STATE_TO_CRONITOR:
logger.debug(f'Ignoring unsupported monitoring {state.name.lower()} in Cronitor hook')
logger.debug(f'Ignoring unsupported monitoring state {state.name.lower()} in Cronitor hook')
return
dry_run_label = ' (dry run; not actually pinging)' if dry_run else ''

View File

@@ -64,7 +64,7 @@ def get_handler(identifier):
def format_buffered_logs_for_payload(identifier):
'''
Get the handler previously added to the root logger, and slurp buffered logs out of it to
send to Healthchecks.
send to the monitoring service.
'''
try:
buffering_handler = get_handler(identifier)

View File

@@ -6,20 +6,36 @@ import platform
import requests
import borgmatic.hooks.credential.parse
import borgmatic.hooks.monitoring.logs
from borgmatic.hooks.monitoring import monitor
logger = logging.getLogger(__name__)
EVENTS_API_URL = 'https://events.pagerduty.com/v2/enqueue'
DEFAULT_LOGS_PAYLOAD_LIMIT_BYTES = 10000
HANDLER_IDENTIFIER = 'pagerduty'
def initialize_monitor(
integration_key, config, config_filename, monitoring_log_level, dry_run
): # pragma: no cover
def initialize_monitor(hook_config, config, config_filename, monitoring_log_level, dry_run):
'''
No initialization is necessary for this monitor.
Add a handler to the root logger that stores in memory the most recent logs emitted. That way,
we can send them all to PagerDuty upon a failure state. But skip this if the "send_logs" option
is false.
'''
pass
if hook_config.get('send_logs') is False:
return
ping_body_limit = max(
DEFAULT_LOGS_PAYLOAD_LIMIT_BYTES
- len(borgmatic.hooks.monitoring.logs.PAYLOAD_TRUNCATION_INDICATOR),
0,
)
borgmatic.hooks.monitoring.logs.add_handler(
borgmatic.hooks.monitoring.logs.Forgetful_buffering_handler(
HANDLER_IDENTIFIER, ping_body_limit, monitoring_log_level
)
)
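The limit arithmetic above reserves room for a truncation indicator inside the fixed payload budget, clamping at zero so the handler never gets a negative limit. A sketch with a hypothetical indicator string (the real one lives in `borgmatic.hooks.monitoring.logs`):

```python
PAYLOAD_LIMIT_BYTES = 10000
TRUNCATION_INDICATOR = '...\n'  # Hypothetical; stands in for PAYLOAD_TRUNCATION_INDICATOR.

# Reserve space for the indicator so a truncated payload still fits the budget.
ping_body_limit = max(PAYLOAD_LIMIT_BYTES - len(TRUNCATION_INDICATOR), 0)
```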
def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run):
@@ -30,16 +46,13 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_lev
'''
if state != monitor.State.FAIL:
logger.debug(
f'Ignoring unsupported monitoring {state.name.lower()} in PagerDuty hook',
f'Ignoring unsupported monitoring state {state.name.lower()} in PagerDuty hook',
)
return
dry_run_label = ' (dry run; not actually sending)' if dry_run else ''
logger.info(f'Sending failure event to PagerDuty {dry_run_label}')
if dry_run:
return
try:
integration_key = borgmatic.hooks.credential.parse.resolve_credential(
hook_config.get('integration_key'), config
@@ -48,6 +61,10 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_lev
logger.warning(f'PagerDuty credential error: {error}')
return
logs_payload = borgmatic.hooks.monitoring.logs.format_buffered_logs_for_payload(
HANDLER_IDENTIFIER
)
hostname = platform.node()
local_timestamp = datetime.datetime.now(datetime.timezone.utc).astimezone().isoformat()
payload = json.dumps(
@@ -66,11 +83,14 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_lev
'hostname': hostname,
'configuration filename': config_filename,
'server time': local_timestamp,
'logs': logs_payload,
},
},
}
)
logger.debug(f'Using PagerDuty payload: {payload}')
if dry_run:
return
logging.getLogger('urllib3').setLevel(logging.ERROR)
try:
@@ -83,6 +103,7 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_lev
def destroy_monitor(ping_url_or_uuid, config, monitoring_log_level, dry_run): # pragma: no cover
'''
No destruction is necessary for this monitor.
Remove the monitor handler that was added to the root logger. This prevents the handler from
getting reused by other instances of this monitor.
'''
pass
borgmatic.hooks.monitoring.logs.remove_handler(HANDLER_IDENTIFIER)

View File

@@ -29,12 +29,13 @@ def interactive_console():
return sys.stderr.isatty() and os.environ.get('TERM') != 'dumb'
def should_do_markup(no_color, configs):
def should_do_markup(configs, json_enabled):
'''
Given the value of the command-line no-color argument, and a dict of configuration filename to
corresponding parsed configuration, determine if we should enable color marking up.
Given a dict of configuration filename to corresponding parsed configuration (which already have
any command-line overrides applied) and whether json is enabled, determine if we should enable
color marking up.
'''
if no_color:
if json_enabled:
return False
if any(config.get('color', True) is False for config in configs.values()):
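The new signature drops the `no_color` flag in favor of checking the parsed configs (which already have command-line overrides applied) plus a JSON flag. A simplified sketch of the decision:

```python
def should_do_markup(configs, json_enabled):
    # JSON output should never be colored.
    if json_enabled:
        return False
    # Any configuration file that explicitly disables color wins.
    if any(config.get('color', True) is False for config in configs.values()):
        return False
    # Simplifying assumption for this sketch: the real function goes on to check
    # the PY_COLORS environment variable and whether stderr is an interactive
    # terminal; assume an interactive terminal here.
    return True
```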
@@ -256,7 +257,7 @@ class Log_prefix:
self.original_prefix = get_log_prefix()
set_log_prefix(self.prefix)
def __exit__(self, exception, value, traceback):
def __exit__(self, exception_type, exception, traceback):
'''
Restore any original prefix.
'''

View File

@@ -4,7 +4,7 @@ COPY . /app
RUN apk add --no-cache py3-pip py3-ruamel.yaml py3-ruamel.yaml.clib
RUN pip install --break-system-packages --no-cache /app && borgmatic config generate && chmod +r /etc/borgmatic/config.yaml
RUN borgmatic --help > /command-line.txt \
&& for action in repo-create transfer create prune compact check delete extract config "config bootstrap" "config generate" "config validate" export-tar mount umount repo-delete restore repo-list list repo-info info break-lock "key export" "key change-passphrase" borg; do \
&& for action in repo-create transfer create prune compact check delete extract config "config bootstrap" "config generate" "config validate" export-tar mount umount repo-delete restore repo-list list repo-info info break-lock "key export" "key import" "key change-passphrase" recreate borg; do \
echo -e "\n--------------------------------------------------------------------------------\n" >> /command-line.txt \
&& borgmatic $action --help >> /command-line.txt; done
RUN /app/docs/fetch-contributors >> /contributors.html

View File

@@ -165,6 +165,7 @@ ul {
}
li {
padding: .25em 0;
line-height: 1.5;
}
li ul {
list-style-type: disc;

View File

@@ -7,18 +7,112 @@ eleventyNavigation:
---
## Preparation and cleanup hooks
If you find yourself performing preparation tasks before your backup runs, or
cleanup work afterwards, borgmatic hooks may be of interest. Hooks are shell
commands that borgmatic executes for you at various points as it runs, and
they're configured in the `hooks` section of your configuration file. But if
you're looking to backup a database, it's probably easier to use the [database
backup
feature](https://torsion.org/borgmatic/docs/how-to/backup-your-databases/)
instead.
If you find yourself performing preparation tasks before your backup runs or
doing cleanup work afterwards, borgmatic command hooks may be of interest. These
are custom shell commands you can configure borgmatic to execute at various
points as it runs.
You can specify `before_backup` hooks to perform preparation steps before
(But if you're looking to back up a database, it's probably easier to use the
[database backup
feature](https://torsion.org/borgmatic/docs/how-to/backup-your-databases/)
instead.)
<span class="minilink minilink-addedin">New in version 2.0.0 (**not yet
released**)</span> Command hooks are now configured via a list of `commands:` in
your borgmatic configuration file. For example:
```yaml
commands:
- before: action
when: [create]
run:
- echo "Before create!"
- after: action
when:
- create
- prune
run:
- echo "After create or prune!"
- after: error
run:
- echo "Something went wrong!"
```
If you're coming from an older version of borgmatic, there is tooling to help
you [upgrade your
configuration](https://torsion.org/borgmatic/docs/how-to/upgrade/#upgrading-your-configuration)
to this new command hook format.
Note that if a `run:` command contains a special YAML character such as a colon,
you may need to quote the entire string (or use a [multiline
string](https://yaml-multiline.info/)) to avoid an error:
```yaml
commands:
- before: action
when: [create]
run:
- "echo Backup: start"
```
Each command in the `commands:` list has the following options:
* `before` or `after`: Name for the point in borgmatic's execution that the commands should be run before or after, one of:
* `action` runs before each action for each repository. This replaces the deprecated `before_create`, `after_prune`, etc.
* `repository` runs before or after all actions for each repository. This replaces the deprecated `before_actions` and `after_actions`.
* `configuration` runs before or after all actions and repositories in the current configuration file.
* `everything` runs before or after all configuration files. Errors here do not trigger `error` hooks or the `fail` state in monitoring hooks. This replaces the deprecated `before_everything` and `after_everything`.
* `error` runs after an error occurs—and it's only available for `after`. This replaces the deprecated `on_error` hook.
* `when`: Only trigger the hook when borgmatic is run with particular actions (`create`, `prune`, etc.) listed here. Defaults to running for all actions.
* `run`: List of one or more shell commands or scripts to run when this command hook is triggered.
An `after` command hook runs even if an error occurs in the corresponding
`before` hook or between those two hooks. This allows you to perform cleanup
steps that correspond to `before` preparation commands—even when something goes
wrong. This is a departure from the way that the deprecated `after_*` hooks
worked in borgmatic prior to version 2.0.0.
Additionally, when command hooks run, they respect the `working_directory`
option if it is configured, meaning that the hook commands are run in that
directory.
### Order of execution
Here's a way of visualizing how all of these command hooks slot into borgmatic's
execution.
Let's say you've got a borgmatic configuration file with a configured
repository. And suppose you configure several command hooks and then run
borgmatic for the `create` and `prune` actions. Here's the order of execution:
* Run `before: everything` hooks (from all configuration files).
* Run `before: configuration` hooks (from the first configuration file).
* Run `before: repository` hooks (for the first repository).
* Run `before: action` hooks for `create`.
* Actually run the `create` action (e.g. `borg create`).
* Run `after: action` hooks for `create`.
* Run `before: action` hooks for `prune`.
* Actually run the `prune` action (e.g. `borg prune`).
* Run `after: action` hooks for `prune`.
* Run `after: repository` hooks (for the first repository).
* Run `after: configuration` hooks (from the first configuration file).
* Run `after: everything` hooks (from all configuration files).
This same order of execution extends to multiple repositories and/or
configuration files.
### Deprecated command hooks
<span class="minilink minilink-addedin">Prior to version 2.0.0</span> The
command hooks worked a little differently. In these older versions of borgmatic,
you can specify `before_backup` hooks to perform preparation steps before
running backups and specify `after_backup` hooks to perform cleanup steps
afterwards. Here's an example:
afterwards. These deprecated command hooks still work in version 2.0.0+,
although see below about a few semantic differences starting in that version.
Here's an example of these deprecated hooks:
```yaml
before_backup:
@@ -43,6 +137,15 @@ instance, `before_prune` runs before a `prune` action for a repository, while
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
these options in the `hooks:` section of your configuration.
<span class="minilink minilink-addedin">New in version 2.0.0</span> An `after_*`
command hook runs even if an error occurs in the corresponding `before_*` hook
or between those two hooks. This allows you to perform cleanup steps that
correspond to `before_*` preparation commands—even when something goes wrong.
<span class="minilink minilink-addedin">New in version 2.0.0</span> When command
hooks run, they respect the `working_directory` option if it is configured,
meaning that the hook commands are run in that directory.
<span class="minilink minilink-addedin">New in version 1.7.0</span> The
`before_actions` and `after_actions` hooks run before/after all the actions
(like `create`, `prune`, etc.) for each repository. These hooks are a good
@@ -57,49 +160,13 @@ but not if an error occurs in a previous hook or in the backups themselves.
(Prior to borgmatic 1.6.0, these hooks instead ran once per configuration file
rather than once per repository.)
## Global hooks
You can also use `before_everything` and `after_everything` hooks to perform
global setup or cleanup:
```yaml
before_everything:
- set-up-stuff-globally
after_everything:
- clean-up-stuff-globally
```
but only if there is a `create` action. It runs even if an error occurs during
a backup or a backup hook, but not if an error occurs during a
`before_everything` hook.
## Error hooks

`on_error` hooks run when an error occurs, but only if there is a `create`,
`prune`, `compact`, or `check` action. For instance, borgmatic can run
configurable shell commands to fire off custom error notifications or take other
actions, so you can get alerted as soon as something goes wrong. Here's a
not-so-useful example:
```yaml
on_error:
- echo "Error while creating a backup or running a backup hook."
```
borgmatic also runs `on_error` hooks if an error occurs, either when creating
a backup or running a backup hook. See the [monitoring and alerting
documentation](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/)
for more information.
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
this option in the `hooks:` section of your configuration.
The `on_error` hook supports interpolating particular runtime variables into
the hook command. Here's an example that assumes you provide a separate shell
script to handle the alerting:
```yaml
on_error:
- send-text-message.sh
```
borgmatic does not run `on_error` hooks if an error occurs within a
`before_everything` or `after_everything` hook.
## Variable interpolation
The command action hooks support interpolating particular runtime variables into
the commands that are run. Here are a couple of examples that assume you provide
separate shell scripts:
```yaml
commands:
- after: action
when: [prune]
run:
- record-prune.sh {configuration_filename} {repository}
- after: error
when: [create]
run:
- send-text-message.sh {configuration_filename} {repository}
```
In this example, when the hook is triggered, borgmatic interpolates runtime
values into each hook command: the borgmatic configuration filename and the
path of the current Borg repository.
Here's the full set of supported variables you can use here:
* `configuration_filename`: borgmatic configuration filename in which the
hook was defined
* `log_file`
<span class="minilink minilink-addedin">New in version 1.7.12</span>:
path of the borgmatic log file, only set when the `--log-file` flag is used
* `repository`: path of the current repository as configured in the current
borgmatic configuration file, if applicable to the current hook
* `repository_label` <span class="minilink minilink-addedin">New in version
1.8.12</span>: label of the current repository as configured in the current
borgmatic configuration file, if applicable to the current hook
* `error`: the error message itself, only applies to `error` hooks
* `output`: output of the command that failed, only applies to `error` hooks
(may be blank if an error occurred without running a command)
Not all command hooks support all variables. For instance, the `everything` and
`configuration` hooks don't support repository variables because those hooks
don't run in the context of a single repository. But the deprecated command
hooks (`before_backup`, `on_error`, etc.) do generally support variable
interpolation.
borgmatic automatically escapes these interpolated values to prevent shell
injection attacks. One implication is that you shouldn't wrap the interpolated
values in your own quotes, as that will interfere with the quoting performed by
borgmatic and result in your command receiving incorrect arguments. For
instance, this won't work:
```yaml
commands:
- after: error
run:
# Don't do this! It won't work, as the {error} value is already quoted.
- send-text-message.sh "Uh oh: {error}"
```
Do this instead:
```yaml
commands:
- after: error
run:
- send-text-message.sh {error}
```
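To see why the extra quotes cause trouble, here's a minimal shell sketch of the failure mode. It assumes, purely for illustration, that borgmatic substitutes a value it has already shell-quoted; this is not borgmatic's actual code:

```shell
# Hypothetical illustration: "escaped" stands in for a value that
# borgmatic has already quoted for the shell.
escaped="'disk full: /mnt'"

# Unquoted placeholder: the value arrives as exactly one clean argument.
eval "set -- $escaped"
echo "args=$# first=$1"        # args=1 first=disk full: /mnt

# Placeholder wrapped in your own quotes: the quoting characters leak
# into the argument literally.
eval "set -- \"Uh oh: $escaped\""
echo "first=$1"                # first=Uh oh: 'disk full: /mnt'
```

The second command receives a stray pair of single quotes inside its argument, which is exactly the kind of breakage the guidance above warns about.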
Note that you can also interpolate [arbitrary environment
variables](https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/).
## Hook output


This feature leverages [borgmatic command
hooks](https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/),
so familiarize yourself with them first. The idea is that you write a simple
test in the form of a borgmatic command hook to see if backups should proceed or
not.
The way the test works is that if any of your hook commands return a special
exit status of 75, that indicates to borgmatic that it's a temporary failure,
and borgmatic should skip all subsequent actions for the current repository.
<span class="minilink minilink-addedin">Prior to version 1.9.0</span> Soft
failures skipped subsequent actions for *all* repositories in the
configuration file, rather than just for the current repository.
If you return any status besides 75, then it's a standard success or error.
(Zero is success; any other non-75 status is an error.)
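As a sketch (not borgmatic's actual implementation), the convention amounts to a three-way branch on the hook command's exit status:

```shell
# Classify a hook command's exit status the way the convention above
# describes: 0 is success, 75 is a soft failure, anything else an error.
run_hook() {
    if "$@"; then
        status=0
    else
        status=$?
    fi
    case "$status" in
        0)  echo "success" ;;
        75) echo "soft failure: skip remaining actions for this repository" ;;
        *)  echo "error (exit status $status)" ;;
    esac
}

run_hook true                # success
run_hook sh -c 'exit 75'     # soft failure
run_hook false               # error (exit status 1)
```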
<span class="minilink minilink-addedin">Prior to version 1.7.10</span> Omit
the `path:` portion of the `repositories` list.
Then, make a command hook in that same configuration file that uses the external
`findmnt` utility to see whether the drive is mounted before proceeding.
```yaml
commands:
- before: repository
run:
- findmnt /mnt/removable > /dev/null || exit 75
```
<span class="minilink minilink-addedin">Prior to version 2.0.0</span> Use the
deprecated `before_actions` hook instead:
```yaml
before_actions:
- findmnt /mnt/removable > /dev/null || exit 75
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put this
option in the `hooks:` section of your configuration.
<span class="minilink minilink-addedin">Prior to version 1.7.0</span> Use
`before_create` or similar instead of `before_actions`, which was introduced in
borgmatic 1.7.0.
What this does is check whether the `findmnt` command errors out when probing
for a particular mount point. If it does, the hook command returns exit code 75
to borgmatic. borgmatic then logs the soft failure, skips all further actions
for the current repository, and proceeds onward to any other repositories and/or
configuration files you may have.
You can imagine a similar check for the sometimes-online server case:
```yaml
source_directories:
repositories:
- path: ssh://me@buddys-server.org/./backup.borg
commands:
- before: repository
run:
- ping -q -c 1 buddys-server.org > /dev/null || exit 75
```
Or to only run backups if the battery level is high enough:
```yaml
commands:
- before: repository
run:
- is_battery_percent_at_least.sh 25
```
Writing the battery script is left as an exercise to the reader.
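For the curious, here's one hypothetical way to write that script. The sysfs path is a Linux-specific assumption that varies by machine, and the script name and threshold are just placeholders:

```shell
#!/bin/sh
# Hypothetical is_battery_percent_at_least.sh: exit with the special
# status 75 (soft failure) when the battery is below the threshold.
check_battery() {
    capacity="$1"
    threshold="$2"
    [ "$capacity" -ge "$threshold" ] || return 75
}

# Read the charge level from sysfs; default to 100 on machines without
# a battery so backups still proceed.
level=$(cat /sys/class/power_supply/BAT0/capacity 2>/dev/null || echo 100)
check_battery "$level" "${1:-25}"
```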
## Caveats and details
There are some caveats you should be aware of with this feature.
* You'll generally want to put a soft failure command in a `before` command
hook, so as to gate whether the backup action occurs. While a soft failure is
also supported in an `after` command hook, returning a soft failure there
won't prevent any actions from occurring, because they've already occurred!
Similarly, you can return a soft failure from an `error` command hook, but at
that point it's too late to prevent the error.
* Returning a soft failure does prevent further commands in the same hook from
executing. So, like a standard error, it is an "early out." Unlike a standard
error, borgmatic does not display it in angry red text or consider it a
failure.
* <span class="minilink minilink-addedin">New in version 1.9.0</span> Soft
failures in `action` or `before_*` command hooks only skip the current
repository rather than all repositories in a configuration file.
* If you're writing a soft failure script that you want to vary based on the
current repository, for instance so you can have multiple repositories in a
single configuration file, have a look at [command hook variable
interpolation](https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/#variable-interpolation).
And there's always still the option of putting anything that you don't want
soft-failed (like always-online cloud backups) in separate configuration
files from your soft-failing repositories.
* The soft failure doesn't have to test anything related to a repository. You
can even perform a test that individual source directories are mounted and
available. Use your imagination!
* Soft failures are not currently implemented for `everything`,
`before_everything`, or `after_everything` command hooks.


### Containers
If your database server is running within a container and borgmatic is too, no
problem—configure borgmatic to connect to the container's name on its exposed
port. For instance:
```yaml
postgresql_databases:
- name: users
hostname: your-database-server-container-name
port: 5433
username: postgres
password: trustsome1
```
these options in the `hooks:` section of your configuration.
But what if borgmatic is running on the host? You can still connect to a
database server container if its ports are properly exposed to the host. For
instance, when running the database container, you can specify `--publish
127.0.0.1:5433:5432` so that it exposes the container's port 5432 to port 5433
on the host (only reachable on localhost, in this case). Or the same thing with
Docker Compose:
```yaml
services:
your-database-server-container-name:
image: postgres
ports:
- 127.0.0.1:5433:5432
```
And then you can configure borgmatic running on the host to connect to the
database:
```yaml
hooks:
```

Alter the ports in these examples to suit your particular database system.
Normally, borgmatic dumps a database by running a database dump command (e.g.
`pg_dump`) on the host or wherever borgmatic is running, and this command
connects to your containerized database via the given `hostname` and `port`. But
if you don't have any database dump commands installed on your host and you'd
rather use the commands inside your running database container itself, borgmatic
supports that too. For that, configure borgmatic to `exec` into your container
to run the dump command.
```yaml
hooks:
pg_dump_command: docker exec my_pg_container pg_dump
```
... where `my_pg_container` is the name of your running database container.
Running `pg_dump` this way takes advantage of the localhost "trust"
authentication within that container. In this example, you'd also need to set
the `pg_restore_command` and `psql_command` options.
If you choose to use the `pg_dump` command within the container, and you're
using the `directory` format in particular, you'll also need to mount the
```yaml
services:
- /run/user/1000:/run/user/1000
```
Another variation: If you're running borgmatic on the host but want to spin up a
temporary `pg_dump` container whenever borgmatic dumps a database, for
instance to make use of a `pg_dump` version not present on the host, try
something like this:
```yaml
hooks:
postgresql_databases:
- name: users
hostname: your-database-hostname
username: postgres
password: trustsome1
pg_dump_command: docker run --rm --env PGPASSWORD postgres:17-alpine pg_dump
```
The `--env PGPASSWORD` is necessary here for borgmatic to provide your database
password to the temporary `pg_dump` container.
Similar command override options are available for (some of) the other
supported database types as well. See the [configuration
reference](https://torsion.org/borgmatic/docs/reference/configuration/) for


applications, but then set the repository for each application at runtime. Or
you might want to try a variant of an option for testing purposes without
actually touching your configuration file.
<span class="minilink minilink-addedin">New in version 2.0.0</span>
Whatever the reason, you can override borgmatic configuration options at the
command-line, as there's a command-line flag corresponding to every
configuration option (with its underscores converted to dashes).
For instance, to override the `compression` configuration option, use the
corresponding `--compression` flag on the command-line:
```bash
borgmatic create --compression zstd
```
What this does is load your given configuration files and for each one, disregard
the configured value for the `compression` option and use the value given on the
command-line instead—but just for the duration of the borgmatic run.
You can override nested configuration options too by separating such option
names with a period. For instance:
```bash
borgmatic create --bootstrap.store-config-files false
```
You can even set complex option data structures by using inline YAML syntax. For
example, set the `repositories` option with a YAML list of key/value pairs:
```bash
borgmatic create --repositories "[{path: /mnt/backup, label: local}]"
```
If your override value contains characters like colons or spaces, then you'll
need to use quotes for it to parse correctly.
You can also set individual nested options within existing list elements:
```bash
borgmatic create --repositories[0].path /mnt/backup
```
This updates the `path` option for the first repository in `repositories`.
Change the `[0]` index as needed to address different list elements. And note
that this only works for elements already set in configuration; you can't append
new list elements from the command-line.
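For instance, given a hypothetical configuration like this, the flag above would swap in `/mnt/backup` as the first repository's path for the duration of the run, leaving the second repository untouched:

```yaml
repositories:
    - path: /original/backup.borg
      label: local
    - path: ssh://user@host/./backup.borg
      label: remote
```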
See the [command-line reference
documentation](https://torsion.org/borgmatic/docs/reference/command-line/) for
the full set of available arguments, including examples of each for the complex
values.
There are a handful of configuration options that don't have corresponding
command-line flags at the global scope, but instead have flags within individual
borgmatic actions. For instance, the `list_details` option can be overridden by
the `--list` flag that's only present on particular actions. Similarly with
`progress` and `--progress`, `statistics` and `--stats`, and `match_archives`
and `--match-archives`.
Also note that if you want to pass a command-line flag itself as a value to one
of these override flags, that may not work. For instance, specifying
`--extra-borg-options.create --no-cache-sync` results in an error, because
`--no-cache-sync` gets interpreted as a borgmatic option (which in this case
doesn't exist) rather than a Borg option.
An alternative to command-line overrides is passing in your values via
[environment
variables](https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/).
### Deprecated overrides
<span class="minilink minilink-addedin">Prior to version 2.0.0</span>
Configuration overrides were performed with an `--override` flag. You can still
use `--override` with borgmatic 2.0.0+, but it's deprecated in favor of the new
command-line flags described above.
Here's an example of `--override`:
```bash
borgmatic create --override remote_path=/usr/local/bin/borg1
```
What this does is load your given configuration files and for each one, disregard
the configured value for the `remote_path` option and use the value given on the
command-line instead—but just for the duration of the borgmatic run.
You can even override nested values or multiple values at once. For instance:
reference](https://torsion.org/borgmatic/docs/reference/configuration/) for
which options are list types. (YAML list values look like `- this` with an
indentation and a leading dash.)
## Constant interpolation


There are several different ways you can monitor your backups and find out
whether they're succeeding. Which of these you choose to do is up to you and
your particular infrastructure:
* **Job runner alerts:** The easiest place to start is with failure alerts from
the [scheduled job
runner](https://torsion.org/borgmatic/docs/how-to/set-up-backups/#autopilot)
(cron, systemd, etc.) that's running borgmatic. But note that if the job
doesn't even get scheduled (e.g. due to the job runner not running), you
probably won't get an alert at all! Still, this is a decent first line of
defense, especially when combined with some of the other approaches below.
* **Third-party monitoring services:** borgmatic integrates with these monitoring
services and libraries, pinging them as backups happen. The idea is that
you'll receive an alert when something goes wrong or when the service doesn't
hear from borgmatic for a configured interval (if supported). While these
services and libraries offer different features, you probably only need to
use one of them at most. See these documentation links for configuration
information:
* [Apprise](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#apprise-hook)
* [Cronhub](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#cronhub-hook)
* [Cronitor](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#cronitor-hook)
* [Grafana Loki](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#loki-hook)
* [Healthchecks](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#healthchecks-hook)
* [ntfy](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#ntfy-hook)
* [PagerDuty](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#pagerduty-hook)
* [Pushover](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#pushover-hook)
* [Sentry](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#sentry-hook)
* [Uptime Kuma](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#uptime-kuma-hook)
* [Zabbix](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#zabbix-hook)
* **Third-party monitoring software:** You can use traditional monitoring
software to consume borgmatic JSON output and track when the last successful
backup occurred. See [scripting
borgmatic](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#scripting-borgmatic)
below for how to configure this.
* **Borg hosting providers:** Some [Borg hosting
providers](https://torsion.org/borgmatic/#hosting-providers) include
monitoring and alerting as part of their offering. This gives you a dashboard
to check on all of your backups, and can alert you if the service doesn't
hear from borgmatic for a configured interval.
* **Consistency checks:** While not strictly part of monitoring, if you want
confidence that your backups are not only running but are restorable as well,
you can configure particular [consistency
checks](https://torsion.org/borgmatic/docs/how-to/deal-with-very-large-backups/#consistency-check-configuration)
or even script full [extract
tests](https://torsion.org/borgmatic/docs/how-to/extract-a-backup/).
* **Commands run on error:** borgmatic's command hooks support running
arbitrary commands or scripts when borgmatic itself encounters an error
running your backups. So for instance, you can run a script to send yourself
a text message alert. But note that if borgmatic doesn't actually run, this
alert won't fire. See the [documentation on command hooks](https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/)
for details.
## Healthchecks hook
If you have any issues with the integration, [please contact
us](https://torsion.org/borgmatic/#support-and-contributing).
### Sending logs
<span class="minilink minilink-addedin">New in version 1.9.14</span> borgmatic
logs are included in the payload data sent to PagerDuty. This means that
(truncated) borgmatic logs, including error messages, show up in the PagerDuty
incident UI and corresponding notification emails.
You can customize the verbosity of the logs that are sent with borgmatic's
`--monitoring-verbosity` flag. The `--list` and `--stats` flags may also be of
use. See `borgmatic create --help` for more information.
If you don't want any logs sent, you can disable this feature by setting
`send_logs` to `false`:
```yaml
pagerduty:
integration_key: a177cad45bd374409f78906a810a3074
send_logs: false
```
## Pushover hook
<span class="minilink minilink-addedin">New in version 1.9.2</span>
Authentication can be accomplished via `api_key` or both `username` and
`password`.
### Items
borgmatic writes its monitoring updates to a particular Zabbix item, which
you'll need to create in advance. In the Zabbix web UI, [make a new item with a
Type of "Zabbix
trapper"](https://www.zabbix.com/documentation/current/en/manual/config/items/itemtypes/trapper)
and a named Key. The "Type of information" for the item should be "Text", and
"History" designates how much data you want to retain.
When configuring borgmatic with this item to be updated, you can either declare
the `itemid` or both `host` and `key`. If all three are declared, only `itemid`
is used.
Keep in mind that `host` refers to the "Host name" on the Zabbix server and not
the "Visual name".
## Scripting borgmatic


```yaml
skip_actions:
- compact
```
### Disabling default actions
By default, running `borgmatic` without any arguments will perform the default
backup actions (create, prune, compact and check). If you want to disable this
behavior and require explicit actions to be specified, add the following to
your configuration:
```yaml
default_actions: false
```
With this setting, running `borgmatic` without arguments will show the help
message instead of performing any actions.
## Autopilot
Then, from the directory where you downloaded it:
```bash
sudo mv borgmatic /etc/cron.d/borgmatic
```
If borgmatic is installed at a different location than


#### Subvolume discovery
For any read-write subvolume you'd like backed up, add its mount point path to
borgmatic's `source_directories` option. Btrfs does not support snapshotting
read-only subvolumes.
<span class="minilink minilink-addedin">New in version 1.9.6</span> Or include
the mount point as a root pattern with borgmatic's `patterns` or `patterns_from`
@@ -160,27 +161,27 @@ includes the snapshotted files in the paths sent to Borg. borgmatic is also
responsible for cleaning up (deleting) these snapshots after a backup completes.
borgmatic is smart enough to look at the parent (and grandparent, etc.)
directories of each of your `source_directories` to discover any subvolumes. For
instance, let's say you add `/var/log` and `/var/lib` to your source
directories, but `/var` is a subvolume mount point. borgmatic will discover that
and snapshot `/var` accordingly. This also works even with nested subvolumes;
borgmatic selects the subvolume that's the "closest" parent to your source
directories.
<span class="minilink minilink-addedin">New in version 1.9.6</span> When using
[patterns](https://borgbackup.readthedocs.io/en/stable/usage/help.html#borg-help-patterns),
the initial portion of a pattern's path that you intend borgmatic to match
against a subvolume mount point can't have globs or other non-literal characters
in it—or it won't actually match. For instance, a subvolume mount point of
`/var` would match a pattern of `+ fm:/var/*/data`, but borgmatic isn't
currently smart enough to match `/var` to a pattern like `+ fm:/v*/lib/data`.
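To make that concrete, here's a sketch of a `patterns` configuration where the literal `/var` prefix lets borgmatic associate the patterns with the `/var` subvolume mount point (the paths are illustrative):

```yaml
patterns:
    # Root pattern with a literal mount point path.
    - R /var
    # Literal /var prefix, so borgmatic can match it to the /var subvolume.
    - + fm:/var/*/data
    # A glob in the initial portion (as in fm:/v*/lib/data) would not match.
    # Quoted so YAML doesn't parse the leading "-" as a nested list.
    - '- fm:/var/cache'
```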
Additionally, borgmatic rewrites the snapshot file paths so that they appear at
their original subvolume locations in a Borg archive. For instance, if your
subvolume is mounted at `/var/subvolume`, then the snapshotted files will appear
in an archive at `/var/subvolume` as well—even if borgmatic has to mount the
snapshot somewhere in `/var/subvolume/.borgmatic-snapshot-1234/` to perform the
backup.
<span class="minilink minilink-addedin">With Borg version 1.2 and
earlier</span>Snapshotted files are instead stored at a path dependent on the


@@ -1,6 +1,6 @@
[project]
name = "borgmatic"
version = "2.0.0dev0"
authors = [
{ name="Dan Helfman", email="witten@torsion.org" },
]


@@ -24,7 +24,14 @@ def parse_arguments(*unparsed_arguments):
delete_parser = subvolume_subparser.add_parser('delete')
delete_parser.add_argument('snapshot_path')
property_parser = action_parsers.add_parser('property')
property_subparser = property_parser.add_subparsers(dest='subaction')
get_parser = property_subparser.add_parser('get')
get_parser.add_argument('-t', dest='type')
get_parser.add_argument('subvolume_path')
get_parser.add_argument('property_name')
return (global_parser, global_parser.parse_args(unparsed_arguments))
BUILTIN_SUBVOLUME_LIST_LINES = (
@@ -60,9 +67,13 @@ def print_subvolume_list(arguments, snapshot_paths):
def main():
(global_parser, arguments) = parse_arguments(*sys.argv[1:])
snapshot_paths = load_snapshots()
if not hasattr(arguments, 'subaction'):
global_parser.print_help()
sys.exit(1)
if arguments.subaction == 'list':
print_subvolume_list(arguments, snapshot_paths)
elif arguments.subaction == 'snapshot':
@@ -84,6 +95,8 @@ def main():
if snapshot_path.endswith('/' + arguments.snapshot_path)
]
save_snapshots(snapshot_paths)
elif arguments.action == 'property' and arguments.subaction == 'get':
print(f'{arguments.property_name}=false')
if __name__ == '__main__':


@@ -0,0 +1,55 @@
import os
import shlex
import shutil
import subprocess
import tempfile
def generate_configuration(config_path):
'''
Generate borgmatic configuration into a file at the config path, and update the defaults so as
to work for testing (including injecting the given repository path and tacking on an encryption
passphrase). But don't actually set the repository path, as that's done on the command-line
below.
'''
subprocess.check_call(f'borgmatic config generate --destination {config_path}'.split(' '))
config = (
open(config_path)
.read()
.replace('- ssh://user@backupserver/./{fqdn}', '') # noqa: FS003
.replace('- /var/local/backups/local.borg', '')
.replace('- /home/user/path with spaces', '')
.replace('- /home', f'- {config_path}')
.replace('- /etc', '')
.replace('- /var/log/syslog*', '')
+ 'encryption_passphrase: "test"'
)
config_file = open(config_path, 'w')
config_file.write(config)
config_file.close()
def test_config_flags_do_not_error():
temporary_directory = tempfile.mkdtemp()
repository_path = os.path.join(temporary_directory, 'test.borg')
original_working_directory = os.getcwd()
try:
config_path = os.path.join(temporary_directory, 'test.yaml')
generate_configuration(config_path)
subprocess.check_call(
shlex.split(
f'borgmatic -v 2 --config {config_path} --repositories "[{{path: {repository_path}, label: repo}}]" repo-create --encryption repokey'
)
)
subprocess.check_call(
shlex.split(
f'borgmatic create --config {config_path} --repositories[0].path "{repository_path}"'
)
)
finally:
os.chdir(original_working_directory)
shutil.rmtree(temporary_directory)


@@ -53,7 +53,7 @@ def fuzz_argument(arguments, argument_name):
def test_transfer_archives_command_does_not_duplicate_flags_or_raise():
arguments = borgmatic.commands.arguments.parse_arguments(
{}, 'transfer', '--source-repository', 'foo'
)['transfer']
flexmock(borgmatic.borg.transfer).should_receive('execute_command').replace_with(
assert_command_does_not_duplicate_flags
@@ -74,7 +74,7 @@ def test_transfer_archives_command_does_not_duplicate_flags_or_raise():
def test_prune_archives_command_does_not_duplicate_flags_or_raise():
arguments = borgmatic.commands.arguments.parse_arguments({}, 'prune')['prune']
flexmock(borgmatic.borg.prune).should_receive('execute_command').replace_with(
assert_command_does_not_duplicate_flags
)
@@ -94,7 +94,7 @@ def test_prune_archives_command_does_not_duplicate_flags_or_raise():
def test_mount_archive_command_does_not_duplicate_flags_or_raise():
arguments = borgmatic.commands.arguments.parse_arguments({}, 'mount', '--mount-point', 'tmp')[
'mount'
]
flexmock(borgmatic.borg.mount).should_receive('execute_command').replace_with(
@@ -116,7 +116,7 @@ def test_mount_archive_command_does_not_duplicate_flags_or_raise():
def test_make_list_command_does_not_duplicate_flags_or_raise():
arguments = borgmatic.commands.arguments.parse_arguments({}, 'list')['list']
for argument_name in dir(arguments):
if argument_name.startswith('_'):
@@ -134,7 +134,7 @@ def test_make_list_command_does_not_duplicate_flags_or_raise():
def test_make_repo_list_command_does_not_duplicate_flags_or_raise():
arguments = borgmatic.commands.arguments.parse_arguments({}, 'repo-list')['repo-list']
for argument_name in dir(arguments):
if argument_name.startswith('_'):
@@ -152,7 +152,7 @@ def test_make_repo_list_command_does_not_duplicate_flags_or_raise():
def test_display_archives_info_command_does_not_duplicate_flags_or_raise():
arguments = borgmatic.commands.arguments.parse_arguments({}, 'info')['info']
flexmock(borgmatic.borg.info).should_receive('execute_command_and_capture_output').replace_with(
assert_command_does_not_duplicate_flags
)


@@ -1,4 +1,5 @@
import borgmatic.commands.arguments
import borgmatic.config.validate
from borgmatic.commands.completion import actions as module
@@ -7,7 +8,10 @@ def test_available_actions_uses_only_subactions_for_action_with_subactions():
unused_global_parser,
action_parsers,
unused_combined_parser,
) = borgmatic.commands.arguments.make_parsers(
schema=borgmatic.config.validate.load_schema(borgmatic.config.validate.schema_filename()),
unparsed_arguments=(),
)
actions = module.available_actions(action_parsers, 'config')
@@ -20,7 +24,10 @@ def test_available_actions_omits_subactions_for_action_without_subactions():
unused_global_parser,
action_parsers,
unused_combined_parser,
) = borgmatic.commands.arguments.make_parsers(
schema=borgmatic.config.validate.load_schema(borgmatic.config.validate.schema_filename()),
unparsed_arguments=(),
)
actions = module.available_actions(action_parsers, 'list')


@@ -4,11 +4,144 @@ from flexmock import flexmock
from borgmatic.commands import arguments as module
def test_make_argument_description_with_object_adds_example():
assert (
module.make_argument_description(
schema={
'description': 'Thing.',
'type': 'object',
'example': {'bar': 'baz'},
},
flag_name='flag',
)
# Apparently different versions of ruamel.yaml serialize this
# differently.
in ('Thing. Example value: "bar: baz"', 'Thing. Example value: "{bar: baz}"')
)
def test_make_argument_description_with_array_adds_example():
assert (
module.make_argument_description(
schema={
'description': 'Thing.',
'type': 'array',
'example': [1, '- foo', {'bar': 'baz'}],
},
flag_name='flag',
)
# Apparently different versions of ruamel.yaml serialize this
# differently.
in (
'Thing. Example value: "[1, \'- foo\', bar: baz]"',
'Thing. Example value: "[1, \'- foo\', {bar: baz}]"',
)
)
def test_add_array_element_arguments_adds_arguments_for_array_index_flags():
parser = module.ArgumentParser(allow_abbrev=False, add_help=False)
arguments_group = parser.add_argument_group('arguments')
arguments_group.add_argument(
'--foo[0].val',
action='store_true',
dest='--foo[0].val',
)
flexmock(arguments_group).should_receive('add_argument').with_args(
'--foo[25].val',
action='store_true',
default=False,
dest='foo[25].val',
required=object,
).once()
module.add_array_element_arguments(
arguments_group=arguments_group,
unparsed_arguments=('--foo[25].val', 'fooval', '--bar[1].val', 'barval'),
flag_name='foo[0].val',
)
def test_add_arguments_from_schema_with_nested_object_adds_flag_for_each_option():
parser = module.ArgumentParser(allow_abbrev=False, add_help=False)
arguments_group = parser.add_argument_group('arguments')
flexmock(arguments_group).should_receive('add_argument').with_args(
'--foo.bar',
type=int,
metavar='BAR',
help='help 1',
).once()
flexmock(arguments_group).should_receive('add_argument').with_args(
'--foo.baz',
type=str,
metavar='BAZ',
help='help 2',
).once()
module.add_arguments_from_schema(
arguments_group=arguments_group,
schema={
'type': 'object',
'properties': {
'foo': {
'type': 'object',
'properties': {
'bar': {'type': 'integer', 'description': 'help 1'},
'baz': {'type': 'string', 'description': 'help 2'},
},
}
},
},
unparsed_arguments=(),
)
def test_add_arguments_from_schema_with_array_and_nested_object_adds_multiple_flags():
parser = module.ArgumentParser(allow_abbrev=False, add_help=False)
arguments_group = parser.add_argument_group('arguments')
flexmock(arguments_group).should_receive('add_argument').with_args(
'--foo[0].bar',
type=int,
metavar='BAR',
help=object,
).once()
flexmock(arguments_group).should_receive('add_argument').with_args(
'--foo',
type=str,
metavar='FOO',
help='help 2',
).once()
module.add_arguments_from_schema(
arguments_group=arguments_group,
schema={
'type': 'object',
'properties': {
'foo': {
'type': 'array',
'items': {
'type': 'object',
'properties': {
'bar': {
'type': 'integer',
'description': 'help 1',
}
},
},
'description': 'help 2',
}
},
},
unparsed_arguments=(),
)
def test_parse_arguments_with_no_arguments_uses_defaults():
config_paths = ['default']
flexmock(module.collect).should_receive('get_default_config_paths').and_return(config_paths)
arguments = module.parse_arguments({})
global_arguments = arguments['global']
assert global_arguments.config_paths == config_paths
@@ -21,7 +154,7 @@ def test_parse_arguments_with_no_arguments_uses_defaults():
def test_parse_arguments_with_multiple_config_flags_parses_as_list():
flexmock(module.collect).should_receive('get_default_config_paths').and_return(['default'])
arguments = module.parse_arguments({}, '--config', 'myconfig', '--config', 'otherconfig')
global_arguments = arguments['global']
assert global_arguments.config_paths == ['myconfig', 'otherconfig']
@@ -34,7 +167,7 @@ def test_parse_arguments_with_multiple_config_flags_parses_as_list():
def test_parse_arguments_with_action_after_config_path_omits_action():
flexmock(module.collect).should_receive('get_default_config_paths').and_return(['default'])
arguments = module.parse_arguments({}, '--config', 'myconfig', 'list', '--json')
global_arguments = arguments['global']
assert global_arguments.config_paths == ['myconfig']
@@ -45,7 +178,9 @@ def test_parse_arguments_with_action_after_config_path_omits_action():
def test_parse_arguments_with_action_after_config_path_omits_aliased_action():
flexmock(module.collect).should_receive('get_default_config_paths').and_return(['default'])
arguments = module.parse_arguments(
{}, '--config', 'myconfig', 'init', '--encryption', 'repokey'
)
global_arguments = arguments['global']
assert global_arguments.config_paths == ['myconfig']
@@ -56,7 +191,7 @@ def test_parse_arguments_with_action_after_config_path_omits_aliased_action():
def test_parse_arguments_with_action_and_positional_arguments_after_config_path_omits_action_and_arguments():
flexmock(module.collect).should_receive('get_default_config_paths').and_return(['default'])
arguments = module.parse_arguments({}, '--config', 'myconfig', 'borg', 'key', 'export')
global_arguments = arguments['global']
assert global_arguments.config_paths == ['myconfig']
@@ -68,7 +203,7 @@ def test_parse_arguments_with_verbosity_overrides_default():
config_paths = ['default']
flexmock(module.collect).should_receive('get_default_config_paths').and_return(config_paths)
arguments = module.parse_arguments({}, '--verbosity', '1')
global_arguments = arguments['global']
assert global_arguments.config_paths == config_paths
@@ -82,7 +217,7 @@ def test_parse_arguments_with_syslog_verbosity_overrides_default():
config_paths = ['default']
flexmock(module.collect).should_receive('get_default_config_paths').and_return(config_paths)
arguments = module.parse_arguments({}, '--syslog-verbosity', '2')
global_arguments = arguments['global']
assert global_arguments.config_paths == config_paths
@@ -96,7 +231,7 @@ def test_parse_arguments_with_log_file_verbosity_overrides_default():
config_paths = ['default']
flexmock(module.collect).should_receive('get_default_config_paths').and_return(config_paths)
arguments = module.parse_arguments({}, '--log-file-verbosity', '-1')
global_arguments = arguments['global']
assert global_arguments.config_paths == config_paths
@@ -109,7 +244,7 @@ def test_parse_arguments_with_log_file_verbosity_overrides_default():
def test_parse_arguments_with_single_override_parses():
flexmock(module.collect).should_receive('get_default_config_paths').and_return(['default'])
arguments = module.parse_arguments({}, '--override', 'foo.bar=baz')
global_arguments = arguments['global']
assert global_arguments.overrides == ['foo.bar=baz']
@@ -119,7 +254,7 @@ def test_parse_arguments_with_multiple_overrides_flags_parses():
flexmock(module.collect).should_receive('get_default_config_paths').and_return(['default'])
arguments = module.parse_arguments(
'--override', 'foo.bar=baz', '--override', 'foo.quux=7', '--override', 'this.that=8'
{}, '--override', 'foo.bar=baz', '--overrid