Compare commits


332 Commits

Author SHA1 Message Date
929d343214 Add CLI flags for every config option and add config options for many action flags (#303).
Reviewed-on: borgmatic-collective/borgmatic#1040
2025-04-03 23:48:49 +00:00
9ea55d9aa3 Add a documentation note about a limitation: You can't pass flags as values to flags (#303). 2025-04-03 16:38:17 -07:00
3eabda45f2 If a boolean option name already starts with "no_", don't add a "--no-no-..." CLI flag (#303). 2025-04-03 16:21:22 -07:00
09212961a4 Add action "--help" note about running compact after recreate (#1053). 2025-04-03 12:55:26 -07:00
3f25f3f0ff Merge branch 'main' into config-command-line. 2025-04-03 11:47:29 -07:00
e8542f3613 Fix KeePassXC error when "keepassxc:" option is not present, add new options to NEWS (#1047). 2025-04-03 11:41:58 -07:00
9407f24674 Fix setting of "--checks" on the command-line (#303). 2025-04-03 11:28:32 -07:00
1c9d25b892 Add "key-file" and "yubikey" options to KeePassXC credential hook (#1047).
Reviewed-on: borgmatic-collective/borgmatic#1049
2025-04-03 18:28:08 +00:00
248999c23e Final 2025-04-03 17:10:52 +00:00
d0a5aa63be Add a TL;DR to NEWS since 2.0.0 is such a huge release and ain't nobody got time for reading a huge changelog. 2025-04-03 09:24:47 -07:00
d2c3ed26a9 Make a CLI flag for any config option that's a list of scalars (#303). 2025-04-02 23:15:21 -07:00
bbf6f27715 For boolean configuration options, add separate "--foo" and "--no-foo" CLI flags (#303). 2025-04-02 17:08:04 -07:00
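The two commits above describe how borgmatic generates paired boolean flags, including the special case for options already named `no_...`. A minimal sketch of that scheme (illustrative only, not borgmatic's actual code — the option names here are hypothetical):

```python
import argparse

def add_boolean_flags(parser, option_name):
    # "--foo" turns the option on; "--no-foo" turns it off.
    dashed = option_name.replace('_', '-')
    parser.add_argument('--' + dashed, action='store_true',
                        dest=option_name, default=None)
    # An option already named "no_foo" gets only "--no-foo", never "--no-no-foo".
    if not option_name.startswith('no_'):
        parser.add_argument('--no-' + dashed, action='store_false',
                            dest=option_name, default=None)

parser = argparse.ArgumentParser()
for name in ('progress', 'no_cache_sync'):
    add_boolean_flags(parser, name)

args = parser.parse_args(['--no-progress', '--no-cache-sync'])
print(args.progress, args.no_cache_sync)  # False True
```

Using `default=None` lets the caller distinguish "flag not given" from an explicit on/off, so configuration-file values are only overridden when a flag actually appears.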
9301ab13cc Merge branch 'main' into config-command-line. 2025-04-02 09:55:33 -07:00
d5d04b89dc Add configuration filename to "Successfully ran configuration file" log message (#1051). 2025-04-02 09:50:31 -07:00
364200c65a Fix incorrect matching of non-zero array index flags with dashed names (#303). 2025-04-02 09:37:52 -07:00
4e55547235 Command Restructuring 2025-04-02 15:35:12 +00:00
96ec66de79 Applied changes 2025-04-02 10:50:25 +00:00
7a0c56878b Applied changes 2025-04-02 10:47:35 +00:00
4065c5d0f7 Fix use of dashed command-line flags like "--repositories[2].append-only" generated from configuration (#303). 2025-04-01 23:04:53 -07:00
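The indexed, dashed flag names in the commit above come from flattening nested configuration. A rough sketch of how such names can be derived (an assumption about the approach, not borgmatic's implementation):

```python
def flag_names(config, prefix=''):
    """Recursively map nested config to flag names like
    "--repositories[2].append-only": dict keys become dashed
    dotted segments, list positions become [index] suffixes."""
    names = []
    if isinstance(config, dict):
        for key, value in config.items():
            dashed = key.replace('_', '-')
            names += flag_names(value, f'{prefix}.{dashed}' if prefix else dashed)
    elif isinstance(config, list):
        for index, value in enumerate(config):
            names += flag_names(value, f'{prefix}[{index}]')
    else:
        names.append('--' + prefix)
    return names

print(flag_names({'repositories': [{'path': 'repo.borg', 'append_only': True}]}))
# ['--repositories[0].path', '--repositories[0].append-only']
```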
affe7cdc1b Expose propertyless YAML objects from configuration (e.g. "constants") as command-line flags (#303). 2025-04-01 21:05:44 -07:00
017cbae4f9 Fix for the example not showing up in generated config for empty YAML objects (#303). 2025-04-01 19:44:47 -07:00
e96db2e100 Fix "progress" option with the "transfer" action (#303). 2025-04-01 19:43:56 -07:00
af97b95e2b Merge branch 'main' into config-command-line. 2025-04-01 12:09:54 -07:00
6a61259f1a Fix a failure in the "spot" check when the archive contains a symlink (#1050). 2025-04-01 11:49:47 -07:00
5490a83d77 Merge branch 'main' into config-command-line. 2025-03-31 17:13:20 -07:00
8c907bb5a3 Fix broken "recreate" action with Borg 1.4 (#610). 2025-03-31 17:11:37 -07:00
f166111b9b Fix new "repositories:" sub-options ("append_only", "make_parent_directories", etc.) (#303). 2025-03-31 15:26:24 -07:00
10fb02c40a Fix bootstrap --progress flag (#303). 2025-03-31 13:33:39 -07:00
cf477bdc1c Fix broken list_details, progress, and statistics options (#303). 2025-03-31 11:33:56 -07:00
6f07402407 Fix end-to-end tests and don't stat() directories that don't exist (#1048). 2025-03-30 19:04:36 -07:00
ab01e97a5e Fix a "no such file or directory" error in ZFS, Btrfs, and LVM hooks with nested directories that reside on separate devices/filesystems (#1048). 2025-03-30 14:55:54 -07:00
92ebc77597 2nd Draft 2025-03-30 16:19:56 +00:00
863c954144 added schema.yaml 2025-03-30 15:57:42 +00:00
f7e4d38762 First Draft 2025-03-30 14:02:56 +00:00
de4d7af507 Merge branch 'main' into config-command-line. 2025-03-29 22:52:40 -07:00
5cea1e1b72 Fix flake error (#262). 2025-03-29 22:52:17 -07:00
fd8c11eb0a Add documentation for "native" command-line overrides without --override (#303). 2025-03-29 21:59:47 -07:00
92de539bf9 Merge branch 'main' into config-command-line. 2025-03-29 19:55:03 -07:00
5716e61f8f Code formatting (#262). 2025-03-29 19:54:40 -07:00
3e05eeb4de Merge branch 'main' into config-command-line. 2025-03-29 19:03:29 -07:00
65d1b9235d Add "default_actions" to NEWS (#262). 2025-03-29 19:02:11 -07:00
cffb8e88da Merge branch 'main' of ssh://projects.torsion.org:3022/borgmatic-collective/borgmatic into config-command-line 2025-03-29 18:58:12 -07:00
a8362f2618 borgmatic without arguments/parameters should show usage help instead of starting a backup (#262).
Reviewed-on: borgmatic-collective/borgmatic#1046
2025-03-30 01:57:11 +00:00
36265eea7d Docs update 2025-03-30 01:34:30 +00:00
8101e5c56f Add "list_details" config option support to new "recreate" action (#303). 2025-03-29 15:24:37 -07:00
c7feb16ab5 Merge branch 'main' into config-command-line. 2025-03-29 15:16:29 -07:00
da324ebeb7 Add "recreate" action to NEWS and docs (#610). 2025-03-29 15:15:36 -07:00
59f9d56aae Add a recreate action (#1030).
Reviewed-on: borgmatic-collective/borgmatic#1030
2025-03-29 22:07:52 +00:00
Vandal
dbf2e78f62 help changes 2025-03-30 03:05:46 +05:30
f6929f8891 Add last couple of missing tests after audit (#303). 2025-03-29 14:26:54 -07:00
Vandal
2716d9d0b0 add to schema 2025-03-29 23:25:50 +05:30
668f767bfc Adding some missing tests and fixing related flag vs. config logic (#303). 2025-03-28 23:11:15 -07:00
0182dbd914 Added 2 new unit tests and updated docs 2025-03-29 03:43:58 +00:00
1c27e0dadc Add an end-to-end test for command-line flags of configuration options (#303). 2025-03-28 13:46:58 -07:00
Vandal
8b3a682edf add tests and minor fixes 2025-03-29 01:26:20 +05:30
975a6e4540 Add additional tests for complete coverage (#303). 2025-03-28 11:37:48 -07:00
Vandal
7020f0530a update existing tests 2025-03-28 22:22:19 +05:30
5bf2f546b9 More automated tests (#303). 2025-03-27 21:01:56 -07:00
b4c558d013 Add tests for CLI arguments from schema logic (#303). 2025-03-27 16:49:14 -07:00
79bf641668 Set the action type when cloning an argument for a list index flag (#303). 2025-03-27 12:42:49 -07:00
50beb334dc Add tests for adding array element arguments and fix the code under test (#303). 2025-03-27 11:07:25 -07:00
Vandal
26fd41da92 add rest of flags 2025-03-27 22:18:34 +05:30
088da19012 Added Unit Tests 2025-03-27 11:26:56 +00:00
4c6674e0ad Merge branch 'main' into config-command-line. 2025-03-26 22:14:36 -07:00
486bec698d Add "key import" to reference documentation (#345). 2025-03-26 22:13:30 -07:00
7a766c717e 2nd Draft 2025-03-27 02:55:16 +00:00
520fb78a00 Clarify Btrfs documentation: borgmatic expects subvolume mount points in "source_directories" (#1043). 2025-03-26 11:39:16 -07:00
Vandal
acc2814f11 add archive timestamp filter 2025-03-26 23:39:06 +05:30
996b037946 1st 2025-03-26 17:39:10 +00:00
Vandal
9356924418 add archive options 2025-03-26 22:30:11 +05:30
79e4e089ee Fix typo in NEWS (#1044). 2025-03-26 09:57:53 -07:00
d2714cb706 Fix an error in the systemd credential hook when the credential name contains a "." character (#1044). 2025-03-26 09:53:52 -07:00
5a0430b9c8 Merge branch 'main' into config-command-line. 2025-03-25 22:39:51 -07:00
23efbb8df3 Fix line wrapping / code style (#837). 2025-03-25 22:31:50 -07:00
9e694e4df9 Add MongoDB custom command options to NEWS (#837). 2025-03-25 22:28:14 -07:00
76f7c53a1c Add custom command options for MongoDB hook (#837).
Reviewed-on: borgmatic-collective/borgmatic#1041
2025-03-26 05:27:03 +00:00
Vandal
203e84b91f hotfix 2025-03-25 21:57:06 +05:30
Vandal
ea5a2d8a46 add tests for the flags 2025-03-25 20:39:02 +05:30
Vandal
a8726c408a add tests 2025-03-25 19:35:15 +05:30
Vandal
3542673446 add test recreate with skip action 2025-03-25 11:36:06 +05:30
532a97623c Added test_build_restore_command_prevents_shell_injection() 2025-03-25 04:50:45 +00:00
e1fdfe4c2f Add credential hook directory expansion to NEWS (#422). 2025-03-24 13:00:38 -07:00
83a56a3fef Add directory expansion for file-based and KeePassXC credential hooks (#1042).
Reviewed-on: borgmatic-collective/borgmatic#1042
2025-03-24 19:57:18 +00:00
Vandal
b60cf2449a add recreate to schema 2025-03-25 00:48:27 +05:30
Vandal
e7f14bca87 add tests and requested changes 2025-03-25 00:16:20 +05:30
Nish_
4bca7bb198 add directory expansion for file-based and KeePassXC credentials
Signed-off-by: Nish_ <120EE0980@nitrkl.ac.in>
2025-03-24 21:04:55 +05:30
Vandal
fa3b140590 add patterns 2025-03-24 12:09:08 +05:30
Vandal
a1d2f7f221 add path 2025-03-24 11:51:33 +05:30
6a470be924 Made some changes in test file 2025-03-24 03:53:42 +00:00
d651813601 Custom command options for MongoDB hook #837 2025-03-24 03:39:26 +00:00
65b1d8e8b2 Clarify NEWS items (#303). 2025-03-23 19:13:07 -07:00
16a1121649 Get existing end-to-end tests passing (#303). 2025-03-23 18:45:49 -07:00
423627e67b Get existing unit/integration tests passing (#303). 2025-03-23 17:00:04 -07:00
9f7c71265e Add Bash completion for completing flags like "--foo[3].bar". 2025-03-23 16:32:31 -07:00
ba75958a2f Fix missing argument descriptions (#303). 2025-03-23 11:26:49 -07:00
57721937a3 Factor out schema type comparison in config generation and get several tests passing (#303). 2025-03-23 11:24:36 -07:00
f222bf2c1a Organizational refactoring (#303). 2025-03-22 22:52:23 -07:00
dc9da3832d Bold "not yet released" in docs to prevent confusion (#303). 2025-03-22 14:03:44 -07:00
f8eda92379 Code formatting (#303). 2025-03-22 14:01:39 -07:00
cc14421460 Fix list examples in generated configuration. 2025-03-22 13:58:42 -07:00
Vandal
a750d58a2d add recreate action 2025-03-22 21:18:28 +05:30
2045706faa merge upstream 2025-03-22 13:00:07 +00:00
976fb8f343 Add "compact_threshold" option, overridden by "compact --threshold" flag (#303). 2025-03-21 22:44:49 -07:00
5246a10b99 Merge branch 'main' into config-command-line. 2025-03-21 15:44:12 -07:00
524ec6b3cb Add "extract" action fix to NEWS (#1037). 2025-03-21 15:43:05 -07:00
6f1c77bc7d Merge branch 'main' of ssh://projects.torsion.org:3022/borgmatic-collective/borgmatic into config-command-line 2025-03-21 15:40:27 -07:00
7904ffb641 Fix extracting from remote repositories with working_directory defined (#1037).
Reviewed-on: borgmatic-collective/borgmatic#1038
Reviewed-by: Dan Helfman <witten@torsion.org>
2025-03-21 22:40:18 +00:00
cd5ba81748 Fix docs: Crontabs aren't executable (#1039).
Reviewed-on: borgmatic-collective/borgmatic#1039
2025-03-21 21:32:38 +00:00
5c11052b8c Merge branch 'main' into config-command-line 2025-03-21 14:30:39 -07:00
514ade6609 Fix inconsistent quotes in one documentation file (#790). 2025-03-21 14:27:40 -07:00
201469e2c2 Add "key import" action to NEWS (#345). 2025-03-21 14:26:01 -07:00
9ac2a2e286 Add key import action to import a copy of repository key from backup (#345).
Reviewed-on: borgmatic-collective/borgmatic#1036
Reviewed-by: Dan Helfman <witten@torsion.org>
2025-03-21 21:22:50 +00:00
Benjamin Bock
a16d138afc Crontabs aren't executable 2025-03-21 21:58:02 +01:00
Benjamin Bock
81a3a99578 Fix extracting from remote repositories with working_directory defined 2025-03-21 21:34:46 +01:00
f3cc3b1b65 Merge branch 'main' into config-command-line 2025-03-21 11:10:19 -07:00
587d31de7c Run all command hooks respecting the "working_directory" option if configured (#790). 2025-03-21 10:53:06 -07:00
cbfc0bead1 Exclude --match-archives from global flags since it already exists on several actions (#303). 2025-03-21 09:56:42 -07:00
Nish_
8aaa5ba8a6 minor changes
Signed-off-by: Nish_ <120EE0980@nitrkl.ac.in>
2025-03-21 19:26:12 +05:30
7d989f727d Don't auto-add CLI flags for configuration options that already have per-action CLI flags (#303). 2025-03-20 12:23:00 -07:00
Nish_
5525b467ef add key import command
Signed-off-by: Nish_ <120EE0980@nitrkl.ac.in>
2025-03-21 00:47:45 +05:30
89c98de122 Merge branch 'main' into config-command-line. 2025-03-20 11:37:04 -07:00
c2409d9968 Remove the "dump_data_sources" command hook, as it doesn't really solve the use case and works differently than all the other command hooks (#790). 2025-03-20 11:13:37 -07:00
624a7de622 Document "after" command hooks running in case of error and make sure that happens in case of "before" hook error (#790). 2025-03-20 10:57:39 -07:00
3119c924b4 In configuration option descriptions, remove mention of corresponding CLI flags because it looks dumb on the command-line help (#303). 2025-03-19 23:08:26 -07:00
ed6022d4a9 Add "list" option to configuration, corresponding to "--list" (#303). 2025-03-19 23:05:38 -07:00
3e21cdb579 Add "stats" option to configuration (#303). 2025-03-19 19:43:04 -07:00
d02d31f445 Use schema defaults instead of a flag name whitelist to make valueless boolean flags (#303). 2025-03-19 11:37:17 -07:00
1097a6576f Add "progress" option to configuration (#303). 2025-03-19 11:06:36 -07:00
63b0c69794 Add additional options under "repositories:" for parity with repo-create #303. 2025-03-18 20:54:14 -07:00
Vandal
4e2805918d update borg/recreate.py 2025-03-18 23:19:33 +05:30
711f5fa6cb UX nicety to make default-false boolean options into valueless CLI flags (#303). 2025-03-17 22:58:25 -07:00
93e7da823c Add an encryption option to repositories (#303). 2025-03-17 22:24:01 -07:00
903308864c Factor out schema type parsing (#303). 2025-03-17 10:46:02 -07:00
d75c8609c5 Merge branch 'main' into config-command-line 2025-03-17 10:34:20 -07:00
c926f0bd5d Clarify documentation for dump_data_sources command hook (#790). 2025-03-17 10:31:34 -07:00
7b14e8c7f2 Add feature to NEWS (#303). 2025-03-17 10:17:04 -07:00
87b9ad5aea Code formatting (#303). 2025-03-17 10:02:25 -07:00
eca78fbc2c Support setting whole lists and dicts from the command-line (#303). 2025-03-17 09:57:25 -07:00
Vandal
6adb0fd44c add borg recreate 2025-03-17 22:24:53 +05:30
05900c188f Expand docstrings (#303). 2025-03-15 22:58:39 -07:00
1d5713c4c5 Updated outdated schema comment referencing ~/.borgmatic path (#836). 2025-03-15 21:42:45 -07:00
f9612cc685 Add SQLite custom command option to NEWS (#836). 2025-03-15 21:37:23 -07:00
5742a1a2d9 Add custom command option for SQLite hook (#836).
Reviewed-on: borgmatic-collective/borgmatic#1027
2025-03-16 04:34:15 +00:00
Nish_
c84815bfb0 add custom dump and restore commands for sqlite hook
Signed-off-by: Nish_ <120EE0980@nitrkl.ac.in>
2025-03-16 09:07:49 +05:30
e1ff51ff1e Merge branch 'main' into config-command-line. 2025-03-15 10:03:59 -07:00
1c92d84e09 Add Borg 2 "prune --stats" flag change to NEWS (#1010). 2025-03-15 10:02:47 -07:00
1d94fb501f Conditionally pass --stats to prune based on Borg version (#1010).
Reviewed-on: borgmatic-collective/borgmatic#1026
2025-03-15 16:59:50 +00:00
92279d3c71 Initial work on command-line flags for all configuration (#303). 2025-03-14 22:59:43 -07:00
Nish_
1b4c94ad1e Add feature toggle to pass --stats to prune on Borg 1, but not Borg 2
Signed-off-by: Nish_ <120EE0980@nitrkl.ac.in>
2025-03-15 09:56:14 +05:30
901e668c76 Document a database use case involving a temporary database client container (#1020). 2025-03-12 17:10:35 -07:00
bcb224a243 Claim another implemented ticket in NEWS (#821). 2025-03-12 14:31:13 -07:00
6b6e1e0336 Make the "configuration" command hook support "error" hooks and also pinging monitoring on failure (#790). 2025-03-12 14:13:29 -07:00
f5c9bc4fa9 Add a "not yet released" note on 2.0.0 in docs (#790). 2025-03-11 16:46:07 -07:00
cdd0e6f052 Fix incorrect kwarg in LVM hook (#790). 2025-03-11 14:42:25 -07:00
7bdbadbac2 Deprecate all "before_*", "after_*" and "on_error" command hooks in favor of more flexible "commands:" (#790).
Reviewed-on: borgmatic-collective/borgmatic#1019
2025-03-11 21:22:33 +00:00
d3413e0907 Documentation clarification (#1019). 2025-03-11 14:20:42 -07:00
8a20ee7304 Fix typo in documentation (#1019). 2025-03-11 14:08:53 -07:00
325f53c286 Context tweaks + mention configuration upgrade in command hook documentation (#1019). 2025-03-11 14:07:06 -07:00
b4d24798bf More command hook documentation updates (#1019). 2025-03-11 13:03:58 -07:00
7965eb9de3 Correctly handle errors in command hooks (#1019). 2025-03-11 11:36:28 -07:00
8817364e6d Documentation on command hooks (#1019). 2025-03-10 22:38:48 -07:00
965740c778 Update version of command hooks since they didn't get released in 1.9.14 (#1019). 2025-03-10 10:37:09 -07:00
2a0319f02f Merge branch 'main' into unified-command-hooks. 2025-03-10 10:35:36 -07:00
fbdb09b87d Bump version for release. 2025-03-10 10:17:36 -07:00
bec5a0c0ca Fix end-to-end tests for Btrfs (#1023). 2025-03-10 10:15:23 -07:00
4ee7f72696 Fix an error in the Btrfs hook when attempting to snapshot a read-only subvolume (#1023). 2025-03-09 23:04:55 -07:00
9941d7dc57 More docs and command hook context tweaks (#1019). 2025-03-09 17:01:46 -07:00
ec88bb2e9c Merge branch 'main' into unified-command-hooks. 2025-03-09 13:37:17 -07:00
68b6d01071 Fix a regression in which the "exclude_patterns" option didn't expand "~" (#1021). 2025-03-09 13:35:22 -07:00
b52339652f Initial command hooks documentation work (#1019). 2025-03-09 09:57:13 -07:00
4fd22b2df0 Merge branch 'main' into unified-command-hooks. 2025-03-08 21:02:04 -08:00
86b138e73b Clarify command hook documentation. 2025-03-08 21:00:58 -08:00
5ab766b51c Add a few more missing tests (#1019). 2025-03-08 20:55:13 -08:00
45c114973c Add missing test coverage for new/changed code (#1019). 2025-03-08 18:31:16 -08:00
6a96a78cf1 Fix existing tests (#1019). 2025-03-07 22:58:25 -08:00
e06c6740f2 Switch to context manager for running "dump_data_sources" before/after hooks (#790). 2025-03-07 10:33:39 -08:00
10bd1c7b41 Remove restore_data_source_dump as a command hook for now (#790). 2025-03-06 22:53:19 -08:00
d4f48a3a9e Initial work on unified command hooks (#790). 2025-03-06 11:23:24 -08:00
c76a108422 Link to Zabbix documentation from NEWS. 2025-03-06 10:37:00 -08:00
eb5dc128bf Fix incorrect test name (#1017). 2025-03-06 10:34:28 -08:00
1d486d024b Fix a regression in which some MariaDB/MySQL passwords were not escaped correctly (#1017). 2025-03-06 10:32:38 -08:00
5a8f27d75c Add single quotes around the MariaDB password (#1017).
Reviewed-on: borgmatic-collective/borgmatic#1017
2025-03-06 18:01:43 +00:00
a926b413bc Updating automated test, and fixing linting errors. 2025-03-06 09:00:33 -03:30
18ffd96d62 Add single quotes around the password.
When the DB password uses some special characters, the
defaults-extra-file can be incorrect. In the case of a password with
the # symbol, anything after that is considered a comment. The single
quotes around the password rectify this.
2025-03-05 22:51:41 -03:30
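The commit message above explains the quoting problem: in a MySQL-style defaults-extra-file, `#` starts a comment, so an unquoted password containing `#` is silently truncated. A small sketch of the effect, with a toy parser mimicking that comment handling (hypothetical helper names, not borgmatic's code):

```python
def defaults_extra_file(password, quote):
    # Render a minimal [client] section, optionally single-quoting the password.
    value = f"'{password}'" if quote else password
    return f"[client]\npassword={value}\n"

def parsed_password(contents):
    # Toy parse mimicking option-file rules: quoted values are taken whole,
    # while in unquoted values "#" begins a comment.
    for line in contents.splitlines():
        if line.startswith('password='):
            value = line.split('=', 1)[1]
            if value.startswith("'"):
                return value[1:value.index("'", 1)]
            return value.split('#', 1)[0].strip()
    return None

print(parsed_password(defaults_extra_file('p#ss', quote=False)))  # p
print(parsed_password(defaults_extra_file('p#ss', quote=True)))   # p#ss
```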
c0135864c2 With the PagerDuty monitoring hook, send borgmatic logs to PagerDuty so they show up in the incident UI (#409). 2025-03-04 08:55:09 -08:00
ddfd3c6ca1 Clarify Zabbix monitoring hook documentation about creating items (#936). 2025-03-03 16:02:22 -08:00
dbe82ff11e Bump version for release. 2025-03-03 10:21:15 -08:00
55c0ab1610 Add "tls" options to the MariaDB and MySQL database hooks. 2025-03-03 10:07:03 -08:00
1f86100f26 NEWS wording tweaks. 2025-03-02 20:10:20 -08:00
2a16ffab1b When ctrl-C is pressed, ensure Borg actually exits (#1015). 2025-03-02 10:32:57 -08:00
4b2f7e03af Fix broken "config generate" (#975). 2025-03-01 21:02:32 -08:00
024006f4c0 Title case Borg. 2025-03-01 20:56:40 -08:00
4c71e600ca Expand a little on the specifics of backups of an LVM volume (#1014).
Reviewed-on: borgmatic-collective/borgmatic#1014
2025-03-02 04:55:13 +00:00
114f5702b2 Expand a little on the specifics of backups of an LVM volume. 2025-03-02 14:22:57 +11:00
54afe87a9f Add a "compression" option to the PostgreSQL database hook (#975). 2025-03-01 17:29:16 -08:00
25b6a49df7 Send database passwords to MongoDB via anonymous pipe (#1013). 2025-03-01 10:04:04 -08:00
b97372adf2 Add MariaDB and MySQL anonymous pipe to NEWS (#1009). 2025-03-01 08:49:42 -08:00
6bc9a592d9 Send MariaDB and MySQL passwords via anonymous pipe instead of environment variable (#1009).
Reviewed-on: borgmatic-collective/borgmatic#1011
2025-03-01 03:33:08 +00:00
839862cff0 Update documentation link text about providing database passwords from external sources (#1009). 2025-02-28 19:31:22 -08:00
06b065cb09 Add missing test coverage (#1009). 2025-02-28 18:28:09 -08:00
1e5c256d54 Get tests passing again (#1009). 2025-02-28 14:40:00 -08:00
baf5fec78d If the user supplies their own --defaults-extra-file, include it from the one we generate (#1009). 2025-02-28 10:53:17 -08:00
48a4fbaa89 Add missing test coverage for defaults file function (#1009). 2025-02-28 09:21:01 -08:00
1e274d7153 Add some missing test mocking (#1009). 2025-02-28 08:59:38 -08:00
c41b743819 Get existing unit tests passing (#1009). 2025-02-28 08:37:03 -08:00
36d0073375 Send MySQL passwords via anonymous pipe instead of environment variable (#1009). 2025-02-27 10:42:47 -08:00
0bd418836e Send MariaDB passwords via anonymous pipe instead of environment variable (#1009) 2025-02-27 10:15:45 -08:00
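The run of commits above moves database passwords from environment variables to anonymous pipes. A minimal sketch of the general technique on POSIX systems (an illustration under stated assumptions, not borgmatic's implementation): write the secret into one end of an `os.pipe()` and hand the read end to the child, so the secret never appears in the environment or on the command line.

```python
import os
import subprocess

def run_with_secret_fd(command, secret):
    # Fine for short secrets that fit in the pipe buffer.
    read_fd, write_fd = os.pipe()
    os.write(write_fd, secret.encode())
    os.close(write_fd)  # child sees EOF after the secret
    result = subprocess.run(
        command + [f'/dev/fd/{read_fd}'],  # e.g. a --password-file style argument
        pass_fds=(read_fd,),               # inherit only this descriptor
        capture_output=True,
    )
    os.close(read_fd)
    return result

# Example: "cat" stands in for a client program reading a password file.
print(run_with_secret_fd(['cat'], 'hunter2').stdout.decode())  # hunter2
```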
923fa7d82f Include contributors of closed tickets in "recent contributors" documentation. 2025-02-27 09:23:08 -08:00
dce0528057 In the Zabbix monitoring hook, support Zabbix 7.2's authentication changes (#1003). 2025-02-26 22:33:01 -08:00
8a6c6c84d2 Add Uptime Kuma "verify_tls" option to NEWS. 2025-02-24 11:30:16 -08:00
1e21c8f97b Add "verify_tls" option to Uptime Kuma hook.
Merge pull request #90 from columbarius/uptimekuma-verify-tls
2025-02-24 11:28:18 -08:00
columbarius
2eab74a521 Add "verify_tls" option to Uptime Kuma hook. 2025-02-24 20:12:47 +01:00
3bca686707 Fix a ZFS error during snapshot cleanup (#1001). 2025-02-23 17:01:35 -08:00
8854b9ad20 Backing out a ZFS change that hasn't been confirmed working quite yet. 2025-02-23 15:49:12 -08:00
bcc463688a When getting all ZFS dataset mount points, deduplicate and filter out "none". 2025-02-23 15:46:39 -08:00
596305e3de Bump version for release. 2025-02-23 09:59:53 -08:00
c462f0c84c Fix Python < 3.12 compatibility issue (#1005). 2025-02-23 09:59:19 -08:00
4f0142c3c5 Fix Python < 3.12 compatibility issue (#1005). 2025-02-23 09:09:47 -08:00
4f88018558 Bump version for release. 2025-02-22 14:39:45 -08:00
3642687ab5 Fix broken tests (#999). 2025-02-22 14:32:32 -08:00
5d9c111910 Fix a runtime directory error from a conflict between "extra_borg_options" and special file detection (#999). 2025-02-22 14:26:21 -08:00
3cf19dd1b0 Send the "encryption_passphrase" option to Borg via an anonymous pipe (#998).
Reviewed-on: borgmatic-collective/borgmatic#998
2025-02-22 17:57:37 +00:00
ad3392ca15 Ignore the BORG_PASSCOMMAND environment variable when the "encryption_passphrase" option is set. 2025-02-22 09:55:07 -08:00
087b7f5c7b Merge branch 'main' into passphrase-via-file-descriptor 2025-02-22 09:27:39 -08:00
34bb09e9be Document Zabbix server version compatibility (#1003). 2025-02-22 09:26:08 -08:00
a61eba8c79 Add PR number to NEWS item. 2025-02-21 22:30:31 -08:00
2280bb26b6 Fix a few tests to mock more accurately. 2025-02-21 22:08:08 -08:00
4ee2603fef Merge branch 'main' into passphrase-via-file-descriptor 2025-02-21 20:26:48 -08:00
cc2ede70ac Fix ZFS mount errors (#1001).
Reviewed-on: borgmatic-collective/borgmatic#1002
2025-02-22 04:13:35 +00:00
02d8ecd66e Document the root pattern requirement for snapshotting (#1001). 2025-02-21 18:08:34 -08:00
9ba78fa33b Don't try to unmount empty directories (#1001). 2025-02-21 17:59:45 -08:00
a3e34d63e9 Remove debugging prints (#1001). 2025-02-21 16:36:12 -08:00
bc25ac4eea Fix Btrfs end-to-end-test (#1001). 2025-02-21 16:32:07 -08:00
e69c686abf Get all unit/integration tests passing (#1001). 2025-02-21 11:32:57 -08:00
0210bf76bc Fix ZFS and Btrfs tests (#1001). 2025-02-20 22:58:05 -08:00
e69cce7e51 Document ZFS snapshotting exclusion of "canmount=off" datasets (#1001). 2025-02-20 14:04:23 -08:00
3655e8784a Add NEWS items for filesystem hook fixes/changes (#1001). 2025-02-20 13:25:09 -08:00
58aed0892c Initial work on fixing ZFS mount errors (#1001). 2025-02-19 22:49:14 -08:00
0e65169503 Improve clarity of comments and variable names of runtime directory exclude detection logic (#999). 2025-02-17 14:12:55 -08:00
07ecc0ffd6 Send the "encryption_passphrase" option to Borg via an anonymous pipe. 2025-02-17 11:03:36 -08:00
37ad398aff Add a ticket number to NEWS for (some of) the credential hook work. 2025-02-16 09:12:52 -08:00
056dfc6d33 Add Btrfs "/" subvolume fix to NEWS. 2025-02-15 09:56:46 -08:00
bf850b9d38 Fix path handling error when handling btrfs '/' subvolume.
Merge pull request #89 from dmitry-t7ko/btrfs-root-submodule-fix
2025-02-15 09:49:13 -08:00
7f22612bf1 Add credential loading from file, KeePassXC, and Docker/Podman secrets.
Reviewed-on: borgmatic-collective/borgmatic#994
2025-02-15 04:20:11 +00:00
e02a0e6322 Support working directory for container and file credential hooks. 2025-02-14 19:35:12 -08:00
2ca23b629c Add end-to-end tests for new credential hooks, along with some related configuration options. 2025-02-14 15:33:30 -08:00
b283e379d0 Actually pass the current configuration to credential hooks. 2025-02-14 10:15:52 -08:00
5dda9c8ee5 Add unit tests for new credential hooks. 2025-02-13 16:38:50 -08:00
Dmitrii Tishchenko
653d8c0946 Remove unneeded 'continue' 2025-02-13 21:44:45 +00:00
Dmitrii Tishchenko
92e87d839d Fix path handling error when handling btrfs '/' submodule 2025-02-13 17:12:23 +00:00
d6cf48544a Get existing tests passing. 2025-02-12 22:49:16 -08:00
8745b9939d Add documentation for new credential hooks. 2025-02-12 21:44:17 -08:00
5661b67cde Merge branch 'main' into keepassxc-docker-podman-file-credentials 2025-02-12 09:14:49 -08:00
aa4a9de3b2 Fix the "create" action to omit the repository label prefix from Borg's output when databases are enabled (#996). 2025-02-12 09:12:59 -08:00
f9ea45493d Add missing dev0 tag to version. 2025-02-11 23:00:26 -08:00
a0ba5b673b Add credential loading from file, KeePassXC, and Docker/Podman secrets. 2025-02-11 22:54:07 -08:00
50096296da Revamp systemd credential syntax to be more consistent with constants (#966). 2025-02-10 22:01:23 -08:00
3bc14ba364 Bump version for release. 2025-02-10 14:21:33 -08:00
c9c6913547 Add a "!credential" tag for loading systemd credentials into borgmatic configuration (#966).
Reviewed-on: borgmatic-collective/borgmatic#993
2025-02-10 22:18:43 +00:00
779f51f40a Fix favicon on non-home pages. 2025-02-10 13:24:27 -08:00
24b846e9ca Additional test coverage (#966). 2025-02-10 10:05:51 -08:00
73fe29b055 Add additional test coverage for credential tag (#966). 2025-02-10 09:52:07 -08:00
775385e688 Get unit tests passing again (#966). 2025-02-09 22:44:38 -08:00
efdbee934a Update documentation to describe delayed !credential tag approach (#966). 2025-02-09 15:27:58 -08:00
49719dc309 Load credentials from database hooks (#966). 2025-02-09 11:35:26 -08:00
b7e3ee8277 Revamped the credentials to load them much closer to where they're used (#966). 2025-02-09 11:12:40 -08:00
97fe1a2c50 Flake fixes (#966). 2025-02-08 19:28:03 -08:00
66abf38b39 Add end-to-end tests for the systemd credential hook (#966). 2025-02-08 17:50:59 -08:00
5baf091853 Add automated tests for the systemd credential hook (#966). 2025-02-08 10:42:11 -08:00
c5abcc1fdf Add documentation for the "!credential" tag (#966). 2025-02-07 16:04:10 -08:00
9a9a8fd1c6 Add a "!credential" tag for loading systemd credentials into borgmatic configuration (#966). 2025-02-07 14:09:26 -08:00
ab9e8d06ee Add a delayed logging handler that delays anything logged before logging is actually configured. 2025-02-07 09:50:05 -08:00
5a2cd1b261 Add support for Python 3.13. 2025-02-06 14:21:36 -08:00
ffaa99ba15 With the "max_duration" option or the "--max-duration" flag, run the archives and repository checks separately so they don't interfere with one another (#988). 2025-02-06 11:52:16 -08:00
5dc0b08f22 Fix the log message code to avoid using Python 3.10+ logging features (#989). 2025-02-04 11:51:39 -08:00
23009e22aa When both "encryption_passcommand" and "encryption_passphrase" are configured, prefer "encryption_passphrase" even if it's an empty value (#987). 2025-02-03 23:20:31 -08:00
6cfa10fb7e Fix a "list" action error when the "encryption_passcommand" option is set (#987). 2025-02-03 23:11:59 -08:00
d29d0bc1c6 NEWS wording tweaks for clarity. 2025-02-03 11:22:54 -08:00
c3f4f94190 Bump version for release. 2025-02-03 11:20:13 -08:00
b2d61ade4e Change the default value for the "--original-hostname" flag from "localhost" to no host specified (#985). 2025-02-03 11:17:21 -08:00
cca9039863 Move the passcommand logic out of a hook to prevent future security issues (e.g., passphrase exfiltration attacks) if a user invokes a credential hook from an arbitrary configuration value (#961). 2025-01-31 22:15:53 -08:00
afcf253318 Fix flake errors (#961). 2025-01-31 10:27:20 -08:00
76533c7db5 Add a clarifying comment to the NEWS entry (#961). 2025-01-31 10:26:05 -08:00
0073366dfc Add a passcommand hook so borgmatic can collect the encryption passphrase once and pass it to Borg multiple times (#961).
Reviewed-on: borgmatic-collective/borgmatic#984
2025-01-31 18:13:38 +00:00
13acaa47e4 Add an end-to-end test for the passcommand hook (#961). 2025-01-30 22:50:13 -08:00
cf326a98a5 Add test coverage for new code (#961). 2025-01-30 21:29:52 -08:00
355eef186e Get existing tests passing again (#961). 2025-01-30 20:18:03 -08:00
c392e4914c Add documentation (#961). 2025-01-30 10:20:24 -08:00
8fed8e0695 Add a passcommand hook to NEWS (#961). 2025-01-30 09:55:32 -08:00
52189490a2 Docstring typo (#961). 2025-01-30 09:48:55 -08:00
26b44699ba Add a passphrase hook so borgmatic can collect the encryption passphrase once and pass it to Borg multiple times (#961). 2025-01-30 09:35:20 -08:00
09933c3dc7 Log the repository path or label on every relevant log message, not just some logs (#635).
Reviewed-on: borgmatic-collective/borgmatic#980
2025-01-29 18:39:49 +00:00
c702dca8da Merge branch 'main' into log-repository-everywhere 2025-01-29 10:31:30 -08:00
62003c58ea Fix the Btrfs hook to support subvolumes with names like "@home", different from their mount points (#983). 2025-01-29 09:46:39 -08:00
67c22e464a Code formatting (#635). 2025-01-29 08:00:42 -08:00
5a9066940f Add monitoring end-to-end tests (#635). 2025-01-28 23:06:22 -08:00
61f0987051 Merge branch 'main' into log-repository-everywhere 2025-01-27 22:03:35 -08:00
63c39be55f Fix flaking issues (#635). 2025-01-27 12:28:36 -08:00
7e344e6e0a Complete test coverage for new code (#635). 2025-01-27 12:25:28 -08:00
b02ff8b6e5 Fix "spot" check file count delta error (#981). 2025-01-27 10:51:06 -08:00
b6ff242d3a Fix for borgmatic "exclude_patterns" and "exclude_from" recursing into excluded subdirectories (#982). 2025-01-27 10:07:19 -08:00
71f1819f05 Some additional test coverage (#635). 2025-01-27 09:27:12 -08:00
31b6e21139 Fix end-to-end tests and update more log messages (#635). 2025-01-26 19:03:40 -08:00
7d56641f56 Get existing unit tests passing (#635). 2025-01-26 12:13:29 -08:00
1ad6be2077 Add missing test coverage and fix incorrect test expectations (#855). 2025-01-26 09:29:54 -08:00
803361b850 Some text fixes (#635). 2025-01-26 09:12:18 -08:00
e0059de711 Add log prefix context manager to make prefix cleanup/restoration easier (#635). 2025-01-25 21:56:41 -08:00
b9ec9bb873 Don't prefix command output (like Borg output) with the global log prefix (#635). 2025-01-25 14:49:39 -08:00
8c5db19490 Code formatting (#635). 2025-01-25 14:14:48 -08:00
cc7e01be68 Log the repository path or label on every relevant log message, not just some logs (#635). 2025-01-25 14:01:25 -08:00
1232ba8045 Revert "Log the repository path or label on every relevant log message, not just some logs (#635)."
This reverts commit 90c1161a8c.
2025-01-25 13:57:56 -08:00
90c1161a8c Log the repository path or label on every relevant log message, not just some logs (#635). 2025-01-25 13:55:58 -08:00
02451a8b30 Further database container dump documentation clarifications (#978). 2025-01-25 09:17:13 -08:00
730350b31a Fix incorrect option name within schema description. 2025-01-25 08:04:13 -08:00
203e1f4e99 Bump version for release. 2025-01-25 08:01:34 -08:00
4c35a564ef Fix root patterns so they don't have an invalid "sh:" prefix before getting passed to Borg (#979). 2025-01-25 07:59:53 -08:00
7551810ea6 Clarify/correct documentation about dumping databases when using containers (#978). 2025-01-24 14:31:38 -08:00
ce523eeed6 Add a blurb about recent contributors. 2025-01-23 15:11:54 -08:00
3c0def6d6d Expand the recent contributors documentation section to ticket submitters. 2025-01-23 14:41:26 -08:00
f08014e3be Code formatting. 2025-01-23 12:11:27 -08:00
86ad93676d Bump version for release. 2025-01-23 12:09:20 -08:00
e1825d2bcb Add #977 to NEWS. 2025-01-23 12:08:34 -08:00
92b8c0230e Fix exclude patterns parsing to support pattern styles (#977).
Reviewed-on: borgmatic-collective/borgmatic#976
2025-01-23 20:06:11 +00:00
Pavel Andreev
73c196aa70 Fix according to review comments 2025-01-23 19:49:10 +00:00
Pavel Andreev
5d390d7953 Fix patterns parsing 2025-01-23 15:58:43 +00:00
ffb342780b Link to Sentry's DSN documentation (#855). 2025-01-21 17:28:32 -08:00
9871267f97 Add a Sentry monitoring hook (#855). 2025-01-21 17:23:56 -08:00
914c2b17e9 Add a Sentry monitoring hook (#855). 2025-01-21 17:23:18 -08:00
804455ac9f Fix for "exclude_from" files being completely ignored (#971). 2025-01-19 10:27:13 -08:00
4fe0fd1576 Fix version number in NEWS. 2025-01-18 09:55:03 -08:00
e3d40125cb Fix for a "spot" check error when a filename in the most recent archive contains a newline (#968). 2025-01-18 09:54:30 -08:00
e66df22a6e Fix for an error when a blank line occurs in the configured patterns or excludes (#970). 2025-01-18 09:25:29 -08:00
226 changed files with 16006 additions and 4121 deletions

NEWS

@@ -1,3 +1,136 @@
2.0.0.dev0
* TL;DR: More flexible, completely revamped command hooks. All config options settable on the
command-line. Config option defaults for many command-line flags. New "key import" and "recreate"
actions. Almost everything is backwards compatible.
* #262: Add a "default_actions" option that supports disabling default actions when borgmatic is
run without any command-line arguments.
* #303: Deprecate the "--override" flag in favor of direct command-line flags for every borgmatic
configuration option. See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/#configuration-overrides
* #303: Add configuration options that serve as defaults for some (but not all) command-line
action flags. For example, each entry in "repositories:" now has an "encryption" option that
applies to the "repo-create" action, serving as a default for the "--encryption" flag. See the
documentation for more information: https://torsion.org/borgmatic/docs/reference/configuration/
* #345: Add a "key import" action to import a repository key from backup.
* #422: Add home directory expansion to file-based and KeePassXC credential hooks.
* #610: Add a "recreate" action for recreating archives, for instance for retroactively excluding
particular files from existing archives.
* #790, #821: Deprecate all "before_*", "after_*" and "on_error" command hooks in favor of more
flexible "commands:". See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/
* #790: BREAKING: For both new and deprecated command hooks, run a configured "after" hook even if
an error occurs first. This allows you to perform cleanup steps that correspond to "before"
preparation commands—even when something goes wrong.
* #790: BREAKING: Run all command hooks (both new and deprecated) respecting the
"working_directory" option if configured, meaning that hook commands are run in that directory.
* #836: Add a custom command option for the SQLite hook.
* #837: Add custom command options for the MongoDB hook.
* #1010: When using Borg 2, don't pass the "--stats" flag to "borg prune".
* #1020: Document a database use case involving a temporary database client container:
https://torsion.org/borgmatic/docs/how-to/backup-your-databases/#containers
* #1037: Fix an error with the "extract" action when both a remote repository and a
"working_directory" are used.
* #1044: Fix an error in the systemd credential hook when the credential name contains a "."
character.
* #1047: Add "key-file" and "yubikey" options to the KeePassXC credential hook.
* #1048: Fix a "no such file or directory" error in ZFS, Btrfs, and LVM hooks with nested
directories that reside on separate devices/filesystems.
* #1050: Fix a failure in the "spot" check when the archive contains a symlink.
* #1051: Add configuration filename to the "Successfully ran configuration file" log message.
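To make the headline 2.0.0 change concrete, here is a hedged sketch of the new "commands:" hooks that replace the deprecated "before_*"/"after_*" options. The option names follow the linked how-to as best understood; treat the exact schema in the documentation as authoritative, and the echo commands as placeholders:

```yaml
# Illustrative only: run commands before and after the "create" action,
# instead of the deprecated "before_backup"/"after_backup" hooks.
commands:
    - before: action
      when: [create]
      run:
          - echo "Starting a backup."
    - after: action
      when: [create]
      run:
          - echo "Backup complete."
```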
1.9.14
* #409: With the PagerDuty monitoring hook, send borgmatic logs to PagerDuty so they show up in the
incident UI. See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#pagerduty-hook
* #936: Clarify Zabbix monitoring hook documentation about creating items:
https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#zabbix-hook
* #1017: Fix a regression in which some MariaDB/MySQL passwords were not escaped correctly.
* #1021: Fix a regression in which the "exclude_patterns" option didn't expand "~" (the user's
home directory). This fix means that all "patterns" and "patterns_from" also now expand "~".
* #1023: Fix an error in the Btrfs hook when attempting to snapshot a read-only subvolume. Now,
read-only subvolumes are ignored since Btrfs can't actually snapshot them.
1.9.13
* #975: Add a "compression" option to the PostgreSQL database hook.
* #1001: Fix a ZFS error during snapshot cleanup.
* #1003: In the Zabbix monitoring hook, support Zabbix 7.2's authentication changes.
* #1009: Send database passwords to MariaDB and MySQL via anonymous pipe, which is more secure than
using an environment variable.
* #1013: Send database passwords to MongoDB via anonymous pipe, which is more secure than using
"--password" on the command-line!
* #1015: When ctrl-C is pressed, more strongly encourage Borg to actually exit.
* Add a "verify_tls" option to the Uptime Kuma monitoring hook for disabling TLS verification.
* Add "tls" options to the MariaDB and MySQL database hooks to enable or disable TLS encryption
between client and server.
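As a rough illustration of the new database TLS toggle described above (the "tls" option name is taken from the entry; the database name and hostname are hypothetical):

```yaml
# Illustrative only: disable TLS for one MariaDB dump.
mariadb_databases:
    - name: posts
      hostname: db.example.org
      tls: false
```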
1.9.12
* #1005: Fix the credential hooks to avoid using Python 3.12+ string features. Now borgmatic will
work with Python 3.9, 3.10, and 3.11 again.
1.9.11
* #795: Add credential loading from file, KeePassXC, and Docker/Podman secrets. See the
documentation for more information:
https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/
* #996: Fix the "create" action to omit the repository label prefix from Borg's output when
databases are enabled.
* #998: Send the "encryption_passphrase" option to Borg via an anonymous pipe, which is more secure
than using an environment variable.
* #999: Fix a runtime directory error from a conflict between "extra_borg_options" and special file
detection.
* #1001: For the ZFS, Btrfs, and LVM hooks, only make snapshots for root patterns that come from
a borgmatic configuration option (e.g. "source_directories")—not from other hooks within
borgmatic.
* #1001: Fix a ZFS/LVM error due to colliding snapshot mount points for nested datasets or logical
volumes.
* #1001: Don't try to snapshot ZFS datasets that have the "canmount=off" property.
* Fix another error in the Btrfs hook when a subvolume mounted at "/" is configured in borgmatic's
source directories.
1.9.10
* #966: Add a "{credential ...}" syntax for loading systemd credentials into borgmatic
configuration files. See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/
* #987: Fix a "list" action error when the "encryption_passcommand" option is set.
* #987: When both "encryption_passcommand" and "encryption_passphrase" are configured, prefer
"encryption_passphrase" even if it's an empty value.
* #988: With the "max_duration" option or the "--max-duration" flag, run the archives and
repository checks separately so they don't interfere with one another. Previously, borgmatic
refused to run checks in this situation.
* #989: Fix the log message code to avoid using Python 3.10+ logging features. Now borgmatic will
work with Python 3.9 again.
* Capture and delay any log records produced before logging is fully configured, so early log
records don't get lost.
* Add support for Python 3.13.
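The "{credential ...}" syntax from #966 can be sketched as follows; the credential name "borgmatic.pw" is purely an example, and the systemd credential must be supplied to the borgmatic service separately:

```yaml
# Illustrative only: load the passphrase from a systemd credential
# rather than hard-coding it in the configuration file.
encryption_passphrase: "{credential systemd borgmatic.pw}"
```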
1.9.9
* #635: Log the repository path or label on every relevant log message, not just some logs.
* #961: When the "encryption_passcommand" option is set, call the command once from borgmatic to
collect the encryption passphrase and then pass it to Borg multiple times. See the documentation
for more information: https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/
* #981: Fix a "spot" check file count delta error.
* #982: Fix for borgmatic "exclude_patterns" and "exclude_from" recursing into excluded
subdirectories.
* #983: Fix the Btrfs hook to support subvolumes with names like "@home" different from their
mount points.
* #985: Change the default value for the "--original-hostname" flag from "localhost" to no host
specified. This way, the "restore" action works without a hostname if there's a single matching
database dump.
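For the #961 change above, a minimal example of "encryption_passcommand" (the "pass" store path is hypothetical); borgmatic now invokes this command once and hands the resulting passphrase to Borg on each call:

```yaml
# Illustrative only: fetch the repository passphrase from a
# password manager command.
encryption_passcommand: pass show backups/borg-repo
```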
1.9.8
* #979: Fix root patterns so they don't have an invalid "sh:" prefix before getting passed to Borg.
* Expand the recent contributors documentation section to include ticket submitters—not just code
contributors—because there are multiple ways to contribute to the project! See:
https://torsion.org/borgmatic/#recent-contributors
1.9.7
* #855: Add a Sentry monitoring hook. See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#sentry-hook
* #968: Fix for a "spot" check error when a filename in the most recent archive contains a newline.
* #970: Fix for an error when there's a blank line in the configured patterns or excludes.
* #971: Fix for "exclude_from" files being completely ignored.
* #977: Fix for "exclude_patterns" and "exclude_from" not supporting explicit pattern styles (e.g.,
"sh:" or "re:").
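The #977 fix above means explicit pattern styles now work in excludes; a hedged sketch with illustrative paths:

```yaml
# Illustrative only: exclude patterns using explicit "re:" and "sh:"
# styles, which previously weren't parsed correctly.
exclude_patterns:
    - re:^etc/.*\.tmp$
    - sh:home/*/.cache
```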
1.9.6
* #959: Fix an error in the Btrfs hook when a subvolume mounted at "/" is configured in borgmatic's
source directories.


@@ -56,6 +56,8 @@ borgmatic is powered by [Borg Backup](https://www.borgbackup.org/).
## Integrations
### Data
<a href="https://www.postgresql.org/"><img src="docs/static/postgresql.png" alt="PostgreSQL" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://www.mysql.com/"><img src="docs/static/mysql.png" alt="MySQL" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://mariadb.com/"><img src="docs/static/mariadb.png" alt="MariaDB" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
@@ -65,6 +67,11 @@ borgmatic is powered by [Borg Backup](https://www.borgbackup.org/).
<a href="https://btrfs.readthedocs.io/"><img src="docs/static/btrfs.png" alt="Btrfs" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://sourceware.org/lvm2/"><img src="docs/static/lvm.png" alt="LVM" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://rclone.org"><img src="docs/static/rclone.png" alt="rclone" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://www.borgbase.com/?utm_source=borgmatic"><img src="docs/static/borgbase.png" alt="BorgBase" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
### Monitoring
<a href="https://healthchecks.io/"><img src="docs/static/healthchecks.png" alt="Healthchecks" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://uptime.kuma.pet/"><img src="docs/static/uptimekuma.png" alt="Uptime Kuma" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://cronitor.io/"><img src="docs/static/cronitor.png" alt="Cronitor" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
@@ -75,7 +82,15 @@ borgmatic is powered by [Borg Backup](https://www.borgbackup.org/).
<a href="https://grafana.com/oss/loki/"><img src="docs/static/loki.png" alt="Loki" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://github.com/caronc/apprise/wiki"><img src="docs/static/apprise.png" alt="Apprise" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://www.zabbix.com/"><img src="docs/static/zabbix.png" alt="Zabbix" height="40px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://www.borgbase.com/?utm_source=borgmatic"><img src="docs/static/borgbase.png" alt="BorgBase" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://sentry.io/"><img src="docs/static/sentry.png" alt="Sentry" height="40px" style="margin-bottom:20px; margin-right:20px;"></a>
### Credentials
<a href="https://systemd.io/"><img src="docs/static/systemd.png" alt="Sentry" height="40px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://www.docker.com/"><img src="docs/static/docker.png" alt="Docker" height="40px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://podman.io/"><img src="docs/static/podman.png" alt="Podman" height="40px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://keepassxc.org/"><img src="docs/static/keepassxc.png" alt="Podman" height="40px" style="margin-bottom:20px; margin-right:20px;"></a>
## Getting started
@@ -164,4 +179,8 @@ info on cloning source code, running tests, etc.
### Recent contributors
Thanks to all borgmatic contributors! There are multiple ways to contribute to
this project, so the following includes those who have fixed bugs, contributed
features, *or* filed tickets.
{% include borgmatic/contributors.html %}


@@ -22,9 +22,7 @@ def run_borg(
if borg_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, borg_arguments.repository
):
logger.info(
f'{repository.get("label", repository["path"])}: Running arbitrary Borg command'
)
logger.info('Running arbitrary Borg command')
archive_name = borgmatic.borg.repo_list.resolve_archive_name(
repository['path'],
borg_arguments.archive,


@@ -21,9 +21,7 @@ def run_break_lock(
if break_lock_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, break_lock_arguments.repository
):
logger.info(
f'{repository.get("label", repository["path"])}: Breaking repository and cache locks'
)
logger.info('Breaking repository and cache locks')
borgmatic.borg.break_lock.break_lock(
repository['path'],
config,


@@ -16,7 +16,7 @@ def run_change_passphrase(
remote_path,
):
'''
Run the "key change-passprhase" action for the given repository.
Run the "key change-passphrase" action for the given repository.
'''
if (
change_passphrase_arguments.repository is None
@@ -24,9 +24,7 @@ def run_change_passphrase(
repository, change_passphrase_arguments.repository
)
):
logger.info(
f'{repository.get("label", repository["path"])}: Changing repository passphrase'
)
logger.info('Changing repository passphrase')
borgmatic.borg.change_passphrase.change_passphrase(
repository['path'],
config,


@@ -170,7 +170,7 @@ def filter_checks_on_frequency(
if calendar.day_name[datetime_now().weekday()] not in days:
logger.info(
f"Skipping {check} check due to day of the week; check only runs on {'/'.join(days)} (use --force to check anyway)"
f"Skipping {check} check due to day of the week; check only runs on {'/'.join(day.title() for day in days)} (use --force to check anyway)"
)
filtered_checks.remove(check)
continue
@@ -363,7 +363,6 @@ def collect_spot_check_source_paths(
borgmatic.hooks.dispatch.call_hooks(
'use_streaming',
config,
repository['path'],
borgmatic.hooks.dispatch.Hook_type.DATA_SOURCE,
).values()
)
@@ -373,7 +372,7 @@ def collect_spot_check_source_paths(
borgmatic.borg.create.make_base_create_command(
dry_run=True,
repository_path=repository['path'],
config=config,
config=dict(config, list_details=True),
patterns=borgmatic.actions.create.process_patterns(
borgmatic.actions.create.collect_patterns(config),
working_directory,
@@ -383,17 +382,15 @@ def collect_spot_check_source_paths(
borgmatic_runtime_directory=borgmatic_runtime_directory,
local_path=local_path,
remote_path=remote_path,
list_files=True,
stream_processes=stream_processes,
)
)
borg_environment = borgmatic.borg.environment.make_environment(config)
working_directory = borgmatic.config.paths.get_working_directory(config)
paths_output = borgmatic.execute.execute_command_and_capture_output(
create_flags + create_positional_arguments,
capture_stderr=True,
extra_environment=borg_environment,
environment=borgmatic.borg.environment.make_environment(config),
working_directory=working_directory,
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),
@@ -401,7 +398,7 @@ def collect_spot_check_source_paths(
paths = tuple(
path_line.split(' ', 1)[1]
for path_line in paths_output.split('\n')
for path_line in paths_output.splitlines()
if path_line and path_line.startswith('- ') or path_line.startswith('+ ')
)
@@ -443,7 +440,7 @@ def collect_spot_check_archive_paths(
config,
local_borg_version,
global_arguments,
path_format='{type} {path}{NL}', # noqa: FS003
path_format='{type} {path}{NUL}', # noqa: FS003
local_path=local_path,
remote_path=remote_path,
)
@@ -468,15 +465,14 @@ def compare_spot_check_hashes(
global_arguments,
local_path,
remote_path,
log_prefix,
source_paths,
):
'''
Given a repository configuration dict, the name of the latest archive, a configuration dict, the
local Borg version, global arguments as an argparse.Namespace instance, the local Borg path, the
remote Borg path, a log label, and spot check source paths, compare the hashes for a sampling of
the source paths with hashes from corresponding paths in the given archive. Return a sequence of
the paths that fail that hash comparison.
remote Borg path, and spot check source paths, compare the hashes for a sampling of the source
paths with hashes from corresponding paths in the given archive. Return a sequence of the paths
that fail that hash comparison.
'''
# Based on the configured sample percentage, come up with a list of random sample files from the
# source directories.
@@ -486,13 +482,15 @@ def compare_spot_check_hashes(
)
source_sample_paths = tuple(random.sample(source_paths, sample_count))
working_directory = borgmatic.config.paths.get_working_directory(config)
existing_source_sample_paths = {
hashable_source_sample_path = {
source_path
for source_path in source_sample_paths
if os.path.exists(os.path.join(working_directory or '', source_path))
for full_source_path in (os.path.join(working_directory or '', source_path),)
if os.path.exists(full_source_path)
if not os.path.islink(full_source_path)
}
logger.debug(
f'{log_prefix}: Sampling {sample_count} source paths (~{spot_check_config["data_sample_percentage"]}%) for spot check'
f'Sampling {sample_count} source paths (~{spot_check_config["data_sample_percentage"]}%) for spot check'
)
source_sample_paths_iterator = iter(source_sample_paths)
@@ -512,7 +510,7 @@ def compare_spot_check_hashes(
hash_output = borgmatic.execute.execute_command_and_capture_output(
(spot_check_config.get('xxh64sum_command', 'xxh64sum'),)
+ tuple(
path for path in source_sample_paths_subset if path in existing_source_sample_paths
path for path in source_sample_paths_subset if path in hashable_source_sample_path
),
working_directory=working_directory,
)
@@ -520,11 +518,13 @@ def compare_spot_check_hashes(
source_hashes.update(
**dict(
(reversed(line.split(' ', 1)) for line in hash_output.splitlines()),
# Represent non-existent files as having empty hashes so the comparison below still works.
# Represent non-existent files as having empty hashes so the comparison below still
# works. Same thing for filesystem links, since Borg produces empty archive hashes
# for them.
**{
path: ''
for path in source_sample_paths_subset
if path not in existing_source_sample_paths
if path not in hashable_source_sample_path
},
)
)
@@ -540,7 +540,7 @@ def compare_spot_check_hashes(
local_borg_version,
global_arguments,
list_paths=source_sample_paths_subset,
path_format='{xxh64} {path}{NL}', # noqa: FS003
path_format='{xxh64} {path}{NUL}', # noqa: FS003
local_path=local_path,
remote_path=remote_path,
)
@@ -580,8 +580,7 @@ def spot_check(
disk to those stored in the latest archive. If any differences are beyond configured tolerances,
then the check fails.
'''
log_prefix = f'{repository.get("label", repository["path"])}'
logger.debug(f'{log_prefix}: Running spot check')
logger.debug('Running spot check')
try:
spot_check_config = next(
@@ -604,7 +603,7 @@ def spot_check(
remote_path,
borgmatic_runtime_directory,
)
logger.debug(f'{log_prefix}: {len(source_paths)} total source paths for spot check')
logger.debug(f'{len(source_paths)} total source paths for spot check')
archive = borgmatic.borg.repo_list.resolve_archive_name(
repository['path'],
@@ -615,7 +614,7 @@ def spot_check(
local_path,
remote_path,
)
logger.debug(f'{log_prefix}: Using archive {archive} for spot check')
logger.debug(f'Using archive {archive} for spot check')
archive_paths = collect_spot_check_archive_paths(
repository,
@@ -627,11 +626,11 @@ def spot_check(
remote_path,
borgmatic_runtime_directory,
)
logger.debug(f'{log_prefix}: {len(archive_paths)} total archive paths for spot check')
logger.debug(f'{len(archive_paths)} total archive paths for spot check')
if len(source_paths) == 0:
logger.debug(
f'{log_prefix}: Paths in latest archive but not source paths: {", ".join(set(archive_paths)) or "none"}'
f'Paths in latest archive but not source paths: {", ".join(set(archive_paths)) or "none"}'
)
raise ValueError(
'Spot check failed: There are no source paths to compare against the archive'
@@ -644,10 +643,10 @@ def spot_check(
if count_delta_percentage > spot_check_config['count_tolerance_percentage']:
rootless_source_paths = set(path.lstrip(os.path.sep) for path in source_paths)
logger.debug(
f'{log_prefix}: Paths in source paths but not latest archive: {", ".join(rootless_source_paths - set(archive_paths)) or "none"}'
f'Paths in source paths but not latest archive: {", ".join(rootless_source_paths - set(archive_paths)) or "none"}'
)
logger.debug(
f'{log_prefix}: Paths in latest archive but not source paths: {", ".join(set(archive_paths) - rootless_source_paths) or "none"}'
f'Paths in latest archive but not source paths: {", ".join(set(archive_paths) - rootless_source_paths) or "none"}'
)
raise ValueError(
f'Spot check failed: {count_delta_percentage:.2f}% file count delta between source paths and latest archive (tolerance is {spot_check_config["count_tolerance_percentage"]}%)'
@@ -661,25 +660,24 @@ def spot_check(
global_arguments,
local_path,
remote_path,
log_prefix,
source_paths,
)
# Error if the percentage of failing hashes exceeds the configured tolerance percentage.
logger.debug(f'{log_prefix}: {len(failing_paths)} non-matching spot check hashes')
logger.debug(f'{len(failing_paths)} non-matching spot check hashes')
data_tolerance_percentage = spot_check_config['data_tolerance_percentage']
failing_percentage = (len(failing_paths) / len(source_paths)) * 100
if failing_percentage > data_tolerance_percentage:
logger.debug(
f'{log_prefix}: Source paths with data not matching the latest archive: {", ".join(failing_paths)}'
f'Source paths with data not matching the latest archive: {", ".join(failing_paths)}'
)
raise ValueError(
f'Spot check failed: {failing_percentage:.2f}% of source paths with data not matching the latest archive (tolerance is {data_tolerance_percentage}%)'
)
logger.info(
f'{log_prefix}: Spot check passed with a {count_delta_percentage:.2f}% file count delta and a {failing_percentage:.2f}% file data delta'
f'Spot check passed with a {count_delta_percentage:.2f}% file count delta and a {failing_percentage:.2f}% file data delta'
)
@@ -687,7 +685,6 @@ def run_check(
config_filename,
repository,
config,
hook_context,
local_borg_version,
check_arguments,
global_arguments,
@@ -704,17 +701,7 @@ def run_check(
):
return
borgmatic.hooks.command.execute_hook(
config.get('before_check'),
config.get('umask'),
config_filename,
'pre-check',
global_arguments.dry_run,
**hook_context,
)
log_prefix = repository.get('label', repository['path'])
logger.info(f'{log_prefix}: Running consistency checks')
logger.info('Running consistency checks')
repository_id = borgmatic.borg.check.get_repository_id(
repository['path'],
@@ -767,9 +754,7 @@ def run_check(
write_check_time(make_check_time_path(config, repository_id, 'extract'))
if 'spot' in checks:
with borgmatic.config.paths.Runtime_directory(
config, log_prefix
) as borgmatic_runtime_directory:
with borgmatic.config.paths.Runtime_directory(config) as borgmatic_runtime_directory:
spot_check(
repository,
config,
@@ -780,12 +765,3 @@ def run_check(
borgmatic_runtime_directory,
)
write_check_time(make_check_time_path(config, repository_id, 'spot'))
borgmatic.hooks.command.execute_hook(
config.get('after_check'),
config.get('umask'),
config_filename,
'post-check',
global_arguments.dry_run,
**hook_context,
)


@@ -12,7 +12,6 @@ def run_compact(
config_filename,
repository,
config,
hook_context,
local_borg_version,
compact_arguments,
global_arguments,
@@ -28,18 +27,8 @@ def run_compact(
):
return
borgmatic.hooks.command.execute_hook(
config.get('before_compact'),
config.get('umask'),
config_filename,
'pre-compact',
global_arguments.dry_run,
**hook_context,
)
if borgmatic.borg.feature.available(borgmatic.borg.feature.Feature.COMPACT, local_borg_version):
logger.info(
f'{repository.get("label", repository["path"])}: Compacting segments{dry_run_label}'
)
logger.info(f'Compacting segments{dry_run_label}')
borgmatic.borg.compact.compact_segments(
global_arguments.dry_run,
repository['path'],
@@ -48,19 +37,7 @@ def run_compact(
global_arguments,
local_path=local_path,
remote_path=remote_path,
progress=compact_arguments.progress,
cleanup_commits=compact_arguments.cleanup_commits,
threshold=compact_arguments.threshold,
)
else: # pragma: nocover
logger.info(
f'{repository.get("label", repository["path"])}: Skipping compact (only available/needed in Borg 1.2+)'
)
borgmatic.hooks.command.execute_hook(
config.get('after_compact'),
config.get('umask'),
config_filename,
'post-compact',
global_arguments.dry_run,
**hook_context,
)
logger.info('Skipping compact (only available/needed in Borg 1.2+)')


@@ -45,7 +45,6 @@ def get_config_paths(archive_name, bootstrap_arguments, global_arguments, local_
# still want to support reading the manifest from previously created archives as well.
with borgmatic.config.paths.Runtime_directory(
{'user_runtime_directory': bootstrap_arguments.user_runtime_directory},
bootstrap_arguments.repository,
) as borgmatic_runtime_directory:
for base_directory in (
'borgmatic',
@@ -120,7 +119,9 @@ def run_bootstrap(bootstrap_arguments, global_arguments, local_borg_version):
bootstrap_arguments.repository,
archive_name,
[config_path.lstrip(os.path.sep) for config_path in manifest_config_paths],
config,
# Only add progress here and not the extract_archive() call above, because progress
# conflicts with extract_to_stdout.
dict(config, progress=bootstrap_arguments.progress or False),
local_borg_version,
global_arguments,
local_path=bootstrap_arguments.local_path,
@@ -128,5 +129,4 @@ def run_bootstrap(bootstrap_arguments, global_arguments, local_borg_version):
extract_to_stdout=False,
destination_path=bootstrap_arguments.destination,
strip_components=bootstrap_arguments.strip_components,
progress=bootstrap_arguments.progress,
)


@@ -15,7 +15,7 @@ import borgmatic.hooks.dispatch
logger = logging.getLogger(__name__)
def parse_pattern(pattern_line):
def parse_pattern(pattern_line, default_style=borgmatic.borg.pattern.Pattern_style.NONE):
'''
Given a Borg pattern as a string, parse it into a borgmatic.borg.pattern.Pattern instance and
return it.
@@ -23,18 +23,20 @@ def parse_pattern(pattern_line):
try:
(pattern_type, remainder) = pattern_line.split(' ', maxsplit=1)
except ValueError:
raise ValueError('Invalid pattern:', pattern_line)
raise ValueError(f'Invalid pattern: {pattern_line}')
try:
(pattern_style, path) = remainder.split(':', maxsplit=1)
(parsed_pattern_style, path) = remainder.split(':', maxsplit=1)
pattern_style = borgmatic.borg.pattern.Pattern_style(parsed_pattern_style)
except ValueError:
pattern_style = ''
pattern_style = default_style
path = remainder
return borgmatic.borg.pattern.Pattern(
path,
borgmatic.borg.pattern.Pattern_type(pattern_type),
borgmatic.borg.pattern.Pattern_style(pattern_style),
source=borgmatic.borg.pattern.Pattern_source.CONFIG,
)
@@ -50,18 +52,20 @@ def collect_patterns(config):
try:
return (
tuple(
borgmatic.borg.pattern.Pattern(source_directory)
borgmatic.borg.pattern.Pattern(
source_directory, source=borgmatic.borg.pattern.Pattern_source.CONFIG
)
for source_directory in config.get('source_directories', ())
)
+ tuple(
parse_pattern(pattern_line.strip())
for pattern_line in config.get('patterns', ())
if not pattern_line.lstrip().startswith('#')
if pattern_line.strip()
)
+ tuple(
borgmatic.borg.pattern.Pattern(
exclude_line.strip(),
borgmatic.borg.pattern.Pattern_type.EXCLUDE,
parse_pattern(
f'{borgmatic.borg.pattern.Pattern_type.NO_RECURSE.value} {exclude_line.strip()}',
borgmatic.borg.pattern.Pattern_style.FNMATCH,
)
for exclude_line in config.get('exclude_patterns', ())
@@ -71,22 +75,23 @@ def collect_patterns(config):
for filename in config.get('patterns_from', ())
for pattern_line in open(filename).readlines()
if not pattern_line.lstrip().startswith('#')
if pattern_line.strip()
)
+ tuple(
borgmatic.borg.pattern.Pattern(
exclude_line.strip(),
borgmatic.borg.pattern.Pattern_type.EXCLUDE,
parse_pattern(
f'{borgmatic.borg.pattern.Pattern_type.NO_RECURSE.value} {exclude_line.strip()}',
borgmatic.borg.pattern.Pattern_style.FNMATCH,
)
for filename in config.get('excludes_from', ())
for filename in config.get('exclude_from', ())
for exclude_line in open(filename).readlines()
if not exclude_line.lstrip().startswith('#')
if exclude_line.strip()
)
)
except (FileNotFoundError, OSError) as error:
logger.debug(error)
raise ValueError(f'Cannot read patterns_from/excludes_from file: {error.filename}')
raise ValueError(f'Cannot read patterns_from/exclude_from file: {error.filename}')
def expand_directory(directory, working_directory):
@@ -125,8 +130,11 @@ def expand_directory(directory, working_directory):
def expand_patterns(patterns, working_directory=None, skip_paths=None):
'''
Given a sequence of borgmatic.borg.pattern.Pattern instances and an optional working directory,
expand tildes and globs in each root pattern. Return all the resulting patterns (not just the
root patterns) as a tuple.
expand tildes and globs in each root pattern and expand just tildes in each non-root pattern.
The idea is that non-root patterns may be regular expressions or other pattern styles containing
"*" that borgmatic should not expand as a shell glob.
Return all the resulting patterns as a tuple.
If a set of paths are given to skip, then don't expand any patterns matching them.
'''
@@ -142,12 +150,21 @@ def expand_patterns(patterns, working_directory=None, skip_paths=None):
pattern.type,
pattern.style,
pattern.device,
pattern.source,
)
for expanded_path in expand_directory(pattern.path, working_directory)
)
if pattern.type == borgmatic.borg.pattern.Pattern_type.ROOT
and pattern.path not in (skip_paths or ())
else (pattern,)
else (
borgmatic.borg.pattern.Pattern(
os.path.expanduser(pattern.path),
pattern.type,
pattern.style,
pattern.device,
pattern.source,
),
)
)
for pattern in patterns
)
@@ -176,6 +193,7 @@ def device_map_patterns(patterns, working_directory=None):
and os.path.exists(full_path)
else None
),
source=pattern.source,
)
for pattern in patterns
for full_path in (os.path.join(working_directory or '', pattern.path),)
@@ -254,7 +272,6 @@ def run_create(
repository,
config,
config_paths,
hook_context,
local_borg_version,
create_arguments,
global_arguments,
@@ -272,26 +289,23 @@ def run_create(
):
return
borgmatic.hooks.command.execute_hook(
config.get('before_backup'),
config.get('umask'),
config_filename,
'pre-backup',
global_arguments.dry_run,
**hook_context,
)
if config.get('list_details') and config.get('progress'):
raise ValueError(
'With the create action, only one of --list/--files/list_details and --progress/progress can be used.'
)
log_prefix = repository.get('label', repository['path'])
logger.info(f'{log_prefix}: Creating archive{dry_run_label}')
if config.get('list_details') and create_arguments.json:
raise ValueError(
'With the create action, only one of --list/--files/list_details and --json can be used.'
)
logger.info(f'Creating archive{dry_run_label}')
working_directory = borgmatic.config.paths.get_working_directory(config)
with borgmatic.config.paths.Runtime_directory(
config, log_prefix
) as borgmatic_runtime_directory:
with borgmatic.config.paths.Runtime_directory(config) as borgmatic_runtime_directory:
borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
'remove_data_source_dumps',
config,
repository['path'],
borgmatic.hooks.dispatch.Hook_type.DATA_SOURCE,
borgmatic_runtime_directory,
global_arguments.dry_run,
@@ -300,7 +314,6 @@ def run_create(
active_dumps = borgmatic.hooks.dispatch.call_hooks(
'dump_data_sources',
config,
repository['path'],
borgmatic.hooks.dispatch.Hook_type.DATA_SOURCE,
config_paths,
borgmatic_runtime_directory,
@@ -324,10 +337,7 @@ def run_create(
borgmatic_runtime_directory,
local_path=local_path,
remote_path=remote_path,
progress=create_arguments.progress,
stats=create_arguments.stats,
json=create_arguments.json,
list_files=create_arguments.list_files,
stream_processes=stream_processes,
)
@@ -337,17 +347,7 @@ def run_create(
borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
'remove_data_source_dumps',
config,
config_filename,
borgmatic.hooks.dispatch.Hook_type.DATA_SOURCE,
borgmatic_runtime_directory,
global_arguments.dry_run,
)
borgmatic.hooks.command.execute_hook(
config.get('after_backup'),
config.get('umask'),
config_filename,
'post-backup',
global_arguments.dry_run,
**hook_context,
)
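The option-conflict guards above follow a simple fail-fast pattern. A minimal sketch, with illustrative names rather than borgmatic's API:

```python
def validate_exclusive(config, cli_json=False):
    # Mirror of the guard style above: list output conflicts with both
    # progress display and JSON output, so error out before running Borg.
    if config.get('list_details') and config.get('progress'):
        raise ValueError('Only one of list_details and progress can be used.')
    if config.get('list_details') and cli_json:
        raise ValueError('Only one of list_details and --json can be used.')

validate_exclusive({'list_details': True})  # no conflict: passes silently
```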


@@ -23,7 +23,7 @@ def run_delete(
if delete_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, delete_arguments.repository
):
logger.answer(f'{repository.get("label", repository["path"])}: Deleting archives')
logger.answer('Deleting archives')
archive_name = (
borgmatic.borg.repo_list.resolve_archive_name(


@@ -21,7 +21,7 @@ def run_export_key(
if export_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, export_arguments.repository
):
logger.info(f'{repository.get("label", repository["path"])}: Exporting repository key')
logger.info('Exporting repository key')
borgmatic.borg.export_key.export_key(
repository['path'],
config,


@@ -22,9 +22,7 @@ def run_export_tar(
if export_tar_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, export_tar_arguments.repository
):
logger.info(
f'{repository["path"]}: Exporting archive {export_tar_arguments.archive} as tar file'
)
logger.info(f'Exporting archive {export_tar_arguments.archive} as tar file')
borgmatic.borg.export_tar.export_tar_archive(
global_arguments.dry_run,
repository['path'],
@@ -45,6 +43,5 @@ def run_export_tar(
local_path=local_path,
remote_path=remote_path,
tar_filter=export_tar_arguments.tar_filter,
list_files=export_tar_arguments.list_files,
strip_components=export_tar_arguments.strip_components,
)


@@ -12,7 +12,6 @@ def run_extract(
config_filename,
repository,
config,
hook_context,
local_borg_version,
extract_arguments,
global_arguments,
@@ -22,20 +21,10 @@ def run_extract(
'''
Run the "extract" action for the given repository.
'''
borgmatic.hooks.command.execute_hook(
config.get('before_extract'),
config.get('umask'),
config_filename,
'pre-extract',
global_arguments.dry_run,
**hook_context,
)
if extract_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, extract_arguments.repository
):
logger.info(
f'{repository.get("label", repository["path"])}: Extracting archive {extract_arguments.archive}'
)
logger.info(f'Extracting archive {extract_arguments.archive}')
borgmatic.borg.extract.extract_archive(
global_arguments.dry_run,
repository['path'],
@@ -56,13 +45,4 @@ def run_extract(
remote_path=remote_path,
destination_path=extract_arguments.destination,
strip_components=extract_arguments.strip_components,
progress=extract_arguments.progress,
)
borgmatic.hooks.command.execute_hook(
config.get('after_extract'),
config.get('umask'),
config_filename,
'post-extract',
global_arguments.dry_run,
**hook_context,
)


@@ -0,0 +1,33 @@
import logging
import borgmatic.borg.import_key
import borgmatic.config.validate
logger = logging.getLogger(__name__)
def run_import_key(
repository,
config,
local_borg_version,
import_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "key import" action for the given repository.
'''
if import_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, import_arguments.repository
):
logger.info('Importing repository key')
borgmatic.borg.import_key.import_key(
repository['path'],
config,
local_borg_version,
import_arguments,
global_arguments,
local_path=local_path,
remote_path=remote_path,
)


@@ -27,9 +27,7 @@ def run_info(
repository, info_arguments.repository
):
if not info_arguments.json:
logger.answer(
f'{repository.get("label", repository["path"])}: Displaying archive summary information'
)
logger.answer('Displaying archive summary information')
archive_name = borgmatic.borg.repo_list.resolve_archive_name(
repository['path'],
info_arguments.archive,


@@ -27,9 +27,9 @@ def run_list(
):
if not list_arguments.json:
if list_arguments.find_paths: # pragma: no cover
logger.answer(f'{repository.get("label", repository["path"])}: Searching archives')
logger.answer('Searching archives')
elif not list_arguments.archive: # pragma: no cover
logger.answer(f'{repository.get("label", repository["path"])}: Listing archives')
logger.answer('Listing archives')
archive_name = borgmatic.borg.repo_list.resolve_archive_name(
repository['path'],


@@ -23,11 +23,9 @@ def run_mount(
repository, mount_arguments.repository
):
if mount_arguments.archive:
logger.info(
f'{repository.get("label", repository["path"])}: Mounting archive {mount_arguments.archive}'
)
logger.info(f'Mounting archive {mount_arguments.archive}')
else: # pragma: nocover
logger.info(f'{repository.get("label", repository["path"])}: Mounting repository')
logger.info('Mounting repository')
borgmatic.borg.mount.mount_archive(
repository['path'],


@@ -11,7 +11,6 @@ def run_prune(
config_filename,
repository,
config,
hook_context,
local_borg_version,
prune_arguments,
global_arguments,
@@ -27,15 +26,7 @@ def run_prune(
):
return
borgmatic.hooks.command.execute_hook(
config.get('before_prune'),
config.get('umask'),
config_filename,
'pre-prune',
global_arguments.dry_run,
**hook_context,
)
logger.info(f'{repository.get("label", repository["path"])}: Pruning archives{dry_run_label}')
logger.info(f'Pruning archives{dry_run_label}')
borgmatic.borg.prune.prune_archives(
global_arguments.dry_run,
repository['path'],
@@ -46,11 +37,3 @@ def run_prune(
local_path=local_path,
remote_path=remote_path,
)
borgmatic.hooks.command.execute_hook(
config.get('after_prune'),
config.get('umask'),
config_filename,
'post-prune',
global_arguments.dry_run,
**hook_context,
)


@@ -0,0 +1,53 @@
import logging
import borgmatic.borg.recreate
import borgmatic.config.validate
from borgmatic.actions.create import collect_patterns, process_patterns
logger = logging.getLogger(__name__)
def run_recreate(
repository,
config,
local_borg_version,
recreate_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "recreate" action for the given repository.
'''
if recreate_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, recreate_arguments.repository
):
if recreate_arguments.archive:
logger.answer(f'Recreating archive {recreate_arguments.archive}')
else:
logger.answer('Recreating repository')
# Collect and process patterns.
processed_patterns = process_patterns(
collect_patterns(config), borgmatic.config.paths.get_working_directory(config)
)
borgmatic.borg.recreate.recreate_archive(
repository['path'],
borgmatic.borg.repo_list.resolve_archive_name(
repository['path'],
recreate_arguments.archive,
config,
local_borg_version,
global_arguments,
local_path,
remote_path,
),
config,
local_borg_version,
recreate_arguments,
global_arguments,
local_path=local_path,
remote_path=remote_path,
patterns=processed_patterns,
)


@@ -23,19 +23,39 @@ def run_repo_create(
):
return
logger.info(f'{repository.get("label", repository["path"])}: Creating repository')
logger.info('Creating repository')
encryption_mode = repo_create_arguments.encryption_mode or repository.get('encryption')
if not encryption_mode:
raise ValueError(
'With the repo-create action, either the --encryption flag or the repository encryption option is required.'
)
borgmatic.borg.repo_create.create_repository(
global_arguments.dry_run,
repository['path'],
config,
local_borg_version,
global_arguments,
repo_create_arguments.encryption_mode,
encryption_mode,
repo_create_arguments.source_repository,
repo_create_arguments.copy_crypt_key,
repo_create_arguments.append_only,
repo_create_arguments.storage_quota,
repo_create_arguments.make_parent_dirs,
(
repository.get('append_only')
if repo_create_arguments.append_only is None
else repo_create_arguments.append_only
),
(
repository.get('storage_quota')
if repo_create_arguments.storage_quota is None
else repo_create_arguments.storage_quota
),
(
repository.get('make_parent_directories')
if repo_create_arguments.make_parent_directories is None
else repo_create_arguments.make_parent_directories
),
local_path=local_path,
remote_path=remote_path,
)
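The conditional expressions above all apply one fallback rule: a command-line value wins over the per-repository option unless the flag was left unset. A standalone sketch of that rule (assumed semantics, where `None` means "flag not given"):

```python
def resolve_option(cli_value, config_value):
    # An explicit CLI value (including False or 0) takes precedence;
    # only None defers to the configured value.
    return config_value if cli_value is None else cli_value

assert resolve_option(None, True) is True    # flag omitted: config wins
assert resolve_option(False, True) is False  # explicit --no-... flag wins
```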


@@ -21,8 +21,7 @@ def run_repo_delete(
repository, repo_delete_arguments.repository
):
logger.answer(
f'{repository.get("label", repository["path"])}: Deleting repository'
+ (' cache' if repo_delete_arguments.cache_only else '')
'Deleting repository' + (' cache' if repo_delete_arguments.cache_only else '')
)
borgmatic.borg.repo_delete.delete_repository(


@@ -25,9 +25,7 @@ def run_repo_info(
repository, repo_info_arguments.repository
):
if not repo_info_arguments.json:
logger.answer(
f'{repository.get("label", repository["path"])}: Displaying repository summary information'
)
logger.answer('Displaying repository summary information')
json_output = borgmatic.borg.repo_info.display_repository_info(
repository['path'],


@@ -25,7 +25,7 @@ def run_repo_list(
repository, repo_list_arguments.repository
):
if not repo_list_arguments.json:
logger.answer(f'{repository.get("label", repository["path"])}: Listing repository')
logger.answer('Listing repository')
json_output = borgmatic.borg.repo_list.list_repository(
repository['path'],


@@ -57,7 +57,7 @@ def render_dump_metadata(dump):
Given a Dump instance, make a display string describing it for use in log messages.
'''
name = 'unspecified' if dump.data_source_name is UNSPECIFIED else dump.data_source_name
hostname = dump.hostname or 'localhost'
hostname = dump.hostname or UNSPECIFIED
port = None if dump.port is UNSPECIFIED else dump.port
if port:
@@ -71,10 +71,10 @@ def render_dump_metadata(dump):
return metadata
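The sentinel-based rendering above can be sketched on its own. This is a simplified stand-in for the real function, assuming an `UNSPECIFIED` sentinel like the module-level constant:

```python
UNSPECIFIED = object()  # sentinel standing in for the module-level constant

def render_dump_metadata(name, hostname, port):
    # Build a "name@hostname:port" display string, dropping any component
    # that was left unspecified.
    name = 'unspecified' if name is UNSPECIFIED else name
    hostname = None if hostname is UNSPECIFIED else hostname
    port = None if port is UNSPECIFIED else port
    if hostname and port:
        return f'{name}@{hostname}:{port}'
    return f'{name}@{hostname}' if hostname else name

print(render_dump_metadata('users', 'db1', 5432))  # users@db1:5432
```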
def get_configured_data_source(config, restore_dump, log_prefix):
def get_configured_data_source(config, restore_dump):
'''
Search in the given configuration dict for dumps corresponding to the given dump to restore. If
there are multiple matches, error. Log using the given log prefix.
there are multiple matches, error.
Return the found data source as a data source configuration dict or None if not found.
'''
@@ -91,7 +91,6 @@ def get_configured_data_source(config, restore_dump, log_prefix):
borgmatic.hooks.dispatch.call_hook(
function_name='get_default_port',
config=config,
log_prefix=log_prefix,
hook_name=hook_name,
),
)
@@ -173,14 +172,11 @@ def restore_single_dump(
Dump(hook_name, data_source['name'], data_source.get('hostname'), data_source.get('port'))
)
logger.info(
f'{repository.get("label", repository["path"])}: Restoring data source {dump_metadata}'
)
logger.info(f'Restoring data source {dump_metadata}')
dump_patterns = borgmatic.hooks.dispatch.call_hooks(
'make_data_source_dump_patterns',
config,
repository['path'],
borgmatic.hooks.dispatch.Hook_type.DATA_SOURCE,
borgmatic_runtime_directory,
data_source['name'],
@@ -227,7 +223,6 @@ def restore_single_dump(
borgmatic.hooks.dispatch.call_hook(
function_name='restore_data_source_dump',
config=config,
log_prefix=repository['path'],
hook_name=hook_name,
data_source=data_source,
dry_run=global_arguments.dry_run,
@@ -319,7 +314,7 @@ def collect_dumps_from_archive(
break
else:
logger.warning(
f'{repository}: Ignoring invalid data source dump path "{dump_path}" in archive {archive}'
f'Ignoring invalid data source dump path "{dump_path}" in archive {archive}'
)
return dumps_from_archive
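Throughout the restore matching changes here, the `UNSPECIFIED` sentinel acts as a wildcard when comparing a requested dump against dumps found in the archive. A minimal sketch of that matching idea (assumed semantics):

```python
UNSPECIFIED = object()

def fields_match(requested, actual):
    # A requested field matches when it was left unspecified (wildcard)
    # or equals the actual value from the archive's dump metadata.
    return requested is UNSPECIFIED or requested == actual

request = {'name': 'users', 'hostname': UNSPECIFIED}
actual = {'name': 'users', 'hostname': 'db1'}
assert all(fields_match(request[key], actual[key]) for key in request)
```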
@@ -348,12 +343,15 @@ def get_dumps_to_restore(restore_arguments, dumps_from_archive):
else UNSPECIFIED
),
data_source_name=name,
hostname=restore_arguments.original_hostname or 'localhost',
hostname=restore_arguments.original_hostname or UNSPECIFIED,
port=restore_arguments.original_port,
)
for name in restore_arguments.data_sources
for name in restore_arguments.data_sources or (UNSPECIFIED,)
}
if restore_arguments.data_sources
if restore_arguments.hook
or restore_arguments.data_sources
or restore_arguments.original_hostname
or restore_arguments.original_port
else {
Dump(
hook_name=UNSPECIFIED,
@@ -444,16 +442,12 @@ def run_restore(
):
return
log_prefix = repository.get('label', repository['path'])
logger.info(f'{log_prefix}: Restoring data sources from archive {restore_arguments.archive}')
logger.info(f'Restoring data sources from archive {restore_arguments.archive}')
with borgmatic.config.paths.Runtime_directory(
config, log_prefix
) as borgmatic_runtime_directory:
with borgmatic.config.paths.Runtime_directory(config) as borgmatic_runtime_directory:
borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
'remove_data_source_dumps',
config,
repository['path'],
borgmatic.hooks.dispatch.Hook_type.DATA_SOURCE,
borgmatic_runtime_directory,
global_arguments.dry_run,
@@ -494,7 +488,6 @@ def run_restore(
found_data_source = get_configured_data_source(
config,
restore_dump,
log_prefix=repository['path'],
)
# For a dump that wasn't found via an exact match in the configuration, try to fallback
@@ -503,7 +496,6 @@ def run_restore(
found_data_source = get_configured_data_source(
config,
Dump(restore_dump.hook_name, 'all', restore_dump.hostname, restore_dump.port),
log_prefix=repository['path'],
)
if not found_data_source:
@@ -531,7 +523,6 @@ def run_restore(
borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
'remove_data_source_dumps',
config,
repository['path'],
borgmatic.hooks.dispatch.Hook_type.DATA_SOURCE,
borgmatic_runtime_directory,
global_arguments.dry_run,


@@ -17,9 +17,13 @@ def run_transfer(
'''
Run the "transfer" action for the given repository.
'''
logger.info(
f'{repository.get("label", repository["path"])}: Transferring archives to repository'
)
if transfer_arguments.archive and config.get('match_archives'):
raise ValueError(
'With the transfer action, only one of --archive and --match-archives/match_archives can be used.'
)
logger.info('Transferring archives to repository')
borgmatic.borg.transfer.transfer_archives(
global_arguments.dry_run,
repository['path'],


@@ -61,7 +61,7 @@ def run_arbitrary_borg(
tuple(shlex.quote(part) for part in full_command),
output_file=DO_NOT_CAPTURE,
shell=True,
extra_environment=dict(
environment=dict(
(environment.make_environment(config) or {}),
**{
'BORG_REPO': repository_path,


@@ -34,10 +34,9 @@ def break_lock(
+ flags.make_repository_flags(repository_path, local_borg_version)
)
borg_environment = environment.make_environment(config)
execute_command(
full_command,
extra_environment=borg_environment,
environment=environment.make_environment(config),
working_directory=borgmatic.config.paths.get_working_directory(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),


@@ -41,7 +41,7 @@ def change_passphrase(
)
if global_arguments.dry_run:
logger.info(f'{repository_path}: Skipping change password (dry run)')
logger.info('Skipping change password (dry run)')
return
# If the original passphrase is set programmatically, then Borg won't prompt for a new one! So
@@ -56,7 +56,7 @@ def change_passphrase(
full_command,
output_file=borgmatic.execute.DO_NOT_CAPTURE,
output_log_level=logging.ANSWER,
extra_environment=environment.make_environment(config_without_passphrase),
environment=environment.make_environment(config_without_passphrase),
working_directory=borgmatic.config.paths.get_working_directory(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),


@@ -32,7 +32,7 @@ def make_archive_filter_flags(local_borg_version, config, checks, check_argument
if prefix
else (
flags.make_match_archives_flags(
check_arguments.match_archives or config.get('match_archives'),
config.get('match_archives'),
config.get('archive_name_format'),
local_borg_version,
)
@@ -64,15 +64,11 @@ def make_check_name_flags(checks, archive_filter_flags):
('--repository-only',)
However, if both "repository" and "archives" are in checks, then omit them from the returned
flags because Borg does both checks by default. If "data" is in checks, that implies "archives".
However, if both "repository" and "archives" are in checks, then omit the "only" flags from the
returned flags because Borg does both checks by default. Note that a "data" check only works
along with an "archives" check.
'''
if 'data' in checks:
data_flags = ('--verify-data',)
checks.update({'archives'})
else:
data_flags = ()
data_flags = ('--verify-data',) if 'data' in checks else ()
common_flags = (archive_filter_flags if 'archives' in checks else ()) + data_flags
if {'repository', 'archives'}.issubset(checks):
@@ -142,51 +138,51 @@ def check_archives(
except StopIteration:
repository_check_config = {}
if check_arguments.max_duration and 'archives' in checks:
raise ValueError('The archives check cannot run when the --max-duration flag is used')
if repository_check_config.get('max_duration') and 'archives' in checks:
raise ValueError(
'The archives check cannot run when the repository check has the max_duration option set'
)
max_duration = check_arguments.max_duration or repository_check_config.get('max_duration')
umask = config.get('umask')
borg_environment = environment.make_environment(config)
borg_exit_codes = config.get('borg_exit_codes')
full_command = (
(local_path, 'check')
+ (('--repair',) if check_arguments.repair else ())
+ (('--max-duration', str(max_duration)) if max_duration else ())
+ make_check_name_flags(checks, archive_filter_flags)
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--umask', str(umask)) if umask else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ verbosity_flags
+ (('--progress',) if check_arguments.progress else ())
+ (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
+ flags.make_repository_flags(repository_path, local_borg_version)
)
working_directory = borgmatic.config.paths.get_working_directory(config)
# The Borg repair option triggers an interactive prompt, which won't work when output is
# captured. And progress messes with the terminal directly.
if check_arguments.repair or check_arguments.progress:
if 'data' in checks:
checks.add('archives')
grouped_checks = (checks,)
# If max_duration is set, then archives and repository checks need to be run separately, as Borg
# doesn't support --max-duration along with an archives checks.
if max_duration and 'archives' in checks and 'repository' in checks:
checks.remove('repository')
grouped_checks = (checks, {'repository'})
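The grouping above can be exercised in isolation. A sketch of the rule, with plain sets standing in for the real check names:

```python
def group_checks(checks, max_duration):
    # Borg can't combine --max-duration with an archives check, so when
    # both checks are requested, split the repository check into its own run.
    checks = set(checks)
    if 'data' in checks:
        checks.add('archives')  # a data check implies an archives check
    if max_duration and {'archives', 'repository'} <= checks:
        checks.remove('repository')
        return (checks, {'repository'})
    return (checks,)

print(group_checks({'archives', 'repository'}, max_duration=3600))
# ({'archives'}, {'repository'})
```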
for checks_subset in grouped_checks:
full_command = (
(local_path, 'check')
+ (('--repair',) if check_arguments.repair else ())
+ (
('--max-duration', str(max_duration))
if max_duration and 'repository' in checks_subset
else ()
)
+ make_check_name_flags(checks_subset, archive_filter_flags)
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--umask', str(umask)) if umask else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ verbosity_flags
+ (('--progress',) if config.get('progress') else ())
+ (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
+ flags.make_repository_flags(repository_path, local_borg_version)
)
execute_command(
full_command,
output_file=DO_NOT_CAPTURE,
extra_environment=borg_environment,
working_directory=working_directory,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
)
else:
execute_command(
full_command,
extra_environment=borg_environment,
# The Borg repair option triggers an interactive prompt, which won't work when output is
# captured. And progress messes with the terminal directly.
output_file=(
DO_NOT_CAPTURE if check_arguments.repair or config.get('progress') else None
),
environment=environment.make_environment(config),
working_directory=working_directory,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,


@@ -15,9 +15,7 @@ def compact_segments(
global_arguments,
local_path='borg',
remote_path=None,
progress=False,
cleanup_commits=False,
threshold=None,
):
'''
Given dry-run flag, a local or remote repository path, a configuration dict, and the local Borg
@@ -26,6 +24,7 @@ def compact_segments(
umask = config.get('umask', None)
lock_wait = config.get('lock_wait', None)
extra_borg_options = config.get('extra_borg_options', {}).get('compact', '')
threshold = config.get('compact_threshold')
full_command = (
(local_path, 'compact')
@@ -33,7 +32,7 @@ def compact_segments(
+ (('--umask', str(umask)) if umask else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (('--progress',) if progress else ())
+ (('--progress',) if config.get('progress') else ())
+ (('--cleanup-commits',) if cleanup_commits else ())
+ (('--threshold', str(threshold)) if threshold else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
@@ -43,13 +42,13 @@ def compact_segments(
)
if dry_run:
logging.info(f'{repository_path}: Skipping compact (dry run)')
logger.info('Skipping compact (dry run)')
return
execute_command(
full_command,
output_log_level=logging.INFO,
extra_environment=environment.make_environment(config),
environment=environment.make_environment(config),
working_directory=borgmatic.config.paths.get_working_directory(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),


@@ -20,14 +20,12 @@ from borgmatic.execute import (
logger = logging.getLogger(__name__)
def write_patterns_file(patterns, borgmatic_runtime_directory, log_prefix, patterns_file=None):
def write_patterns_file(patterns, borgmatic_runtime_directory, patterns_file=None):
'''
Given a sequence of patterns as borgmatic.borg.pattern.Pattern instances, write them to a named
temporary file in the given borgmatic runtime directory and return the file object so it can
continue to exist on disk as long as the caller needs it.
Use the given log prefix in any logging.
If an optional open pattern file is given, append to it instead of making a new temporary file.
Return None if no patterns are provided.
'''
@@ -36,14 +34,16 @@ def write_patterns_file(patterns, borgmatic_runtime_directory, log_prefix, patte
if patterns_file is None:
patterns_file = tempfile.NamedTemporaryFile('w', dir=borgmatic_runtime_directory)
operation_name = 'Writing'
else:
patterns_file.write('\n')
operation_name = 'Appending'
patterns_output = '\n'.join(
f'{pattern.type.value} {pattern.style.value}{":" if pattern.style.value else ""}{pattern.path}'
for pattern in patterns
)
logger.debug(f'{log_prefix}: Writing patterns to {patterns_file.name}:\n{patterns_output}')
logger.debug(f'{operation_name} patterns to {patterns_file.name}:\n{patterns_output}')
patterns_file.write(patterns_output)
patterns_file.flush()
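The write-or-append branch and the one-line-per-pattern format above can be sketched with plain tuples in place of Pattern instances (an illustrative simplification):

```python
import tempfile

def write_patterns(patterns, patterns_file=None):
    # Create a new temporary file, or append (separated by a newline) when
    # given an already-open one. Each pattern renders as
    # "<type> <style>:<path>", with the style and colon omitted when empty.
    if patterns_file is None:
        patterns_file = tempfile.NamedTemporaryFile('w')
    else:
        patterns_file.write('\n')
    patterns_file.write(
        '\n'.join(
            f'{ptype} {style}{":" if style else ""}{path}'
            for (ptype, style, path) in patterns
        )
    )
    patterns_file.flush()
    return patterns_file

f = write_patterns([('R', '', '/home'), ('!', 'fm', '/home/*/.cache')])
print(open(f.name).read())
```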
@@ -122,52 +122,63 @@ def collect_special_file_paths(
config,
local_path,
working_directory,
borg_environment,
borgmatic_runtime_directory,
):
'''
Given a dry-run flag, a Borg create command as a tuple, a configuration dict, a local Borg path,
a working directory, a dict of environment variables to pass to Borg, and the borgmatic runtime
directory, collect the paths for any special files (character devices, block devices, and named
pipes / FIFOs) that Borg would encounter during a create. These are all paths that could cause
Borg to hang if its --read-special flag is used.
a working directory, and the borgmatic runtime directory, collect the paths for any special
files (character devices, block devices, and named pipes / FIFOs) that Borg would encounter
during a create. These are all paths that could cause Borg to hang if its --read-special flag is
used.
Skip looking for special files in the given borgmatic runtime directory, as borgmatic creates
its own special files there for database dumps. And if the borgmatic runtime directory is
configured to be excluded from the files Borg backs up, error, because this means Borg won't be
able to consume any database dumps and therefore borgmatic will hang.
its own special files there for database dumps and we don't want those omitted.
Additionally, if the borgmatic runtime directory is not contained somewhere in the files Borg
plans to back up, that means the user must have excluded the runtime directory (e.g. via
"exclude_patterns" or similar). Therefore, raise, because Borg won't be able to consume
any database dumps and borgmatic will hang when it tries to do so.
'''
# Omit "--exclude-nodump" from the Borg dry run command, because that flag causes Borg to open
# files including any named pipe we've created.
# files including any named pipe we've created. And omit "--filter" because that can break the
# paths output parsing below such that path lines no longer start with the expected "- ".
paths_output = execute_command_and_capture_output(
tuple(argument for argument in create_command if argument != '--exclude-nodump')
flags.omit_flag_and_value(flags.omit_flag(create_command, '--exclude-nodump'), '--filter')
+ ('--dry-run', '--list'),
capture_stderr=True,
working_directory=working_directory,
extra_environment=borg_environment,
environment=environment.make_environment(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),
)
# These are all the individual files that Borg is planning to back up as determined by the Borg
# create dry run above.
paths = tuple(
path_line.split(' ', 1)[1]
for path_line in paths_output.split('\n')
if path_line and (path_line.startswith('- ') or path_line.startswith('+ '))
)
skip_paths = {}
# These are the subset of those files that contain the borgmatic runtime directory.
paths_containing_runtime_directory = {}
if os.path.exists(borgmatic_runtime_directory):
skip_paths = {
paths_containing_runtime_directory = {
path for path in paths if any_parent_directories(path, (borgmatic_runtime_directory,))
}
if not skip_paths and not dry_run:
# If no paths to backup contain the runtime directory, it must've been excluded.
if not paths_containing_runtime_directory and not dry_run:
raise ValueError(
f'The runtime directory {os.path.normpath(borgmatic_runtime_directory)} overlaps with the configured excludes or patterns with excludes. Please ensure the runtime directory is not excluded.'
)
return tuple(
path for path in paths if special_file(path, working_directory) if path not in skip_paths
path
for path in paths
if special_file(path, working_directory)
if path not in paths_containing_runtime_directory
)
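The path-parsing step above keeps only lines bearing a status marker and strips it off. A minimal sketch, assuming the "- "/"+ " prefixes that the surrounding comments describe:

```python
def parse_listed_paths(paths_output):
    # Keep lines with a "- " or "+ " status marker and strip the marker,
    # yielding just the file paths Borg planned to process.
    return tuple(
        line.split(' ', 1)[1]
        for line in paths_output.split('\n')
        if line.startswith(('- ', '+ '))
    )

output = '- /etc/passwd\n+ /var/run/app.fifo\nsome warning line\n'
print(parse_listed_paths(output))  # ('/etc/passwd', '/var/run/app.fifo')
```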
@@ -185,7 +196,7 @@ def check_all_root_patterns_exist(patterns):
if missing_paths:
raise ValueError(
f"Source directories / root pattern paths do not exist: {', '.join(missing_paths)}"
f"Source directories or root pattern paths do not exist: {', '.join(missing_paths)}"
)
@@ -202,9 +213,7 @@ def make_base_create_command(
borgmatic_runtime_directory,
local_path='borg',
remote_path=None,
progress=False,
json=False,
list_files=False,
stream_processes=None,
):
'''
@@ -217,9 +226,7 @@ def make_base_create_command(
if config.get('source_directories_must_exist', False):
check_all_root_patterns_exist(patterns)
patterns_file = write_patterns_file(
patterns, borgmatic_runtime_directory, log_prefix=repository_path
)
patterns_file = write_patterns_file(patterns, borgmatic_runtime_directory)
checkpoint_interval = config.get('checkpoint_interval', None)
checkpoint_volume = config.get('checkpoint_volume', None)
chunker_params = config.get('chunker_params', None)
@@ -284,7 +291,7 @@ def make_base_create_command(
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (
('--list', '--filter', list_filter_flags)
if list_files and not json and not progress
if config.get('list_details') and not json and not config.get('progress')
else ()
)
+ (('--dry-run',) if dry_run else ())
@@ -299,19 +306,17 @@ def make_base_create_command(
# cause Borg to hang. But skip this if the user has explicitly set the "read_special" to True.
if stream_processes and not config.get('read_special'):
logger.warning(
f'{repository_path}: Ignoring configured "read_special" value of false, as true is needed for database hooks.'
'Ignoring configured "read_special" value of false, as true is needed for database hooks.'
)
borg_environment = environment.make_environment(config)
working_directory = borgmatic.config.paths.get_working_directory(config)
logger.debug(f'{repository_path}: Collecting special file paths')
logger.debug('Collecting special file paths')
special_file_paths = collect_special_file_paths(
dry_run,
create_flags + create_positional_arguments,
config,
local_path,
working_directory,
borg_environment,
borgmatic_runtime_directory=borgmatic_runtime_directory,
)
@@ -322,19 +327,19 @@ def make_base_create_command(
placeholder=' ...',
)
logger.warning(
f'{repository_path}: Excluding special files to prevent Borg from hanging: {truncated_special_file_paths}'
f'Excluding special files to prevent Borg from hanging: {truncated_special_file_paths}'
)
patterns_file = write_patterns_file(
tuple(
borgmatic.borg.pattern.Pattern(
special_file_path,
borgmatic.borg.pattern.Pattern_type.EXCLUDE,
borgmatic.borg.pattern.Pattern_type.NO_RECURSE,
borgmatic.borg.pattern.Pattern_style.FNMATCH,
source=borgmatic.borg.pattern.Pattern_source.INTERNAL,
)
for special_file_path in special_file_paths
),
borgmatic_runtime_directory,
log_prefix=repository_path,
patterns_file=patterns_file,
)
@@ -354,10 +359,7 @@ def create_archive(
borgmatic_runtime_directory,
local_path='borg',
remote_path=None,
progress=False,
stats=False,
json=False,
list_files=False,
stream_processes=None,
):
'''
@@ -382,30 +384,26 @@ def create_archive(
borgmatic_runtime_directory,
local_path,
remote_path,
progress,
json,
list_files,
stream_processes,
)
if json:
output_log_level = None
elif list_files or (stats and not dry_run):
elif config.get('list_details') or (config.get('statistics') and not dry_run):
output_log_level = logging.ANSWER
else:
output_log_level = logging.INFO
# The progress output isn't compatible with captured and logged output, as progress messes with
# the terminal directly.
output_file = DO_NOT_CAPTURE if progress else None
borg_environment = environment.make_environment(config)
output_file = DO_NOT_CAPTURE if config.get('progress') else None
create_flags += (
(('--info',) if logger.getEffectiveLevel() == logging.INFO and not json else ())
+ (('--stats',) if stats and not json and not dry_run else ())
+ (('--stats',) if config.get('statistics') and not json and not dry_run else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) and not json else ())
+ (('--progress',) if progress else ())
+ (('--progress',) if config.get('progress') else ())
+ (('--json',) if json else ())
)
borg_exit_codes = config.get('borg_exit_codes')
@@ -417,7 +415,7 @@ def create_archive(
output_log_level,
output_file,
working_directory=working_directory,
extra_environment=borg_environment,
environment=environment.make_environment(config),
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
)
@@ -425,7 +423,7 @@ def create_archive(
return execute_command_and_capture_output(
create_flags + create_positional_arguments,
working_directory=working_directory,
extra_environment=borg_environment,
environment=environment.make_environment(config),
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
)
@@ -435,7 +433,7 @@ def create_archive(
output_log_level,
output_file,
working_directory=working_directory,
extra_environment=borg_environment,
environment=environment.make_environment(config),
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
)

View File

@@ -34,7 +34,7 @@ def make_delete_command(
+ borgmatic.borg.flags.make_flags('umask', config.get('umask'))
+ borgmatic.borg.flags.make_flags('log-json', global_arguments.log_json)
+ borgmatic.borg.flags.make_flags('lock-wait', config.get('lock_wait'))
+ borgmatic.borg.flags.make_flags('list', delete_arguments.list_archives)
+ borgmatic.borg.flags.make_flags('list', config.get('list_details'))
+ (
(('--force',) + (('--force',) if delete_arguments.force >= 2 else ()))
if delete_arguments.force
@@ -48,9 +48,17 @@ def make_delete_command(
local_borg_version=local_borg_version,
default_archive_name_format='*',
)
+ (('--stats',) if config.get('statistics') else ())
+ borgmatic.borg.flags.make_flags_from_arguments(
delete_arguments,
excludes=('list_archives', 'force', 'match_archives', 'archive', 'repository'),
excludes=(
'list_details',
'statistics',
'force',
'match_archives',
'archive',
'repository',
),
)
+ borgmatic.borg.flags.make_repository_flags(repository['path'], local_borg_version)
)
@@ -98,7 +106,7 @@ def delete_archives(
repo_delete_arguments = argparse.Namespace(
repository=repository['path'],
list_archives=delete_arguments.list_archives,
list_details=delete_arguments.list_details,
force=delete_arguments.force,
cache_only=delete_arguments.cache_only,
keep_security_info=delete_arguments.keep_security_info,
@@ -128,7 +136,7 @@ def delete_archives(
borgmatic.execute.execute_command(
command,
output_log_level=logging.ANSWER,
extra_environment=borgmatic.borg.environment.make_environment(config),
environment=borgmatic.borg.environment.make_environment(config),
working_directory=borgmatic.config.paths.get_working_directory(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),

View File

@@ -1,5 +1,8 @@
import os
import borgmatic.borg.passcommand
import borgmatic.hooks.credential.parse
OPTION_TO_ENVIRONMENT_VARIABLE = {
'borg_base_directory': 'BORG_BASE_DIR',
'borg_config_directory': 'BORG_CONFIG_DIR',
@@ -7,8 +10,6 @@ OPTION_TO_ENVIRONMENT_VARIABLE = {
'borg_files_cache_ttl': 'BORG_FILES_CACHE_TTL',
'borg_security_directory': 'BORG_SECURITY_DIR',
'borg_keys_directory': 'BORG_KEYS_DIR',
'encryption_passcommand': 'BORG_PASSCOMMAND',
'encryption_passphrase': 'BORG_PASSPHRASE',
'ssh_command': 'BORG_RSH',
'temporary_directory': 'TMPDIR',
}
@@ -25,17 +26,59 @@ DEFAULT_BOOL_OPTION_TO_UPPERCASE_ENVIRONMENT_VARIABLE = {
def make_environment(config):
'''
Given a borgmatic configuration dict, return its options converted to a Borg environment
variable dict.
Given a borgmatic configuration dict, convert it to a Borg environment variable dict, merge it
with a copy of the current environment variables, and return the result.
Do not reuse this environment across multiple Borg invocations, because it can include
references to resources like anonymous pipes for passphrases—which can only be consumed once.
Here's how native Borg precedence works for a few of the environment variables:
1. BORG_PASSPHRASE, if set, is used first.
2. BORG_PASSCOMMAND is used only if BORG_PASSPHRASE isn't set.
3. BORG_PASSPHRASE_FD is used only if neither of the above are set.
In borgmatic, we want to simulate this precedence order, but there are some additional
complications. First, values can come from either configuration or from environment variables
set outside borgmatic; configured options should take precedence. Second, when borgmatic gets a
passphrase—directly from configuration or indirectly via a credential hook or a passcommand—we
want to pass that passphrase to Borg via an anonymous pipe (+ BORG_PASSPHRASE_FD), since that's
more secure than using an environment variable (BORG_PASSPHRASE).
'''
environment = {}
environment = dict(os.environ)
for option_name, environment_variable_name in OPTION_TO_ENVIRONMENT_VARIABLE.items():
value = config.get(option_name)
if value:
if value is not None:
environment[environment_variable_name] = str(value)
if 'encryption_passphrase' in config:
environment.pop('BORG_PASSPHRASE', None)
environment.pop('BORG_PASSCOMMAND', None)
if 'encryption_passcommand' in config:
environment.pop('BORG_PASSCOMMAND', None)
passphrase = borgmatic.hooks.credential.parse.resolve_credential(
config.get('encryption_passphrase'), config
)
if passphrase is None:
passphrase = borgmatic.borg.passcommand.get_passphrase_from_passcommand(config)
# If there's a passphrase (from configuration, from a configured credential, or from a
# configured passcommand), send it to Borg via an anonymous pipe.
if passphrase is not None:
read_file_descriptor, write_file_descriptor = os.pipe()
os.write(write_file_descriptor, passphrase.encode('utf-8'))
os.close(write_file_descriptor)
# This plus subprocess.Popen(..., close_fds=False) in execute.py is necessary for the Borg
# child process to inherit the file descriptor.
os.set_inheritable(read_file_descriptor, True)
environment['BORG_PASSPHRASE_FD'] = str(read_file_descriptor)
for (
option_name,
environment_variable_name,
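The anonymous-pipe technique described in this docstring can be shown as a self-contained sketch. The helper name `spawn_with_secret_fd` and the `SECRET_FD` variable are illustrative only (borgmatic uses `BORG_PASSPHRASE_FD`), but the mechanics are the same: write the secret into a pipe, mark the read end inheritable, and point the child at that file descriptor via the environment.

```python
import os
import subprocess


def spawn_with_secret_fd(command, secret):
    # Write the secret into an anonymous pipe; close the write end so the
    # child sees EOF after reading it. The pipe can only be consumed once,
    # which is why such an environment must not be reused across invocations.
    read_fd, write_fd = os.pipe()
    os.write(write_fd, secret.encode('utf-8'))
    os.close(write_fd)

    # File descriptors are non-inheritable by default in Python 3.4+, so mark
    # the read end inheritable. Combined with close_fds=False (mirroring the
    # execute.py comment above), the child process inherits it.
    os.set_inheritable(read_fd, True)
    env = dict(os.environ, SECRET_FD=str(read_fd))

    return subprocess.run(command, env=env, close_fds=False, capture_output=True, text=True)


result = spawn_with_secret_fd(
    ['python3', '-c', 'import os; print(os.read(int(os.environ["SECRET_FD"]), 1024).decode())'],
    'hunter2',
)
print(result.stdout.strip())
```

The secret never appears in the environment itself, only a file descriptor number does, which is the security advantage over `BORG_PASSPHRASE` that the docstring describes.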

View File

@@ -60,14 +60,14 @@ def export_key(
)
if global_arguments.dry_run:
logger.info(f'{repository_path}: Skipping key export (dry run)')
logger.info('Skipping key export (dry run)')
return
execute_command(
full_command,
output_file=output_file,
output_log_level=logging.ANSWER,
extra_environment=environment.make_environment(config),
environment=environment.make_environment(config),
working_directory=working_directory,
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),

View File

@@ -20,7 +20,6 @@ def export_tar_archive(
local_path='borg',
remote_path=None,
tar_filter=None,
list_files=False,
strip_components=None,
):
'''
@@ -43,7 +42,7 @@ def export_tar_archive(
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--list',) if list_files else ())
+ (('--list',) if config.get('list_details') else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ (('--dry-run',) if dry_run else ())
+ (('--tar-filter', tar_filter) if tar_filter else ())
@@ -57,20 +56,20 @@ def export_tar_archive(
+ (tuple(paths) if paths else ())
)
if list_files:
if config.get('list_details'):
output_log_level = logging.ANSWER
else:
output_log_level = logging.INFO
if dry_run:
logging.info(f'{repository_path}: Skipping export to tar file (dry run)')
logging.info('Skipping export to tar file (dry run)')
return
execute_command(
full_command,
output_file=DO_NOT_CAPTURE if destination_path == '-' else None,
output_log_level=output_log_level,
extra_environment=environment.make_environment(config),
environment=environment.make_environment(config),
working_directory=borgmatic.config.paths.get_working_directory(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),

View File

@@ -44,7 +44,6 @@ def extract_last_archive_dry_run(
return
list_flag = ('--list',) if logger.isEnabledFor(logging.DEBUG) else ()
borg_environment = environment.make_environment(config)
full_extract_command = (
(local_path, 'extract', '--dry-run')
+ (('--remote-path', remote_path) if remote_path else ())
@@ -59,7 +58,7 @@ def extract_last_archive_dry_run(
execute_command(
full_extract_command,
extra_environment=borg_environment,
environment=environment.make_environment(config),
working_directory=borgmatic.config.paths.get_working_directory(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),
@@ -78,7 +77,6 @@ def extract_archive(
remote_path=None,
destination_path=None,
strip_components=None,
progress=False,
extract_to_stdout=False,
):
'''
@@ -93,8 +91,8 @@ def extract_archive(
umask = config.get('umask', None)
lock_wait = config.get('lock_wait', None)
if progress and extract_to_stdout:
raise ValueError('progress and extract_to_stdout cannot both be set')
if config.get('progress') and extract_to_stdout:
raise ValueError('progress and extract to stdout cannot both be set')
if feature.available(feature.Feature.NUMERIC_IDS, local_borg_version):
numeric_ids_flags = ('--numeric-ids',) if config.get('numeric_ids') else ()
@@ -129,22 +127,19 @@ def extract_archive(
+ (('--debug', '--list', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ (('--dry-run',) if dry_run else ())
+ (('--strip-components', str(strip_components)) if strip_components else ())
+ (('--progress',) if progress else ())
+ (('--progress',) if config.get('progress') else ())
+ (('--stdout',) if extract_to_stdout else ())
+ flags.make_repository_archive_flags(
# Make the repository path absolute so the destination directory used below via changing
# the working directory doesn't prevent Borg from finding the repo. But also apply the
# user's configured working directory (if any) to the repo path.
borgmatic.config.validate.normalize_repository_path(
os.path.join(working_directory or '', repository)
),
borgmatic.config.validate.normalize_repository_path(repository, working_directory),
archive,
local_borg_version,
)
+ (tuple(paths) if paths else ())
)
borg_environment = environment.make_environment(config)
borg_exit_codes = config.get('borg_exit_codes')
full_destination_path = (
os.path.join(working_directory or '', destination_path) if destination_path else None
@@ -152,11 +147,11 @@ def extract_archive(
# The progress output isn't compatible with captured and logged output, as progress messes with
# the terminal directly.
if progress:
if config.get('progress'):
return execute_command(
full_command,
output_file=DO_NOT_CAPTURE,
extra_environment=borg_environment,
environment=environment.make_environment(config),
working_directory=full_destination_path,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
@@ -168,7 +163,7 @@ def extract_archive(
full_command,
output_file=subprocess.PIPE,
run_to_completion=False,
extra_environment=borg_environment,
environment=environment.make_environment(config),
working_directory=full_destination_path,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
@@ -178,7 +173,7 @@ def extract_archive(
# if the restore paths don't exist in the archive.
execute_command(
full_command,
extra_environment=borg_environment,
environment=environment.make_environment(config),
working_directory=full_destination_path,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,

View File

@@ -17,6 +17,7 @@ class Feature(Enum):
MATCH_ARCHIVES = 11
EXCLUDED_FILES_MINUS = 12
ARCHIVE_SERIES = 13
NO_PRUNE_STATS = 14
FEATURE_TO_MINIMUM_BORG_VERSION = {
@@ -33,6 +34,7 @@ FEATURE_TO_MINIMUM_BORG_VERSION = {
Feature.MATCH_ARCHIVES: parse('2.0.0b3'), # borg --match-archives
Feature.EXCLUDED_FILES_MINUS: parse('2.0.0b5'), # --list --filter uses "-" for excludes
Feature.ARCHIVE_SERIES: parse('2.0.0b11'), # identically named archives form a series
Feature.NO_PRUNE_STATS: parse('2.0.0b10'), # prune --stats is not available
}

View File

@@ -156,3 +156,44 @@ def warn_for_aggressive_archive_flags(json_command, json_output):
logger.debug(f'Cannot parse JSON output from archive command: {error}')
except (TypeError, KeyError):
logger.debug('Cannot parse JSON output from archive command: No "archives" key found')
def omit_flag(arguments, flag):
'''
Given a sequence of Borg command-line arguments, return them with the given (valueless) flag
omitted. For instance, if the flag is "--flag" and arguments is:
('borg', 'create', '--flag', '--other-flag')
... then return:
('borg', 'create', '--other-flag')
'''
return tuple(argument for argument in arguments if argument != flag)
def omit_flag_and_value(arguments, flag):
'''
Given a sequence of Borg command-line arguments, return them with the given flag and its
corresponding value omitted. For instance, if the flag is "--flag" and arguments is:
('borg', 'create', '--flag', 'value', '--other-flag')
... or:
('borg', 'create', '--flag=value', '--other-flag')
... then return:
('borg', 'create', '--other-flag')
'''
# This works by zipping together a list of overlapping pairwise arguments. E.g., ('one', 'two',
# 'three', 'four') becomes ((None, 'one'), ('one', 'two'), ('two', 'three'), ('three', 'four')).
# This makes it easy to "look back" at the previous arguments so we can exclude both a flag and
# its value.
return tuple(
argument
for (previous_argument, argument) in zip((None,) + arguments, arguments)
if flag not in (previous_argument, argument)
if not argument.startswith(f'{flag}=')
)
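The two helpers above are small enough to exercise directly. This reimplements them verbatim so the example is self-contained, then shows the difference between dropping a valueless flag and dropping a flag together with its value (in both `--flag value` and `--flag=value` forms).

```python
def omit_flag(arguments, flag):
    # Drop a valueless flag wherever it appears.
    return tuple(argument for argument in arguments if argument != flag)


def omit_flag_and_value(arguments, flag):
    # Pair each argument with its predecessor so a value can be dropped
    # along with the flag that precedes it; also handle the --flag=value form.
    return tuple(
        argument
        for (previous_argument, argument) in zip((None,) + arguments, arguments)
        if flag not in (previous_argument, argument)
        if not argument.startswith(f'{flag}=')
    )


args = ('borg', 'create', '--stats', '--filter', 'AME', '--lock-wait=5')
print(omit_flag(args, '--stats'))
print(omit_flag_and_value(args, '--filter'))
print(omit_flag_and_value(args, '--lock-wait'))
```

The `--filter` case removes both the flag and the following `AME` value, while the `--lock-wait` case matches the fused `--lock-wait=5` form via the `startswith` check.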

View File

@@ -0,0 +1,70 @@
import logging
import os
import borgmatic.config.paths
import borgmatic.logger
from borgmatic.borg import environment, flags
from borgmatic.execute import DO_NOT_CAPTURE, execute_command
logger = logging.getLogger(__name__)
def import_key(
repository_path,
config,
local_borg_version,
import_arguments,
global_arguments,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, a configuration dict, the local Borg version, import
arguments, and optional local and remote Borg paths, import the repository key from the
path indicated in the import arguments.
If the path is empty or "-", then read the key from stdin.
Raise ValueError if the path is given and it does not exist.
'''
umask = config.get('umask', None)
lock_wait = config.get('lock_wait', None)
working_directory = borgmatic.config.paths.get_working_directory(config)
if import_arguments.path and import_arguments.path != '-':
if not os.path.exists(os.path.join(working_directory or '', import_arguments.path)):
raise ValueError(f'Path {import_arguments.path} does not exist. Aborting.')
input_file = None
else:
input_file = DO_NOT_CAPTURE
full_command = (
(local_path, 'key', 'import')
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--umask', str(umask)) if umask else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ flags.make_flags('paper', import_arguments.paper)
+ flags.make_repository_flags(
repository_path,
local_borg_version,
)
+ ((import_arguments.path,) if input_file is None else ())
)
if global_arguments.dry_run:
logger.info('Skipping key import (dry run)')
return
execute_command(
full_command,
input_file=input_file,
output_log_level=logging.INFO,
environment=environment.make_environment(config),
working_directory=working_directory,
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),
)

View File

@@ -48,9 +48,7 @@ def make_info_command(
if info_arguments.prefix
else (
flags.make_match_archives_flags(
info_arguments.match_archives
or info_arguments.archive
or config.get('match_archives'),
info_arguments.archive or config.get('match_archives'),
config.get('archive_name_format'),
local_borg_version,
)
@@ -102,7 +100,7 @@ def display_archives_info(
json_info = execute_command_and_capture_output(
json_command,
extra_environment=environment.make_environment(config),
environment=environment.make_environment(config),
working_directory=working_directory,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
@@ -116,7 +114,7 @@ def display_archives_info(
execute_command(
main_command,
output_log_level=logging.ANSWER,
extra_environment=environment.make_environment(config),
environment=environment.make_environment(config),
working_directory=working_directory,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,

View File

@@ -106,8 +106,6 @@ def capture_archive_listing(
format to use for the output, and local and remote Borg paths, capture the
output of listing that archive and return it as a list of file paths.
'''
borg_environment = environment.make_environment(config)
return tuple(
execute_command_and_capture_output(
make_list_command(
@@ -120,19 +118,19 @@ def capture_archive_listing(
paths=[path for path in list_paths] if list_paths else None,
find_paths=None,
json=None,
format=path_format or '{path}{NL}', # noqa: FS003
format=path_format or '{path}{NUL}', # noqa: FS003
),
global_arguments,
local_path,
remote_path,
),
extra_environment=borg_environment,
environment=environment.make_environment(config),
working_directory=borgmatic.config.paths.get_working_directory(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),
)
.strip('\n')
.split('\n')
.strip('\0')
.split('\0')
)
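The switch above from `{NL}` to `{NUL}` (and from splitting on `'\n'` to `'\0'`) matters because file paths may legally contain newlines. A minimal illustration of why NUL delimiting round-trips such paths while newline delimiting does not:

```python
paths = ['normal.txt', 'tricky\nname.txt']

# What a {path}{NUL} listing format would emit, then parsed the same way as
# the diff above: strip the trailing delimiter and split.
captured = '\0'.join(paths) + '\0'
parsed = captured.strip('\0').split('\0')

# The old newline-delimited behavior splits the tricky path in two.
newline_parsed = ('\n'.join(paths) + '\n').strip('\n').split('\n')

print(parsed == paths)
print(len(newline_parsed))
```

NUL is safe as a delimiter because it is the one byte that cannot occur in a POSIX file path.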
@@ -194,7 +192,6 @@ def list_archive(
'The --json flag on the list action is not supported when using the --archive/--find flags.'
)
borg_environment = environment.make_environment(config)
borg_exit_codes = config.get('borg_exit_codes')
# If there are any paths to find (and there's not a single archive already selected), start by
@@ -224,20 +221,20 @@ def list_archive(
local_path,
remote_path,
),
extra_environment=borg_environment,
environment=environment.make_environment(config),
working_directory=borgmatic.config.paths.get_working_directory(config),
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
)
.strip('\n')
.split('\n')
.splitlines()
)
else:
archive_lines = (list_arguments.archive,)
# For each archive listed by Borg, run list on the contents of that archive.
for archive in archive_lines:
logger.answer(f'{repository_path}: Listing archive {archive}')
logger.answer(f'Listing archive {archive}')
archive_arguments = copy.copy(list_arguments)
archive_arguments.archive = archive
@@ -260,7 +257,7 @@ def list_archive(
execute_command(
main_command,
output_log_level=logging.ANSWER,
extra_environment=borg_environment,
environment=environment.make_environment(config),
working_directory=borgmatic.config.paths.get_working_directory(config),
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,

View File

@@ -59,7 +59,6 @@ def mount_archive(
+ (tuple(mount_arguments.paths) if mount_arguments.paths else ())
)
borg_environment = environment.make_environment(config)
working_directory = borgmatic.config.paths.get_working_directory(config)
# Don't capture the output when foreground mode is used so that ctrl-C can work properly.
@@ -67,7 +66,7 @@ def mount_archive(
execute_command(
full_command,
output_file=DO_NOT_CAPTURE,
extra_environment=borg_environment,
environment=environment.make_environment(config),
working_directory=working_directory,
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),
@@ -76,7 +75,7 @@ def mount_archive(
execute_command(
full_command,
extra_environment=borg_environment,
environment=environment.make_environment(config),
working_directory=working_directory,
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),

View File

@@ -0,0 +1,40 @@
import functools
import logging
import shlex
import borgmatic.config.paths
import borgmatic.execute
logger = logging.getLogger(__name__)
@functools.cache
def run_passcommand(passcommand, working_directory):
'''
Run the given passcommand using the given working directory and return the passphrase produced
by the command.
Cache the results so that the passcommand only needs to run—and potentially prompt the user—once
per borgmatic invocation.
'''
return borgmatic.execute.execute_command_and_capture_output(
shlex.split(passcommand),
working_directory=working_directory,
)
def get_passphrase_from_passcommand(config):
'''
Given the configuration dict, call the configured passcommand to produce and return an
encryption passphrase. In effect, we're doing an end-run around Borg by invoking its passcommand
ourselves. This allows us to pass the resulting passphrase to multiple different Borg
invocations without the user having to be prompted multiple times.
If no passcommand is configured, then return None.
'''
passcommand = config.get('encryption_passcommand')
if not passcommand:
return None
return run_passcommand(passcommand, borgmatic.config.paths.get_working_directory(config))
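The caching behavior of the new passcommand module can be demonstrated standalone. This sketch uses `subprocess.check_output` in place of borgmatic's `execute_command_and_capture_output` and a plain `echo` as a stand-in passcommand; the point is that `functools.cache` keys on the `(passcommand, working_directory)` arguments, so an identical second call reuses the first result instead of re-running the command (and potentially re-prompting the user).

```python
import functools
import shlex
import subprocess

call_count = 0


@functools.cache
def run_passcommand(passcommand, working_directory):
    # Count invocations so the caching is observable.
    global call_count
    call_count += 1
    return subprocess.check_output(
        shlex.split(passcommand), cwd=working_directory, text=True
    ).rstrip('\n')


first = run_passcommand('echo s3kr1t', '/tmp')
second = run_passcommand('echo s3kr1t', '/tmp')
print(first, second, call_count)
```

A call with different arguments (a different command or working directory) would miss the cache and run again, which is exactly the granularity the docstring describes.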

View File

@@ -20,12 +20,31 @@ class Pattern_style(enum.Enum):
PATH_FULL_MATCH = 'pf'
class Pattern_source(enum.Enum):
'''
Where the pattern came from within borgmatic. This is important because certain use cases (like
filesystem snapshotting) only want to consider patterns that the user actually put in a
configuration file and not patterns from other sources.
'''
# The pattern is from a borgmatic configuration option, e.g. listed in "source_directories".
CONFIG = 'config'
# The pattern is generated internally within borgmatic, e.g. for special file excludes.
INTERNAL = 'internal'
# The pattern originates from within a borgmatic hook, e.g. a database hook that adds its dump
# directory.
HOOK = 'hook'
Pattern = collections.namedtuple(
'Pattern',
('path', 'type', 'style', 'device'),
('path', 'type', 'style', 'device', 'source'),
defaults=(
Pattern_type.ROOT,
Pattern_style.NONE,
None,
Pattern_source.HOOK,
),
)
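The extended `Pattern` namedtuple above relies on `defaults` applying to the rightmost fields: `path` has no default, while `type`, `style`, `device`, and the new `source` field do. A self-contained sketch (the enum member values here are placeholders, not borgmatic's actual values):

```python
import collections
import enum


class Pattern_type(enum.Enum):
    ROOT = 'R'


class Pattern_style(enum.Enum):
    NONE = ''


class Pattern_source(enum.Enum):
    CONFIG = 'config'
    INTERNAL = 'internal'
    HOOK = 'hook'


# defaults align with the *last* four fields, so only "path" is required.
Pattern = collections.namedtuple(
    'Pattern',
    ('path', 'type', 'style', 'device', 'source'),
    defaults=(Pattern_type.ROOT, Pattern_style.NONE, None, Pattern_source.HOOK),
)

pattern = Pattern('/etc')
print(pattern.type, pattern.source)
print(Pattern('/home', source=Pattern_source.CONFIG).source)
```

Defaulting `source` to `HOOK` means existing hook call sites that construct a `Pattern` without the new field keep working, while code like the special-file excludes can opt in to `INTERNAL` explicitly.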

View File

@@ -41,7 +41,7 @@ def make_prune_flags(config, prune_arguments, local_borg_version):
if prefix
else (
flags.make_match_archives_flags(
prune_arguments.match_archives or config.get('match_archives'),
config.get('match_archives'),
config.get('archive_name_format'),
local_borg_version,
)
@@ -75,20 +75,26 @@ def prune_archives(
+ (('--umask', str(umask)) if umask else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (('--stats',) if prune_arguments.stats and not dry_run else ())
+ (
('--stats',)
if config.get('statistics')
and not dry_run
and not feature.available(feature.Feature.NO_PRUNE_STATS, local_borg_version)
else ()
)
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ flags.make_flags_from_arguments(
prune_arguments,
excludes=('repository', 'match_archives', 'stats', 'list_archives'),
excludes=('repository', 'match_archives', 'statistics', 'list_details'),
)
+ (('--list',) if prune_arguments.list_archives else ())
+ (('--list',) if config.get('list_details') else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ (('--dry-run',) if dry_run else ())
+ (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
+ flags.make_repository_flags(repository_path, local_borg_version)
)
if prune_arguments.stats or prune_arguments.list_archives:
if config.get('statistics') or config.get('list_details'):
output_log_level = logging.ANSWER
else:
output_log_level = logging.INFO
@@ -96,7 +102,7 @@ def prune_archives(
execute_command(
full_command,
output_log_level=output_log_level,
extra_environment=environment.make_environment(config),
environment=environment.make_environment(config),
working_directory=borgmatic.config.paths.get_working_directory(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),

borgmatic/borg/recreate.py (new file, 103 lines)
View File

@@ -0,0 +1,103 @@
import logging
import shlex
import borgmatic.borg.environment
import borgmatic.borg.feature
import borgmatic.config.paths
import borgmatic.execute
from borgmatic.borg import flags
from borgmatic.borg.create import make_exclude_flags, make_list_filter_flags, write_patterns_file
logger = logging.getLogger(__name__)
def recreate_archive(
repository,
archive,
config,
local_borg_version,
recreate_arguments,
global_arguments,
local_path,
remote_path=None,
patterns=None,
):
'''
Given a local or remote repository path, an archive name, a configuration dict, the local Borg
version string, an argparse.Namespace of recreate arguments, an argparse.Namespace of global
arguments, and optional local and remote Borg paths, execute the recreate command with the
given arguments.
'''
lock_wait = config.get('lock_wait', None)
exclude_flags = make_exclude_flags(config)
compression = config.get('compression', None)
chunker_params = config.get('chunker_params', None)
# Available recompress modes: "if-different", "always", and "never" (the default).
recompress = config.get('recompress', None)
# Write patterns to a temporary file and use that file with --patterns-from.
patterns_file = write_patterns_file(
patterns, borgmatic.config.paths.get_working_directory(config)
)
recreate_command = (
(local_path, 'recreate')
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait is not None else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ (('--patterns-from', patterns_file.name) if patterns_file else ())
+ (
(
'--list',
'--filter',
make_list_filter_flags(local_borg_version, global_arguments.dry_run),
)
if config.get('list_details')
else ()
)
# Flag --target works only for a single archive.
+ (('--target', recreate_arguments.target) if recreate_arguments.target and archive else ())
+ (
('--comment', shlex.quote(recreate_arguments.comment))
if recreate_arguments.comment
else ()
)
+ (('--timestamp', recreate_arguments.timestamp) if recreate_arguments.timestamp else ())
+ (('--compression', compression) if compression else ())
+ (('--chunker-params', chunker_params) if chunker_params else ())
+ (('--recompress', recompress) if recompress else ())
+ exclude_flags
+ (
(
flags.make_repository_flags(repository, local_borg_version)
+ flags.make_match_archives_flags(
archive or config.get('match_archives'),
config.get('archive_name_format'),
local_borg_version,
)
)
if borgmatic.borg.feature.available(
borgmatic.borg.feature.Feature.SEPARATE_REPOSITORY_ARCHIVE, local_borg_version
)
else (
flags.make_repository_archive_flags(repository, archive, local_borg_version)
if archive
else flags.make_repository_flags(repository, local_borg_version)
)
)
)
if global_arguments.dry_run:
logger.info('Skipping the archive recreation (dry run)')
return
borgmatic.execute.execute_command(
full_command=recreate_command,
output_log_level=logging.INFO,
environment=borgmatic.borg.environment.make_environment(config),
working_directory=borgmatic.config.paths.get_working_directory(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),
)

View File

@@ -24,7 +24,7 @@ def create_repository(
copy_crypt_key=False,
append_only=None,
storage_quota=None,
make_parent_dirs=False,
make_parent_directories=False,
local_path='borg',
remote_path=None,
):
@@ -57,7 +57,7 @@ def create_repository(
f'Requested encryption mode "{encryption_mode}" does not match existing repository encryption mode "{repository_encryption_mode}"'
)
logger.info(f'{repository_path}: Repository already exists. Skipping creation.')
logger.info('Repository already exists. Skipping creation.')
return
except subprocess.CalledProcessError as error:
if error.returncode not in REPO_INFO_REPOSITORY_NOT_FOUND_EXIT_CODES:
@@ -79,7 +79,7 @@ def create_repository(
+ (('--copy-crypt-key',) if copy_crypt_key else ())
+ (('--append-only',) if append_only else ())
+ (('--storage-quota', storage_quota) if storage_quota else ())
+ (('--make-parent-dirs',) if make_parent_dirs else ())
+ (('--make-parent-dirs',) if make_parent_directories else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug',) if logger.isEnabledFor(logging.DEBUG) else ())
+ (('--log-json',) if global_arguments.log_json else ())
@@ -91,14 +91,14 @@ def create_repository(
)
if dry_run:
logging.info(f'{repository_path}: Skipping repository creation (dry run)')
logging.info('Skipping repository creation (dry run)')
return
# Do not capture output here, so as to support interactive prompts.
execute_command(
repo_create_command,
output_file=DO_NOT_CAPTURE,
extra_environment=environment.make_environment(config),
environment=environment.make_environment(config),
working_directory=borgmatic.config.paths.get_working_directory(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),

View File

@@ -39,14 +39,14 @@ def make_repo_delete_command(
+ borgmatic.borg.flags.make_flags('umask', config.get('umask'))
+ borgmatic.borg.flags.make_flags('log-json', global_arguments.log_json)
+ borgmatic.borg.flags.make_flags('lock-wait', config.get('lock_wait'))
+ borgmatic.borg.flags.make_flags('list', repo_delete_arguments.list_archives)
+ borgmatic.borg.flags.make_flags('list', config.get('list_details'))
+ (
(('--force',) + (('--force',) if repo_delete_arguments.force >= 2 else ()))
if repo_delete_arguments.force
else ()
)
+ borgmatic.borg.flags.make_flags_from_arguments(
repo_delete_arguments, excludes=('list_archives', 'force', 'repository')
repo_delete_arguments, excludes=('list_details', 'force', 'repository')
)
+ borgmatic.borg.flags.make_repository_flags(repository['path'], local_borg_version)
)
@@ -88,7 +88,7 @@ def delete_repository(
if repo_delete_arguments.force or repo_delete_arguments.cache_only
else borgmatic.execute.DO_NOT_CAPTURE
),
extra_environment=borgmatic.borg.environment.make_environment(config),
environment=borgmatic.borg.environment.make_environment(config),
working_directory=borgmatic.config.paths.get_working_directory(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),

View File

@@ -50,14 +50,13 @@ def display_repository_info(
+ flags.make_repository_flags(repository_path, local_borg_version)
)
extra_environment = environment.make_environment(config)
working_directory = borgmatic.config.paths.get_working_directory(config)
borg_exit_codes = config.get('borg_exit_codes')
if repo_info_arguments.json:
return execute_command_and_capture_output(
full_command,
extra_environment=extra_environment,
environment=environment.make_environment(config),
working_directory=working_directory,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
@@ -66,7 +65,7 @@ def display_repository_info(
execute_command(
full_command,
output_log_level=logging.ANSWER,
extra_environment=extra_environment,
environment=environment.make_environment(config),
working_directory=working_directory,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,

View File

@@ -49,7 +49,7 @@ def resolve_archive_name(
output = execute_command_and_capture_output(
full_command,
extra_environment=environment.make_environment(config),
environment=environment.make_environment(config),
working_directory=borgmatic.config.paths.get_working_directory(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),
@@ -59,7 +59,7 @@ def resolve_archive_name(
except IndexError:
raise ValueError('No archives found in the repository')
logger.debug(f'{repository_path}: Latest archive is {latest_archive}')
logger.debug(f'Latest archive is {latest_archive}')
return latest_archive
@@ -113,7 +113,7 @@ def make_repo_list_command(
if repo_list_arguments.prefix
else (
flags.make_match_archives_flags(
repo_list_arguments.match_archives or config.get('match_archives'),
config.get('match_archives'),
config.get('archive_name_format'),
local_borg_version,
)
@@ -140,7 +140,6 @@ def list_repository(
return JSON output).
'''
borgmatic.logger.add_custom_log_levels()
borg_environment = environment.make_environment(config)
main_command = make_repo_list_command(
repository_path,
@@ -165,7 +164,7 @@ def list_repository(
json_listing = execute_command_and_capture_output(
json_command,
extra_environment=borg_environment,
environment=environment.make_environment(config),
working_directory=working_directory,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
@@ -179,7 +178,7 @@ def list_repository(
execute_command(
main_command,
output_log_level=logging.ANSWER,
extra_environment=borg_environment,
environment=environment.make_environment(config),
working_directory=working_directory,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,

View File

@@ -32,17 +32,22 @@ def transfer_archives(
+ flags.make_flags('remote-path', remote_path)
+ flags.make_flags('umask', config.get('umask'))
+ flags.make_flags('log-json', global_arguments.log_json)
+ flags.make_flags('lock-wait', config.get('lock_wait', None))
+ flags.make_flags('lock-wait', config.get('lock_wait'))
+ flags.make_flags('progress', config.get('progress'))
+ (
flags.make_flags_from_arguments(
transfer_arguments,
excludes=('repository', 'source_repository', 'archive', 'match_archives'),
excludes=(
'repository',
'source_repository',
'archive',
'match_archives',
'progress',
),
)
or (
flags.make_match_archives_flags(
transfer_arguments.match_archives
or transfer_arguments.archive
or config.get('match_archives'),
transfer_arguments.archive or config.get('match_archives'),
config.get('archive_name_format'),
local_borg_version,
)
@@ -56,8 +61,8 @@ def transfer_archives(
return execute_command(
full_command,
output_log_level=logging.ANSWER,
output_file=DO_NOT_CAPTURE if transfer_arguments.progress else None,
extra_environment=environment.make_environment(config),
output_file=DO_NOT_CAPTURE if config.get('progress') else None,
environment=environment.make_environment(config),
working_directory=borgmatic.config.paths.get_working_directory(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),

View File

@@ -21,7 +21,7 @@ def local_borg_version(config, local_path='borg'):
)
output = execute_command_and_capture_output(
full_command,
extra_environment=environment.make_environment(config),
environment=environment.make_environment(config),
working_directory=borgmatic.config.paths.get_working_directory(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),

View File

@@ -1,8 +1,13 @@
import collections
import io
import itertools
import re
import sys
from argparse import ArgumentParser
import ruamel.yaml
import borgmatic.config.schema
from borgmatic.config import collect
ACTION_ALIASES = {
@@ -27,6 +32,7 @@ ACTION_ALIASES = {
'break-lock': [],
'key': [],
'borg': [],
'recreate': [],
}
@@ -63,9 +69,9 @@ def get_subactions_for_actions(action_parsers):
def omit_values_colliding_with_action_names(unparsed_arguments, parsed_arguments):
'''
Given a sequence of string arguments and a dict from action name to parsed argparse.Namespace
arguments, return the string arguments with any values omitted that happen to be the same as
the name of a borgmatic action.
Given unparsed arguments as a sequence of strings and a dict from action name to parsed
argparse.Namespace arguments, return the string arguments with any values omitted that happen to
be the same as the name of a borgmatic action.
This prevents, for instance, "check --only extract" from triggering the "extract" action.
'''
@@ -282,17 +288,270 @@ def parse_arguments_for_actions(unparsed_arguments, action_parsers, global_parse
)
def make_parsers():
OMITTED_FLAG_NAMES = {'match-archives', 'progress', 'statistics', 'list-details'}
def make_argument_description(schema, flag_name):
'''
Build a global arguments parser, individual action parsers, and a combined parser containing
both. Return them as a tuple. The global parser is useful for parsing just global arguments
while ignoring actions, and the combined parser is handy for displaying help that includes
everything: global flags, a list of actions, etc.
Given a configuration schema dict and a flag name for it, extend the schema's description with
an example or additional information as appropriate based on its type. Return the updated
description for use in a command-line argument.
'''
description = schema.get('description')
schema_type = schema.get('type')
example = schema.get('example')
pieces = [description] if description else []
if '[0]' in flag_name:
pieces.append(
'To specify a different list element, replace the "[0]" with another array index ("[1]", "[2]", etc.).'
)
if example and schema_type in ('array', 'object'):
example_buffer = io.StringIO()
yaml = ruamel.yaml.YAML(typ='safe')
yaml.default_flow_style = True
yaml.dump(example, example_buffer)
pieces.append(f'Example value: "{example_buffer.getvalue().strip()}"')
return ' '.join(pieces).replace('%', '%%')
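The description-building helper above can be sketched with the standard library alone. This is an illustrative approximation, not the shipped code: `json.dumps` stands in for the flow-style `ruamel.yaml` dump, since JSON is a subset of YAML flow style.

```python
import json

def make_argument_description(schema, flag_name):
    # Stdlib sketch of the helper above; json.dumps stands in for the
    # flow-style YAML dump used in the real code.
    pieces = [schema['description']] if schema.get('description') else []
    if '[0]' in flag_name:
        pieces.append('To specify a different list element, replace the "[0]" with another array index.')
    if schema.get('example') and schema.get('type') in ('array', 'object'):
        pieces.append(f'Example value: "{json.dumps(schema["example"])}"')
    # Escape "%" so argparse doesn't treat it as a format specifier.
    return ' '.join(pieces).replace('%', '%%')

print(make_argument_description(
    {'description': 'Paths to back up.', 'type': 'array', 'example': ['/home', '/etc']},
    'source-directories',
))
# Paths to back up. Example value: "["/home", "/etc"]"
```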
def add_array_element_arguments(arguments_group, unparsed_arguments, flag_name):
r'''
Given an argparse._ArgumentGroup instance, a sequence of unparsed argument strings, and a dotted
flag name, add command-line array element flags that correspond to the given unparsed arguments.
Here's the background. We want to support flags that can have arbitrary indices like:
--foo.bar[1].baz
But argparse doesn't support that natively because the index can be an arbitrary number. We
won't let that stop us though, will we?
If the current flag name has an array component in it (e.g. a name with "[0]"), then make a
pattern that would match the flag name regardless of the number that's in it. The idea is that
we want to look for unparsed arguments that appear like the flag name, but instead of "[0]" they
have, say, "[1]" or "[123]".
Next, we check each unparsed argument against that pattern. If one of them matches, add an
argument flag for it to the argument parser group. Example:
Let's say flag_name is:
--foo.bar[0].baz
... then the regular expression pattern will be:
^--foo\.bar\[\d+\]\.baz
... and, if that matches an unparsed argument of:
--foo.bar[1].baz
... then an argument flag will get added equal to that unparsed argument. And so the unparsed
argument will match it when parsing is performed! In this manner, we're using the actual user
CLI input to inform what exact flags we support.
'''
if '[0]' not in flag_name or not unparsed_arguments or '--help' in unparsed_arguments:
return
pattern = re.compile(fr'^--{flag_name.replace("[0]", r"\[\d+\]").replace(".", r"\.")}$')
try:
# Find an existing list index flag (and its action) corresponding to the given flag name.
(argument_action, existing_flag_name) = next(
(action, action_flag_name)
for action in arguments_group._group_actions
for action_flag_name in action.option_strings
if pattern.match(action_flag_name)
if f'--{flag_name}'.startswith(action_flag_name)
)
# Based on the type of the action (e.g. argparse._StoreTrueAction), look up the corresponding
# action registry name (e.g., "store_true") to pass to add_argument(action=...) below.
action_registry_name = next(
registry_name
for registry_name, action_type in arguments_group._registries['action'].items()
# Not using isinstance() here because we only want an exact match—no parent classes.
if type(argument_action) is action_type
)
except StopIteration:
return
for unparsed in unparsed_arguments:
unparsed_flag_name = unparsed.split('=', 1)[0]
destination_name = unparsed_flag_name.lstrip('-').replace('-', '_')
if not pattern.match(unparsed_flag_name) or unparsed_flag_name == existing_flag_name:
continue
if action_registry_name in ('store_true', 'store_false'):
arguments_group.add_argument(
unparsed_flag_name,
action=action_registry_name,
default=argument_action.default,
dest=destination_name,
required=argument_action.nargs,
)
else:
arguments_group.add_argument(
unparsed_flag_name,
action=action_registry_name,
choices=argument_action.choices,
default=argument_action.default,
dest=destination_name,
nargs=argument_action.nargs,
required=argument_action.nargs,
type=argument_action.type,
)
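The arbitrary-index matching described in the docstring above boils down to one regex transformation. A minimal sketch of that transformation (the helper name here is illustrative, not part of the source):

```python
import re

def index_flag_pattern(flag_name):
    # Turn a canonical "[0]" flag name into a regex matching any index,
    # mirroring the pattern construction in add_array_element_arguments().
    return re.compile(
        r'^--' + flag_name.replace('[0]', r'\[\d+\]').replace('.', r'\.') + '$'
    )

pattern = index_flag_pattern('foo.bar[0].baz')
print(pattern.pattern)                                  # ^--foo\.bar\[\d+\]\.baz$
print(bool(pattern.match('--foo.bar[1].baz')))          # True
print(bool(pattern.match('--foo.bar[123].baz')))        # True
print(bool(pattern.match('--foo.barn[1].baz')))         # False
```

Note that the dots are escaped after the `[0]` substitution, so only the literal dots from the original flag name are escaped.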
def add_arguments_from_schema(arguments_group, schema, unparsed_arguments, names=None):
'''
Given an argparse._ArgumentGroup instance, a configuration schema dict, and a sequence of
unparsed argument strings, convert the entire schema into corresponding command-line flags and
add them to the arguments group.
For instance, given a schema of:
{
'type': 'object',
'properties': {
'foo': {
'type': 'object',
'properties': {
'bar': {'type': 'integer'}
}
}
}
}
... the following flag will be added to the arguments group:
--foo.bar
If "foo" is instead an array of objects, both of the following will get added:
--foo
--foo[0].bar
And if names are also passed in, they are considered to be the name components of an option
(e.g. "foo" and "bar") and are used to construct a resulting flag.
Bail if the schema is not a dict.
'''
if names is None:
names = ()
if not isinstance(schema, dict):
return
schema_type = schema.get('type')
# If this option has multiple types, just use the first one (that isn't "null").
if isinstance(schema_type, list):
try:
schema_type = next(single_type for single_type in schema_type if single_type != 'null')
except StopIteration:
raise ValueError(f'Unknown type in configuration schema: {schema_type}')
# If this is an "object" type, recurse for each child option ("property").
if schema_type == 'object':
properties = schema.get('properties')
# If there are child properties, recurse for each one. But if there are no child properties,
# fall through so that a flag gets added below for the (empty) object.
if properties:
for name, child in properties.items():
add_arguments_from_schema(
arguments_group, child, unparsed_arguments, names + (name,)
)
return
# If this is an "array" type, recurse for each items type child option. Don't return yet so that
# a flag also gets added below for the array itself.
if schema_type == 'array':
items = schema.get('items', {})
properties = borgmatic.config.schema.get_properties(items)
if properties:
for name, child in properties.items():
add_arguments_from_schema(
arguments_group,
child,
unparsed_arguments,
names[:-1] + (f'{names[-1]}[0]',) + (name,),
)
# If there aren't any children, then this is an array of scalars. Recurse accordingly.
else:
add_arguments_from_schema(
arguments_group, items, unparsed_arguments, names[:-1] + (f'{names[-1]}[0]',)
)
flag_name = '.'.join(names).replace('_', '-')
# Certain options already have corresponding flags on individual actions (like "create
# --progress"), so don't bother adding them to the global flags.
if not flag_name or flag_name in OMITTED_FLAG_NAMES:
return
metavar = names[-1].upper()
description = make_argument_description(schema, flag_name)
# The object=str and array=str given here is to support specifying an object or an array as a
# YAML string on the command-line.
argument_type = borgmatic.config.schema.parse_type(schema_type, object=str, array=str)
# As a UX nicety, add separate true and false flags for boolean options.
if schema_type == 'boolean':
arguments_group.add_argument(
f'--{flag_name}',
action='store_true',
default=None,
help=description,
)
if names[-1].startswith('no_'):
no_flag_name = '.'.join(names[:-1] + (names[-1][len('no_') :],)).replace('_', '-')
else:
no_flag_name = '.'.join(names[:-1] + ('no-' + names[-1],)).replace('_', '-')
arguments_group.add_argument(
f'--{no_flag_name}',
dest=flag_name.replace('-', '_'),
action='store_false',
default=None,
help=f'Set the --{flag_name} value to false.',
)
else:
arguments_group.add_argument(
f'--{flag_name}',
type=argument_type,
metavar=metavar,
help=description,
)
add_array_element_arguments(arguments_group, unparsed_arguments, flag_name)
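The schema walk above can be condensed into a small generator to show how dotted flag names fall out of the recursion. This is a simplified, illustrative sketch (no boolean "--no-" variants, no omitted-flag filtering), not the shipped function:

```python
def flag_names_from_schema(schema, names=()):
    # Recursively yield dotted flag names for each schema option,
    # mirroring add_arguments_from_schema() in miniature.
    if not isinstance(schema, dict):
        return
    if schema.get('type') == 'object':
        for name, child in (schema.get('properties') or {}).items():
            yield from flag_names_from_schema(child, names + (name,))
        return
    if schema.get('type') == 'array':
        items = schema.get('items', {})
        for name, child in (items.get('properties') or {}).items():
            yield from flag_names_from_schema(child, names[:-1] + (f'{names[-1]}[0]', name))
        # Fall through so the array itself also gets a flag.
    if names:
        yield '.'.join(names).replace('_', '-')

schema = {
    'type': 'object',
    'properties': {
        'foo': {'type': 'object', 'properties': {'bar_baz': {'type': 'integer'}}},
        'repos': {
            'type': 'array',
            'items': {'type': 'object', 'properties': {'path': {'type': 'string'}}},
        },
    },
}
print(sorted(flag_names_from_schema(schema)))
# ['foo.bar-baz', 'repos', 'repos[0].path']
```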
def make_parsers(schema, unparsed_arguments):
'''
Given a configuration schema dict and unparsed arguments as a sequence of strings, build a
global arguments parser, individual action parsers, and a combined parser containing both.
Return them as a tuple. The global parser is useful for parsing just global arguments while
ignoring actions, and the combined parser is handy for displaying help that includes everything:
global flags, a list of actions, etc.
'''
config_paths = collect.get_default_config_paths(expand_home=True)
unexpanded_config_paths = collect.get_default_config_paths(expand_home=False)
global_parser = ArgumentParser(add_help=False)
# Using allow_abbrev=False here prevents the global parser from erroring about "ambiguous"
# options like --encryption. Such options are intended for an action parser rather than the
# global parser, and so we don't want to error on them here.
global_parser = ArgumentParser(allow_abbrev=False, add_help=False)
global_group = global_parser.add_argument_group('global arguments')
global_group.add_argument(
@@ -309,9 +568,6 @@ def make_parsers():
action='store_true',
help='Go through the motions, but do not actually write to any repositories',
)
global_group.add_argument(
'-nc', '--no-color', dest='no_color', action='store_true', help='Disable colored output'
)
global_group.add_argument(
'-v',
'--verbosity',
@@ -349,12 +605,12 @@ def make_parsers():
global_group.add_argument(
'--log-file-format',
type=str,
help='Log format string used for log messages written to the log file',
help='Python format string used for log messages written to the log file',
)
global_group.add_argument(
'--log-json',
action='store_true',
help='Write log messages and console output as one JSON object per log line instead of formatted text',
help='Write Borg log messages and console output as one JSON object per log line instead of formatted text',
)
global_group.add_argument(
'--override',
@@ -388,6 +644,7 @@ def make_parsers():
action='store_true',
help='Display installed version number of borgmatic and exit',
)
add_arguments_from_schema(global_group, schema, unparsed_arguments)
global_plus_action_parser = ArgumentParser(
description='''
@@ -415,7 +672,6 @@ def make_parsers():
'--encryption',
dest='encryption_mode',
help='Borg repository encryption mode',
required=True,
)
repo_create_group.add_argument(
'--source-repository',
@@ -434,6 +690,7 @@ def make_parsers():
)
repo_create_group.add_argument(
'--append-only',
default=None,
action='store_true',
help='Create an append-only repository',
)
@@ -443,6 +700,8 @@ def make_parsers():
)
repo_create_group.add_argument(
'--make-parent-dirs',
dest='make_parent_directories',
default=None,
action='store_true',
help='Create any missing parent directories of the repository directory',
)
@@ -477,7 +736,7 @@ def make_parsers():
)
transfer_group.add_argument(
'--progress',
default=False,
default=None,
action='store_true',
help='Display progress as each archive is transferred',
)
@@ -544,13 +803,17 @@ def make_parsers():
)
prune_group.add_argument(
'--stats',
dest='stats',
default=False,
dest='statistics',
default=None,
action='store_true',
help='Display statistics of the pruned archive',
help='Display statistics of the pruned archive [Borg 1 only]',
)
prune_group.add_argument(
'--list', dest='list_archives', action='store_true', help='List archives kept/pruned'
'--list',
dest='list_details',
default=None,
action='store_true',
help='List archives kept/pruned',
)
prune_group.add_argument(
'--oldest',
@@ -588,8 +851,7 @@ def make_parsers():
)
compact_group.add_argument(
'--progress',
dest='progress',
default=False,
default=None,
action='store_true',
help='Display progress as each segment is compacted',
)
@@ -603,7 +865,7 @@ def make_parsers():
compact_group.add_argument(
'--threshold',
type=int,
dest='threshold',
dest='compact_threshold',
help='Minimum saved space percentage threshold for compacting a segment, defaults to 10',
)
compact_group.add_argument(
@@ -624,20 +886,24 @@ def make_parsers():
)
create_group.add_argument(
'--progress',
dest='progress',
default=False,
default=None,
action='store_true',
help='Display progress for each file as it is backed up',
)
create_group.add_argument(
'--stats',
dest='stats',
default=False,
dest='statistics',
default=None,
action='store_true',
help='Display statistics of archive',
)
create_group.add_argument(
'--list', '--files', dest='list_files', action='store_true', help='Show per-file details'
'--list',
'--files',
dest='list_details',
default=None,
action='store_true',
help='Show per-file details',
)
create_group.add_argument(
'--json', dest='json', default=False, action='store_true', help='Output results as JSON'
@@ -658,8 +924,7 @@ def make_parsers():
)
check_group.add_argument(
'--progress',
dest='progress',
default=False,
default=None,
action='store_true',
help='Display progress for each file as it is checked',
)
@@ -716,12 +981,15 @@ def make_parsers():
)
delete_group.add_argument(
'--list',
dest='list_archives',
dest='list_details',
default=None,
action='store_true',
help='Show details for the deleted archives',
)
delete_group.add_argument(
'--stats',
dest='statistics',
default=None,
action='store_true',
help='Display statistics for the deleted archives',
)
@@ -826,8 +1094,7 @@ def make_parsers():
)
extract_group.add_argument(
'--progress',
dest='progress',
default=False,
default=None,
action='store_true',
help='Display progress for each file as it is extracted',
)
@@ -902,8 +1169,7 @@ def make_parsers():
)
config_bootstrap_group.add_argument(
'--progress',
dest='progress',
default=False,
default=None,
action='store_true',
help='Display progress for each file as it is extracted',
)
@@ -996,7 +1262,12 @@ def make_parsers():
'--tar-filter', help='Name of filter program to pipe data through'
)
export_tar_group.add_argument(
'--list', '--files', dest='list_files', action='store_true', help='Show per-file details'
'--list',
'--files',
dest='list_details',
default=None,
action='store_true',
help='Show per-file details',
)
export_tar_group.add_argument(
'--strip-components',
@@ -1107,7 +1378,8 @@ def make_parsers():
)
repo_delete_group.add_argument(
'--list',
dest='list_archives',
dest='list_details',
default=None,
action='store_true',
help='Show details for the archives in the given repository',
)
@@ -1479,6 +1751,31 @@ def make_parsers():
'-h', '--help', action='help', help='Show this help message and exit'
)
key_import_parser = key_parsers.add_parser(
'import',
help='Import a copy of the repository key from backup',
description='Import a copy of the repository key from backup',
add_help=False,
)
key_import_group = key_import_parser.add_argument_group('key import arguments')
key_import_group.add_argument(
'--paper',
action='store_true',
help='Import interactively from a backup done with --paper',
)
key_import_group.add_argument(
'--repository',
help='Path of repository to import the key from, defaults to the configured repository if there is only one, quoted globs supported',
)
key_import_group.add_argument(
'--path',
metavar='PATH',
help='Path to import the key from backup, defaults to stdin',
)
key_import_group.add_argument(
'-h', '--help', action='help', help='Show this help message and exit'
)
key_change_passphrase_parser = key_parsers.add_parser(
'change-passphrase',
help='Change the passphrase protecting the repository key',
@@ -1496,6 +1793,56 @@ def make_parsers():
'-h', '--help', action='help', help='Show this help message and exit'
)
recreate_parser = action_parsers.add_parser(
'recreate',
aliases=ACTION_ALIASES['recreate'],
help='Recreate an archive in a repository (with Borg 1.2+, you must run compact afterwards to actually free space)',
description='Recreate an archive in a repository (with Borg 1.2+, you must run compact afterwards to actually free space)',
add_help=False,
)
recreate_group = recreate_parser.add_argument_group('recreate arguments')
recreate_group.add_argument(
'--repository',
help='Path of repository containing archive to recreate, defaults to the configured repository if there is only one, quoted globs supported',
)
recreate_group.add_argument(
'--archive',
help='Archive name, hash, or series to recreate',
)
recreate_group.add_argument(
'--list',
dest='list_details',
default=None,
action='store_true',
help='Show per-file details',
)
recreate_group.add_argument(
'--target',
metavar='TARGET',
help='Create a new archive from the specified archive (via --archive), without replacing it',
)
recreate_group.add_argument(
'--comment',
metavar='COMMENT',
help='Add a comment text to the archive or, if an archive is not provided, to all matching archives',
)
recreate_group.add_argument(
'--timestamp',
metavar='TIMESTAMP',
help='Manually override the archive creation date/time (UTC)',
)
recreate_group.add_argument(
'-a',
'--match-archives',
'--glob-archives',
dest='match_archives',
metavar='PATTERN',
help='Only consider archive names, hashes, or series matching this pattern [Borg 2.x+ only]',
)
recreate_group.add_argument(
'-h', '--help', action='help', help='Show this help message and exit'
)
borg_parser = action_parsers.add_parser(
'borg',
aliases=ACTION_ALIASES['borg'],
@@ -1523,15 +1870,18 @@ def make_parsers():
return global_parser, action_parsers, global_plus_action_parser
def parse_arguments(*unparsed_arguments):
def parse_arguments(schema, *unparsed_arguments):
'''
Given command-line arguments with which this script was invoked, parse the arguments and return
them as a dict mapping from action name (or "global") to an argparse.Namespace instance.
Given a configuration schema dict and the command-line arguments with which this script was
invoked and unparsed arguments as a sequence of strings, parse the arguments and return them as
a dict mapping from action name (or "global") to an argparse.Namespace instance.
Raise ValueError if the arguments cannot be parsed.
Raise SystemExit with an error code of 0 if "--help" was requested.
'''
global_parser, action_parsers, global_plus_action_parser = make_parsers()
global_parser, action_parsers, global_plus_action_parser = make_parsers(
schema, unparsed_arguments
)
arguments, remaining_action_arguments = parse_arguments_for_actions(
unparsed_arguments, action_parsers.choices, global_parser
)
@@ -1559,15 +1909,6 @@ def parse_arguments(*unparsed_arguments):
f"Unrecognized argument{'s' if len(unknown_arguments) > 1 else ''}: {' '.join(unknown_arguments)}"
)
if 'create' in arguments and arguments['create'].list_files and arguments['create'].progress:
raise ValueError(
'With the create action, only one of --list (--files) and --progress flags can be used.'
)
if 'create' in arguments and arguments['create'].list_files and arguments['create'].json:
raise ValueError(
'With the create action, only one of --list (--files) and --json flags can be used.'
)
if (
('list' in arguments and 'repo-info' in arguments and arguments['list'].json)
or ('list' in arguments and 'info' in arguments and arguments['list'].json)
@@ -1575,15 +1916,6 @@ def parse_arguments(*unparsed_arguments):
):
raise ValueError('With the --json flag, multiple actions cannot be used together.')
if (
'transfer' in arguments
and arguments['transfer'].archive
and arguments['transfer'].match_archives
):
raise ValueError(
'With the transfer action, only one of --archive and --match-archives flags can be used.'
)
if 'list' in arguments and (arguments['list'].prefix and arguments['list'].match_archives):
raise ValueError(
'With the list action, only one of --prefix or --match-archives flags can be used.'

File diff suppressed because it is too large

View File

@@ -1,5 +1,7 @@
import borgmatic.commands.arguments
import borgmatic.commands.completion.actions
import borgmatic.commands.completion.flag
import borgmatic.config.validate
def parser_flags(parser):
@@ -7,7 +9,12 @@ def parser_flags(parser):
Given an argparse.ArgumentParser instance, return its argument flags in a space-separated
string.
'''
return ' '.join(option for action in parser._actions for option in action.option_strings)
return ' '.join(
flag_variant
for action in parser._actions
for flag_name in action.option_strings
for flag_variant in borgmatic.commands.completion.flag.variants(flag_name)
)
def bash_completion():
@@ -19,7 +26,10 @@ def bash_completion():
unused_global_parser,
action_parsers,
global_plus_action_parser,
) = borgmatic.commands.arguments.make_parsers()
) = borgmatic.commands.arguments.make_parsers(
schema=borgmatic.config.validate.load_schema(borgmatic.config.validate.schema_filename()),
unparsed_arguments=(),
)
global_flags = parser_flags(global_plus_action_parser)
# Avert your eyes.

View File

@@ -4,6 +4,7 @@ from textwrap import dedent
import borgmatic.commands.arguments
import borgmatic.commands.completion.actions
import borgmatic.config.validate
def has_file_options(action: Action):
@@ -26,9 +27,11 @@ def has_choice_options(action: Action):
def has_unknown_required_param_options(action: Action):
'''
A catch-all for options that take a required parameter, but we don't know what the parameter is.
This should be used last. These are actions that take something like a glob, a list of numbers, or a string.
This should be used last. These are actions that take something like a glob, a list of numbers,
or a string.
Actions that match this pattern should not show the normal arguments, because those are unlikely to be valid.
Actions that match this pattern should not show the normal arguments, because those are unlikely
to be valid.
'''
return (
action.required is True
@@ -52,9 +55,9 @@ def has_exact_options(action: Action):
def exact_options_completion(action: Action):
'''
Given an argparse.Action instance, return a completion invocation that forces file completions, options completion,
or just that some value follow the action, if the action takes such an argument and was the last action on the
command line prior to the cursor.
Given an argparse.Action instance, return a completion invocation that forces file completions,
options completion, or just that some value follow the action, if the action takes such an
argument and was the last action on the command line prior to the cursor.
Otherwise, return an empty string.
'''
@@ -80,8 +83,9 @@ def exact_options_completion(action: Action):
def dedent_strip_as_tuple(string: str):
'''
Dedent a string, then strip it to avoid requiring your first line to have content, then return a tuple of the string.
Makes it easier to write multiline strings for completions when you join them with a tuple.
Dedent a string, then strip it to avoid requiring your first line to have content, then return a
tuple of the string. Makes it easier to write multiline strings for completions when you join
them with a tuple.
'''
return (dedent(string).strip('\n'),)
@@ -95,7 +99,10 @@ def fish_completion():
unused_global_parser,
action_parsers,
global_plus_action_parser,
) = borgmatic.commands.arguments.make_parsers()
) = borgmatic.commands.arguments.make_parsers(
schema=borgmatic.config.validate.load_schema(borgmatic.config.validate.schema_filename()),
unparsed_arguments=(),
)
all_action_parsers = ' '.join(action for action in action_parsers.choices.keys())

View File

@@ -0,0 +1,13 @@
def variants(flag_name):
'''
Given a flag name as a string, yield it and any variations that should be complete-able as well.
For instance, for a string like "--foo[0].bar", yield "--foo[0].bar", "--foo[1].bar", ...,
"--foo[9].bar".
'''
if '[0]' in flag_name:
for index in range(0, 10):
yield flag_name.replace('[0]', f'[{index}]')
return
yield flag_name
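For instance, the completion variants for an indexed flag expand like this (the function body below is the same logic as above, reproduced so the example is self-contained):

```python
def variants(flag_name):
    # Expand a "[0]" flag name into indices 0 through 9 for shell completion;
    # non-indexed flags are yielded unchanged.
    if '[0]' in flag_name:
        for index in range(0, 10):
            yield flag_name.replace('[0]', f'[{index}]')
        return
    yield flag_name

print(list(variants('--foo[0].bar'))[:3])  # ['--foo[0].bar', '--foo[1].bar', '--foo[2].bar']
print(list(variants('--verbosity')))       # ['--verbosity']
```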

View File

@@ -0,0 +1,176 @@
import io
import re
import ruamel.yaml
import borgmatic.config.schema
LIST_INDEX_KEY_PATTERN = re.compile(r'^(?P<list_name>[a-zA-Z-]+)\[(?P<index>\d+)\]$')
def set_values(config, keys, value):
'''
Given a configuration dict, a sequence of parsed key strings, and a string value, descend into
the configuration hierarchy based on the given keys and set the value into the right place.
For example, consider these keys:
('foo', 'bar', 'baz')
This looks up "foo" in the given configuration dict. And within that, it looks up "bar". And
then within that, it looks up "baz" and sets it to the given value. Another example:
('mylist[0]', 'foo')
This looks for the zeroth element of "mylist" in the given configuration. And within that, it
looks up "foo" and sets it to the given value.
'''
if not keys:
return
first_key = keys[0]
# Support "mylist[0]" list index syntax.
match = LIST_INDEX_KEY_PATTERN.match(first_key)
if match:
list_key = match.group('list_name')
list_index = int(match.group('index'))
try:
if len(keys) == 1:
config[list_key][list_index] = value
return
if list_key not in config:
config[list_key] = []
set_values(config[list_key][list_index], keys[1:], value)
except (IndexError, KeyError):
raise ValueError(f'Argument list index {first_key} is out of range')
return
if len(keys) == 1:
config[first_key] = value
return
if first_key not in config:
config[first_key] = {}
set_values(config[first_key], keys[1:], value)
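Here's the descent behavior from the docstring in action. The function below is a condensed copy of `set_values()` above (with the character-class typo corrected), included so the example runs standalone:

```python
import re

LIST_INDEX_KEY_PATTERN = re.compile(r'^(?P<list_name>[a-zA-Z-]+)\[(?P<index>\d+)\]$')

def set_values(config, keys, value):
    # Condensed version of the function above: walk dotted keys, handling
    # "name[0]"-style list indices along the way.
    if not keys:
        return
    first_key = keys[0]
    match = LIST_INDEX_KEY_PATTERN.match(first_key)
    if match:
        list_key = match.group('list_name')
        list_index = int(match.group('index'))
        try:
            if len(keys) == 1:
                config[list_key][list_index] = value
                return
            set_values(config[list_key][list_index], keys[1:], value)
        except (IndexError, KeyError):
            raise ValueError(f'Argument list index {first_key} is out of range')
        return
    if len(keys) == 1:
        config[first_key] = value
        return
    config.setdefault(first_key, {})
    set_values(config[first_key], keys[1:], value)

config = {'mylist': [{'foo': 1}]}
set_values(config, ('mylist[0]', 'foo'), 2)
set_values(config, ('new', 'nested'), 'value')
print(config)  # {'mylist': [{'foo': 2}], 'new': {'nested': 'value'}}
```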
def type_for_option(schema, option_keys):
'''
Given a configuration schema dict and a sequence of keys identifying a potentially nested
option, e.g. ('extra_borg_options', 'create'), return the schema type of that option as a
string.
Return None if the option or its type cannot be found in the schema.
'''
option_schema = schema
for key in option_keys:
# Support "name[0]"-style list index syntax.
match = LIST_INDEX_KEY_PATTERN.match(key)
properties = borgmatic.config.schema.get_properties(option_schema)
try:
if match:
option_schema = properties[match.group('list_name')]['items']
else:
option_schema = properties[key]
except KeyError:
return None
try:
return option_schema['type']
except KeyError:
return None
def convert_value_type(value, option_type):
'''
Given a string value and its schema type as a string, determine its logical type (string,
boolean, integer, etc.), and return it converted to that type.
If the destination option type is a string, then leave the value as-is so that special
characters in it don't get interpreted as YAML during conversion.
And if the source value isn't a string, return it as-is.
Raise ruamel.yaml.error.YAMLError if there's a parse issue with the YAML.
Raise ValueError if the parsed value doesn't match the option type.
'''
if not isinstance(value, str):
return value
if option_type == 'string':
return value
try:
parsed_value = ruamel.yaml.YAML(typ='safe').load(io.StringIO(value))
except ruamel.yaml.error.YAMLError as error:
raise ValueError(f'Argument value "{value}" is invalid: {error.problem}')
if not isinstance(parsed_value, borgmatic.config.schema.parse_type(option_type)):
raise ValueError(f'Argument value "{value}" is not of the expected type: {option_type}')
return parsed_value
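The conversion above can be approximated with the standard library to show the parse-then-type-check idea. This sketch substitutes `json.loads` for the safe YAML load (JSON is a subset of YAML, so it covers the common cases), and the type mapping is illustrative rather than the real `parse_type()`:

```python
import json

# Illustrative stand-in for borgmatic.config.schema.parse_type().
PYTHON_TYPES = {'boolean': bool, 'integer': int, 'number': (int, float), 'array': list, 'object': dict}

def convert_value_type(value, option_type):
    # Strings pass through untouched; everything else is parsed and checked
    # against the schema type, as in the function above.
    if not isinstance(value, str) or option_type == 'string':
        return value
    parsed = json.loads(value)  # stand-in for the ruamel.yaml safe load
    if not isinstance(parsed, PYTHON_TYPES[option_type]):
        raise ValueError(f'Argument value "{value}" is not of the expected type: {option_type}')
    return parsed

print(convert_value_type('[1, 2, 3]', 'array'))  # [1, 2, 3]
print(convert_value_type('true', 'boolean'))     # True
print(convert_value_type('5 * 4', 'string'))     # 5 * 4
```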
def prepare_arguments_for_config(global_arguments, schema):
'''
Given global arguments as an argparse.Namespace and a configuration schema dict, parse each
argument that corresponds to an option in the schema and return a sequence of tuples (keys,
values) for that option, where keys is a sequence of strings. For instance, given the following
arguments:
argparse.Namespace(**{'my_option.sub_option': 'value1', 'other_option': 'value2'})
... return this:
(
(('my_option', 'sub_option'), 'value1'),
(('other_option',), 'value2'),
)
'''
prepared_values = []
for argument_name, value in global_arguments.__dict__.items():
if value is None:
continue
keys = tuple(argument_name.split('.'))
option_type = type_for_option(schema, keys)
# The argument doesn't correspond to any option in the schema, so ignore it. It's
# probably a flag that borgmatic has on the command-line but not in configuration.
if option_type is None:
continue
prepared_values.append(
(
keys,
convert_value_type(value, option_type),
)
)
return tuple(prepared_values)
def apply_arguments_to_config(config, schema, arguments):
'''
Given a configuration dict, a corresponding configuration schema dict, and arguments as a dict
from action name to argparse.Namespace, set those given argument values into their corresponding
configuration options in the configuration dict.
This supports argument flags of the form "--foo.bar.baz" where each dotted component is a nested
configuration object. Additionally, flags like "--foo.bar[0].baz" are supported to update a list
element in the configuration.
'''
for action_arguments in arguments.values():
for keys, value in prepare_arguments_for_config(action_arguments, schema):
set_values(config, keys, value)
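Putting the pieces together, a simplified sketch of how dotted flag names end up as nested configuration. This hypothetical `set_values` omits the real version's `"[0]"`-style list-index handling:

```python
def set_values(config, keys, value):
    # Walk (or create) nested dicts for all but the last key, then set the
    # final key. Simplified: the real borgmatic version also updates list
    # elements addressed via "[0]"-style indices.
    if len(keys) == 1:
        config[keys[0]] = value
        return
    config.setdefault(keys[0], {})
    set_values(config[keys[0]], keys[1:], value)

config = {}
arguments = {'my_option.sub_option': 'value1', 'other_option': 'value2'}
for argument_name, value in arguments.items():
    set_values(config, tuple(argument_name.split('.')), value)
# config == {'my_option': {'sub_option': 'value1'}, 'other_option': 'value2'}
```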


@@ -5,6 +5,7 @@ import re
import ruamel.yaml
import borgmatic.config.schema
from borgmatic.config import load, normalize
INDENT = 4
@@ -21,45 +22,59 @@ def insert_newline_before_comment(config, field_name):
)
def get_properties(schema):
'''
Given a schema dict, return its properties. But if it's got sub-schemas with multiple different
potential properties, return their merged properties instead.
'''
if 'oneOf' in schema:
return dict(
collections.ChainMap(*[sub_schema['properties'] for sub_schema in schema['oneOf']])
)
return schema['properties']
SCALAR_SCHEMA_TYPES = {'string', 'boolean', 'integer', 'number'}
def schema_to_sample_configuration(schema, level=0, parent_is_sequence=False):
def schema_to_sample_configuration(schema, source_config=None, level=0, parent_is_sequence=False):
'''
Given a loaded configuration schema, generate and return sample config for it. Include comments
for each option based on the schema "description".
Given a loaded configuration schema and a source configuration, generate and return sample
config for the schema. Include comments for each option based on the schema "description".
If a source config is given, walk it alongside the given schema so that both can be taken into
account when commenting out particular options in add_comments_to_configuration_object().
'''
schema_type = schema.get('type')
example = schema.get('example')
if example is not None:
return example
if schema_type == 'array' or (isinstance(schema_type, list) and 'array' in schema_type):
if borgmatic.config.schema.compare_types(schema_type, {'array'}):
config = ruamel.yaml.comments.CommentedSeq(
[schema_to_sample_configuration(schema['items'], level, parent_is_sequence=True)]
example
if borgmatic.config.schema.compare_types(
schema['items'].get('type'), SCALAR_SCHEMA_TYPES
)
else [
schema_to_sample_configuration(
schema['items'], source_config, level, parent_is_sequence=True
)
]
)
add_comments_to_configuration_sequence(config, schema, indent=(level * INDENT))
elif schema_type == 'object' or (isinstance(schema_type, list) and 'object' in schema_type):
config = ruamel.yaml.comments.CommentedMap(
[
(field_name, schema_to_sample_configuration(sub_schema, level + 1))
for field_name, sub_schema in get_properties(schema).items()
]
elif borgmatic.config.schema.compare_types(schema_type, {'object'}):
if source_config and isinstance(source_config, list) and isinstance(source_config[0], dict):
source_config = dict(collections.ChainMap(*source_config))
config = (
ruamel.yaml.comments.CommentedMap(
[
(
field_name,
schema_to_sample_configuration(
sub_schema, (source_config or {}).get(field_name, {}), level + 1
),
)
for field_name, sub_schema in borgmatic.config.schema.get_properties(
schema
).items()
]
)
or example
)
indent = (level * INDENT) + (SEQUENCE_INDENT if parent_is_sequence else 0)
add_comments_to_configuration_object(
config, schema, indent=indent, skip_first=parent_is_sequence
config, schema, source_config, indent=indent, skip_first=parent_is_sequence
)
elif borgmatic.config.schema.compare_types(schema_type, SCALAR_SCHEMA_TYPES, match=all):
return example
else:
raise ValueError(f'Schema at level {level} is unsupported: {schema}')
@@ -164,7 +179,7 @@ def add_comments_to_configuration_sequence(config, schema, indent=0):
return
for field_name in config[0].keys():
field_schema = get_properties(schema['items']).get(field_name, {})
field_schema = borgmatic.config.schema.get_properties(schema['items']).get(field_name, {})
description = field_schema.get('description')
# No description to use? Skip it.
@@ -178,26 +193,35 @@ def add_comments_to_configuration_sequence(config, schema, indent=0):
return
REQUIRED_KEYS = {'source_directories', 'repositories', 'keep_daily'}
DEFAULT_KEYS = {'source_directories', 'repositories', 'keep_daily'}
COMMENTED_OUT_SENTINEL = 'COMMENT_OUT'
def add_comments_to_configuration_object(config, schema, indent=0, skip_first=False):
def add_comments_to_configuration_object(
config, schema, source_config=None, indent=0, skip_first=False
):
'''
Using descriptions from a schema as a source, add those descriptions as comments to the given
config mapping, before each field. Indent the comment the given number of characters.
configuration dict, putting them before each field. Indent the comment the given number of
characters.
Add a sentinel for commenting out options that are neither in DEFAULT_KEYS nor in the given
source configuration dict. The idea is that any options used in the source configuration should
stay active in the generated configuration.
'''
for index, field_name in enumerate(config.keys()):
if skip_first and index == 0:
continue
field_schema = get_properties(schema).get(field_name, {})
field_schema = borgmatic.config.schema.get_properties(schema).get(field_name, {})
description = field_schema.get('description', '').strip()
# If this is an optional key, add an indicator to the comment flagging it to be commented
# If this isn't a default key, add an indicator to the comment flagging it to be commented
# out from the sample configuration. This sentinel is consumed by downstream processing that
# does the actual commenting out.
if field_name not in REQUIRED_KEYS:
if field_name not in DEFAULT_KEYS and (
source_config is None or field_name not in source_config
):
description = (
'\n'.join((description, COMMENTED_OUT_SENTINEL))
if description
@@ -217,21 +241,6 @@ def add_comments_to_configuration_object(config, schema, indent=0, skip_first=Fa
RUAMEL_YAML_COMMENTS_INDEX = 1
def remove_commented_out_sentinel(config, field_name):
'''
Given a configuration CommentedMap and a top-level field name in it, remove any "commented out"
sentinel found at the end of its YAML comments. This prevents the given field name from getting
commented out by downstream processing that consumes the sentinel.
'''
try:
last_comment_value = config.ca.items[field_name][RUAMEL_YAML_COMMENTS_INDEX][-1].value
except KeyError:
return
if last_comment_value == f'# {COMMENTED_OUT_SENTINEL}\n':
config.ca.items[field_name][RUAMEL_YAML_COMMENTS_INDEX].pop()
def merge_source_configuration_into_destination(destination_config, source_config):
'''
Deep merge the given source configuration dict into the destination configuration CommentedMap,
@@ -246,12 +255,6 @@ def merge_source_configuration_into_destination(destination_config, source_confi
return source_config
for field_name, source_value in source_config.items():
# Since this key/value is from the source configuration, leave it uncommented and remove any
# sentinel that would cause it to get commented out.
remove_commented_out_sentinel(
ruamel.yaml.comments.CommentedMap(destination_config), field_name
)
# This is a mapping. Recurse for this key/value.
if isinstance(source_value, collections.abc.Mapping):
destination_config[field_name] = merge_source_configuration_into_destination(
@@ -297,7 +300,7 @@ def generate_sample_configuration(
normalize.normalize(source_filename, source_config)
destination_config = merge_source_configuration_into_destination(
schema_to_sample_configuration(schema), source_config
schema_to_sample_configuration(schema, source_config), source_config
)
if dry_run:


@@ -69,7 +69,7 @@ def include_configuration(loader, filename_node, include_directory, config_paths
]
raise ValueError(
'!include value is not supported; use a single filename or a list of filenames'
'The value given for the !include tag is invalid; use a single filename or a list of filenames instead'
)


@@ -58,6 +58,90 @@ def normalize_sections(config_filename, config):
return []
def make_command_hook_deprecation_log(config_filename, option_name): # pragma: no cover
'''
Given a configuration filename and the name of a configuration option, return a deprecation
warning log for it.
'''
return logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: {option_name} is deprecated and support will be removed from a future release. Use commands: instead.',
)
)
def normalize_commands(config_filename, config):
'''
Given a configuration filename and a configuration dict, transform any "before_*"- and
"after_*"-style command hooks into "commands:".
'''
logs = []
# Normalize "before_actions" and "after_actions".
for preposition in ('before', 'after'):
option_name = f'{preposition}_actions'
commands = config.pop(option_name, None)
if commands:
logs.append(make_command_hook_deprecation_log(config_filename, option_name))
config.setdefault('commands', []).append(
{
preposition: 'repository',
'run': commands,
}
)
# Normalize "before_backup", "before_prune", "after_backup", "after_prune", etc.
for action_name in ('create', 'prune', 'compact', 'check', 'extract'):
for preposition in ('before', 'after'):
option_name = f'{preposition}_{"backup" if action_name == "create" else action_name}'
commands = config.pop(option_name, None)
if not commands:
continue
logs.append(make_command_hook_deprecation_log(config_filename, option_name))
config.setdefault('commands', []).append(
{
preposition: 'action',
'when': [action_name],
'run': commands,
}
)
# Normalize "on_error".
commands = config.pop('on_error', None)
if commands:
logs.append(make_command_hook_deprecation_log(config_filename, 'on_error'))
config.setdefault('commands', []).append(
{
'after': 'error',
'when': ['create', 'prune', 'compact', 'check'],
'run': commands,
}
)
# Normalize "before_everything" and "after_everything".
for preposition in ('before', 'after'):
option_name = f'{preposition}_everything'
commands = config.pop(option_name, None)
if commands:
logs.append(make_command_hook_deprecation_log(config_filename, option_name))
config.setdefault('commands', []).append(
{
preposition: 'everything',
'when': ['create'],
'run': commands,
}
)
return logs
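As a concrete illustration of the normalization above, a minimal sketch covering just the deprecated "before_backup" case (logging omitted):

```python
def normalize_before_backup(config):
    # Move a deprecated "before_backup" command list into the newer
    # "commands:" structure, mirroring the "create" action branch above.
    commands = config.pop('before_backup', None)
    if commands:
        config.setdefault('commands', []).append(
            {
                'before': 'action',
                'when': ['create'],
                'run': commands,
            }
        )
    return config

config = normalize_before_backup({'before_backup': ['echo start']})
# config == {'commands': [{'before': 'action', 'when': ['create'], 'run': ['echo start']}]}
```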
def normalize(config_filename, config):
'''
Given a configuration filename and a configuration dict of its loaded contents, apply particular
@@ -67,6 +151,7 @@ def normalize(config_filename, config):
Raise ValueError if the configuration cannot be normalized.
'''
logs = normalize_sections(config_filename, config)
logs += normalize_commands(config_filename, config)
if config.get('borgmatic_source_directory'):
logs.append(
@@ -241,7 +326,11 @@ def normalize(config_filename, config):
config['repositories'] = []
for repository_dict in repositories:
repository_path = repository_dict['path']
repository_path = repository_dict.get('path')
if repository_path is None:
continue
if '~' in repository_path:
logs.append(
logging.makeLogRecord(


@@ -1,7 +1,10 @@
import io
import logging
import ruamel.yaml
logger = logging.getLogger(__name__)
def set_values(config, keys, value):
'''
@@ -134,6 +137,11 @@ def apply_overrides(config, schema, raw_overrides):
'''
overrides = parse_overrides(raw_overrides, schema)
if overrides:
logger.warning(
"The --override flag is deprecated and will be removed from a future release. Instead, use a command-line flag corresponding to the configuration option you'd like to set."
)
for keys, value in overrides:
set_values(config, keys, value)
set_values(config, strip_section_names(keys), value)


@@ -76,14 +76,13 @@ class Runtime_directory:
automatically gets cleaned up as necessary.
'''
def __init__(self, config, log_prefix):
def __init__(self, config):
'''
Given a configuration dict and a log prefix, determine the borgmatic runtime directory,
creating a secure, temporary directory within it if necessary. Defaults to
$XDG_RUNTIME_DIR/./borgmatic or $RUNTIME_DIRECTORY/./borgmatic or
$TMPDIR/borgmatic-[random]/./borgmatic or $TEMP/borgmatic-[random]/./borgmatic or
/tmp/borgmatic-[random]/./borgmatic where "[random]" is a randomly generated string intended
to avoid path collisions.
Given a configuration dict, determine the borgmatic runtime directory, creating a secure,
temporary directory within it if necessary. Defaults to $XDG_RUNTIME_DIR/./borgmatic or
$RUNTIME_DIRECTORY/./borgmatic or $TMPDIR/borgmatic-[random]/./borgmatic or
$TEMP/borgmatic-[random]/./borgmatic or /tmp/borgmatic-[random]/./borgmatic where "[random]"
is a randomly generated string intended to avoid path collisions.
If XDG_RUNTIME_DIR or RUNTIME_DIRECTORY is set and already ends in "/borgmatic", then don't
tack on a second "/borgmatic" path component.
@@ -127,7 +126,7 @@ class Runtime_directory:
)
os.makedirs(self.runtime_path, mode=0o700, exist_ok=True)
logger.debug(f'{log_prefix}: Using runtime directory {os.path.normpath(self.runtime_path)}')
logger.debug(f'Using runtime directory {os.path.normpath(self.runtime_path)}')
def __enter__(self):
'''
@@ -135,7 +134,7 @@ class Runtime_directory:
'''
return self.runtime_path
def __exit__(self, exception, value, traceback):
def __exit__(self, exception_type, exception, traceback):
'''
Delete any temporary directory that was created as part of initialization.
'''


@@ -0,0 +1,72 @@
import decimal
import itertools
def get_properties(schema):
'''
Given a schema dict, return its properties. But if it's got sub-schemas with multiple different
potential properties, return their merged properties instead (interleaved so the first
properties of each sub-schema come first). The idea is that the user should see all possible
options even if they're not all possible together.
'''
if 'oneOf' in schema:
return dict(
item
for item in itertools.chain(
*itertools.zip_longest(
*[sub_schema['properties'].items() for sub_schema in schema['oneOf']]
)
)
if item is not None
)
return schema.get('properties', {})
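The interleaving trick is easier to see in isolation. This is a condensed, runnable restatement of the "oneOf" branch above: zip_longest pairs up the nth property of each sub-schema, chain flattens the pairs, and dict() drops the None padding while de-duplicating keys:

```python
import itertools

def merge_one_of_properties(sub_schemas):
    # Interleave properties so the first property of every alternative
    # appears before any second ones, then discard zip_longest's None fill.
    return dict(
        item
        for item in itertools.chain(
            *itertools.zip_longest(
                *[sub_schema['properties'].items() for sub_schema in sub_schemas]
            )
        )
        if item is not None
    )

merged = merge_one_of_properties(
    [
        {'properties': {'keyfile': {}, 'passphrase': {}}},
        {'properties': {'token': {}}},
    ]
)
# list(merged) == ['keyfile', 'token', 'passphrase']
```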
SCHEMA_TYPE_TO_PYTHON_TYPE = {
'array': list,
'boolean': bool,
'integer': int,
'number': decimal.Decimal,
'object': dict,
'string': str,
}
def parse_type(schema_type, **overrides):
'''
Given a schema type as a string, return the corresponding Python type.
If any overrides are given in the form of a schema type string to a Python type, then override
the default type mapping with them.
Raise ValueError if the schema type is unknown.
'''
try:
return dict(
SCHEMA_TYPE_TO_PYTHON_TYPE,
**overrides,
)[schema_type]
except KeyError:
raise ValueError(f'Unknown type in configuration schema: {schema_type}')
def compare_types(schema_type, target_types, match=any):
'''
Given a schema type as a string or a list of strings (representing multiple types) and a set of
target type strings, return whether the schema type matches the target types.
If the schema type is a list of strings, use the given match function (such as any or all) to
compare its elements against the target types. For instance, if match is given as all, then every
element of the schema_type list must be in the target types.
'''
if isinstance(schema_type, list):
if match(element_schema_type in target_types for element_schema_type in schema_type):
return True
return False
if schema_type in target_types:
return True
return False
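A condensed restatement of compare_types, showing how the match function changes the outcome for multi-type schemas like `['integer', 'null']`:

```python
def compare_types(schema_type, target_types, match=any):
    # For a list schema type, apply the match function element-wise;
    # for a plain string, test simple set membership.
    if isinstance(schema_type, list):
        return match(element in target_types for element in schema_type)
    return schema_type in target_types

assert compare_types('array', {'array'})
assert compare_types(['integer', 'null'], {'integer', 'string'})           # any element matches
assert not compare_types(['integer', 'null'], {'integer', 'string'}, all)  # but not all do
```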

File diff suppressed because it is too large.


@@ -4,7 +4,7 @@ import os
import jsonschema
import ruamel.yaml
import borgmatic.config
import borgmatic.config.arguments
from borgmatic.config import constants, environment, load, normalize, override
@@ -21,6 +21,18 @@ def schema_filename():
return schema_path
def load_schema(schema_path): # pragma: no cover
'''
Given a schema filename path, load the schema and return it as a dict.
Raise Validation_error if the schema could not be parsed.
'''
try:
return load.load_configuration(schema_path)
except (ruamel.yaml.error.YAMLError, RecursionError) as error:
raise Validation_error(schema_path, (str(error),))
def format_json_error_path_element(path_element):
'''
Given a path element into a JSON data structure, format it for display as a string.
@@ -84,12 +96,17 @@ def apply_logical_validation(config_filename, parsed_configuration):
)
def parse_configuration(config_filename, schema_filename, overrides=None, resolve_env=True):
def parse_configuration(
config_filename, schema_filename, arguments, overrides=None, resolve_env=True
):
'''
Given the path to a config filename in YAML format, the path to a schema filename in a YAML
rendition of JSON Schema format, a sequence of configuration file override strings in the form
of "option.suboption=value", return the parsed configuration as a data structure of nested dicts
and lists corresponding to the schema. Example return value:
rendition of JSON Schema format, arguments as dict from action name to argparse.Namespace, a
sequence of configuration file override strings in the form of "option.suboption=value", and
whether to resolve environment variables, return the parsed configuration as a data structure of
nested dicts and lists corresponding to the schema. Example return value:
{
'source_directories': ['/home', '/etc'],
@@ -112,6 +129,7 @@ def parse_configuration(config_filename, schema_filename, overrides=None, resolv
except (ruamel.yaml.error.YAMLError, RecursionError) as error:
raise Validation_error(config_filename, (str(error),))
borgmatic.config.arguments.apply_arguments_to_config(config, schema, arguments)
override.apply_overrides(config, schema, overrides)
constants.apply_constants(config, config.get('constants') if config else {})
@@ -124,6 +142,7 @@ def parse_configuration(config_filename, schema_filename, overrides=None, resolv
validator = jsonschema.Draft7Validator(schema)
except AttributeError: # pragma: no cover
validator = jsonschema.Draft4Validator(schema)
validation_errors = tuple(validator.iter_errors(config))
if validation_errors:
@@ -136,16 +155,22 @@ def parse_configuration(config_filename, schema_filename, overrides=None, resolv
return config, config_paths, logs
def normalize_repository_path(repository):
def normalize_repository_path(repository, base=None):
'''
Given a repository path, return the absolute path of it (for local repositories).
Optionally, use a base path for resolving relative paths, e.g. to the configured working directory.
'''
# A colon in the repository could mean that it's either a file:// URL or a remote repository.
# If it's a remote repository, we don't want to normalize it. If it's a file:// URL, we do.
if ':' not in repository:
return os.path.abspath(repository)
return (
os.path.abspath(os.path.join(base, repository)) if base else os.path.abspath(repository)
)
elif repository.startswith('file://'):
return os.path.abspath(repository.partition('file://')[-1])
local_path = repository.partition('file://')[-1]
return (
os.path.abspath(os.path.join(base, local_path)) if base else os.path.abspath(local_path)
)
else:
return repository
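Untangling the diff above, the new function's logic can be restated as a runnable sketch (POSIX-style paths assumed):

```python
import os

def normalize_repository_path(repository, base=None):
    # Local paths (no colon) and file:// URLs are resolved to absolute
    # paths, optionally relative to a base directory; remote
    # "user@host:repo" paths are returned untouched.
    if ':' not in repository:
        return (
            os.path.abspath(os.path.join(base, repository))
            if base
            else os.path.abspath(repository)
        )
    if repository.startswith('file://'):
        local_path = repository.partition('file://')[-1]
        return (
            os.path.abspath(os.path.join(base, local_path))
            if base
            else os.path.abspath(local_path)
        )
    return repository
```

For instance, with a configured working directory of `/working`, the relative path `repo` resolves to `/working/repo`, while `user@host:repo` passes through unchanged.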


@@ -1,11 +1,12 @@
import collections
import enum
import logging
import os
import select
import subprocess
import textwrap
import borgmatic.logger
logger = logging.getLogger(__name__)
@@ -241,6 +242,9 @@ def mask_command_secrets(full_command):
MAX_LOGGED_COMMAND_LENGTH = 1000
PREFIXES_OF_ENVIRONMENT_VARIABLES_TO_LOG = ('BORG_', 'PG', 'MARIADB_', 'MYSQL_')
def log_command(full_command, input_file=None, output_file=None, environment=None):
'''
Log the given command (a sequence of command/argument strings), along with its input/output file
@@ -249,14 +253,21 @@ def log_command(full_command, input_file=None, output_file=None, environment=Non
logger.debug(
textwrap.shorten(
' '.join(
tuple(f'{key}=***' for key in (environment or {}).keys())
tuple(
f'{key}=***'
for key in (environment or {}).keys()
if any(
key.startswith(prefix)
for prefix in PREFIXES_OF_ENVIRONMENT_VARIABLES_TO_LOG
)
)
+ mask_command_secrets(full_command)
),
width=MAX_LOGGED_COMMAND_LENGTH,
placeholder=' ...',
)
+ (f" < {getattr(input_file, 'name', '')}" if input_file else '')
+ (f" > {getattr(output_file, 'name', '')}" if output_file else '')
+ (f" < {getattr(input_file, 'name', input_file)}" if input_file else '')
+ (f" > {getattr(output_file, 'name', output_file)}" if output_file else '')
)
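The environment-variable filter added above can be sketched on its own: only variables with recognized prefixes make it into the log line, and even those have their values masked:

```python
PREFIXES_OF_ENVIRONMENT_VARIABLES_TO_LOG = ('BORG_', 'PG', 'MARIADB_', 'MYSQL_')

def masked_environment_variables(environment):
    # Render only variables whose names start with a recognized prefix,
    # masking their values so secrets never land in the logs.
    return tuple(
        f'{key}=***'
        for key in (environment or {})
        if any(
            key.startswith(prefix)
            for prefix in PREFIXES_OF_ENVIRONMENT_VARIABLES_TO_LOG
        )
    )

masked_environment_variables({'BORG_PASSPHRASE': 'hunter2', 'HOME': '/root'})
# → ('BORG_PASSPHRASE=***',)
```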
@@ -272,7 +283,7 @@ def execute_command(
output_file=None,
input_file=None,
shell=False,
extra_environment=None,
environment=None,
working_directory=None,
borg_local_path=None,
borg_exit_codes=None,
@@ -282,18 +293,17 @@ def execute_command(
Execute the given command (a sequence of command/argument strings) and log its output at the
given log level. If an open output file object is given, then write stdout to the file and only
log stderr. If an open input file object is given, then read stdin from the file. If shell is
True, execute the command within a shell. If an extra environment dict is given, then use it to
augment the current environment, and pass the result into the command. If a working directory is
given, use that as the present working directory when running the command. If a Borg local path
is given, and the command matches it (regardless of arguments), treat exit code 1 as a warning
instead of an error. But if Borg exit codes are given as a sequence of exit code configuration
dicts, then use that configuration to decide what's an error and what's a warning. If run to
completion is False, then return the process for the command without executing it to completion.
True, execute the command within a shell. If an environment variables dict is given, then pass
it into the command. If a working directory is given, use that as the present working directory
when running the command. If a Borg local path is given, and the command matches it (regardless
of arguments), treat exit code 1 as a warning instead of an error. But if Borg exit codes are
given as a sequence of exit code configuration dicts, then use that configuration to decide
what's an error and what's a warning. If run to completion is False, then return the process for
the command without executing it to completion.
Raise subprocess.CalledProcessError if an error occurs while running the command.
'''
log_command(full_command, input_file, output_file, extra_environment)
environment = {**os.environ, **extra_environment} if extra_environment else None
log_command(full_command, input_file, output_file, environment)
do_not_capture = bool(output_file is DO_NOT_CAPTURE)
command = ' '.join(full_command) if shell else full_command
@@ -305,52 +315,58 @@ def execute_command(
shell=shell,
env=environment,
cwd=working_directory,
# Necessary for passing credentials via anonymous pipe.
close_fds=False,
)
if not run_to_completion:
return process
log_outputs(
(process,),
(input_file, output_file),
output_log_level,
borg_local_path,
borg_exit_codes,
)
with borgmatic.logger.Log_prefix(None): # Log command output without any prefix.
log_outputs(
(process,),
(input_file, output_file),
output_log_level,
borg_local_path,
borg_exit_codes,
)
def execute_command_and_capture_output(
full_command,
input_file=None,
capture_stderr=False,
shell=False,
extra_environment=None,
environment=None,
working_directory=None,
borg_local_path=None,
borg_exit_codes=None,
):
'''
Execute the given command (a sequence of command/argument strings), capturing and returning its
output (stdout). If capture stderr is True, then capture and return stderr in addition to
stdout. If shell is True, execute the command within a shell. If an extra environment dict is
given, then use it to augment the current environment, and pass the result into the command. If
a working directory is given, use that as the present working directory when running the
command. If a Borg local path is given, and the command matches it (regardless of arguments),
treat exit code 1 as a warning instead of an error. But if Borg exit codes are given as a
sequence of exit code configuration dicts, then use that configuration to decide what's an error
and what's a warning.
output (stdout). If an input file descriptor is given, then pipe it to the command's stdin. If
capture stderr is True, then capture and return stderr in addition to stdout. If shell is True,
execute the command within a shell. If an environment variables dict is given, then pass it into
the command. If a working directory is given, use that as the present working directory when
running the command. If a Borg local path is given, and the command matches it (regardless of
arguments), treat exit code 1 as a warning instead of an error. But if Borg exit codes are given
as a sequence of exit code configuration dicts, then use that configuration to decide what's an
error and what's a warning.
Raise subprocess.CalledProcessError if an error occurs while running the command.
'''
log_command(full_command, environment=extra_environment)
environment = {**os.environ, **extra_environment} if extra_environment else None
log_command(full_command, input_file, environment=environment)
command = ' '.join(full_command) if shell else full_command
try:
output = subprocess.check_output(
command,
stdin=input_file,
stderr=subprocess.STDOUT if capture_stderr else None,
shell=shell,
env=environment,
cwd=working_directory,
# Necessary for passing credentials via anonymous pipe.
close_fds=False,
)
except subprocess.CalledProcessError as error:
if (
@@ -370,7 +386,7 @@ def execute_command_with_processes(
output_file=None,
input_file=None,
shell=False,
extra_environment=None,
environment=None,
working_directory=None,
borg_local_path=None,
borg_exit_codes=None,
@@ -384,19 +400,17 @@ def execute_command_with_processes(
If an open output file object is given, then write stdout to the file and only log stderr. But
if output log level is None, instead suppress logging and return the captured output for (only)
the given command. If an open input file object is given, then read stdin from the file. If
shell is True, execute the command within a shell. If an extra environment dict is given, then
use it to augment the current environment, and pass the result into the command. If a working
directory is given, use that as the present working directory when running the command. If a
Borg local path is given, then for any matching command or process (regardless of arguments),
treat exit code 1 as a warning instead of an error. But if Borg exit codes are given as a
sequence of exit code configuration dicts, then use that configuration to decide what's an error
and what's a warning.
shell is True, execute the command within a shell. If an environment variables dict is given,
then pass it into the command. If a working directory is given, use that as the present working
directory when running the command. If a Borg local path is given, then for any matching command
or process (regardless of arguments), treat exit code 1 as a warning instead of an error. But if
Borg exit codes are given as a sequence of exit code configuration dicts, then use that
configuration to decide what's an error and what's a warning.
Raise subprocess.CalledProcessError if an error occurs while running the command or in the
upstream process.
'''
log_command(full_command, input_file, output_file, extra_environment)
environment = {**os.environ, **extra_environment} if extra_environment else None
log_command(full_command, input_file, output_file, environment)
do_not_capture = bool(output_file is DO_NOT_CAPTURE)
command = ' '.join(full_command) if shell else full_command
@@ -411,6 +425,8 @@ def execute_command_with_processes(
shell=shell,
env=environment,
cwd=working_directory,
# Necessary for passing credentials via anonymous pipe.
close_fds=False,
)
except (subprocess.CalledProcessError, OSError):
# Something has gone wrong. So vent each process' output buffer to prevent it from hanging.
@@ -421,13 +437,14 @@ def execute_command_with_processes(
process.kill()
raise
captured_outputs = log_outputs(
tuple(processes) + (command_process,),
(input_file, output_file),
output_log_level,
borg_local_path,
borg_exit_codes,
)
with borgmatic.logger.Log_prefix(None): # Log command output without any prefix.
captured_outputs = log_outputs(
tuple(processes) + (command_process,),
(input_file, output_file),
output_log_level,
borg_local_path,
borg_exit_codes,
)
if output_log_level is None:
return captured_outputs.get(command_process)


@@ -2,9 +2,11 @@ import logging
import os
import re
import shlex
import subprocess
import sys
import borgmatic.execute
import borgmatic.logger
logger = logging.getLogger(__name__)
@@ -12,7 +14,7 @@ logger = logging.getLogger(__name__)
SOFT_FAIL_EXIT_CODE = 75
def interpolate_context(config_filename, hook_description, command, context):
def interpolate_context(hook_description, command, context):
'''
Given a hook description, a single hook command, and a dict of context names/values, interpolate
the values by "{name}" into the command and return the result.
@@ -22,7 +24,7 @@ def interpolate_context(config_filename, hook_description, command, context):
for unsupported_variable in re.findall(r'{\w+}', command):
logger.warning(
f"{config_filename}: Variable '{unsupported_variable}' is not supported in {hook_description} hook"
f"Variable '{unsupported_variable}' is not supported in {hook_description} hook"
)
return command
@@ -30,71 +32,201 @@ def interpolate_context(config_filename, hook_description, command, context):
def make_environment(current_environment, sys_module=sys):
'''
Given the existing system environment as a map from environment variable name to value, return
(in the same form) any extra environment variables that should be used when running command
hooks.
Given the existing system environment as a map from environment variable name to value, return a
copy of it, augmented with any extra environment variables that should be used when running
command hooks.
'''
environment = dict(current_environment)
# Detect whether we're running within a PyInstaller bundle. If so, set or clear LD_LIBRARY_PATH
# based on the value of LD_LIBRARY_PATH_ORIG. This prevents library version information errors.
if getattr(sys_module, 'frozen', False) and hasattr(sys_module, '_MEIPASS'):
return {'LD_LIBRARY_PATH': current_environment.get('LD_LIBRARY_PATH_ORIG', '')}
environment['LD_LIBRARY_PATH'] = environment.get('LD_LIBRARY_PATH_ORIG', '')
return {}
return environment
def execute_hook(commands, umask, config_filename, description, dry_run, **context):
def filter_hooks(command_hooks, before=None, after=None, hook_name=None, action_names=None):
'''
Given a list of hook commands to execute, a umask to execute with (or None), a config filename,
a hook description, and whether this is a dry run, run the given commands. Or, don't run them
if this is a dry run.
Given a sequence of command hook dicts from configuration and one or more filters (before name,
after name, calling hook name, or a sequence of action names), filter down the command hooks to
just the ones that match the given filters.
'''
return tuple(
hook_config
for hook_config in command_hooks or ()
for config_action_names in (hook_config.get('when'),)
if before is None or hook_config.get('before') == before
if after is None or hook_config.get('after') == after
if action_names is None
or config_action_names is None
or set(config_action_names or ()).intersection(set(action_names))
)
def execute_hooks(command_hooks, umask, working_directory, dry_run, **context):
'''
Given a sequence of command hook dicts from configuration, a umask to execute with (or None), a
working directory to execute with, and whether this is a dry run, run the commands for each
hook. Or don't run them if this is a dry run.
The context contains optional values interpolated by name into the hook commands.
Raise ValueError if the umask cannot be parsed.
Raise ValueError if the umask cannot be parsed or a hook is invalid.
Raise subprocess.CalledProcessError if an error occurs in a hook.
'''
if not commands:
logger.debug(f'{config_filename}: No commands to run for {description} hook')
return
borgmatic.logger.add_custom_log_levels()
dry_run_label = ' (dry run; not actually running hooks)' if dry_run else ''
context['configuration_filename'] = config_filename
commands = [
interpolate_context(config_filename, description, command, context) for command in commands
]
for hook_config in command_hooks:
commands = hook_config.get('run')
if len(commands) == 1:
logger.info(f'{config_filename}: Running command for {description} hook{dry_run_label}')
else:
logger.info(
f'{config_filename}: Running {len(commands)} commands for {description} hook{dry_run_label}',
)
if 'before' in hook_config:
description = f'before {hook_config.get("before")}'
elif 'after' in hook_config:
description = f'after {hook_config.get("after")}'
else:
raise ValueError(f'Invalid hook configuration: {hook_config}')
if umask:
parsed_umask = int(str(umask), 8)
logger.debug(f'{config_filename}: Set hook umask to {oct(parsed_umask)}')
original_umask = os.umask(parsed_umask)
else:
original_umask = None
if not commands:
logger.debug(f'No commands to run for {description} hook')
continue
try:
for command in commands:
if dry_run:
continue
commands = [interpolate_context(description, command, context) for command in commands]
borgmatic.execute.execute_command(
[command],
output_log_level=(logging.ERROR if description == 'on-error' else logging.WARNING),
shell=True,
extra_environment=make_environment(os.environ),
if len(commands) == 1:
logger.info(f'Running {description} command hook{dry_run_label}')
else:
logger.info(
f'Running {len(commands)} commands for {description} hook{dry_run_label}',
)
finally:
if original_umask:
os.umask(original_umask)
if umask:
parsed_umask = int(str(umask), 8)
logger.debug(f'Setting hook umask to {oct(parsed_umask)}')
original_umask = os.umask(parsed_umask)
else:
original_umask = None
try:
for command in commands:
if dry_run:
continue
borgmatic.execute.execute_command(
[command],
output_log_level=(
logging.ERROR if hook_config.get('after') == 'error' else logging.ANSWER
),
shell=True,
environment=make_environment(os.environ),
working_directory=working_directory,
)
finally:
if original_umask:
os.umask(original_umask)
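The umask handling above parses the configured value as an octal string, applies it for the duration of the hook commands, and restores the previous umask in a `finally` block. A minimal standalone sketch of that pattern (the helper name is hypothetical):

```python
import os

def run_with_umask(umask, function):
    # Parse a configured umask like '077' as octal, apply it while the
    # callable runs, then restore the previous umask even if it raises.
    original_umask = os.umask(int(str(umask), 8)) if umask else None
    try:
        return function()
    finally:
        if original_umask is not None:
            os.umask(original_umask)

# '077' in octal is 63 decimal, i.e. all group and other permissions masked.
print(int(str('077'), 8))  # 63
```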
class Before_after_hooks:
    '''
    A Python context manager for executing command hooks both before and after the wrapped code.

    Example use as a context manager:

        with borgmatic.hooks.command.Before_after_hooks(
            command_hooks=config.get('commands'),
            before_after='do_stuff',
            umask=config.get('umask'),
            dry_run=dry_run,
            hook_name='myhook',
        ):
            do()
            some()
            stuff()

    With that context manager in place, "before" command hooks execute before the wrapped code
    runs, and "after" command hooks execute after the wrapped code completes.
    '''
    def __init__(
        self,
        command_hooks,
        before_after,
        umask,
        working_directory,
        dry_run,
        hook_name=None,
        action_names=None,
        **context,
    ):
        '''
        Given a sequence of command hook configuration dicts, the before/after name, a umask to run
        commands with, a working directory to run commands with, a dry run flag, the name of the
        calling hook, a sequence of action names, and any context for the executed commands, save
        those data points for use below.
        '''
        self.command_hooks = command_hooks
        self.before_after = before_after
        self.umask = umask
        self.working_directory = working_directory
        self.dry_run = dry_run
        self.hook_name = hook_name
        self.action_names = action_names
        self.context = context

    def __enter__(self):
        '''
        Run the configured "before" command hooks that match the initialized data points.
        '''
        try:
            execute_hooks(
                borgmatic.hooks.command.filter_hooks(
                    self.command_hooks,
                    before=self.before_after,
                    hook_name=self.hook_name,
                    action_names=self.action_names,
                ),
                self.umask,
                self.working_directory,
                self.dry_run,
                **self.context,
            )
        except (OSError, subprocess.CalledProcessError) as error:
            if considered_soft_failure(error):
                return

            # Trigger the after hook manually, since raising here will prevent it from being run
            # otherwise.
            self.__exit__(None, None, None)

            raise ValueError(f'Error running before {self.before_after} hook: {error}')

    def __exit__(self, exception_type, exception, traceback):
        '''
        Run the configured "after" command hooks that match the initialized data points.
        '''
        try:
            execute_hooks(
                borgmatic.hooks.command.filter_hooks(
                    self.command_hooks,
                    after=self.before_after,
                    hook_name=self.hook_name,
                    action_names=self.action_names,
                ),
                self.umask,
                self.working_directory,
                self.dry_run,
                **self.context,
            )
        except (OSError, subprocess.CalledProcessError) as error:
            if considered_soft_failure(error):
                return

            raise ValueError(f'Error running after {self.before_after} hook: {error}')
def considered_soft_failure(error):
    '''
    Given an exception object, return whether the exception object represents a
    subprocess.CalledProcessError with a return code of SOFT_FAIL_EXIT_CODE. If so, log the soft
    failure and return True. Otherwise, return False.
    '''
    exit_code = getattr(error, 'returncode', None)

    if exit_code is None:
        return False

    if exit_code == SOFT_FAIL_EXIT_CODE:
        logger.info(
            f'Command hook exited with soft failure exit code ({SOFT_FAIL_EXIT_CODE}); skipping remaining repository actions',
        )
        return True

    return False
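The soft-failure check only inspects the exception's `returncode` attribute, so any non-subprocess exception falls through as a real error. A standalone sketch, assuming borgmatic's documented soft-failure exit code of 75:

```python
import subprocess

SOFT_FAIL_EXIT_CODE = 75  # borgmatic's documented soft-failure exit code

def considered_soft_failure(error):
    # A hook soft-fails only when it exits with the special exit code; any
    # other exit code or exception type is treated as a real error.
    return getattr(error, 'returncode', None) == SOFT_FAIL_EXIT_CODE

soft = subprocess.CalledProcessError(returncode=75, cmd=['hook.sh'])
hard = subprocess.CalledProcessError(returncode=1, cmd=['hook.sh'])

print(considered_soft_failure(soft))                # True
print(considered_soft_failure(hard))                # False
print(considered_soft_failure(ValueError('boom')))  # False
```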

import logging
import os
import re

logger = logging.getLogger(__name__)

SECRET_NAME_PATTERN = re.compile(r'^\w+$')
DEFAULT_SECRETS_DIRECTORY = '/run/secrets'


def load_credential(hook_config, config, credential_parameters):
    '''
    Given the hook configuration dict, the configuration dict, and a credential parameters tuple
    containing a secret name to load, read the secret from the corresponding container secrets file
    and return it.

    Raise ValueError if the credential parameters is not one element, the secret name is invalid,
    or the secret file cannot be read.
    '''
    try:
        (secret_name,) = credential_parameters
    except ValueError:
        name = ' '.join(credential_parameters)

        raise ValueError(f'Cannot load invalid secret name: "{name}"')

    if not SECRET_NAME_PATTERN.match(secret_name):
        raise ValueError(f'Cannot load invalid secret name: "{secret_name}"')

    try:
        with open(
            os.path.join(
                config.get('working_directory', ''),
                (hook_config or {}).get('secrets_directory', DEFAULT_SECRETS_DIRECTORY),
                secret_name,
            )
        ) as secret_file:
            return secret_file.read().rstrip(os.linesep)
    except (FileNotFoundError, OSError) as error:
        logger.warning(error)

        raise ValueError(f'Cannot load secret "{secret_name}" from file: {error.filename}')
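The read path above can be exercised end to end with a throwaway secrets directory (directory and secret name are hypothetical):

```python
import os
import re
import tempfile

SECRET_NAME_PATTERN = re.compile(r'^\w+$')

# Simulate a container runtime materializing a secret file, then read it back
# the way load_credential does: validate the name, join the paths, and strip
# the trailing newline.
secrets_directory = tempfile.mkdtemp()

with open(os.path.join(secrets_directory, 'borg_passphrase'), 'w') as file:
    file.write('hunter2\n')

secret_name = 'borg_passphrase'
assert SECRET_NAME_PATTERN.match(secret_name)  # rejects names like '../evil'

with open(os.path.join(secrets_directory, secret_name)) as secret_file:
    secret = secret_file.read().rstrip(os.linesep)

print(secret)  # hunter2
```

The `^\w+$` pattern is what prevents path traversal here: a name containing `/` or `..` never reaches the `open()` call.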

import logging
import os

logger = logging.getLogger(__name__)


def load_credential(hook_config, config, credential_parameters):
    '''
    Given the hook configuration dict, the configuration dict, and a credential parameters tuple
    containing a credential path to load, load the credential from file and return it.

    Raise ValueError if the credential parameters is not one element or the secret file cannot be
    read.
    '''
    try:
        (credential_path,) = credential_parameters
    except ValueError:
        name = ' '.join(credential_parameters)

        raise ValueError(f'Cannot load invalid credential: "{name}"')

    expanded_credential_path = os.path.expanduser(credential_path)

    try:
        with open(
            os.path.join(config.get('working_directory', ''), expanded_credential_path)
        ) as credential_file:
            return credential_file.read().rstrip(os.linesep)
    except (FileNotFoundError, OSError) as error:
        logger.warning(error)

        raise ValueError(f'Cannot load credential file: {error.filename}')
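One detail worth noting in the `open()` call above: on POSIX, `os.path.join` discards earlier components when a later component is absolute, so an absolute credential path wins over any configured working directory, while a relative one is resolved beneath it:

```python
import os

# An absolute credential path ignores the working directory entirely...
print(os.path.join('/working/dir', '/etc/borgmatic/passphrase'))
# ...while a relative one is resolved beneath it.
print(os.path.join('/working/dir', 'passphrase'))
# And with no working directory configured, joining with '' is a no-op.
print(os.path.join('', 'passphrase'))
```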

import logging
import os
import shlex

import borgmatic.execute

logger = logging.getLogger(__name__)


def load_credential(hook_config, config, credential_parameters):
    '''
    Given the hook configuration dict, the configuration dict, and a credential parameters tuple
    containing a KeePassXC database path and an attribute name to load, run keepassxc-cli to fetch
    the corresponding KeePassXC credential and return it.

    Raise ValueError if keepassxc-cli can't retrieve the credential.
    '''
    try:
        (database_path, attribute_name) = credential_parameters
    except ValueError: