Compare commits

...

207 Commits
1.9.6 ... main

Author SHA1 Message Date
e1fdfe4c2f Add credential hook directory expansion to NEWS (#422).
2025-03-24 13:00:38 -07:00
83a56a3fef Add directory expansion for file-based and KeePassXC credential hooks (#1042).
Reviewed-on: #1042
2025-03-24 19:57:18 +00:00
Nish_
4bca7bb198 add directory expansion for file-based and KeePassXC credentials
Signed-off-by: Nish_ <120EE0980@nitrkl.ac.in>
2025-03-24 21:04:55 +05:30
524ec6b3cb Add "extract" action fix to NEWS (#1037).
2025-03-21 15:43:05 -07:00
7904ffb641 Fix extracting from remote repositories with working_directory defined (#1037).
Reviewed-on: #1038
Reviewed-by: Dan Helfman <witten@torsion.org>
2025-03-21 22:40:18 +00:00
cd5ba81748 Fix docs: Crontabs aren't executable (#1039).
Reviewed-on: #1039
2025-03-21 21:32:38 +00:00
514ade6609 Fix inconsistent quotes in one documentation file (#790).
2025-03-21 14:27:40 -07:00
201469e2c2 Add "key import" action to NEWS (#345).
2025-03-21 14:26:01 -07:00
9ac2a2e286 Add key import action to import a copy of the repository key from backup (#345).
Reviewed-on: #1036
Reviewed-by: Dan Helfman <witten@torsion.org>
2025-03-21 21:22:50 +00:00
Benjamin Bock
a16d138afc Crontabs aren't executable 2025-03-21 21:58:02 +01:00
Benjamin Bock
81a3a99578 Fix extracting from remote repositories with working_directory defined 2025-03-21 21:34:46 +01:00
587d31de7c Run all command hooks respecting the "working_directory" option if configured (#790).
2025-03-21 10:53:06 -07:00
Nish_
8aaa5ba8a6 minor changes
Signed-off-by: Nish_ <120EE0980@nitrkl.ac.in>
2025-03-21 19:26:12 +05:30
Nish_
5525b467ef add key import command
Signed-off-by: Nish_ <120EE0980@nitrkl.ac.in>
2025-03-21 00:47:45 +05:30
c2409d9968 Remove the "dump_data_sources" command hook, as it doesn't really solve the use case and works differently than all the other command hooks (#790).
2025-03-20 11:13:37 -07:00
624a7de622 Document that "after" command hooks run in case of error, and make sure that happens in case of a "before" hook error (#790).
2025-03-20 10:57:39 -07:00
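The behavior this commit documents (and which the #790 NEWS entries further down restate) boils down to a try/finally. A minimal sketch of the idea, assuming a hypothetical run_command() hook runner rather than borgmatic's actual code:

import contextlib
import subprocess

def run_command(command):
    # Stand-in for borgmatic's hook executor; assumed for illustration.
    subprocess.run(command, shell=True, check=True)

@contextlib.contextmanager
def command_hooks(before_commands, after_commands):
    # Run the "before" hooks, then guarantee that the "after" hooks run even
    # if the wrapped action (or a "before" hook itself) raises an exception.
    try:
        for command in before_commands:
            run_command(command)
        yield
    finally:
        for command in after_commands:
            run_command(command)

# Usage: the "after" hook still runs even if create_archive() raises.
# with command_hooks(['echo before'], ['echo after']):
#     create_archive()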
c926f0bd5d Clarify documentation for dump_data_sources command hook (#790).
2025-03-17 10:31:34 -07:00
1d5713c4c5 Updated outdated schema comment referencing ~/.borgmatic path (#836).
2025-03-15 21:42:45 -07:00
f9612cc685 Add SQLite custom command option to NEWS (#836). 2025-03-15 21:37:23 -07:00
5742a1a2d9 Add custom command option for SQLite hook (#836).
Reviewed-on: #1027
2025-03-16 04:34:15 +00:00
Nish_
c84815bfb0 add custom dump and restore commands for sqlite hook
Signed-off-by: Nish_ <120EE0980@nitrkl.ac.in>
2025-03-16 09:07:49 +05:30
1c92d84e09 Add Borg 2 "prune --stats" flag change to NEWS (#1010).
2025-03-15 10:02:47 -07:00
1d94fb501f Conditionally pass --stats to prune based on Borg version (#1010).
Reviewed-on: #1026
2025-03-15 16:59:50 +00:00
Nish_
1b4c94ad1e Add feature toggle to pass --stats to prune on Borg 1, but not Borg 2
Signed-off-by: Nish_ <120EE0980@nitrkl.ac.in>
2025-03-15 09:56:14 +05:30
901e668c76 Document a database use case involving a temporary database client container (#1020).
2025-03-12 17:10:35 -07:00
bcb224a243 Claim another implemented ticket in NEWS (#821).
2025-03-12 14:31:13 -07:00
6b6e1e0336 Make the "configuration" command hook support "error" hooks and also pinging monitoring on failure (#790).
2025-03-12 14:13:29 -07:00
f5c9bc4fa9 Add a "not yet released" note on 2.0.0 in docs (#790).
2025-03-11 16:46:07 -07:00
cdd0e6f052 Fix incorrect kwarg in LVM hook (#790).
2025-03-11 14:42:25 -07:00
7bdbadbac2 Deprecate all "before_*", "after_*" and "on_error" command hooks in favor of more flexible "commands:" (#790).
Reviewed-on: #1019
2025-03-11 21:22:33 +00:00
d3413e0907 Documentation clarification (#1019). 2025-03-11 14:20:42 -07:00
8a20ee7304 Fix typo in documentation (#1019). 2025-03-11 14:08:53 -07:00
325f53c286 Context tweaks + mention configuration upgrade in command hook documentation (#1019). 2025-03-11 14:07:06 -07:00
b4d24798bf More command hook documentation updates (#1019). 2025-03-11 13:03:58 -07:00
7965eb9de3 Correctly handle errors in command hooks (#1019). 2025-03-11 11:36:28 -07:00
8817364e6d Documentation on command hooks (#1019). 2025-03-10 22:38:48 -07:00
965740c778 Update version of command hooks since they didn't get released in 1.9.14 (#1019). 2025-03-10 10:37:09 -07:00
2a0319f02f Merge branch 'main' into unified-command-hooks. 2025-03-10 10:35:36 -07:00
fbdb09b87d Bump version for release.
2025-03-10 10:17:36 -07:00
bec5a0c0ca Fix end-to-end tests for Btrfs (#1023).
2025-03-10 10:15:23 -07:00
4ee7f72696 Fix an error in the Btrfs hook when attempting to snapshot a read-only subvolume (#1023).
2025-03-09 23:04:55 -07:00
9941d7dc57 More docs and command hook context tweaks (#1019). 2025-03-09 17:01:46 -07:00
ec88bb2e9c Merge branch 'main' into unified-command-hooks. 2025-03-09 13:37:17 -07:00
68b6d01071 Fix a regression in which the "exclude_patterns" option didn't expand "~" (#1021).
2025-03-09 13:35:22 -07:00
b52339652f Initial command hooks documentation work (#1019). 2025-03-09 09:57:13 -07:00
4fd22b2df0 Merge branch 'main' into unified-command-hooks. 2025-03-08 21:02:04 -08:00
86b138e73b Clarify command hook documentation.
2025-03-08 21:00:58 -08:00
5ab766b51c Add a few more missing tests (#1019). 2025-03-08 20:55:13 -08:00
45c114973c Add missing test coverage for new/changed code (#1019). 2025-03-08 18:31:16 -08:00
6a96a78cf1 Fix existing tests (#1019). 2025-03-07 22:58:25 -08:00
e06c6740f2 Switch to context manager for running "dump_data_sources" before/after hooks (#790). 2025-03-07 10:33:39 -08:00
10bd1c7b41 Remove restore_data_source_dump as a command hook for now (#790). 2025-03-06 22:53:19 -08:00
d4f48a3a9e Initial work on unified command hooks (#790). 2025-03-06 11:23:24 -08:00
c76a108422 Link to Zabbix documentation from NEWS. 2025-03-06 10:37:00 -08:00
eb5dc128bf Fix incorrect test name (#1017).
2025-03-06 10:34:28 -08:00
1d486d024b Fix a regression in which some MariaDB/MySQL passwords were not escaped correctly (#1017).
2025-03-06 10:32:38 -08:00
5a8f27d75c Add single quotes around the MariaDB password (#1017).
Reviewed-on: #1017
2025-03-06 18:01:43 +00:00
a926b413bc Updating automated test, and fixing linting errors. 2025-03-06 09:00:33 -03:30
18ffd96d62 Add single quotes around the password.
When the DB password uses some special characters, the
defaults-extra-file can be incorrect. In the case of a password with
the # symbol, anything after that is considered a comment. The single
quotes around the password rectify this.
2025-03-05 22:51:41 -03:30
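A minimal sketch of the quoting fix described above; the helper name and temp-file handling are illustrative rather than borgmatic's actual implementation:

import tempfile

def make_defaults_extra_file(password):
    # MySQL/MariaDB option files treat "#" as a comment delimiter, so an
    # unquoted password like "p4ss#word" would be silently truncated to
    # "p4ss". Single quotes preserve the full value. (Passwords containing
    # single quotes would still need escaping, omitted here for brevity.)
    defaults_file = tempfile.NamedTemporaryFile(mode='w', suffix='.cnf', delete=False)
    defaults_file.write(f"[client]\npassword='{password}'\n")
    defaults_file.close()
    return defaults_file.name

# Usage: mysqldump --defaults-extra-file=<returned path> ...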
c0135864c2 With the PagerDuty monitoring hook, send borgmatic logs to PagerDuty so they show up in the incident UI (#409).
2025-03-04 08:55:09 -08:00
ddfd3c6ca1 Clarify Zabbix monitoring hook documentation about creating items (#936).
2025-03-03 16:02:22 -08:00
dbe82ff11e Bump version for release.
2025-03-03 10:21:15 -08:00
55c0ab1610 Add "tls" options to the MariaDB and MySQL database hooks.
2025-03-03 10:07:03 -08:00
1f86100f26 NEWS wording tweaks. 2025-03-02 20:10:20 -08:00
2a16ffab1b When ctrl-C is pressed, ensure Borg actually exits (#1015).
2025-03-02 10:32:57 -08:00
4b2f7e03af Fix broken "config generate" (#975).
2025-03-01 21:02:32 -08:00
024006f4c0 Title case Borg.
2025-03-01 20:56:40 -08:00
4c71e600ca Expand a little on the specifics of backups of an LVM volume (#1014).
Reviewed-on: #1014
2025-03-02 04:55:13 +00:00
114f5702b2 Expand a little on the specifics of backups of an LVM volume. 2025-03-02 14:22:57 +11:00
54afe87a9f Add a "compression" option to the PostgreSQL database hook (#975).
2025-03-01 17:29:16 -08:00
25b6a49df7 Send database passwords to MongoDB via anonymous pipe (#1013).
2025-03-01 10:04:04 -08:00
b97372adf2 Add MariaDB and MySQL anonymous pipe to NEWS (#1009).
2025-03-01 08:49:42 -08:00
6bc9a592d9 Send MariaDB and MySQL passwords via anonymous pipe instead of environment variable (#1009).
Reviewed-on: #1011
2025-03-01 03:33:08 +00:00
839862cff0 Update documentation link text about providing database passwords from external sources (#1009). 2025-02-28 19:31:22 -08:00
06b065cb09 Add missing test coverage (#1009). 2025-02-28 18:28:09 -08:00
1e5c256d54 Get tests passing again (#1009). 2025-02-28 14:40:00 -08:00
baf5fec78d If the user supplies their own --defaults-extra-file, include it from the one we generate (#1009). 2025-02-28 10:53:17 -08:00
48a4fbaa89 Add missing test coverage for defaults file function (#1009). 2025-02-28 09:21:01 -08:00
1e274d7153 Add some missing test mocking (#1009). 2025-02-28 08:59:38 -08:00
c41b743819 Get existing unit tests passing (#1009). 2025-02-28 08:37:03 -08:00
36d0073375 Send MySQL passwords via anonymous pipe instead of environment variable (#1009). 2025-02-27 10:42:47 -08:00
0bd418836e Send MariaDB passwords via anonymous pipe instead of environment variable (#1009) 2025-02-27 10:15:45 -08:00
923fa7d82f Include contributors of closed tickets in "recent contributors" documentation.
2025-02-27 09:23:08 -08:00
dce0528057 In the Zabbix monitoring hook, support Zabbix 7.2's authentication changes (#1003).
2025-02-26 22:33:01 -08:00
8a6c6c84d2 Add Uptime Kuma "verify_tls" option to NEWS.
2025-02-24 11:30:16 -08:00
1e21c8f97b Add "verify_tls" option to Uptime Kuma hook.
Merge pull request #90 from columbarius/uptimekuma-verify-tls
2025-02-24 11:28:18 -08:00
columbarius
2eab74a521 Add "verify_tls" option to Uptime Kuma hook. 2025-02-24 20:12:47 +01:00
3bca686707 Fix a ZFS error during snapshot cleanup (#1001).
2025-02-23 17:01:35 -08:00
8854b9ad20 Backing out a ZFS change that hasn't been confirmed working quite yet.
2025-02-23 15:49:12 -08:00
bcc463688a When getting all ZFS dataset mount points, deduplicate and filter out "none".
2025-02-23 15:46:39 -08:00
596305e3de Bump version for release.
2025-02-23 09:59:53 -08:00
c462f0c84c Fix Python < 3.12 compatibility issue (#1005).
2025-02-23 09:59:19 -08:00
4f0142c3c5 Fix Python < 3.12 compatibility issue (#1005).
2025-02-23 09:09:47 -08:00
4f88018558 Bump version for release.
2025-02-22 14:39:45 -08:00
3642687ab5 Fix broken tests (#999).
2025-02-22 14:32:32 -08:00
5d9c111910 Fix a runtime directory error from a conflict between "extra_borg_options" and special file detection (#999).
2025-02-22 14:26:21 -08:00
3cf19dd1b0 Send the "encryption_passphrase" option to Borg via an anonymous pipe (#998).
Reviewed-on: #998
2025-02-22 17:57:37 +00:00
ad3392ca15 Ignore the BORG_PASSCOMMAND environment variable when the "encryption_passphrase" option is set. 2025-02-22 09:55:07 -08:00
087b7f5c7b Merge branch 'main' into passphrase-via-file-descriptor 2025-02-22 09:27:39 -08:00
34bb09e9be Document Zabbix server version compatibility (#1003).
2025-02-22 09:26:08 -08:00
a61eba8c79 Add PR number to NEWS item. 2025-02-21 22:30:31 -08:00
2280bb26b6 Fix a few tests to mock more accurately. 2025-02-21 22:08:08 -08:00
4ee2603fef Merge branch 'main' into passphrase-via-file-descriptor 2025-02-21 20:26:48 -08:00
cc2ede70ac Fix ZFS mount errors (#1001).
Reviewed-on: #1002
2025-02-22 04:13:35 +00:00
02d8ecd66e Document the root pattern requirement for snapshotting (#1001). 2025-02-21 18:08:34 -08:00
9ba78fa33b Don't try to unmount empty directories (#1001). 2025-02-21 17:59:45 -08:00
a3e34d63e9 Remove debugging prints (#1001). 2025-02-21 16:36:12 -08:00
bc25ac4eea Fix Btrfs end-to-end-test (#1001). 2025-02-21 16:32:07 -08:00
e69c686abf Get all unit/integration tests passing (#1001). 2025-02-21 11:32:57 -08:00
0210bf76bc Fix ZFS and Btrfs tests (#1001). 2025-02-20 22:58:05 -08:00
e69cce7e51 Document ZFS snapshotting exclusion of "canmount=off" datasets (#1001). 2025-02-20 14:04:23 -08:00
3655e8784a Add NEWS items for filesystem hook fixes/changes (#1001). 2025-02-20 13:25:09 -08:00
58aed0892c Initial work on fixing ZFS mount errors (#1001). 2025-02-19 22:49:14 -08:00
0e65169503 Improve clarity of comments and variable names of runtime directory exclude detection logic (#999).
2025-02-17 14:12:55 -08:00
07ecc0ffd6 Send the "encryption_passphrase" option to Borg via an anonymous pipe. 2025-02-17 11:03:36 -08:00
37ad398aff Add a ticket number to NEWS for (some of) the credential hook work.
2025-02-16 09:12:52 -08:00
056dfc6d33 Add Btrfs "/" subvolume fix to NEWS.
2025-02-15 09:56:46 -08:00
bf850b9d38 Fix path handling error when handling btrfs '/' subvolume.
Merge pull request #89 from dmitry-t7ko/btrfs-root-submodule-fix
2025-02-15 09:49:13 -08:00
7f22612bf1 Add credential loading from file, KeePassXC, and Docker/Podman secrets.
Reviewed-on: #994
2025-02-15 04:20:11 +00:00
e02a0e6322 Support working directory for container and file credential hooks. 2025-02-14 19:35:12 -08:00
2ca23b629c Add end-to-end tests for new credential hooks, along with some related configuration options. 2025-02-14 15:33:30 -08:00
b283e379d0 Actually pass the current configuration to credential hooks. 2025-02-14 10:15:52 -08:00
5dda9c8ee5 Add unit tests for new credential hooks. 2025-02-13 16:38:50 -08:00
Dmitrii Tishchenko
653d8c0946 Remove unneeded 'continue' 2025-02-13 21:44:45 +00:00
Dmitrii Tishchenko
92e87d839d Fix path handling error when handling btrfs '/' subvolume 2025-02-13 17:12:23 +00:00
d6cf48544a Get existing tests passing. 2025-02-12 22:49:16 -08:00
8745b9939d Add documentation for new credential hooks. 2025-02-12 21:44:17 -08:00
5661b67cde Merge branch 'main' into keepassxc-docker-podman-file-credentials 2025-02-12 09:14:49 -08:00
aa4a9de3b2 Fix the "create" action to omit the repository label prefix from Borg's output when databases are enabled (#996).
2025-02-12 09:12:59 -08:00
f9ea45493d Add missing dev0 tag to version. 2025-02-11 23:00:26 -08:00
a0ba5b673b Add credential loading from file, KeePassXC, and Docker/Podman secrets. 2025-02-11 22:54:07 -08:00
50096296da Revamp systemd credential syntax to be more consistent with constants (#966).
2025-02-10 22:01:23 -08:00
3bc14ba364 Bump version for release. 2025-02-10 14:21:33 -08:00
c9c6913547 Add a "!credential" tag for loading systemd credentials into borgmatic configuration (#966).
Reviewed-on: #993
2025-02-10 22:18:43 +00:00
779f51f40a Fix favicon on non-home pages.
2025-02-10 13:24:27 -08:00
24b846e9ca Additional test coverage (#966). 2025-02-10 10:05:51 -08:00
73fe29b055 Add additional test coverage for credential tag (#966). 2025-02-10 09:52:07 -08:00
775385e688 Get unit tests passing again (#966). 2025-02-09 22:44:38 -08:00
efdbee934a Update documentation to describe delayed !credential tag approach (#966). 2025-02-09 15:27:58 -08:00
49719dc309 Load credentials from database hooks (#966). 2025-02-09 11:35:26 -08:00
b7e3ee8277 Revamped the credentials to load them much closer to where they're used (#966). 2025-02-09 11:12:40 -08:00
97fe1a2c50 Flake fixes (#966). 2025-02-08 19:28:03 -08:00
66abf38b39 Add end-to-end tests for the systemd credential hook (#966). 2025-02-08 17:50:59 -08:00
5baf091853 Add automated tests for the systemd credential hook (#966). 2025-02-08 10:42:11 -08:00
c5abcc1fdf Add documentation for the "!credential" tag (#966). 2025-02-07 16:04:10 -08:00
9a9a8fd1c6 Add a "!credential" tag for loading systemd credentials into borgmatic configuration (#966). 2025-02-07 14:09:26 -08:00
ab9e8d06ee Add a delayed logging handler that buffers anything logged before logging is actually configured.
2025-02-07 09:50:05 -08:00
5a2cd1b261 Add support for Python 3.13.
2025-02-06 14:21:36 -08:00
ffaa99ba15 With the "max_duration" option or the "--max-duration" flag, run the archives and repository checks separately so they don't interfere with one another (#988).
2025-02-06 11:52:16 -08:00
5dc0b08f22 Fix the log message code to avoid using Python 3.10+ logging features (#989).
2025-02-04 11:51:39 -08:00
23009e22aa When both "encryption_passcommand" and "encryption_passphrase" are configured, prefer "encryption_passphrase" even if it's an empty value (#987).
2025-02-03 23:20:31 -08:00
6cfa10fb7e Fix a "list" action error when the "encryption_passcommand" option is set (#987).
2025-02-03 23:11:59 -08:00
d29d0bc1c6 NEWS wording tweaks for clarity.
2025-02-03 11:22:54 -08:00
c3f4f94190 Bump version for release. 2025-02-03 11:20:13 -08:00
b2d61ade4e Change the default value for the "--original-hostname" flag from "localhost" to no host specified (#985).
2025-02-03 11:17:21 -08:00
cca9039863 Move the passcommand logic out of a hook to prevent future security issues (e.g., passphrase exfiltration attacks) if a user invokes a credential hook from an arbitrary configuration value (#961).
2025-01-31 22:15:53 -08:00
afcf253318 Fix flake errors (#961).
2025-01-31 10:27:20 -08:00
76533c7db5 Add a clarifying comment to the NEWS entry (#961).
2025-01-31 10:26:05 -08:00
0073366dfc Add a passcommand hook so borgmatic can collect the encryption passphrase once and pass it to Borg multiple times (#961).
Reviewed-on: #984
2025-01-31 18:13:38 +00:00
13acaa47e4 Add an end-to-end test for the passcommand hook (#961). 2025-01-30 22:50:13 -08:00
cf326a98a5 Add test coverage for new code (#961). 2025-01-30 21:29:52 -08:00
355eef186e Get existing tests passing again (#961). 2025-01-30 20:18:03 -08:00
c392e4914c Add documentation (#961). 2025-01-30 10:20:24 -08:00
8fed8e0695 Add a passcommand hook to NEWS (#961). 2025-01-30 09:55:32 -08:00
52189490a2 Docstring typo (#961). 2025-01-30 09:48:55 -08:00
26b44699ba Add a passphrase hook so borgmatic can collect the encryption passphrase once and pass it to Borg multiple times (#961). 2025-01-30 09:35:20 -08:00
09933c3dc7 Log the repository path or label on every relevant log message, not just some logs (#635).
Reviewed-on: #980
2025-01-29 18:39:49 +00:00
c702dca8da Merge branch 'main' into log-repository-everywhere 2025-01-29 10:31:30 -08:00
62003c58ea Fix the Btrfs hook to support subvolumes with names like "@home", different from their mount points (#983).
2025-01-29 09:46:39 -08:00
67c22e464a Code formatting (#635). 2025-01-29 08:00:42 -08:00
5a9066940f Add monitoring end-to-end tests (#635). 2025-01-28 23:06:22 -08:00
61f0987051 Merge branch 'main' into log-repository-everywhere 2025-01-27 22:03:35 -08:00
63c39be55f Fix flaking issues (#635). 2025-01-27 12:28:36 -08:00
7e344e6e0a Complete test coverage for new code (#635). 2025-01-27 12:25:28 -08:00
b02ff8b6e5 Fix "spot" check file count delta error (#981).
2025-01-27 10:51:06 -08:00
b6ff242d3a Fix for borgmatic "exclude_patterns" and "exclude_from" recursing into excluded subdirectories (#982).
2025-01-27 10:07:19 -08:00
71f1819f05 Some additional test coverage (#635). 2025-01-27 09:27:12 -08:00
31b6e21139 Fix end-to-end tests and update more log messages (#635). 2025-01-26 19:03:40 -08:00
7d56641f56 Get existing unit tests passing (#635). 2025-01-26 12:13:29 -08:00
1ad6be2077 Add missing test coverage and fix incorrect test expectations (#855).
2025-01-26 09:29:54 -08:00
803361b850 Some text fixes (#635). 2025-01-26 09:12:18 -08:00
e0059de711 Add log prefix context manager to make prefix cleanup/restoration easier (#635). 2025-01-25 21:56:41 -08:00
b9ec9bb873 Don't prefix command output (like Borg output) with the global log prefix (#635). 2025-01-25 14:49:39 -08:00
8c5db19490 Code formatting (#635). 2025-01-25 14:14:48 -08:00
cc7e01be68 Log the repository path or label on every relevant log message, not just some logs (#635). 2025-01-25 14:01:25 -08:00
1232ba8045 Revert "Log the repository path or label on every relevant log message, not just some logs (#635)."
This reverts commit 90c1161a8c3c52474f76ee0a96808ea5f0b21719.
2025-01-25 13:57:56 -08:00
90c1161a8c Log the repository path or label on every relevant log message, not just some logs (#635).
2025-01-25 13:55:58 -08:00
02451a8b30 Further database container dump documentation clarifications (#978).
2025-01-25 09:17:13 -08:00
730350b31a Fix incorrect option name within schema description.
2025-01-25 08:04:13 -08:00
203e1f4e99 Bump version for release. 2025-01-25 08:01:34 -08:00
4c35a564ef Fix root patterns so they don't have an invalid "sh:" prefix before getting passed to Borg (#979).
2025-01-25 07:59:53 -08:00
7551810ea6 Clarify/correct documentation about dumping databases when using containers (#978).
2025-01-24 14:31:38 -08:00
ce523eeed6 Add a blurb about recent contributors.
2025-01-23 15:11:54 -08:00
3c0def6d6d Expand the recent contributors documentation section to ticket submitters.
2025-01-23 14:41:26 -08:00
f08014e3be Code formatting.
2025-01-23 12:11:27 -08:00
86ad93676d Bump version for release. 2025-01-23 12:09:20 -08:00
e1825d2bcb Add #977 to NEWS. 2025-01-23 12:08:34 -08:00
92b8c0230e Fix exclude patterns parsing to support pattern styles (#977).
Reviewed-on: #976
2025-01-23 20:06:11 +00:00
Pavel Andreev
73c196aa70 Fix according to review comments 2025-01-23 19:49:10 +00:00
Pavel Andreev
5d390d7953 Fix patterns parsing 2025-01-23 15:58:43 +00:00
ffb342780b Link to Sentry's DSN documentation (#855).
2025-01-21 17:28:32 -08:00
9871267f97 Add a Sentry monitoring hook (#855).
2025-01-21 17:23:56 -08:00
914c2b17e9 Add a Sentry monitoring hook (#855). 2025-01-21 17:23:18 -08:00
804455ac9f Fix for "exclude_from" files being completely ignored (#971).
2025-01-19 10:27:13 -08:00
4fe0fd1576 Fix version number in NEWS.
2025-01-18 09:55:03 -08:00
e3d40125cb Fix for a "spot" check error when a filename in the most recent archive contains a newline (#968).
2025-01-18 09:54:30 -08:00
e66df22a6e Fix for an error when a blank line occurs in the configured patterns or excludes (#970).
2025-01-18 09:25:29 -08:00
199 changed files with 11214 additions and 3529 deletions

111
NEWS
View File

@@ -1,3 +1,114 @@
2.0.0.dev0
* #345: Add a "key import" action to import a repository key from backup.
* #422: Add home directory expansion to file-based and KeePassXC credential hooks.
* #790, #821: Deprecate all "before_*", "after_*" and "on_error" command hooks in favor of more
flexible "commands:". See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/
* #790: BREAKING: For both new and deprecated command hooks, run a configured "after" hook even if
an error occurs first. This allows you to perform cleanup steps that correspond to "before"
preparation commands—even when something goes wrong.
* #790: BREAKING: Run all command hooks (both new and deprecated) respecting the
"working_directory" option if configured, meaning that hook commands are run in that directory.
* #836: Add a custom command option for the SQLite hook.
* #1010: When using Borg 2, don't pass the "--stats" flag to "borg prune".
* #1020: Document a database use case involving a temporary database client container:
https://torsion.org/borgmatic/docs/how-to/backup-your-databases/#containers
* #1037: Fix an error with the "extract" action when both a remote repository and a
"working_directory" are used.
1.9.14
* #409: With the PagerDuty monitoring hook, send borgmatic logs to PagerDuty so they show up in the
incident UI. See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#pagerduty-hook
* #936: Clarify Zabbix monitoring hook documentation about creating items:
https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#zabbix-hook
* #1017: Fix a regression in which some MariaDB/MySQL passwords were not escaped correctly.
* #1021: Fix a regression in which the "exclude_patterns" option didn't expand "~" (the user's
home directory). This fix means that all "patterns" and "patterns_from" also now expand "~".
* #1023: Fix an error in the Btrfs hook when attempting to snapshot a read-only subvolume. Now,
read-only subvolumes are ignored since Btrfs can't actually snapshot them.
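The #1021 fix above is plain tilde expansion applied to each configured pattern. A tiny illustration using only the standard library (not borgmatic's exact code):

import os.path

# "~" only expands at the start of a value; everything else passes through.
expanded = tuple(
    os.path.expanduser(exclude) for exclude in ('~/Downloads', '/var/cache')
)
# -> ('/home/<user>/Downloads', '/var/cache')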
1.9.13
* #975: Add a "compression" option to the PostgreSQL database hook.
* #1001: Fix a ZFS error during snapshot cleanup.
* #1003: In the Zabbix monitoring hook, support Zabbix 7.2's authentication changes.
* #1009: Send database passwords to MariaDB and MySQL via anonymous pipe, which is more secure than
using an environment variable.
* #1013: Send database passwords to MongoDB via anonymous pipe, which is more secure than using
"--password" on the command-line!
* #1015: When ctrl-C is pressed, more strongly encourage Borg to actually exit.
* Add a "verify_tls" option to the Uptime Kuma monitoring hook for disabling TLS verification.
* Add "tls" options to the MariaDB and MySQL database hooks to enable or disable TLS encryption
between client and server.
1.9.12
* #1005: Fix the credential hooks to avoid using Python 3.12+ string features. Now borgmatic will
work with Python 3.9, 3.10, and 3.11 again.
1.9.11
* #795: Add credential loading from file, KeePassXC, and Docker/Podman secrets. See the
documentation for more information:
https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/
* #996: Fix the "create" action to omit the repository label prefix from Borg's output when
databases are enabled.
* #998: Send the "encryption_passphrase" option to Borg via an anonymous pipe, which is more secure
than using an environment variable. (See the pipe sketch after this version's entries.)
* #999: Fix a runtime directory error from a conflict between "extra_borg_options" and special file
detection.
* #1001: For the ZFS, Btrfs, and LVM hooks, only make snapshots for root patterns that come from
a borgmatic configuration option (e.g. "source_directories")—not from other hooks within
borgmatic.
* #1001: Fix a ZFS/LVM error due to colliding snapshot mount points for nested datasets or logical
volumes.
* #1001: Don't try to snapshot ZFS datasets that have the "canmount=off" property.
* Fix another error in the Btrfs hook when a subvolume mounted at "/" is configured in borgmatic's
source directories.
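The anonymous pipe mentioned in the #998 entry above (and in the #1009/#1013 entries under 1.9.13) keeps secrets out of the process environment and command-line, both of which other local users may be able to read via /proc. A minimal sketch of the technique; the /dev/fd path trick is standard on Linux and macOS, but the function and argument handling here are illustrative only:

import os
import subprocess

def run_with_secret_via_pipe(command, secret):
    read_fd, write_fd = os.pipe()
    # A short secret fits entirely in the pipe buffer, so the parent can
    # write it and close the write end before the child even starts.
    os.write(write_fd, secret.encode('utf-8'))
    os.close(write_fd)
    try:
        # pass_fds keeps read_fd open in the child, which can then read the
        # secret back via the /dev/fd filesystem, e.g. as a file argument.
        subprocess.run(
            command + [f'/dev/fd/{read_fd}'],
            pass_fds=(read_fd,),
            check=True,
        )
    finally:
        os.close(read_fd)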
1.9.10
* #966: Add a "{credential ...}" syntax for loading systemd credentials into borgmatic
configuration files. See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/
* #987: Fix a "list" action error when the "encryption_passcommand" option is set.
* #987: When both "encryption_passcommand" and "encryption_passphrase" are configured, prefer
"encryption_passphrase" even if it's an empty value.
* #988: With the "max_duration" option or the "--max-duration" flag, run the archives and
repository checks separately so they don't interfere with one another. Previously, borgmatic
refused to run checks in this situation.
* #989: Fix the log message code to avoid using Python 3.10+ logging features. Now borgmatic will
work with Python 3.9 again.
* Capture and delay any log records produced before logging is fully configured, so early log
records don't get lost.
* Add support for Python 3.13.
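The "capture and delay" entry above is essentially the standard library's MemoryHandler pattern. A minimal sketch, assuming a simple console target (borgmatic's actual handler differs in its details):

import logging
import logging.handlers

# Buffer early records; a flushLevel above CRITICAL prevents any premature
# flush, and capacity bounds how many records are held.
delayed_handler = logging.handlers.MemoryHandler(
    capacity=100, flushLevel=logging.CRITICAL + 1
)
logging.getLogger().addHandler(delayed_handler)

logging.warning('Logged before configuration; buffered rather than lost.')

# Later, once the real logging configuration is known:
console_handler = logging.StreamHandler()
delayed_handler.setTarget(console_handler)
delayed_handler.flush()  # Replay the buffered records.
logging.getLogger().removeHandler(delayed_handler)
logging.getLogger().addHandler(console_handler)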
1.9.9
* #635: Log the repository path or label on every relevant log message, not just some logs.
* #961: When the "encryption_passcommand" option is set, call the command once from borgmatic to
collect the encryption passphrase and then pass it to Borg multiple times. See the documentation
for more information: https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/ (A sketch
of the call-once idea follows this version's entries.)
* #981: Fix a "spot" check file count delta error.
* #982: Fix for borgmatic "exclude_patterns" and "exclude_from" recursing into excluded
subdirectories.
* #983: Fix the Btrfs hook to support subvolumes with names like "@home" different from their
mount points.
* #985: Change the default value for the "--original-hostname" flag from "localhost" to no host
specified. This way, the "restore" action works without a hostname if there's a single matching
database dump.
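The call-once behavior in the #961 entry above amounts to memoizing the passcommand's output. A minimal sketch, not borgmatic's actual implementation:

import functools
import shlex
import subprocess

@functools.cache
def get_passphrase(passcommand):
    # Run the configured passcommand exactly once per borgmatic invocation
    # and reuse the result for every subsequent Borg call.
    return subprocess.check_output(shlex.split(passcommand)).decode().rstrip('\n')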
1.9.8
* #979: Fix root patterns so they don't have an invalid "sh:" prefix before getting passed to Borg.
* Expand the recent contributors documentation section to include ticket submitters—not just code
contributors—because there are multiple ways to contribute to the project! See:
https://torsion.org/borgmatic/#recent-contributors
1.9.7
* #855: Add a Sentry monitoring hook. See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#sentry-hook
* #968: Fix for a "spot" check error when a filename in the most recent archive contains a newline.
* #970: Fix for an error when there's a blank line in the configured patterns or excludes.
* #971: Fix for "exclude_from" files being completely ignored.
* #977: Fix for "exclude_patterns" and "exclude_from" not supporting explicit pattern styles (e.g.,
"sh:" or "re:"). (A sketch of the style-prefix parsing follows the 1.9.6 entries below.)
1.9.6
* #959: Fix an error in the Btrfs hook when a subvolume mounted at "/" is configured in borgmatic's
source directories.
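The #977 fix under 1.9.7 above is about honoring an explicit style prefix on each pattern. A rough sketch of the parsing idea, using Borg's documented style names (the helper itself is illustrative):

# Borg's pattern styles: fnmatch, shell, regular expression, path prefix,
# and path full-match.
VALID_STYLES = {'fm', 'sh', 're', 'pp', 'pf'}

def split_style(pattern, default_style='fm'):
    # Split an explicit "style:pattern" prefix off an exclude line, falling
    # back to a default style when no known prefix is present.
    (style, separator, remainder) = pattern.partition(':')
    if separator and style in VALID_STYLES:
        return (style, remainder)
    return (default_style, pattern)

# split_style('re:^/home/[^/]+/junk') -> ('re', '^/home/[^/]+/junk')
# split_style('/etc/passwd') -> ('fm', '/etc/passwd')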

View File

@@ -56,6 +56,8 @@ borgmatic is powered by [Borg Backup](https://www.borgbackup.org/).
## Integrations
### Data
<a href="https://www.postgresql.org/"><img src="docs/static/postgresql.png" alt="PostgreSQL" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://www.mysql.com/"><img src="docs/static/mysql.png" alt="MySQL" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://mariadb.com/"><img src="docs/static/mariadb.png" alt="MariaDB" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
@@ -65,6 +67,11 @@ borgmatic is powered by [Borg Backup](https://www.borgbackup.org/).
<a href="https://btrfs.readthedocs.io/"><img src="docs/static/btrfs.png" alt="Btrfs" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://sourceware.org/lvm2/"><img src="docs/static/lvm.png" alt="LVM" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://rclone.org"><img src="docs/static/rclone.png" alt="rclone" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://www.borgbase.com/?utm_source=borgmatic"><img src="docs/static/borgbase.png" alt="BorgBase" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
### Monitoring
<a href="https://healthchecks.io/"><img src="docs/static/healthchecks.png" alt="Healthchecks" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://uptime.kuma.pet/"><img src="docs/static/uptimekuma.png" alt="Uptime Kuma" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://cronitor.io/"><img src="docs/static/cronitor.png" alt="Cronitor" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
@@ -75,7 +82,15 @@ borgmatic is powered by [Borg Backup](https://www.borgbackup.org/).
<a href="https://grafana.com/oss/loki/"><img src="docs/static/loki.png" alt="Loki" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://github.com/caronc/apprise/wiki"><img src="docs/static/apprise.png" alt="Apprise" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://www.zabbix.com/"><img src="docs/static/zabbix.png" alt="Zabbix" height="40px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://www.borgbase.com/?utm_source=borgmatic"><img src="docs/static/borgbase.png" alt="BorgBase" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://sentry.io/"><img src="docs/static/sentry.png" alt="Sentry" height="40px" style="margin-bottom:20px; margin-right:20px;"></a>
### Credentials
<a href="https://systemd.io/"><img src="docs/static/systemd.png" alt="Sentry" height="40px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://www.docker.com/"><img src="docs/static/docker.png" alt="Docker" height="40px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://podman.io/"><img src="docs/static/podman.png" alt="Podman" height="40px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://keepassxc.org/"><img src="docs/static/keepassxc.png" alt="Podman" height="40px" style="margin-bottom:20px; margin-right:20px;"></a>
## Getting started
@@ -164,4 +179,8 @@ info on cloning source code, running tests, etc.
### Recent contributors
Thanks to all borgmatic contributors! There are multiple ways to contribute to
this project, so the following includes those who have fixed bugs, contributed
features, *or* filed tickets.
{% include borgmatic/contributors.html %}

View File

@@ -22,9 +22,7 @@ def run_borg(
if borg_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, borg_arguments.repository
):
logger.info(
f'{repository.get("label", repository["path"])}: Running arbitrary Borg command'
)
logger.info('Running arbitrary Borg command')
archive_name = borgmatic.borg.repo_list.resolve_archive_name(
repository['path'],
borg_arguments.archive,

View File

@@ -21,9 +21,7 @@ def run_break_lock(
if break_lock_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, break_lock_arguments.repository
):
logger.info(
f'{repository.get("label", repository["path"])}: Breaking repository and cache locks'
)
logger.info('Breaking repository and cache locks')
borgmatic.borg.break_lock.break_lock(
repository['path'],
config,

View File

@@ -16,7 +16,7 @@ def run_change_passphrase(
remote_path,
):
'''
Run the "key change-passprhase" action for the given repository.
Run the "key change-passphrase" action for the given repository.
'''
if (
change_passphrase_arguments.repository is None
@@ -24,9 +24,7 @@ def run_change_passphrase(
repository, change_passphrase_arguments.repository
)
):
logger.info(
f'{repository.get("label", repository["path"])}: Changing repository passphrase'
)
logger.info('Changing repository passphrase')
borgmatic.borg.change_passphrase.change_passphrase(
repository['path'],
config,

View File

@@ -363,7 +363,6 @@ def collect_spot_check_source_paths(
borgmatic.hooks.dispatch.call_hooks(
'use_streaming',
config,
repository['path'],
borgmatic.hooks.dispatch.Hook_type.DATA_SOURCE,
).values()
)
@@ -387,13 +386,12 @@ def collect_spot_check_source_paths(
stream_processes=stream_processes,
)
)
borg_environment = borgmatic.borg.environment.make_environment(config)
working_directory = borgmatic.config.paths.get_working_directory(config)
paths_output = borgmatic.execute.execute_command_and_capture_output(
create_flags + create_positional_arguments,
capture_stderr=True,
extra_environment=borg_environment,
environment=borgmatic.borg.environment.make_environment(config),
working_directory=working_directory,
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),
@@ -401,7 +399,7 @@ def collect_spot_check_source_paths(
paths = tuple(
path_line.split(' ', 1)[1]
for path_line in paths_output.split('\n')
for path_line in paths_output.splitlines()
if path_line and path_line.startswith('- ') or path_line.startswith('+ ')
)
@@ -443,7 +441,7 @@ def collect_spot_check_archive_paths(
config,
local_borg_version,
global_arguments,
path_format='{type} {path}{NL}', # noqa: FS003
path_format='{type} {path}{NUL}', # noqa: FS003
local_path=local_path,
remote_path=remote_path,
)
@@ -468,15 +466,14 @@ def compare_spot_check_hashes(
global_arguments,
local_path,
remote_path,
log_prefix,
source_paths,
):
'''
Given a repository configuration dict, the name of the latest archive, a configuration dict, the
local Borg version, global arguments as an argparse.Namespace instance, the local Borg path, the
remote Borg path, a log label, and spot check source paths, compare the hashes for a sampling of
the source paths with hashes from corresponding paths in the given archive. Return a sequence of
the paths that fail that hash comparison.
remote Borg path, and spot check source paths, compare the hashes for a sampling of the source
paths with hashes from corresponding paths in the given archive. Return a sequence of the paths
that fail that hash comparison.
'''
# Based on the configured sample percentage, come up with a list of random sample files from the
# source directories.
@@ -492,7 +489,7 @@ def compare_spot_check_hashes(
if os.path.exists(os.path.join(working_directory or '', source_path))
}
logger.debug(
f'{log_prefix}: Sampling {sample_count} source paths (~{spot_check_config["data_sample_percentage"]}%) for spot check'
f'Sampling {sample_count} source paths (~{spot_check_config["data_sample_percentage"]}%) for spot check'
)
source_sample_paths_iterator = iter(source_sample_paths)
@@ -540,7 +537,7 @@ def compare_spot_check_hashes(
local_borg_version,
global_arguments,
list_paths=source_sample_paths_subset,
path_format='{xxh64} {path}{NL}', # noqa: FS003
path_format='{xxh64} {path}{NUL}', # noqa: FS003
local_path=local_path,
remote_path=remote_path,
)
@@ -580,8 +577,7 @@ def spot_check(
disk to those stored in the latest archive. If any differences are beyond configured tolerances,
then the check fails.
'''
log_prefix = f'{repository.get("label", repository["path"])}'
logger.debug(f'{log_prefix}: Running spot check')
logger.debug('Running spot check')
try:
spot_check_config = next(
@@ -604,7 +600,7 @@ def spot_check(
remote_path,
borgmatic_runtime_directory,
)
logger.debug(f'{log_prefix}: {len(source_paths)} total source paths for spot check')
logger.debug(f'{len(source_paths)} total source paths for spot check')
archive = borgmatic.borg.repo_list.resolve_archive_name(
repository['path'],
@@ -615,7 +611,7 @@ def spot_check(
local_path,
remote_path,
)
logger.debug(f'{log_prefix}: Using archive {archive} for spot check')
logger.debug(f'Using archive {archive} for spot check')
archive_paths = collect_spot_check_archive_paths(
repository,
@@ -627,11 +623,11 @@ def spot_check(
remote_path,
borgmatic_runtime_directory,
)
logger.debug(f'{log_prefix}: {len(archive_paths)} total archive paths for spot check')
logger.debug(f'{len(archive_paths)} total archive paths for spot check')
if len(source_paths) == 0:
logger.debug(
f'{log_prefix}: Paths in latest archive but not source paths: {", ".join(set(archive_paths)) or "none"}'
f'Paths in latest archive but not source paths: {", ".join(set(archive_paths)) or "none"}'
)
raise ValueError(
'Spot check failed: There are no source paths to compare against the archive'
@@ -644,10 +640,10 @@ def spot_check(
if count_delta_percentage > spot_check_config['count_tolerance_percentage']:
rootless_source_paths = set(path.lstrip(os.path.sep) for path in source_paths)
logger.debug(
f'{log_prefix}: Paths in source paths but not latest archive: {", ".join(rootless_source_paths - set(archive_paths)) or "none"}'
f'Paths in source paths but not latest archive: {", ".join(rootless_source_paths - set(archive_paths)) or "none"}'
)
logger.debug(
f'{log_prefix}: Paths in latest archive but not source paths: {", ".join(set(archive_paths) - rootless_source_paths) or "none"}'
f'Paths in latest archive but not source paths: {", ".join(set(archive_paths) - rootless_source_paths) or "none"}'
)
raise ValueError(
f'Spot check failed: {count_delta_percentage:.2f}% file count delta between source paths and latest archive (tolerance is {spot_check_config["count_tolerance_percentage"]}%)'
@@ -661,25 +657,24 @@ def spot_check(
global_arguments,
local_path,
remote_path,
log_prefix,
source_paths,
)
# Error if the percentage of failing hashes exceeds the configured tolerance percentage.
logger.debug(f'{log_prefix}: {len(failing_paths)} non-matching spot check hashes')
logger.debug(f'{len(failing_paths)} non-matching spot check hashes')
data_tolerance_percentage = spot_check_config['data_tolerance_percentage']
failing_percentage = (len(failing_paths) / len(source_paths)) * 100
if failing_percentage > data_tolerance_percentage:
logger.debug(
f'{log_prefix}: Source paths with data not matching the latest archive: {", ".join(failing_paths)}'
f'Source paths with data not matching the latest archive: {", ".join(failing_paths)}'
)
raise ValueError(
f'Spot check failed: {failing_percentage:.2f}% of source paths with data not matching the latest archive (tolerance is {data_tolerance_percentage}%)'
)
logger.info(
f'{log_prefix}: Spot check passed with a {count_delta_percentage:.2f}% file count delta and a {failing_percentage:.2f}% file data delta'
f'Spot check passed with a {count_delta_percentage:.2f}% file count delta and a {failing_percentage:.2f}% file data delta'
)
@@ -687,7 +682,6 @@ def run_check(
config_filename,
repository,
config,
hook_context,
local_borg_version,
check_arguments,
global_arguments,
@@ -704,17 +698,7 @@
):
return
borgmatic.hooks.command.execute_hook(
config.get('before_check'),
config.get('umask'),
config_filename,
'pre-check',
global_arguments.dry_run,
**hook_context,
)
log_prefix = repository.get('label', repository['path'])
logger.info(f'{log_prefix}: Running consistency checks')
logger.info('Running consistency checks')
repository_id = borgmatic.borg.check.get_repository_id(
repository['path'],
@@ -767,9 +751,7 @@ def run_check(
write_check_time(make_check_time_path(config, repository_id, 'extract'))
if 'spot' in checks:
with borgmatic.config.paths.Runtime_directory(
config, log_prefix
) as borgmatic_runtime_directory:
with borgmatic.config.paths.Runtime_directory(config) as borgmatic_runtime_directory:
spot_check(
repository,
config,
@@ -780,12 +762,3 @@ def run_check(
borgmatic_runtime_directory,
)
write_check_time(make_check_time_path(config, repository_id, 'spot'))
borgmatic.hooks.command.execute_hook(
config.get('after_check'),
config.get('umask'),
config_filename,
'post-check',
global_arguments.dry_run,
**hook_context,
)
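The "{NL}" to "{NUL}" change in the hunks above matters because NUL can't appear in a file path, so NUL-terminated records stay parseable even when a filename contains a newline (compare the #968 entry in the commit list above). A sketch of parsing such output, mirroring the "hash path" record shape in the path_format strings:

def parse_nul_separated(output):
    # Each record is "<value> <path>"; records are NUL-terminated, so paths
    # containing newlines survive the round trip.
    return tuple(record.split(' ', 1) for record in output.split('\0') if record)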

View File

@@ -12,7 +12,6 @@ def run_compact(
config_filename,
repository,
config,
hook_context,
local_borg_version,
compact_arguments,
global_arguments,
@@ -28,18 +27,8 @@
):
return
borgmatic.hooks.command.execute_hook(
config.get('before_compact'),
config.get('umask'),
config_filename,
'pre-compact',
global_arguments.dry_run,
**hook_context,
)
if borgmatic.borg.feature.available(borgmatic.borg.feature.Feature.COMPACT, local_borg_version):
logger.info(
f'{repository.get("label", repository["path"])}: Compacting segments{dry_run_label}'
)
logger.info(f'Compacting segments{dry_run_label}')
borgmatic.borg.compact.compact_segments(
global_arguments.dry_run,
repository['path'],
@@ -53,14 +42,4 @@
threshold=compact_arguments.threshold,
)
else: # pragma: nocover
logger.info(
f'{repository.get("label", repository["path"])}: Skipping compact (only available/needed in Borg 1.2+)'
)
borgmatic.hooks.command.execute_hook(
config.get('after_compact'),
config.get('umask'),
config_filename,
'post-compact',
global_arguments.dry_run,
**hook_context,
)
logger.info('Skipping compact (only available/needed in Borg 1.2+)')

View File

@@ -45,7 +45,6 @@ def get_config_paths(archive_name, bootstrap_arguments, global_arguments, local_
# still want to support reading the manifest from previously created archives as well.
with borgmatic.config.paths.Runtime_directory(
{'user_runtime_directory': bootstrap_arguments.user_runtime_directory},
bootstrap_arguments.repository,
) as borgmatic_runtime_directory:
for base_directory in (
'borgmatic',

View File

@@ -15,7 +15,7 @@ import borgmatic.hooks.dispatch
logger = logging.getLogger(__name__)
def parse_pattern(pattern_line):
def parse_pattern(pattern_line, default_style=borgmatic.borg.pattern.Pattern_style.NONE):
'''
Given a Borg pattern as a string, parse it into a borgmatic.borg.pattern.Pattern instance and
return it.
@@ -23,18 +23,20 @@ def parse_pattern(pattern_line):
try:
(pattern_type, remainder) = pattern_line.split(' ', maxsplit=1)
except ValueError:
raise ValueError('Invalid pattern:', pattern_line)
raise ValueError(f'Invalid pattern: {pattern_line}')
try:
(pattern_style, path) = remainder.split(':', maxsplit=1)
(parsed_pattern_style, path) = remainder.split(':', maxsplit=1)
pattern_style = borgmatic.borg.pattern.Pattern_style(parsed_pattern_style)
except ValueError:
pattern_style = ''
pattern_style = default_style
path = remainder
return borgmatic.borg.pattern.Pattern(
path,
borgmatic.borg.pattern.Pattern_type(pattern_type),
borgmatic.borg.pattern.Pattern_style(pattern_style),
source=borgmatic.borg.pattern.Pattern_source.CONFIG,
)
@@ -50,18 +52,20 @@ def collect_patterns(config):
try:
return (
tuple(
borgmatic.borg.pattern.Pattern(source_directory)
borgmatic.borg.pattern.Pattern(
source_directory, source=borgmatic.borg.pattern.Pattern_source.CONFIG
)
for source_directory in config.get('source_directories', ())
)
+ tuple(
parse_pattern(pattern_line.strip())
for pattern_line in config.get('patterns', ())
if not pattern_line.lstrip().startswith('#')
if pattern_line.strip()
)
+ tuple(
borgmatic.borg.pattern.Pattern(
exclude_line.strip(),
borgmatic.borg.pattern.Pattern_type.EXCLUDE,
parse_pattern(
f'{borgmatic.borg.pattern.Pattern_type.NO_RECURSE.value} {exclude_line.strip()}',
borgmatic.borg.pattern.Pattern_style.FNMATCH,
)
for exclude_line in config.get('exclude_patterns', ())
@@ -71,22 +75,23 @@
for filename in config.get('patterns_from', ())
for pattern_line in open(filename).readlines()
if not pattern_line.lstrip().startswith('#')
if pattern_line.strip()
)
+ tuple(
borgmatic.borg.pattern.Pattern(
exclude_line.strip(),
borgmatic.borg.pattern.Pattern_type.EXCLUDE,
parse_pattern(
f'{borgmatic.borg.pattern.Pattern_type.NO_RECURSE.value} {exclude_line.strip()}',
borgmatic.borg.pattern.Pattern_style.FNMATCH,
)
for filename in config.get('excludes_from', ())
for filename in config.get('exclude_from', ())
for exclude_line in open(filename).readlines()
if not exclude_line.lstrip().startswith('#')
if exclude_line.strip()
)
)
except (FileNotFoundError, OSError) as error:
logger.debug(error)
raise ValueError(f'Cannot read patterns_from/excludes_from file: {error.filename}')
raise ValueError(f'Cannot read patterns_from/exclude_from file: {error.filename}')
def expand_directory(directory, working_directory):
@@ -125,8 +130,11 @@ def expand_directory(directory, working_directory):
def expand_patterns(patterns, working_directory=None, skip_paths=None):
'''
Given a sequence of borgmatic.borg.pattern.Pattern instances and an optional working directory,
expand tildes and globs in each root pattern. Return all the resulting patterns (not just the
root patterns) as a tuple.
expand tildes and globs in each root pattern and expand just tildes in each non-root pattern.
The idea is that non-root patterns may be regular expressions or other pattern styles containing
"*" that borgmatic should not expand as a shell glob.
Return all the resulting patterns as a tuple.
If a set of paths are given to skip, then don't expand any patterns matching them.
'''
@@ -142,12 +150,21 @@ def expand_patterns(patterns, working_directory=None, skip_paths=None):
pattern.type,
pattern.style,
pattern.device,
pattern.source,
)
for expanded_path in expand_directory(pattern.path, working_directory)
)
if pattern.type == borgmatic.borg.pattern.Pattern_type.ROOT
and pattern.path not in (skip_paths or ())
else (pattern,)
else (
borgmatic.borg.pattern.Pattern(
os.path.expanduser(pattern.path),
pattern.type,
pattern.style,
pattern.device,
pattern.source,
),
)
)
for pattern in patterns
)
@ -176,6 +193,7 @@ def device_map_patterns(patterns, working_directory=None):
and os.path.exists(full_path)
else None
),
source=pattern.source,
)
for pattern in patterns
for full_path in (os.path.join(working_directory or '', pattern.path),)
@ -254,7 +272,6 @@ def run_create(
repository,
config,
config_paths,
hook_context,
local_borg_version,
create_arguments,
global_arguments,
@ -272,26 +289,13 @@ def run_create(
):
return
borgmatic.hooks.command.execute_hook(
config.get('before_backup'),
config.get('umask'),
config_filename,
'pre-backup',
global_arguments.dry_run,
**hook_context,
)
log_prefix = repository.get('label', repository['path'])
logger.info(f'{log_prefix}: Creating archive{dry_run_label}')
logger.info(f'Creating archive{dry_run_label}')
working_directory = borgmatic.config.paths.get_working_directory(config)
with borgmatic.config.paths.Runtime_directory(
config, log_prefix
) as borgmatic_runtime_directory:
with borgmatic.config.paths.Runtime_directory(config) as borgmatic_runtime_directory:
borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
'remove_data_source_dumps',
config,
repository['path'],
borgmatic.hooks.dispatch.Hook_type.DATA_SOURCE,
borgmatic_runtime_directory,
global_arguments.dry_run,
@ -300,7 +304,6 @@ def run_create(
active_dumps = borgmatic.hooks.dispatch.call_hooks(
'dump_data_sources',
config,
repository['path'],
borgmatic.hooks.dispatch.Hook_type.DATA_SOURCE,
config_paths,
borgmatic_runtime_directory,
@ -337,17 +340,7 @@ def run_create(
borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
'remove_data_source_dumps',
config,
config_filename,
borgmatic.hooks.dispatch.Hook_type.DATA_SOURCE,
borgmatic_runtime_directory,
global_arguments.dry_run,
)
borgmatic.hooks.command.execute_hook(
config.get('after_backup'),
config.get('umask'),
config_filename,
'post-backup',
global_arguments.dry_run,
**hook_context,
)

View File

@ -23,7 +23,7 @@ def run_delete(
if delete_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, delete_arguments.repository
):
logger.answer(f'{repository.get("label", repository["path"])}: Deleting archives')
logger.answer('Deleting archives')
archive_name = (
borgmatic.borg.repo_list.resolve_archive_name(

View File

@ -21,7 +21,7 @@ def run_export_key(
if export_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, export_arguments.repository
):
logger.info(f'{repository.get("label", repository["path"])}: Exporting repository key')
logger.info('Exporting repository key')
borgmatic.borg.export_key.export_key(
repository['path'],
config,

View File

@ -22,9 +22,7 @@ def run_export_tar(
if export_tar_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, export_tar_arguments.repository
):
logger.info(
f'{repository["path"]}: Exporting archive {export_tar_arguments.archive} as tar file'
)
logger.info(f'Exporting archive {export_tar_arguments.archive} as tar file')
borgmatic.borg.export_tar.export_tar_archive(
global_arguments.dry_run,
repository['path'],

View File

@ -12,7 +12,6 @@ def run_extract(
config_filename,
repository,
config,
hook_context,
local_borg_version,
extract_arguments,
global_arguments,
@ -22,20 +21,10 @@ def run_extract(
'''
Run the "extract" action for the given repository.
'''
borgmatic.hooks.command.execute_hook(
config.get('before_extract'),
config.get('umask'),
config_filename,
'pre-extract',
global_arguments.dry_run,
**hook_context,
)
if extract_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, extract_arguments.repository
):
logger.info(
f'{repository.get("label", repository["path"])}: Extracting archive {extract_arguments.archive}'
)
logger.info(f'Extracting archive {extract_arguments.archive}')
borgmatic.borg.extract.extract_archive(
global_arguments.dry_run,
repository['path'],
@ -58,11 +47,3 @@ def run_extract(
strip_components=extract_arguments.strip_components,
progress=extract_arguments.progress,
)
borgmatic.hooks.command.execute_hook(
config.get('after_extract'),
config.get('umask'),
config_filename,
'post-extract',
global_arguments.dry_run,
**hook_context,
)

View File

@ -0,0 +1,33 @@
import logging
import borgmatic.borg.import_key
import borgmatic.config.validate
logger = logging.getLogger(__name__)
def run_import_key(
repository,
config,
local_borg_version,
import_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "key import" action for the given repository.
'''
if import_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, import_arguments.repository
):
logger.info('Importing repository key')
borgmatic.borg.import_key.import_key(
repository['path'],
config,
local_borg_version,
import_arguments,
global_arguments,
local_path=local_path,
remote_path=remote_path,
)

View File

@ -27,9 +27,7 @@ def run_info(
repository, info_arguments.repository
):
if not info_arguments.json:
logger.answer(
f'{repository.get("label", repository["path"])}: Displaying archive summary information'
)
logger.answer('Displaying archive summary information')
archive_name = borgmatic.borg.repo_list.resolve_archive_name(
repository['path'],
info_arguments.archive,

View File

@ -27,9 +27,9 @@ def run_list(
):
if not list_arguments.json:
if list_arguments.find_paths: # pragma: no cover
logger.answer(f'{repository.get("label", repository["path"])}: Searching archives')
logger.answer('Searching archives')
elif not list_arguments.archive: # pragma: no cover
logger.answer(f'{repository.get("label", repository["path"])}: Listing archives')
logger.answer('Listing archives')
archive_name = borgmatic.borg.repo_list.resolve_archive_name(
repository['path'],

View File

@ -23,11 +23,9 @@ def run_mount(
repository, mount_arguments.repository
):
if mount_arguments.archive:
logger.info(
f'{repository.get("label", repository["path"])}: Mounting archive {mount_arguments.archive}'
)
logger.info(f'Mounting archive {mount_arguments.archive}')
else: # pragma: nocover
logger.info(f'{repository.get("label", repository["path"])}: Mounting repository')
logger.info('Mounting repository')
borgmatic.borg.mount.mount_archive(
repository['path'],

View File

@ -11,7 +11,6 @@ def run_prune(
config_filename,
repository,
config,
hook_context,
local_borg_version,
prune_arguments,
global_arguments,
@ -27,15 +26,7 @@ def run_prune(
):
return
borgmatic.hooks.command.execute_hook(
config.get('before_prune'),
config.get('umask'),
config_filename,
'pre-prune',
global_arguments.dry_run,
**hook_context,
)
logger.info(f'{repository.get("label", repository["path"])}: Pruning archives{dry_run_label}')
logger.info(f'Pruning archives{dry_run_label}')
borgmatic.borg.prune.prune_archives(
global_arguments.dry_run,
repository['path'],
@ -46,11 +37,3 @@ def run_prune(
local_path=local_path,
remote_path=remote_path,
)
borgmatic.hooks.command.execute_hook(
config.get('after_prune'),
config.get('umask'),
config_filename,
'post-prune',
global_arguments.dry_run,
**hook_context,
)

View File

@ -23,7 +23,7 @@ def run_repo_create(
):
return
logger.info(f'{repository.get("label", repository["path"])}: Creating repository')
logger.info('Creating repository')
borgmatic.borg.repo_create.create_repository(
global_arguments.dry_run,
repository['path'],

View File

@ -21,8 +21,7 @@ def run_repo_delete(
repository, repo_delete_arguments.repository
):
logger.answer(
f'{repository.get("label", repository["path"])}: Deleting repository'
+ (' cache' if repo_delete_arguments.cache_only else '')
'Deleting repository' + (' cache' if repo_delete_arguments.cache_only else '')
)
borgmatic.borg.repo_delete.delete_repository(

View File

@ -25,9 +25,7 @@ def run_repo_info(
repository, repo_info_arguments.repository
):
if not repo_info_arguments.json:
logger.answer(
f'{repository.get("label", repository["path"])}: Displaying repository summary information'
)
logger.answer('Displaying repository summary information')
json_output = borgmatic.borg.repo_info.display_repository_info(
repository['path'],

View File

@ -25,7 +25,7 @@ def run_repo_list(
repository, repo_list_arguments.repository
):
if not repo_list_arguments.json:
logger.answer(f'{repository.get("label", repository["path"])}: Listing repository')
logger.answer('Listing repository')
json_output = borgmatic.borg.repo_list.list_repository(
repository['path'],

View File

@ -57,7 +57,7 @@ def render_dump_metadata(dump):
Given a Dump instance, make a display string describing it for use in log messages.
'''
name = 'unspecified' if dump.data_source_name is UNSPECIFIED else dump.data_source_name
hostname = dump.hostname or 'localhost'
hostname = dump.hostname or UNSPECIFIED
port = None if dump.port is UNSPECIFIED else dump.port
if port:
@ -71,10 +71,10 @@ def render_dump_metadata(dump):
return metadata
def get_configured_data_source(config, restore_dump, log_prefix):
def get_configured_data_source(config, restore_dump):
'''
Search in the given configuration dict for dumps corresponding to the given dump to restore. If
there are multiple matches, error. Log using the given log prefix.
there are multiple matches, error.
Return the found data source as a data source configuration dict or None if not found.
'''
@ -91,7 +91,6 @@ def get_configured_data_source(config, restore_dump, log_prefix):
borgmatic.hooks.dispatch.call_hook(
function_name='get_default_port',
config=config,
log_prefix=log_prefix,
hook_name=hook_name,
),
)
@ -173,14 +172,11 @@ def restore_single_dump(
Dump(hook_name, data_source['name'], data_source.get('hostname'), data_source.get('port'))
)
logger.info(
f'{repository.get("label", repository["path"])}: Restoring data source {dump_metadata}'
)
logger.info(f'Restoring data source {dump_metadata}')
dump_patterns = borgmatic.hooks.dispatch.call_hooks(
'make_data_source_dump_patterns',
config,
repository['path'],
borgmatic.hooks.dispatch.Hook_type.DATA_SOURCE,
borgmatic_runtime_directory,
data_source['name'],
@ -227,7 +223,6 @@ def restore_single_dump(
borgmatic.hooks.dispatch.call_hook(
function_name='restore_data_source_dump',
config=config,
log_prefix=repository['path'],
hook_name=hook_name,
data_source=data_source,
dry_run=global_arguments.dry_run,
@ -319,7 +314,7 @@ def collect_dumps_from_archive(
break
else:
logger.warning(
f'{repository}: Ignoring invalid data source dump path "{dump_path}" in archive {archive}'
f'Ignoring invalid data source dump path "{dump_path}" in archive {archive}'
)
return dumps_from_archive
@ -348,12 +343,15 @@ def get_dumps_to_restore(restore_arguments, dumps_from_archive):
else UNSPECIFIED
),
data_source_name=name,
hostname=restore_arguments.original_hostname or 'localhost',
hostname=restore_arguments.original_hostname or UNSPECIFIED,
port=restore_arguments.original_port,
)
for name in restore_arguments.data_sources
for name in restore_arguments.data_sources or (UNSPECIFIED,)
}
if restore_arguments.data_sources
if restore_arguments.hook
or restore_arguments.data_sources
or restore_arguments.original_hostname
or restore_arguments.original_port
else {
Dump(
hook_name=UNSPECIFIED,
@ -444,16 +442,12 @@ def run_restore(
):
return
log_prefix = repository.get('label', repository['path'])
logger.info(f'{log_prefix}: Restoring data sources from archive {restore_arguments.archive}')
logger.info(f'Restoring data sources from archive {restore_arguments.archive}')
with borgmatic.config.paths.Runtime_directory(
config, log_prefix
) as borgmatic_runtime_directory:
with borgmatic.config.paths.Runtime_directory(config) as borgmatic_runtime_directory:
borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
'remove_data_source_dumps',
config,
repository['path'],
borgmatic.hooks.dispatch.Hook_type.DATA_SOURCE,
borgmatic_runtime_directory,
global_arguments.dry_run,
@ -494,7 +488,6 @@ def run_restore(
found_data_source = get_configured_data_source(
config,
restore_dump,
log_prefix=repository['path'],
)
# For a dump that wasn't found via an exact match in the configuration, try to fallback
@ -503,7 +496,6 @@ def run_restore(
found_data_source = get_configured_data_source(
config,
Dump(restore_dump.hook_name, 'all', restore_dump.hostname, restore_dump.port),
log_prefix=repository['path'],
)
if not found_data_source:
@ -531,7 +523,6 @@ def run_restore(
borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
'remove_data_source_dumps',
config,
repository['path'],
borgmatic.hooks.dispatch.Hook_type.DATA_SOURCE,
borgmatic_runtime_directory,
global_arguments.dry_run,

View File

@ -17,9 +17,7 @@ def run_transfer(
'''
Run the "transfer" action for the given repository.
'''
logger.info(
f'{repository.get("label", repository["path"])}: Transferring archives to repository'
)
logger.info('Transferring archives to repository')
borgmatic.borg.transfer.transfer_archives(
global_arguments.dry_run,
repository['path'],

View File

@ -61,7 +61,7 @@ def run_arbitrary_borg(
tuple(shlex.quote(part) for part in full_command),
output_file=DO_NOT_CAPTURE,
shell=True,
extra_environment=dict(
environment=dict(
(environment.make_environment(config) or {}),
**{
'BORG_REPO': repository_path,

View File

@ -34,10 +34,9 @@ def break_lock(
+ flags.make_repository_flags(repository_path, local_borg_version)
)
borg_environment = environment.make_environment(config)
execute_command(
full_command,
extra_environment=borg_environment,
environment=environment.make_environment(config),
working_directory=borgmatic.config.paths.get_working_directory(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),

View File

@ -41,7 +41,7 @@ def change_passphrase(
)
if global_arguments.dry_run:
logger.info(f'{repository_path}: Skipping change password (dry run)')
logger.info('Skipping change password (dry run)')
return
# If the original passphrase is set programmatically, then Borg won't prompt for a new one! So
@ -56,7 +56,7 @@ def change_passphrase(
full_command,
output_file=borgmatic.execute.DO_NOT_CAPTURE,
output_log_level=logging.ANSWER,
extra_environment=environment.make_environment(config_without_passphrase),
environment=environment.make_environment(config_without_passphrase),
working_directory=borgmatic.config.paths.get_working_directory(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),

View File

@ -64,15 +64,11 @@ def make_check_name_flags(checks, archive_filter_flags):
('--repository-only',)
However, if both "repository" and "archives" are in checks, then omit them from the returned
flags because Borg does both checks by default. If "data" is in checks, that implies "archives".
However, if both "repository" and "archives" are in checks, then omit the "only" flags from the
returned flags because Borg does both checks by default. Note that a "data" check only works
along with an "archives" check.
'''
if 'data' in checks:
data_flags = ('--verify-data',)
checks.update({'archives'})
else:
data_flags = ()
data_flags = ('--verify-data',) if 'data' in checks else ()
common_flags = (archive_filter_flags if 'archives' in checks else ()) + data_flags
if {'repository', 'archives'}.issubset(checks):
@ -142,51 +138,51 @@ def check_archives(
except StopIteration:
repository_check_config = {}
if check_arguments.max_duration and 'archives' in checks:
raise ValueError('The archives check cannot run when the --max-duration flag is used')
if repository_check_config.get('max_duration') and 'archives' in checks:
raise ValueError(
'The archives check cannot run when the repository check has the max_duration option set'
)
max_duration = check_arguments.max_duration or repository_check_config.get('max_duration')
umask = config.get('umask')
borg_environment = environment.make_environment(config)
borg_exit_codes = config.get('borg_exit_codes')
full_command = (
(local_path, 'check')
+ (('--repair',) if check_arguments.repair else ())
+ (('--max-duration', str(max_duration)) if max_duration else ())
+ make_check_name_flags(checks, archive_filter_flags)
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--umask', str(umask)) if umask else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ verbosity_flags
+ (('--progress',) if check_arguments.progress else ())
+ (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
+ flags.make_repository_flags(repository_path, local_borg_version)
)
working_directory = borgmatic.config.paths.get_working_directory(config)
# The Borg repair option triggers an interactive prompt, which won't work when output is
# captured. And progress messes with the terminal directly.
if check_arguments.repair or check_arguments.progress:
if 'data' in checks:
checks.add('archives')
grouped_checks = (checks,)
# If max_duration is set, then archives and repository checks need to be run separately, as Borg
# doesn't support --max-duration along with an archives check.
if max_duration and 'archives' in checks and 'repository' in checks:
checks.remove('repository')
grouped_checks = (checks, {'repository'})
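# For example (illustrative values): with checks == {'repository', 'archives'}
# and max_duration set, the grouping above yields two separate Borg invocations,
#     grouped_checks == ({'archives'}, {'repository'})
# and only the {'repository'} subset receives --max-duration below.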
for checks_subset in grouped_checks:
full_command = (
(local_path, 'check')
+ (('--repair',) if check_arguments.repair else ())
+ (
('--max-duration', str(max_duration))
if max_duration and 'repository' in checks_subset
else ()
)
+ make_check_name_flags(checks_subset, archive_filter_flags)
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--umask', str(umask)) if umask else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ verbosity_flags
+ (('--progress',) if check_arguments.progress else ())
+ (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
+ flags.make_repository_flags(repository_path, local_borg_version)
)
execute_command(
full_command,
output_file=DO_NOT_CAPTURE,
extra_environment=borg_environment,
working_directory=working_directory,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
)
else:
execute_command(
full_command,
extra_environment=borg_environment,
# The Borg repair option triggers an interactive prompt, which won't work when output is
# captured. And progress messes with the terminal directly.
output_file=(
DO_NOT_CAPTURE if check_arguments.repair or check_arguments.progress else None
),
environment=environment.make_environment(config),
working_directory=working_directory,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,

View File

@ -43,13 +43,13 @@ def compact_segments(
)
if dry_run:
logging.info(f'{repository_path}: Skipping compact (dry run)')
logging.info('Skipping compact (dry run)')
return
execute_command(
full_command,
output_log_level=logging.INFO,
extra_environment=environment.make_environment(config),
environment=environment.make_environment(config),
working_directory=borgmatic.config.paths.get_working_directory(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),

View File

@ -20,14 +20,12 @@ from borgmatic.execute import (
logger = logging.getLogger(__name__)
def write_patterns_file(patterns, borgmatic_runtime_directory, log_prefix, patterns_file=None):
def write_patterns_file(patterns, borgmatic_runtime_directory, patterns_file=None):
'''
Given a sequence of patterns as borgmatic.borg.pattern.Pattern instances, write them to a named
temporary file in the given borgmatic runtime directory and return the file object so it can
continue to exist on disk as long as the caller needs it.
Use the given log prefix in any logging.
If an optional open pattern file is given, append to it instead of making a new temporary file.
Return None if no patterns are provided.
'''
@ -36,14 +34,16 @@ def write_patterns_file(patterns, borgmatic_runtime_directory, log_prefix, patte
if patterns_file is None:
patterns_file = tempfile.NamedTemporaryFile('w', dir=borgmatic_runtime_directory)
operation_name = 'Writing'
else:
patterns_file.write('\n')
operation_name = 'Appending'
patterns_output = '\n'.join(
f'{pattern.type.value} {pattern.style.value}{":" if pattern.style.value else ""}{pattern.path}'
for pattern in patterns
)
logger.debug(f'{log_prefix}: Writing patterns to {patterns_file.name}:\n{patterns_output}')
logger.debug(f'{operation_name} patterns to {patterns_file.name}:\n{patterns_output}')
patterns_file.write(patterns_output)
patterns_file.flush()
@ -122,52 +122,63 @@ def collect_special_file_paths(
config,
local_path,
working_directory,
borg_environment,
borgmatic_runtime_directory,
):
'''
Given a dry-run flag, a Borg create command as a tuple, a configuration dict, a local Borg path,
a working directory, a dict of environment variables to pass to Borg, and the borgmatic runtime
directory, collect the paths for any special files (character devices, block devices, and named
pipes / FIFOs) that Borg would encounter during a create. These are all paths that could cause
Borg to hang if its --read-special flag is used.
a working directory, and the borgmatic runtime directory, collect the paths for any special
files (character devices, block devices, and named pipes / FIFOs) that Borg would encounter
during a create. These are all paths that could cause Borg to hang if its --read-special flag is
used.
Skip looking for special files in the given borgmatic runtime directory, as borgmatic creates
its own special files there for database dumps. And if the borgmatic runtime directory is
configured to be excluded from the files Borg backs up, error, because this means Borg won't be
able to consume any database dumps and therefore borgmatic will hang.
its own special files there for database dumps and we don't want those omitted.
Additionally, if the borgmatic runtime directory is not contained somewhere in the files Borg
plans to back up, that means the user must have excluded the runtime directory (e.g. via
"exclude_patterns" or similar). Therefore, raise, because this means Borg won't be able to
consume any database dumps and therefore borgmatic will hang when it tries to do so.
'''
# Omit "--exclude-nodump" from the Borg dry run command, because that flag causes Borg to open
# files including any named pipe we've created.
# files including any named pipe we've created. And omit "--filter" because that can break the
# paths output parsing below such that path lines no longer start with the expected "- ".
paths_output = execute_command_and_capture_output(
tuple(argument for argument in create_command if argument != '--exclude-nodump')
flags.omit_flag_and_value(flags.omit_flag(create_command, '--exclude-nodump'), '--filter')
+ ('--dry-run', '--list'),
capture_stderr=True,
working_directory=working_directory,
extra_environment=borg_environment,
environment=environment.make_environment(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),
)
# These are all the individual files that Borg is planning to back up as determined by the Borg
# create dry run above.
paths = tuple(
path_line.split(' ', 1)[1]
for path_line in paths_output.split('\n')
if path_line and (path_line.startswith('- ') or path_line.startswith('+ '))
)
skip_paths = {}
# These are the subset of those files that contain the borgmatic runtime directory.
paths_containing_runtime_directory = {}
if os.path.exists(borgmatic_runtime_directory):
skip_paths = {
paths_containing_runtime_directory = {
path for path in paths if any_parent_directories(path, (borgmatic_runtime_directory,))
}
if not skip_paths and not dry_run:
# If no paths to back up contain the runtime directory, it must've been excluded.
if not paths_containing_runtime_directory and not dry_run:
raise ValueError(
f'The runtime directory {os.path.normpath(borgmatic_runtime_directory)} overlaps with the configured excludes or patterns with excludes. Please ensure the runtime directory is not excluded.'
)
return tuple(
path for path in paths if special_file(path, working_directory) if path not in skip_paths
path
for path in paths
if special_file(path, working_directory)
if path not in paths_containing_runtime_directory
)
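For illustration, here's the "--dry-run --list" parsing above applied to some hypothetical output lines:

# Path lines in the Borg dry-run listing start with a two-character marker
# ("- " or "+ ", depending on Borg version); any other lines are ignored.
paths_output = '- /etc/passwd\n+ /var/run/db.fifo\nsome unrelated status line'
paths = tuple(
    path_line.split(' ', 1)[1]
    for path_line in paths_output.split('\n')
    if path_line.startswith('- ') or path_line.startswith('+ ')
)
print(paths)  # ('/etc/passwd', '/var/run/db.fifo')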
@ -217,9 +228,7 @@ def make_base_create_command(
if config.get('source_directories_must_exist', False):
check_all_root_patterns_exist(patterns)
patterns_file = write_patterns_file(
patterns, borgmatic_runtime_directory, log_prefix=repository_path
)
patterns_file = write_patterns_file(patterns, borgmatic_runtime_directory)
checkpoint_interval = config.get('checkpoint_interval', None)
checkpoint_volume = config.get('checkpoint_volume', None)
chunker_params = config.get('chunker_params', None)
@ -299,19 +308,17 @@ def make_base_create_command(
# cause Borg to hang. But skip this if the user has explicitly set the "read_special" to True.
if stream_processes and not config.get('read_special'):
logger.warning(
f'{repository_path}: Ignoring configured "read_special" value of false, as true is needed for database hooks.'
'Ignoring configured "read_special" value of false, as true is needed for database hooks.'
)
borg_environment = environment.make_environment(config)
working_directory = borgmatic.config.paths.get_working_directory(config)
logger.debug(f'{repository_path}: Collecting special file paths')
logger.debug('Collecting special file paths')
special_file_paths = collect_special_file_paths(
dry_run,
create_flags + create_positional_arguments,
config,
local_path,
working_directory,
borg_environment,
borgmatic_runtime_directory=borgmatic_runtime_directory,
)
@ -322,19 +329,19 @@ def make_base_create_command(
placeholder=' ...',
)
logger.warning(
f'{repository_path}: Excluding special files to prevent Borg from hanging: {truncated_special_file_paths}'
f'Excluding special files to prevent Borg from hanging: {truncated_special_file_paths}'
)
patterns_file = write_patterns_file(
tuple(
borgmatic.borg.pattern.Pattern(
special_file_path,
borgmatic.borg.pattern.Pattern_type.EXCLUDE,
borgmatic.borg.pattern.Pattern_type.NO_RECURSE,
borgmatic.borg.pattern.Pattern_style.FNMATCH,
source=borgmatic.borg.pattern.Pattern_source.INTERNAL,
)
for special_file_path in special_file_paths
),
borgmatic_runtime_directory,
log_prefix=repository_path,
patterns_file=patterns_file,
)
@ -399,8 +406,6 @@ def create_archive(
# the terminal directly.
output_file = DO_NOT_CAPTURE if progress else None
borg_environment = environment.make_environment(config)
create_flags += (
(('--info',) if logger.getEffectiveLevel() == logging.INFO and not json else ())
+ (('--stats',) if stats and not json and not dry_run else ())
@ -417,7 +422,7 @@ def create_archive(
output_log_level,
output_file,
working_directory=working_directory,
extra_environment=borg_environment,
environment=environment.make_environment(config),
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
)
@ -425,7 +430,7 @@ def create_archive(
return execute_command_and_capture_output(
create_flags + create_positional_arguments,
working_directory=working_directory,
extra_environment=borg_environment,
environment=environment.make_environment(config),
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
)
@ -435,7 +440,7 @@ def create_archive(
output_log_level,
output_file,
working_directory=working_directory,
extra_environment=borg_environment,
environment=environment.make_environment(config),
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
)

View File

@ -128,7 +128,7 @@ def delete_archives(
borgmatic.execute.execute_command(
command,
output_log_level=logging.ANSWER,
extra_environment=borgmatic.borg.environment.make_environment(config),
environment=borgmatic.borg.environment.make_environment(config),
working_directory=borgmatic.config.paths.get_working_directory(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),

View File

@ -1,5 +1,8 @@
import os
import borgmatic.borg.passcommand
import borgmatic.hooks.credential.parse
OPTION_TO_ENVIRONMENT_VARIABLE = {
'borg_base_directory': 'BORG_BASE_DIR',
'borg_config_directory': 'BORG_CONFIG_DIR',
@ -7,8 +10,6 @@ OPTION_TO_ENVIRONMENT_VARIABLE = {
'borg_files_cache_ttl': 'BORG_FILES_CACHE_TTL',
'borg_security_directory': 'BORG_SECURITY_DIR',
'borg_keys_directory': 'BORG_KEYS_DIR',
'encryption_passcommand': 'BORG_PASSCOMMAND',
'encryption_passphrase': 'BORG_PASSPHRASE',
'ssh_command': 'BORG_RSH',
'temporary_directory': 'TMPDIR',
}
@ -25,17 +26,59 @@ DEFAULT_BOOL_OPTION_TO_UPPERCASE_ENVIRONMENT_VARIABLE = {
def make_environment(config):
'''
Given a borgmatic configuration dict, return its options converted to a Borg environment
variable dict.
Given a borgmatic configuration dict, convert it to a Borg environment variable dict, merge it
with a copy of the current environment variables, and return the result.
Do not reuse this environment across multiple Borg invocations, because it can include
references to resources like anonymous pipes for passphrases—which can only be consumed once.
Here's how native Borg precedence works for a few of the environment variables:
1. BORG_PASSPHRASE, if set, is used first.
2. BORG_PASSCOMMAND is used only if BORG_PASSPHRASE isn't set.
3. BORG_PASSPHRASE_FD is used only if neither of the above are set.
In borgmatic, we want to simulate this precedence order, but there are some additional
complications. First, values can come from either configuration or from environment variables
set outside borgmatic; configured options should take precedence. Second, when borgmatic gets a
passphrase—directly from configuration or indirectly via a credential hook or a passcommand—we
want to pass that passphrase to Borg via an anonymous pipe (+ BORG_PASSPHRASE_FD), since that's
more secure than using an environment variable (BORG_PASSPHRASE).
'''
environment = {}
environment = dict(os.environ)
for option_name, environment_variable_name in OPTION_TO_ENVIRONMENT_VARIABLE.items():
value = config.get(option_name)
if value:
if value is not None:
environment[environment_variable_name] = str(value)
if 'encryption_passphrase' in config:
environment.pop('BORG_PASSPHRASE', None)
environment.pop('BORG_PASSCOMMAND', None)
if 'encryption_passcommand' in config:
environment.pop('BORG_PASSCOMMAND', None)
passphrase = borgmatic.hooks.credential.parse.resolve_credential(
config.get('encryption_passphrase'), config
)
if passphrase is None:
passphrase = borgmatic.borg.passcommand.get_passphrase_from_passcommand(config)
# If there's a passphrase (from configuration, from a configured credential, or from a
# configured passcommand), send it to Borg via an anonymous pipe.
if passphrase is not None:
read_file_descriptor, write_file_descriptor = os.pipe()
os.write(write_file_descriptor, passphrase.encode('utf-8'))
os.close(write_file_descriptor)
# This plus subprocess.Popen(..., close_fds=False) in execute.py is necessary for the Borg
# child process to inherit the file descriptor.
os.set_inheritable(read_file_descriptor, True)
environment['BORG_PASSPHRASE_FD'] = str(read_file_descriptor)
for (
option_name,
environment_variable_name,
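As a self-contained sketch of that anonymous-pipe handoff (POSIX only; a Python one-liner stands in for Borg reading the inherited file descriptor):

import os
import subprocess

passphrase = 'hunter2'  # Hypothetical passphrase, for demonstration only.
read_fd, write_fd = os.pipe()
os.write(write_fd, passphrase.encode('utf-8'))
os.close(write_fd)

# File descriptors are non-inheritable by default in Python 3; setting the
# flag plus close_fds=False below lets the child inherit the read end.
os.set_inheritable(read_fd, True)

result = subprocess.run(
    ['python3', '-c', 'import os; print(os.read(int(os.environ["FD"]), 1024).decode())'],
    env={**os.environ, 'FD': str(read_fd)},
    close_fds=False,
    capture_output=True,
)
print(result.stdout)  # b'hunter2\n'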

View File

@ -60,14 +60,14 @@ def export_key(
)
if global_arguments.dry_run:
logger.info(f'{repository_path}: Skipping key export (dry run)')
logger.info('Skipping key export (dry run)')
return
execute_command(
full_command,
output_file=output_file,
output_log_level=logging.ANSWER,
extra_environment=environment.make_environment(config),
environment=environment.make_environment(config),
working_directory=working_directory,
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),

View File

@ -63,14 +63,14 @@ def export_tar_archive(
output_log_level = logging.INFO
if dry_run:
logging.info(f'{repository_path}: Skipping export to tar file (dry run)')
logging.info('Skipping export to tar file (dry run)')
return
execute_command(
full_command,
output_file=DO_NOT_CAPTURE if destination_path == '-' else None,
output_log_level=output_log_level,
extra_environment=environment.make_environment(config),
environment=environment.make_environment(config),
working_directory=borgmatic.config.paths.get_working_directory(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),

View File

@ -44,7 +44,6 @@ def extract_last_archive_dry_run(
return
list_flag = ('--list',) if logger.isEnabledFor(logging.DEBUG) else ()
borg_environment = environment.make_environment(config)
full_extract_command = (
(local_path, 'extract', '--dry-run')
+ (('--remote-path', remote_path) if remote_path else ())
@ -59,7 +58,7 @@ def extract_last_archive_dry_run(
execute_command(
full_extract_command,
extra_environment=borg_environment,
environment=environment.make_environment(config),
working_directory=borgmatic.config.paths.get_working_directory(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),
@ -135,16 +134,13 @@ def extract_archive(
# Make the repository path absolute so the destination directory used below via changing
# the working directory doesn't prevent Borg from finding the repo. But also apply the
# user's configured working directory (if any) to the repo path.
borgmatic.config.validate.normalize_repository_path(
os.path.join(working_directory or '', repository)
),
borgmatic.config.validate.normalize_repository_path(repository, working_directory),
archive,
local_borg_version,
)
+ (tuple(paths) if paths else ())
)
borg_environment = environment.make_environment(config)
borg_exit_codes = config.get('borg_exit_codes')
full_destination_path = (
os.path.join(working_directory or '', destination_path) if destination_path else None
@ -156,7 +152,7 @@ def extract_archive(
return execute_command(
full_command,
output_file=DO_NOT_CAPTURE,
extra_environment=borg_environment,
environment=environment.make_environment(config),
working_directory=full_destination_path,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
@ -168,7 +164,7 @@ def extract_archive(
full_command,
output_file=subprocess.PIPE,
run_to_completion=False,
extra_environment=borg_environment,
environment=environment.make_environment(config),
working_directory=full_destination_path,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
@ -178,7 +174,7 @@ def extract_archive(
# if the restore paths don't exist in the archive.
execute_command(
full_command,
extra_environment=borg_environment,
environment=environment.make_environment(config),
working_directory=full_destination_path,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,

View File

@ -17,6 +17,7 @@ class Feature(Enum):
MATCH_ARCHIVES = 11
EXCLUDED_FILES_MINUS = 12
ARCHIVE_SERIES = 13
NO_PRUNE_STATS = 14
FEATURE_TO_MINIMUM_BORG_VERSION = {
@ -33,6 +34,7 @@ FEATURE_TO_MINIMUM_BORG_VERSION = {
Feature.MATCH_ARCHIVES: parse('2.0.0b3'), # borg --match-archives
Feature.EXCLUDED_FILES_MINUS: parse('2.0.0b5'), # --list --filter uses "-" for excludes
Feature.ARCHIVE_SERIES: parse('2.0.0b11'), # identically named archives form a series
Feature.NO_PRUNE_STATS: parse('2.0.0b10'), # prune --stats is not available
}

View File

@ -156,3 +156,44 @@ def warn_for_aggressive_archive_flags(json_command, json_output):
logger.debug(f'Cannot parse JSON output from archive command: {error}')
except (TypeError, KeyError):
logger.debug('Cannot parse JSON output from archive command: No "archives" key found')
def omit_flag(arguments, flag):
'''
Given a sequence of Borg command-line arguments, return them with the given (valueless) flag
omitted. For instance, if the flag is "--flag" and arguments is:
('borg', 'create', '--flag', '--other-flag')
... then return:
('borg', 'create', '--other-flag')
'''
return tuple(argument for argument in arguments if argument != flag)
def omit_flag_and_value(arguments, flag):
'''
Given a sequence of Borg command-line arguments, return them with the given flag and its
corresponding value omitted. For instance, if the flag is "--flag" and arguments is:
('borg', 'create', '--flag', 'value', '--other-flag')
... or:
('borg', 'create', '--flag=value', '--other-flag')
... then return:
('borg', 'create', '--other-flag')
'''
# This works by zipping together a list of overlapping pairwise arguments. E.g., ('one', 'two',
# 'three', 'four') becomes ((None, 'one'), ('one', 'two'), ('two', 'three'), ('three', 'four')).
# This makes it easy to "look back" at the previous arguments so we can exclude both a flag and
# its value.
return tuple(
argument
for (previous_argument, argument) in zip((None,) + arguments, arguments)
if flag not in (previous_argument, argument)
if not argument.startswith(f'{flag}=')
)
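Example usage of these two helpers together, mirroring how collect_special_file_paths() strips flags from the create command:

arguments = ('borg', 'create', '--exclude-nodump', '--filter', 'AME', '--list')
print(omit_flag_and_value(omit_flag(arguments, '--exclude-nodump'), '--filter'))
# ('borg', 'create', '--list')

# The "--flag=value" spelling is handled too:
print(omit_flag_and_value(('borg', 'create', '--filter=AME'), '--filter'))
# ('borg', 'create')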

View File

@ -0,0 +1,70 @@
import logging
import os
import borgmatic.config.paths
import borgmatic.logger
from borgmatic.borg import environment, flags
from borgmatic.execute import DO_NOT_CAPTURE, execute_command
logger = logging.getLogger(__name__)
def import_key(
repository_path,
config,
local_borg_version,
import_arguments,
global_arguments,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, a configuration dict, the local Borg version, import
arguments, and optional local and remote Borg paths, import the repository key from the
path indicated in the import arguments.
If the path is empty or "-", then read the key from stdin.
Raise ValueError if the path is given and it does not exist.
'''
umask = config.get('umask', None)
lock_wait = config.get('lock_wait', None)
working_directory = borgmatic.config.paths.get_working_directory(config)
if import_arguments.path and import_arguments.path != '-':
if not os.path.exists(os.path.join(working_directory or '', import_arguments.path)):
raise ValueError(f'Path {import_arguments.path} does not exist. Aborting.')
input_file = None
else:
input_file = DO_NOT_CAPTURE
full_command = (
(local_path, 'key', 'import')
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--umask', str(umask)) if umask else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ flags.make_flags('paper', import_arguments.paper)
+ flags.make_repository_flags(
repository_path,
local_borg_version,
)
+ ((import_arguments.path,) if input_file is None else ())
)
if global_arguments.dry_run:
logger.info('Skipping key import (dry run)')
return
execute_command(
full_command,
input_file=input_file,
output_log_level=logging.INFO,
environment=environment.make_environment(config),
working_directory=working_directory,
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),
)
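A compact restatement of the stdin-versus-file choice above (illustrative only; DO_NOT_CAPTURE is borgmatic's sentinel for leaving the child's stdin attached to the terminal):

import os

def resolve_key_source(path, working_directory=None):
    # An empty path or "-" means the key gets read from stdin.
    if path and path != '-':
        if not os.path.exists(os.path.join(working_directory or '', path)):
            raise ValueError(f'Path {path} does not exist. Aborting.')
        return ('file', path)
    return ('stdin', None)

print(resolve_key_source('-'))          # ('stdin', None)
print(resolve_key_source(None))         # ('stdin', None)
print(resolve_key_source(os.devnull))   # ('file', '/dev/null') on POSIX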

View File

@ -102,7 +102,7 @@ def display_archives_info(
json_info = execute_command_and_capture_output(
json_command,
extra_environment=environment.make_environment(config),
environment=environment.make_environment(config),
working_directory=working_directory,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
@ -116,7 +116,7 @@ def display_archives_info(
execute_command(
main_command,
output_log_level=logging.ANSWER,
extra_environment=environment.make_environment(config),
environment=environment.make_environment(config),
working_directory=working_directory,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,

View File

@ -106,8 +106,6 @@ def capture_archive_listing(
format to use for the output, and local and remote Borg paths, capture the
output of listing that archive and return it as a list of file paths.
'''
borg_environment = environment.make_environment(config)
return tuple(
execute_command_and_capture_output(
make_list_command(
@ -120,19 +118,19 @@ def capture_archive_listing(
paths=[path for path in list_paths] if list_paths else None,
find_paths=None,
json=None,
format=path_format or '{path}{NL}', # noqa: FS003
format=path_format or '{path}{NUL}', # noqa: FS003
),
global_arguments,
local_path,
remote_path,
),
extra_environment=borg_environment,
environment=environment.make_environment(config),
working_directory=borgmatic.config.paths.get_working_directory(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),
)
.strip('\n')
.split('\n')
.strip('\0')
.split('\0')
)
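The motivation for switching from "{NL}" to "{NUL}": a NUL byte can never appear in a file path, so it's a safe delimiter even for paths that contain newlines. For instance:

# Splitting on "\0" keeps a newline-containing filename intact, where the old
# "\n"-delimited parsing would have split it into two bogus paths.
listing = 'etc/passwd\0var/log/weird\nname\0'
print(listing.strip('\0').split('\0'))  # ['etc/passwd', 'var/log/weird\nname']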
@ -194,7 +192,6 @@ def list_archive(
'The --json flag on the list action is not supported when using the --archive/--find flags.'
)
borg_environment = environment.make_environment(config)
borg_exit_codes = config.get('borg_exit_codes')
# If there are any paths to find (and there's not a single archive already selected), start by
@ -224,20 +221,20 @@ def list_archive(
local_path,
remote_path,
),
extra_environment=borg_environment,
environment=environment.make_environment(config),
working_directory=borgmatic.config.paths.get_working_directory(config),
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
)
.strip('\n')
.split('\n')
.splitlines()
)
else:
archive_lines = (list_arguments.archive,)
# For each archive listed by Borg, run list on the contents of that archive.
for archive in archive_lines:
logger.answer(f'{repository_path}: Listing archive {archive}')
logger.answer(f'Listing archive {archive}')
archive_arguments = copy.copy(list_arguments)
archive_arguments.archive = archive
@ -260,7 +257,7 @@ def list_archive(
execute_command(
main_command,
output_log_level=logging.ANSWER,
extra_environment=borg_environment,
environment=environment.make_environment(config),
working_directory=borgmatic.config.paths.get_working_directory(config),
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,

View File

@ -59,7 +59,6 @@ def mount_archive(
+ (tuple(mount_arguments.paths) if mount_arguments.paths else ())
)
borg_environment = environment.make_environment(config)
working_directory = borgmatic.config.paths.get_working_directory(config)
# Don't capture the output when foreground mode is used so that ctrl-C can work properly.
@ -67,7 +66,7 @@ def mount_archive(
execute_command(
full_command,
output_file=DO_NOT_CAPTURE,
extra_environment=borg_environment,
environment=environment.make_environment(config),
working_directory=working_directory,
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),
@ -76,7 +75,7 @@ def mount_archive(
execute_command(
full_command,
extra_environment=borg_environment,
environment=environment.make_environment(config),
working_directory=working_directory,
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),

View File

@ -0,0 +1,40 @@
import functools
import logging
import shlex
import borgmatic.config.paths
import borgmatic.execute
logger = logging.getLogger(__name__)
@functools.cache
def run_passcommand(passcommand, working_directory):
'''
Run the given passcommand using the given working directory and return the passphrase produced
by the command.
Cache the results so that the passcommand only needs to run—and potentially prompt the user—once
per borgmatic invocation.
'''
return borgmatic.execute.execute_command_and_capture_output(
shlex.split(passcommand),
working_directory=working_directory,
)
def get_passphrase_from_passcommand(config):
'''
Given the configuration dict, call the configured passcommand to produce and return an
encryption passphrase. In effect, we're doing an end-run around Borg by invoking its passcommand
ourselves. This allows us to pass the resulting passphrase to multiple different Borg
invocations without the user having to be prompted multiple times.
If no passcommand is configured, then return None.
'''
passcommand = config.get('encryption_passcommand')
if not passcommand:
return None
return run_passcommand(passcommand, borgmatic.config.paths.get_working_directory(config))
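For example, with a hypothetical passcommand, the functools.cache wrapper means the command executes only once even when several Borg invocations need the passphrase:

config = {'encryption_passcommand': 'echo swordfish'}

first = get_passphrase_from_passcommand(config)
second = get_passphrase_from_passcommand(config)

# Both calls return the same captured output, but the "echo" subprocess only
# ran once: run_passcommand() is memoized on (passcommand, working_directory).
assert first == second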

View File

@ -20,12 +20,31 @@ class Pattern_style(enum.Enum):
PATH_FULL_MATCH = 'pf'
class Pattern_source(enum.Enum):
'''
Where the pattern came from within borgmatic. This is important because certain use cases (like
filesystem snapshotting) only want to consider patterns that the user actually put in a
configuration file and not patterns from other sources.
'''
# The pattern is from a borgmatic configuration option, e.g. listed in "source_directories".
CONFIG = 'config'
# The pattern is generated internally within borgmatic, e.g. for special file excludes.
INTERNAL = 'internal'
# The pattern originates from within a borgmatic hook, e.g. a database hook that adds its dump
# directory.
HOOK = 'hook'
Pattern = collections.namedtuple(
'Pattern',
('path', 'type', 'style', 'device'),
('path', 'type', 'style', 'device', 'source'),
defaults=(
Pattern_type.ROOT,
Pattern_style.NONE,
None,
Pattern_source.HOOK,
),
)
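Example of the new defaults in action:

# A bare path becomes a root pattern with no style, no device, and a HOOK
# source, so hook code can construct patterns tersely; config-derived callers
# (like collect_patterns() above) pass source=Pattern_source.CONFIG explicitly.
pattern = Pattern('/var/lib/postgresql')
print(pattern.type, pattern.style, pattern.source)
# Pattern_type.ROOT Pattern_style.NONE Pattern_source.HOOK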

View File

@ -75,7 +75,13 @@ def prune_archives(
+ (('--umask', str(umask)) if umask else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (('--stats',) if prune_arguments.stats and not dry_run else ())
+ (
('--stats',)
if prune_arguments.stats
and not dry_run
and not feature.available(feature.Feature.NO_PRUNE_STATS, local_borg_version)
else ()
)
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ flags.make_flags_from_arguments(
prune_arguments,
@ -96,7 +102,7 @@ def prune_archives(
execute_command(
full_command,
output_log_level=output_log_level,
extra_environment=environment.make_environment(config),
environment=environment.make_environment(config),
working_directory=borgmatic.config.paths.get_working_directory(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),

View File

@ -57,7 +57,7 @@ def create_repository(
f'Requested encryption mode "{encryption_mode}" does not match existing repository encryption mode "{repository_encryption_mode}"'
)
logger.info(f'{repository_path}: Repository already exists. Skipping creation.')
logger.info('Repository already exists. Skipping creation.')
return
except subprocess.CalledProcessError as error:
if error.returncode not in REPO_INFO_REPOSITORY_NOT_FOUND_EXIT_CODES:
@ -91,14 +91,14 @@ def create_repository(
)
if dry_run:
logging.info(f'{repository_path}: Skipping repository creation (dry run)')
logging.info('Skipping repository creation (dry run)')
return
# Do not capture output here, so as to support interactive prompts.
execute_command(
repo_create_command,
output_file=DO_NOT_CAPTURE,
extra_environment=environment.make_environment(config),
environment=environment.make_environment(config),
working_directory=borgmatic.config.paths.get_working_directory(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),

View File

@ -88,7 +88,7 @@ def delete_repository(
if repo_delete_arguments.force or repo_delete_arguments.cache_only
else borgmatic.execute.DO_NOT_CAPTURE
),
extra_environment=borgmatic.borg.environment.make_environment(config),
environment=borgmatic.borg.environment.make_environment(config),
working_directory=borgmatic.config.paths.get_working_directory(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),

View File

@ -50,14 +50,13 @@ def display_repository_info(
+ flags.make_repository_flags(repository_path, local_borg_version)
)
extra_environment = environment.make_environment(config)
working_directory = borgmatic.config.paths.get_working_directory(config)
borg_exit_codes = config.get('borg_exit_codes')
if repo_info_arguments.json:
return execute_command_and_capture_output(
full_command,
extra_environment=extra_environment,
environment=environment.make_environment(config),
working_directory=working_directory,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
@ -66,7 +65,7 @@ def display_repository_info(
execute_command(
full_command,
output_log_level=logging.ANSWER,
extra_environment=extra_environment,
environment=environment.make_environment(config),
working_directory=working_directory,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,

View File

@ -49,7 +49,7 @@ def resolve_archive_name(
output = execute_command_and_capture_output(
full_command,
extra_environment=environment.make_environment(config),
environment=environment.make_environment(config),
working_directory=borgmatic.config.paths.get_working_directory(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),
@ -59,7 +59,7 @@ def resolve_archive_name(
except IndexError:
raise ValueError('No archives found in the repository')
logger.debug(f'{repository_path}: Latest archive is {latest_archive}')
logger.debug(f'Latest archive is {latest_archive}')
return latest_archive
@ -140,7 +140,6 @@ def list_repository(
return JSON output).
'''
borgmatic.logger.add_custom_log_levels()
borg_environment = environment.make_environment(config)
main_command = make_repo_list_command(
repository_path,
@ -165,7 +164,7 @@ def list_repository(
json_listing = execute_command_and_capture_output(
json_command,
extra_environment=borg_environment,
environment=environment.make_environment(config),
working_directory=working_directory,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
@ -179,7 +178,7 @@ def list_repository(
execute_command(
main_command,
output_log_level=logging.ANSWER,
extra_environment=borg_environment,
environment=environment.make_environment(config),
working_directory=working_directory,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,

View File

@ -57,7 +57,7 @@ def transfer_archives(
full_command,
output_log_level=logging.ANSWER,
output_file=DO_NOT_CAPTURE if transfer_arguments.progress else None,
extra_environment=environment.make_environment(config),
environment=environment.make_environment(config),
working_directory=borgmatic.config.paths.get_working_directory(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),

View File

@ -21,7 +21,7 @@ def local_borg_version(config, local_path='borg'):
)
output = execute_command_and_capture_output(
full_command,
extra_environment=environment.make_environment(config),
environment=environment.make_environment(config),
working_directory=borgmatic.config.paths.get_working_directory(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),

View File

@ -349,12 +349,12 @@ def make_parsers():
global_group.add_argument(
'--log-file-format',
type=str,
help='Log format string used for log messages written to the log file',
help='Python format string used for log messages written to the log file',
)
global_group.add_argument(
'--log-json',
action='store_true',
help='Write log messages and console output as one JSON object per log line instead of formatted text',
help='Write Borg log messages and console output as one JSON object per log line instead of formatted text',
)
global_group.add_argument(
'--override',
@ -547,7 +547,7 @@ def make_parsers():
dest='stats',
default=False,
action='store_true',
help='Display statistics of the pruned archive',
help='Display statistics of the pruned archive [Borg 1 only]',
)
prune_group.add_argument(
'--list', dest='list_archives', action='store_true', help='List archives kept/pruned'
@ -1479,6 +1479,31 @@ def make_parsers():
'-h', '--help', action='help', help='Show this help message and exit'
)
key_import_parser = key_parsers.add_parser(
'import',
help='Import a copy of the repository key from backup',
description='Import a copy of the repository key from backup',
add_help=False,
)
key_import_group = key_import_parser.add_argument_group('key import arguments')
key_import_group.add_argument(
'--paper',
action='store_true',
help='Import interactively from a backup done with --paper',
)
key_import_group.add_argument(
'--repository',
help='Path of repository to import the key from, defaults to the configured repository if there is only one, quoted globs supported',
)
key_import_group.add_argument(
'--path',
metavar='PATH',
help='Path to import the key from backup, defaults to stdin',
)
key_import_group.add_argument(
'-h', '--help', action='help', help='Show this help message and exit'
)
key_change_passphrase_parser = key_parsers.add_parser(
'change-passphrase',
help='Change the passphrase protecting the repository key',

File diff suppressed because it is too large

View File

@ -1,5 +1,6 @@
import collections
import io
import itertools
import os
import re
@ -24,41 +25,65 @@ def insert_newline_before_comment(config, field_name):
def get_properties(schema):
'''
Given a schema dict, return its properties. But if it's got sub-schemas with multiple different
potential properties, return their merged properties instead.
potential properties, return their merged properties instead (interleaved so the first
properties of each sub-schema come first). The idea is that the user should see all possible
options even if they're not all possible together.
'''
if 'oneOf' in schema:
return dict(
collections.ChainMap(*[sub_schema['properties'] for sub_schema in schema['oneOf']])
item
for item in itertools.chain(
*itertools.zip_longest(
*[sub_schema['properties'].items() for sub_schema in schema['oneOf']]
)
)
if item is not None
)
return schema['properties']
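A worked example of that interleaving, with hypothetical sub-schema properties:

import itertools

sub_schema_properties = [
    {'postgresql_databases': 1, 'mariadb_databases': 2},
    {'sqlite_databases': 3, 'mongodb_databases': 4, 'extra_option': 5},
]
merged = dict(
    item
    for item in itertools.chain(
        *itertools.zip_longest(*[properties.items() for properties in sub_schema_properties])
    )
    if item is not None
)
print(list(merged))
# ['postgresql_databases', 'sqlite_databases', 'mariadb_databases', 'mongodb_databases', 'extra_option']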
def schema_to_sample_configuration(schema, level=0, parent_is_sequence=False):
def schema_to_sample_configuration(schema, source_config=None, level=0, parent_is_sequence=False):
'''
Given a loaded configuration schema, generate and return sample config for it. Include comments
for each option based on the schema "description".
Given a loaded configuration schema and a source configuration, generate and return sample
config for the schema. Include comments for each option based on the schema "description".
If a source config is given, walk it alongside the given schema so that both can be taken into
account when commenting out particular options in add_comments_to_configuration_object().
'''
schema_type = schema.get('type')
example = schema.get('example')
if example is not None:
return example
if schema_type == 'array' or (isinstance(schema_type, list) and 'array' in schema_type):
config = ruamel.yaml.comments.CommentedSeq(
[schema_to_sample_configuration(schema['items'], level, parent_is_sequence=True)]
[
schema_to_sample_configuration(
schema['items'], source_config, level, parent_is_sequence=True
)
]
)
add_comments_to_configuration_sequence(config, schema, indent=(level * INDENT))
elif schema_type == 'object' or (isinstance(schema_type, list) and 'object' in schema_type):
if source_config and isinstance(source_config, list) and isinstance(source_config[0], dict):
source_config = dict(collections.ChainMap(*source_config))
config = ruamel.yaml.comments.CommentedMap(
[
(field_name, schema_to_sample_configuration(sub_schema, level + 1))
(
field_name,
schema_to_sample_configuration(
sub_schema, (source_config or {}).get(field_name, {}), level + 1
),
)
for field_name, sub_schema in get_properties(schema).items()
]
)
indent = (level * INDENT) + (SEQUENCE_INDENT if parent_is_sequence else 0)
add_comments_to_configuration_object(
config, schema, indent=indent, skip_first=parent_is_sequence
config, schema, source_config, indent=indent, skip_first=parent_is_sequence
)
else:
raise ValueError(f'Schema at level {level} is unsupported: {schema}')
@ -178,14 +203,21 @@ def add_comments_to_configuration_sequence(config, schema, indent=0):
return
REQUIRED_KEYS = {'source_directories', 'repositories', 'keep_daily'}
DEFAULT_KEYS = {'source_directories', 'repositories', 'keep_daily'}
COMMENTED_OUT_SENTINEL = 'COMMENT_OUT'
def add_comments_to_configuration_object(config, schema, indent=0, skip_first=False):
def add_comments_to_configuration_object(
config, schema, source_config=None, indent=0, skip_first=False
):
'''
Using descriptions from a schema as a source, add those descriptions as comments to the given
config mapping, before each field. Indent the comment the given number of characters.
configuration dict, putting them before each field. Indent the comment the given number of
characters.
Add a sentinel for commenting out options that are neither in DEFAULT_KEYS nor in the given
source configuration dict. The idea is that any options used in the source configuration should
stay active in the generated configuration.
'''
for index, field_name in enumerate(config.keys()):
if skip_first and index == 0:
@ -194,10 +226,12 @@ def add_comments_to_configuration_object(config, schema, indent=0, skip_first=Fa
field_schema = get_properties(schema).get(field_name, {})
description = field_schema.get('description', '').strip()
# If this is an optional key, add an indicator to the comment flagging it to be commented
# If this isn't a default key, add an indicator to the comment flagging it to be commented
# out from the sample configuration. This sentinel is consumed by downstream processing that
# does the actual commenting out.
if field_name not in REQUIRED_KEYS:
if field_name not in DEFAULT_KEYS and (
source_config is None or field_name not in source_config
):
description = (
'\n'.join((description, COMMENTED_OUT_SENTINEL))
if description
@ -217,21 +251,6 @@ def add_comments_to_configuration_object(config, schema, indent=0, skip_first=Fa
RUAMEL_YAML_COMMENTS_INDEX = 1
def remove_commented_out_sentinel(config, field_name):
'''
Given a configuration CommentedMap and a top-level field name in it, remove any "commented out"
sentinel found at the end of its YAML comments. This prevents the given field name from getting
commented out by downstream processing that consumes the sentinel.
'''
try:
last_comment_value = config.ca.items[field_name][RUAMEL_YAML_COMMENTS_INDEX][-1].value
except KeyError:
return
if last_comment_value == f'# {COMMENTED_OUT_SENTINEL}\n':
config.ca.items[field_name][RUAMEL_YAML_COMMENTS_INDEX].pop()
def merge_source_configuration_into_destination(destination_config, source_config):
'''
Deep merge the given source configuration dict into the destination configuration CommentedMap,
@ -246,12 +265,6 @@ def merge_source_configuration_into_destination(destination_config, source_confi
return source_config
for field_name, source_value in source_config.items():
# Since this key/value is from the source configuration, leave it uncommented and remove any
# sentinel that would cause it to get commented out.
remove_commented_out_sentinel(
ruamel.yaml.comments.CommentedMap(destination_config), field_name
)
# This is a mapping. Recurse for this key/value.
if isinstance(source_value, collections.abc.Mapping):
destination_config[field_name] = merge_source_configuration_into_destination(
@ -297,7 +310,7 @@ def generate_sample_configuration(
normalize.normalize(source_filename, source_config)
destination_config = merge_source_configuration_into_destination(
schema_to_sample_configuration(schema), source_config
schema_to_sample_configuration(schema, source_config), source_config
)
if dry_run:

View File

@ -69,7 +69,7 @@ def include_configuration(loader, filename_node, include_directory, config_paths
]
raise ValueError(
'!include value is not supported; use a single filename or a list of filenames'
'The value given for the !include tag is invalid; use a single filename or a list of filenames instead'
)

View File

@ -58,6 +58,90 @@ def normalize_sections(config_filename, config):
return []
def make_command_hook_deprecation_log(config_filename, option_name): # pragma: no cover
'''
Given a configuration filename and the name of a configuration option, return a deprecation
warning log for it.
'''
return logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: {option_name} is deprecated and support will be removed from a future release. Use commands: instead.',
)
)
def normalize_commands(config_filename, config):
'''
Given a configuration filename and a configuration dict, transform any "before_*"- and
"after_*"-style command hooks into "commands:".
'''
logs = []
# Normalize "before_actions" and "after_actions".
for preposition in ('before', 'after'):
option_name = f'{preposition}_actions'
commands = config.pop(option_name, None)
if commands:
logs.append(make_command_hook_deprecation_log(config_filename, option_name))
config.setdefault('commands', []).append(
{
preposition: 'repository',
'run': commands,
}
)
# Normalize "before_backup", "before_prune", "after_backup", "after_prune", etc.
for action_name in ('create', 'prune', 'compact', 'check', 'extract'):
for preposition in ('before', 'after'):
option_name = f'{preposition}_{"backup" if action_name == "create" else action_name}'
commands = config.pop(option_name, None)
if not commands:
continue
logs.append(make_command_hook_deprecation_log(config_filename, option_name))
config.setdefault('commands', []).append(
{
preposition: 'action',
'when': [action_name],
'run': commands,
}
)
# Normalize "on_error".
commands = config.pop('on_error', None)
if commands:
logs.append(make_command_hook_deprecation_log(config_filename, 'on_error'))
config.setdefault('commands', []).append(
{
'after': 'error',
'when': ['create', 'prune', 'compact', 'check'],
'run': commands,
}
)
# Normalize "before_everything" and "after_everything".
for preposition in ('before', 'after'):
option_name = f'{preposition}_everything'
commands = config.pop(option_name, None)
if commands:
logs.append(make_command_hook_deprecation_log(config_filename, option_name))
config.setdefault('commands', []).append(
{
preposition: 'everything',
'when': ['create'],
'run': commands,
}
)
return logs
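As a hypothetical illustration of the transformation above, a configuration dict still using the deprecated "before_backup" option (handled by the "create" branch) would be rewritten like so:
config = {'before_backup': ['echo Starting a backup.']}
logs = normalize_commands('config.yaml', config)  # emits one deprecation warning log
# config is now:
# {'commands': [{'before': 'action', 'when': ['create'], 'run': ['echo Starting a backup.']}]}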
def normalize(config_filename, config):
'''
Given a configuration filename and a configuration dict of its loaded contents, apply particular
@ -67,6 +151,7 @@ def normalize(config_filename, config):
Raise ValueError if the configuration cannot be normalized.
'''
logs = normalize_sections(config_filename, config)
logs += normalize_commands(config_filename, config)
if config.get('borgmatic_source_directory'):
logs.append(

View File

@ -76,14 +76,13 @@ class Runtime_directory:
automatically gets cleaned up as necessary.
'''
def __init__(self, config, log_prefix):
def __init__(self, config):
'''
Given a configuration dict and a log prefix, determine the borgmatic runtime directory,
creating a secure, temporary directory within it if necessary. Defaults to
$XDG_RUNTIME_DIR/./borgmatic or $RUNTIME_DIRECTORY/./borgmatic or
$TMPDIR/borgmatic-[random]/./borgmatic or $TEMP/borgmatic-[random]/./borgmatic or
/tmp/borgmatic-[random]/./borgmatic where "[random]" is a randomly generated string intended
to avoid path collisions.
Given a configuration dict, determine the borgmatic runtime directory, creating a secure,
temporary directory within it if necessary. Defaults to $XDG_RUNTIME_DIR/./borgmatic or
$RUNTIME_DIRECTORY/./borgmatic or $TMPDIR/borgmatic-[random]/./borgmatic or
$TEMP/borgmatic-[random]/./borgmatic or /tmp/borgmatic-[random]/./borgmatic where "[random]"
is a randomly generated string intended to avoid path collisions.
If XDG_RUNTIME_DIR or RUNTIME_DIRECTORY is set and already ends in "/borgmatic", then don't
tack on a second "/borgmatic" path component.
@ -127,7 +126,7 @@ class Runtime_directory:
)
os.makedirs(self.runtime_path, mode=0o700, exist_ok=True)
logger.debug(f'{log_prefix}: Using runtime directory {os.path.normpath(self.runtime_path)}')
logger.debug(f'Using runtime directory {os.path.normpath(self.runtime_path)}')
def __enter__(self):
'''
@ -135,7 +134,7 @@ class Runtime_directory:
'''
return self.runtime_path
def __exit__(self, exception, value, traceback):
def __exit__(self, exception_type, exception, traceback):
'''
Delete any temporary directory that was created as part of initialization.
'''
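A sketch of using the revised class without the removed log prefix argument; the module path is assumed from the borgmatic.config.paths import shown elsewhere in this diff:
with borgmatic.config.paths.Runtime_directory(config) as runtime_path:
    print(runtime_path)  # e.g. $XDG_RUNTIME_DIR/./borgmatic; cleaned up on exit if temporary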

View File

@ -205,8 +205,8 @@ properties:
description: |
Deprecated. Only used for locating database dumps and bootstrap
metadata within backup archives created prior to deprecation.
Replaced by borgmatic_runtime_directory and
borgmatic_state_directory. Defaults to ~/.borgmatic
Replaced by user_runtime_directory and user_state_directory.
Defaults to ~/.borgmatic.
example: /tmp/borgmatic
user_runtime_directory:
type: string
@ -250,7 +250,7 @@ properties:
repositories that were initialized with passphrase/repokey/keyfile
encryption. Quote the value if it contains punctuation, so it parses
correctly. And backslash any quote or backslash literals as well.
Defaults to not set.
Defaults to not set. Supports the "{credential ...}" syntax.
example: "!\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~"
checkpoint_interval:
type: integer
@ -632,8 +632,8 @@ properties:
long-running repository check into multiple
partial checks. Defaults to no interruption. Only
applies to the "repository" check, does not check
the repository index, and is not compatible with a
simultaneous "archives" check or "--repair" flag.
the repository index and is not compatible with
the "--repair" flag.
example: 3600
- required:
- name
@ -796,8 +796,9 @@ properties:
items:
type: string
description: |
List of one or more shell commands or scripts to execute before all
the actions for each repository.
Deprecated. Use "commands:" instead. List of one or more shell
commands or scripts to execute before all the actions for each
repository.
example:
- "echo Starting actions."
before_backup:
@ -805,8 +806,9 @@ properties:
items:
type: string
description: |
List of one or more shell commands or scripts to execute before
creating a backup, run once per repository.
Deprecated. Use "commands:" instead. List of one or more shell
commands or scripts to execute before creating a backup, run once
per repository.
example:
- "echo Starting a backup."
before_prune:
@ -814,8 +816,9 @@ properties:
items:
type: string
description: |
List of one or more shell commands or scripts to execute before
pruning, run once per repository.
Deprecated. Use "commands:" instead. List of one or more shell
commands or scripts to execute before pruning, run once per
repository.
example:
- "echo Starting pruning."
before_compact:
@ -823,8 +826,9 @@ properties:
items:
type: string
description: |
List of one or more shell commands or scripts to execute before
compaction, run once per repository.
Deprecated. Use "commands:" instead. List of one or more shell
commands or scripts to execute before compaction, run once per
repository.
example:
- "echo Starting compaction."
before_check:
@ -832,8 +836,9 @@ properties:
items:
type: string
description: |
List of one or more shell commands or scripts to execute before
consistency checks, run once per repository.
Deprecated. Use "commands:" instead. List of one or more shell
commands or scripts to execute before consistency checks, run once
per repository.
example:
- "echo Starting checks."
before_extract:
@ -841,8 +846,9 @@ properties:
items:
type: string
description: |
List of one or more shell commands or scripts to execute before
extracting a backup, run once per repository.
Deprecated. Use "commands:" instead. List of one or more shell
commands or scripts to execute before extracting a backup, run once
per repository.
example:
- "echo Starting extracting."
after_backup:
@ -850,8 +856,9 @@ properties:
items:
type: string
description: |
List of one or more shell commands or scripts to execute after
creating a backup, run once per repository.
Deprecated. Use "commands:" instead. List of one or more shell
commands or scripts to execute after creating a backup, run once per
repository.
example:
- "echo Finished a backup."
after_compact:
@ -859,8 +866,9 @@ properties:
items:
type: string
description: |
List of one or more shell commands or scripts to execute after
compaction, run once per repository.
Deprecated. Use "commands:" instead. List of one or more shell
commands or scripts to execute after compaction, run once per
repository.
example:
- "echo Finished compaction."
after_prune:
@ -868,8 +876,9 @@ properties:
items:
type: string
description: |
List of one or more shell commands or scripts to execute after
pruning, run once per repository.
Deprecated. Use "commands:" instead. List of one or more shell
commands or scripts to execute after pruning, run once per
repository.
example:
- "echo Finished pruning."
after_check:
@ -877,8 +886,9 @@ properties:
items:
type: string
description: |
List of one or more shell commands or scripts to execute after
consistency checks, run once per repository.
Deprecated. Use "commands:" instead. List of one or more shell
commands or scripts to execute after consistency checks, run once
per repository.
example:
- "echo Finished checks."
after_extract:
@ -886,8 +896,9 @@ properties:
items:
type: string
description: |
List of one or more shell commands or scripts to execute after
extracting a backup, run once per repository.
Deprecated. Use "commands:" instead. List of one or more shell
commands or scripts to execute after extracting a backup, run once
per repository.
example:
- "echo Finished extracting."
after_actions:
@ -895,8 +906,9 @@ properties:
items:
type: string
description: |
List of one or more shell commands or scripts to execute after all
actions for each repository.
Deprecated. Use "commands:" instead. List of one or more shell
commands or scripts to execute after all actions for each
repository.
example:
- "echo Finished actions."
on_error:
@ -904,9 +916,10 @@ properties:
items:
type: string
description: |
List of one or more shell commands or scripts to execute when an
exception occurs during a "create", "prune", "compact", or "check"
action or an associated before/after hook.
Deprecated. Use "commands:" instead. List of one or more shell
commands or scripts to execute when an exception occurs during a
"create", "prune", "compact", or "check" action or an associated
before/after hook.
example:
- "echo Error during create/prune/compact/check."
before_everything:
@ -914,10 +927,10 @@ properties:
items:
type: string
description: |
List of one or more shell commands or scripts to execute before
running all actions (if one of them is "create"). These are
collected from all configuration files and then run once before all
of them (prior to all actions).
Deprecated. Use "commands:" instead. List of one or more shell
commands or scripts to execute before running all actions (if one of
them is "create"). These are collected from all configuration files
and then run once before all of them (prior to all actions).
example:
- "echo Starting actions."
after_everything:
@ -925,12 +938,148 @@ properties:
items:
type: string
description: |
List of one or more shell commands or scripts to execute after
running all actions (if one of them is "create"). These are
collected from all configuration files and then run once after all
of them (after any action).
Deprecated. Use "commands:" instead. List of one or more shell
commands or scripts to execute after running all actions (if one of
them is "create"). These are collected from all configuration files
and then run once after all of them (after any action).
example:
- "echo Completed actions."
commands:
type: array
items:
type: object
oneOf:
- required: [before, run]
additionalProperties: false
properties:
before:
type: string
enum:
- action
- repository
- configuration
- everything
description: |
Name for the point in borgmatic's execution that
the commands should be run before (required if
"after" isn't set):
* "action" runs before each action for each
repository.
* "repository" runs before all actions for each
repository.
* "configuration" runs before all actions and
repositories in the current configuration file.
* "everything" runs before all configuration
files.
example: action
when:
type: array
items:
type: string
enum:
- repo-create
- transfer
- prune
- compact
- create
- check
- delete
- extract
- config
- export-tar
- mount
- umount
- repo-delete
- restore
- repo-list
- list
- repo-info
- info
- break-lock
- key
- borg
description: |
List of actions for which the commands will be
run. Defaults to running for all actions.
example: [create, prune, compact, check]
run:
type: array
items:
type: string
description: |
List of one or more shell commands or scripts to
run when this command hook is triggered. Required.
example:
- "echo Doing stuff."
- required: [after, run]
additionalProperties: false
properties:
after:
type: string
enum:
- action
- repository
- configuration
- everything
- error
description: |
Name for the point in borgmatic's execution that
the commands should be run after (required if
"before" isn't set):
* "action" runs after each action for each
repository.
* "repository" runs after all actions for each
repository.
* "configuration" runs after all actions and
repositories in the current configuration file.
* "everything" runs after all configuration
files.
* "error" runs after an error occurs.
example: action
when:
type: array
items:
type: string
enum:
- repo-create
- transfer
- prune
- compact
- create
- check
- delete
- extract
- config
- export-tar
- mount
- umount
- repo-delete
- restore
- repo-list
- list
- repo-info
- info
- break-lock
- key
- borg
description: |
Only trigger the hook when borgmatic is run with
particular actions listed here. Defaults to
running for all actions.
example: [create, prune, compact, check]
run:
type: array
items:
type: string
description: |
List of one or more shell commands or scripts to
run when this command hook is triggered. Required.
example:
- "echo Doing stuff."
description: |
List of one or more command hooks to execute, triggered at
particular points during borgmatic's execution. For each command
hook, specify one of "before" or "after", not both.
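For reference, a hypothetical configuration matching this schema, shown as the Python structure borgmatic would see after YAML loading:
config = {
    'commands': [
        # Runs before all actions for each repository.
        {'before': 'repository', 'run': ['echo Starting actions.']},
        # Runs only after an error during a create or prune action.
        {'after': 'error', 'when': ['create', 'prune'], 'run': ['echo Backup failed.']},
    ],
}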
bootstrap:
type: object
properties:
@ -989,13 +1138,15 @@ properties:
Username with which to connect to the database. Defaults
to the username of the current user. You probably want
to specify the "postgres" superuser here when the
database name is "all".
database name is "all". Supports the "{credential ...}"
syntax.
example: dbuser
restore_username:
type: string
description: |
Username with which to restore the database. Defaults to
the "username" option.
the "username" option. Supports the "{credential ...}"
syntax.
example: dbuser
password:
type: string
@ -1003,13 +1154,15 @@ properties:
Password with which to connect to the database. Omitting
a password will only work if PostgreSQL is configured to
trust the configured username without a password or you
create a ~/.pgpass file.
create a ~/.pgpass file. Supports the "{credential ...}"
syntax.
example: trustsome1
restore_password:
type: string
description: |
Password with which to connect to the restore database.
Defaults to the "password" option.
Defaults to the "password" option. Supports the
"{credential ...}" syntax.
example: trustsome1
no_owner:
type: boolean
@ -1036,6 +1189,18 @@ properties:
individual databases. See the pg_dump documentation for
more about formats.
example: directory
compression:
type: ["string", "integer"]
description: |
Database dump compression level (integer) or method
("gzip", "lz4", "zstd", or "none") and optional
colon-separated detail. Defaults to moderate "gzip" for
"custom" and "directory" formats and no compression for
the "plain" format. Compression is not supported for the
"tar" format. Be aware that Borg does its own
compression as well, so you may not need it in both
places.
example: none
ssl_mode:
type: string
enum: ['disable', 'allow', 'prefer',
@ -1072,11 +1237,11 @@ properties:
Command to use instead of "pg_dump" or "pg_dumpall".
This can be used to run a specific pg_dump version
(e.g., one inside a running container). If you run it
from within a container, make sure to mount your
host's ".borgmatic" folder into the container using
the same directory structure. Defaults to "pg_dump"
for single database dump or "pg_dumpall" to dump all
databases.
from within a container, make sure to mount the path in
the "user_runtime_directory" option from the host into
the container at the same location. Defaults to
"pg_dump" for single database dump or "pg_dumpall" to
dump all databases.
example: docker exec my_pg_container pg_dump
pg_restore_command:
type: string
@ -1169,13 +1334,15 @@ properties:
type: string
description: |
Username with which to connect to the database. Defaults
to the username of the current user.
to the username of the current user. Supports the
"{credential ...}" syntax.
example: dbuser
restore_username:
type: string
description: |
Username with which to restore the database. Defaults to
the "username" option.
the "username" option. Supports the "{credential ...}"
syntax.
example: dbuser
password:
type: string
@ -1183,16 +1350,39 @@ properties:
Password with which to connect to the database. Omitting
a password will only work if MariaDB is configured to
trust the configured username without a password.
Supports the "{credential ...}" syntax.
example: trustsome1
restore_password:
type: string
description: |
Password with which to connect to the restore database.
Defaults to the "password" option. Supports the
"{credential ...}" syntax.
example: trustsome1
tls:
type: boolean
description: |
Whether to TLS-encrypt data transmitted between the
client and server. The default varies based on the
MariaDB version.
example: false
restore_tls:
type: boolean
description: |
Whether to TLS-encrypt data transmitted between the
client and restore server. The default varies based on
the MariaDB version.
example: false
mariadb_dump_command:
type: string
description: |
Command to use instead of "mariadb-dump". This can be
used to run a specific mariadb_dump version (e.g., one
inside a running container). If you run it from within
a container, make sure to mount your host's
".borgmatic" folder into the container using the same
directory structure. Defaults to "mariadb-dump".
inside a running container). If you run it from within a
container, make sure to mount the path in the
"user_runtime_directory" option from the host into the
container at the same location. Defaults to
"mariadb-dump".
example: docker exec mariadb_container mariadb-dump
mariadb_command:
type: string
@ -1201,12 +1391,6 @@ properties:
run a specific mariadb version (e.g., one inside a
running container). Defaults to "mariadb".
example: docker exec mariadb_container mariadb
restore_password:
type: string
description: |
Password with which to connect to the restore database.
Defaults to the "password" option.
example: trustsome1
format:
type: string
enum: ['sql']
@ -1295,13 +1479,15 @@ properties:
type: string
description: |
Username with which to connect to the database. Defaults
to the username of the current user.
to the username of the current user. Supports the
"{credential ...}" syntax.
example: dbuser
restore_username:
type: string
description: |
Username with which to restore the database. Defaults to
the "username" option.
the "username" option. Supports the "{credential ...}"
syntax.
example: dbuser
password:
type: string
@ -1309,22 +1495,38 @@ properties:
Password with which to connect to the database. Omitting
a password will only work if MySQL is configured to
trust the configured username without a password.
Supports the "{credential ...}" syntax.
example: trustsome1
restore_password:
type: string
description: |
Password with which to connect to the restore database.
Defaults to the "password" option.
Defaults to the "password" option. Supports the
"{credential ...}" syntax.
example: trustsome1
tls:
type: boolean
description: |
Whether to TLS-encrypt data transmitted between the
client and server. The default varies based on the
MySQL installation.
example: false
restore_tls:
type: boolean
description: |
Whether to TLS-encrypt data transmitted between the
client and restore server. The default varies based on
the MySQL installation.
example: false
mysql_dump_command:
type: string
description: |
Command to use instead of "mysqldump". This can be
used to run a specific mysql_dump version (e.g., one
inside a running container). If you run it from within
a container, make sure to mount your host's
".borgmatic" folder into the container using the same
directory structure. Defaults to "mysqldump".
Command to use instead of "mysqldump". This can be used
to run a specific mysql_dump version (e.g., one inside a
running container). If you run it from within a
container, make sure to mount the path in the
"user_runtime_directory" option from the host into the
container at the same location. Defaults to "mysqldump".
example: docker exec mysql_container mysqldump
mysql_command:
type: string
@ -1411,6 +1613,24 @@ properties:
Path to the SQLite database file to restore to. Defaults
to the "path" option.
example: /var/lib/sqlite/users.db
sqlite_command:
type: string
description: |
Command to use instead of "sqlite3". This can be used to
run a specific sqlite3 version (e.g., one inside a
running container). If you run it from within a
container, make sure to mount the path in the
"user_runtime_directory" option from the host into the
container at the same location. Defaults to "sqlite3".
example: docker exec sqlite_container sqlite3
sqlite_restore_command:
type: string
description: |
Command to run when restoring a database instead
of "sqlite3". This can be used to run a specific
sqlite3 version (e.g., one inside a running container).
Defaults to "sqlite3".
example: docker exec sqlite_container sqlite3
mongodb_databases:
type: array
items:
@ -1451,25 +1671,29 @@ properties:
type: string
description: |
Username with which to connect to the database. Skip it
if no authentication is needed.
if no authentication is needed. Supports the
"{credential ...}" syntax.
example: dbuser
restore_username:
type: string
description: |
Username with which to restore the database. Defaults to
the "username" option.
the "username" option. Supports the "{credential ...}"
syntax.
example: dbuser
password:
type: string
description: |
Password with which to connect to the database. Skip it
if no authentication is needed.
if no authentication is needed. Supports the
"{credential ...}" syntax.
example: trustsome1
restore_password:
type: string
description: |
Password with which to connect to the restore database.
Defaults to the "password" option.
Defaults to the "password" option. Supports the
"{credential ...}" syntax.
example: trustsome1
authentication_database:
type: string
@ -1528,18 +1752,20 @@ properties:
username:
type: string
description: |
The username used for authentication.
The username used for authentication. Supports the
"{credential ...}" syntax.
example: testuser
password:
type: string
description: |
The password used for authentication.
The password used for authentication. Supports the
"{credential ...}" syntax.
example: fakepassword
access_token:
type: string
description: |
An ntfy access token to authenticate with instead of
username/password.
username/password. Supports the "{credential ...}" syntax.
example: tk_AgQdq7mVBoFD37zQVN29RhuMzNIz2
start:
type: object
@ -1634,14 +1860,16 @@ properties:
token:
type: string
description: |
Your application's API token.
Your application's API token. Supports the "{credential
...}" syntax.
example: 7ms6TXHpTokTou2P6x4SodDeentHRa
user:
type: string
description: |
Your user/group key (or that of your target user), viewable
when logged into your dashboard: often referred to as
USER_KEY in Pushover documentation and code examples.
Supports the "{credential ...}" syntax.
example: hwRwoWsXMBWwgrSecfa9EfPey55WSN
start:
type: object
@ -1887,6 +2115,8 @@ properties:
zabbix:
type: object
additionalProperties: false
required:
- server
properties:
itemid:
type: integer
@ -1909,25 +2139,26 @@ properties:
server:
type: string
description: |
The address of your Zabbix instance.
The API endpoint URL of your Zabbix instance, usually ending
with "/api_jsonrpc.php". Required.
example: https://zabbix.your-domain.com
username:
type: string
description: |
The username used for authentication. Not needed if using
an API key.
an API key. Supports the "{credential ...}" syntax.
example: testuser
password:
type: string
description: |
The password used for authentication. Not needed if using
an API key.
an API key. Supports the "{credential ...}" syntax.
example: fakepassword
api_key:
type: string
description: |
The API key used for authentication. Not needed if using
an username/password.
The API key used for authentication. Not needed if using a
username/password. Supports the "{credential ...}" syntax.
example: fakekey
start:
type: object
@ -2180,6 +2411,12 @@ properties:
- start
- finish
- fail
verify_tls:
type: boolean
description: |
Verify the TLS certificate of the push URL host. Defaults to
true.
example: false
description: |
Configuration for a monitoring integration with Uptime Kuma using
the Push monitor type.
@ -2207,9 +2444,15 @@ properties:
integration_key:
type: string
description: |
PagerDuty integration key used to notify PagerDuty
when a backup errors.
PagerDuty integration key used to notify PagerDuty when a
backup errors. Supports the "{credential ...}" syntax.
example: a177cad45bd374409f78906a810a3074
send_logs:
type: boolean
description: |
Send borgmatic logs to PagerDuty when a backup errors.
Defaults to true.
example: false
description: |
Configuration for a monitoring integration with PagerDuty. Create an
account at https://www.pagerduty.com if you'd like to use this
@ -2259,7 +2502,45 @@ properties:
can send the logs to a self-hosted instance or create an account at
https://grafana.com/auth/sign-up/create-user. See borgmatic
monitoring documentation for details.
sentry:
type: object
required: ['data_source_name_url', 'monitor_slug']
additionalProperties: false
properties:
data_source_name_url:
type: string
description: |
Sentry Data Source Name (DSN) URL, associated with a
particular Sentry project. Used to construct a cron URL that
gets notified when a backup begins, ends, or errors.
example: https://5f80ec@o294220.ingest.us.sentry.io/203069
monitor_slug:
type: string
description: |
Sentry monitor slug, associated with a particular Sentry
project monitor. Used along with the data source name URL to
construct a cron URL.
example: mymonitor
states:
type: array
items:
type: string
enum:
- start
- finish
- fail
uniqueItems: true
description: |
List of one or more monitoring states to ping for: "start",
"finish", and/or "fail". Defaults to pinging for all states.
example:
- start
- finish
description: |
Configuration for a monitoring integration with Sentry. You can use
a self-hosted instance via https://develop.sentry.dev/self-hosted/
or create a cloud-hosted account at https://sentry.io. See borgmatic
monitoring documentation for details.
zfs:
type: ["object", "null"]
additionalProperties: false
@ -2344,3 +2625,25 @@ properties:
description: |
Configuration for integration with Linux LVM (Logical Volume
Manager).
container:
type: object
additionalProperties: false
properties:
secrets_directory:
type: string
description: |
Secrets directory to use instead of "/run/secrets".
example: /path/to/secrets
description: |
Configuration for integration with Docker or Podman secrets.
keepassxc:
type: object
additionalProperties: false
properties:
keepassxc_cli_command:
type: string
description: |
Command to use instead of "keepassxc-cli".
example: /usr/local/bin/keepassxc-cli
description: |
Configuration for integration with the KeePassXC password manager.

View File

@ -88,8 +88,9 @@ def parse_configuration(config_filename, schema_filename, overrides=None, resolv
'''
Given the path to a config filename in YAML format, the path to a schema filename in a YAML
rendition of JSON Schema format, a sequence of configuration file override strings in the form
of "option.suboption=value", return the parsed configuration as a data structure of nested dicts
and lists corresponding to the schema. Example return value:
of "option.suboption=value", and whether to resolve environment variables, return the parsed
configuration as a data structure of nested dicts and lists corresponding to the schema. Example
return value:
{
'source_directories': ['/home', '/etc'],
@ -124,6 +125,7 @@ def parse_configuration(config_filename, schema_filename, overrides=None, resolv
validator = jsonschema.Draft7Validator(schema)
except AttributeError: # pragma: no cover
validator = jsonschema.Draft4Validator(schema)
validation_errors = tuple(validator.iter_errors(config))
if validation_errors:
@ -136,16 +138,22 @@ def parse_configuration(config_filename, schema_filename, overrides=None, resolv
return config, config_paths, logs
def normalize_repository_path(repository):
def normalize_repository_path(repository, base=None):
'''
Given a repository path, return its absolute path (for local repositories). Optionally,
resolve relative paths against the given base path, e.g. the configured working directory.
'''
# A colon in the repository could mean that it's either a file:// URL or a remote repository.
# If it's a remote repository, we don't want to normalize it. If it's a file:// URL, we do.
if ':' not in repository:
return os.path.abspath(repository)
return (
os.path.abspath(os.path.join(base, repository)) if base else os.path.abspath(repository)
)
elif repository.startswith('file://'):
return os.path.abspath(repository.partition('file://')[-1])
local_path = repository.partition('file://')[-1]
return (
os.path.abspath(os.path.join(base, local_path)) if base else os.path.abspath(local_path)
)
else:
return repository
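A sketch of the behavior described above, with hypothetical paths:
normalize_repository_path('repo.borg', base='/work')      # -> '/work/repo.borg'
normalize_repository_path('file:///var/repo.borg')        # -> '/var/repo.borg'
normalize_repository_path('ssh://user@host/./repo.borg')  # returned unchanged (remote)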

View File

@ -1,11 +1,12 @@
import collections
import enum
import logging
import os
import select
import subprocess
import textwrap
import borgmatic.logger
logger = logging.getLogger(__name__)
@ -241,6 +242,9 @@ def mask_command_secrets(full_command):
MAX_LOGGED_COMMAND_LENGTH = 1000
PREFIXES_OF_ENVIRONMENT_VARIABLES_TO_LOG = ('BORG_', 'PG', 'MARIADB_', 'MYSQL_')
def log_command(full_command, input_file=None, output_file=None, environment=None):
'''
Log the given command (a sequence of command/argument strings), along with its input/output file
@ -249,14 +253,21 @@ def log_command(full_command, input_file=None, output_file=None, environment=Non
logger.debug(
textwrap.shorten(
' '.join(
tuple(f'{key}=***' for key in (environment or {}).keys())
tuple(
f'{key}=***'
for key in (environment or {}).keys()
if any(
key.startswith(prefix)
for prefix in PREFIXES_OF_ENVIRONMENT_VARIABLES_TO_LOG
)
)
+ mask_command_secrets(full_command)
),
width=MAX_LOGGED_COMMAND_LENGTH,
placeholder=' ...',
)
+ (f" < {getattr(input_file, 'name', '')}" if input_file else '')
+ (f" > {getattr(output_file, 'name', '')}" if output_file else '')
+ (f" < {getattr(input_file, 'name', input_file)}" if input_file else '')
+ (f" > {getattr(output_file, 'name', output_file)}" if output_file else '')
)
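To illustrate the prefix filter above with hypothetical values: only variables whose names match one of the logged prefixes appear, and always masked:
environment = {'BORG_PASSPHRASE': 'hunter2', 'PGPASSWORD': 's3cret', 'HOME': '/root'}
logged = tuple(
    f'{key}=***'
    for key in environment
    if any(key.startswith(prefix) for prefix in PREFIXES_OF_ENVIRONMENT_VARIABLES_TO_LOG)
)
# logged == ('BORG_PASSPHRASE=***', 'PGPASSWORD=***'); HOME never gets logged.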
@ -272,7 +283,7 @@ def execute_command(
output_file=None,
input_file=None,
shell=False,
extra_environment=None,
environment=None,
working_directory=None,
borg_local_path=None,
borg_exit_codes=None,
@ -282,18 +293,17 @@ def execute_command(
Execute the given command (a sequence of command/argument strings) and log its output at the
given log level. If an open output file object is given, then write stdout to the file and only
log stderr. If an open input file object is given, then read stdin from the file. If shell is
True, execute the command within a shell. If an extra environment dict is given, then use it to
augment the current environment, and pass the result into the command. If a working directory is
given, use that as the present working directory when running the command. If a Borg local path
is given, and the command matches it (regardless of arguments), treat exit code 1 as a warning
instead of an error. But if Borg exit codes are given as a sequence of exit code configuration
dicts, then use that configuration to decide what's an error and what's a warning. If run to
completion is False, then return the process for the command without executing it to completion.
True, execute the command within a shell. If an environment variables dict is given, then pass
it into the command. If a working directory is given, use that as the present working directory
when running the command. If a Borg local path is given, and the command matches it (regardless
of arguments), treat exit code 1 as a warning instead of an error. But if Borg exit codes are
given as a sequence of exit code configuration dicts, then use that configuration to decide
what's an error and what's a warning. If run to completion is False, then return the process for
the command without executing it to completion.
Raise subprocess.CalledProcessError if an error occurs while running the command.
'''
log_command(full_command, input_file, output_file, extra_environment)
environment = {**os.environ, **extra_environment} if extra_environment else None
log_command(full_command, input_file, output_file, environment)
do_not_capture = bool(output_file is DO_NOT_CAPTURE)
command = ' '.join(full_command) if shell else full_command
@ -305,52 +315,58 @@ def execute_command(
shell=shell,
env=environment,
cwd=working_directory,
# Necessary for passing credentials via anonymous pipe.
close_fds=False,
)
if not run_to_completion:
return process
log_outputs(
(process,),
(input_file, output_file),
output_log_level,
borg_local_path,
borg_exit_codes,
)
with borgmatic.logger.Log_prefix(None): # Log command output without any prefix.
log_outputs(
(process,),
(input_file, output_file),
output_log_level,
borg_local_path,
borg_exit_codes,
)
def execute_command_and_capture_output(
full_command,
input_file=None,
capture_stderr=False,
shell=False,
extra_environment=None,
environment=None,
working_directory=None,
borg_local_path=None,
borg_exit_codes=None,
):
'''
Execute the given command (a sequence of command/argument strings), capturing and returning its
output (stdout). If capture stderr is True, then capture and return stderr in addition to
stdout. If shell is True, execute the command within a shell. If an extra environment dict is
given, then use it to augment the current environment, and pass the result into the command. If
a working directory is given, use that as the present working directory when running the
command. If a Borg local path is given, and the command matches it (regardless of arguments),
treat exit code 1 as a warning instead of an error. But if Borg exit codes are given as a
sequence of exit code configuration dicts, then use that configuration to decide what's an error
and what's a warning.
output (stdout). If an input file descriptor is given, then pipe it to the command's stdin. If
capture stderr is True, then capture and return stderr in addition to stdout. If shell is True,
execute the command within a shell. If an environment variables dict is given, then pass it into
the command. If a working directory is given, use that as the present working directory when
running the command. If a Borg local path is given, and the command matches it (regardless of
arguments), treat exit code 1 as a warning instead of an error. But if Borg exit codes are given
as a sequence of exit code configuration dicts, then use that configuration to decide what's an
error and what's a warning.
Raise subprocess.CalledProcessError if an error occurs while running the command.
'''
log_command(full_command, environment=extra_environment)
environment = {**os.environ, **extra_environment} if extra_environment else None
log_command(full_command, input_file, environment=environment)
command = ' '.join(full_command) if shell else full_command
try:
output = subprocess.check_output(
command,
stdin=input_file,
stderr=subprocess.STDOUT if capture_stderr else None,
shell=shell,
env=environment,
cwd=working_directory,
# Necessary for passing credentials via anonymous pipe.
close_fds=False,
)
except subprocess.CalledProcessError as error:
if (
@ -370,7 +386,7 @@ def execute_command_with_processes(
output_file=None,
input_file=None,
shell=False,
extra_environment=None,
environment=None,
working_directory=None,
borg_local_path=None,
borg_exit_codes=None,
@ -384,19 +400,17 @@ def execute_command_with_processes(
If an open output file object is given, then write stdout to the file and only log stderr. But
if output log level is None, instead suppress logging and return the captured output for (only)
the given command. If an open input file object is given, then read stdin from the file. If
shell is True, execute the command within a shell. If an extra environment dict is given, then
use it to augment the current environment, and pass the result into the command. If a working
directory is given, use that as the present working directory when running the command. If a
Borg local path is given, then for any matching command or process (regardless of arguments),
treat exit code 1 as a warning instead of an error. But if Borg exit codes are given as a
sequence of exit code configuration dicts, then use that configuration to decide what's an error
and what's a warning.
shell is True, execute the command within a shell. If an environment variables dict is given,
then pass it into the command. If a working directory is given, use that as the present working
directory when running the command. If a Borg local path is given, then for any matching command
or process (regardless of arguments), treat exit code 1 as a warning instead of an error. But if
Borg exit codes are given as a sequence of exit code configuration dicts, then use that
configuration to decide what's an error and what's a warning.
Raise subprocess.CalledProcessError if an error occurs while running the command or in the
upstream process.
'''
log_command(full_command, input_file, output_file, extra_environment)
environment = {**os.environ, **extra_environment} if extra_environment else None
log_command(full_command, input_file, output_file, environment)
do_not_capture = bool(output_file is DO_NOT_CAPTURE)
command = ' '.join(full_command) if shell else full_command
@ -411,6 +425,8 @@ def execute_command_with_processes(
shell=shell,
env=environment,
cwd=working_directory,
# Necessary for passing credentials via anonymous pipe.
close_fds=False,
)
except (subprocess.CalledProcessError, OSError):
# Something has gone wrong. So vent each process' output buffer to prevent it from hanging.
@ -421,13 +437,14 @@ def execute_command_with_processes(
process.kill()
raise
captured_outputs = log_outputs(
tuple(processes) + (command_process,),
(input_file, output_file),
output_log_level,
borg_local_path,
borg_exit_codes,
)
with borgmatic.logger.Log_prefix(None): # Log command output without any prefix.
captured_outputs = log_outputs(
tuple(processes) + (command_process,),
(input_file, output_file),
output_log_level,
borg_local_path,
borg_exit_codes,
)
if output_log_level is None:
return captured_outputs.get(command_process)

View File

@ -2,9 +2,11 @@ import logging
import os
import re
import shlex
import subprocess
import sys
import borgmatic.execute
import borgmatic.logger
logger = logging.getLogger(__name__)
@ -12,7 +14,7 @@ logger = logging.getLogger(__name__)
SOFT_FAIL_EXIT_CODE = 75
def interpolate_context(config_filename, hook_description, command, context):
def interpolate_context(hook_description, command, context):
'''
Given a hook description, a single hook command, and a dict of context names/values,
interpolate the values by "{name}" into the command and return the result.
@ -22,7 +24,7 @@ def interpolate_context(config_filename, hook_description, command, context):
for unsupported_variable in re.findall(r'{\w+}', command):
logger.warning(
f"{config_filename}: Variable '{unsupported_variable}' is not supported in {hook_description} hook"
f"Variable '{unsupported_variable}' is not supported in {hook_description} hook"
)
return command
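A sketch of the interpolation contract from the docstring, with hypothetical context values:
interpolate_context('before action', 'echo {repository}', {'repository': '/mnt/repo'})
# -> 'echo /mnt/repo'; an unrecognized "{variable}" is left in place and logged as a warning.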
@ -30,71 +32,201 @@ def interpolate_context(config_filename, hook_description, command, context):
def make_environment(current_environment, sys_module=sys):
'''
Given the existing system environment as a map from environment variable name to value, return
(in the same form) any extra environment variables that should be used when running command
hooks.
Given the existing system environment as a map from environment variable name to value, return a
copy of it, augmented with any extra environment variables that should be used when running
command hooks.
'''
environment = dict(current_environment)
# Detect whether we're running within a PyInstaller bundle. If so, set or clear LD_LIBRARY_PATH
# based on the value of LD_LIBRARY_PATH_ORIG. This prevents library version information errors.
if getattr(sys_module, 'frozen', False) and hasattr(sys_module, '_MEIPASS'):
return {'LD_LIBRARY_PATH': current_environment.get('LD_LIBRARY_PATH_ORIG', '')}
environment['LD_LIBRARY_PATH'] = environment.get('LD_LIBRARY_PATH_ORIG', '')
return {}
return environment
def execute_hook(commands, umask, config_filename, description, dry_run, **context):
def filter_hooks(command_hooks, before=None, after=None, hook_name=None, action_names=None):
'''
Given a list of hook commands to execute, a umask to execute with (or None), a config filename,
a hook description, and whether this is a dry run, run the given commands. Or, don't run them
if this is a dry run.
Given a sequence of command hook dicts from configuration and one or more filters (before name,
after name, calling hook name, or a sequence of action names), filter down the command hooks to
just the ones that match the given filters.
'''
return tuple(
hook_config
for hook_config in command_hooks or ()
for config_action_names in (hook_config.get('when'),)
if before is None or hook_config.get('before') == before
if after is None or hook_config.get('after') == after
if action_names is None
or config_action_names is None
or set(config_action_names or ()).intersection(set(action_names))
)
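A hypothetical example of the filtering semantics: a hook matches when its before/after name matches and its "when" list (if present) intersects the requested action names:
hooks = (
    {'before': 'action', 'when': ['create'], 'run': ['echo one']},
    {'after': 'action', 'run': ['echo two']},
)
filter_hooks(hooks, before='action', action_names=['create'])
# -> ({'before': 'action', 'when': ['create'], 'run': ['echo one']},)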
def execute_hooks(command_hooks, umask, working_directory, dry_run, **context):
'''
Given a sequence of command hook dicts from configuration, a umask to execute with (or None), a
working directory to execute with, and whether this is a dry run, run the commands for each
hook. Or don't run them if this is a dry run.
The context contains optional values interpolated by name into the hook commands.
Raise ValueError if the umask cannot be parsed.
Raise ValueError if the umask cannot be parsed or a hook is invalid.
Raise subprocess.CalledProcessError if an error occurs in a hook.
'''
if not commands:
logger.debug(f'{config_filename}: No commands to run for {description} hook')
return
borgmatic.logger.add_custom_log_levels()
dry_run_label = ' (dry run; not actually running hooks)' if dry_run else ''
context['configuration_filename'] = config_filename
commands = [
interpolate_context(config_filename, description, command, context) for command in commands
]
for hook_config in command_hooks:
commands = hook_config.get('run')
if len(commands) == 1:
logger.info(f'{config_filename}: Running command for {description} hook{dry_run_label}')
else:
logger.info(
f'{config_filename}: Running {len(commands)} commands for {description} hook{dry_run_label}',
)
if 'before' in hook_config:
description = f'before {hook_config.get("before")}'
elif 'after' in hook_config:
description = f'after {hook_config.get("after")}'
else:
raise ValueError(f'Invalid hook configuration: {hook_config}')
if umask:
parsed_umask = int(str(umask), 8)
logger.debug(f'{config_filename}: Set hook umask to {oct(parsed_umask)}')
original_umask = os.umask(parsed_umask)
else:
original_umask = None
if not commands:
logger.debug(f'No commands to run for {description} hook')
continue
try:
for command in commands:
if dry_run:
continue
commands = [interpolate_context(description, command, context) for command in commands]
borgmatic.execute.execute_command(
[command],
output_log_level=(logging.ERROR if description == 'on-error' else logging.WARNING),
shell=True,
extra_environment=make_environment(os.environ),
if len(commands) == 1:
logger.info(f'Running {description} command hook{dry_run_label}')
else:
logger.info(
f'Running {len(commands)} commands for {description} hook{dry_run_label}',
)
finally:
if original_umask:
os.umask(original_umask)
if umask:
parsed_umask = int(str(umask), 8)
logger.debug(f'Setting hook umask to {oct(parsed_umask)}')
original_umask = os.umask(parsed_umask)
else:
original_umask = None
try:
for command in commands:
if dry_run:
continue
borgmatic.execute.execute_command(
[command],
output_log_level=(
logging.ERROR if hook_config.get('after') == 'error' else logging.ANSWER
),
shell=True,
environment=make_environment(os.environ),
working_directory=working_directory,
)
finally:
if original_umask:
os.umask(original_umask)
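Putting the two functions together, a hypothetical invocation that runs any configured "before action" hooks for the create action:
execute_hooks(
    filter_hooks(config.get('commands'), before='action', action_names=['create']),
    config.get('umask'),
    config.get('working_directory'),
    dry_run=False,
    repository='/mnt/repo',  # hypothetical context value, interpolated into "{repository}"
)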
def considered_soft_failure(config_filename, error):
class Before_after_hooks:
'''
A Python context manager for executing command hooks both before and after the wrapped code.
Example use as a context manager:
with borgmatic.hooks.command.Before_after_hooks(
command_hooks=config.get('commands'),
before_after='do_stuff',
umask=config.get('umask'),
dry_run=dry_run,
hook_name='myhook',
):
do()
some()
stuff()
With that context manager in place, "before" command hooks execute before the wrapped code runs,
and "after" command hooks execute after the wrapped code completes.
'''
def __init__(
self,
command_hooks,
before_after,
umask,
working_directory,
dry_run,
hook_name=None,
action_names=None,
**context,
):
'''
Given a sequence of command hook configuration dicts, the before/after name, a umask to run
commands with, a working directory to run commands with, a dry run flag, the name of the
calling hook, a sequence of action names, and any context for the executed commands, save
those data points for use below.
'''
self.command_hooks = command_hooks
self.before_after = before_after
self.umask = umask
self.working_directory = working_directory
self.dry_run = dry_run
self.hook_name = hook_name
self.action_names = action_names
self.context = context
def __enter__(self):
'''
Run the configured "before" command hooks that match the initialized data points.
'''
try:
execute_hooks(
borgmatic.hooks.command.filter_hooks(
self.command_hooks,
before=self.before_after,
hook_name=self.hook_name,
action_names=self.action_names,
),
self.umask,
self.working_directory,
self.dry_run,
**self.context,
)
except (OSError, subprocess.CalledProcessError) as error:
if considered_soft_failure(error):
return
# Trigger the after hook manually, since raising here will prevent it from being run
# otherwise.
self.__exit__(None, None, None)
raise ValueError(f'Error running before {self.before_after} hook: {error}')
def __exit__(self, exception_type, exception, traceback):
'''
Run the configured "after" command hooks that match the initialized data points.
'''
try:
execute_hooks(
borgmatic.hooks.command.filter_hooks(
self.command_hooks,
after=self.before_after,
hook_name=self.hook_name,
action_names=self.action_names,
),
self.umask,
self.working_directory,
self.dry_run,
**self.context,
)
except (OSError, subprocess.CalledProcessError) as error:
if considered_soft_failure(error):
return
raise ValueError(f'Error running after {self.before_after} hook: {error}')
def considered_soft_failure(error):
'''
Given an exception object, return whether it represents a subprocess.CalledProcessError with
a return code of SOFT_FAIL_EXIT_CODE. If so,
@ -106,7 +238,7 @@ def considered_soft_failure(config_filename, error):
if exit_code == SOFT_FAIL_EXIT_CODE:
logger.info(
f'{config_filename}: Command hook exited with soft failure exit code ({SOFT_FAIL_EXIT_CODE}); skipping remaining repository actions',
f'Command hook exited with soft failure exit code ({SOFT_FAIL_EXIT_CODE}); skipping remaining repository actions',
)
return True

View File

View File

@ -0,0 +1,43 @@
import logging
import os
import re
logger = logging.getLogger(__name__)
SECRET_NAME_PATTERN = re.compile(r'^\w+$')
DEFAULT_SECRETS_DIRECTORY = '/run/secrets'
def load_credential(hook_config, config, credential_parameters):
'''
Given the hook configuration dict, the configuration dict, and a credential parameters tuple
containing a secret name to load, read the secret from the corresponding container secrets file
and return it.
Raise ValueError if the credential parameters tuple is not a single element, the secret name is invalid, or
the secret file cannot be read.
'''
try:
(secret_name,) = credential_parameters
except ValueError:
name = ' '.join(credential_parameters)
raise ValueError(f'Cannot load invalid secret name: "{name}"')
if not SECRET_NAME_PATTERN.match(secret_name):
raise ValueError(f'Cannot load invalid secret name: "{secret_name}"')
try:
with open(
os.path.join(
config.get('working_directory', ''),
(hook_config or {}).get('secrets_directory', DEFAULT_SECRETS_DIRECTORY),
secret_name,
)
) as secret_file:
return secret_file.read().rstrip(os.linesep)
except (FileNotFoundError, OSError) as error:
logger.warning(error)
raise ValueError(f'Cannot load secret "{secret_name}" from file: {error.filename}')
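A sketch with hypothetical values: given a Docker/Podman secret file at /run/secrets/borg_passphrase containing "hunter2", the hook resolves it like so:
load_credential(hook_config=None, config={}, credential_parameters=('borg_passphrase',))
# -> 'hunter2', read from /run/secrets (or the configured secrets_directory)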

View File

@ -0,0 +1,32 @@
import logging
import os
logger = logging.getLogger(__name__)
def load_credential(hook_config, config, credential_parameters):
'''
Given the hook configuration dict, the configuration dict, and a credential parameters tuple
containing a credential path to load, load the credential from file and return it.
Raise ValueError if the credential parameters tuple is not a single element or the credential
file cannot be read.
'''
try:
(credential_path,) = credential_parameters
except ValueError:
name = ' '.join(credential_parameters)
raise ValueError(f'Cannot load invalid credential: "{name}"')
expanded_credential_path = os.path.expanduser(credential_path)
try:
with open(
os.path.join(config.get('working_directory', ''), expanded_credential_path)
) as credential_file:
return credential_file.read().rstrip(os.linesep)
except (FileNotFoundError, OSError) as error:
logger.warning(error)
raise ValueError(f'Cannot load credential file: {error.filename}')
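A sketch with a hypothetical credential file path:
load_credential(None, {}, ('~/.config/borgmatic/passphrase',))
# -> the file's contents with the trailing newline stripped; "~" is expanded first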

View File

@ -0,0 +1,44 @@
import logging
import os
import shlex
import borgmatic.execute
logger = logging.getLogger(__name__)
def load_credential(hook_config, config, credential_parameters):
'''
Given the hook configuration dict, the configuration dict, and a credential parameters tuple
containing a KeePassXC database path and an attribute name to load, run keepassxc-cli to fetch
the corresponding KeePassXC credential and return it.
Raise ValueError if keepassxc-cli can't retrieve the credential.
'''
try:
(database_path, attribute_name) = credential_parameters
except ValueError:
path_and_name = ' '.join(credential_parameters)
raise ValueError(
f'Cannot load credential with invalid KeePassXC database path and attribute name: "{path_and_name}"'
)
expanded_database_path = os.path.expanduser(database_path)
if not os.path.exists(expanded_database_path):
raise ValueError(
f'Cannot load credential because KeePassXC database path does not exist: {database_path}'
)
return borgmatic.execute.execute_command_and_capture_output(
tuple(shlex.split((hook_config or {}).get('keepassxc_cli_command', 'keepassxc-cli')))
+ (
'show',
'--show-protected',
'--attributes',
'Password',
expanded_database_path,
attribute_name,
)
).rstrip(os.linesep)
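This is roughly equivalent to running the following command and capturing its output (database path and entry name hypothetical):
# keepassxc-cli show --show-protected --attributes Password ~/Passwords.kdbx borgmatic
load_credential(None, {}, ('~/Passwords.kdbx', 'borgmatic'))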

View File

@ -0,0 +1,124 @@
import functools
import re
import shlex
import borgmatic.hooks.dispatch
IS_A_HOOK = False
class Hash_adapter:
'''
A Hash_adapter instance wraps an unhashable object and pretends it's hashable. This is intended
for passing to a @functools.cache-decorated function to prevent it from complaining that an
argument is unhashable. It should only be used for arguments that you don't want to actually
impact the cache hashing, because Hash_adapter doesn't actually hash the object's contents.
Example usage:
@functools.cache
def func(a, b):
print(a, b.actual_value)
return a
func(5, Hash_adapter({1: 2, 3: 4})) # Calls func(), prints, and returns.
func(5, Hash_adapter({1: 2, 3: 4})) # Hits the cache and just returns the value.
func(5, Hash_adapter({5: 6, 7: 8})) # Also uses cache, since the Hash_adapter is ignored.
In the above function, the "b" value is one that has been wrapped with Hash_adapter, and
therefore "b.actual_value" is necessary to access the original value.
'''
def __init__(self, actual_value):
self.actual_value = actual_value
def __eq__(self, other):
return True
def __hash__(self):
return 0
UNHASHABLE_TYPES = (dict, list, set)
def cache_ignoring_unhashable_arguments(function):
'''
A function decorator that caches calls to the decorated function but ignores any unhashable
arguments when performing cache lookups. This is intended to be a drop-in replacement for
functools.cache.
Example usage:
@cache_ignoring_unhashable_arguments
def func(a, b):
print(a, b)
return a
func(5, {1: 2, 3: 4}) # Calls func(), prints, and returns.
func(5, {1: 2, 3: 4}) # Hits the cache and just returns the value.
func(5, {5: 6, 7: 8}) # Also uses cache, since the unhashable value (the dict) is ignored.
'''
@functools.cache
def cached_function(*args, **kwargs):
return function(
*(arg.actual_value if isinstance(arg, Hash_adapter) else arg for arg in args),
**{
key: value.actual_value if isinstance(value, Hash_adapter) else value
for (key, value) in kwargs.items()
},
)
@functools.wraps(function)
def wrapper_function(*args, **kwargs):
return cached_function(
*(Hash_adapter(arg) if isinstance(arg, UNHASHABLE_TYPES) else arg for arg in args),
**{
key: Hash_adapter(value) if isinstance(value, UNHASHABLE_TYPES) else value
for (key, value) in kwargs.items()
},
)
wrapper_function.cache_clear = cached_function.cache_clear
return wrapper_function
CREDENTIAL_PATTERN = re.compile(r'\{credential( +(?P<hook_and_parameters>.*))?\}')
@cache_ignoring_unhashable_arguments
def resolve_credential(value, config):
'''
Given a configuration value containing a string like "{credential hookname credentialname}" and
a configuration dict, resolve the credential by calling the relevant hook to get the actual
credential value. If the given value does not actually contain a credential tag, then return it
unchanged.
Cache the value (ignoring the config for purposes of caching), so repeated calls to this
function don't need to load the credential repeatedly.
Raise ValueError if the config could not be parsed or the credential could not be loaded.
'''
if value is None:
return value
matcher = CREDENTIAL_PATTERN.match(value)
if not matcher:
return value
hook_and_parameters = matcher.group('hook_and_parameters')
if not hook_and_parameters:
raise ValueError(f'Cannot load credential with invalid syntax "{value}"')
(hook_name, *credential_parameters) = shlex.split(hook_and_parameters)
if not credential_parameters:
raise ValueError(f'Cannot load credential with invalid syntax "{value}"')
return borgmatic.hooks.dispatch.call_hook(
'load_credential', config, hook_name, tuple(credential_parameters)
)
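A brief sketch of the behavior described in the docstring; the hook name below is illustrative:

# A plain value without a credential tag passes through unchanged.
assert resolve_credential('hunter2', config={}) == 'hunter2'

# A tagged value dispatches to the named hook, e.g. (assuming a "systemd" hook):
#   resolve_credential('{credential systemd borg_passphrase}', config)
# calls that hook's load_credential() with the parameters ('borg_passphrase',).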

View File

@ -0,0 +1,43 @@
import logging
import os
import re
logger = logging.getLogger(__name__)
CREDENTIAL_NAME_PATTERN = re.compile(r'^\w+$')
def load_credential(hook_config, config, credential_parameters):
'''
Given the hook configuration dict, the configuration dict, and a credential parameters tuple
containing a credential name to load, read the credential from the corresponding systemd
credential file and return it.
Raise ValueError if the systemd CREDENTIALS_DIRECTORY environment variable is not set, the
credential name is invalid, or the credential file cannot be read.
'''
try:
(credential_name,) = credential_parameters
except ValueError:
name = ' '.join(credential_parameters)
raise ValueError(f'Cannot load invalid credential name: "{name}"')
credentials_directory = os.environ.get('CREDENTIALS_DIRECTORY')
if not credentials_directory:
raise ValueError(
f'Cannot load credential "{credential_name}" because the systemd CREDENTIALS_DIRECTORY environment variable is not set'
)
if not CREDENTIAL_NAME_PATTERN.match(credential_name):
raise ValueError(f'Cannot load invalid credential name "{credential_name}"')
try:
with open(os.path.join(credentials_directory, credential_name)) as credential_file:
return credential_file.read().rstrip(os.linesep)
except (FileNotFoundError, OSError) as error:
logger.warning(error)
raise ValueError(f'Cannot load credential "{credential_name}" from file: {error.filename}')
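A minimal sketch of how this behaves at runtime, simulating the CREDENTIALS_DIRECTORY that systemd would normally provide (the credential name is hypothetical):

import os
import tempfile

with tempfile.TemporaryDirectory() as fake_credentials_directory:
    with open(os.path.join(fake_credentials_directory, 'borg_passphrase'), 'w') as file:
        file.write('hunter2\n')

    os.environ['CREDENTIALS_DIRECTORY'] = fake_credentials_directory
    assert load_credential(None, {}, ('borg_passphrase',)) == 'hunter2'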

View File

@ -10,7 +10,7 @@ import borgmatic.config.paths
logger = logging.getLogger(__name__)
def use_streaming(hook_config, config, log_prefix): # pragma: no cover
def use_streaming(hook_config, config): # pragma: no cover
'''
Return whether dump streaming is used for this hook. (Spoiler: It isn't.)
'''
@ -20,18 +20,17 @@ def use_streaming(hook_config, config, log_prefix): # pragma: no cover
def dump_data_sources(
hook_config,
config,
log_prefix,
config_paths,
borgmatic_runtime_directory,
patterns,
dry_run,
):
'''
Given a bootstrap configuration dict, a configuration dict, a log prefix, the borgmatic
configuration file paths, the borgmatic runtime directory, the configured patterns, and whether
this is a dry run, create a borgmatic manifest file to store the paths of the configuration
files used to create the archive. But skip this if the bootstrap store_config_files option is
False or if this is a dry run.
Given a bootstrap configuration dict, a configuration dict, the borgmatic configuration file
paths, the borgmatic runtime directory, the configured patterns, and whether this is a dry run,
create a borgmatic manifest file to store the paths of the configuration files used to create
the archive. But skip this if the bootstrap store_config_files option is False or if this is a
dry run.
Return an empty sequence, since there are no ongoing dump processes from this hook.
'''
@ -56,19 +55,27 @@ def dump_data_sources(
manifest_file,
)
patterns.extend(borgmatic.borg.pattern.Pattern(config_path) for config_path in config_paths)
patterns.extend(
borgmatic.borg.pattern.Pattern(
config_path, source=borgmatic.borg.pattern.Pattern_source.HOOK
)
for config_path in config_paths
)
patterns.append(
borgmatic.borg.pattern.Pattern(os.path.join(borgmatic_runtime_directory, 'bootstrap'))
borgmatic.borg.pattern.Pattern(
os.path.join(borgmatic_runtime_directory, 'bootstrap'),
source=borgmatic.borg.pattern.Pattern_source.HOOK,
)
)
return []
def remove_data_source_dumps(hook_config, config, log_prefix, borgmatic_runtime_directory, dry_run):
def remove_data_source_dumps(hook_config, config, borgmatic_runtime_directory, dry_run):
'''
Given a bootstrap configuration dict, a configuration dict, a log prefix, the borgmatic runtime
directory, and whether this is a dry run, then remove the manifest file created above. If this
is a dry run, then don't actually remove anything.
Given a bootstrap configuration dict, a configuration dict, the borgmatic runtime directory, and
whether this is a dry run, then remove the manifest file created above. If this is a dry run,
then don't actually remove anything.
'''
dry_run_label = ' (dry run; not actually removing anything)' if dry_run else ''
@ -79,14 +86,12 @@ def remove_data_source_dumps(hook_config, config, log_prefix, borgmatic_runtime_
'bootstrap',
)
logger.debug(
f'{log_prefix}: Looking for bootstrap manifest files to remove in {manifest_glob}{dry_run_label}'
f'Looking for bootstrap manifest files to remove in {manifest_glob}{dry_run_label}'
)
for manifest_directory in glob.glob(manifest_glob):
manifest_file_path = os.path.join(manifest_directory, 'manifest.json')
logger.debug(
f'{log_prefix}: Removing bootstrap manifest at {manifest_file_path}{dry_run_label}'
)
logger.debug(f'Removing bootstrap manifest at {manifest_file_path}{dry_run_label}')
if dry_run:
continue
@ -103,7 +108,7 @@ def remove_data_source_dumps(hook_config, config, log_prefix, borgmatic_runtime_
def make_data_source_dump_patterns(
hook_config, config, log_prefix, borgmatic_runtime_directory, name=None
hook_config, config, borgmatic_runtime_directory, name=None
): # pragma: no cover
'''
Restores are implemented via the separate, purpose-specific "bootstrap" action rather than the
@ -115,7 +120,6 @@ def make_data_source_dump_patterns(
def restore_data_source_dump(
hook_config,
config,
log_prefix,
data_source,
dry_run,
extract_process,

View File

@ -14,16 +14,16 @@ import borgmatic.hooks.data_source.snapshot
logger = logging.getLogger(__name__)
def use_streaming(hook_config, config, log_prefix): # pragma: no cover
def use_streaming(hook_config, config): # pragma: no cover
'''
Return whether dump streaming is used for this hook. (Spoiler: It isn't.)
'''
return False
def get_filesystem_mount_points(findmnt_command):
def get_subvolume_mount_points(findmnt_command):
'''
Given a findmnt command to run, get all top-level Btrfs filesystem mount points.
Given a findmnt command to run, get all sorted Btrfs subvolume mount points.
'''
findmnt_output = borgmatic.execute.execute_command_and_capture_output(
tuple(findmnt_command.split(' '))
@ -37,7 +37,7 @@ def get_filesystem_mount_points(findmnt_command):
try:
return tuple(
filesystem['target'] for filesystem in json.loads(findmnt_output)['filesystems']
sorted(filesystem['target'] for filesystem in json.loads(findmnt_output)['filesystems'])
)
except json.JSONDecodeError as error:
raise ValueError(f'Invalid {findmnt_command} JSON output: {error}')
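To make the parsing concrete, here's a sketch assuming findmnt's documented --json output shape:

import json

findmnt_output = '{"filesystems": [{"target": "/mnt/subvol2"}, {"target": "/mnt/subvol1"}]}'
mount_points = tuple(
    sorted(filesystem['target'] for filesystem in json.loads(findmnt_output)['filesystems'])
)
assert mount_points == ('/mnt/subvol1', '/mnt/subvol2')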
@ -45,35 +45,48 @@ def get_filesystem_mount_points(findmnt_command):
raise ValueError(f'Invalid {findmnt_command} output: Missing key "{error}"')
def get_subvolumes_for_filesystem(btrfs_command, filesystem_mount_point):
'''
Given a Btrfs command to run and a Btrfs filesystem mount point, get the sorted subvolumes for
that filesystem. Include the filesystem itself.
'''
btrfs_output = borgmatic.execute.execute_command_and_capture_output(
Subvolume = collections.namedtuple('Subvolume', ('path', 'contained_patterns'), defaults=((),))
def get_subvolume_property(btrfs_command, subvolume_path, property_name):
output = borgmatic.execute.execute_command_and_capture_output(
tuple(btrfs_command.split(' '))
+ (
'subvolume',
'list',
filesystem_mount_point,
)
'property',
'get',
'-t', # Type.
'subvol',
subvolume_path,
property_name,
),
)
if not filesystem_mount_point.strip():
return ()
try:
value = output.strip().split('=')[1]
except IndexError:
raise ValueError(f'Invalid {btrfs_command} property output')
return (filesystem_mount_point,) + tuple(
sorted(
subvolume_path
for line in btrfs_output.splitlines()
for subvolume_subpath in (line.rstrip().split(' ')[-1],)
for subvolume_path in (os.path.join(filesystem_mount_point, subvolume_subpath),)
if subvolume_subpath.strip()
)
)
return {
'true': True,
'false': False,
}.get(value, value)
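A hedged sketch of that parsing, assuming "btrfs property get" prints key=value output such as "ro=true":

output = 'ro=true\n'
value = output.strip().split('=')[1]
assert {'true': True, 'false': False}.get(value, value) is True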
Subvolume = collections.namedtuple('Subvolume', ('path', 'contained_patterns'), defaults=((),))
def omit_read_only_subvolume_mount_points(btrfs_command, subvolume_paths):
'''
Given a Btrfs command to run and a sequence of Btrfs subvolume mount points, filter them down to
just those that are read-write. The idea is that Btrfs can't actually snapshot a read-only
subvolume, so we should just ignore them.
'''
retained_subvolume_paths = []
for subvolume_path in subvolume_paths:
if get_subvolume_property(btrfs_command, subvolume_path, 'ro'):
logger.debug(f'Ignoring Btrfs subvolume {subvolume_path} because it is read-only')
else:
retained_subvolume_paths.append(subvolume_path)
return tuple(retained_subvolume_paths)
def get_subvolumes(btrfs_command, findmnt_command, patterns=None):
@ -82,30 +95,37 @@ def get_subvolumes(btrfs_command, findmnt_command, patterns=None):
between the current Btrfs filesystem and subvolume mount points and the paths of any patterns.
The idea is that these pattern paths represent the requested subvolumes to snapshot.
If patterns is None, then return all subvolumes, sorted by path.
Only include subvolumes that contain at least one root pattern sourced from borgmatic
configuration (as opposed to generated elsewhere in borgmatic). But if patterns is None, then
return all subvolumes instead, sorted by path.
Return the result as a sequence of matching subvolume mount points.
'''
candidate_patterns = set(patterns or ())
subvolumes = []
# For each filesystem mount point, find its subvolumes and match them against the given patterns
# to find the subvolumes to backup. And within this loop, sort the subvolumes from longest to
# shortest mount points, so longer mount points get a whack at the candidate pattern piñata
# before their parents do. (Patterns are consumed during this process, so no two subvolumes end
# up with the same contained patterns.)
for mount_point in get_filesystem_mount_points(findmnt_command):
# For each subvolume mount point, match it against the given patterns to find the subvolumes to
# backup. Sort the subvolumes from longest to shortest mount points, so longer mount points get
# a whack at the candidate pattern piñata before their parents do. (Patterns are consumed during
# this process, so no two subvolumes end up with the same contained patterns.)
for mount_point in reversed(
omit_read_only_subvolume_mount_points(
btrfs_command, get_subvolume_mount_points(findmnt_command)
)
):
subvolumes.extend(
Subvolume(subvolume_path, contained_patterns)
for subvolume_path in reversed(
get_subvolumes_for_filesystem(btrfs_command, mount_point)
)
Subvolume(mount_point, contained_patterns)
for contained_patterns in (
borgmatic.hooks.data_source.snapshot.get_contained_patterns(
subvolume_path, candidate_patterns
mount_point, candidate_patterns
),
)
if patterns is None or contained_patterns
if patterns is None
or any(
pattern.type == borgmatic.borg.pattern.Pattern_type.ROOT
and pattern.source == borgmatic.borg.pattern.Pattern_source.CONFIG
for pattern in contained_patterns
)
)
return tuple(sorted(subvolumes, key=lambda subvolume: subvolume.path))
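For intuition about the longest-first ordering, here's a simplified toy using plain path strings instead of Pattern instances (mount points hypothetical):

candidate_patterns = {'/var/log/syslog'}

for mount_point in ('/var/log', '/var'):  # Longest mount points first.
    contained = {
        pattern for pattern in candidate_patterns if pattern.startswith(mount_point)
    }
    candidate_patterns -= contained  # Consumed, so the parent can't claim them too.
    print(mount_point, sorted(contained))

# /var/log ['/var/log/syslog']
# /var []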
@ -151,8 +171,9 @@ def make_snapshot_exclude_pattern(subvolume_path): # pragma: no cover
subvolume_path.lstrip(os.path.sep),
snapshot_directory,
),
borgmatic.borg.pattern.Pattern_type.EXCLUDE,
borgmatic.borg.pattern.Pattern_type.NO_RECURSE,
borgmatic.borg.pattern.Pattern_style.FNMATCH,
source=borgmatic.borg.pattern.Pattern_source.HOOK,
)
@ -185,6 +206,7 @@ def make_borg_snapshot_pattern(subvolume_path, pattern):
pattern.type,
pattern.style,
pattern.device,
source=borgmatic.borg.pattern.Pattern_source.HOOK,
)
@ -211,38 +233,37 @@ def snapshot_subvolume(btrfs_command, subvolume_path, snapshot_path): # pragma:
def dump_data_sources(
hook_config,
config,
log_prefix,
config_paths,
borgmatic_runtime_directory,
patterns,
dry_run,
):
'''
Given a Btrfs configuration dict, a configuration dict, a log prefix, the borgmatic
configuration file paths, the borgmatic runtime directory, the configured patterns, and whether
this is a dry run, auto-detect and snapshot any Btrfs subvolume mount points listed in the given
patterns. Also update those patterns, replacing subvolume mount points with corresponding
snapshot directories so they get stored in the Borg archive instead. Use the log prefix in any
log entries.
Given a Btrfs configuration dict, a configuration dict, the borgmatic configuration file paths,
the borgmatic runtime directory, the configured patterns, and whether this is a dry run,
auto-detect and snapshot any Btrfs subvolume mount points listed in the given patterns. Also
update those patterns, replacing subvolume mount points with corresponding snapshot directories
so they get stored in the Borg archive instead.
Return an empty sequence, since there are no ongoing dump processes from this hook.
If this is a dry run, then don't actually snapshot anything.
'''
dry_run_label = ' (dry run; not actually snapshotting anything)' if dry_run else ''
logger.info(f'{log_prefix}: Snapshotting Btrfs subvolumes{dry_run_label}')
logger.info(f'Snapshotting Btrfs subvolumes{dry_run_label}')
# Based on the configured patterns, determine Btrfs subvolumes to backup.
# Based on the configured patterns, determine Btrfs subvolumes to backup. Only consider those
# patterns that came from actual user configuration (as opposed to, say, other hooks).
btrfs_command = hook_config.get('btrfs_command', 'btrfs')
findmnt_command = hook_config.get('findmnt_command', 'findmnt')
subvolumes = get_subvolumes(btrfs_command, findmnt_command, patterns)
if not subvolumes:
logger.warning(f'{log_prefix}: No Btrfs subvolumes found to snapshot{dry_run_label}')
logger.warning(f'No Btrfs subvolumes found to snapshot{dry_run_label}')
# Snapshot each subvolume, rewriting patterns to use their snapshot paths.
for subvolume in subvolumes:
logger.debug(f'{log_prefix}: Creating Btrfs snapshot for {subvolume.path} subvolume')
logger.debug(f'Creating Btrfs snapshot for {subvolume.path} subvolume')
snapshot_path = make_snapshot_path(subvolume.path)
@ -280,12 +301,11 @@ def delete_snapshot(btrfs_command, snapshot_path): # pragma: no cover
)
def remove_data_source_dumps(hook_config, config, log_prefix, borgmatic_runtime_directory, dry_run):
def remove_data_source_dumps(hook_config, config, borgmatic_runtime_directory, dry_run):
'''
Given a Btrfs configuration dict, a configuration dict, a log prefix, the borgmatic runtime
directory, and whether this is a dry run, delete any Btrfs snapshots created by borgmatic. Use
the log prefix in any log entries. If this is a dry run or Btrfs isn't configured in borgmatic's
configuration, then don't actually remove anything.
Given a Btrfs configuration dict, a configuration dict, the borgmatic runtime directory, and
whether this is a dry run, delete any Btrfs snapshots created by borgmatic. If this is a dry run
or Btrfs isn't configured in borgmatic's configuration, then don't actually remove anything.
'''
if hook_config is None:
return
@ -298,10 +318,10 @@ def remove_data_source_dumps(hook_config, config, log_prefix, borgmatic_runtime_
try:
all_subvolumes = get_subvolumes(btrfs_command, findmnt_command)
except FileNotFoundError as error:
logger.debug(f'{log_prefix}: Could not find "{error.filename}" command')
logger.debug(f'Could not find "{error.filename}" command')
return
except subprocess.CalledProcessError as error:
logger.debug(f'{log_prefix}: {error}')
logger.debug(error)
return
# Reversing the sorted subvolumes ensures that we remove longer mount point paths of child
@ -313,14 +333,14 @@ def remove_data_source_dumps(hook_config, config, log_prefix, borgmatic_runtime_
)
logger.debug(
f'{log_prefix}: Looking for snapshots to remove in {subvolume_snapshots_glob}{dry_run_label}'
f'Looking for snapshots to remove in {subvolume_snapshots_glob}{dry_run_label}'
)
for snapshot_path in glob.glob(subvolume_snapshots_glob):
if not os.path.isdir(snapshot_path):
continue
logger.debug(f'{log_prefix}: Deleting Btrfs snapshot {snapshot_path}{dry_run_label}')
logger.debug(f'Deleting Btrfs snapshot {snapshot_path}{dry_run_label}')
if dry_run:
continue
@ -328,19 +348,22 @@ def remove_data_source_dumps(hook_config, config, log_prefix, borgmatic_runtime_
try:
delete_snapshot(btrfs_command, snapshot_path)
except FileNotFoundError:
logger.debug(f'{log_prefix}: Could not find "{btrfs_command}" command')
logger.debug(f'Could not find "{btrfs_command}" command')
return
except subprocess.CalledProcessError as error:
logger.debug(f'{log_prefix}: {error}')
logger.debug(error)
return
# Strip off the subvolume path from the end of the snapshot path and then delete the
# resulting directory.
shutil.rmtree(snapshot_path.rsplit(subvolume.path, 1)[0])
# Remove the snapshot parent directory if it still exists. (It might not exist if the
# snapshot was for "/".)
snapshot_parent_dir = snapshot_path.rsplit(subvolume.path, 1)[0]
if os.path.isdir(snapshot_parent_dir):
shutil.rmtree(snapshot_parent_dir)
def make_data_source_dump_patterns(
hook_config, config, log_prefix, borgmatic_runtime_directory, name=None
hook_config, config, borgmatic_runtime_directory, name=None
): # pragma: no cover
'''
Restores aren't implemented, because stored files can be extracted directly with "extract".
@ -351,7 +374,6 @@ def make_data_source_dump_patterns(
def restore_data_source_dump(
hook_config,
config,
log_prefix,
data_source,
dry_run,
extract_process,

View File

@ -46,14 +46,14 @@ def create_named_pipe_for_dump(dump_path):
os.mkfifo(dump_path, mode=0o600)
def remove_data_source_dumps(dump_path, data_source_type_name, log_prefix, dry_run):
def remove_data_source_dumps(dump_path, data_source_type_name, dry_run):
'''
Remove all data source dumps in the given dump directory path (including the directory itself).
If this is a dry run, then don't actually remove anything.
'''
dry_run_label = ' (dry run; not actually removing anything)' if dry_run else ''
logger.debug(f'{log_prefix}: Removing {data_source_type_name} data source dumps{dry_run_label}')
logger.debug(f'Removing {data_source_type_name} data source dumps{dry_run_label}')
if dry_run:
return

View File

@ -1,5 +1,6 @@
import collections
import glob
import hashlib
import json
import logging
import os
@ -14,7 +15,7 @@ import borgmatic.hooks.data_source.snapshot
logger = logging.getLogger(__name__)
def use_streaming(hook_config, config, log_prefix): # pragma: no cover
def use_streaming(hook_config, config): # pragma: no cover
'''
Return whether dump streaming is used for this hook. (Spoiler: It isn't.)
'''
@ -33,7 +34,9 @@ def get_logical_volumes(lsblk_command, patterns=None):
between the current LVM logical volume mount points and the paths of any patterns. The idea is
that these pattern paths represent the requested logical volumes to snapshot.
If patterns is None, include all logical volume mount points, not just those in patterns.
Only include logical volumes that contain at least one root pattern sourced from borgmatic
configuration (as opposed to generated elsewhere in borgmatic). But if patterns is None, include
all logical volume mount points instead, not just those in patterns.
Return the result as a sequence of Logical_volume instances.
'''
@ -72,7 +75,12 @@ def get_logical_volumes(lsblk_command, patterns=None):
device['mountpoint'], candidate_patterns
),
)
if not patterns or contained_patterns
if not patterns
or any(
pattern.type == borgmatic.borg.pattern.Pattern_type.ROOT
and pattern.source == borgmatic.borg.pattern.Pattern_source.CONFIG
for pattern in contained_patterns
)
)
except KeyError as error:
raise ValueError(f'Invalid {lsblk_command} output: Missing key "{error}"')
@ -124,10 +132,14 @@ def mount_snapshot(mount_command, snapshot_device, snapshot_mount_path): # prag
)
def make_borg_snapshot_pattern(pattern, normalized_runtime_directory):
MOUNT_POINT_HASH_LENGTH = 10
def make_borg_snapshot_pattern(pattern, logical_volume, normalized_runtime_directory):
'''
Given a Borg pattern as a borgmatic.borg.pattern.Pattern instance, return a new Pattern with its
path rewritten to be in a snapshot directory based on the given runtime directory.
Given a Borg pattern as a borgmatic.borg.pattern.Pattern instance and a Logical_volume
containing it, return a new Pattern with its path rewritten to be in a snapshot directory based
on both the given runtime directory and the given Logical_volume's mount point.
Move any initial caret in a regular expression pattern path to the beginning, so as not to break
the regular expression.
@ -142,6 +154,13 @@ def make_borg_snapshot_pattern(pattern, normalized_runtime_directory):
rewritten_path = initial_caret + os.path.join(
normalized_runtime_directory,
'lvm_snapshots',
# Including this hash prevents conflicts between snapshot patterns for different logical
# volumes. For instance, without this, snapshotting a logical volume at /var and another at
# /var/spool would result in overlapping snapshot patterns and therefore colliding mount
# attempts.
hashlib.shake_256(logical_volume.mount_point.encode('utf-8')).hexdigest(
MOUNT_POINT_HASH_LENGTH
),
'.', # Borg 1.4+ "slashdot" hack.
# Included so that the source directory ends up in the Borg archive at its "original" path.
pattern.path.lstrip('^').lstrip(os.path.sep),
@ -152,6 +171,7 @@ def make_borg_snapshot_pattern(pattern, normalized_runtime_directory):
pattern.type,
pattern.style,
pattern.device,
source=borgmatic.borg.pattern.Pattern_source.HOOK,
)
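To see why the hash keeps nested mount points from colliding, a quick check using the constant above:

import hashlib

MOUNT_POINT_HASH_LENGTH = 10

def mount_point_digest(mount_point):
    return hashlib.shake_256(mount_point.encode('utf-8')).hexdigest(MOUNT_POINT_HASH_LENGTH)

# /var and /var/spool now land in distinct snapshot directories.
assert mount_point_digest('/var') != mount_point_digest('/var/spool')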
@ -161,28 +181,27 @@ DEFAULT_SNAPSHOT_SIZE = '10%ORIGIN'
def dump_data_sources(
hook_config,
config,
log_prefix,
config_paths,
borgmatic_runtime_directory,
patterns,
dry_run,
):
'''
Given an LVM configuration dict, a configuration dict, a log prefix, the borgmatic configuration
file paths, the borgmatic runtime directory, the configured patterns, and whether this is a dry
run, auto-detect and snapshot any LVM logical volume mount points listed in the given patterns.
Also update those patterns, replacing logical volume mount points with corresponding snapshot
directories so they get stored in the Borg archive instead. Use the log prefix in any log
entries.
Given an LVM configuration dict, a configuration dict, the borgmatic configuration file paths,
the borgmatic runtime directory, the configured patterns, and whether this is a dry run,
auto-detect and snapshot any LVM logical volume mount points listed in the given patterns. Also
update those patterns, replacing logical volume mount points with corresponding snapshot
directories so they get stored in the Borg archive instead.
Return an empty sequence, since there are no ongoing dump processes from this hook.
If this is a dry run, then don't actually snapshot anything.
'''
dry_run_label = ' (dry run; not actually snapshotting anything)' if dry_run else ''
logger.info(f'{log_prefix}: Snapshotting LVM logical volumes{dry_run_label}')
logger.info(f'Snapshotting LVM logical volumes{dry_run_label}')
# List logical volumes to get their mount points.
# List logical volumes to get their mount points, but only consider those patterns that came
# from actual user configuration (as opposed to, say, other hooks).
lsblk_command = hook_config.get('lsblk_command', 'lsblk')
requested_logical_volumes = get_logical_volumes(lsblk_command, patterns)
@ -191,12 +210,12 @@ def dump_data_sources(
normalized_runtime_directory = os.path.normpath(borgmatic_runtime_directory)
if not requested_logical_volumes:
logger.warning(f'{log_prefix}: No LVM logical volumes found to snapshot{dry_run_label}')
logger.warning(f'No LVM logical volumes found to snapshot{dry_run_label}')
for logical_volume in requested_logical_volumes:
snapshot_name = f'{logical_volume.name}_{snapshot_suffix}'
logger.debug(
f'{log_prefix}: Creating LVM snapshot {snapshot_name} of {logical_volume.mount_point}{dry_run_label}'
f'Creating LVM snapshot {snapshot_name} of {logical_volume.mount_point}{dry_run_label}'
)
if not dry_run:
@ -220,11 +239,14 @@ def dump_data_sources(
snapshot_mount_path = os.path.join(
normalized_runtime_directory,
'lvm_snapshots',
hashlib.shake_256(logical_volume.mount_point.encode('utf-8')).hexdigest(
MOUNT_POINT_HASH_LENGTH
),
logical_volume.mount_point.lstrip(os.path.sep),
)
logger.debug(
f'{log_prefix}: Mounting LVM snapshot {snapshot_name} at {snapshot_mount_path}{dry_run_label}'
f'Mounting LVM snapshot {snapshot_name} at {snapshot_mount_path}{dry_run_label}'
)
if dry_run:
@ -235,7 +257,9 @@ def dump_data_sources(
)
for pattern in logical_volume.contained_patterns:
snapshot_pattern = make_borg_snapshot_pattern(pattern, normalized_runtime_directory)
snapshot_pattern = make_borg_snapshot_pattern(
pattern, logical_volume, normalized_runtime_directory
)
# Attempt to update the pattern in place, since pattern order matters to Borg.
try:
@ -312,12 +336,12 @@ def get_snapshots(lvs_command, snapshot_name=None):
raise ValueError(f'Invalid {lvs_command} output: Missing key "{error}"')
def remove_data_source_dumps(hook_config, config, log_prefix, borgmatic_runtime_directory, dry_run):
def remove_data_source_dumps(hook_config, config, borgmatic_runtime_directory, dry_run):
'''
Given an LVM configuration dict, a configuration dict, a log prefix, the borgmatic runtime
directory, and whether this is a dry run, unmount and delete any LVM snapshots created by
borgmatic. Use the log prefix in any log entries. If this is a dry run or LVM isn't configured
in borgmatic's configuration, then don't actually remove anything.
Given an LVM configuration dict, a configuration dict, the borgmatic runtime directory, and
whether this is a dry run, unmount and delete any LVM snapshots created by borgmatic. If this is
a dry run or LVM isn't configured in borgmatic's configuration, then don't actually remove
anything.
'''
if hook_config is None:
return
@ -328,10 +352,10 @@ def remove_data_source_dumps(hook_config, config, log_prefix, borgmatic_runtime_
try:
logical_volumes = get_logical_volumes(hook_config.get('lsblk_command', 'lsblk'))
except FileNotFoundError as error:
logger.debug(f'{log_prefix}: Could not find "{error.filename}" command')
logger.debug(f'Could not find "{error.filename}" command')
return
except subprocess.CalledProcessError as error:
logger.debug(f'{log_prefix}: {error}')
logger.debug(error)
return
snapshots_glob = os.path.join(
@ -339,10 +363,9 @@ def remove_data_source_dumps(hook_config, config, log_prefix, borgmatic_runtime_
os.path.normpath(borgmatic_runtime_directory),
),
'lvm_snapshots',
'*',
)
logger.debug(
f'{log_prefix}: Looking for snapshots to remove in {snapshots_glob}{dry_run_label}'
)
logger.debug(f'Looking for snapshots to remove in {snapshots_glob}{dry_run_label}')
umount_command = hook_config.get('umount_command', 'umount')
for snapshots_directory in glob.glob(snapshots_glob):
@ -353,7 +376,10 @@ def remove_data_source_dumps(hook_config, config, log_prefix, borgmatic_runtime_
snapshot_mount_path = os.path.join(
snapshots_directory, logical_volume.mount_point.lstrip(os.path.sep)
)
if not os.path.isdir(snapshot_mount_path):
# If the snapshot mount path is empty, this is probably just a "shadow" of a nested
# logical volume and therefore there's nothing to unmount.
if not os.path.isdir(snapshot_mount_path) or not os.listdir(snapshot_mount_path):
continue
# This might fail if the directory is already mounted, but we swallow errors here since
@ -366,9 +392,7 @@ def remove_data_source_dumps(hook_config, config, log_prefix, borgmatic_runtime_
if not os.path.isdir(snapshot_mount_path):
continue
logger.debug(
f'{log_prefix}: Unmounting LVM snapshot at {snapshot_mount_path}{dry_run_label}'
)
logger.debug(f'Unmounting LVM snapshot at {snapshot_mount_path}{dry_run_label}')
if dry_run:
continue
@ -376,11 +400,11 @@ def remove_data_source_dumps(hook_config, config, log_prefix, borgmatic_runtime_
try:
unmount_snapshot(umount_command, snapshot_mount_path)
except FileNotFoundError:
logger.debug(f'{log_prefix}: Could not find "{umount_command}" command')
logger.debug(f'Could not find "{umount_command}" command')
return
except subprocess.CalledProcessError as error:
logger.debug(f'{log_prefix}: {error}')
return
logger.debug(error)
continue
if not dry_run:
shutil.rmtree(snapshots_directory)
@ -391,10 +415,10 @@ def remove_data_source_dumps(hook_config, config, log_prefix, borgmatic_runtime_
try:
snapshots = get_snapshots(hook_config.get('lvs_command', 'lvs'))
except FileNotFoundError as error:
logger.debug(f'{log_prefix}: Could not find "{error.filename}" command')
logger.debug(f'Could not find "{error.filename}" command')
return
except subprocess.CalledProcessError as error:
logger.debug(f'{log_prefix}: {error}')
logger.debug(error)
return
for snapshot in snapshots:
@ -402,14 +426,14 @@ def remove_data_source_dumps(hook_config, config, log_prefix, borgmatic_runtime_
if not snapshot.name.split('_')[-1].startswith(BORGMATIC_SNAPSHOT_PREFIX):
continue
logger.debug(f'{log_prefix}: Deleting LVM snapshot {snapshot.name}{dry_run_label}')
logger.debug(f'Deleting LVM snapshot {snapshot.name}{dry_run_label}')
if not dry_run:
remove_snapshot(lvremove_command, snapshot.device_path)
def make_data_source_dump_patterns(
hook_config, config, log_prefix, borgmatic_runtime_directory, name=None
hook_config, config, borgmatic_runtime_directory, name=None
): # pragma: no cover
'''
Restores aren't implemented, because stored files can be extracted directly with "extract".
@ -420,7 +444,6 @@ def make_data_source_dump_patterns(
def restore_data_source_dump(
hook_config,
config,
log_prefix,
data_source,
dry_run,
extract_process,

View File

@ -1,10 +1,12 @@
import copy
import logging
import os
import re
import shlex
import borgmatic.borg.pattern
import borgmatic.config.paths
import borgmatic.hooks.credential.parse
from borgmatic.execute import (
execute_command,
execute_command_and_capture_output,
@ -22,14 +24,92 @@ def make_dump_path(base_directory): # pragma: no cover
return dump.make_data_source_dump_path(base_directory, 'mariadb_databases')
SYSTEM_DATABASE_NAMES = ('information_schema', 'mysql', 'performance_schema', 'sys')
DEFAULTS_EXTRA_FILE_FLAG_PATTERN = re.compile('^--defaults-extra-file=(?P<filename>.*)$')
def database_names_to_dump(database, extra_environment, log_prefix, dry_run):
def parse_extra_options(extra_options):
'''
Given a requested database config, return the corresponding sequence of database names to dump.
In the case of "all", query for the names of databases on the configured host and return them,
excluding any system databases that will cause problems during restore.
Given an extra options string, split the options into a tuple and return it. Additionally, if
the first option is "--defaults-extra-file=...", then remove it from the options and return the
filename.
So the return value is a tuple of: (parsed options, defaults extra filename).
The intent is to support downstream merging of multiple "--defaults-extra-file"s, as
MariaDB/MySQL only allows one at a time.
'''
split_extra_options = tuple(shlex.split(extra_options)) if extra_options else ()
if not split_extra_options:
return ((), None)
match = DEFAULTS_EXTRA_FILE_FLAG_PATTERN.match(split_extra_options[0])
if not match:
return (split_extra_options, None)
return (split_extra_options[1:], match.group('filename'))
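A sketch of the parsing described above (filenames hypothetical):

assert parse_extra_options('--defaults-extra-file=/etc/extra.cnf --skip-ssl') == (
    ('--skip-ssl',),
    '/etc/extra.cnf',
)
assert parse_extra_options('--skip-ssl') == (('--skip-ssl',), None)
assert parse_extra_options(None) == ((), None)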
def make_defaults_file_options(username=None, password=None, defaults_extra_filename=None):
'''
Given a database username and/or password, write them to an anonymous pipe and return the flags
for passing that file descriptor to an executed command. The idea is that this is a more secure
way to transmit credentials to a database client than using an environment variable.
If no username or password are given, then return the options for the given defaults extra
filename (if any). But if there is a username and/or password and a defaults extra filename is
given, then "!include" it from the generated file, effectively allowing multiple defaults extra
files.
Do not use the returned value for multiple different command invocations. That will not work
because each pipe is "used up" once read.
'''
escaped_password = None if password is None else password.replace('\\', '\\\\')
values = '\n'.join(
(
(f'user={username}' if username is not None else ''),
(f'password="{escaped_password}"' if escaped_password is not None else ''),
)
).strip()
if not values:
if defaults_extra_filename:
return (f'--defaults-extra-file={defaults_extra_filename}',)
return ()
fields_message = ' and '.join(
field_name
for field_name in (
(f'username ({username})' if username is not None else None),
('password' if password is not None else None),
)
if field_name is not None
)
include_message = f' (including {defaults_extra_filename})' if defaults_extra_filename else ''
logger.debug(f'Writing database {fields_message} to defaults extra file pipe{include_message}')
include = f'!include {defaults_extra_filename}\n' if defaults_extra_filename else ''
read_file_descriptor, write_file_descriptor = os.pipe()
os.write(write_file_descriptor, f'{include}[client]\n{values}'.encode('utf-8'))
os.close(write_file_descriptor)
# This plus subprocess.Popen(..., close_fds=False) in execute.py is necessary for the database
# client child process to inherit the file descriptor.
os.set_inheritable(read_file_descriptor, True)
return (f'--defaults-extra-file=/dev/fd/{read_file_descriptor}',)
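A minimal sketch of reading back the generated defaults file through the pipe, the way a spawned database client would via /dev/fd (credentials hypothetical):

import os

(flag,) = make_defaults_file_options(username='backup', password='hunter2')
read_file_descriptor = int(flag.rsplit('/', 1)[-1])
print(os.read(read_file_descriptor, 1024).decode('utf-8'))
# [client]
# user=backup
# password="hunter2"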
def database_names_to_dump(database, config, username, password, environment, dry_run):
'''
Given a requested database config, a configuration dict, a database username and password, an
environment dict, and whether this is a dry run, return the corresponding sequence of database
names to dump. In the case of "all", query for the names of databases on the configured host and
return them, excluding any system databases that will cause problems during restore.
'''
if database['name'] != 'all':
return (database['name'],)
@ -39,20 +119,23 @@ def database_names_to_dump(database, extra_environment, log_prefix, dry_run):
mariadb_show_command = tuple(
shlex.quote(part) for part in shlex.split(database.get('mariadb_command') or 'mariadb')
)
extra_options, defaults_extra_filename = parse_extra_options(database.get('list_options'))
show_command = (
mariadb_show_command
+ (tuple(database['list_options'].split(' ')) if 'list_options' in database else ())
+ make_defaults_file_options(username, password, defaults_extra_filename)
+ extra_options
+ (('--host', database['hostname']) if 'hostname' in database else ())
+ (('--port', str(database['port'])) if 'port' in database else ())
+ (('--protocol', 'tcp') if 'hostname' in database or 'port' in database else ())
+ (('--user', database['username']) if 'username' in database else ())
+ (('--ssl',) if database.get('tls') is True else ())
+ (('--skip-ssl',) if database.get('tls') is False else ())
+ ('--skip-column-names', '--batch')
+ ('--execute', 'show schemas')
)
logger.debug(f'{log_prefix}: Querying for "all" MariaDB databases to dump')
show_output = execute_command_and_capture_output(
show_command, extra_environment=extra_environment
)
logger.debug('Querying for "all" MariaDB databases to dump')
show_output = execute_command_and_capture_output(show_command, environment=environment)
return tuple(
show_name
@ -61,13 +144,23 @@ def database_names_to_dump(database, extra_environment, log_prefix, dry_run):
)
SYSTEM_DATABASE_NAMES = ('information_schema', 'mysql', 'performance_schema', 'sys')
def execute_dump_command(
database, log_prefix, dump_path, database_names, extra_environment, dry_run, dry_run_label
database,
config,
username,
password,
dump_path,
database_names,
environment,
dry_run,
dry_run_label,
):
'''
Kick off a dump for the given MariaDB database (provided as a configuration dict) to a named
pipe constructed from the given dump path and database name. Use the given log prefix in any
log entries.
pipe constructed from the given dump path and database name.
Return a subprocess.Popen instance for the dump process ready to spew to a named pipe. But if
this is a dry run, then don't actually dump anything and return None.
@ -82,7 +175,7 @@ def execute_dump_command(
if os.path.exists(dump_filename):
logger.warning(
f'{log_prefix}: Skipping duplicate dump of MariaDB database "{database_name}" to {dump_filename}'
f'Skipping duplicate dump of MariaDB database "{database_name}" to {dump_filename}'
)
return None
@ -90,22 +183,23 @@ def execute_dump_command(
shlex.quote(part)
for part in shlex.split(database.get('mariadb_dump_command') or 'mariadb-dump')
)
extra_options, defaults_extra_filename = parse_extra_options(database.get('options'))
dump_command = (
mariadb_dump_command
+ (tuple(database['options'].split(' ')) if 'options' in database else ())
+ make_defaults_file_options(username, password, defaults_extra_filename)
+ extra_options
+ (('--add-drop-database',) if database.get('add_drop_database', True) else ())
+ (('--host', database['hostname']) if 'hostname' in database else ())
+ (('--port', str(database['port'])) if 'port' in database else ())
+ (('--protocol', 'tcp') if 'hostname' in database or 'port' in database else ())
+ (('--user', database['username']) if 'username' in database else ())
+ (('--ssl',) if database.get('tls') is True else ())
+ (('--skip-ssl',) if database.get('tls') is False else ())
+ ('--databases',)
+ database_names
+ ('--result-file', dump_filename)
)
logger.debug(
f'{log_prefix}: Dumping MariaDB database "{database_name}" to {dump_filename}{dry_run_label}'
)
logger.debug(f'Dumping MariaDB database "{database_name}" to {dump_filename}{dry_run_label}')
if dry_run:
return None
@ -113,19 +207,19 @@ def execute_dump_command(
return execute_command(
dump_command,
extra_environment=extra_environment,
environment=environment,
run_to_completion=False,
)
def get_default_port(databases, config, log_prefix): # pragma: no cover
def get_default_port(databases, config): # pragma: no cover
return 3306
def use_streaming(databases, config, log_prefix):
def use_streaming(databases, config):
'''
Given a sequence of MariaDB database configuration dicts, a configuration dict (ignored), and a
log prefix (ignored), return whether streaming will be used during dumps.
Given a sequence of MariaDB database configuration dicts and a configuration dict (ignored), return
whether streaming will be used during dumps.
'''
return any(databases)
@ -133,7 +227,6 @@ def use_streaming(databases, config, log_prefix):
def dump_data_sources(
databases,
config,
log_prefix,
config_paths,
borgmatic_runtime_directory,
patterns,
@ -142,8 +235,7 @@ def dump_data_sources(
'''
Dump the given MariaDB databases to a named pipe. The databases are supplied as a sequence of
dicts, one dict describing each database as per the configuration schema. Use the given
borgmatic runtime directory to construct the destination path and the given log prefix in any
log entries.
borgmatic runtime directory to construct the destination path.
Return a sequence of subprocess.Popen instances for the dump processes ready to spew to a named
pipe. But if this is a dry run, then don't actually dump anything and return an empty sequence.
@ -153,13 +245,19 @@ def dump_data_sources(
dry_run_label = ' (dry run; not actually dumping anything)' if dry_run else ''
processes = []
logger.info(f'{log_prefix}: Dumping MariaDB databases{dry_run_label}')
logger.info(f'Dumping MariaDB databases{dry_run_label}')
for database in databases:
dump_path = make_dump_path(borgmatic_runtime_directory)
extra_environment = {'MYSQL_PWD': database['password']} if 'password' in database else None
username = borgmatic.hooks.credential.parse.resolve_credential(
database.get('username'), config
)
password = borgmatic.hooks.credential.parse.resolve_credential(
database.get('password'), config
)
environment = dict(os.environ)
dump_database_names = database_names_to_dump(
database, extra_environment, log_prefix, dry_run
database, config, username, password, environment, dry_run
)
if not dump_database_names:
@ -175,10 +273,12 @@ def dump_data_sources(
processes.append(
execute_dump_command(
renamed_database,
log_prefix,
config,
username,
password,
dump_path,
(dump_name,),
extra_environment,
environment,
dry_run,
dry_run_label,
)
@ -187,10 +287,12 @@ def dump_data_sources(
processes.append(
execute_dump_command(
database,
log_prefix,
config,
username,
password,
dump_path,
dump_database_names,
extra_environment,
environment,
dry_run,
dry_run_label,
)
@ -199,7 +301,8 @@ def dump_data_sources(
if not dry_run:
patterns.append(
borgmatic.borg.pattern.Pattern(
os.path.join(borgmatic_runtime_directory, 'mariadb_databases')
os.path.join(borgmatic_runtime_directory, 'mariadb_databases'),
source=borgmatic.borg.pattern.Pattern_source.HOOK,
)
)
@ -207,25 +310,23 @@ def dump_data_sources(
def remove_data_source_dumps(
databases, config, log_prefix, borgmatic_runtime_directory, dry_run
databases, config, borgmatic_runtime_directory, dry_run
): # pragma: no cover
'''
Remove all database dump files for this hook regardless of the given databases. Use the
borgmatic_runtime_directory to construct the destination path and the log prefix in any log
entries. If this is a dry run, then don't actually remove anything.
borgmatic_runtime_directory to construct the destination path. If this is a dry run, then don't
actually remove anything.
'''
dump.remove_data_source_dumps(
make_dump_path(borgmatic_runtime_directory), 'MariaDB', log_prefix, dry_run
)
dump.remove_data_source_dumps(make_dump_path(borgmatic_runtime_directory), 'MariaDB', dry_run)
def make_data_source_dump_patterns(
databases, config, log_prefix, borgmatic_runtime_directory, name=None
databases, config, borgmatic_runtime_directory, name=None
): # pragma: no cover
'''
Given a sequence of configuration dicts, a configuration dict, a prefix to log with, the
borgmatic runtime directory, and a database name to match, return the corresponding glob
patterns to match the database dump in an archive.
Given a sequence of configuration dicts, a configuration dict, the borgmatic runtime directory,
and a database name to match, return the corresponding glob patterns to match the database dump
in an archive.
'''
borgmatic_source_directory = borgmatic.config.paths.get_borgmatic_source_directory(config)
@ -243,7 +344,6 @@ def make_data_source_dump_patterns(
def restore_data_source_dump(
hook_config,
config,
log_prefix,
data_source,
dry_run,
extract_process,
@ -252,9 +352,9 @@ def restore_data_source_dump(
):
'''
Restore a database from the given extract stream. The database is supplied as a data source
configuration dict, but the given hook configuration is ignored. The given log prefix is used
for any log entries. If this is a dry run, then don't actually restore anything. Trigger the
given active extract process (an instance of subprocess.Popen) to produce output to consume.
configuration dict, but the given hook configuration is ignored. If this is a dry run, then
don't actually restore anything. Trigger the given active extract process (an instance of
subprocess.Popen) to produce output to consume.
'''
dry_run_label = ' (dry run; not actually restoring anything)' if dry_run else ''
hostname = connection_params['hostname'] or data_source.get(
@ -263,32 +363,40 @@ def restore_data_source_dump(
port = str(
connection_params['port'] or data_source.get('restore_port', data_source.get('port', ''))
)
username = connection_params['username'] or data_source.get(
'restore_username', data_source.get('username')
tls = data_source.get('restore_tls', data_source.get('tls'))
username = borgmatic.hooks.credential.parse.resolve_credential(
(
connection_params['username']
or data_source.get('restore_username', data_source.get('username'))
),
config,