Compare commits

...

247 Commits

Author SHA1 Message Date
Sébastien MB b63c854509 Fix escaped environment variable in configuration
- when an env variable is escaped in the configuration file, we expect
  not to resolve it and remove the escape char `\`
2022-06-17 09:50:56 +02:00
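To illustrate the fix described above, a minimal configuration sketch (the `encryption_passphrase` option is a standard borgmatic setting; the escaping behavior is as described in this commit):

```yaml
storage:
    # The backslash prevents resolution: borgmatic removes the escape
    # character and keeps the literal text "${PASSPHRASE}".
    encryption_passphrase: \${PASSPHRASE}
```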
Dan Helfman aa013af25e Remove some whitespace around "New in version ..." documentation labels. 2022-06-16 20:49:15 -07:00
Dan Helfman cc32f0018b Start formalizing how new features are flagged by version in documentation. 2022-06-16 20:23:16 -07:00
Dan Helfman dfc4db1860 Document environment variable interpolation (#546). 2022-06-16 15:30:53 -07:00
Dan Helfman 35706604ea Upgrade documentation base images. 2022-06-16 15:22:59 -07:00
Dan Helfman 6d76e8e5cb Code formatting. 2022-06-16 14:21:18 -07:00
Dan Helfman aecb6fcd74 Code style, rename command-line flag, and move new code into its own file (#546) 2022-06-16 11:35:24 -07:00
Dan Helfman ea45f6c4c8 Environment variable resolution in configuration file (#546).
Reviewed-on: borgmatic-collective/borgmatic#548
2022-06-16 18:18:12 +00:00
Sébastien MB 97b5cd089d Allow environment variable resolution in configuration file
- all string fields containing an environment variable like ${FOO} will
  be resolved
- supported format ${FOO}, ${FOO:-bar} and ${FOO-bar} to allow default
  values if variable is not present in environment
- add --no-env argument for CLI to disable the feature which is enabled
  by default

Resolves: #546
2022-06-16 18:52:54 +02:00
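A short sketch of the supported syntax from the commit message above, assuming `PASSPHRASE` and `PGUSER` may be exported in borgmatic's environment (the `encryption_passphrase` and database `username` options are standard borgmatic settings; the values are illustrative):

```yaml
storage:
    # Resolved from the PASSPHRASE environment variable at runtime.
    encryption_passphrase: ${PASSPHRASE}

hooks:
    postgresql_databases:
        - name: users
          # Falls back to the default "postgres" when PGUSER is not set.
          username: ${PGUSER:-postgres}
```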
Dan Helfman f2c2f3139e Add periods to ntfy config descriptions. 2022-06-10 09:42:41 -07:00
Dan Helfman dc4e7093e5 Remove link to related software that hasn't seen updates in the past couple years. 2022-06-09 19:31:50 -07:00
Dan Helfman b6f1025ecb Bump version for release. 2022-06-09 16:38:34 -07:00
Dan Helfman 65b2fe86c6 Fix Bash completion script to no longer alter your shell's settings. 2022-06-09 16:29:54 -07:00
Dan Helfman 0e90a80680 Add links in documentation for ntfy monitoring hook (#543). 2022-06-09 13:41:22 -07:00
Dan Helfman 7648bcff39 Add a hook for sending push notifications via ntfy.sh.
Reviewed-on: borgmatic-collective/borgmatic#543
2022-06-09 20:26:06 +00:00
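As a rough sketch of what enabling the hook might look like (the `topic` and `server` option names here are assumptions, not confirmed by this log; see the ntfy hook documentation for the authoritative schema):

```yaml
hooks:
    ntfy:
        # Assumed options: the ntfy topic to publish to and the server to use.
        topic: my-borgmatic-backups
        server: https://ntfy.sh
```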
Gavin Chappell a8b8d507b6 add a hook for sending push notifications via ntfy.sh 2022-06-09 21:10:38 +01:00
Dan Helfman 3561c93d74 Fix Healthchecks tests that leak global state, breaking downstream tests (discovered in #543). 2022-06-09 11:05:44 -07:00
Dan Helfman 331a503a25 Document the borgmatic version in which "borgmatic list --find" is available (#541). 2022-06-03 16:55:54 -07:00
Dan Helfman 9aefb5179f Fix None find paths (#541). 2022-06-03 15:20:05 -07:00
Dan Helfman d14f22e121 Add "borgmatic list --find" flag for searching for files across multiple archives (#541). 2022-06-03 15:12:14 -07:00
Dan Helfman b6893f6455 Exclude deprecated "borg list --successful" flag from getting passed to Borg. 2022-06-02 21:14:25 -07:00
Dan Helfman 80ec3e7d97 Deprecate "borgmatic list --successful" flag, as listing only non-checkpoint (successful) archives is now the default in newer versions of Borg. 2022-06-02 20:35:39 -07:00
Dan Helfman cd834311eb Clarify completion docs. 2022-06-01 10:57:23 -07:00
Dan Helfman d751cceeb0 Merge branch 'master' of ssh://projects.torsion.org:3022/borgmatic-collective/borgmatic 2022-06-01 10:38:05 -07:00
Dan Helfman ce78b07e4b Add macOS to install and Bash completion documentation.
Reviewed-on: borgmatic-collective/borgmatic#540
2022-06-01 17:37:51 +00:00
adidalal 87f3c50931 setup: add macOS 2022-06-01 15:56:40 +00:00
Dan Helfman 8e9e06afe6 Bump version for release. 2022-05-31 09:41:20 -07:00
Dan Helfman 2bc91ac3d2 Add "generate-borgmatic-config --overwrite" flag to replace an existing destination file (#539). 2022-05-29 16:03:55 -07:00
Dan Helfman 5b615d51a4 Add support for "borgmatic borg debug" command (#538). 2022-05-29 15:43:03 -07:00
Dan Helfman c7f5d5fd0b Fix broken Bash completion of filenames, as in "-c config.yaml". 2022-05-29 10:49:33 -07:00
Dan Helfman 6ef7538eb0 Fix typo in Bash completions script. 2022-05-28 19:34:13 -07:00
Dan Helfman 8fa90053cf Add "borgmatic check --force" flag to ignore configured check frequencies (#523). 2022-05-28 19:29:33 -07:00
Dan Helfman b3682b61d1 Add another note about the consistency checks schema in old versions (#523). 2022-05-28 19:03:45 -07:00
Dan Helfman ad0e2e0d7c Tweak default check frequency to 1 month (#523). 2022-05-28 15:49:50 -07:00
Dan Helfman 6629f40cab In bash completion script, warn when script is out of date using script contents instead of version. (Fewer spurious warnings that way.) 2022-05-28 15:27:11 -07:00
Dan Helfman e76bfa555f Reduce the default consistency check frequency and support configuring the frequency independently for each check (#523). 2022-05-28 14:42:19 -07:00
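The resulting per-check frequency schema, as it appears in this range's README diff further down this page:

```yaml
consistency:
    checks:
        - name: repository
        - name: archives
          frequency: 2 weeks
```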
Dan Helfman 8ddb7268eb Reuse "borg info" function. 2022-05-27 13:51:11 -07:00
Dan Helfman cb5fe02ebd Fix broken Bash completion end-to-end test. 2022-05-26 11:18:46 -07:00
Dan Helfman 77b84f8a48 Add Bash completion script so you can tab-complete the borgmatic command-line. 2022-05-26 10:27:53 -07:00
Dan Helfman 691ec96909 Fix python_requires to support all versions of 3.7 (#537).
Reviewed-on: borgmatic-collective/borgmatic#537
2022-05-26 15:51:46 +00:00
Steve Atwell 29b4666205 Fix python_requires to support all versions of 3.7
This is the standard way to support "Python 3.7 and newer" and it also
fixes use of borgmatic with some tools that do custom dependency
resolution.  E.g., using pex with --platform.
2022-05-26 07:05:04 -07:00
Dan Helfman 316a22701f Add documentation note about multiple merge limitation (#380). 2022-05-25 23:12:42 -07:00
Dan Helfman be59a3e574 Fix generate-borgmatic-config with "--source" flag to support more complex schema changes like the new Healthchecks configuration options (#536). 2022-05-25 10:26:26 -07:00
Dan Helfman 37327379bc Merge branch 'master' of ssh://projects.torsion.org:3022/borgmatic-collective/borgmatic 2022-05-24 17:50:57 -07:00
Dan Helfman 22c2f13611 Remove trailing whitespace (#535).
Reviewed-on: borgmatic-collective/borgmatic#535
2022-05-25 00:50:12 +00:00
polyzen 8708ca07f4 Remove trailing whitespace 2022-05-25 00:43:40 +00:00
Dan Helfman 634d9e4946 Bump version for release. 2022-05-24 16:22:37 -07:00
Dan Helfman 54933ebef5 Change connection failures for monitoring hooks to be warnings instead of errors (#439). 2022-05-24 15:50:04 -07:00
Dan Helfman 157e59ac88 Add Healthchecks monitoring hook "send_logs" option to enable/disable sending borgmatic logs to the Healthchecks server (#460). 2022-05-24 14:44:33 -07:00
Dan Helfman 666f0dd751 Add missing Healthchecks "states" option example in configuration schema (#525). 2022-05-24 14:17:19 -07:00
Dan Helfman 8b179e4647 Reverse logic of Healthchecks "skip_states" option to just "states" (#525). 2022-05-24 14:09:42 -07:00
Dan Helfman 865eff7d98 Add Healthchecks monitoring hook "skip_states" option to disable pinging for particular monitoring states (#525). 2022-05-24 13:59:28 -07:00
Dan Helfman b9741f4d0b Add Healthchecks monitoring hook "ping_body_limit" option to configure how many bytes of logs to send to the Healthchecks server (#294). 2022-05-24 12:23:38 -07:00
Dan Helfman 02781662f8 Change monitoring hooks to specify the ping URL / integration key as a named option. 2022-05-23 20:02:10 -07:00
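Taken together, these commits yield a Healthchecks hook shaped roughly like this sketch (option names come from the commit messages above; the values are illustrative and the ping URL is a placeholder):

```yaml
hooks:
    healthchecks:
        # The ping URL is now a named option rather than a bare string.
        ping_url: https://hc-ping.com/your-check-uuid
        # Bytes of borgmatic logs to send to the Healthchecks server.
        ping_body_limit: 100000
        # Only ping for these monitoring states.
        states:
            - start
            - finish
        # Enable/disable sending borgmatic logs to the Healthchecks server.
        send_logs: true
```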
Dan Helfman 32a1043468 Remove the error when "archive_name_format" is specified but a retention prefix isn't (#402). 2022-05-23 16:11:24 -07:00
Dan Helfman 3e4aeec649 Warn when an unsupported variable is used in a hook command (#420). 2022-05-23 15:27:54 -07:00
Dan Helfman b98b827594 Remove stale comment. 2022-05-23 10:59:56 -07:00
Dan Helfman 255cc6ec23 When deep merging common configuration, merge colliding list values by appending them (#531). 2022-05-20 15:28:28 -07:00
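For example (a sketch using borgmatic's documented `!include` merge syntax; the file paths are hypothetical):

```yaml
# /etc/borgmatic/common.yaml, a shared include:
location:
    source_directories:
        - /etc

# A configuration file that includes it:
<<: !include /etc/borgmatic/common.yaml
location:
    source_directories:
        - /home

# With this change, colliding lists are appended during deep merging, so
# the effective source_directories contains both /etc and /home.
```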
Dan Helfman 51fc37d57a Improve the error message when a configuration override contains an invalid value (#528). 2022-05-20 13:38:53 -07:00
Dan Helfman 1921f55a9d Add emojis to documentation table of contents to make it easier to find particular how-to and reference guides at a glance. 2022-05-20 11:11:35 -07:00
Dan Helfman fbd381fcc1 Clarify manual database extraction documentation. 2022-05-20 10:06:19 -07:00
Dan Helfman cd88f9f2ea Better explain where to find the dump file when doing a manual restore (#510).
Reviewed-on: borgmatic-collective/borgmatic#510
2022-05-20 16:33:21 +00:00
Dan Helfman 788281cfb9 When a configuration include is a relative path, load it from either the current working directory or from the directory containing the file doing the including (#532). 2022-05-19 17:15:05 -07:00
Dan Helfman cd234b689d Link to additional borgmatic Docker image. 2022-05-12 12:00:12 -07:00
Dan Helfman 92354a77ee Mention that database dumps consumed disk space prior to borgmatic 1.5.3. 2022-05-09 16:08:47 -07:00
Dan Helfman 48ff3e70d1 Clarify documentation about include merging mappings vs. values. 2022-05-08 14:48:42 -07:00
Dan Helfman 7e9adfb899 Add NEWS entry for randomized systemd timer delay. 2022-05-07 23:11:26 -07:00
Dan Helfman e238e256f7 Add randomized delay to systemd timer.
Merge pull request from Daniel15/patch-1
2022-05-07 23:08:02 -07:00
Daniel Lo Nigro 3ecb92a8d2 Add randomized delay to systemd timer 2022-05-07 16:42:06 -07:00
Dan Helfman d58d450628 Remove stale borgmatic binary link. 2022-04-30 09:50:40 -07:00
Dan Helfman dee9c6e293 Remove link to stale borgmatic Docker image. 2022-04-30 09:46:08 -07:00
Dan Helfman 897c4487de Add mention in documentation about multiple backup scheduling needs (#511). 2022-04-28 11:16:31 -07:00
Dan Helfman 48b50b5209 Add documentation link to NEWS. 2022-04-26 10:24:25 -07:00
Dan Helfman 13bae8c23b Typo. 2022-04-26 10:12:02 -07:00
Dan Helfman 4a48e6aa04 Bump version for release. 2022-04-26 10:07:04 -07:00
Dan Helfman 525266ede6 Deep merging when including common configuration (#381). 2022-04-25 21:18:37 -07:00
Dan Helfman d045eb55ac Add mention of sudo's "secure_path" option in borgmatic installation documentation (#513). 2022-04-23 14:29:55 -07:00
Dan Helfman 0e6b425ac5 Fix "borgmatic borg key ..." to pass parameters to Borg in correct order (#515). 2022-04-23 14:03:15 -07:00
Dan Helfman bdc26f2117 Add note about old, pre-1.6.0 hooks behavior. 2022-04-22 19:58:28 -07:00
Dan Helfman ed7fe5c6d0 Instead of executing "before" command hooks before all borgmatic actions run (and "after" hooks after), execute these hooks right before/after the corresponding action (#473). 2022-04-21 22:08:25 -07:00
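Concretely, with hooks like the following (standard borgmatic options; the echo commands are illustrative), "before_check" now runs immediately before the "check" action instead of before all actions, and "after_backup" runs immediately after the "create" action:

```yaml
hooks:
    before_check:
        - echo "Checks starting."
    after_backup:
        # The "repository" variable interpolates the current repository path.
        - echo "Backed up {repository}."
```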
Dan Helfman cbce6707f4 Clarify one_file_system behavior in schema comment (#520). 2022-04-12 11:05:22 -07:00
Dan Helfman e40e726687 Change Healthchecks logs truncation size from 10k bytes to 100k bytes, corresponding to that same change on Healthchecks.io. 2022-04-06 22:00:18 -07:00
Dan Helfman 0c027a3050 Fix handling of TERM signal to exit borgmatic, not just forward the signal to Borg (#516). 2022-04-03 13:12:48 -07:00
Dan Helfman 9f44bbad65 Fix borgmatic exit code (so it's zero) when initial Borg calls fail but later retries succeed (#517). 2022-04-02 22:28:41 -07:00
Dan Helfman 413a079f51 Clarify Python version support. 2022-03-28 21:57:40 -07:00
gerdneuman 6f3accf691 Better explain where to find the dump file
I really had problems finding the dump file with the explanation as given before. I thought that `~/.borgmatic/` referred to my current user, so I looked in `/home/gerd/.borgmatic` (wrong). Then I looked in `<EXTRACTED_DESTINATION_PATH>/.borgmatic` (wrong again). Finally, an hour later and after having already prepared a bug ticket, I figured out that the dump file is within `<EXTRACTED_DESTINATION_PATH>/root/.borgmatic`. Hard to find, because of course I have not only `root` within `<EXTRACTED_DESTINATION_PATH>/` but also all the other backed-up directories (including `/etc/`, `/home/`, and so on).
2022-03-17 04:51:47 +00:00
Dan Helfman 5b3cfc542d Switch to PyPI API token. 2022-03-14 14:00:03 -07:00
Dan Helfman c838c1d11b Fix header placement in documentation guide. 2022-03-14 13:50:22 -07:00
Dan Helfman 4d1d8d7409 Bump version for release. 2022-03-14 13:43:24 -07:00
Dan Helfman db7499db82 Document "repositories" context for "before_*" and "after_*" command action hooks (#469). 2022-03-14 13:34:14 -07:00
Dan Helfman 6b500c2a8b Add repositories context for command hooks.
Reviewed-on: borgmatic-collective/borgmatic#469
2022-03-14 20:13:15 +00:00
Dan Helfman 95c518e59b Documentation tip about dealing with hangs when database hook is enabled. 2022-03-12 13:17:32 -08:00
Dan Helfman 976516d0e1 When loading a configuration file that is unreadable due to file permissions, warn instead of erroring (#444). 2022-03-08 10:19:36 -08:00
Dan Helfman 574eb91921 Fix Borg usage error in the "compact" action when running "borgmatic --dry-run". Now, skip "compact" entirely during a dry run (#507). 2022-03-07 21:46:12 -08:00
Dan Helfman 28fef3264b Fix handling of "patterns_from" and "exclude_from" options to error instead of warning when referencing unreadable files and running "create" action (#486). 2022-03-07 15:32:07 -08:00
Dan Helfman 9161dbcb7d Removing unnecessary leading underscores from functions. 2022-03-07 11:58:29 -08:00
Dan Helfman 4b3027e4fc Add test for new working_directory option (#431). 2022-03-03 11:48:18 -08:00
Dan Helfman 0eb2634f9b Working directory option to support source directories with relative paths (#431).
Reviewed-on: borgmatic-collective/borgmatic#477
2022-03-03 19:28:17 +00:00
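A minimal sketch of the new option (the `working_directory` name comes from these commits; its placement under `location` and the example paths are illustrative):

```yaml
location:
    # borgmatic changes to this directory before running Borg, so relative
    # source directories are resolved against it.
    working_directory: /home/user

    source_directories:
        - documents
```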
Dan Helfman 7c5b68c98f Bump version for release. 2022-02-10 10:29:18 -08:00
Dan Helfman 9317cbaaf0 Code formatting. 2022-02-10 10:23:34 -08:00
Dan Helfman 1b5f04b79f When using the "remote_rate_limit" option, tailor the flags passed to Borg depending on the Borg version (#394). 2022-02-10 10:16:09 -08:00
Dan Helfman 948c86f62c When using the "numeric_owner" option with the "extract" action, tailor the flags passed to Borg depending on the Borg version (#394). 2022-02-10 10:09:18 -08:00
Dan Helfman 7e7209322a When using the "numeric_owner" option, tailor the flags passed to Borg depending on the Borg version (#394). 2022-02-10 09:51:13 -08:00
Dan Helfman 00a57fd947 Code formatting. 2022-02-09 21:20:28 -08:00
Dan Helfman 6bf6ac310b When using the "bsd_flags" option, tailor the flags passed to Borg depending on the Borg version (#394). 2022-02-09 21:11:00 -08:00
Dan Helfman 4b5af2770d When the "atime" option is used, tailor the flags passed to Borg depending on version (#394). 2022-02-09 16:54:35 -08:00
Dan Helfman b525e70e1c Run "compact" action by default when no actions are specified (#394). 2022-02-09 14:33:12 -08:00
Dan Helfman 4498671233 Remove references to removed long-deprecated options (#394). 2022-02-09 11:08:02 -08:00
Dan Helfman 9997aa9a92 Fix capitalization on compact help. 2022-02-08 15:58:09 -08:00
Dan Helfman cbf7284f64 Add compact action to command-line reference documentation. 2022-02-08 15:37:24 -08:00
Dan Helfman ee466f870d Fixing ruamel.yaml.clib breakages harder. 2022-02-08 13:21:11 -08:00
Dan Helfman e3f4bf0293 Build fix for ruamel.yaml.clib error. 2022-02-08 12:52:45 -08:00
Dan Helfman 46688f10b1 Merge branch 'master' of ssh://projects.torsion.org:3022/borgmatic-collective/borgmatic 2022-02-08 12:10:57 -08:00
Dan Helfman 48f44d2f3d Add tests for compact action (#394). 2022-02-08 12:05:02 -08:00
Dan Helfman bff1347ba3 Fix some test failures (#394). 2022-02-08 09:35:03 -08:00
Dan Helfman 9582324c88 Compact repository segments with new "borgmatic compact" action (#394). 2022-02-07 23:29:44 -08:00
Dan Helfman bb0716421d Add comment about systemd service setting that may interfere with external commands in hooks (#492). 2022-01-25 09:26:11 -08:00
Dan Helfman bec73245e9 Fix traceback when a YAML validation error occurs (#480, #482). 2022-01-19 20:39:03 -08:00
Dan Helfman dcead12e86 Attempt to fix documentation build error introduced by Eleventy upgrade. 2022-01-09 14:21:27 -08:00
Dan Helfman 0119514c11 Add Python version requirements to setup.py. 2022-01-09 10:19:53 -08:00
fabianschilling b39f08694d Merge branch 'master' into pr-working-directory 2022-01-05 09:30:27 +00:00
Dan Helfman 80bdf1430b Bump version for release. 2022-01-04 20:20:13 -08:00
Dan Helfman 2ee75546f5 Add MongoDB database hook documentation. 2022-01-04 16:26:38 -08:00
Dan Helfman 07d7ae60d5 Add MongoDB database hook (#288).
Reviewed-on: borgmatic-collective/borgmatic#483
2022-01-04 23:50:25 +00:00
Andrea Ghensi 87001337b4 Merge master into mongodb_hook 2022-01-04 22:20:44 +01:00
Dan Helfman 2e9964c200 Remove references to Lima Labs (shut down their storage business).
Reviewed-on: borgmatic-collective/borgmatic#488
2022-01-03 17:34:38 +00:00
Ian Kerins 3ec3d8d045 Remove references to Lima Labs
From their homepage:
> Lima Labs is shutting down our storage business. We will try to keep data available as long as possible. No promises but we are targeting 3/1/2022 to bring down Archive and Canada.
2022-01-03 02:29:38 -05:00
Dan Helfman 96384d5ee1 Attempt to fix typed-ast build issue by relaxing version requirements in test. 2022-01-02 23:22:24 -08:00
Dan Helfman 8ed5467435 Drop support for Python 3.6. Add support for 3.10. 2022-01-02 23:17:57 -08:00
Andrea Ghensi 7c6ce9399c fix integration tests and mongodb auth 2021-12-29 22:18:50 +01:00
Andrea Ghensi 6b7653484b Add mongodb dump hook 2021-12-26 01:00:58 +01:00
Fabian Schilling 85e0334826 Add missing working_directory arg to pass tests 2021-12-10 18:24:41 +01:00
Fabian Schilling 2a80e48a92 Pass working directory to execute functions 2021-12-10 18:23:44 +01:00
Fabian Schilling 5821c6782e Add defaults to not set in schema 2021-12-10 18:23:08 +01:00
Fabian Schilling f15498f6d9 Add working_directory to borgmatic schema 2021-12-10 17:58:27 +01:00
Dan Helfman a1673d1fa1 Fix unicode error when restoring particular MySQL databases (#476). 2021-12-08 16:40:25 -08:00
Dan Helfman 2e99a1898c Fix f-string with missing expression. 2021-11-29 14:05:36 -08:00
Dan Helfman 7a086d8430 Fix import ordering. 2021-11-29 14:00:14 -08:00
Dan Helfman 0e8e9ced64 When command-line configuration override produces a parse error, error cleanly (#471). 2021-11-29 12:49:21 -08:00
Dan Helfman f34951c088 Add MySQL dump command adjustment to NEWS. 2021-11-29 12:10:04 -08:00
Dan Helfman c6f47d4d56 Move mysqldump options to the beginning of the command due to MySQL bug 30994 (#470).
Reviewed-on: borgmatic-collective/borgmatic#470
2021-11-29 20:08:59 +00:00
nebulon42 c3e76585fc move mysqldump options to the beginning of the command due to MySQL bug 30994. 2021-11-26 17:16:03 +01:00
Chen Yufei 0014b149f8 remove configuration_filename as it's already set. 2021-11-26 11:38:58 +08:00
Chen Yufei 091c07bbe2 Add context for various hooks. 2021-11-26 11:35:10 +08:00
Dan Helfman 240547102f Enable auto-play on linked asciicast. 2021-11-25 13:09:55 -08:00
Dan Helfman 2bbd53e25a Merge pull request #43 from acsfer/patch-1
Github doesn't allow script embedding
2021-11-25 13:06:43 -08:00
acsfer 58f2f63977 Switch to HTML 2021-11-25 22:03:26 +01:00
acsfer 7df6a78c30 Github doesn't allow script embedding 2021-11-25 21:36:31 +01:00
Dan Helfman c646edf2c7 Bump version for release. 2021-11-22 13:19:15 -08:00
Dan Helfman bcc820d646 Add list_options setting (#306).
Reviewed-on: borgmatic-collective/borgmatic#464
2021-11-22 21:14:02 +00:00
nebulon42 3729ba5ca3 add list_options setting, fixes #306 2021-11-20 15:43:58 +01:00
Dan Helfman 9c19591768 Revise hosting provider links. 2021-11-15 20:06:09 -08:00
Dan Helfman 38ebfd2969 Rename retry_timeout to retry_wait and standardize log formatting (#28). 2021-11-15 11:51:17 -08:00
Dan Helfman 180018fd81 Retry failing backups (#28, #432).
Reviewed-on: borgmatic-collective/borgmatic#432
2021-11-15 19:34:24 +00:00
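A sketch of the resulting configuration (the option names come from these commits and the NEWS entries below; their placement under `storage` is an assumption):

```yaml
storage:
    # Retry a failing backup up to 3 times.
    retries: 3
    # Wait 60 seconds between retry attempts.
    retry_wait: 60
```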
Dan Helfman 794ae94ac4 Attempt to limit documentation pushing to commits (so, not pull requests). 2021-11-15 11:08:26 -08:00
Dan Helfman 4eb6359ed3 Remove now-unneeded build image workaround. 2021-11-15 10:56:12 -08:00
cadamswaite 976a877a25 Formatting 2021-11-14 22:37:42 +00:00
cadamswaite b4117916b8 Add timeout and tests 2021-11-14 22:15:22 +00:00
cadamswaite 19cad89978 Add some tests for retry logic 2021-11-14 21:35:23 +00:00
cadamswaite 6b182c9d2d Merge branch 'master' into master 2021-11-14 18:24:17 +00:00
Dan Helfman 4d6ed27f73 Add to changelog: Add support for old version (2.x) of jsonschema library. 2021-10-23 09:49:16 -07:00
Dan Helfman 745a8f9b8a Add support for both jsonschema v3 and old v2 (#459).
Reviewed-on: borgmatic-collective/borgmatic#459
2021-10-23 16:47:53 +00:00
Dan Helfman 6299d8115d Limit documentation build to master of main repo, as it pushes a Docker image. 2021-10-23 09:45:17 -07:00
Kim B. Heino 717cfd2d37 validate: add support for both jsonschema v3 and old v2
RHEL8 and RHEL7 have old jsonschema v2. Try v3 (Draft7) first but
fall back to v2 (Draft4) if needed.
2021-10-23 15:04:07 +03:00
Dan Helfman 7881327004 Upgrade CI test dependencies. 2021-10-22 14:07:14 -07:00
Dan Helfman 549aa9a25f Update editable link. 2021-10-22 14:06:27 -07:00
Dan Helfman 1c6890492b Bump version for release. 2021-10-11 17:02:32 -07:00
Dan Helfman a7c8e7c823 Bump version for release. 2021-10-11 11:13:32 -07:00
Dan Helfman c8fcf6b336 Mention changing borgmatic path in cron documentation (#455). 2021-10-11 11:02:08 -07:00
Dan Helfman 449896f661 Fix error when configured source directories are not present on the filesystem at the time of backup (#387). 2021-10-11 10:40:10 -07:00
Dan Helfman 1004500d65 Update sample systemd service file comments about more granular read-only filesystem settings. 2021-10-11 09:33:07 -07:00
Dan Helfman 0a8d4e5dfb Add more strict ProtectHome to systemd sample configuration.
Merge pull request #42 from VTimofeenko/systemd_protecthome
2021-10-11 09:26:28 -07:00
Dan Helfman 38e35bdb12 Skip TLS verify in documentation build clone to work around old drone/git CA certs. 2021-10-04 14:31:15 -07:00
Dan Helfman 65503e38b6 Sigh. 2021-10-04 13:14:19 -07:00
Dan Helfman d0c5bf6f6f Another attempt to unbreak build. 2021-10-04 13:13:35 -07:00
Dan Helfman f129e4c301 Attempt to work-around outdated CA certificates in drone/git Docker image. 2021-10-04 13:09:44 -07:00
Dan Helfman fbbb096cec Note in documentation that borgmatic requires Python 3.6+. 2021-10-04 11:15:51 -07:00
Dan Helfman 77980511c6 Add another glob pattern example to exclude patterns. 2021-09-16 09:51:40 -07:00
Dan Helfman 4ba206f8f4 Update build server URL to new organization namespace. 2021-09-14 11:35:34 -07:00
Dan Helfman ecc849dd07 Move Gitea hosting from a personal namespace to an organization. 2021-09-14 11:32:01 -07:00
Dan Helfman 7ff6066d47 Move GitHub hosting from a personal namespace to an organization. 2021-09-14 10:18:10 -07:00
Dan Helfman 2bb1fc9826 Mention Docker Compose under installation options. 2021-09-12 13:15:34 -07:00
Vladimir Timofeenko 6df6176f3a Added more strict ProtectHome to systemd unit
This commit changes the comment in the sample systemd service.

Using a combination of 'ProtectHome' and 'BindPaths', it's possible to
hide the irrelevant paths inside /root from the borgmatic service when it
is run.

ReadWritePaths are suggested only for paths that contain Borg
repositories; backup sources can be specified as ReadOnlyPaths.
2021-08-30 11:20:34 -07:00
Dan Helfman acb2ca79d9 Fix traceback that can occur when dumping a database (#440). 2021-08-06 08:58:11 -07:00
Dan Helfman c9211320e1 Fix dev version in changelog. 2021-08-04 15:32:51 -07:00
Dan Helfman 760286abe1 Dev release bump. 2021-07-30 09:49:07 -07:00
Dan Helfman 5890a1cb48 Fix "message too long" error when logging to rsyslog (#389). 2021-07-30 09:48:13 -07:00
Dan Helfman b3f5a9d18f Fix error when configuration file contains "umask" option (#437). 2021-07-27 10:04:22 -07:00
Dan Helfman 80b33fbf8a Code style reformatting. 2021-07-27 09:39:48 -07:00
Dan Helfman 5389ff6160 Merge pull request #41 from mkszuba/tests_no_xxd
tests/integration/test_execute: use plain Python rather than xxd
2021-07-27 09:39:02 -07:00
Marek Szuba e8b8d86592 tests/integration/test_execute: use plain Python rather than xxd
Removes this test's dependencies on vim and /dev/urandom.

Signed-off-by: Marek Szuba <marek.szuba@cern.ch>
2021-07-27 13:50:16 +01:00
Dan Helfman 92d729a9dd Try temporary work around for Drone build bug: https://github.com/drone-plugins/drone-docker/pull/327 2021-07-26 16:33:41 -07:00
Dan Helfman c63219936e Wording tweaks to security policy. 2021-07-26 13:44:14 -07:00
Dan Helfman 0aff497430 Bump version for release. 2021-07-26 10:17:49 -07:00
Dan Helfman 1f3907a6a5 Fix for failing PostgreSQL directory format test (#430). 2021-07-26 09:42:14 -07:00
Dan Helfman 2a8692c64f Fix integration test to hopefully work on Alpine (#430). 2021-07-25 22:50:00 -07:00
Dan Helfman 1709f57ff0 Fix hang when restoring a PostgreSQL "tar" format database dump (#430). 2021-07-25 22:30:15 -07:00
cadamswaite 89baf757cf Sort imports 2021-07-14 23:17:35 +01:00
cadamswaite 4f36fe2b9f Run Black on changed file 2021-07-14 22:53:01 +01:00
cadamswaite 510449ce65 Change default retries to 0 2021-07-14 22:49:03 +01:00
cadamswaite 4cc4b8d484 Add queue based retry logic 2021-07-14 22:46:02 +01:00
Dan Helfman 9c972cb0e5 Add documentation note about systemd configuration with alternate install methods (#428). 2021-06-29 21:38:53 -07:00
Dan Helfman 9b1779065e Pin ruamel.yaml.clib to work around docs build issue. 2021-06-29 21:35:46 -07:00
Dan Helfman 057ec3e59b Add NEWS entry for #379: Suppress console output in sample crontab and systemd service files. 2021-06-23 10:35:41 -07:00
Dan Helfman bc2e611a74 Suppress console output in sample crontab/systemd service files (#379).
Reviewed-on: witten/borgmatic#379
2021-06-23 17:32:47 +00:00
Dan Helfman b6d3a1e02f Merge branch 'master' of ssh://projects.torsion.org:3022/witten/borgmatic 2021-06-23 10:22:07 -07:00
Dan Helfman 54d57e1349 Add test for #407: Fix syslog logging on FreeBSD. 2021-06-23 10:21:45 -07:00
Dan Helfman af0b3da8ed Fix syslog logging on FreeBSD (#407).
Reviewed-on: witten/borgmatic#407
2021-06-23 17:21:25 +00:00
Dan Helfman 27d37b606b Better error messages! Switch the library used for validating configuration files (from pykwalify to jsonschema). 2021-06-22 13:27:59 -07:00
Dan Helfman 77a860cc62 Link borgmatic Ansible role from installation documentation. 2021-06-19 19:04:22 -07:00
Dan Helfman 7bd6374751 Bump version for release. 2021-06-17 20:44:54 -07:00
Dan Helfman cf8882f2bc Run arbitrary Borg commands with new "borgmatic borg" action (#425). 2021-06-17 20:41:44 -07:00
Dan Helfman b37dd1a79e Document use case of running backups conditionally based on laptop power level (#419). 2021-06-09 10:03:35 -07:00
Dan Helfman fd59776f91 Bump version for release. 2021-06-08 11:44:53 -07:00
Dan Helfman 9fd28d2eed Fix error handling to error loudly when Borg gets killed due to running out of memory (#423)! 2021-06-08 11:43:55 -07:00
Dan Helfman f5c61c8013 Move #borgmatic IRC channel from Freenode to Libera Chat due to Freenode takeover drama. 2021-06-06 21:09:40 -07:00
Dan Helfman 88cb49dcc4 Fix release script based on GitHub authentication query parameter deprecation. 2021-04-24 20:27:53 -07:00
Dan Helfman 73235e59be Upgrade "py" test dependency (security). 2021-04-20 10:39:49 -07:00
Dan Helfman 7076a7ff86 Add link to Hetzner storage offering from the documentation (#390). 2021-04-18 18:03:43 -07:00
Dan Helfman d6e376d32d Fix end-to-end test broken by change in source directory examples. 2021-04-18 17:54:54 -07:00
Dan Helfman 9016f4be43 Clarify that spaces in path names should not be backslashed (#406). 2021-04-18 17:28:11 -07:00
Jeffery To d1c403999f Reduce console output in sample crontab/systemd service files.
As borgmatic will log to syslog in the sample crontab/systemd service
files, this makes console output redundant. (cron will mail any console
output to the root user; systemd will log any console output to syslog.)

This adds --verbosity -1 to both files to reduce console output to the
minimum.
2021-04-13 01:40:57 +08:00
Dan Helfman d543109ef4 "Fix" build failure with Alpine Edge by switching from Edge to Alpine 3.13. 2021-04-09 15:58:23 -07:00
Dan Helfman 7085a45649 Fix build so as not to attempt to build and push documentation for a non-master branch. 2021-04-09 15:04:09 -07:00
Dan Helfman cf4c603f1d Clarify canonical home of borgmatic in documentation (#398). 2021-04-09 14:54:21 -07:00
Victor Bouvier-Deleau d2533313bc Fix syslog logging on FreeBSD
The UNIX domain socket to use on FreeBSD is /var/run/log.
See syslogd FreeBSD man page: https://www.freebsd.org/cgi/man.cgi?query=syslogd&sektion=8
2021-04-02 14:11:50 +02:00
Dan Helfman c43b50b6e6 Upgrade PyYAML. 2021-03-30 22:29:20 -07:00
Dan Helfman c072678936 Add support for ruamel.yaml 0.17.x YAML parsing library (#404). 2021-03-30 15:53:19 -07:00
Dan Helfman 631da1465e Add support for Python 3.9. 2021-03-30 15:36:26 -07:00
Dan Helfman f29519a5cd Merge pull request #38 from lukehsiao/patch-1
Fix link to issue tracker in documentation
2021-03-20 15:45:15 -07:00
Luke Hsiao 5d82b42ab8 Fix link to issue tracker in documentation
Fixes: a1d986d952
2021-03-18 17:26:37 -07:00
Dan Helfman 4897a78fd3 Fix database tests broken by PostgreSQL upgrade in Alpine Edge. 2020-12-24 22:23:09 -08:00
Dan Helfman a1d986d952 Replace "improve this documentation" form with link to support and ticket tracker. 2020-12-24 14:57:51 -08:00
Dan Helfman 717c90a7d0 Clarify in systemd service file comment that security settings are optional. 2020-12-09 10:08:07 -08:00
Dan Helfman 8fde19a7dc Update systemd service example to return a permission error when a system call isn't permitted. 2020-11-30 22:14:28 -08:00
Dan Helfman ad7198ba66 Tweak to test failing on some machines. 2020-11-26 16:22:42 -08:00
Dan Helfman eb4b4cc92b Fix line length in schema. 2020-11-25 19:21:06 -08:00
Dan Helfman 41bf520585 Document that passphrase is used for Borg keyfile encryption, not just repokey encryption (#373). 2020-11-25 18:36:23 -08:00
Dan Helfman c0ae01f5d5 Code formatting. 2020-11-25 17:46:57 -08:00
Dan Helfman 8b8f92d717 Prevent newer (borgmatic-unsupported) version of Black code formatter installing in Alpine Edge. 2020-11-25 17:42:04 -08:00
Dan Helfman ccd1627175 Fix timing-related test error in Alpine Edge. 2020-11-25 15:48:33 -08:00
Dan Helfman b8a7e23f46 Add missing pip to test script. 2020-11-22 17:42:58 -08:00
Dan Helfman 1f4f28b4dc Drop support for Python 3.5. Only support black code formatter on Python 3.8+. 2020-11-22 17:27:21 -08:00
Dan Helfman ea6cd53067 Update versions of test dependencies (test_requirements.txt and test containers). 2020-11-22 14:48:07 -08:00
Dan Helfman 267138776d Add protection for accidentally releasing a dev version. 2020-11-21 14:03:39 -08:00
Dan Helfman 604b3d5e17 Bump version. 2020-11-21 13:56:19 -08:00
Dan Helfman 667e1e5b15 Update document about new --override behavior (#361). 2020-11-19 11:01:53 -08:00
117 changed files with 7027 additions and 1973 deletions

.drone.yml

@@ -1,110 +1,29 @@
 ---
 kind: pipeline
-name: python-3-5-alpine-3-10
+name: python-3-8-alpine-3-13
 
 services:
   - name: postgresql
-    image: postgres:11.6-alpine
+    image: postgres:13.1-alpine
     environment:
       POSTGRES_PASSWORD: test
       POSTGRES_DB: test
   - name: mysql
-    image: mariadb:10.3
+    image: mariadb:10.5
     environment:
       MYSQL_ROOT_PASSWORD: test
       MYSQL_DATABASE: test
+  - name: mongodb
+    image: mongo:5.0.5
+    environment:
+      MONGO_INITDB_ROOT_USERNAME: root
+      MONGO_INITDB_ROOT_PASSWORD: test
+
+clone:
+  skip_verify: true
 
 steps:
   - name: build
-    image: python:3.5-alpine3.10
-    pull: always
-    commands:
-      - scripts/run-full-tests
----
-kind: pipeline
-name: python-3-6-alpine-3-10
-
-services:
-  - name: postgresql
-    image: postgres:11.6-alpine
-    environment:
-      POSTGRES_PASSWORD: test
-      POSTGRES_DB: test
-  - name: mysql
-    image: mariadb:10.3
-    environment:
-      MYSQL_ROOT_PASSWORD: test
-      MYSQL_DATABASE: test
-
-steps:
-  - name: build
-    image: python:3.6-alpine3.10
-    pull: always
-    commands:
-      - scripts/run-full-tests
----
-kind: pipeline
-name: python-3-7-alpine-3-10
-
-services:
-  - name: postgresql
-    image: postgres:11.6-alpine
-    environment:
-      POSTGRES_PASSWORD: test
-      POSTGRES_DB: test
-  - name: mysql
-    image: mariadb:10.3
-    environment:
-      MYSQL_ROOT_PASSWORD: test
-      MYSQL_DATABASE: test
-
-steps:
-  - name: build
-    image: python:3.7-alpine3.10
-    pull: always
-    commands:
-      - scripts/run-full-tests
----
-kind: pipeline
-name: python-3-7-alpine-3-7
-
-services:
-  - name: postgresql
-    image: postgres:10.11-alpine
-    environment:
-      POSTGRES_PASSWORD: test
-      POSTGRES_DB: test
-  - name: mysql
-    image: mariadb:10.1
-    environment:
-      MYSQL_ROOT_PASSWORD: test
-      MYSQL_DATABASE: test
-
-steps:
-  - name: build
-    image: python:3.7-alpine3.7
-    pull: always
-    commands:
-      - scripts/run-full-tests
----
-kind: pipeline
-name: python-3-8-alpine-3-10
-
-services:
-  - name: postgresql
-    image: postgres:11.6-alpine
-    environment:
-      POSTGRES_PASSWORD: test
-      POSTGRES_DB: test
-  - name: mysql
-    image: mariadb:10.3
-    environment:
-      MYSQL_ROOT_PASSWORD: test
-      MYSQL_DATABASE: test
-
-steps:
-  - name: build
-    image: python:3.8-alpine3.10
+    image: alpine:3.13
     pull: always
     commands:
       - scripts/run-full-tests
@@ -112,6 +31,9 @@ steps:
 kind: pipeline
 name: documentation
 
+clone:
+  skip_verify: true
+
 steps:
   - name: build
     image: plugins/docker
@@ -122,6 +44,11 @@ steps:
         from_secret: docker_password
       repo: witten/borgmatic-docs
       dockerfile: docs/Dockerfile
-    when:
-      branch:
-        - master
+
+trigger:
+  repo:
+    - borgmatic-collective/borgmatic
+  branch:
+    - master
+  event:
+    - push

.eleventy.js

@@ -36,6 +36,8 @@ module.exports = function(eleventyConfig) {
   eleventyConfig.addPassthroughCopy({"docs/static": "static"});
 
+  eleventyConfig.setLiquidOptions({dynamicPartials: false});
+
   return {
     templateFormats: [
       "md",

.gitignore

@@ -2,7 +2,7 @@
 *.pyc
 *.swp
 .cache
-.coverage
+.coverage*
 .pytest_cache
 .tox
 __pycache__

NEWS

@@ -1,4 +1,175 @@
-1.5.11.dev0
+1.6.4.dev0
+ * #546, #382: Keep your repository passphrases and database passwords outside of borgmatic's
+   configuration file with environment variable interpolation. See the documentation for more
+   information: https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/
+1.6.3
+ * #541: Add "borgmatic list --find" flag for searching for files across multiple archives, useful
+   for hunting down that file you accidentally deleted so you can extract it. See the documentation
+   for more information:
+   https://torsion.org/borgmatic/docs/how-to/inspect-your-backups/#searching-for-a-file
+ * #543: Add a monitoring hook for sending push notifications via ntfy. See the documentation for
+   more information: https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#ntfy-hook
+ * Fix Bash completion script to no longer alter your shell's settings (complain about unset
+   variables or error on pipe failures).
+ * Deprecate "borgmatic list --successful" flag, as listing only non-checkpoint (successful)
+   archives is now the default in newer versions of Borg.
+1.6.2
+ * #523: Reduce the default consistency check frequency and support configuring the frequency
+   independently for each check. Also add "borgmatic check --force" flag to ignore configured
+   frequencies. See the documentation for more information:
+   https://torsion.org/borgmatic/docs/how-to/deal-with-very-large-backups/#check-frequency
+ * #536: Fix generate-borgmatic-config to support more complex schema changes like the new
+   Healthchecks configuration options when the "--source" flag is used.
+ * #538: Add support for "borgmatic borg debug" command.
+ * #539: Add "generate-borgmatic-config --overwrite" flag to replace an existing destination file.
+ * Add Bash completion script so you can tab-complete the borgmatic command-line. See the
+   documentation for more information:
+   https://torsion.org/borgmatic/docs/how-to/set-up-backups/#shell-completion
+1.6.1
+ * #294: Add Healthchecks monitoring hook "ping_body_limit" option to configure how many bytes of
+   logs to send to the Healthchecks server.
+ * #402: Remove the error when "archive_name_format" is specified but a retention prefix isn't.
+ * #420: Warn when an unsupported variable is used in a hook command.
+ * #439: Change connection failures for monitoring hooks (Healthchecks, Cronitor, PagerDuty, and
+   Cronhub) to be warnings instead of errors. This way, the monitoring system failing does not block
+   backups.
+ * #460: Add Healthchecks monitoring hook "send_logs" option to enable/disable sending borgmatic
+   logs to the Healthchecks server.
+ * #525: Add Healthchecks monitoring hook "states" option to only enable pinging for particular
+   monitoring states (start, finish, fail).
+ * #528: Improve the error message when a configuration override contains an invalid value.
+ * #531: BREAKING: When deep merging common configuration, merge colliding list values by appending
+   them. Previously, one list replaced the other.
+ * #532: When a configuration include is a relative path, load it from either the current working
+   directory or from the directory containing the file doing the including. Previously, only the
+   working directory was used.
+ * Add a randomized delay to the sample systemd timer to spread out the load on a server.
+ * Change the configuration format for borgmatic monitoring hooks (Healthchecks, Cronitor,
+   PagerDuty, and Cronhub) to specify the ping URL / integration key as a named option. The intent
+   is to support additional options (some in this release). This change is backwards-compatible.
+ * Add emojis to documentation table of contents to make it easier to find particular how-to and
+   reference guides at a glance.
+1.6.0
+ * #381: BREAKING: Greatly simplify configuration file reuse by deep merging when including common
+   configuration. See the documentation for more information:
+   https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/#include-merging
+ * #473: BREAKING: Instead of executing "before" command hooks before all borgmatic actions run (and
+   "after" hooks after), execute these hooks right before/after the corresponding action. E.g.,
+   "before_check" now runs immediately before the "check" action. This better supports running
+   timing-sensitive tasks like pausing containers. Side effect: before/after command hooks now run
+   once for each configured repository instead of once per configuration file. Additionally, the
+   "repositories" interpolated variable has been changed to "repository", containing the path to the
+   current repository for the hook. See the documentation for more information:
+   https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/
+ * #513: Add mention of sudo's "secure_path" option to borgmatic installation documentation.
+ * #515: Fix "borgmatic borg key ..." to pass parameters to Borg in the correct order.
+ * #516: Fix handling of TERM signal to exit borgmatic, not just forward the signal to Borg.
+ * #517: Fix borgmatic exit code (so it's zero) when initial Borg calls fail but later retries
+   succeed.
+ * Change Healthchecks logs truncation size from 10k bytes to 100k bytes, corresponding to that
+   same change on Healthchecks.io.
+1.5.24
+ * #431: Add "working_directory" option to support source directories with relative paths.
+ * #444: When loading a configuration file that is unreadable due to file permissions, warn instead
+   of erroring. This supports running borgmatic as a non-root user with configuration in ~/.config
+   even if there is an unreadable global configuration file in /etc.
+ * #469: Add "repositories" context to "before_*" and "after_*" command action hooks. See the
+   documentation for more information:
+   https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/
+ * #486: Fix handling of "patterns_from" and "exclude_from" options to error instead of warning when
+   referencing unreadable files and "create" action is run.
+ * #507: Fix Borg usage error in the "compact" action when running "borgmatic --dry-run". Now, skip
+   "compact" entirely during a dry run.
+1.5.23
+ * #394: Compact repository segments and free space with new "borgmatic compact" action. Borg 1.2+
+   only. Also run "compact" by default when no actions are specified, as "prune" in Borg 1.2 no
+   longer frees up space unless "compact" is run.
+ * #394: When using the "atime", "bsd_flags", "numeric_owner", or "remote_rate_limit" options,
+   tailor the flags passed to Borg depending on the Borg version.
+ * #480, #482: Fix traceback when a YAML validation error occurs.
+1.5.22
+ * #288: Add database dump hook for MongoDB.
+ * #470: Move mysqldump options to the beginning of the command due to MySQL bug 30994.
+ * #471: When command-line configuration override produces a parse error, error cleanly instead of
+   tracebacking.
+ * #476: Fix unicode error when restoring particular MySQL databases.
+ * Drop support for Python 3.6, which has been end-of-lifed.
+ * Add support for Python 3.10.
+1.5.21
+ * #28: Optionally retry failing backups via "retries" and "retry_wait" configuration options.
+ * #306: Add "list_options" MySQL configuration option for passing additional arguments to MySQL
+   list command.
+ * #459: Add support for old version (2.x) of jsonschema library.
+1.5.20
+ * Re-release with correct version without dev0 tag.
+1.5.19
+ * #387: Fix error when configured source directories are not present on the filesystem at the time
+   of backup. Now, Borg will complain, but the backup will still continue.
+ * #455: Mention changing borgmatic path in cron documentation.
+ * Update sample systemd service file with more granular read-only filesystem settings.
+ * Move Gitea and GitHub hosting from a personal namespace to an organization for better
+   collaboration with related projects.
+ * 1k ★s on GitHub!
+1.5.18
+ * #389: Fix "message too long" error when logging to rsyslog.
+ * #440: Fix traceback that can occur when dumping a database.
+1.5.17
+ * #437: Fix error when configuration file contains "umask" option.
+ * Remove test dependency on vim and /dev/urandom.
+1.5.16
+ * #379: Suppress console output in sample crontab and systemd service files.
+ * #407: Fix syslog logging on FreeBSD.
+ * #430: Fix hang when restoring a PostgreSQL "tar" format database dump.
+ * Better error messages! Switch the library used for validating configuration files (from pykwalify
+   to jsonschema).
+ * Link borgmatic Ansible role from installation documentation:
+   https://torsion.org/borgmatic/docs/how-to/set-up-backups/#other-ways-to-install
+1.5.15
+ * #419: Document use case of running backups conditionally based on laptop power level:
+   https://torsion.org/borgmatic/docs/how-to/backup-to-a-removable-drive-or-an-intermittent-server/
+ * #425: Run arbitrary Borg commands with new "borgmatic borg" action. See the documentation for
+   more information: https://torsion.org/borgmatic/docs/how-to/run-arbitrary-borg-commands/
+1.5.14
+ * #390: Add link to Hetzner storage offering from the documentation.
+ * #398: Clarify canonical home of borgmatic in documentation.
+ * #406: Clarify that spaces in path names should not be backslashed in path names.
+ * #423: Fix error handling to error loudly when Borg gets killed due to running out of memory!
+ * Fix build so as not to attempt to build and push documentation for a non-master branch.
+ * "Fix" build failure with Alpine Edge by switching from Edge to Alpine 3.13.
+ * Move #borgmatic IRC channel from Freenode to Libera Chat due to Freenode takeover drama.
+   IRC connection info: https://torsion.org/borgmatic/#issues
+1.5.13
+ * #373: Document that passphrase is used for Borg keyfile encryption, not just repokey encryption.
+ * #404: Add support for ruamel.yaml 0.17.x YAML parsing library.
+ * Update systemd service example to return a permission error when a system call isn't permitted
+   (instead of terminating borgmatic outright).
+ * Drop support for Python 3.5, which has been end-of-lifed.
+ * Add support for Python 3.9.
+ * Update versions of test dependencies (test_requirements.txt and test containers).
+ * Only support black code formatter on Python 3.8+. New black dependencies make installation
+   difficult on older versions of Python.
+ * Replace "improve this documentation" form with link to support and ticket tracker.
+1.5.12
+ * Fix for previous release with incorrect version suffix in setup.py. No other changes.
+1.5.11
 * #341: Add "temporary_directory" option for changing Borg's temporary directory.
 * #352: Lock down systemd security settings in sample systemd service file.
 * #355: Fix traceback when a database hook value is null in a configuration file.
@@ -520,7 +691,7 @@
 * #49: Support for Borg experimental --patterns-from and --patterns options for specifying mixed
   includes/excludes.
 * Moved issue tracker from Taiga to integrated Gitea tracker at
-  https://projects.torsion.org/witten/borgmatic/issues
+  https://projects.torsion.org/borgmatic-collective/borgmatic/issues
 1.1.12
 * #46: Declare dependency on pykwalify 1.6 or above, as older versions yield "Unknown key: version"

README.md

@@ -11,6 +11,8 @@
 workstations. Protect your files with client-side encryption. Backup your
 databases too. Monitor it all with integrated third-party services.
 
+The canonical home of borgmatic is at <a href="https://torsion.org/borgmatic">https://torsion.org/borgmatic</a>.
+
 Here's an example configuration file:
 
 ```yaml
@@ -24,7 +26,6 @@ location:
 repositories:
   - 1234@usw-s001.rsync.net:backups.borg
   - k8pDxu32@k8pDxu32.repo.borgbase.com:repo
-  - user1@scp2.cdn.lima-labs.com:repo
   - /var/lib/backups/local.borg
 
 retention:
@@ -36,8 +37,9 @@ retention:
 consistency:
   # List of checks to run to validate your backups.
   checks:
-    - repository
-    - archives
+    - name: repository
+    - name: archives
+      frequency: 2 weeks
 
 hooks:
   # Custom preparation scripts to run.
@@ -53,9 +55,9 @@ hooks:
 ```
 
 Want to see borgmatic in action? Check out the <a
-href="https://asciinema.org/a/203761" target="_blank">screencast</a>.
+href="https://asciinema.org/a/203761?autoplay=1" target="_blank">screencast</a>.
 
-<script src="https://asciinema.org/a/203761.js" id="asciicast-203761" async></script>
+<a href="https://asciinema.org/a/203761?autoplay=1" target="_blank"><img src="https://asciinema.org/a/203761.png" width="480"></a>
 
 borgmatic is powered by [Borg Backup](https://www.borgbackup.org/).
@@ -64,11 +66,12 @@
 <a href="https://www.postgresql.org/"><img src="docs/static/postgresql.png" alt="PostgreSQL" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
 <a href="https://www.mysql.com/"><img src="docs/static/mysql.png" alt="MySQL" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
 <a href="https://mariadb.com/"><img src="docs/static/mariadb.png" alt="MariaDB" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
+<a href="https://www.mongodb.com/"><img src="docs/static/mongodb.png" alt="MongoDB" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
 <a href="https://healthchecks.io/"><img src="docs/static/healthchecks.png" alt="Healthchecks" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
 <a href="https://cronitor.io/"><img src="docs/static/cronitor.png" alt="Cronitor" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
 <a href="https://cronhub.io/"><img src="docs/static/cronhub.png" alt="Cronhub" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
 <a href="https://www.pagerduty.com/"><img src="docs/static/pagerduty.png" alt="PagerDuty" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
-<a href="https://www.rsync.net/cgi-bin/borg.cgi?campaign=borg&adgroup=borgmatic"><img src="docs/static/rsyncnet.png" alt="rsync.net" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
+<a href="https://ntfy.sh/"><img src="docs/static/ntfy.png" alt="ntfy" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
 <a href="https://www.borgbase.com/?utm_source=borgmatic"><img src="docs/static/borgbase.png" alt="BorgBase" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
@@ -84,31 +87,35 @@ reference guides</a>.
 
 ## Hosting providers
 
-Need somewhere to store your encrypted offsite backups? The following hosting
-providers include specific support for Borg/borgmatic. Using these links and
-services helps support borgmatic development and hosting. (These are referral
-links, but without any tracking scripts or cookies.)
+Need somewhere to store your encrypted off-site backups? The following hosting
+providers include specific support for Borg/borgmatic—and fund borgmatic
+development and hosting when you use these links to sign up. (These are
+referral links, but without any tracking scripts or cookies.)
 
 <ul>
-<li class="referral"><a href="https://www.rsync.net/cgi-bin/borg.cgi?campaign=borg&adgroup=borgmatic">rsync.net</a>: Cloud Storage provider with full support for borg and any other SSH/SFTP tool</li>
 <li class="referral"><a href="https://www.borgbase.com/?utm_source=borgmatic">BorgBase</a>: Borg hosting service with support for monitoring, 2FA, and append-only repos</li>
-<li class="referral"><a href="https://storage.lima-labs.com/special-pricing-offer-for-borgmatic-users/">Lima-Labs</a>: Affordable, reliable cloud data storage accessable via SSH/SCP/FTP for Borg backups or any other bulk storage needs</li>
 </ul>
 
+Additionally, [rsync.net](https://www.rsync.net/products/borg.html) and
+[Hetzner](https://www.hetzner.com/storage/storage-box) have compatible storage
+offerings, but do not currently fund borgmatic development or hosting.
+
 ## Support and contributing
 
 ### Issues
 
 You've got issues? Or an idea for a feature enhancement? We've got an [issue
-tracker](https://projects.torsion.org/witten/borgmatic/issues). In order to
+tracker](https://projects.torsion.org/borgmatic-collective/borgmatic/issues). In order to
 create a new issue or comment on an issue, you'll need to [login
 first](https://projects.torsion.org/user/login). Note that you can login with
 an existing GitHub account if you prefer.
 
 If you'd like to chat with borgmatic developers or users, head on over to the
-`#borgmatic` IRC channel on Freenode, either via <a
-href="https://webchat.freenode.net/?channels=borgmatic">web chat</a> or a
-native <a href="irc://chat.freenode.net:6697">IRC client</a>.
+`#borgmatic` IRC channel on Libera Chat, either via <a
+href="https://web.libera.chat/#borgmatic">web chat</a> or a
+native <a href="ircs://irc.libera.chat:6697">IRC client</a>. If you
+don't get a response right away, please hang around a while—or file a ticket
+instead.
 
 Also see the [security
 policy](https://torsion.org/borgmatic/docs/security-policy/) for any security
@@ -120,16 +127,16 @@ Other questions or comments? Contact
 
 ### Contributing
 
-borgmatic is hosted at <https://torsion.org/borgmatic> with [source code
-available](https://projects.torsion.org/witten/borgmatic), and is also
-mirrored on [GitHub](https://github.com/witten/borgmatic) for convenience.
+borgmatic [source code is
+available](https://projects.torsion.org/borgmatic-collective/borgmatic) and is also mirrored
+on [GitHub](https://github.com/borgmatic-collective/borgmatic) for convenience.
 
 borgmatic is licensed under the GNU General Public License version 3 or any
 later version.
 
 If you'd like to contribute to borgmatic development, please feel free to
-submit a [Pull Request](https://projects.torsion.org/witten/borgmatic/pulls)
-or open an [issue](https://projects.torsion.org/witten/borgmatic/issues) first
+submit a [Pull Request](https://projects.torsion.org/borgmatic-collective/borgmatic/pulls)
+or open an [issue](https://projects.torsion.org/borgmatic-collective/borgmatic/issues) first
 to discuss your idea. We also accept Pull Requests on GitHub, if that's more
 your thing. In general, contributions are very welcome. We don't bite!
@@ -137,5 +144,5 @@
 how-to](https://torsion.org/borgmatic/docs/how-to/develop-on-borgmatic/) for
 info on cloning source code, running tests, etc.
 
-<a href="https://build.torsion.org/witten/borgmatic" alt="build status">![Build Status](https://build.torsion.org/api/badges/witten/borgmatic/status.svg?ref=refs/heads/master)</a>
+<a href="https://build.torsion.org/borgmatic-collective/borgmatic" alt="build status">![Build Status](https://build.torsion.org/api/badges/borgmatic-collective/borgmatic/status.svg?ref=refs/heads/master)</a>

SECURITY.md

@@ -6,14 +6,13 @@ permalink: security-policy/index.html
 
 ## Supported versions
 
 While we want to hear about security vulnerabilities in all versions of
-borgmatic, security fixes will only be made to the most recently released
-version. It's not practical for our small volunteer effort to maintain
-multiple different release branches and put out separate security patches for
-each.
+borgmatic, security fixes are only made to the most recently released version.
+It's simply not practical for our small volunteer effort to maintain multiple
+release branches and put out separate security patches for each.
 
 ## Reporting a vulnerability
 
 If you find a security vulnerability, please [file a
 ticket](https://torsion.org/borgmatic/#issues) or [send email
 directly](mailto:witten@torsion.org) as appropriate. You should expect to hear
-back within a few days at most, and generally sooner.
+back within a few days at most and generally sooner.

borgmatic/borg/borg.py (new file)

@@ -0,0 +1,55 @@
import logging

from borgmatic.borg.flags import make_flags
from borgmatic.execute import execute_command

logger = logging.getLogger(__name__)

REPOSITORYLESS_BORG_COMMANDS = {'serve', None}
BORG_COMMANDS_WITH_SUBCOMMANDS = {'key', 'debug'}
BORG_SUBCOMMANDS_WITHOUT_REPOSITORY = (('debug', 'info'), ('debug', 'convert-profile'))


def run_arbitrary_borg(
    repository, storage_config, options, archive=None, local_path='borg', remote_path=None
):
    '''
    Given a local or remote repository path, a storage config dict, a sequence of arbitrary
    command-line Borg options, and an optional archive name, run an arbitrary Borg command on the
    given repository/archive.
    '''
    lock_wait = storage_config.get('lock_wait', None)

    try:
        options = options[1:] if options[0] == '--' else options

        # Borg commands like "key" have a sub-command ("export", etc.) that must follow it.
        command_options_start_index = 2 if options[0] in BORG_COMMANDS_WITH_SUBCOMMANDS else 1
        borg_command = tuple(options[:command_options_start_index])
        command_options = tuple(options[command_options_start_index:])
    except IndexError:
        borg_command = ()
        command_options = ()

    if borg_command in BORG_SUBCOMMANDS_WITHOUT_REPOSITORY:
        repository_archive = None
    else:
        repository_archive = (
            '::'.join((repository, archive)) if repository and archive else repository
        )

    full_command = (
        (local_path,)
        + borg_command
        + ((repository_archive,) if borg_command and repository_archive else ())
        + command_options
        + (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
        + (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
        + make_flags('remote-path', remote_path)
        + make_flags('lock-wait', lock_wait)
    )

    return execute_command(
        full_command, output_log_level=logging.WARNING, borg_local_path=local_path,
    )
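
# Illustrative only (not part of the diff): a sketch of how run_arbitrary_borg() might be
# invoked. The repository path and storage config here are made-up examples. Given these
# inputs, the function above assembles and executes roughly:
#
#     borg list /mnt/backups/repo.borg --short --lock-wait 5
#
run_arbitrary_borg(
    repository='/mnt/backups/repo.borg',  # hypothetical repository path
    storage_config={'lock_wait': 5},
    options=['list', '--short'],  # Borg command first, then its flags
)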


@@ -1,46 +1,157 @@
import argparse
import datetime
import json
import logging
import os
import pathlib

from borgmatic.borg import extract, info, state
from borgmatic.execute import DO_NOT_CAPTURE, execute_command

DEFAULT_CHECKS = (
    {'name': 'repository', 'frequency': '1 month'},
    {'name': 'archives', 'frequency': '1 month'},
)
DEFAULT_PREFIX = '{hostname}-'

logger = logging.getLogger(__name__)


def parse_checks(consistency_config, only_checks=None):
    '''
    Given a consistency config with a "checks" sequence of dicts and an optional list of override
    checks, return a tuple of named checks to run.

    For example, given a retention config of:

        {'checks': ({'name': 'repository'}, {'name': 'archives'})}

    This will be returned as:

        ('repository', 'archives')

    If no "checks" option is present in the config, return the DEFAULT_CHECKS. If a checks value
    has a name of "disabled", return an empty tuple, meaning that no checks should be run.

    If the "data" check is present, then make sure the "archives" check is included as well.
    '''
    checks = only_checks or tuple(
        check_config['name']
        for check_config in (consistency_config.get('checks', None) or DEFAULT_CHECKS)
    )
    checks = tuple(check.lower() for check in checks)

    if 'disabled' in checks:
        if len(checks) > 1:
            logger.warning(
                'Multiple checks are configured, but one of them is "disabled"; not running any checks'
            )
        return ()

    if 'data' in checks and 'archives' not in checks:
        return checks + ('archives',)

    return checks
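
# Illustrative only: the expected behavior of parse_checks() above, assuming the new
# dict-based "checks" format.
assert parse_checks({'checks': [{'name': 'repository'}]}) == ('repository',)
assert parse_checks({}) == ('repository', 'archives')          # falls back to DEFAULT_CHECKS
assert parse_checks({'checks': [{'name': 'disabled'}]}) == ()  # checks turned off entirely
assert parse_checks({'checks': [{'name': 'data'}]}) == ('data', 'archives')  # "data" implies "archives"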
def parse_frequency(frequency):
    '''
    Given a frequency string with a number and a unit of time, return a corresponding
    datetime.timedelta instance or None if the frequency is None or "always".

    For instance, given "3 weeks", return datetime.timedelta(weeks=3)

    Raise ValueError if the given frequency cannot be parsed.
    '''
    if not frequency:
        return None

    frequency = frequency.strip().lower()

    if frequency == 'always':
        return None

    try:
        number, time_unit = frequency.split(' ')
        number = int(number)
    except ValueError:
        raise ValueError(f"Could not parse consistency check frequency '{frequency}'")

    if not time_unit.endswith('s'):
        time_unit += 's'

    if time_unit == 'months':
        number *= 30
        time_unit = 'days'
    elif time_unit == 'years':
        number *= 365
        time_unit = 'days'

    try:
        return datetime.timedelta(**{time_unit: number})
    except TypeError:
        raise ValueError(f"Could not parse consistency check frequency '{frequency}'")
def filter_checks_on_frequency(
    location_config, consistency_config, borg_repository_id, checks, force
):
    '''
    Given a location config, a consistency config with a "checks" sequence of dicts, a Borg
    repository ID, a sequence of checks, and whether to force checks to run, filter down those
    checks based on the configured "frequency" for each check as compared to its check time file.

    In other words, a check whose check time file's timestamp is too new (based on the configured
    frequency) will get cut from the returned sequence of checks. Example:

        consistency_config = {
            'checks': [
                {
                    'name': 'archives',
                    'frequency': '2 weeks',
                },
            ]
        }

    When this function is called with that consistency_config and "archives" in checks, "archives"
    will get filtered out of the returned result if its check time file is newer than 2 weeks old,
    indicating that it's not yet time to run that check again.

    Raise ValueError if a frequency cannot be parsed.
    '''
    filtered_checks = list(checks)

    if force:
        return tuple(filtered_checks)

    for check_config in consistency_config.get('checks', DEFAULT_CHECKS):
        check = check_config['name']
        if checks and check not in checks:
            continue

        frequency_delta = parse_frequency(check_config.get('frequency'))
        if not frequency_delta:
            continue

        check_time = read_check_time(
            make_check_time_path(location_config, borg_repository_id, check)
        )
        if not check_time:
            continue

        # If we've not yet reached the time when the frequency dictates we're ready for another
        # check, skip this check.
        if datetime.datetime.now() < check_time + frequency_delta:
            remaining = check_time + frequency_delta - datetime.datetime.now()
            logger.info(
                f"Skipping {check} check due to configured frequency; {remaining} until next check"
            )
            filtered_checks.remove(check)

    return tuple(filtered_checks)
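
# Illustrative only: with the example consistency_config from the docstring and a made-up
# repository ID, the "archives" check drops out whenever its check time file is fresher than
# the configured two week frequency, while force=True bypasses the filtering entirely.
filter_checks_on_frequency(
    location_config={},
    consistency_config={'checks': [{'name': 'archives', 'frequency': '2 weeks'}]},
    borg_repository_id='0123abcd',  # hypothetical repository ID
    checks=('archives',),
    force=False,
)  # Returns () if the check ran within the last two weeks, ('archives',) otherwise.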
def make_check_flags(checks, check_last=None, prefix=None):
    '''
    Given a parsed sequence of checks, transform it into a tuple of command-line flags.

@@ -66,27 +177,67 @@ def _make_check_flags(checks, check_last=None, prefix=None):
        last_flags = ()
        prefix_flags = ()
        if check_last:
            logger.info('Ignoring check_last option, as "archives" is not in consistency checks')
        if prefix:
            logger.info(
                'Ignoring consistency prefix option, as "archives" is not in consistency checks'
            )

    common_flags = last_flags + prefix_flags + (('--verify-data',) if 'data' in checks else ())

    if {'repository', 'archives'}.issubset(set(checks)):
        return common_flags

    return (
        tuple('--{}-only'.format(check) for check in checks if check in ('repository', 'archives'))
        + common_flags
    )
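
# Illustrative only, based on the branches visible above: when both default check names are
# present, no scoping flags are needed; otherwise each check gets a "--<name>-only" flag, and
# "data" additionally switches on --verify-data.
assert make_check_flags(('repository', 'archives')) == ()
assert make_check_flags(('repository',)) == ('--repository-only',)
assert make_check_flags(('archives', 'data')) == ('--archives-only', '--verify-data')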
def make_check_time_path(location_config, borg_repository_id, check_type):
    '''
    Given a location configuration dict, a Borg repository ID, and the name of a check type
    ("repository", "archives", etc.), return a path for recording that check's time (the time of
    that check last occurring).
    '''
    return os.path.join(
        os.path.expanduser(
            location_config.get(
                'borgmatic_source_directory', state.DEFAULT_BORGMATIC_SOURCE_DIRECTORY
            )
        ),
        'checks',
        borg_repository_id,
        check_type,
    )


def write_check_time(path):  # pragma: no cover
    '''
    Record a check time of now as the modification time of the given path.
    '''
    logger.debug(f'Writing check time at {path}')

    os.makedirs(os.path.dirname(path), mode=0o700, exist_ok=True)
    pathlib.Path(path, mode=0o600).touch()


def read_check_time(path):
    '''
    Return the check time based on the modification time of the given path. Return None if the path
    doesn't exist.
    '''
    logger.debug(f'Reading check time from {path}')

    try:
        return datetime.datetime.fromtimestamp(os.stat(path).st_mtime)
    except FileNotFoundError:
        return None
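
# Illustrative only: the check time round trip. write_check_time() touches a file, and its
# mtime comes back from read_check_time() as a datetime. The repository ID here is made up;
# with an empty location config, the path lands under ~/.borgmatic/checks/.
path = make_check_time_path({}, '0123abcd', 'archives')
write_check_time(path)
assert read_check_time(path) is not None              # a datetime close to now
assert read_check_time('/nonexistent/path') is None   # missing file means "never checked"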
def check_archives(
    repository,
    location_config,
    storage_config,
    consistency_config,
    local_path='borg',
@@ -94,6 +245,7 @@ def check_archives(
    progress=None,
    repair=None,
    only_checks=None,
    force=None,
):
    '''
    Given a local or remote repository path, a storage config dict, a consistency config dict,
@@ -102,13 +254,34 @@ def check_archives(
    Borg archives for consistency.

    If there are no consistency checks to run, skip running them.

    Raises ValueError if the Borg repository ID cannot be determined.
    '''
    try:
        borg_repository_id = json.loads(
            info.display_archives_info(
                repository,
                storage_config,
                argparse.Namespace(json=True, archive=None),
                local_path,
                remote_path,
            )
        )['repository']['id']
    except (json.JSONDecodeError, KeyError):
        raise ValueError(f'Cannot determine Borg repository ID for {repository}')

    checks = filter_checks_on_frequency(
        location_config,
        consistency_config,
        borg_repository_id,
        parse_checks(consistency_config, only_checks),
        force,
    )
    check_last = consistency_config.get('check_last', None)
    lock_wait = None
    extra_borg_options = storage_config.get('extra_borg_options', {}).get('check', '')

    if set(checks).intersection({'repository', 'archives', 'data'}):
        lock_wait = storage_config.get('lock_wait', None)

        verbosity_flags = ()
@@ -122,7 +295,7 @@ def check_archives(
        full_command = (
            (local_path, 'check')
            + (('--repair',) if repair else ())
            + make_check_flags(checks, check_last, prefix)
            + (('--remote-path', remote_path) if remote_path else ())
            + (('--lock-wait', str(lock_wait)) if lock_wait else ())
            + verbosity_flags
@@ -131,12 +304,16 @@ def check_archives(
            + (repository,)
        )

        # The Borg repair option triggers an interactive prompt, which won't work when output is
        # captured. And progress messes with the terminal directly.
        if repair or progress:
            execute_command(full_command, output_file=DO_NOT_CAPTURE)
        else:
            execute_command(full_command)

        for check in checks:
            write_check_time(make_check_time_path(location_config, borg_repository_id, check))

    if 'extract' in checks:
        extract.extract_last_archive_dry_run(repository, lock_wait, local_path, remote_path)
        write_check_time(make_check_time_path(location_config, borg_repository_id, 'extract'))
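
# Illustrative only: a sketch of calling check_archives() with minimal configs. The repository
# path is a made-up example, and force=True bypasses the frequency filtering shown above.
check_archives(
    repository='/mnt/backups/repo.borg',
    location_config={},
    storage_config={},
    consistency_config={'checks': [{'name': 'repository', 'frequency': '2 weeks'}]},
    force=True,
)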

borgmatic/borg/compact.py (new file)

@@ -0,0 +1,41 @@
import logging

from borgmatic.execute import execute_command

logger = logging.getLogger(__name__)


def compact_segments(
    dry_run,
    repository,
    storage_config,
    local_path='borg',
    remote_path=None,
    progress=False,
    cleanup_commits=False,
    threshold=None,
):
    '''
    Given a dry-run flag, a local or remote repository path, and a storage config dict, compact
    Borg segments in a repository.
    '''
    umask = storage_config.get('umask', None)
    lock_wait = storage_config.get('lock_wait', None)
    extra_borg_options = storage_config.get('extra_borg_options', {}).get('compact', '')

    full_command = (
        (local_path, 'compact')
        + (('--remote-path', remote_path) if remote_path else ())
        + (('--umask', str(umask)) if umask else ())
        + (('--lock-wait', str(lock_wait)) if lock_wait else ())
        + (('--progress',) if progress else ())
        + (('--cleanup-commits',) if cleanup_commits else ())
        + (('--threshold', str(threshold)) if threshold else ())
        + (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
        + (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
        + (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
        + (repository,)
    )

    if not dry_run:
        execute_command(full_command, output_log_level=logging.INFO, borg_local_path=local_path)
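
# Illustrative only: a sketch of invoking compact_segments() with a made-up repository path.
# With these arguments, the function above runs roughly:
#
#     borg compact --cleanup-commits /mnt/backups/repo.borg
#
compact_segments(
    dry_run=False,
    repository='/mnt/backups/repo.borg',  # hypothetical repository path
    storage_config={},
    cleanup_commits=True,
)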


@@ -5,12 +5,13 @@ import os
import pathlib
import tempfile

from borgmatic.borg import feature, state
from borgmatic.execute import DO_NOT_CAPTURE, execute_command, execute_command_with_processes

logger = logging.getLogger(__name__)


def expand_directory(directory):
    '''
    Given a directory path, expand any tilde (representing a user's home directory) and any globs
    therein. Return a list of one or more resulting paths.
@@ -20,7 +21,7 @@ def expand_directory(directory):
    return glob.glob(expanded_directory) or [expanded_directory]


def expand_directories(directories):
    '''
    Given a sequence of directory paths, expand tildes and globs in each one. Return all the
    resulting directories as a single flattened tuple.
@@ -29,11 +30,11 @@ def expand_directories(directories):
        return ()

    return tuple(
        itertools.chain.from_iterable(expand_directory(directory) for directory in directories)
    )


def expand_home_directories(directories):
    '''
    Given a sequence of directory paths, expand tildes in each one. Do not perform any globbing.
    Return the results as a tuple.
@@ -44,13 +45,18 @@ def expand_home_directories(directories):
    return tuple(os.path.expanduser(directory) for directory in directories)


def map_directories_to_devices(directories):
    '''
    Given a sequence of directories, return a map from directory to an identifier for the device on
    which that directory resides or None if the path doesn't exist.

    This is handy for determining whether two different directories are on the same filesystem (have
    the same device identifier).
    '''
    return {
        directory: os.stat(directory).st_dev if os.path.exists(directory) else None
        for directory in directories
    }
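
# Illustrative only: on a typical Linux system this might map each directory to its st_dev,
# with None for paths that don't exist. The device numbers here are made-up example values.
map_directories_to_devices(('/home', '/home/user', '/no/such/path'))
# => {'/home': 64769, '/home/user': 64769, '/no/such/path': None}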
def deduplicate_directories(directory_devices):
@@ -82,6 +88,7 @@ def deduplicate_directories(directory_devices):
            for parent in parents:
                if (
                    pathlib.PurePath(other_directory) == parent
                    and directory_devices[directory] is not None
                    and directory_devices[other_directory] == directory_devices[directory]
                ):
                    if directory in deduplicated:
@@ -91,7 +98,7 @@ def deduplicate_directories(directory_devices):
    return tuple(sorted(deduplicated))
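
# Illustrative only: given a child directory on the same device as a parent that's also being
# backed up, deduplication keeps just the parent, since backing up the parent already covers
# the child. The device identifiers here are made up.
deduplicate_directories({'/home': 101, '/home/user': 101, '/mnt/other': 202})
# => ('/home', '/mnt/other')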
def write_pattern_file(patterns=None):
    '''
    Given a sequence of patterns, write them to a named temporary file and return it. Return None
    if no patterns are provided.
@@ -106,7 +113,19 @@ def write_pattern_file(patterns=None):
    return pattern_file
def ensure_files_readable(*filename_lists):
    '''
    Given a sequence of filename sequences, ensure that each filename is openable. This prevents
    unreadable files from being passed to Borg, which in certain situations only warns instead of
    erroring.
    '''
    for file_object in itertools.chain.from_iterable(
        filename_list for filename_list in filename_lists if filename_list
    ):
        open(file_object).close()
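
# Illustrative only: ensure_files_readable() accepts any number of filename sequences (or None
# values, which are skipped) and raises the usual OSError if any file can't be opened.
with tempfile.NamedTemporaryFile() as readable:
    ensure_files_readable([readable.name], None)  # passes silently; the None list is ignored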
def make_pattern_flags(location_config, pattern_filename=None):
    '''
    Given a location config dict with a potential patterns_from option, and a filename containing
    any additional patterns, return the corresponding Borg flags for those files as a tuple.
@@ -122,7 +141,7 @@ def make_pattern_flags(location_config, pattern_filename=None):
    )


def make_exclude_flags(location_config, exclude_filename=None):
    '''
    Given a location config dict with various exclude options, and a filename containing any exclude
    patterns, return the corresponding Borg flags as a tuple.
@@ -156,7 +175,7 @@ def make_exclude_flags(location_config, exclude_filename=None):
    )


DEFAULT_ARCHIVE_NAME_FORMAT = '{hostname}-{now:%Y-%m-%dT%H:%M:%S.%f}'
def borgmatic_source_directories(borgmatic_source_directory):
@@ -164,7 +183,7 @@ def borgmatic_source_directories(borgmatic_source_directory):
    Return a list of borgmatic-specific source directories used for state like database backups.
    '''
    if not borgmatic_source_directory:
        borgmatic_source_directory = state.DEFAULT_BORGMATIC_SOURCE_DIRECTORY

    return (
        [borgmatic_source_directory]
@@ -173,14 +192,12 @@ def borgmatic_source_directories(borgmatic_source_directory):
    )


def create_archive(
    dry_run,
    repository,
    location_config,
    storage_config,
    local_borg_version,
    local_path='borg',
    remote_path=None,
    progress=False,
@@ -198,16 +215,20 @@ def create_archive(
    '''
    sources = deduplicate_directories(
        map_directories_to_devices(
            expand_directories(
                location_config['source_directories']
                + borgmatic_source_directories(location_config.get('borgmatic_source_directory'))
            )
        )
    )

    try:
        working_directory = os.path.expanduser(location_config.get('working_directory'))
    except TypeError:
        working_directory = None

    pattern_file = write_pattern_file(location_config.get('patterns'))
    exclude_file = write_pattern_file(
        expand_home_directories(location_config.get('exclude_patterns'))
    )
    checkpoint_interval = storage_config.get('checkpoint_interval', None)
    chunker_params = storage_config.get('chunker_params', None)
@@ -219,25 +240,52 @@ def create_archive(
    archive_name_format = storage_config.get('archive_name_format', DEFAULT_ARCHIVE_NAME_FORMAT)
    extra_borg_options = storage_config.get('extra_borg_options', {}).get('create', '')

    if feature.available(feature.Feature.ATIME, local_borg_version):
        atime_flags = ('--atime',) if location_config.get('atime') is True else ()
    else:
        atime_flags = ('--noatime',) if location_config.get('atime') is False else ()

    if feature.available(feature.Feature.NOFLAGS, local_borg_version):
        noflags_flags = ('--noflags',) if location_config.get('bsd_flags') is False else ()
    else:
        noflags_flags = ('--nobsdflags',) if location_config.get('bsd_flags') is False else ()

    if feature.available(feature.Feature.NUMERIC_IDS, local_borg_version):
        numeric_ids_flags = ('--numeric-ids',) if location_config.get('numeric_owner') else ()
    else:
        numeric_ids_flags = ('--numeric-owner',) if location_config.get('numeric_owner') else ()

    if feature.available(feature.Feature.UPLOAD_RATELIMIT, local_borg_version):
        upload_ratelimit_flags = (
            ('--upload-ratelimit', str(remote_rate_limit)) if remote_rate_limit else ()
        )
    else:
        upload_ratelimit_flags = (
            ('--remote-ratelimit', str(remote_rate_limit)) if remote_rate_limit else ()
        )

    ensure_files_readable(location_config.get('patterns_from'), location_config.get('exclude_from'))

    full_command = (
        tuple(local_path.split(' '))
        + ('create',)
        + make_pattern_flags(location_config, pattern_file.name if pattern_file else None)
        + make_exclude_flags(location_config, exclude_file.name if exclude_file else None)
        + (('--checkpoint-interval', str(checkpoint_interval)) if checkpoint_interval else ())
        + (('--chunker-params', chunker_params) if chunker_params else ())
        + (('--compression', compression) if compression else ())
        + upload_ratelimit_flags
        + (
            ('--one-file-system',)
            if location_config.get('one_file_system') or stream_processes
            else ()
        )
        + numeric_ids_flags
        + atime_flags
        + (('--noctime',) if location_config.get('ctime') is False else ())
        + (('--nobirthtime',) if location_config.get('birthtime') is False else ())
        + (('--read-special',) if (location_config.get('read_special') or stream_processes) else ())
        + noflags_flags
        + (('--files-cache', files_cache) if files_cache else ())
        + (('--remote-path', remote_path) if remote_path else ())
        + (('--umask', str(umask)) if umask else ())
@@ -276,6 +324,13 @@ def create_archive(
            output_log_level,
            output_file,
            borg_local_path=local_path,
            working_directory=working_directory,
        )

    return execute_command(
        full_command,
        output_log_level,
        output_file,
        borg_local_path=local_path,
        working_directory=working_directory,
    )


@@ -2,6 +2,7 @@ import logging
import os
import subprocess

from borgmatic.borg import feature
from borgmatic.execute import DO_NOT_CAPTURE, execute_command

logger = logging.getLogger(__name__)

@@ -61,6 +62,7 @@ def extract_archive(
    paths,
    location_config,
    storage_config,
    local_borg_version,
    local_path='borg',
    remote_path=None,
    destination_path=None,
@@ -70,9 +72,9 @@ def extract_archive(
):
    '''
    Given a dry-run flag, a local or remote repository path, an archive name, zero or more paths to
    restore from the archive, the local Borg version string, location/storage configuration dicts,
    optional local and remote Borg paths, and an optional destination path to extract to, extract
    the archive into the current directory.

    If extract to stdout is True, then start the extraction streaming to stdout, and return that
    extract process as an instance of subprocess.Popen.
@@ -83,10 +85,15 @@ def extract_archive(
    if progress and extract_to_stdout:
        raise ValueError('progress and extract_to_stdout cannot both be set')

    if feature.available(feature.Feature.NUMERIC_IDS, local_borg_version):
        numeric_ids_flags = ('--numeric-ids',) if location_config.get('numeric_owner') else ()
    else:
        numeric_ids_flags = ('--numeric-owner',) if location_config.get('numeric_owner') else ()

    full_command = (
        (local_path, 'extract')
        + (('--remote-path', remote_path) if remote_path else ())
        + numeric_ids_flags
        + (('--umask', str(umask)) if umask else ())
        + (('--lock-wait', str(lock_wait)) if lock_wait else ())
        + (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())

borgmatic/borg/feature.py (new file)

@@ -0,0 +1,28 @@
from enum import Enum

from pkg_resources import parse_version


class Feature(Enum):
    COMPACT = 1
    ATIME = 2
    NOFLAGS = 3
    NUMERIC_IDS = 4
    UPLOAD_RATELIMIT = 5


FEATURE_TO_MINIMUM_BORG_VERSION = {
    Feature.COMPACT: parse_version('1.2.0a2'),  # borg compact
    Feature.ATIME: parse_version('1.2.0a7'),  # borg create --atime
    Feature.NOFLAGS: parse_version('1.2.0a8'),  # borg create --noflags
    Feature.NUMERIC_IDS: parse_version('1.2.0b3'),  # borg create/extract/mount --numeric-ids
    Feature.UPLOAD_RATELIMIT: parse_version('1.2.0b3'),  # borg create --upload-ratelimit
}


def available(feature, borg_version):
    '''
    Given a Borg Feature constant and a Borg version string, return whether that feature is
    available in that version of Borg.
    '''
    return FEATURE_TO_MINIMUM_BORG_VERSION[feature] <= parse_version(borg_version)
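
# Illustrative only: probing for Borg 1.2 features by version string.
assert available(Feature.COMPACT, '1.2.0') is True
assert available(Feature.COMPACT, '1.1.17') is False
assert available(Feature.UPLOAD_RATELIMIT, '1.2.0b3') is True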


@@ -1,6 +1,8 @@
import argparse
import logging
import subprocess

from borgmatic.borg import info
from borgmatic.execute import DO_NOT_CAPTURE, execute_command

logger = logging.getLogger(__name__)

@@ -23,17 +25,14 @@ def initialize_repository(
    whether the repository should be append-only, and the storage quota to use, initialize the
    repository. If the repository already exists, then log and skip initialization.
    '''
    try:
        info.display_archives_info(
            repository,
            storage_config,
            argparse.Namespace(json=True, archive=None),
            local_path,
            remote_path,
        )
        logger.info('Repository already exists. Skipping initialization.')
        return
    except subprocess.CalledProcessError as error:


@@ -1,4 +1,6 @@
import copy
import logging
import re

from borgmatic.borg.flags import make_flags, make_flags_from_arguments
from borgmatic.execute import execute_command
@@ -6,17 +8,11 @@ from borgmatic.execute import execute_command

logger = logging.getLogger(__name__)


def resolve_archive_name(repository, archive, storage_config, local_path='borg', remote_path=None):
    '''
    Given a local or remote repository path, an archive name, a storage config dict, a local Borg
    path, and a remote Borg path, simply return the archive name. But if the archive name is
    "latest", then instead introspect the repository for the latest archive and return its name.

    Raise ValueError if "latest" is given but there are no archives in the repository.
    '''
@@ -31,7 +27,6 @@ def resolve_archive_name(repository, archive, storage_config, local_path='borg',
        + (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
        + make_flags('remote-path', remote_path)
        + make_flags('lock-wait', lock_wait)
        + make_flags('last', 1)
        + ('--short', repository)
    )
@@ -47,17 +42,20 @@ def resolve_archive_name(repository, archive, storage_config, local_path='borg',
    return latest_archive
MAKE_FLAGS_EXCLUDES = ('repository', 'archive', 'successful', 'paths', 'find_paths')


def make_list_command(
    repository, storage_config, list_arguments, local_path='borg', remote_path=None
):
    '''
    Given a local or remote repository path, a storage config dict, the arguments to the list
    action, and local and remote Borg paths, return a command as a tuple to list archives or paths
    within an archive.
    '''
    lock_wait = storage_config.get('lock_wait', None)

    return (
        (local_path, 'list')
        + (
            ('--info',)
@@ -71,19 +69,92 @@ def list_archives(repository, storage_config, list_arguments, local_path='borg',
        )
        + make_flags('remote-path', remote_path)
        + make_flags('lock-wait', lock_wait)
        + make_flags_from_arguments(list_arguments, excludes=MAKE_FLAGS_EXCLUDES,)
        + (
            ('::'.join((repository, list_arguments.archive)),)
            if list_arguments.archive
            else (repository,)
        )
        + (tuple(list_arguments.paths) if list_arguments.paths else ())
    )
def make_find_paths(find_paths):
    '''
    Given a sequence of path fragments or patterns as passed to `--find`, transform all path
    fragments into glob patterns. Pass through existing patterns untouched.

    For example, given find_paths of:

        ['foo.txt', 'pp:root/somedir']

    ... transform that into:

        ['sh:**/*foo.txt*/**', 'pp:root/somedir']
    '''
    if not find_paths:
        return ()

    return tuple(
        find_path
        if re.compile(r'([-!+RrPp] )|(\w\w:)').match(find_path)
        else f'sh:**/*{find_path}*/**'
        for find_path in find_paths
    )
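
# Illustrative only: plain fragments become shell-style globs, while anything that already
# looks like a Borg pattern (a style prefix like "pp:" or "sh:", or a prefix such as "R ")
# passes through untouched.
assert make_find_paths(()) == ()
assert make_find_paths(('foo.txt',)) == ('sh:**/*foo.txt*/**',)
assert make_find_paths(('pp:root/somedir', 'sh:home/*/.config')) == (
    'pp:root/somedir',
    'sh:home/*/.config',
)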
def list_archives(repository, storage_config, list_arguments, local_path='borg', remote_path=None):
    '''
    Given a local or remote repository path, a storage config dict, the arguments to the list
    action, and local and remote Borg paths, display the output of listing Borg archives in the
    repository or return JSON output. Or, if an archive name is given, list the files in that
    archive. Or, if list_arguments.find_paths are given, list the files by searching across multiple
    archives.
    '''
    # If there are any paths to find (and there's not a single archive already selected), start by
    # getting a list of archives to search.
    if list_arguments.find_paths and not list_arguments.archive:
        repository_arguments = copy.copy(list_arguments)
        repository_arguments.archive = None
        repository_arguments.json = False
        repository_arguments.format = None

        # Ask Borg to list archives. Capture its output for use below.
        archive_lines = tuple(
            execute_command(
                make_list_command(
                    repository, storage_config, repository_arguments, local_path, remote_path
                ),
                output_log_level=None,
                borg_local_path=local_path,
            )
            .strip('\n')
            .split('\n')
        )
    else:
        archive_lines = (list_arguments.archive,)

    # For each archive listed by Borg, run list on the contents of that archive.
    for archive_line in archive_lines:
        try:
            archive = archive_line.split()[0]
        except (AttributeError, IndexError):
            archive = None

        if archive:
            logger.warning(archive_line)

        archive_arguments = copy.copy(list_arguments)
        archive_arguments.archive = archive
        main_command = make_list_command(
            repository, storage_config, archive_arguments, local_path, remote_path
        ) + make_find_paths(list_arguments.find_paths)

        output = execute_command(
            main_command,
            output_log_level=None if list_arguments.json else logging.WARNING,
            borg_local_path=local_path,
        )

        if list_arguments.json:
            return output

borgmatic/borg/state.py (new file)

@@ -0,0 +1 @@
DEFAULT_BORGMATIC_SOURCE_DIRECTORY = '~/.borgmatic'

borgmatic/borg/version.py (new file)

@@ -0,0 +1,25 @@
import logging

from borgmatic.execute import execute_command

logger = logging.getLogger(__name__)


def local_borg_version(local_path='borg'):
    '''
    Given a local Borg binary path, return a version string for it.

    Raise OSError or CalledProcessError if there is a problem running Borg.
    Raise ValueError if the version cannot be parsed.
    '''
    full_command = (
        (local_path, '--version')
        + (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
        + (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
    )
    output = execute_command(full_command, output_log_level=None, borg_local_path=local_path)

    try:
        return output.split(' ')[1].strip()
    except IndexError:
        raise ValueError('Could not parse Borg version string')
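
# Illustrative only: "borg --version" prints output along the lines of "borg 1.2.0", from
# which the second whitespace-separated field is taken as the version string.
assert 'borg 1.2.0'.split(' ')[1].strip() == '1.2.0'
local_borg_version()  # => e.g. '1.2.0', assuming a Borg binary is on the PATH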


@@ -6,6 +6,7 @@ from borgmatic.config import collect
SUBPARSER_ALIASES = {
    'init': ['--init', '-I'],
    'prune': ['--prune', '-p'],
    'compact': [],
    'create': ['--create', '-C'],
    'check': ['--check', '-k'],
    'extract': ['--extract', '-x'],
@@ -15,17 +16,18 @@ SUBPARSER_ALIASES = {
    'restore': ['--restore', '-r'],
    'list': ['--list', '-l'],
    'info': ['--info', '-i'],
    'borg': [],
}
def parse_subparser_arguments(unparsed_arguments, subparsers):
    '''
    Given a sequence of arguments and a dict from subparser name to argparse.ArgumentParser
    instance, give each requested action's subparser a shot at parsing all arguments. This allows
    common arguments like "--repository" to be shared across multiple subparsers.

    Return the result as a tuple of (a dict mapping from subparser name to a parsed namespace of
    arguments, a list of remaining arguments not claimed by any subparser).
    '''
    arguments = collections.OrderedDict()
    remaining_arguments = list(unparsed_arguments)
@@ -35,7 +37,12 @@ def parse_subparser_arguments(unparsed_arguments, subparsers):
        for alias in aliases
    }

    # If the "borg" action is used, skip all other subparsers. This avoids confusion like
    # "borg list" triggering borgmatic's own list action.
    if 'borg' in unparsed_arguments:
        subparsers = {'borg': subparsers['borg']}

    for subparser_name, subparser in subparsers.items():
        if subparser_name not in remaining_arguments:
            continue
@@ -47,59 +54,45 @@ def parse_subparser_arguments(unparsed_arguments, subparsers):
        parsed, unused_remaining = subparser.parse_known_args(unparsed_arguments)

        for value in vars(parsed).values():
            if isinstance(value, str):
                if value in subparsers:
                    remaining_arguments.remove(value)
            elif isinstance(value, list):
                for item in value:
                    if item in subparsers:
                        remaining_arguments.remove(item)

        arguments[canonical_name] = parsed

    # If no actions are explicitly requested, assume defaults: prune, compact, create, and check.
    if not arguments and '--help' not in unparsed_arguments and '-h' not in unparsed_arguments:
        for subparser_name in ('prune', 'compact', 'create', 'check'):
            subparser = subparsers[subparser_name]
            parsed, unused_remaining = subparser.parse_known_args(unparsed_arguments)
            arguments[subparser_name] = parsed

    remaining_arguments = list(unparsed_arguments)

    # Now ask each subparser, one by one, to greedily consume arguments.
    for subparser_name, subparser in subparsers.items():
        if subparser_name not in arguments.keys():
            continue

        subparser = subparsers[subparser_name]
        unused_parsed, remaining_arguments = subparser.parse_known_args(remaining_arguments)

    # Special case: If "borg" is present in the arguments, consume all arguments after (+1) the
    # "borg" action.
    if 'borg' in arguments:
        borg_options_index = remaining_arguments.index('borg') + 1
        arguments['borg'].options = remaining_arguments[borg_options_index:]
        remaining_arguments = remaining_arguments[:borg_options_index]

    # Remove the subparser names themselves.
    for subparser_name, subparser in subparsers.items():
        if subparser_name in remaining_arguments:
            remaining_arguments.remove(subparser_name)

    return (arguments, remaining_arguments)
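
# Illustrative only: a sketch of driving the two-pass parsing above with parsers built by
# make_parsers() (defined below). The return value pairs a dict of per-action namespaces with
# whatever arguments no action's parser claimed, which are then left for the global parser.
top_level_parser, subparsers = make_parsers()
arguments, remaining_arguments = parse_subparser_arguments(
    ('prune', 'create', '--stats'), subparsers.choices
)
# arguments maps 'prune' and 'create' to their parsed namespaces; remaining_arguments holds
# anything unrecognized by those actions.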
class Extend_action(Action):
@@ -116,10 +109,9 @@ class Extend_action(Action):
        setattr(namespace, self.dest, list(values))


def make_parsers():
    '''
    Build a top-level parser and its subparsers and return them as a tuple.
    '''
    config_paths = collect.get_default_config_paths(expand_home=True)
    unexpanded_config_paths = collect.get_default_config_paths(expand_home=False)
@@ -196,6 +188,18 @@ def parse_arguments(*unparsed_arguments):
        action='extend',
        help='One or more configuration file options to override with specified values',
    )
    global_group.add_argument(
        '--no-environment-interpolation',
        dest='resolve_env',
        action='store_false',
        help='Do not resolve environment variables in configuration file',
    )
    global_group.add_argument(
        '--bash-completion',
        default=False,
        action='store_true',
        help='Show bash completion script and exit',
    )
    global_group.add_argument(
        '--version',
        dest='version',
@@ -207,8 +211,8 @@ def parse_arguments(*unparsed_arguments):
    top_level_parser = ArgumentParser(
        description='''
            Simple, configuration-driven backup software for servers and workstations. If none of
            the action options are given, then borgmatic defaults to: prune, compact, create, and
            check.
            ''',
        parents=[global_parser],
    )
@@ -216,7 +220,7 @@ def parse_arguments(*unparsed_arguments):
    subparsers = top_level_parser.add_subparsers(
        title='actions',
        metavar='',
        help='Specify zero or more actions. Defaults to prune, compact, create, and check. Use --help with action for details:',
    )
    init_parser = subparsers.add_parser(
        'init',
@@ -249,8 +253,8 @@ def parse_arguments(*unparsed_arguments):
    prune_parser = subparsers.add_parser(
        'prune',
        aliases=SUBPARSER_ALIASES['prune'],
        help='Prune archives according to the retention policy (with Borg 1.2+, run compact afterwards to actually free space)',
        description='Prune archives according to the retention policy (with Borg 1.2+, run compact afterwards to actually free space)',
        add_help=False,
    )
    prune_group = prune_parser.add_argument_group('prune arguments')
@@ -266,6 +270,38 @@ def parse_arguments(*unparsed_arguments):
    )
    prune_group.add_argument('-h', '--help', action='help', help='Show this help message and exit')
    compact_parser = subparsers.add_parser(
        'compact',
        aliases=SUBPARSER_ALIASES['compact'],
        help='Compact segments to free space (Borg 1.2+ only)',
        description='Compact segments to free space (Borg 1.2+ only)',
        add_help=False,
    )
    compact_group = compact_parser.add_argument_group('compact arguments')
    compact_group.add_argument(
        '--progress',
        dest='progress',
        default=False,
        action='store_true',
        help='Display progress as each segment is compacted',
    )
    compact_group.add_argument(
        '--cleanup-commits',
        dest='cleanup_commits',
        default=False,
        action='store_true',
        help='Cleanup commit-only 17-byte segment files left behind by Borg 1.1',
    )
    compact_group.add_argument(
        '--threshold',
        type=int,
        dest='threshold',
        help='Minimum saved space percentage threshold for compacting a segment, defaults to 10',
    )
    compact_group.add_argument(
        '-h', '--help', action='help', help='Show this help message and exit'
    )
    create_parser = subparsers.add_parser(
        'create',
        aliases=SUBPARSER_ALIASES['create'],
@@ -316,7 +352,7 @@ def parse_arguments(*unparsed_arguments):
        dest='repair',
        default=False,
        action='store_true',
        help='Attempt to repair any inconsistencies found (for interactive use)',
    )
    check_group.add_argument(
        '--only',
@@ -324,7 +360,13 @@ def parse_arguments(*unparsed_arguments):
        choices=('repository', 'archives', 'data', 'extract'),
        dest='only',
        action='append',
        help='Run a particular consistency check (repository, archives, data, or extract) instead of configured checks (subject to configured frequency, can specify flag multiple times)',
    )
    check_group.add_argument(
        '--force',
        default=False,
        action='store_true',
        help='Ignore configured check frequencies and run checks unconditionally',
    )
    check_group.add_argument('-h', '--help', action='help', help='Show this help message and exit')
@@ -510,8 +552,7 @@ def parse_arguments(*unparsed_arguments):
    )
    list_group = list_parser.add_argument_group('list arguments')
    list_group.add_argument(
        '--repository', help='Path of repository to list, defaults to the configured repositories',
    )
    list_group.add_argument('--archive', help='Name of archive to list (or "latest")')
    list_group.add_argument(
@@ -519,7 +560,14 @@ def parse_arguments(*unparsed_arguments):
        metavar='PATH',
        nargs='+',
        dest='paths',
        help='Paths or patterns to list from a single selected archive (via "--archive"), defaults to listing the entire archive',
    )
    list_group.add_argument(
        '--find',
        metavar='PATH',
        nargs='+',
        dest='find_paths',
        help='Partial paths or patterns to search for and list across multiple archives',
    )
    list_group.add_argument(
        '--short', default=False, action='store_true', help='Output only archive or path names'
@@ -536,9 +584,9 @@ def parse_arguments(*unparsed_arguments):
    )
    list_group.add_argument(
        '--successful',
        default=True,
        action='store_true',
        help='Deprecated in favor of listing successful (non-checkpoint) backups by default in newer versions of Borg',
    )
    list_group.add_argument(
        '--sort-by', metavar='KEYS', help='Comma-separated list of sorting keys'
@@ -601,8 +649,42 @@ def parse_arguments(*unparsed_arguments):
    )
    info_group.add_argument('-h', '--help', action='help', help='Show this help message and exit')
@ -601,8 +649,42 @@ def parse_arguments(*unparsed_arguments):
) )
info_group.add_argument('-h', '--help', action='help', help='Show this help message and exit') info_group.add_argument('-h', '--help', action='help', help='Show this help message and exit')
arguments = parse_subparser_arguments(unparsed_arguments, subparsers) borg_parser = subparsers.add_parser(
arguments['global'] = parse_global_arguments(unparsed_arguments, top_level_parser, subparsers) 'borg',
aliases=SUBPARSER_ALIASES['borg'],
help='Run an arbitrary Borg command',
description='Run an arbitrary Borg command based on borgmatic\'s configuration',
add_help=False,
)
borg_group = borg_parser.add_argument_group('borg arguments')
borg_group.add_argument(
'--repository',
help='Path of repository to pass to Borg, defaults to the configured repositories',
)
borg_group.add_argument('--archive', help='Name of archive to pass to Borg (or "latest")')
borg_group.add_argument(
'--',
metavar='OPTION',
dest='options',
nargs='+',
help='Options to pass to Borg, command first ("create", "list", etc). "--" is optional. To specify the repository or the archive, you must use --repository or --archive instead of providing them here.',
)
borg_group.add_argument('-h', '--help', action='help', help='Show this help message and exit')
return top_level_parser, subparsers
def parse_arguments(*unparsed_arguments):
'''
Given command-line arguments with which this script was invoked, parse the arguments and return
them as a dict mapping from subparser name (or "global") to an argparse.Namespace instance.
'''
top_level_parser, subparsers = make_parsers()
arguments, remaining_arguments = parse_subparser_arguments(
unparsed_arguments, subparsers.choices
)
arguments['global'] = top_level_parser.parse_args(remaining_arguments)
if arguments['global'].excludes_filename: if arguments['global'].excludes_filename:
raise ValueError( raise ValueError(
@ -612,9 +694,6 @@ def parse_arguments(*unparsed_arguments):
if 'init' in arguments and arguments['global'].dry_run: if 'init' in arguments and arguments['global'].dry_run:
raise ValueError('The init action cannot be used with the --dry-run option') raise ValueError('The init action cannot be used with the --dry-run option')
if 'list' in arguments and arguments['list'].glob_archives and arguments['list'].successful:
raise ValueError('The --glob-archives and --successful options cannot be used together')
if ( if (
'list' in arguments 'list' in arguments
and 'info' in arguments and 'info' in arguments


@@ -4,22 +4,29 @@ import json
import logging
import os
import sys
import time
from queue import Queue
from subprocess import CalledProcessError

import colorama
import pkg_resources

import borgmatic.commands.completion
from borgmatic.borg import borg as borg_borg
from borgmatic.borg import check as borg_check
from borgmatic.borg import compact as borg_compact
from borgmatic.borg import create as borg_create
from borgmatic.borg import environment as borg_environment
from borgmatic.borg import export_tar as borg_export_tar
from borgmatic.borg import extract as borg_extract
from borgmatic.borg import feature as borg_feature
from borgmatic.borg import info as borg_info
from borgmatic.borg import init as borg_init
from borgmatic.borg import list as borg_list
from borgmatic.borg import mount as borg_mount
from borgmatic.borg import prune as borg_prune
from borgmatic.borg import umount as borg_umount
from borgmatic.borg import version as borg_version
from borgmatic.commands.arguments import parse_arguments
from borgmatic.config import checks, collect, convert, validate
from borgmatic.hooks import command, dispatch, dump, monitor
@ -35,8 +42,8 @@ LEGACY_CONFIG_PATH = '/etc/borgmatic/config'
def run_configuration(config_filename, config, arguments): def run_configuration(config_filename, config, arguments):
''' '''
Given a config filename, the corresponding parsed config dict, and command-line arguments as a Given a config filename, the corresponding parsed config dict, and command-line arguments as a
dict from subparser name to a namespace of parsed arguments, execute its defined pruning, dict from subparser name to a namespace of parsed arguments, execute the defined prune, compact,
backups, consistency checks, and/or other actions. create, check, and/or other actions.
Yield a combination of: Yield a combination of:
@ -51,14 +58,24 @@ def run_configuration(config_filename, config, arguments):
local_path = location.get('local_path', 'borg') local_path = location.get('local_path', 'borg')
remote_path = location.get('remote_path') remote_path = location.get('remote_path')
retries = storage.get('retries', 0)
retry_wait = storage.get('retry_wait', 0)
borg_environment.initialize(storage) borg_environment.initialize(storage)
encountered_error = None encountered_error = None
error_repository = '' error_repository = ''
prune_create_or_check = {'prune', 'create', 'check'}.intersection(arguments) using_primary_action = {'prune', 'compact', 'create', 'check'}.intersection(arguments)
monitoring_log_level = verbosity_to_log_level(global_arguments.monitoring_verbosity) monitoring_log_level = verbosity_to_log_level(global_arguments.monitoring_verbosity)
try: try:
if prune_create_or_check: local_borg_version = borg_version.local_borg_version(local_path)
except (OSError, CalledProcessError, ValueError) as error:
yield from log_error_records(
'{}: Error getting local Borg version'.format(config_filename), error
)
return
try:
if using_primary_action:
dispatch.call_hooks( dispatch.call_hooks(
'initialize_monitor', 'initialize_monitor',
hooks, hooks,
@ -67,39 +84,7 @@ def run_configuration(config_filename, config, arguments):
monitoring_log_level, monitoring_log_level,
global_arguments.dry_run, global_arguments.dry_run,
) )
if 'prune' in arguments: if using_primary_action:
command.execute_hook(
hooks.get('before_prune'),
hooks.get('umask'),
config_filename,
'pre-prune',
global_arguments.dry_run,
)
if 'create' in arguments:
command.execute_hook(
hooks.get('before_backup'),
hooks.get('umask'),
config_filename,
'pre-backup',
global_arguments.dry_run,
)
if 'check' in arguments:
command.execute_hook(
hooks.get('before_check'),
hooks.get('umask'),
config_filename,
'pre-check',
global_arguments.dry_run,
)
if 'extract' in arguments:
command.execute_hook(
hooks.get('before_extract'),
hooks.get('umask'),
config_filename,
'pre-extract',
global_arguments.dry_run,
)
if prune_create_or_check:
dispatch.call_hooks( dispatch.call_hooks(
'ping_monitor', 'ping_monitor',
hooks, hooks,
@ -114,15 +99,23 @@ def run_configuration(config_filename, config, arguments):
return return
encountered_error = error encountered_error = error
yield from make_error_log_records( yield from log_error_records('{}: Error pinging monitor'.format(config_filename), error)
'{}: Error running pre hook'.format(config_filename), error
)
if not encountered_error: if not encountered_error:
for repository_path in location['repositories']: repo_queue = Queue()
for repo in location['repositories']:
repo_queue.put((repo, 0),)
while not repo_queue.empty():
repository_path, retry_num = repo_queue.get()
timeout = retry_num * retry_wait
if timeout:
logger.warning(f'{config_filename}: Sleeping {timeout}s before next retry')
time.sleep(timeout)
try: try:
yield from run_actions( yield from run_actions(
arguments=arguments, arguments=arguments,
config_filename=config_filename,
location=location, location=location,
storage=storage, storage=storage,
retention=retention, retention=retention,
@ -130,58 +123,37 @@ def run_configuration(config_filename, config, arguments):
hooks=hooks, hooks=hooks,
local_path=local_path, local_path=local_path,
remote_path=remote_path, remote_path=remote_path,
local_borg_version=local_borg_version,
repository_path=repository_path, repository_path=repository_path,
) )
except (OSError, CalledProcessError, ValueError) as error: except (OSError, CalledProcessError, ValueError) as error:
encountered_error = error if retry_num < retries:
error_repository = repository_path repo_queue.put((repository_path, retry_num + 1),)
yield from make_error_log_records( tuple( # Consume the generator so as to trigger logging.
log_error_records(
'{}: Error running actions for repository'.format(repository_path),
error,
levelno=logging.WARNING,
log_command_error_output=True,
)
)
logger.warning(
f'{config_filename}: Retrying... attempt {retry_num + 1}/{retries}'
)
continue
if command.considered_soft_failure(config_filename, error):
return
yield from log_error_records(
'{}: Error running actions for repository'.format(repository_path), error '{}: Error running actions for repository'.format(repository_path), error
) )
encountered_error = error
error_repository = repository_path
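For reference, the queueing logic above amounts to a simple linear-backoff retry loop: each repository is queued with a retry count, failures are re-queued until `retries` is exhausted, and each retry sleeps `retry_num * retry_wait` seconds first. A standalone sketch (not borgmatic's API; the names and the `OSError` choice are illustrative):

```python
import time
from queue import Queue

def run_with_retries(repositories, action, retries=2, retry_wait=5):
    # Queue each repository along with its current retry count.
    repo_queue = Queue()
    for repo in repositories:
        repo_queue.put((repo, 0))
    while not repo_queue.empty():
        repository, retry_num = repo_queue.get()
        if retry_num:
            time.sleep(retry_num * retry_wait)  # linear backoff, as above
        try:
            action(repository)
        except OSError:
            if retry_num < retries:
                repo_queue.put((repository, retry_num + 1))  # re-queue for another try
```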
 if not encountered_error:
 try:
-if 'prune' in arguments:
-command.execute_hook(
-hooks.get('after_prune'),
-hooks.get('umask'),
-config_filename,
-'post-prune',
-global_arguments.dry_run,
-)
-if 'create' in arguments:
-dispatch.call_hooks(
-'remove_database_dumps',
-hooks,
-config_filename,
-dump.DATABASE_HOOK_NAMES,
-location,
-global_arguments.dry_run,
-)
-command.execute_hook(
-hooks.get('after_backup'),
-hooks.get('umask'),
-config_filename,
-'post-backup',
-global_arguments.dry_run,
-)
-if 'check' in arguments:
-command.execute_hook(
-hooks.get('after_check'),
-hooks.get('umask'),
-config_filename,
-'post-check',
-global_arguments.dry_run,
-)
-if 'extract' in arguments:
-command.execute_hook(
-hooks.get('after_extract'),
-hooks.get('umask'),
-config_filename,
-'post-extract',
-global_arguments.dry_run,
-)
-if prune_create_or_check:
+if using_primary_action:
 dispatch.call_hooks(
 'ping_monitor',
 hooks,
@@ -204,11 +176,9 @@ def run_configuration(config_filename, config, arguments):
 return
 encountered_error = error
-yield from make_error_log_records(
-'{}: Error running post hook'.format(config_filename), error
-)
+yield from log_error_records('{}: Error pinging monitor'.format(config_filename), error)
-if encountered_error and prune_create_or_check:
+if encountered_error and using_primary_action:
 try:
 command.execute_hook(
 hooks.get('on_error'),
@@ -241,7 +211,7 @@ def run_configuration(config_filename, config, arguments):
 if command.considered_soft_failure(config_filename, error):
 return
-yield from make_error_log_records(
+yield from log_error_records(
 '{}: Error running on-error hook'.format(config_filename), error
 )
@@ -249,6 +219,7 @@ def run_configuration(config_filename, config, arguments):
 def run_actions(
 *,
 arguments,
+config_filename,
 location,
 storage,
 retention,
@@ -256,21 +227,30 @@ def run_actions(
 hooks,
 local_path,
 remote_path,
-repository_path
-): # pragma: no cover
+local_borg_version,
+repository_path,
+):
 '''
-Given parsed command-line arguments as an argparse.ArgumentParser instance, several different
-configuration dicts, local and remote paths to Borg, and a repository name, run all actions
-from the command-line arguments on the given repository.
+Given parsed command-line arguments as an argparse.ArgumentParser instance, the configuration
+filename, several different configuration dicts, local and remote paths to Borg, a local Borg
+version string, and a repository name, run all actions from the command-line arguments on the
+given repository.
 Yield JSON output strings from executing any actions that produce JSON.
 Raise OSError or subprocess.CalledProcessError if an error occurs running a command for an
-action. Raise ValueError if the arguments or configuration passed to action are invalid.
+action or a hook. Raise ValueError if the arguments or configuration passed to action are
+invalid.
 '''
 repository = os.path.expanduser(repository_path)
 global_arguments = arguments['global']
 dry_run_label = ' (dry run; not making any changes)' if global_arguments.dry_run else ''
+hook_context = {
+'repository': repository_path,
+# Deprecated: For backwards compatibility with borgmatic < 1.6.0.
+'repositories': ','.join(location['repositories']),
+}
 if 'init' in arguments:
 logger.info('{}: Initializing repository'.format(repository))
 borg_init.initialize_repository(
@@ -283,6 +263,14 @@ def run_actions(
 remote_path=remote_path,
 )
 if 'prune' in arguments:
+command.execute_hook(
+hooks.get('before_prune'),
+hooks.get('umask'),
+config_filename,
+'pre-prune',
+global_arguments.dry_run,
+**hook_context,
+)
 logger.info('{}: Pruning archives{}'.format(repository, dry_run_label))
 borg_prune.prune_archives(
 global_arguments.dry_run,
@@ -294,7 +282,54 @@ def run_actions(
 stats=arguments['prune'].stats,
 files=arguments['prune'].files,
 )
+command.execute_hook(
+hooks.get('after_prune'),
+hooks.get('umask'),
+config_filename,
+'post-prune',
+global_arguments.dry_run,
+**hook_context,
+)
+if 'compact' in arguments:
+command.execute_hook(
+hooks.get('before_compact'),
+hooks.get('umask'),
+config_filename,
+'pre-compact',
+global_arguments.dry_run,
+)
+if borg_feature.available(borg_feature.Feature.COMPACT, local_borg_version):
+logger.info('{}: Compacting segments{}'.format(repository, dry_run_label))
+borg_compact.compact_segments(
+global_arguments.dry_run,
+repository,
+storage,
+local_path=local_path,
+remote_path=remote_path,
+progress=arguments['compact'].progress,
+cleanup_commits=arguments['compact'].cleanup_commits,
+threshold=arguments['compact'].threshold,
+)
+else: # pragma: nocover
+logger.info(
+'{}: Skipping compact (only available/needed in Borg 1.2+)'.format(repository)
+)
+command.execute_hook(
+hooks.get('after_compact'),
+hooks.get('umask'),
+config_filename,
+'post-compact',
+global_arguments.dry_run,
+)
 if 'create' in arguments:
+command.execute_hook(
+hooks.get('before_backup'),
+hooks.get('umask'),
+config_filename,
+'pre-backup',
+global_arguments.dry_run,
+**hook_context,
+)
 logger.info('{}: Creating archive{}'.format(repository, dry_run_label))
 dispatch.call_hooks(
 'remove_database_dumps',
@@ -319,6 +354,7 @@ def run_actions(
 repository,
 location,
 storage,
+local_borg_version,
 local_path=local_path,
 remote_path=remote_path,
 progress=arguments['create'].progress,
@@ -327,13 +363,39 @@ def run_actions(
 files=arguments['create'].files,
 stream_processes=stream_processes,
 )
-if json_output:
+if json_output: # pragma: nocover
 yield json.loads(json_output)
+dispatch.call_hooks(
+'remove_database_dumps',
+hooks,
+config_filename,
+dump.DATABASE_HOOK_NAMES,
+location,
+global_arguments.dry_run,
+)
+command.execute_hook(
+hooks.get('after_backup'),
+hooks.get('umask'),
+config_filename,
+'post-backup',
+global_arguments.dry_run,
+**hook_context,
+)
 if 'check' in arguments and checks.repository_enabled_for_checks(repository, consistency):
+command.execute_hook(
+hooks.get('before_check'),
+hooks.get('umask'),
+config_filename,
+'pre-check',
+global_arguments.dry_run,
+**hook_context,
+)
 logger.info('{}: Running consistency checks'.format(repository))
 borg_check.check_archives(
 repository,
+location,
 storage,
 consistency,
 local_path=local_path,
@@ -341,8 +403,25 @@ def run_actions(
 progress=arguments['check'].progress,
 repair=arguments['check'].repair,
 only_checks=arguments['check'].only,
+force=arguments['check'].force,
+)
+command.execute_hook(
+hooks.get('after_check'),
+hooks.get('umask'),
+config_filename,
+'post-check',
+global_arguments.dry_run,
+**hook_context,
 )
 if 'extract' in arguments:
+command.execute_hook(
+hooks.get('before_extract'),
+hooks.get('umask'),
+config_filename,
+'pre-extract',
+global_arguments.dry_run,
+**hook_context,
+)
 if arguments['extract'].repository is None or validate.repositories_match(
 repository, arguments['extract'].repository
 ):
@@ -358,12 +437,21 @@ def run_actions(
 arguments['extract'].paths,
 location,
 storage,
+local_borg_version,
 local_path=local_path,
 remote_path=remote_path,
 destination_path=arguments['extract'].destination,
 strip_components=arguments['extract'].strip_components,
 progress=arguments['extract'].progress,
 )
+command.execute_hook(
+hooks.get('after_extract'),
+hooks.get('umask'),
+config_filename,
+'post-extract',
+global_arguments.dry_run,
+**hook_context,
+)
 if 'export-tar' in arguments:
 if arguments['export-tar'].repository is None or validate.repositories_match(
 repository, arguments['export-tar'].repository
@@ -396,7 +484,7 @@ def run_actions(
 logger.info(
 '{}: Mounting archive {}'.format(repository, arguments['mount'].archive)
 )
-else:
+else: # pragma: nocover
 logger.info('{}: Mounting repository'.format(repository))
 borg_mount.mount_archive(
@@ -412,7 +500,7 @@ def run_actions(
 local_path=local_path,
 remote_path=remote_path,
 )
-if 'restore' in arguments:
+if 'restore' in arguments: # pragma: nocover
 if arguments['restore'].repository is None or validate.repositories_match(
 repository, arguments['restore'].repository
 ):
@@ -466,6 +554,7 @@ def run_actions(
 paths=dump.convert_glob_patterns_to_borg_patterns([dump_pattern]),
 location_config=location,
 storage_config=storage,
+local_borg_version=local_borg_version,
 local_path=local_path,
 remote_path=remote_path,
 destination_path='/',
@@ -510,7 +599,7 @@ def run_actions(
 repository, arguments['list'].repository
 ):
 list_arguments = copy.copy(arguments['list'])
-if not list_arguments.json:
+if not list_arguments.json: # pragma: nocover
 logger.warning('{}: Listing archives'.format(repository))
 list_arguments.archive = borg_list.resolve_archive_name(
 repository, list_arguments.archive, storage, local_path, remote_path
@@ -522,14 +611,14 @@ def run_actions(
 local_path=local_path,
 remote_path=remote_path,
 )
-if json_output:
+if json_output: # pragma: nocover
 yield json.loads(json_output)
 if 'info' in arguments:
 if arguments['info'].repository is None or validate.repositories_match(
 repository, arguments['info'].repository
 ):
 info_arguments = copy.copy(arguments['info'])
-if not info_arguments.json:
+if not info_arguments.json: # pragma: nocover
 logger.warning('{}: Displaying summary info for archives'.format(repository))
 info_arguments.archive = borg_list.resolve_archive_name(
 repository, info_arguments.archive, storage, local_path, remote_path
@@ -541,11 +630,27 @@ def run_actions(
 local_path=local_path,
 remote_path=remote_path,
 )
-if json_output:
+if json_output: # pragma: nocover
 yield json.loads(json_output)
+if 'borg' in arguments:
+if arguments['borg'].repository is None or validate.repositories_match(
+repository, arguments['borg'].repository
+):
+logger.warning('{}: Running arbitrary Borg command'.format(repository))
+archive_name = borg_list.resolve_archive_name(
+repository, arguments['borg'].archive, storage, local_path, remote_path
+)
+borg_borg.run_arbitrary_borg(
+repository,
+storage,
+options=arguments['borg'].options,
+archive=archive_name,
+local_path=local_path,
+remote_path=remote_path,
+)
-def load_configurations(config_filenames, overrides=None):
+def load_configurations(config_filenames, overrides=None, resolve_env=True):
 '''
 Given a sequence of configuration filenames, load and validate each configuration file. Return
 the results as a tuple of: dict of configuration filename to corresponding parsed configuration,
@@ -559,7 +664,21 @@ def load_configurations(config_filenames, overrides=None):
 for config_filename in config_filenames:
 try:
 configs[config_filename] = validate.parse_configuration(
-config_filename, validate.schema_filename(), overrides
+config_filename, validate.schema_filename(), overrides, resolve_env
+)
+except PermissionError:
+logs.extend(
+[
+logging.makeLogRecord(
+dict(
+levelno=logging.WARNING,
+levelname='WARNING',
+msg='{}: Insufficient permissions to read configuration file'.format(
+config_filename
+),
+)
+),
+]
 )
 except (ValueError, OSError, validate.Validation_error) as error:
 logs.extend(
@@ -593,28 +712,39 @@ def log_record(suppress_log=False, **kwargs):
 return record
-def make_error_log_records(message, error=None):
+def log_error_records(
+message, error=None, levelno=logging.CRITICAL, log_command_error_output=False
+):
 '''
-Given error message text and an optional exception object, yield a series of logging.LogRecord
-instances with error summary information. As a side effect, log each record.
+Given error message text, an optional exception object, an optional log level, and whether to
+log the error output of a CalledProcessError (if any), log error summary information and also
+yield it as a series of logging.LogRecord instances.
+Note that because the logs are yielded as a generator, logs won't get logged unless you consume
+the generator output.
 '''
+level_name = logging._levelToName[levelno]
 if not error:
-yield log_record(levelno=logging.CRITICAL, levelname='CRITICAL', msg=message)
+yield log_record(levelno=levelno, levelname=level_name, msg=message)
 return
 try:
 raise error
 except CalledProcessError as error:
-yield log_record(levelno=logging.CRITICAL, levelname='CRITICAL', msg=message)
+yield log_record(levelno=levelno, levelname=level_name, msg=message)
 if error.output:
 # Suppress these logs for now and save full error output for the log summary at the end.
 yield log_record(
-levelno=logging.CRITICAL, levelname='CRITICAL', msg=error.output, suppress_log=True
+levelno=levelno,
+levelname=level_name,
+msg=error.output,
+suppress_log=not log_command_error_output,
 )
-yield log_record(levelno=logging.CRITICAL, levelname='CRITICAL', msg=error)
+yield log_record(levelno=levelno, levelname=level_name, msg=error)
 except (ValueError, OSError) as error:
-yield log_record(levelno=logging.CRITICAL, levelname='CRITICAL', msg=message)
-yield log_record(levelno=logging.CRITICAL, levelname='CRITICAL', msg=error)
+yield log_record(levelno=levelno, levelname=level_name, msg=message)
+yield log_record(levelno=levelno, levelname=level_name, msg=error)
 except: # noqa: E722
 # Raising above only as a means of determining the error type. Swallow the exception here
 # because we don't want the exception to propagate out of this function.
@@ -653,11 +783,11 @@ def collect_configuration_run_summary_logs(configs, arguments):
 try:
 validate.guard_configuration_contains_repository(repository, configs)
 except ValueError as error:
-yield from make_error_log_records(str(error))
+yield from log_error_records(str(error))
 return
 if not configs:
-yield from make_error_log_records(
+yield from log_error_records(
 '{}: No valid configuration files found'.format(
 ' '.join(arguments['global'].config_paths)
 )
@@ -676,7 +806,7 @@ def collect_configuration_run_summary_logs(configs, arguments):
 arguments['global'].dry_run,
 )
 except (CalledProcessError, ValueError, OSError) as error:
-yield from make_error_log_records('Error running pre-everything hook', error)
+yield from log_error_records('Error running pre-everything hook', error)
 return
 # Execute the actions corresponding to each configuration file.
@@ -686,7 +816,7 @@ def collect_configuration_run_summary_logs(configs, arguments):
 error_logs = tuple(result for result in results if isinstance(result, logging.LogRecord))
 if error_logs:
-yield from make_error_log_records(
+yield from log_error_records(
 '{}: Error running configuration file'.format(config_filename)
 )
 yield from error_logs
@@ -708,7 +838,7 @@ def collect_configuration_run_summary_logs(configs, arguments):
 mount_point=arguments['umount'].mount_point, local_path=get_local_path(configs)
 )
 except (CalledProcessError, OSError) as error:
-yield from make_error_log_records('Error unmounting mount point', error)
+yield from log_error_records('Error unmounting mount point', error)
 if json_results:
 sys.stdout.write(json.dumps(json_results))
@@ -725,7 +855,7 @@ def collect_configuration_run_summary_logs(configs, arguments):
 arguments['global'].dry_run,
 )
 except (CalledProcessError, ValueError, OSError) as error:
-yield from make_error_log_records('Error running post-everything hook', error)
+yield from log_error_records('Error running post-everything hook', error)
 def exit_with_help_link(): # pragma: no cover
@@ -757,9 +887,14 @@ def main(): # pragma: no cover
 if global_arguments.version:
 print(pkg_resources.require('borgmatic')[0].version)
 sys.exit(0)
+if global_arguments.bash_completion:
+print(borgmatic.commands.completion.bash_completion())
+sys.exit(0)
 config_filenames = tuple(collect.collect_config_filenames(global_arguments.config_paths))
-configs, parse_logs = load_configurations(config_filenames, global_arguments.overrides)
+configs, parse_logs = load_configurations(
+config_filenames, global_arguments.overrides, global_arguments.resolve_env
+)
 any_json_flags = any(
 getattr(sub_arguments, 'json', False) for sub_arguments in arguments.values()

borgmatic/commands/completion.py

@@ -0,0 +1,57 @@
from borgmatic.commands import arguments
UPGRADE_MESSAGE = '''
Your bash completions script is from a different version of borgmatic than is
currently installed. Please upgrade your script so your completions match the
command-line flags in your installed borgmatic! Try this to upgrade:
sudo sh -c "borgmatic --bash-completion > $BASH_SOURCE"
source $BASH_SOURCE
'''
def parser_flags(parser):
'''
Given an argparse.ArgumentParser instance, return its argument flags in a space-separated
string.
'''
return ' '.join(option for action in parser._actions for option in action.option_strings)
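parser_flags() leans on argparse internals: every registered flag lives on `parser._actions`, a private but long-stable attribute. A tiny standalone demonstration with a toy parser:

```python
import argparse

parser = argparse.ArgumentParser(add_help=False)
parser.add_argument('--repository')
parser.add_argument('--archive', '-a')

# Mirrors the expression in parser_flags() above.
print(' '.join(option for action in parser._actions for option in action.option_strings))
# --repository --archive -a
```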
def bash_completion():
'''
Return a bash completion script for the borgmatic command. Produce this by introspecting
borgmatic's command-line argument parsers.
'''
top_level_parser, subparsers = arguments.make_parsers()
global_flags = parser_flags(top_level_parser)
actions = ' '.join(subparsers.choices.keys())
# Avert your eyes.
return '\n'.join(
(
'check_version() {',
' local this_script="$(cat "$BASH_SOURCE" 2> /dev/null)"',
' local installed_script="$(borgmatic --bash-completion 2> /dev/null)"',
' if [ "$this_script" != "$installed_script" ] && [ "$installed_script" != "" ];'
' then cat << EOF\n%s\nEOF' % UPGRADE_MESSAGE,
' fi',
'}',
'complete_borgmatic() {',
)
+ tuple(
''' if [[ " ${COMP_WORDS[*]} " =~ " %s " ]]; then
COMPREPLY=($(compgen -W "%s %s %s" -- "${COMP_WORDS[COMP_CWORD]}"))
return 0
fi'''
% (action, parser_flags(subparser), actions, global_flags)
for action, subparser in subparsers.choices.items()
)
+ (
' COMPREPLY=($(compgen -W "%s %s" -- "${COMP_WORDS[COMP_CWORD]}"))'
% (actions, global_flags),
' (check_version &)',
'}',
'\ncomplete -o bashdefault -o default -F complete_borgmatic borgmatic',
)
)

borgmatic/commands/generate_config.py

@@ -23,10 +23,16 @@ def parse_arguments(*arguments):
 '--destination',
 dest='destination_filename',
 default=DEFAULT_DESTINATION_CONFIG_FILENAME,
-help='Destination YAML configuration file. Default: {}'.format(
+help='Destination YAML configuration file, default: {}'.format(
 DEFAULT_DESTINATION_CONFIG_FILENAME
 ),
 )
+parser.add_argument(
+'--overwrite',
+default=False,
+action='store_true',
+help='Whether to overwrite any existing destination file, defaults to false',
+)
 return parser.parse_args(arguments)
@@ -36,7 +42,10 @@ def main(): # pragma: no cover
 args = parse_arguments(*sys.argv[1:])
 generate.generate_sample_configuration(
-args.source_filename, args.destination_filename, validate.schema_filename()
+args.source_filename,
+args.destination_filename,
+validate.schema_filename(),
+overwrite=args.overwrite,
 )
 print('Generated a sample configuration file at {}.'.format(args.destination_filename))

borgmatic/config/convert.py

@@ -17,7 +17,7 @@ def _convert_section(source_section_config, section_schema):
 (
 option_name,
 int(option_value)
-if section_schema['map'].get(option_name, {}).get('type') == 'int'
+if section_schema['properties'].get(option_name, {}).get('type') == 'integer'
 else option_value,
 )
 for option_name, option_value in source_section_config.items()
@@ -38,7 +38,7 @@ def convert_legacy_parsed_config(source_config, source_excludes, schema):
 '''
 destination_config = yaml.comments.CommentedMap(
 [
-(section_name, _convert_section(section_config, schema['map'][section_name]))
+(section_name, _convert_section(section_config, schema['properties'][section_name]))
 for section_name, section_config in source_config._asdict().items()
 ]
 )
@@ -54,11 +54,11 @@ def convert_legacy_parsed_config(source_config, source_excludes, schema):
 destination_config['consistency']['checks'] = source_config.consistency['checks'].split(' ')
 # Add comments to each section, and then add comments to the fields in each section.
-generate.add_comments_to_configuration_map(destination_config, schema)
+generate.add_comments_to_configuration_object(destination_config, schema)
 for section_name, section_config in destination_config.items():
-generate.add_comments_to_configuration_map(
-section_config, schema['map'][section_name], indent=generate.INDENT
+generate.add_comments_to_configuration_object(
+section_config, schema['properties'][section_name], indent=generate.INDENT
 )
 return destination_config

borgmatic/config/environment.py

@@ -0,0 +1,40 @@
import os
import re
_VARIABLE_PATTERN = re.compile(r'(?P<escape>\\)?(?P<variable>\$\{(?P<name>[A-Za-z0-9_]+)((:?-)(?P<default>[^}]+))?\})')
def _resolve_string(matcher):
'''
Get the value from the environment for a matcher containing a variable name and an optional
default value. If the variable is not defined in the environment and no default value is
provided, a ValueError is raised.
'''
if matcher.group('escape') is not None:
# in case of escaped envvar, unescape it
return matcher.group('variable')
# resolve the env var
name, default = matcher.group('name'), matcher.group('default')
out = os.getenv(name, default=default)
if out is None:
raise ValueError('Cannot find variable ${name} in environment'.format(name=name))
return out
def resolve_env_variables(item):
'''
Resolve variables like ${FOO} in the given configuration with values from the process environment.
Supported formats:
- ${FOO} resolves to the FOO environment variable
- ${FOO-bar} or ${FOO:-bar} resolves to the FOO environment variable if it exists, else "bar"
If any variable is missing from the environment and no default value is provided, a ValueError is raised.
'''
if isinstance(item, str):
return _VARIABLE_PATTERN.sub(_resolve_string, item)
if isinstance(item, list):
for i, subitem in enumerate(item):
item[i] = resolve_env_variables(subitem)
if isinstance(item, dict):
for key, value in item.items():
item[key] = resolve_env_variables(value)
return item
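A minimal sketch of the resolver above in action (the variable names and config keys are illustrative only):

```python
import os
from borgmatic.config import environment

os.environ['FOO'] = 'swordfish'
config = {
    'storage': {
        'encryption_passphrase': '${FOO}',        # resolved from the environment
        'archive_name_format': '${BAR:-backup}',  # BAR is unset, so the default applies
    }
}
environment.resolve_env_variables(config)
print(config['storage'])
# {'encryption_passphrase': 'swordfish', 'archive_name_format': 'backup'}
```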

borgmatic/config/generate.py

@@ -5,7 +5,7 @@ import re
 from ruamel import yaml
-from borgmatic.config import load
+from borgmatic.config import load, normalize
 INDENT = 4
 SEQUENCE_INDENT = 2
@@ -24,29 +24,27 @@ def _insert_newline_before_comment(config, field_name):
 def _schema_to_sample_configuration(schema, level=0, parent_is_sequence=False):
 '''
 Given a loaded configuration schema, generate and return sample config for it. Include comments
-for each section based on the schema "desc" description.
+for each section based on the schema "description".
 '''
+schema_type = schema.get('type')
 example = schema.get('example')
 if example is not None:
 return example
-if 'seq' in schema:
+if schema_type == 'array':
 config = yaml.comments.CommentedSeq(
-[
-_schema_to_sample_configuration(item_schema, level, parent_is_sequence=True)
-for item_schema in schema['seq']
-]
+[_schema_to_sample_configuration(schema['items'], level, parent_is_sequence=True)]
 )
 add_comments_to_configuration_sequence(config, schema, indent=(level * INDENT))
-elif 'map' in schema:
+elif schema_type == 'object':
 config = yaml.comments.CommentedMap(
 [
 (field_name, _schema_to_sample_configuration(sub_schema, level + 1))
-for field_name, sub_schema in schema['map'].items()
+for field_name, sub_schema in schema['properties'].items()
 ]
 )
 indent = (level * INDENT) + (SEQUENCE_INDENT if parent_is_sequence else 0)
-add_comments_to_configuration_map(
+add_comments_to_configuration_object(
 config, schema, indent=indent, skip_first=parent_is_sequence
 )
 else:
@@ -111,13 +109,18 @@ def render_configuration(config):
 return rendered.getvalue()
-def write_configuration(config_filename, rendered_config, mode=0o600):
+def write_configuration(config_filename, rendered_config, mode=0o600, overwrite=False):
 '''
 Given a target config filename and rendered config YAML, write it out to file. Create any
-containing directories as needed.
+containing directories as needed. But if the file already exists and overwrite is False,
+abort before writing anything.
 '''
-if os.path.exists(config_filename):
-raise FileExistsError('{} already exists. Aborting.'.format(config_filename))
+if not overwrite and os.path.exists(config_filename):
+raise FileExistsError(
+'{} already exists. Aborting. Use --overwrite to replace the file.'.format(
+config_filename
+)
+)
 try:
 os.makedirs(os.path.dirname(config_filename), mode=0o700)
@@ -132,8 +135,8 @@ def write_configuration(config_filename, rendered_config, mode=0o600):
 def add_comments_to_configuration_sequence(config, schema, indent=0):
 '''
-If the given config sequence's items are maps, then mine the schema for the description of the
-map's first item, and slap that atop the sequence. Indent the comment the given number of
+If the given config sequence's items are objects, then mine the schema for the description of the
+object's first item, and slap that atop the sequence. Indent the comment the given number of
 characters.
 Doing this for sequences of maps results in nice comments that look like:
@@ -142,16 +145,16 @@ def add_comments_to_configuration_sequence(config, schema, indent=0):
 things:
 # First key description. Added by this function.
 - key: foo
-# Second key description. Added by add_comments_to_configuration_map().
+# Second key description. Added by add_comments_to_configuration_object().
 other: bar
 ```
 '''
-if 'map' not in schema['seq'][0]:
+if schema['items'].get('type') != 'object':
 return
 for field_name in config[0].keys():
-field_schema = schema['seq'][0]['map'].get(field_name, {})
-description = field_schema.get('desc')
+field_schema = schema['items']['properties'].get(field_name, {})
+description = field_schema.get('description')
 # No description to use? Skip it.
 if not field_schema or not description:
@@ -160,7 +163,7 @@ def add_comments_to_configuration_sequence(config, schema, indent=0):
 config[0].yaml_set_start_comment(description, indent=indent)
 # We only want the first key's description here, as the rest of the keys get commented by
-# add_comments_to_configuration_map().
+# add_comments_to_configuration_object().
 return
@@ -169,7 +172,7 @@ REQUIRED_KEYS = {'source_directories', 'repositories', 'keep_daily'}
 COMMENTED_OUT_SENTINEL = 'COMMENT_OUT'
-def add_comments_to_configuration_map(config, schema, indent=0, skip_first=False):
+def add_comments_to_configuration_object(config, schema, indent=0, skip_first=False):
 '''
 Using descriptions from a schema as a source, add those descriptions as comments to the given
 config mapping, before each field. Indent the comment the given number of characters.
@@ -178,8 +181,8 @@ def add_comments_to_configuration_object(config, schema, indent=0, skip_first=False
 if skip_first and index == 0:
 continue
-field_schema = schema['map'].get(field_name, {})
-description = field_schema.get('desc', '').strip()
+field_schema = schema['properties'].get(field_name, {})
+description = field_schema.get('description', '').strip()
 # If this is an optional key, add an indicator to the comment flagging it to be commented
 # out from the sample configuration. This sentinel is consumed by downstream processing that
@@ -265,18 +268,22 @@ def merge_source_configuration_into_destination(destination_config, source_confi
 return destination_config
-def generate_sample_configuration(source_filename, destination_filename, schema_filename):
+def generate_sample_configuration(
+source_filename, destination_filename, schema_filename, overwrite=False
+):
 '''
 Given an optional source configuration filename, and a required destination configuration
-filename, and the path to a schema filename in pykwalify YAML schema format, write out a
-sample configuration file based on that schema. If a source filename is provided, merge the
-parsed contents of that configuration into the generated configuration.
+filename, the path to a schema filename in a YAML rendition of the JSON Schema format, and
+whether to overwrite a destination file, write out a sample configuration file based on that
+schema. If a source filename is provided, merge the parsed contents of that configuration into
+the generated configuration.
 '''
 schema = yaml.round_trip_load(open(schema_filename))
 source_config = None
 if source_filename:
 source_config = load.load_configuration(source_filename)
+normalize.normalize(source_config)
 destination_config = merge_source_configuration_into_destination(
 _schema_to_sample_configuration(schema), source_config
@@ -285,4 +292,5 @@ def generate_sample_configuration(source_filename, destination_filename, schema_
 write_configuration(
 destination_filename,
 _comment_out_optional_configuration(render_configuration(destination_config)),
+overwrite=overwrite,
 )

borgmatic/config/load.py

@@ -6,6 +6,19 @@ import ruamel.yaml
 logger = logging.getLogger(__name__)
+class Yaml_with_loader_stream(ruamel.yaml.YAML):
+'''
+A derived class of ruamel.yaml.YAML that simply tacks the loaded stream (file object) onto the
+loader class so that it's available anywhere that's passed a loader (in this case,
+include_configuration() below).
+'''
+def get_constructor_parser(self, stream):
+constructor, parser = super(Yaml_with_loader_stream, self).get_constructor_parser(stream)
+constructor.loader.stream = stream
+return constructor, parser
 def load_configuration(filename):
 '''
 Load the given configuration file and return its contents as a data structure of nested dicts
@@ -14,7 +27,7 @@ def load_configuration(filename):
 Raise ruamel.yaml.error.YAMLError if something goes wrong parsing the YAML, or RecursionError
 if there are too many recursive includes.
 '''
-yaml = ruamel.yaml.YAML(typ='safe')
+yaml = Yaml_with_loader_stream(typ='safe')
 yaml.Constructor = Include_constructor
 return yaml.load(open(filename))
@@ -22,10 +35,146 @@ def load_configuration(filename):
 def include_configuration(loader, filename_node):
 '''
-Load the given YAML filename (ignoring the given loader so we can use our own), and return its
-contents as a data structure of nested dicts and lists.
+Load the given YAML filename (ignoring the given loader so we can use our own) and return its
+contents as a data structure of nested dicts and lists. If the filename is relative, probe for
+it within 1. the current working directory and 2. the directory containing the YAML file doing
+the including.
+Raise FileNotFoundError if an included file was not found.
 '''
-return load_configuration(os.path.expanduser(filename_node.value))
+include_directories = [os.getcwd(), os.path.abspath(os.path.dirname(loader.stream.name))]
+include_filename = os.path.expanduser(filename_node.value)
+if not os.path.isabs(include_filename):
+candidate_filenames = [
+os.path.join(directory, include_filename) for directory in include_directories
+]
+for candidate_filename in candidate_filenames:
+if os.path.exists(candidate_filename):
+include_filename = candidate_filename
+break
+else:
+raise FileNotFoundError(
+f'Could not find include {filename_node.value} at {" or ".join(candidate_filenames)}'
+)
+return load_configuration(include_filename)
DELETED_NODE = object()
def deep_merge_nodes(nodes):
'''
Given a nested borgmatic configuration data structure as a list of tuples in the form of:
(
ruamel.yaml.nodes.ScalarNode as a key,
ruamel.yaml.nodes.MappingNode or other Node as a value,
),
... deep merge any node values corresponding to duplicate keys and return the result. If
there are colliding keys with non-MappingNode values (e.g., integers or strings), the last
of the values wins.
For instance, given node values of:
[
(
ScalarNode(tag='tag:yaml.org,2002:str', value='retention'),
MappingNode(tag='tag:yaml.org,2002:map', value=[
(
ScalarNode(tag='tag:yaml.org,2002:str', value='keep_hourly'),
ScalarNode(tag='tag:yaml.org,2002:int', value='24')
),
(
ScalarNode(tag='tag:yaml.org,2002:str', value='keep_daily'),
ScalarNode(tag='tag:yaml.org,2002:int', value='7')
),
]),
),
(
ScalarNode(tag='tag:yaml.org,2002:str', value='retention'),
MappingNode(tag='tag:yaml.org,2002:map', value=[
(
ScalarNode(tag='tag:yaml.org,2002:str', value='keep_daily'),
ScalarNode(tag='tag:yaml.org,2002:int', value='5')
),
]),
),
]
... the returned result would be:
[
(
ScalarNode(tag='tag:yaml.org,2002:str', value='retention'),
MappingNode(tag='tag:yaml.org,2002:map', value=[
(
ScalarNode(tag='tag:yaml.org,2002:str', value='keep_hourly'),
ScalarNode(tag='tag:yaml.org,2002:int', value='24')
),
(
ScalarNode(tag='tag:yaml.org,2002:str', value='keep_daily'),
ScalarNode(tag='tag:yaml.org,2002:int', value='5')
),
]),
),
]
The purpose of deep merging like this is to support, for instance, merging one borgmatic
configuration file into another for reuse, such that a configuration section ("retention",
etc.) does not completely replace the corresponding section in a merged file.
'''
# Map from original node key/value to the replacement merged node. DELETED_NODE as a replacement
# node indicates deletion.
replaced_nodes = {}
# To find nodes that require merging, compare each node with each other node.
for a_key, a_value in nodes:
for b_key, b_value in nodes:
# If we've already considered one of the nodes for merging, skip it.
if (a_key, a_value) in replaced_nodes or (b_key, b_value) in replaced_nodes:
continue
# If the keys match and the values are different, we need to merge these two A and B nodes.
if a_key.tag == b_key.tag and a_key.value == b_key.value and a_value != b_value:
# Since we're merging into the B node, consider the A node a duplicate and remove it.
replaced_nodes[(a_key, a_value)] = DELETED_NODE
# If we're dealing with MappingNodes, recurse and merge its values as well.
if isinstance(b_value, ruamel.yaml.nodes.MappingNode):
replaced_nodes[(b_key, b_value)] = (
b_key,
ruamel.yaml.nodes.MappingNode(
tag=b_value.tag,
value=deep_merge_nodes(a_value.value + b_value.value),
start_mark=b_value.start_mark,
end_mark=b_value.end_mark,
flow_style=b_value.flow_style,
comment=b_value.comment,
anchor=b_value.anchor,
),
)
# If we're dealing with SequenceNodes, merge by appending one sequence to the other.
elif isinstance(b_value, ruamel.yaml.nodes.SequenceNode):
replaced_nodes[(b_key, b_value)] = (
b_key,
ruamel.yaml.nodes.SequenceNode(
tag=b_value.tag,
value=a_value.value + b_value.value,
start_mark=b_value.start_mark,
end_mark=b_value.end_mark,
flow_style=b_value.flow_style,
comment=b_value.comment,
anchor=b_value.anchor,
),
)
return [
replaced_nodes.get(node, node) for node in nodes if replaced_nodes.get(node) != DELETED_NODE
]
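A runnable sketch of deep_merge_nodes() on the docstring's "retention" example, built directly from ruamel.yaml node objects (the helper names here are illustrative):

```python
import ruamel.yaml
from borgmatic.config import load

def scalar(tag, value):
    return ruamel.yaml.nodes.ScalarNode(tag='tag:yaml.org,2002:' + tag, value=value)

def mapping(*pairs):
    return ruamel.yaml.nodes.MappingNode(tag='tag:yaml.org,2002:map', value=list(pairs))

nodes = [
    (scalar('str', 'retention'), mapping((scalar('str', 'keep_hourly'), scalar('int', '24')))),
    (scalar('str', 'retention'), mapping((scalar('str', 'keep_daily'), scalar('int', '5')))),
]

# One 'retention' key remains, containing both keep_hourly and keep_daily.
merged = load.deep_merge_nodes(nodes)
```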
 class Include_constructor(ruamel.yaml.SafeConstructor):
@@ -40,14 +189,19 @@ class Include_constructor(ruamel.yaml.SafeConstructor):
 def flatten_mapping(self, node):
 '''
-Support the special case of shallow merging included configuration into an existing mapping
+Support the special case of deep merging included configuration into an existing mapping
 using the YAML '<<' merge key. Example syntax:
 ```
 retention:
 keep_daily: 1
 <<: !include common.yaml
 ```
+These includes are deep merged into the current configuration file. For instance, in this
+example, any "retention" options in common.yaml will get merged into the "retention" section
+in the example configuration file.
 '''
 representer = ruamel.yaml.representer.SafeRepresenter()
@@ -57,3 +211,5 @@ class Include_constructor(ruamel.yaml.SafeConstructor):
 node.value[index] = (key_node, included_value)
 super(Include_constructor, self).flatten_mapping(node)
+node.value = deep_merge_nodes(node.value)

borgmatic/config/normalize.py

@@ -3,8 +3,29 @@ def normalize(config):
 Given a configuration dict, apply particular hard-coded rules to normalize its contents to
 adhere to the configuration schema.
 '''
-# Upgrade exclude_if_present from a string to a list.
 exclude_if_present = config.get('location', {}).get('exclude_if_present')
+# "Upgrade" exclude_if_present from a string to a list.
 if isinstance(exclude_if_present, str):
 config['location']['exclude_if_present'] = [exclude_if_present]
+# Upgrade various monitoring hooks from a string to a dict.
+healthchecks = config.get('hooks', {}).get('healthchecks')
+if isinstance(healthchecks, str):
+config['hooks']['healthchecks'] = {'ping_url': healthchecks}
+cronitor = config.get('hooks', {}).get('cronitor')
+if isinstance(cronitor, str):
+config['hooks']['cronitor'] = {'ping_url': cronitor}
+pagerduty = config.get('hooks', {}).get('pagerduty')
+if isinstance(pagerduty, str):
+config['hooks']['pagerduty'] = {'integration_key': pagerduty}
+cronhub = config.get('hooks', {}).get('cronhub')
+if isinstance(cronhub, str):
+config['hooks']['cronhub'] = {'ping_url': cronhub}
+# Upgrade consistency checks from a list of strings to a list of dicts.
+checks = config.get('consistency', {}).get('checks')
+if isinstance(checks, list) and len(checks) and isinstance(checks[0], str):
+config['consistency']['checks'] = [{'name': check_type} for check_type in checks]
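A quick sketch of these normalization rules applied to a legacy-style configuration (the ping URL is illustrative):

```python
from borgmatic.config import normalize

config = {
    'hooks': {'healthchecks': 'https://hc-ping.com/your-uuid'},
    'consistency': {'checks': ['repository', 'archives']},
}
normalize.normalize(config)

print(config['hooks']['healthchecks'])  # {'ping_url': 'https://hc-ping.com/your-uuid'}
print(config['consistency']['checks'])  # [{'name': 'repository'}, {'name': 'archives'}]
```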

borgmatic/config/override.py

@@ -26,6 +26,8 @@ def convert_value_type(value):
 '''
 Given a string value, determine its logical type (string, boolean, integer, etc.), and return it
 converted to that type.
+Raise ruamel.yaml.error.YAMLError if there's a parse issue with the YAML.
 '''
 return ruamel.yaml.YAML(typ='safe').load(io.StringIO(value))
@@ -50,14 +52,20 @@ def parse_overrides(raw_overrides):
 if not raw_overrides:
 return ()
-try:
-return tuple(
-(tuple(raw_keys.split('.')), convert_value_type(value))
-for raw_override in raw_overrides
-for raw_keys, value in (raw_override.split('=', 1),)
-)
-except ValueError:
-raise ValueError('Invalid override. Make sure you use the form: SECTION.OPTION=VALUE')
+parsed_overrides = []
+for raw_override in raw_overrides:
+try:
+raw_keys, value = raw_override.split('=', 1)
+parsed_overrides.append((tuple(raw_keys.split('.')), convert_value_type(value),))
+except ValueError:
+raise ValueError(
+f"Invalid override '{raw_override}'. Make sure you use the form: SECTION.OPTION=VALUE"
+)
+except ruamel.yaml.error.YAMLError as error:
+raise ValueError(f"Invalid override '{raw_override}': {error.problem}")
+return tuple(parsed_overrides)
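Since convert_value_type() YAML-loads the right-hand side, override values pick up their natural types. A small sketch:

```python
from borgmatic.config import override

print(override.parse_overrides(['retention.keep_daily=5']))
# ((('retention', 'keep_daily'), 5),)  -- '5' became an integer via YAML parsing
```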
 def apply_overrides(config, raw_overrides):

File diff suppressed because it is too large.

borgmatic/config/validate.py

@@ -1,12 +1,10 @@
-import logging
 import os
+import jsonschema
 import pkg_resources
-import pykwalify.core
-import pykwalify.errors
 import ruamel.yaml
-from borgmatic.config import load, normalize, override
+from borgmatic.config import environment, load, normalize, override
 def schema_filename():
@@ -17,15 +15,40 @@ def schema_filename():
 return pkg_resources.resource_filename('borgmatic', 'config/schema.yaml')
+def format_json_error_path_element(path_element):
+'''
+Given a path element into a JSON data structure, format it for display as a string.
+'''
+if isinstance(path_element, int):
+return str('[{}]'.format(path_element))
+return str('.{}'.format(path_element))
+def format_json_error(error):
+'''
+Given an instance of jsonschema.exceptions.ValidationError, format it for display as a string.
+'''
+if not error.path:
+return 'At the top level: {}'.format(error.message)
+formatted_path = ''.join(format_json_error_path_element(element) for element in error.path)
+return "At '{}': {}".format(formatted_path.lstrip('.'), error.message)
 class Validation_error(ValueError):
 '''
-A collection of error message strings generated when attempting to validate a particular
-configurartion file.
+A collection of error messages generated when attempting to validate a particular
+configuration file.
 '''
-def __init__(self, config_filename, error_messages):
+def __init__(self, config_filename, errors):
+'''
+Given a configuration filename path and a sequence of string error messages, create a
+Validation_error.
+'''
 self.config_filename = config_filename
-self.error_messages = error_messages
+self.errors = errors
 def __str__(self):
 '''
@@ -33,7 +56,7 @@ class Validation_error(ValueError):
 '''
 return 'An error occurred while parsing a configuration file at {}:\n'.format(
 self.config_filename
-) + '\n'.join(self.error_messages)
+) + '\n'.join(error for error in self.errors)
 def apply_logical_validation(config_filename, parsed_configuration):
@@ -42,15 +65,6 @@ def apply_logical_validation(config_filename, parsed_configuration):
 below), run through any additional logical validation checks. If there are any such validation
 problems, raise a Validation_error.
 '''
-archive_name_format = parsed_configuration.get('storage', {}).get('archive_name_format')
-prefix = parsed_configuration.get('retention', {}).get('prefix')
-if archive_name_format and not prefix:
-raise Validation_error(
-config_filename,
-('If you provide an archive_name_format, you must also specify a retention prefix.',),
-)
 location_repositories = parsed_configuration.get('location', {}).get('repositories')
 check_repositories = parsed_configuration.get('consistency', {}).get('check_repositories', [])
 for repository in check_repositories:
@@ -65,29 +79,12 @@ def apply_logical_validation(config_filename, parsed_configuration):
 )
-def remove_examples(schema):
-'''
-pykwalify gets angry if the example field is not a string. So rather than bend to its will,
-remove all examples from the given schema before passing the schema to pykwalify.
-'''
-if 'map' in schema:
-for item_name, item_schema in schema['map'].items():
-item_schema.pop('example', None)
-remove_examples(item_schema)
-elif 'seq' in schema:
-for item_schema in schema['seq']:
-item_schema.pop('example', None)
-remove_examples(item_schema)
-return schema
-def parse_configuration(config_filename, schema_filename, overrides=None):
-'''
-Given the path to a config filename in YAML format, the path to a schema filename in pykwalify
-YAML schema format, a sequence of configuration file override strings in the form of
-"section.option=value", return the parsed configuration as a data structure of nested dicts and
-lists corresponding to the schema. Example return value:
+def parse_configuration(config_filename, schema_filename, overrides=None, resolve_env=True):
+'''
+Given the path to a config filename in YAML format, the path to a schema filename in a YAML
+rendition of JSON Schema format, a sequence of configuration file override strings in the form
+of "section.option=value", return the parsed configuration as a data structure of nested dicts
+and lists corresponding to the schema. Example return value:
 {'location': {'source_directories': ['/home', '/etc'], 'repository': 'hostname.borg'},
 'retention': {'keep_daily': 7}, 'consistency': {'checks': ['repository', 'archives']}}
@@ -95,26 +92,31 @@ def parse_configuration(config_filename, schema_filename, overrides=None):
 Raise FileNotFoundError if the file does not exist, PermissionError if the user does not
 have permissions to read the file, or Validation_error if the config does not match the schema.
 '''
-logging.getLogger('pykwalify').setLevel(logging.ERROR)
 try:
 config = load.load_configuration(config_filename)
 schema = load.load_configuration(schema_filename)
 except (ruamel.yaml.error.YAMLError, RecursionError) as error:
 raise Validation_error(config_filename, (str(error),))
-override.apply_overrides(config, overrides)
 normalize.normalize(config)
+override.apply_overrides(config, overrides)
+if resolve_env:
+environment.resolve_env_variables(config)
-validator = pykwalify.core.Core(source_data=config, schema_data=remove_examples(schema))
-parsed_result = validator.validate(raise_exception=False)
-if validator.validation_errors:
-raise Validation_error(config_filename, validator.validation_errors)
+try:
+validator = jsonschema.Draft7Validator(schema)
+except AttributeError: # pragma: no cover
+validator = jsonschema.Draft4Validator(schema)
+validation_errors = tuple(validator.iter_errors(config))
+if validation_errors:
+raise Validation_error(
+config_filename, tuple(format_json_error(error) for error in validation_errors)
)
apply_logical_validation(config_filename, parsed_result) apply_logical_validation(config_filename, config)
return parsed_result return config
def normalize_repository_path(repository): def normalize_repository_path(repository):

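For context on the jsonschema-based validation introduced above, here is a minimal standalone sketch (with a hypothetical two-level schema, not borgmatic's real one) showing the raw material that `format_json_error` turns into readable messages:

```python
import jsonschema

# A hypothetical schema shaped like borgmatic's "section -> option" layout.
schema = {
    'type': 'object',
    'properties': {
        'retention': {
            'type': 'object',
            'properties': {'keep_daily': {'type': 'integer'}},
        }
    },
}

validator = jsonschema.Draft7Validator(schema)

for error in validator.iter_errors({'retention': {'keep_daily': 'seven'}}):
    # error.path is a deque such as deque(['retention', 'keep_daily']), which
    # format_json_error above joins into "At 'retention.keep_daily': ...".
    print(list(error.path), error.message)
```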

@@ -23,7 +23,7 @@ def exit_code_indicates_error(process, exit_code, borg_local_path=None):
     command = process.args.split(' ') if isinstance(process.args, str) else process.args

     if borg_local_path and command[0] == borg_local_path:
-        return bool(exit_code >= BORG_ERROR_EXIT_CODE)
+        return bool(exit_code < 0 or exit_code >= BORG_ERROR_EXIT_CODE)

     return bool(exit_code != 0)
@@ -59,11 +59,12 @@ def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path):
     '''
     # Map from output buffer to sequence of last lines.
     buffer_last_lines = collections.defaultdict(list)
-    output_buffers = [
-        output_buffer_for_process(process, exclude_stdouts)
+    process_for_output_buffer = {
+        output_buffer_for_process(process, exclude_stdouts): process
         for process in processes
         if process.stdout or process.stderr
-    ]
+    }
+    output_buffers = list(process_for_output_buffer.keys())

     # Log output for each process until they all exit.
     while True:
@@ -71,8 +72,23 @@ def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path):
         (ready_buffers, _, _) = select.select(output_buffers, [], [])

         for ready_buffer in ready_buffers:
+            ready_process = process_for_output_buffer.get(ready_buffer)
+
+            # The "ready" process has exited, but it might be a pipe destination with other
+            # processes (pipe sources) waiting to be read from. So as a measure to prevent
+            # hangs, vent all processes when one exits.
+            if ready_process and ready_process.poll() is not None:
+                for other_process in processes:
+                    if (
+                        other_process.poll() is None
+                        and other_process.stdout
+                        and other_process.stdout not in output_buffers
+                    ):
+                        # Add the process's output to output_buffers to ensure it'll get read.
+                        output_buffers.append(other_process.stdout)
+
             line = ready_buffer.readline().rstrip().decode()
-            if not line:
+            if not line or not ready_process:
                 continue

             # Keep the last few lines of output in case the process errors, and we need the output for
@@ -123,9 +139,12 @@ def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path):
         if not output_buffer:
             continue

-        remaining_output = output_buffer.read().rstrip().decode()
+        while True:  # pragma: no cover
+            remaining_output = output_buffer.readline().rstrip().decode()

-        if remaining_output:  # pragma: no cover
+            if not remaining_output:
+                break
+
             logger.log(output_log_level, remaining_output)

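The dictionary introduced in this change lets the logging loop map a ready file descriptor back to its owning subprocess. A stripped-down sketch of the same `select()`-based multiplexing idea, with throwaway `echo` commands standing in for Borg (Unix-only, since `select()` on pipes doesn't work on Windows):

```python
import select
import subprocess

# Hypothetical child processes whose output we want to interleave.
processes = [
    subprocess.Popen(('echo', 'one'), stdout=subprocess.PIPE),
    subprocess.Popen(('echo', 'two'), stdout=subprocess.PIPE),
]
process_for_buffer = {process.stdout: process for process in processes}
buffers = list(process_for_buffer.keys())

while buffers:
    (ready, _, _) = select.select(buffers, [], [])

    for buffer in ready:
        line = buffer.readline()

        if line:
            print(process_for_buffer[buffer].pid, line.rstrip().decode())
        else:
            # End-of-file: this child closed its stdout, so stop polling it.
            buffers.remove(buffer)
```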

@@ -1,5 +1,6 @@
 import logging
 import os
+import re

 from borgmatic import execute
@@ -9,14 +10,19 @@ logger = logging.getLogger(__name__)
 SOFT_FAIL_EXIT_CODE = 75


-def interpolate_context(command, context):
+def interpolate_context(config_filename, hook_description, command, context):
     '''
-    Given a single hook command and a dict of context names/values, interpolate the values by
-    "{name}" into the command and return the result.
+    Given a config filename, a hook description, a single hook command, and a dict of context
+    names/values, interpolate the values by "{name}" into the command and return the result.
     '''
     for name, value in context.items():
         command = command.replace('{%s}' % name, str(value))

+    for unsupported_variable in re.findall(r'{\w+}', command):
+        logger.warning(
+            f"{config_filename}: Variable '{unsupported_variable}' is not supported in {hook_description} hook"
+        )
+
     return command
@@ -26,8 +32,7 @@ def execute_hook(commands, umask, config_filename, description, dry_run, **context):
     a hook description, and whether this is a dry run, run the given commands. Or, don't run them
     if this is a dry run.

-    The context contains optional values interpolated by name into the hook commands. Currently,
-    this only applies to the on_error hook.
+    The context contains optional values interpolated by name into the hook commands.

     Raise ValueError if the umask cannot be parsed.
     Raise subprocesses.CalledProcessError if an error occurs in a hook.
@@ -39,7 +44,9 @@ def execute_hook(commands, umask, config_filename, description, dry_run, **context):
     dry_run_label = ' (dry run; not actually running hooks)' if dry_run else ''

     context['configuration_filename'] = config_filename
-    commands = [interpolate_context(command, context) for command in commands]
+    commands = [
+        interpolate_context(config_filename, description, command, context) for command in commands
+    ]

     if len(commands) == 1:
         logger.info(

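To make the new warning concrete, here's a minimal standalone version of the interpolate-then-warn pattern (not borgmatic's exact function, and the example command is made up):

```python
import logging
import re

logging.basicConfig()
logger = logging.getLogger(__name__)


def interpolate(command, context):
    # Replace each known "{name}" placeholder with its value.
    for name, value in context.items():
        command = command.replace('{%s}' % name, str(value))

    # Anything still shaped like "{word}" wasn't a supported variable.
    for unsupported_variable in re.findall(r'{\w+}', command):
        logger.warning("Variable '%s' is not supported in this hook", unsupported_variable)

    return command


print(interpolate('notify.sh {repository} {oops}', {'repository': '/mnt/backup.borg'}))
# Prints "notify.sh /mnt/backup.borg {oops}" after warning about {oops}.
```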

@@ -22,14 +22,18 @@ def initialize_monitor(
     pass


-def ping_monitor(ping_url, config_filename, state, monitoring_log_level, dry_run):
+def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run):
     '''
-    Ping the given Cronhub URL, modified with the monitor.State. Use the given configuration
+    Ping the configured Cronhub URL, modified with the monitor.State. Use the given configuration
     filename in any log entries. If this is a dry run, then don't actually ping anything.
     '''
     dry_run_label = ' (dry run; not actually pinging)' if dry_run else ''
     formatted_state = '/{}/'.format(MONITOR_STATE_TO_CRONHUB[state])
-    ping_url = ping_url.replace('/start/', formatted_state).replace('/ping/', formatted_state)
+    ping_url = (
+        hook_config['ping_url']
+        .replace('/start/', formatted_state)
+        .replace('/ping/', formatted_state)
+    )

     logger.info(
         '{}: Pinging Cronhub {}{}'.format(config_filename, state.name.lower(), dry_run_label)
@@ -38,7 +42,10 @@ def ping_monitor(ping_url, config_filename, state, monitoring_log_level, dry_run):
     if not dry_run:
         logging.getLogger('urllib3').setLevel(logging.ERROR)
-        requests.get(ping_url)
+        try:
+            requests.get(ping_url)
+        except requests.exceptions.RequestException as error:
+            logger.warning(f'{config_filename}: Cronhub error: {error}')


 def destroy_monitor(


@@ -22,13 +22,13 @@ def initialize_monitor(
     pass


-def ping_monitor(ping_url, config_filename, state, monitoring_log_level, dry_run):
+def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run):
     '''
-    Ping the given Cronitor URL, modified with the monitor.State. Use the given configuration
+    Ping the configured Cronitor URL, modified with the monitor.State. Use the given configuration
     filename in any log entries. If this is a dry run, then don't actually ping anything.
     '''
     dry_run_label = ' (dry run; not actually pinging)' if dry_run else ''
-    ping_url = '{}/{}'.format(ping_url, MONITOR_STATE_TO_CRONITOR[state])
+    ping_url = '{}/{}'.format(hook_config['ping_url'], MONITOR_STATE_TO_CRONITOR[state])

     logger.info(
         '{}: Pinging Cronitor {}{}'.format(config_filename, state.name.lower(), dry_run_label)
@@ -37,7 +37,10 @@ def ping_monitor(ping_url, config_filename, state, monitoring_log_level, dry_run):
     if not dry_run:
         logging.getLogger('urllib3').setLevel(logging.ERROR)
-        requests.get(ping_url)
+        try:
+            requests.get(ping_url)
+        except requests.exceptions.RequestException as error:
+            logger.warning(f'{config_filename}: Cronitor error: {error}')


 def destroy_monitor(

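The Cronhub and Cronitor changes share one pattern: a failed monitoring ping should log a warning instead of crashing the whole backup run. Roughly, the shared shape looks like this (the `timeout` is my addition, not part of the diff):

```python
import logging

import requests

logger = logging.getLogger(__name__)


def ping(ping_url, config_filename, service_name):
    # Quiet urllib3's own logging; we emit our own warning below.
    logging.getLogger('urllib3').setLevel(logging.ERROR)

    try:
        requests.get(ping_url, timeout=10)
    except requests.exceptions.RequestException as error:
        # A monitoring hiccup shouldn't abort the backup itself.
        logger.warning(f'{config_filename}: {service_name} error: {error}')
```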

@@ -1,16 +1,27 @@
 import logging

-from borgmatic.hooks import cronhub, cronitor, healthchecks, mysql, pagerduty, postgresql
+from borgmatic.hooks import (
+    cronhub,
+    cronitor,
+    healthchecks,
+    mongodb,
+    mysql,
+    ntfy,
+    pagerduty,
+    postgresql,
+)

 logger = logging.getLogger(__name__)

 HOOK_NAME_TO_MODULE = {
-    'healthchecks': healthchecks,
-    'cronitor': cronitor,
     'cronhub': cronhub,
+    'cronitor': cronitor,
+    'healthchecks': healthchecks,
+    'mongodb_databases': mongodb,
     'mysql_databases': mysql,
+    'ntfy': ntfy,
     'pagerduty': pagerduty,
     'postgresql_databases': postgresql,
-    'mysql_databases': mysql,
 }

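`HOOK_NAME_TO_MODULE` exists so callers can dispatch on a configured hook name without hard-coding module references. A self-contained sketch of that dispatch style, using a stand-in module rather than the real hooks:

```python
import types

# A stand-in module exposing ping_monitor(), mimicking the real hook modules.
fake_ntfy = types.SimpleNamespace(
    ping_monitor=lambda config, *args: print('pinging ntfy with', config, args)
)
HOOK_NAME_TO_MODULE = {'ntfy': fake_ntfy}


def call_hook(function_name, hooks_config, hook_name, *args):
    # Look up the module by hook name, then the function by name on the module.
    module = HOOK_NAME_TO_MODULE[hook_name]

    return getattr(module, function_name)(hooks_config.get(hook_name, {}), *args)


call_hook('ping_monitor', {'ntfy': {'topic': 'demo'}}, 'ntfy', 'config.yaml')
```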

@@ -2,11 +2,11 @@ import logging
 import os
 import shutil

-from borgmatic.borg.create import DEFAULT_BORGMATIC_SOURCE_DIRECTORY
+from borgmatic.borg.state import DEFAULT_BORGMATIC_SOURCE_DIRECTORY

 logger = logging.getLogger(__name__)

-DATABASE_HOOK_NAMES = ('postgresql_databases', 'mysql_databases')
+DATABASE_HOOK_NAMES = ('postgresql_databases', 'mysql_databases', 'mongodb_databases')


 def make_database_dump_path(borgmatic_source_directory, database_hook_name):


@@ -13,13 +13,14 @@ MONITOR_STATE_TO_HEALTHCHECKS = {
 }

 PAYLOAD_TRUNCATION_INDICATOR = '...\n'
-PAYLOAD_LIMIT_BYTES = 10 * 1024 - len(PAYLOAD_TRUNCATION_INDICATOR)
+DEFAULT_PING_BODY_LIMIT_BYTES = 100000


 class Forgetful_buffering_handler(logging.Handler):
     '''
     A buffering log handler that stores log messages in memory, and throws away messages (oldest
-    first) once a particular capacity in bytes is reached.
+    first) once a particular capacity in bytes is reached. But if the given byte capacity is zero,
+    don't throw away any messages.
     '''

     def __init__(self, byte_capacity, log_level):
@@ -36,6 +37,9 @@ class Forgetful_buffering_handler(logging.Handler):
         self.byte_count += len(message)
         self.buffer.append(message)

+        if not self.byte_capacity:
+            return
+
         while self.byte_count > self.byte_capacity and self.buffer:
             self.byte_count -= len(self.buffer[0])
             self.buffer.pop(0)
@@ -65,31 +69,45 @@ def format_buffered_logs_for_payload():
     return payload


-def initialize_monitor(
-    ping_url_or_uuid, config_filename, monitoring_log_level, dry_run
-):  # pragma: no cover
+def initialize_monitor(hook_config, config_filename, monitoring_log_level, dry_run):
     '''
-    Add a handler to the root logger that stores in memory the most recent logs emitted. That
-    way, we can send them all to Healthchecks upon a finish or failure state.
+    Add a handler to the root logger that stores in memory the most recent logs emitted. That way,
+    we can send them all to Healthchecks upon a finish or failure state. But skip this if the
+    "send_logs" option is false.
     '''
+    if hook_config.get('send_logs') is False:
+        return
+
+    ping_body_limit = max(
+        hook_config.get('ping_body_limit', DEFAULT_PING_BODY_LIMIT_BYTES)
+        - len(PAYLOAD_TRUNCATION_INDICATOR),
+        0,
+    )
+
     logging.getLogger().addHandler(
-        Forgetful_buffering_handler(PAYLOAD_LIMIT_BYTES, monitoring_log_level)
+        Forgetful_buffering_handler(ping_body_limit, monitoring_log_level)
     )


-def ping_monitor(ping_url_or_uuid, config_filename, state, monitoring_log_level, dry_run):
+def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run):
     '''
-    Ping the given Healthchecks URL or UUID, modified with the monitor.State. Use the given
+    Ping the configured Healthchecks URL or UUID, modified with the monitor.State. Use the given
     configuration filename in any log entries, and log to Healthchecks with the giving log level.
     If this is a dry run, then don't actually ping anything.
     '''
     ping_url = (
-        ping_url_or_uuid
-        if ping_url_or_uuid.startswith('http')
-        else 'https://hc-ping.com/{}'.format(ping_url_or_uuid)
+        hook_config['ping_url']
+        if hook_config['ping_url'].startswith('http')
+        else 'https://hc-ping.com/{}'.format(hook_config['ping_url'])
     )
     dry_run_label = ' (dry run; not actually pinging)' if dry_run else ''

+    if 'states' in hook_config and state.name.lower() not in hook_config['states']:
+        logger.info(
+            f'{config_filename}: Skipping Healthchecks {state.name.lower()} ping due to configured states'
+        )
+        return
+
     healthchecks_state = MONITOR_STATE_TO_HEALTHCHECKS.get(state)
     if healthchecks_state:
         ping_url = '{}/{}'.format(ping_url, healthchecks_state)
@@ -106,10 +124,13 @@ def ping_monitor(ping_url_or_uuid, config_filename, state, monitoring_log_level, dry_run):
     if not dry_run:
         logging.getLogger('urllib3').setLevel(logging.ERROR)
-        requests.post(ping_url, data=payload.encode('utf-8'))
+        try:
+            requests.post(ping_url, data=payload.encode('utf-8'))
+        except requests.exceptions.RequestException as error:
+            logger.warning(f'{config_filename}: Healthchecks error: {error}')


-def destroy_monitor(ping_url_or_uuid, config_filename, monitoring_log_level, dry_run):
+def destroy_monitor(hook_config, config_filename, monitoring_log_level, dry_run):
     '''
     Remove the monitor handler that was added to the root logger. This prevents the handler from
     getting reused by other instances of this monitor.

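As a toy illustration of the buffering behavior described in the updated docstring, here's the byte-capped buffer logic on its own (a sketch, not the actual handler):

```python
import logging


class ForgetfulBuffer(logging.Handler):
    '''
    Keep at most byte_capacity bytes of formatted messages, discarding the
    oldest first; a capacity of zero means keep everything.
    '''

    def __init__(self, byte_capacity):
        super().__init__()
        self.byte_capacity = byte_capacity
        self.byte_count = 0
        self.buffer = []

    def emit(self, record):
        message = self.format(record) + '\n'
        self.byte_count += len(message)
        self.buffer.append(message)

        if not self.byte_capacity:
            return

        while self.byte_count > self.byte_capacity and self.buffer:
            self.byte_count -= len(self.buffer[0])
            self.buffer.pop(0)
```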
borgmatic/hooks/mongodb.py (new file)

@@ -0,0 +1,162 @@
import logging

from borgmatic.execute import execute_command, execute_command_with_processes
from borgmatic.hooks import dump

logger = logging.getLogger(__name__)


def make_dump_path(location_config):  # pragma: no cover
    '''
    Make the dump path from the given location configuration and the name of this hook.
    '''
    return dump.make_database_dump_path(
        location_config.get('borgmatic_source_directory'), 'mongodb_databases'
    )


def dump_databases(databases, log_prefix, location_config, dry_run):
    '''
    Dump the given MongoDB databases to a named pipe. The databases are supplied as a sequence of
    dicts, one dict describing each database as per the configuration schema. Use the given log
    prefix in any log entries. Use the given location configuration dict to construct the
    destination path.

    Return a sequence of subprocess.Popen instances for the dump processes ready to spew to a named
    pipe. But if this is a dry run, then don't actually dump anything and return an empty sequence.
    '''
    dry_run_label = ' (dry run; not actually dumping anything)' if dry_run else ''

    logger.info('{}: Dumping MongoDB databases{}'.format(log_prefix, dry_run_label))

    processes = []
    for database in databases:
        name = database['name']
        dump_filename = dump.make_database_dump_filename(
            make_dump_path(location_config), name, database.get('hostname')
        )
        dump_format = database.get('format', 'archive')

        logger.debug(
            '{}: Dumping MongoDB database {} to {}{}'.format(
                log_prefix, name, dump_filename, dry_run_label
            )
        )
        if dry_run:
            continue

        if dump_format == 'directory':
            dump.create_parent_directory_for_dump(dump_filename)
        else:
            dump.create_named_pipe_for_dump(dump_filename)

        command = build_dump_command(database, dump_filename, dump_format)
        processes.append(execute_command(command, shell=True, run_to_completion=False))

    return processes


def build_dump_command(database, dump_filename, dump_format):
    '''
    Return the mongodump command from a single database configuration.
    '''
    all_databases = database['name'] == 'all'
    command = ['mongodump', '--archive']
    if dump_format == 'directory':
        command.append(dump_filename)
    if 'hostname' in database:
        command.extend(('--host', database['hostname']))
    if 'port' in database:
        command.extend(('--port', str(database['port'])))
    if 'username' in database:
        command.extend(('--username', database['username']))
    if 'password' in database:
        command.extend(('--password', database['password']))
    if 'authentication_database' in database:
        command.extend(('--authenticationDatabase', database['authentication_database']))
    if not all_databases:
        command.extend(('--db', database['name']))
    if 'options' in database:
        command.extend(database['options'].split(' '))
    if dump_format != 'directory':
        command.extend(('>', dump_filename))
    return command


def remove_database_dumps(databases, log_prefix, location_config, dry_run):  # pragma: no cover
    '''
    Remove all database dump files for this hook regardless of the given databases. Use the log
    prefix in any log entries. Use the given location configuration dict to construct the
    destination path. If this is a dry run, then don't actually remove anything.
    '''
    dump.remove_database_dumps(make_dump_path(location_config), 'MongoDB', log_prefix, dry_run)


def make_database_dump_pattern(
    databases, log_prefix, location_config, name=None
):  # pragma: no cover
    '''
    Given a sequence of configurations dicts, a prefix to log with, a location configuration dict,
    and a database name to match, return the corresponding glob patterns to match the database dump
    in an archive.
    '''
    return dump.make_database_dump_filename(make_dump_path(location_config), name, hostname='*')


def restore_database_dump(database_config, log_prefix, location_config, dry_run, extract_process):
    '''
    Restore the given MongoDB database from an extract stream. The database is supplied as a
    one-element sequence containing a dict describing the database, as per the configuration schema.
    Use the given log prefix in any log entries. If this is a dry run, then don't actually restore
    anything. Trigger the given active extract process (an instance of subprocess.Popen) to produce
    output to consume.

    If the extract process is None, then restore the dump from the filesystem rather than from an
    extract stream.
    '''
    dry_run_label = ' (dry run; not actually restoring anything)' if dry_run else ''

    if len(database_config) != 1:
        raise ValueError('The database configuration value is invalid')

    database = database_config[0]
    dump_filename = dump.make_database_dump_filename(
        make_dump_path(location_config), database['name'], database.get('hostname')
    )
    restore_command = build_restore_command(extract_process, database, dump_filename)

    logger.debug(
        '{}: Restoring MongoDB database {}{}'.format(log_prefix, database['name'], dry_run_label)
    )
    if dry_run:
        return

    execute_command_with_processes(
        restore_command,
        [extract_process] if extract_process else [],
        output_log_level=logging.DEBUG,
        input_file=extract_process.stdout if extract_process else None,
        borg_local_path=location_config.get('local_path', 'borg'),
    )


def build_restore_command(extract_process, database, dump_filename):
    '''
    Return the mongorestore command from a single database configuration.
    '''
    command = ['mongorestore', '--archive']
    if not extract_process:
        command.append(dump_filename)
    if database['name'] != 'all':
        command.extend(('--drop', '--db', database['name']))
    if 'hostname' in database:
        command.extend(('--host', database['hostname']))
    if 'port' in database:
        command.extend(('--port', str(database['port'])))
    if 'username' in database:
        command.extend(('--username', database['username']))
    if 'password' in database:
        command.extend(('--password', database['password']))
    if 'authentication_database' in database:
        command.extend(('--authenticationDatabase', database['authentication_database']))
    return command

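For a feel of what `build_dump_command()` produces, here's a quick usage example with a hypothetical database configuration:

```python
# Assuming the new module above is installed as part of borgmatic:
from borgmatic.hooks.mongodb import build_dump_command

# A hypothetical entry from mongodb_databases: in borgmatic's configuration.
database = {'name': 'messages', 'hostname': 'db.example.org', 'port': 27018}

print(' '.join(build_dump_command(database, '/root/.borgmatic/dump', 'archive')))
# mongodump --archive --host db.example.org --port 27018 --db messages > /root/.borgmatic/dump
```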

@@ -1,6 +1,6 @@
 from enum import Enum

-MONITOR_HOOK_NAMES = ('healthchecks', 'cronitor', 'cronhub', 'pagerduty')
+MONITOR_HOOK_NAMES = ('healthchecks', 'cronitor', 'cronhub', 'pagerduty', 'ntfy')


 class State(Enum):


@@ -31,6 +31,7 @@ def database_names_to_dump(database, extra_environment, log_prefix, dry_run_label):
     show_command = (
         ('mysql',)
+        + (tuple(database['list_options'].split(' ')) if 'list_options' in database else ())
         + (('--host', database['hostname']) if 'hostname' in database else ())
         + (('--port', str(database['port'])) if 'port' in database else ())
         + (('--protocol', 'tcp') if 'hostname' in database or 'port' in database else ())
@@ -81,12 +82,12 @@ def dump_databases(databases, log_prefix, location_config, dry_run):
         dump_command = (
             ('mysqldump',)
+            + (tuple(database['options'].split(' ')) if 'options' in database else ())
             + ('--add-drop-database',)
             + (('--host', database['hostname']) if 'hostname' in database else ())
             + (('--port', str(database['port'])) if 'port' in database else ())
             + (('--protocol', 'tcp') if 'hostname' in database or 'port' in database else ())
             + (('--user', database['username']) if 'username' in database else ())
-            + (tuple(database['options'].split(' ')) if 'options' in database else ())
             + ('--databases',)
             + dump_database_names
             # Use shell redirection rather than execute_command(output_file=open(...)) to prevent
@@ -151,7 +152,7 @@ def restore_database_dump(database_config, log_prefix, location_config, dry_run, extract_process):
     database = database_config[0]
     restore_command = (
-        ('mysql', '--batch', '--verbose')
+        ('mysql', '--batch')
         + (('--host', database['hostname']) if 'hostname' in database else ())
         + (('--port', str(database['port'])) if 'port' in database else ())
         + (('--protocol', 'tcp') if 'hostname' in database or 'port' in database else ())

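The tuple-concatenation style above assembles a command line from optional pieces. A small standalone illustration with a made-up database dict:

```python
# Hypothetical config; 'options' holds user-supplied mysqldump flags, which
# this change moves directly after the command name.
database = {'name': 'posts', 'hostname': 'db1', 'options': '--skip-comments'}

dump_command = (
    ('mysqldump',)
    + (tuple(database['options'].split(' ')) if 'options' in database else ())
    + ('--add-drop-database',)
    + (('--host', database['hostname']) if 'hostname' in database else ())
)

print(dump_command)
# ('mysqldump', '--skip-comments', '--add-drop-database', '--host', 'db1')
```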
borgmatic/hooks/ntfy.py (new file)

@@ -0,0 +1,73 @@
import logging

import requests

from borgmatic.hooks import monitor

logger = logging.getLogger(__name__)

MONITOR_STATE_TO_NTFY = {
    monitor.State.START: None,
    monitor.State.FINISH: None,
    monitor.State.FAIL: None,
}


def initialize_monitor(
    ping_url, config_filename, monitoring_log_level, dry_run
):  # pragma: no cover
    '''
    No initialization is necessary for this monitor.
    '''
    pass


def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run):
    '''
    Ping the configured Ntfy topic. Use the given configuration filename in any log entries.
    If this is a dry run, then don't actually ping anything.
    '''
    run_states = hook_config.get('states', ['fail'])

    if state.name.lower() in run_states:
        dry_run_label = ' (dry run; not actually pinging)' if dry_run else ''

        state_config = hook_config.get(
            state.name.lower(),
            {
                'title': f'A Borgmatic {state.name} event happened',
                'message': f'A Borgmatic {state.name} event happened',
                'priority': 'default',
                'tags': 'borgmatic',
            },
        )

        base_url = hook_config.get('server', 'https://ntfy.sh')
        topic = hook_config.get('topic')

        logger.info(f'{config_filename}: Pinging ntfy topic {topic}{dry_run_label}')
        logger.debug(f'{config_filename}: Using Ntfy ping URL {base_url}/{topic}')

        headers = {
            'X-Title': state_config.get('title'),
            'X-Message': state_config.get('message'),
            'X-Priority': state_config.get('priority'),
            'X-Tags': state_config.get('tags'),
        }

        if not dry_run:
            logging.getLogger('urllib3').setLevel(logging.ERROR)
            try:
                requests.post(f'{base_url}/{topic}', headers=headers)
            except requests.exceptions.RequestException as error:
                logger.warning(f'{config_filename}: Ntfy error: {error}')


def destroy_monitor(
    ping_url_or_uuid, config_filename, monitoring_log_level, dry_run
):  # pragma: no cover
    '''
    No destruction is necessary for this monitor.
    '''
    pass

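ntfy's publish API is a plain HTTP POST with metadata carried in headers, which is all the hook above does. A standalone equivalent, with a hypothetical topic name (anyone subscribed to that topic would receive the notification):

```python
import requests

response = requests.post(
    'https://ntfy.sh/my-borgmatic-demo-topic',  # made-up topic for illustration
    headers={
        'X-Title': 'A Borgmatic FAIL event happened',
        'X-Message': 'A Borgmatic FAIL event happened',
        'X-Priority': 'default',
        'X-Tags': 'borgmatic',
    },
)
response.raise_for_status()
```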

@@ -21,10 +21,10 @@ def initialize_monitor(
     pass


-def ping_monitor(integration_key, config_filename, state, monitoring_log_level, dry_run):
+def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run):
     '''
-    If this is an error state, create a PagerDuty event with the given integration key. Use the
-    given configuration filename in any log entries. If this is a dry run, then don't actually
+    If this is an error state, create a PagerDuty event with the configured integration key. Use
+    the given configuration filename in any log entries. If this is a dry run, then don't actually
     create an event.
     '''
     if state != monitor.State.FAIL:
@@ -47,7 +47,7 @@ def ping_monitor(integration_key, config_filename, state, monitoring_log_level, dry_run):
     )
     payload = json.dumps(
         {
-            'routing_key': integration_key,
+            'routing_key': hook_config['integration_key'],
             'event_action': 'trigger',
             'payload': {
                 'summary': 'backup failed on {}'.format(hostname),
@@ -68,7 +68,10 @@ def ping_monitor(integration_key, config_filename, state, monitoring_log_level, dry_run):
     logger.debug('{}: Using PagerDuty payload: {}'.format(config_filename, payload))

     logging.getLogger('urllib3').setLevel(logging.ERROR)
-    requests.post(EVENTS_API_URL, data=payload.encode('utf-8'))
+    try:
+        requests.post(EVENTS_API_URL, data=payload.encode('utf-8'))
+    except requests.exceptions.RequestException as error:
+        logger.warning(f'{config_filename}: PagerDuty error: {error}')


 def destroy_monitor(


@@ -1,4 +1,5 @@
 import logging
+import logging.handlers
 import os
 import sys
@@ -151,6 +152,8 @@ def configure_logging(
         syslog_path = '/dev/log'
     elif os.path.exists('/var/run/syslog'):
         syslog_path = '/var/run/syslog'
+    elif os.path.exists('/var/run/log'):
+        syslog_path = '/var/run/log'

     if syslog_path and not interactive_console():
         syslog_handler = logging.handlers.SysLogHandler(address=syslog_path)


@@ -1,23 +1,34 @@
+import logging
 import os
 import signal
+import sys

+logger = logging.getLogger(__name__)

-def _handle_signal(signal_number, frame):  # pragma: no cover
+EXIT_CODE_FROM_SIGNAL = 128
+
+
+def handle_signal(signal_number, frame):
     '''
     Send the signal to all processes in borgmatic's process group, which includes child processes.
     '''
     # Prevent infinite signal handler recursion. If the parent frame is this very same handler
     # function, we know we're recursing.
-    if frame.f_back.f_code.co_name == _handle_signal.__name__:
+    if frame.f_back.f_code.co_name == handle_signal.__name__:
         return

     os.killpg(os.getpgrp(), signal_number)

+    if signal_number == signal.SIGTERM:
+        logger.critical('Exiting due to TERM signal')
+        sys.exit(EXIT_CODE_FROM_SIGNAL + signal.SIGTERM)
+

-def configure_signals():  # pragma: no cover
+def configure_signals():
     '''
     Configure borgmatic's signal handlers to pass relevant signals through to any child processes
     like Borg. Note that SIGINT gets passed through even without these changes.
     '''
     for signal_number in (signal.SIGHUP, signal.SIGTERM, signal.SIGUSR1, signal.SIGUSR2):
-        signal.signal(signal_number, _handle_signal)
+        signal.signal(signal_number, handle_signal)

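The new `EXIT_CODE_FROM_SIGNAL` constant follows the shell convention of reporting 128 plus the signal number when a process dies from a signal:

```python
import signal

EXIT_CODE_FROM_SIGNAL = 128

# SIGTERM is 15 on Linux, so a TERM-triggered exit reports status 143, the
# same status a shell would report for a process killed by SIGTERM.
print(EXIT_CODE_FROM_SIGNAL + signal.SIGTERM)  # 143
```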

@@ -1,13 +1,14 @@
-FROM python:3.8.1-alpine3.11 as borgmatic
+FROM alpine:3.16.0 as borgmatic

 COPY . /app
+RUN apk add --no-cache py3-pip py3-ruamel.yaml py3-ruamel.yaml.clib
 RUN pip install --no-cache /app && generate-borgmatic-config && chmod +r /etc/borgmatic/config.yaml
 RUN borgmatic --help > /command-line.txt \
-    && for action in init prune create check extract mount umount restore list info; do \
+    && for action in init prune compact create check extract export-tar mount umount restore list info borg; do \
         echo -e "\n--------------------------------------------------------------------------------\n" >> /command-line.txt \
         && borgmatic "$action" --help >> /command-line.txt; done

-FROM node:13.7.0-alpine as html
+FROM node:18.4.0-alpine as html

 ARG ENVIRONMENT=production
@@ -26,7 +27,7 @@ COPY . /source
 RUN NODE_ENV=${ENVIRONMENT} npx eleventy --input=/source/docs --output=/output/docs \
     && mv /output/docs/index.html /output/index.html

-FROM nginx:1.16.1-alpine
+FROM nginx:1.22.0-alpine

 COPY --from=html /output /usr/share/nginx/html
 COPY --from=borgmatic /etc/borgmatic/config.yaml /usr/share/nginx/html/docs/reference/config.yaml


@@ -1,18 +0,0 @@
#suggestion-form textarea {
  font-family: sans-serif;
  width: 100%;
}

#suggestion-form label {
  font-weight: bold;
}

#suggestion-form input[type=email] {
  font-size: 16px;
  width: 100%;
}

#suggestion-form .form-error {
  color: red;
}


@@ -1,33 +0,0 @@
<h2>Improve this documentation</h2>

<p>Have an idea on how to make this documentation even better? Send your
feedback below! But if you need help with borgmatic, or have an idea for a
borgmatic feature, please use our <a href="https://torsion.org/borgmatic/#issues">issue
tracker</a> instead.</p>

<form id="suggestion-form">
    <div><label for="suggestion">Documentation suggestion</label></div>
    <textarea id="suggestion" rows="8" cols="60" name="suggestion"></textarea>
    <div data-sk-error="suggestion" class="form-error"></div>
    <input id="_page" type="hidden" name="_page">
    <input id="_subject" type="hidden" name="_subject" value="borgmatic documentation suggestion">
    <br />
    <label for="email">Email address</label>
    <div><input id="email" type="email" name="email" placeholder="Only required if you want a response!"></div>
    <div data-sk-error="email" class="form-error"></div>
    <br />
    <div><button type="submit">Send</button></div>
    <br />
</form>

<script>
document.getElementById('_page').value = window.location.href;

window.sk=window.sk||function(){(sk.q=sk.q||[]).push(arguments)};
sk('form', 'init', {
    id: '1d536680ab96',
    element: '#suggestion-form'
});
</script>
<script defer src="https://js.statickit.com/statickit.js"></script>


@@ -0,0 +1,5 @@
<h2>Improve this documentation</h2>

<p>Have an idea on how to make this documentation even better? Use our <a
href="https://projects.torsion.org/borgmatic-collective/borgmatic/issues">issue tracker</a> to send your
feedback!</p>


@@ -258,6 +258,7 @@ footer.elv-layout {
 /* Header */
 .elv-header {
     position: relative;
+    text-align: center;
 }

 .elv-header-default {
     display: flex;


@@ -11,7 +11,6 @@
 {% include 'components/minilink.css' %}
 {% include 'components/toc.css' %}
 {% include 'components/info-blocks.css' %}
-{% include 'components/suggestion-form.css' %}
 {% include 'prism-theme.css' %}
 {% include 'asciinema.css' %}
 {% endset %}


@@ -28,5 +28,5 @@ headerClass: elv-header-default

 {{ content | safe }}

-{% include 'components/suggestion-form.html' %}
+{% include 'components/suggestion-link.html' %}

 </main>


@@ -1,17 +1,18 @@
 ---
 title: How to add preparation and cleanup steps to backups
 eleventyNavigation:
-  key: Add preparation and cleanup steps
+  key: 🧹 Add preparation and cleanup steps
   parent: How-to guides
-  order: 8
+  order: 9
 ---
 ## Preparation and cleanup hooks

-If you find yourself performing prepraration tasks before your backup runs, or
+If you find yourself performing preparation tasks before your backup runs, or
 cleanup work afterwards, borgmatic hooks may be of interest. Hooks are shell
-commands that borgmatic executes for you at various points, and they're
-configured in the `hooks` section of your configuration file. But if you're
-looking to backup a database, it's probably easier to use the [database backup
+commands that borgmatic executes for you at various points as it runs, and
+they're configured in the `hooks` section of your configuration file. But if
+you're looking to backup a database, it's probably easier to use the [database
+backup
 feature](https://torsion.org/borgmatic/docs/how-to/backup-your-databases/)
 instead.
@@ -27,15 +28,45 @@ hooks:
     - umount /some/filesystem
 ```

-The `before_backup` and `after_backup` hooks each run once per configuration
-file. `before_backup` hooks run prior to backups of all repositories in a
-configuration file, right before the `create` action. `after_backup` hooks run
-afterwards, but not if an error occurs in a previous hook or in the backups
-themselves.
+<span class="minilink minilink-addedin">New in version 1.6.0</span> The
+`before_backup` and `after_backup` hooks each run once per repository in a
+configuration file. `before_backup` hooks run right before the `create`
+action for a particular repository, and `after_backup` hooks run afterwards,
+but not if an error occurs in a previous hook or in the backups themselves.
+(Prior to borgmatic 1.6.0, these hooks instead ran once per configuration file
+rather than once per repository.)

-There are additional hooks for the `prune` and `check` actions as well.
-`before_prune` and `after_prune` run if there are any `prune` actions, while
-`before_check` and `after_check` run if there are any `check` actions.
+There are additional hooks that run before/after other actions as well. For
+instance, `before_prune` runs before a `prune` action for a repository, while
+`after_prune` runs after it.
+
+## Variable interpolation
+
+The before and after action hooks support interpolating particular runtime
+variables into the hook command. Here's an example that assumes you provide a
+separate shell script:
+
+```yaml
+hooks:
+    after_prune:
+        - record-prune.sh "{configuration_filename}" "{repository}"
+```
+
+In this example, when the hook is triggered, borgmatic interpolates runtime
+values into the hook command: the borgmatic configuration filename and the
+paths of the current Borg repository. Here's the full set of supported
+variables you can use here:
+
+ * `configuration_filename`: borgmatic configuration filename in which the
+   hook was defined
+ * `repository`: path of the current repository as configured in the current
+   borgmatic configuration file
+
+Note that you can also interpolate in [arbitrary environment
+variables](https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/).
+
+## Global hooks

 You can also use `before_everything` and `after_everything` hooks to perform
 global setup or cleanup:
@@ -58,6 +89,8 @@ but only if there is a `create` action. It runs even if an error occurs during
 a backup or a backup hook, but not if an error occurs during a
 `before_everything` hook.

+## Error hooks
+
 borgmatic also runs `on_error` hooks if an error occurs, either when creating
 a backup or running a backup hook. See the [monitoring and alerting
 documentation](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/)


@@ -1,9 +1,9 @@
 ---
 title: How to backup to a removable drive or an intermittent server
 eleventyNavigation:
-  key: Backup to a removable drive or server
+  key: 💾 Backup to a removable drive/server
   parent: How-to guides
-  order: 9
+  order: 10
 ---
 ## Occasional backups
@@ -16,9 +16,14 @@ But if you run borgmatic and your hard drive isn't plugged in, or your buddy's
 server is offline, then you'll get an annoying error message and the overall
 borgmatic run will fail (even if individual repositories still complete).

+Another variant is when the source machine is only sometimes available for
+backups, e.g. a laptop where you want to skip backups when the battery falls
+below a certain level.
+
 So what if you want borgmatic to swallow the error of a missing drive
-or an offline server, and continue trucking along? That's where the concept of
-"soft failure" come in.
+or an offline server or a low battery—and exit gracefully? That's where the
+concept of "soft failure" comes in.

 ## Soft failure command hooks
@@ -78,6 +83,17 @@ hooks:
     - ping -q -c 1 buddys-server.org > /dev/null || exit 75
 ```

+Or to only run backups if the battery level is high enough:
+
+```yaml
+hooks:
+    before_backup:
+        - is_battery_percent_at_least.sh 25
+```
+
+(Writing the battery script is left as an exercise to the reader; one possible
+shape is sketched after this section.)
+
 ## Caveats and details

 There are some caveats you should be aware of with this feature.
@@ -99,6 +115,6 @@ There are some caveats you should be aware of with this feature.
 * The soft failure doesn't have to apply to a repository. You can even perform
   a test to make sure that individual source directories are mounted and
   available. Use your imagination!
-* The soft failure feature also works for `before_prune`, `after_prune`,
-  `before_check`, and `after_check` hooks. But it is not implemented for
-  `before_everything` or `after_everything`.
+* The soft failure feature also works for before/after hooks for other
+  actions as well. But it is not implemented for `before_everything` or
+  `after_everything`.

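Taking up that "exercise to the reader": one possible shape for the battery script, as a hedged sketch. It's Linux-only, the `BAT0` sysfs name varies by machine, and the script name above is just an example:

```python
#!/usr/bin/env python3
import sys

# Exit code 75 is borgmatic's soft-fail code (SOFT_FAIL_EXIT_CODE).
SOFT_FAIL_EXIT_CODE = 75
threshold = int(sys.argv[1]) if len(sys.argv) > 1 else 25

# Linux exposes battery charge here; "BAT0" is machine-specific.
with open('/sys/class/power_supply/BAT0/capacity') as capacity_file:
    capacity = int(capacity_file.read())

sys.exit(SOFT_FAIL_EXIT_CODE if capacity < threshold else 0)
```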

@@ -1,9 +1,9 @@
 ---
 title: How to backup your databases
 eleventyNavigation:
-  key: Backup your databases
+  key: 🗄️ Backup your databases
   parent: How-to guides
-  order: 7
+  order: 8
 ---
 ## Database dump hooks
@@ -15,7 +15,8 @@ consistent snapshot that is more suited for backups.

 Fortunately, borgmatic includes built-in support for creating database dumps
 prior to running backups. For example, here is everything you need to dump and
-backup a couple of local PostgreSQL databases and a MySQL/MariaDB database:
+backup a couple of local PostgreSQL databases, a MySQL/MariaDB database, and a
+MongoDB database:

 ```yaml
 hooks:
@@ -24,12 +25,16 @@ hooks:
         - name: orders
     mysql_databases:
         - name: posts
+    mongodb_databases:
+        - name: messages
 ```

 As part of each backup, borgmatic streams a database dump for each configured
 database directly to Borg, so it's included in the backup without consuming
-additional disk space. (The one exception is PostgreSQL's "directory" dump
-format, which can't stream and therefore does consume temporary disk space.)
+additional disk space. (The exceptions are the PostgreSQL/MongoDB "directory"
+dump formats, which can't stream and therefore do consume temporary disk
+space. Additionally, prior to borgmatic 1.5.3, all database dumps consumed
+temporary disk space.)

 To support this, borgmatic creates temporary named pipes in `~/.borgmatic` by
 default. To customize this path, set the `borgmatic_source_directory` option
@@ -59,6 +64,14 @@ hooks:
           username: root
           password: trustsome1
           options: "--skip-comments"
+    mongodb_databases:
+        - name: messages
+          hostname: database3.example.org
+          port: 27018
+          username: dbuser
+          password: trustsome1
+          authentication_database: mongousers
+          options: "--ssl"
 ```

 If you want to dump all databases on a host, use `all` for the database name:
@@ -69,13 +82,15 @@ hooks:
         - name: all
     mysql_databases:
         - name: all
+    mongodb_databases:
+        - name: all
 ```

 Note that you may need to use a `username` of the `postgres` superuser for
 this to work with PostgreSQL.

 If you would like to backup databases only and not source directories, you can
-specify an empty `source_directories` value because it is a mandatory field:
+specify an empty `source_directories` value (as it is a mandatory field):

 ```yaml
 location:
@@ -85,6 +100,14 @@ hooks:
         - name: all
 ```

+### External passwords
+
+If you don't want to keep your database passwords in your borgmatic
+configuration file, you can instead pass them in via [environment
+variables](https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/)
+or command-line [configuration
+overrides](https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/#configuration-overrides).
+
 ### Configuration backups
@@ -97,7 +120,7 @@ bring back any missing configuration files in order to restore a database.

 ## Supported databases

-As of now, borgmatic supports PostgreSQL and MySQL/MariaDB databases
+As of now, borgmatic supports PostgreSQL, MySQL/MariaDB, and MongoDB databases
 directly. But see below about general-purpose preparation and cleanup hooks as
 a work-around with other database systems. Also, please [file a
 ticket](https://torsion.org/borgmatic/#issues) for additional database systems
@@ -185,19 +208,26 @@ backups to avoid getting caught without a way to restore a database.
 databases that share the exact same name on different hosts.
 4. Because database hooks implicitly enable the `read_special` configuration
 setting to support dump and restore streaming, you'll need to ensure that any
-special files are excluded from backups (named pipes, block devices, and
-character devices) to prevent hanging. Try a command like `find / -type c,b,p`
-to find such files. Common directories to exclude are `/dev` and `/run`, but
-that may not be exhaustive.
+special files are excluded from backups (named pipes, block devices,
+character devices, and sockets) to prevent hanging. Try a command like
+`find /your/source/path -type c,b,p,s` to find such files. Common directories
+to exclude are `/dev` and `/run`, but that may not be exhaustive.

 ### Manual restoration

 If you prefer to restore a database without the help of borgmatic, first
 [extract](https://torsion.org/borgmatic/docs/how-to/extract-a-backup/) an
-archive containing a database dump, and then manually restore the dump file
-found within the extracted `~/.borgmatic/` path (e.g. with `pg_restore` or
-`mysql` commands).
+archive containing a database dump.
+
+borgmatic extracts the dump file into the *`username`*`/.borgmatic/` directory
+within the extraction destination path, where *`username`* is the user that
+created the backup. For example, if you created the backup with the `root`
+user and you're extracting to `/tmp`, then the dump will be in
+`/tmp/root/.borgmatic`.
+
+After extraction, you can manually restore the dump file using native database
+commands like `pg_restore`, `mysql`, `mongorestore` or similar.

 ## Preparation and cleanup hooks
@@ -230,5 +260,10 @@ hooks:
 ### borgmatic hangs during backup

 See Limitations above about `read_special`. You may need to exclude certain
-paths with named pipes, block devices, or character devices on which borgmatic
-is hanging.
+paths with named pipes, block devices, character devices, or sockets on which
+borgmatic is hanging.
+
+Alternatively, if excluding special files is too onerous, you can create two
+separate borgmatic configuration files—one for your source files and a
+separate one for backing up databases. That way, the database `read_special`
+option will not be active when backing up special files.

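The streaming this documentation describes hinges on named pipes (FIFOs): the "file" Borg reads is actually a live stream from the dump process. A minimal Unix-only sketch of the mechanism, with throwaway paths and commands:

```python
import os
import subprocess

fifo_path = '/tmp/demo-database-dump'  # borgmatic's real pipes live under ~/.borgmatic
os.mkfifo(fifo_path, mode=0o600)

# Writer: streams "dump" output into the pipe without touching the disk.
writer = subprocess.Popen('echo "pretend database dump" > {}'.format(fifo_path), shell=True)

# Reader: whatever backs up this "file" consumes the stream directly.
with open(fifo_path) as fifo:
    print(fifo.read())

writer.wait()
os.remove(fifo_path)
```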

@ -1,27 +1,28 @@
--- ---
title: How to deal with very large backups title: How to deal with very large backups
eleventyNavigation: eleventyNavigation:
key: Deal with very large backups key: 📏 Deal with very large backups
parent: How-to guides parent: How-to guides
order: 3 order: 4
--- ---
## Biggish data ## Biggish data
Borg itself is great for efficiently de-duplicating data across successive Borg itself is great for efficiently de-duplicating data across successive
backup archives, even when dealing with very large repositories. But you may backup archives, even when dealing with very large repositories. But you may
find that while borgmatic's default mode of "prune, create, and check" works find that while borgmatic's default mode of `prune`, `compact`, `create`, and
well on small repositories, it's not so great on larger ones. That's because `check` works well on small repositories, it's not so great on larger ones.
running the default pruning and consistency checks take a long time on large That's because running the default pruning, compact, and consistency checks
repositories. take a long time on large repositories.
### A la carte actions ### A la carte actions
If you find yourself in this situation, you have some options. First, you can If you find yourself in this situation, you have some options. First, you can
run borgmatic's pruning, creating, or checking actions separately. For run borgmatic's `prune`, `compact`, `create`, or `check` actions separately.
instance, the following optional actions are available: For instance, the following optional actions are available:
```bash ```bash
borgmatic prune borgmatic prune
borgmatic compact
borgmatic create borgmatic create
borgmatic check borgmatic check
``` ```
@ -32,7 +33,7 @@ borgmatic check
You can run with only one of these actions provided, or you can mix and match You can run with only one of these actions provided, or you can mix and match
any number of them in a single borgmatic run. This supports approaches like any number of them in a single borgmatic run. This supports approaches like
skipping certain actions while running others. For instance, this skips skipping certain actions while running others. For instance, this skips
`prune` and only runs `create` and `check`: `prune` and `compact` and only runs `create` and `check`:
```bash ```bash
borgmatic create check borgmatic create check
@ -48,7 +49,7 @@ consistency checks with `check` on a much less frequent basis (e.g. with
Another option is to customize your consistency checks. The default Another option is to customize your consistency checks. The default
consistency checks run both full-repository checks and per-archive checks consistency checks run both full-repository checks and per-archive checks
within each repository. within each repository no more than once a month.
But if you find that archive checks are too slow, for example, you can But if you find that archive checks are too slow, for example, you can
configure borgmatic to run repository checks only. Configure this in the configure borgmatic to run repository checks only. Configure this in the
@ -57,9 +58,11 @@ configure borgmatic to run repository checks only. Configure this in the
```yaml ```yaml
consistency: consistency:
checks: checks:
- repository - name: repository
``` ```
(Prior to borgmatic 1.6.2, `checks` was a plain list of strings without the `name:` part.)
Here are the available checks from fastest to slowest: Here are the available checks from fastest to slowest:
* `repository`: Checks the consistency of the repository itself. * `repository`: Checks the consistency of the repository itself.
@ -69,6 +72,37 @@ Here are the available checks from fastest to slowest:
See [Borg's check documentation](https://borgbackup.readthedocs.io/en/stable/usage/check.html) for more information. See [Borg's check documentation](https://borgbackup.readthedocs.io/en/stable/usage/check.html) for more information.
### Check frequency
<span class="minilink minilink-addedin">New in version 1.6.2</span> You can
optionally configure checks to run on a periodic basis rather than every time
borgmatic runs checks. For instance:
```yaml
consistency:
checks:
- name: repository
frequency: 2 weeks
```
This tells borgmatic to run this consistency check at most once every two
weeks for a given repository. The `frequency` value is a number followed by a
unit of time, e.g. "3 days", "1 week", "2 months", etc. The `frequency`
defaults to "always", which means run this check every time checks run.
Unlike a real scheduler like cron, borgmatic only makes a best effort to run
checks at the configured frequency. It compares that frequency with how long
it's been since the last check for a given repository (as recorded in a file
within `~/.borgmatic/checks`). If it hasn't been long enough, the check is
skipped. And you still have to run `borgmatic check` (or just `borgmatic`) in
order for checks to run, even when a `frequency` is configured!
If you want to temporarily ignore your configured frequencies, you can invoke
`borgmatic check --force` to run checks unconditionally.
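For instance, this runs every configured check now, regardless of when each one last ran:

```bash
borgmatic check --force
```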
### Disabling checks
If that's still too slow, you can disable consistency checks entirely,
either for a single repository or for all repositories.
@@ -77,7 +111,7 @@ Disabling all consistency checks looks like this:
```yaml
consistency:
    checks:
        - name: disabled
```
Or, if you have multiple repositories in your borgmatic configuration file,
@@ -98,7 +132,7 @@
```bash
borgmatic check --only data --only extract
```
This is useful for running slow consistency checks on an infrequent basis,
separate from your regular checks. It is still subject to any configured
check frequencies unless the `--force` flag is used.
## Troubleshooting


@@ -1,26 +1,26 @@
---
title: How to develop on borgmatic
eleventyNavigation:
  key: 🏗️ Develop on borgmatic
  parent: How-to guides
  order: 13
---
## Source code
To get set up to hack on borgmatic, first clone master via HTTPS or SSH:
```bash
git clone https://projects.torsion.org/borgmatic-collective/borgmatic.git
```
Or:
```bash
git clone ssh://git@projects.torsion.org:3022/borgmatic-collective/borgmatic.git
```
Then, install borgmatic
"[editable](https://pip.pypa.io/en/stable/cli/pip_install/#editable-installs)"
so that you can run borgmatic commands while you're hacking on them to
make sure your changes work.
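A typical editable install from your clone might look like this (a sketch; the exact pip invocation is elided from this diff):

```bash
cd borgmatic
pip3 install --user --editable .
```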
@@ -66,8 +66,6 @@ following:
```bash
tox -e black
```
And if you get a complaint from the
[isort](https://github.com/timothycrosley/isort) Python import orderer, you
can ask isort to order your imports for you:
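Presumably via tox as well, mirroring the Black invocation above (an assumption; the exact command is elided here):

```bash
tox -e isort
```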
@@ -118,7 +116,7 @@ See the Black, Flake8, and isort documentation for more information.
Each pull request triggers a continuous integration build which runs the test
suite. You can view these builds on
[build.torsion.org](https://build.torsion.org/borgmatic-collective/borgmatic), and they're
also linked from the commits list on each pull request.
## Documentation development


@@ -1,9 +1,9 @@
---
title: How to extract a backup
eleventyNavigation:
  key: 📤 Extract a backup
  parent: How-to guides
  order: 7
---
## Extract
@@ -116,7 +116,7 @@ Omit the `--archive` flag to mount all archives (lazy-loaded):
```bash
borgmatic mount --mount-point /mnt
```
Or use the "latest" value for the archive to mount the latest archive:
```bash
borgmatic mount --archive latest --mount-point /mnt
```


@@ -1,9 +1,9 @@
---
title: How to inspect your backups
eleventyNavigation:
  key: 🔎 Inspect your backups
  parent: How-to guides
  order: 5
---
## Backup progress
@@ -51,6 +51,31 @@ borgmatic info
`--info`. Or upgrade borgmatic!)
### Searching for a file
<span class="minilink minilink-addedin">New in version 1.6.3</span> Let's say
you've accidentally deleted a file and want to find the backup archive(s)
containing it. `borgmatic list` provides a `--find` flag for exactly this
purpose. For instance, if you're looking for a `foo.txt`:
```bash
borgmatic list --find foo.txt
```
This will list your archives and indicate those with files matching
`*foo.txt*` anywhere in the archive. The `--find` parameter can alternatively
be a [Borg
pattern](https://borgbackup.readthedocs.io/en/stable/usage/help.html#borg-patterns).
To limit the archives searched, use the standard `list` parameters for
filtering archives such as `--last`, `--archive`, `--glob-archives`, etc. For
example, to search only the last five archives:
```bash
borgmatic list --find foo.txt --last 5
```
## Logging
By default, borgmatic logs to a local syslog-compatible daemon if one is


@@ -1,9 +1,9 @@
---
title: How to make backups redundant
eleventyNavigation:
  key: ☁️ Make backups redundant
  parent: How-to guides
  order: 3
---
## Multiple repositories
@@ -22,7 +22,6 @@ location:
    repositories:
        - 1234@usw-s001.rsync.net:backups.borg
        - k8pDxu32@k8pDxu32.repo.borgbase.com:repo
        - /var/lib/backups/local.borg
```
@@ -35,8 +34,7 @@ Here's a way of visualizing what borgmatic does with the above configuration:
1. Backup `/home` and `/etc` to `1234@usw-s001.rsync.net:backups.borg`
2. Backup `/home` and `/etc` to `k8pDxu32@k8pDxu32.repo.borgbase.com:repo`
3. Backup `/home` and `/etc` to `/var/lib/backups/local.borg`
This gives you redundancy of your data across repositories and even
potentially across providers.


@@ -1,7 +1,7 @@
---
title: How to make per-application backups
eleventyNavigation:
  key: 🔀 Make per-application backups
  parent: How-to guides
  order: 1
---
@@ -32,10 +32,16 @@ perform any merging of configuration files by default. If you'd like borgmatic
to merge your configuration files, see below about configuration includes.
Additionally, the `~/.config/borgmatic.d/` directory works the same way as
`/etc/borgmatic.d`.
If you need even more customizability, you can specify alternate configuration
paths on the command-line with borgmatic's `--config` flag. (See `borgmatic
--help` for more information.) For instance, if you want to schedule your
various borgmatic backups to run at different times, you'll need multiple
entries in your [scheduling software of
choice](https://torsion.org/borgmatic/docs/how-to/set-up-backups/#autopilot),
each entry using borgmatic's `--config` flag instead of relying on
`/etc/borgmatic.d`.
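For instance, a pair of cron entries along these lines would back up two applications on different schedules (a sketch; the configuration file names are hypothetical):

```bash
# Nightly at 3am: back up app1.
0 3 * * * root /root/.local/bin/borgmatic --config /etc/borgmatic.d/app1.yaml
# Weekly on Sundays at 4am: back up app2.
0 4 * * 0 root /root/.local/bin/borgmatic --config /etc/borgmatic.d/app2.yaml
```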
## Configuration includes
@@ -69,6 +75,10 @@ themselves and complaining that they are not valid configuration files, you
should put them in a directory other than `/etc/borgmatic.d/`. (A subdirectory
is fine.)
When a configuration include is a relative path, borgmatic loads it from either
the current working directory or from the directory containing the file doing
the including.
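For example, this sketch includes a hypothetical `common_retention.yaml` that sits next to the file doing the including:

```yaml
retention:
    !include common_retention.yaml
```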
Note that this form of include must be a YAML value rather than a key. For
example, this will not work:
@@ -80,7 +90,7 @@ location:
```yaml
!include /etc/borgmatic/common_retention.yaml
```
But if you do want to merge in a YAML key *and* its values, keep reading!
## Include merging
@@ -89,35 +99,50 @@ If you need to get even fancier and pull in common configuration options while
potentially overriding individual options, you can perform a YAML merge of
included configuration using the YAML `<<` key. For instance, here's an
example of a main configuration file that pulls in two retention options via
an include and then overrides one of them locally:
```yaml
<<: !include /etc/borgmatic/common.yaml

location:
    ...

retention:
    keep_daily: 5
```
This is what `common.yaml` might look like:
```yaml
retention:
    keep_hourly: 24
    keep_daily: 7
```
Once this include gets merged in, the resulting configuration would have a
`keep_hourly` value of `24` and an overridden `keep_daily` value of `5`.
When there's an option collision between the local file and the merged
include, the local file's option takes precedence.
Note that this `<<` include merging syntax is only for merging in mappings
(configuration options and their values). But if you'd like to include a
single value directly, please see the section above about standard includes.
Additionally, there is a limitation preventing multiple `<<` include merges
per section. So, for instance, you can do one `<<` merge at the global level,
another `<<` within each configuration section, and so on, but not two in the
same section. (This is a YAML limitation.)
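Putting those rules together, here's a sketch of what is allowed, assuming both hypothetical include files exist:

```yaml
# One merge at the global level...
<<: !include /etc/borgmatic/common.yaml

retention:
    # ...and at most one more within each section.
    <<: !include /etc/borgmatic/common_retention.yaml
    keep_daily: 5
```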
### Deep merge
<span class="minilink minilink-addedin">New in version 1.6.0</span> borgmatic
performs a deep merge of merged include files, meaning that values are merged
at all levels in the two configuration files. Colliding list values are
appended together. This allows you to include common configuration—up to full
borgmatic configuration files—while overriding only the parts you want to
customize.
## Configuration overrides
@@ -140,7 +165,19 @@ What this does is load your configuration files, and for each one, disregard
the configured value for the `remote_path` option in the `location` section,
and use the value of `/usr/local/bin/borg1` instead.
You can even override multiple values at once. For instance:
```bash
borgmatic create --override section.option1=value1 section.option2=value2
```
This will accomplish the same thing:
```bash
borgmatic create --override section.option1=value1 --override section.option2=value2
```
Note that each value is parsed as an actual YAML string, so you can even set
list values by using brackets. For instance:
```bash
borgmatic create --override location.repositories=[test1.borg,test2.borg]
```
@@ -150,7 +187,14 @@
Or even a single list element:
```bash
borgmatic create --override location.repositories=[/root/test.borg]
```
If your override value contains special YAML characters like colons, then
you'll need quotes for it to parse correctly:
```bash
borgmatic create --override location.repositories="['user@server:test.borg']"
```
There is not currently a way to override a single element of a list without
@@ -165,3 +209,5 @@ indentation and a leading dash.)
Be sure to quote your overrides if they contain spaces or other characters
that your shell may interpret.
An alternative to command-line overrides is passing in your values via [environment variables](https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/).


@@ -1,9 +1,9 @@
---
title: How to monitor your backups
eleventyNavigation:
  key: 🚨 Monitor your backups
  parent: How-to guides
  order: 6
---
## Monitoring and alerting
@@ -38,17 +38,19 @@ below for how to configure this.
borgmatic integrates with monitoring services like
[Healthchecks](https://healthchecks.io/), [Cronitor](https://cronitor.io),
[Cronhub](https://cronhub.io), [PagerDuty](https://www.pagerduty.com/), and
[ntfy](https://ntfy.sh/) and pings these services whenever borgmatic runs.
That way, you'll receive an alert when something goes wrong or (for certain
hooks) the service doesn't hear from borgmatic for a configured interval. See
[Healthchecks
hook](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#healthchecks-hook),
[Cronitor
hook](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#cronitor-hook),
[Cronhub
hook](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#cronhub-hook),
[PagerDuty
hook](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#pagerduty-hook),
and [ntfy hook](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#ntfy-hook)
below for how to configure this.
While these services offer different features, you probably only need to use
@@ -59,8 +61,6 @@ one of them at most.
You can use traditional monitoring software to consume borgmatic JSON output
and track when the last successful backup occurred. See [scripting
borgmatic](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#scripting-borgmatic)
below for how to configure this.
### Borg hosting providers
@@ -83,10 +83,10 @@ tests](https://torsion.org/borgmatic/docs/how-to/extract-a-backup/).
## Error hooks
When an error occurs during a `prune`, `compact`, `create`, or `check` action,
borgmatic can run configurable shell commands to fire off custom error
notifications or take other actions, so you can get alerted as soon as
something goes wrong. Here's a not-so-useful example:
```yaml
hooks:
    on_error:
        - send-text-message.sh "{configuration_filename}" "{repository}"
```
In this example, when the error occurs, borgmatic interpolates runtime values
into the hook command: the borgmatic configuration filename, and the path of
the repository. Here's the full set of supported variables you can use here:
* `configuration_filename`: borgmatic configuration filename in which the
  error occurred
@@ -117,9 +116,9 @@ here:
* `output`: output of the command that failed (may be blank if an error
  occurred without running a command)
Note that borgmatic runs the `on_error` hooks only for `prune`, `compact`,
`create`, or `check` actions or hooks in which an error occurs, and not other
actions. borgmatic does not run `on_error` hooks if an error occurs within a
`before_everything` or `after_everything` hook. For more about hooks, see the
[borgmatic hooks
documentation](https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/),
@ -137,14 +136,15 @@ URL" for your project. Here's an example:
```yaml ```yaml
hooks: hooks:
healthchecks: https://hc-ping.com/addffa72-da17-40ae-be9c-ff591afb942a healthchecks:
ping_url: https://hc-ping.com/addffa72-da17-40ae-be9c-ff591afb942a
``` ```
With this hook in place, borgmatic pings your Healthchecks project when a With this hook in place, borgmatic pings your Healthchecks project when a
backup begins, ends, or errors. Specifically, after the <a backup begins, ends, or errors. Specifically, after the <a
href="https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/">`before_backup` href="https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/">`before_backup`
hooks</a> run, borgmatic lets Healthchecks know that it has started if any of hooks</a> run, borgmatic lets Healthchecks know that it has started if any of
the `prune`, `create`, or `check` actions are run. the `prune`, `compact`, `create`, or `check` actions are run.
Then, if the actions complete successfully, borgmatic notifies Healthchecks of
the success after the `after_backup` hooks run, and includes borgmatic logs in
@@ -155,11 +155,14 @@ in the Healthchecks UI, although be aware that Healthchecks currently has a
If an error occurs during any action or hook, borgmatic notifies Healthchecks
after the `on_error` hooks run, also tacking on logs including the error
itself. But the logs are only included for errors that occur when a `prune`,
`compact`, `create`, or `check` action is run.
You can customize the verbosity of the logs that are sent to Healthchecks with
borgmatic's `--monitoring-verbosity` flag. The `--files` and `--stats` flags
may also be of use. See `borgmatic --help` for more information. Additionally,
see the [borgmatic configuration
file](https://torsion.org/borgmatic/docs/reference/configuration/) for
additional Healthchecks options.
You can configure Healthchecks to notify you by a [variety of
mechanisms](https://healthchecks.io/#welcome-integrations) when backups fail
@@ -177,15 +180,16 @@ API URL" for your monitor. Here's an example:
```yaml
hooks:
    cronitor:
        ping_url: https://cronitor.link/d3x0c1
```
With this hook in place, borgmatic pings your Cronitor monitor when a backup
begins, ends, or errors. Specifically, after the <a
href="https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/">`before_backup`
hooks</a> run, borgmatic lets Cronitor know that it has started if any of the
`prune`, `compact`, `create`, or `check` actions are run. Then, if the actions
complete successfully, borgmatic notifies Cronitor of the success after the
`after_backup` hooks run. And if an error occurs during any action or hook,
borgmatic notifies Cronitor after the `on_error` hooks run.
@@ -205,15 +209,16 @@ URL" for your monitor. Here's an example:
```yaml
hooks:
    cronhub:
        ping_url: https://cronhub.io/start/1f5e3410-254c-11e8-b61d-55875966d031
```
With this hook in place, borgmatic pings your Cronhub monitor when a backup
begins, ends, or errors. Specifically, after the <a
href="https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/">`before_backup`
hooks</a> run, borgmatic lets Cronhub know that it has started if any of the
`prune`, `compact`, `create`, or `check` actions are run. Then, if the actions
complete successfully, borgmatic notifies Cronhub of the success after the
`after_backup` hooks run. And if an error occurs during any action or hook,
borgmatic notifies Cronhub after the `on_error` hooks run.
@@ -247,14 +252,15 @@ Here's an example:
```yaml
hooks:
    pagerduty:
        integration_key: a177cad45bd374409f78906a810a3074
```
With this hook in place, borgmatic creates a PagerDuty event for your service
whenever backups fail. Specifically, if an error occurs during a `create`,
`prune`, `compact`, or `check` action, borgmatic sends an event to PagerDuty
before the `on_error` hooks run. Note that borgmatic does not contact
PagerDuty when a backup starts or ends without error.
You can configure PagerDuty to notify you by a [variety of
mechanisms](https://support.pagerduty.com/docs/notifications) when backups
@@ -264,6 +270,52 @@ If you have any issues with the integration, [please contact
us](https://torsion.org/borgmatic/#support-and-contributing).
## ntfy hook
[ntfy](https://ntfy.sh) is a free, simple service (either hosted or self-hosted)
that offers pub/sub push notifications to multiple platforms including
[web](https://ntfy.sh/stats), [Android](https://play.google.com/store/apps/details?id=io.heckel.ntfy),
and [iOS](https://apps.apple.com/us/app/ntfy/id1625396347).
Since push notifications for regular events might soon become quite annoying,
this hook fires only on errors by default in order to instantly alert you to
issues. The `states` list can override this.
As ntfy is unauthenticated, it isn't a suitable channel for any private
information, so the default messages are intentionally generic. These can be
overridden, depending on your risk assessment. Each `state` can have its own
custom messages, priorities, and tags; if none are provided, the defaults are
used.
An example configuration is shown here, with all the available options, including
[priorities](https://ntfy.sh/docs/publish/#message-priority) and
[tags](https://ntfy.sh/docs/publish/#tags-emojis):
```yaml
hooks:
    ntfy:
        topic: my-unique-topic
        server: https://ntfy.my-domain.com
        start:
            title: A Borgmatic backup started
            message: Watch this space...
            tags: borgmatic
            priority: min
        finish:
            title: A Borgmatic backup completed successfully
            message: Nice!
            tags: borgmatic,+1
            priority: min
        fail:
            title: A Borgmatic backup failed
            message: You should probably fix it
            tags: borgmatic,-1,skull
            priority: max
        states:
            - start
            - finish
            - fail
```
## Scripting borgmatic
To consume the output of borgmatic in other software, you can include an
@@ -275,40 +327,12 @@ suppressed so as not to interfere with the captured JSON. Also note that JSON
output only shows up at the console, and not in syslog.
### Latest backups
All borgmatic actions that accept an "--archive" flag allow you to specify an
archive name of "latest". This lets you get the latest archive without having
to first run "borgmatic list" manually, which can be handy in automated
scripts. Here's an example:
```bash
borgmatic info --archive latest
```


@@ -0,0 +1,82 @@
---
title: How to provide your passwords
eleventyNavigation:
  key: 🔒 Provide your passwords
  parent: How-to guides
  order: 2
---
## Environment variable interpolation
If you want to use a Borg repository passphrase or database passwords with
borgmatic, you can set them directly in your borgmatic configuration file,
treating those secrets like any other option value. But if you'd rather store
them outside of borgmatic, whether for convenience or security reasons, read
on.
<span class="minilink minilink-addedin">New in version 1.6.4</span> borgmatic
supports interpolating arbitrary environment variables directly into option
values in your configuration file. That means you can instruct borgmatic to
pull your repository passphrase, your database passwords, or any other option
values from environment variables. For instance:
```yaml
storage:
    encryption_passphrase: ${MY_PASSPHRASE}
```
This uses the `MY_PASSPHRASE` environment variable as your encryption
passphrase. Note that the `{` `}` brackets are required. Just `$MY_PASSPHRASE`
will not work.
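For example, with the configuration above, you'd set the variable in whatever environment borgmatic runs in (a sketch; the passphrase value is of course hypothetical):

```bash
export MY_PASSPHRASE=trustno1
borgmatic create
```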
In the case of `encryption_passphrase` in particular, an alternate approach
is to use Borg's `BORG_PASSPHRASE` environment variable, which doesn't even
require setting an explicit `encryption_passphrase` value in borgmatic's
configuration file.
For [database
configuration](https://torsion.org/borgmatic/docs/how-to/backup-your-databases/),
the same approach applies. For example:
```yaml
hooks:
    postgresql_databases:
        - name: users
          password: ${MY_DATABASE_PASSWORD}
```
This uses the `MY_DATABASE_PASSWORD` environment variable as your database
password.
### Interpolation defaults
If you'd like to set a default for your environment variables, you can do so with the following syntax:
```yaml
storage:
    encryption_passphrase: ${MY_PASSPHRASE:-defaultpass}
```
Here, "`defaultpass`" is the default passphrase if the `MY_PASSPHRASE`
environment variable is not set. Without a default, if the environment
variable doesn't exist, borgmatic will error.
### Disabling interpolation
To disable this environment variable interpolation feature entirely, you can
pass the `--no-environment-interpolation` flag on the command-line.
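For instance (a minimal sketch; shown with the `create` action, though the flag should apply to any borgmatic run):

```bash
borgmatic create --no-environment-interpolation
```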
### Related features
Another way to override particular options within a borgmatic configuration
file is to use a [configuration
override](https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/#configuration-overrides)
on the command-line. But please be aware of the security implications of
specifying secrets on the command-line.
Additionally, borgmatic action hooks support their own [variable
interpolation](https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/#variable-interpolation),
although in that case it's for particular borgmatic runtime values rather than
(only) environment variables.


@@ -0,0 +1,94 @@
---
title: How to run arbitrary Borg commands
eleventyNavigation:
  key: 🔧 Run arbitrary Borg commands
  parent: How-to guides
  order: 11
---
## Running Borg with borgmatic
Borg has several commands (and options) that borgmatic does not currently
support. Sometimes though, as a borgmatic user, you may find yourself wanting
to take advantage of these off-the-beaten-path Borg features. You could of
course drop down to running Borg directly. But then you'd give up all the
niceties of your borgmatic configuration. You could file a [borgmatic
ticket](https://torsion.org/borgmatic/#issues) or even a [pull
request](https://torsion.org/borgmatic/#contributing) to add the feature. But
what if you need it *now*?
That's where borgmatic's support for running "arbitrary" Borg commands comes
in. Running Borg commands with borgmatic takes advantage of the following, all
based on your borgmatic configuration files or command-line arguments:
* configured repositories (automatically runs your Borg command once for each
one)
* local and remote Borg binary paths
* SSH settings and Borg environment variables
* lock wait settings
* verbosity
### borg action
The way you run Borg with borgmatic is via the `borg` action. Here's a simple
example:
```bash
borgmatic borg break-lock
```
(No `borg` action in borgmatic? Time to upgrade!)
This runs Borg's `break-lock` command once on each configured borgmatic
repository. Notice how the repository isn't present in the specified Borg
options, as that part is provided by borgmatic.
You can also specify Borg options for relevant commands:
```bash
borgmatic borg list --progress
```
This runs Borg's `list` command once on each configured borgmatic
repository. However, the native `borgmatic list` action should be preferred
for most uses.
What if you only want to run Borg on a single configured borgmatic repository
when you've got several configured? Not a problem.
```bash
borgmatic borg --repository repo.borg break-lock
```
And what about a single archive?
```bash
borgmatic borg --archive your-archive-name list
```
### Limitations
borgmatic's `borg` action is not without limitations:
* The Borg command you want to run (`create`, `list`, etc.) *must* come first
after the `borg` action. If you have any other Borg options to specify,
provide them after. For instance, `borgmatic borg list --progress` will work,
but `borgmatic borg --progress list` will not.
* borgmatic supplies the repository/archive name to Borg for you (based on
your borgmatic configuration or the `borgmatic borg --repository`/`--archive`
arguments), so do not specify the repository/archive otherwise.
* The `borg` action will not currently work for any Borg commands like `borg
serve` that do not accept a repository/archive name.
* Do not specify any global borgmatic arguments to the right of the `borg`
action. (They will be passed to Borg instead of borgmatic.) If you have
global borgmatic arguments, specify them *before* the `borg` action.
* Unlike other borgmatic actions, you cannot combine the `borg` action with
other borgmatic actions. This is to prevent ambiguity in commands like
`borgmatic borg list`, in which `list` is both a valid Borg command and a
borgmatic action. In this case, only the Borg command is run.
* Unlike normal borgmatic actions that support JSON, the `borg` action will
not disable certain borgmatic logs to avoid interfering with JSON output.
In general, this `borgmatic borg` feature should be considered an escape
valve—a feature of second resort. In the long run, it's preferable to wrap
Borg commands with borgmatic actions that can support them fully.


@@ -1,7 +1,7 @@
---
title: How to set up backups
eleventyNavigation:
  key: 📥 Set up backups
  parent: How-to guides
  order: 0
---
@@ -28,7 +28,7 @@ sudo pip3 install --user --upgrade borgmatic
This installs borgmatic and its commands at the `/root/.local/bin` path.
Your pip binary may have a different name than "pip3". Make sure you're using
Python 3.7+, as borgmatic does not support older versions of Python.
The next step is to ensure that borgmatic's commands are available on your
system `PATH`, so that you can run borgmatic:
@@ -51,6 +51,11 @@ sudo borgmatic --version
If borgmatic is properly installed, that should output your borgmatic version.
As an alternative to adding the path to the `~/.bashrc` file, if you're using
sudo to run borgmatic, you can configure [sudo's
`secure_path` option](https://man.archlinux.org/man/sudoers.5) to include
borgmatic's path.
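A sketch of what that might look like in `/etc/sudoers` (edit via `visudo`; the exact path list is an assumption based on the user install described above):

```bash
# Let sudo find borgmatic in root's user-install location.
Defaults secure_path = /root/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
```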
### Global install option
@@ -68,7 +73,7 @@ sudo pip3 install --upgrade borgmatic
The main downside of a global install is that borgmatic is less cleanly
separated from the rest of your Python software, and there's the theoretical
possibility of library conflicts. But if you're okay with that, for instance
on a relatively dedicated system, then a global install can work out fine.
@@ -77,8 +82,8 @@ on a relatively dedicated system, then a global install can work out fine.
Besides the approaches described above, there are several other options for
installing borgmatic:
* [Docker image with scheduled backups](https://hub.docker.com/r/b3vis/borgmatic/) (+ Docker Compose files)
* [Docker image with multi-arch and Docker CLI support](https://hub.docker.com/r/modem7/borgmatic-docker/)
* [Debian](https://tracker.debian.org/pkg/borgmatic)
* [Ubuntu](https://launchpad.net/ubuntu/+source/borgmatic)
* [Fedora official](https://bodhi.fedoraproject.org/updates/?search=borgmatic)
@@ -87,23 +92,27 @@ installing borgmatic:
* [Alpine Linux](https://pkgs.alpinelinux.org/packages?name=borgmatic)
* [OpenBSD](http://ports.su/sysutils/borgmatic)
* [openSUSE](https://software.opensuse.org/package/borgmatic)
* [macOS (via Homebrew)](https://formulae.brew.sh/formula/borgmatic)
* [Ansible role](https://github.com/borgbase/ansible-role-borgbackup)
* [virtualenv](https://virtualenv.pypa.io/en/stable/)
## Hosting providers
Need somewhere to store your encrypted off-site backups? The following hosting
providers include specific support for Borg/borgmatic—and fund borgmatic
development and hosting when you use these links to sign up. (These are
referral links, but without any tracking scripts or cookies.)
<ul>
 <li class="referral"><a href="https://www.borgbase.com/?utm_source=borgmatic">BorgBase</a>: Borg hosting service with support for monitoring, 2FA, and append-only repos</li>
</ul>
Additionally, [rsync.net](https://www.rsync.net/products/borg.html) and
[Hetzner](https://www.hetzner.com/storage/storage-box) have compatible storage
offerings, but do not currently fund borgmatic development or hosting.
## Configuration
After you install borgmatic, generate a sample configuration file:
@@ -139,8 +148,8 @@ configuration](https://torsion.org/borgmatic/docs/how-to/upgrade/#upgrading-your
### Encryption
If you encrypt your Borg repository with a passphrase or a key file, you'll
either need to set the borgmatic `encryption_passphrase` configuration
variable or set the `BORG_PASSPHRASE` environment variable. See the
[repository encryption
section](https://borgbackup.readthedocs.io/en/stable/quickstart.html#repository-encryption)
@@ -224,8 +233,8 @@ sudo borgmatic --verbosity 1 --files
borgmatic. So try leaving it out, or upgrade borgmatic!)
By default, this will also prune any old backups as per the configured
retention policy, compact segments to free up space (with Borg 1.2+), and
check backups for consistency problems due to things like file damage.
The verbosity flag makes borgmatic show the steps it's performing. And the
files flag lists each file that's new or changed since the last backup.
@@ -245,7 +254,7 @@ that, you can configure a separate job runner to invoke it periodically.
### cron
If you're using cron, download the [sample cron
file](https://projects.torsion.org/borgmatic-collective/borgmatic/src/master/sample/cron/borgmatic).
Then, from the directory where you downloaded it:
```bash
sudo mv borgmatic /etc/cron.d/borgmatic
sudo chmod +x /etc/cron.d/borgmatic
```
If borgmatic is installed at a different location than
`/root/.local/bin/borgmatic`, edit the cron file with the correct path. You
can also modify the cron file if you'd like to run borgmatic more or less
frequently.
### systemd
If you're using systemd instead of cron to run jobs, you can still configure
borgmatic to run automatically.
(If you installed borgmatic from [Other ways to
install](https://torsion.org/borgmatic/docs/how-to/set-up-backups/#other-ways-to-install),
you may already have borgmatic systemd service and timer files. If so, you may
be able to skip some of the steps below.)
First, download the [sample systemd service
file](https://projects.torsion.org/borgmatic-collective/borgmatic/raw/branch/master/sample/systemd/borgmatic.service)
and the [sample systemd timer
file](https://projects.torsion.org/borgmatic-collective/borgmatic/raw/branch/master/sample/systemd/borgmatic.timer).
Then, from the directory where you downloaded them:
@@ -281,12 +301,46 @@ borgmatic to run.
If you run borgmatic in macOS with launchd, you may encounter permissions
issues when reading files to backup. If that happens to you, you may be
interested in an [unofficial work-around for Full Disk
Access](https://projects.torsion.org/borgmatic-collective/borgmatic/issues/293).
## Niceties
### Shell completion
borgmatic includes a shell completion script (currently only for Bash) to
support tab-completing borgmatic command-line actions and flags. Depending on
how you installed borgmatic, this may be enabled by default. But if it's not,
start by installing the `bash-completion` Linux package or the
[`bash-completion@2`](https://formulae.brew.sh/formula/bash-completion@2)
macOS Homebrew formula. Then, install the shell completion script globally:
```bash
sudo su -c "borgmatic --bash-completion > $(pkg-config --variable=completionsdir bash-completion)/borgmatic"
```
If you don't have `pkg-config` installed, you can try the following path
instead:
```bash
sudo su -c "borgmatic --bash-completion > /usr/share/bash-completion/completions/borgmatic"
```
Or, if you'd like to install the script for just the current user:
```bash
mkdir --parents ~/.local/share/bash-completion/completions
borgmatic --bash-completion > ~/.local/share/bash-completion/completions/borgmatic
```
Finally, restart your shell (`exit` and open a new shell) so the completions
take effect.
### Colored output
borgmatic produces colored terminal output by default. It is disabled when a
non-interactive terminal is detected (like a cron job), or when you use the
`--json` flag. Otherwise, you can disable it by passing the `--no-color` flag,
setting the environment variable `PY_COLORS=False`, or setting the `color`


@@ -1,9 +1,9 @@
---
title: How to upgrade borgmatic
eleventyNavigation:
  key: 📦 Upgrade borgmatic
  parent: How-to guides
  order: 12
---
## Upgrading


@@ -1,7 +1,7 @@
---
title: Command-line reference
eleventyNavigation:
  key: ⌨️ Command-line reference
  parent: Reference guides
  order: 1
---


@@ -1,7 +1,7 @@
---
title: Configuration reference
eleventyNavigation:
  key: ⚙️ Configuration reference
  parent: Reference guides
  order: 0
---

BIN docs/static/mongodb.png (new binary file, 12 KiB; not shown)
BIN docs/static/ntfy.png (new binary file, 10 KiB; not shown)
BIN (removed binary file, 7.3 KiB; not shown)


@@ -1,3 +1,3 @@
# You can drop this file into /etc/cron.d/ to run borgmatic nightly.
0 3 * * * root PATH=$PATH:/usr/bin:/usr/local/bin /root/.local/bin/borgmatic --verbosity -1 --syslog-verbosity 1


@@ -2,14 +2,16 @@
Description=borgmatic backup
Wants=network-online.target
After=network-online.target
# Prevent borgmatic from running unless the machine is plugged into power. Remove this line if you
# want to allow borgmatic to run anytime.
ConditionACPower=true

[Service]
Type=oneshot

# Security settings for systemd running as root, optional but recommended to improve security. You
# can disable individual settings if they cause problems for your use case. For more details, see
# the systemd manual: https://www.freedesktop.org/software/systemd/man/systemd.exec.html
LockPersonality=true
# Certain borgmatic features like Healthchecks integration need MemoryDenyWriteExecute to be off.
# But you can try setting it to "yes" for improved security if you don't use those features.
@@ -29,14 +31,19 @@ RestrictRealtime=yes
RestrictSUIDSGID=yes
SystemCallArchitectures=native
SystemCallFilter=@system-service
SystemCallErrorNumber=EPERM

# To restrict write access further, change "ProtectSystem" to "strict" and uncomment
# "ReadWritePaths", "ReadOnlyPaths", "ProtectHome", and "BindPaths". Then add any local repository
# paths to the list of "ReadWritePaths" and local backup source paths to "ReadOnlyPaths". This
# leaves most of the filesystem read-only to borgmatic.
ProtectSystem=full
# ReadWritePaths=-/mnt/my_backup_drive
# ReadOnlyPaths=-/var/lib/my_backup_source
# This will mount a tmpfs on top of /root and pass through needed paths
# ProtectHome=tmpfs
# BindPaths=-/root/.config/borg -/root/.cache/borg -/root/.borgmatic
# May interfere with running external programs within borgmatic hooks.
CapabilityBoundingSet=CAP_DAC_READ_SEARCH CAP_NET_RAW

# Lower CPU and I/O priority.
@@ -54,4 +61,4 @@ LogRateLimitIntervalSec=0
# Delay start to prevent backups running during boot. Note that systemd-inhibit requires dbus and
# dbus-user-session to be installed.
ExecStartPre=sleep 1m
ExecStart=systemd-inhibit --who="borgmatic" --why="Prevent interrupting scheduled backup" /root/.local/bin/borgmatic --verbosity -1 --syslog-verbosity 1


@@ -4,6 +4,7 @@ Description=Run borgmatic backup
[Timer]
OnCalendar=daily
Persistent=true
RandomizedDelaySec=3h

[Install]
WantedBy=timers.target


@@ -38,7 +38,7 @@ for sub_command in prune create check list info; do
        | grep -v '^--json$' \
        | grep -v '^--keep-last$' \
        | grep -v '^--list$' \
        | grep -v '^--bsdflags$' \
        | grep -v '^--pattern$' \
        | grep -v '^--progress$' \
        | grep -v '^--stats$' \
@@ -54,7 +54,7 @@ for sub_command in prune create check list info; do
        | grep -v '^--format' \
        | grep -v '^--glob-archives' \
        | grep -v '^--last' \
        | grep -v '^--format' \
        | grep -v '^--patterns-from' \
        | grep -v '^--prefix' \
        | grep -v '^--short' \


@@ -15,6 +15,12 @@ if [[ ! -f NEWS ]]; then
fi

version=$(head --lines=1 NEWS)
if [[ $version =~ .*dev* ]]; then
    echo "Refusing to release a dev version: $version"
    exit 1
fi

git tag $version
git push origin $version
git push github $version
@ -25,14 +31,15 @@ python3 setup.py bdist_wheel
python3 setup.py sdist python3 setup.py sdist
gpg --detach-sign --armor dist/borgmatic-*.tar.gz gpg --detach-sign --armor dist/borgmatic-*.tar.gz
gpg --detach-sign --armor dist/borgmatic-*-py3-none-any.whl gpg --detach-sign --armor dist/borgmatic-*-py3-none-any.whl
twine upload -r pypi dist/borgmatic-*.tar.gz dist/borgmatic-*.tar.gz.asc twine upload -r pypi --username __token__ dist/borgmatic-*.tar.gz dist/borgmatic-*.tar.gz.asc
twine upload -r pypi dist/borgmatic-*-py3-none-any.whl dist/borgmatic-*-py3-none-any.whl.asc twine upload -r pypi --username __token__ dist/borgmatic-*-py3-none-any.whl dist/borgmatic-*-py3-none-any.whl.asc
# Set release changelogs on projects.torsion.org and GitHub. # Set release changelogs on projects.torsion.org and GitHub.
release_changelog="$(cat NEWS | sed '/^$/q' | grep -v '^\S')" release_changelog="$(cat NEWS | sed '/^$/q' | grep -v '^\S')"
escaped_release_changelog="$(echo "$release_changelog" | sed -z 's/\n/\\n/g' | sed -z 's/\"/\\"/g')" escaped_release_changelog="$(echo "$release_changelog" | sed -z 's/\n/\\n/g' | sed -z 's/\"/\\"/g')"
curl --silent --request POST \ curl --silent --request POST \
"https://projects.torsion.org/api/v1/repos/witten/borgmatic/releases?access_token=$projects_token" \ "https://projects.torsion.org/api/v1/repos/borgmatic-collective/borgmatic/releases" \
--header "Authorization: token $projects_token" \
--header "Accept: application/json" \ --header "Accept: application/json" \
--header "Content-Type: application/json" \ --header "Content-Type: application/json" \
--data "{\"body\": \"$escaped_release_changelog\", \"draft\": false, \"name\": \"borgmatic $version\", \"prerelease\": false, \"tag_name\": \"$version\"}" --data "{\"body\": \"$escaped_release_changelog\", \"draft\": false, \"name\": \"borgmatic $version\", \"prerelease\": false, \"tag_name\": \"$version\"}"

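The new guard at the top of this hunk refuses to cut a release while the first line of NEWS still names a development version. For orientation, a rough Python rendering of the same check, illustrative only (the release tooling itself stays a bash script):

import re
import sys

# The first line of NEWS carries the version, e.g. '1.6.3' or '1.6.4.dev0'.
version = open('NEWS').readline().strip()

# Mirrors the bash pattern match above: bail out on any dev version.
if re.search(r'dev', version):
    sys.exit('Refusing to release a dev version: {}'.format(version))
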
View File

@@ -10,9 +10,12 @@
set -e

-python -m pip install --upgrade pip==20.0.2
-pip install tox==3.14.3
apk add --no-cache python3 py3-pip borgbackup postgresql-client mariadb-client mongodb-tools \
    py3-ruamel.yaml py3-ruamel.yaml.clib bash
# If certain dependencies of black are available in this version of Alpine, install them.
apk add --no-cache py3-typed-ast py3-regex || true
python3 -m pip install --no-cache --upgrade pip==22.0.3 setuptools==60.8.1
pip3 install tox==3.24.5
export COVERAGE_FILE=/tmp/.coverage
-tox --workdir /tmp/.tox
tox --workdir /tmp/.tox --sitepackages
-apk add --no-cache borgbackup postgresql-client mariadb-client
-tox --workdir /tmp/.tox -e end-to-end
tox --workdir /tmp/.tox --sitepackages -e end-to-end

View File

@@ -1,5 +1,5 @@
[metadata]
-description-file=README.md
description_file=README.md

[tool:pytest]
testpaths = tests

View File

@@ -1,6 +1,6 @@
from setuptools import find_packages, setup

-VERSION = '1.5.11.dev0'
VERSION = '1.6.4.dev0'

setup(
@@ -30,11 +30,12 @@ setup(
    },
    obsoletes=['atticmatic'],
    install_requires=(
-        'pykwalify>=1.6.0,<14.06',
-        'requests',
-        'ruamel.yaml>0.15.0,<0.17.0',
-        'setuptools',
        'colorama>=0.4.1,<0.5',
'jsonschema',
'requests',
'ruamel.yaml>0.15.0,<0.18.0',
'setuptools',
    ),
    include_package_data=True,
python_requires='>=3.7',
)

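This hunk swaps pykwalify for jsonschema as the configuration validator and loosens the ruamel.yaml pin. A minimal sketch of jsonschema-style validation, using a toy schema rather than borgmatic's real one:

import jsonschema

# Toy schema for illustration; borgmatic's actual schema lives in schema.yaml.
schema = {'type': 'object', 'properties': {'keep_daily': {'type': 'integer'}}}

# Passes silently on valid input and raises ValidationError otherwise.
jsonschema.validate(instance={'keep_daily': 7}, schema=schema)
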
View File

@@ -1,25 +1,23 @@
-appdirs==1.4.3
appdirs==1.4.4; python_version >= '3.8'
-atomicwrites==1.3.0
-attrs==19.3.0
attrs==20.3.0; python_version >= '3.8'
-black==19.3b0; python_version >= '3.6'
black==19.10b0; python_version >= '3.8'
-click==7.0
click==7.1.2; python_version >= '3.8'
-colorama==0.4.1
colorama==0.4.4
-coverage==4.5.4
coverage==5.3
-docopt==0.6.2
-flake8==3.7.9
flake8==4.0.1
flexmock==0.10.4
-isort==4.3.21
isort==5.9.1
mccabe==0.6.1
-more-itertools==7.2.0
-pluggy==0.13.0
pluggy==0.13.1
pathspec==0.8.1; python_version >= '3.8'
-py==1.8.0
py==1.10.0
-pycodestyle==2.5.0
pycodestyle==2.8.0
-pyflakes==2.1.1
pyflakes==2.4.0
-pykwalify==1.7.0
jsonschema==3.2.0
-pytest==5.2.2
pytest==6.2.5
-pytest-cov==2.8.1
pytest-cov==3.0.0
-python-dateutil==2.8.0
regex; python_version >= '3.8'
-PyYAML==5.1.2
requests==2.25.0
-requests==2.22.0
ruamel.yaml>0.15.0,<0.18.0
-ruamel.yaml>0.15.0,<0.17.0
toml==0.10.2; python_version >= '3.8'
-toml==0.10.0
typed-ast; python_version >= '3.8'

View File

@@ -1,7 +1,7 @@
version: '3'
services:
  postgresql:
-    image: postgres:12.2-alpine
    image: postgres:13.1-alpine
    environment:
      POSTGRES_PASSWORD: test
      POSTGRES_DB: test
@@ -10,8 +10,13 @@ services:
    environment:
      MYSQL_ROOT_PASSWORD: test
      MYSQL_DATABASE: test
mongodb:
image: mongo:5.0.5
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: test
  tests:
-    image: python:3.8-alpine3.11
    image: alpine:3.13
    volumes:
      - "../..:/app:ro"
    tmpfs:

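The new mongodb service backs the MongoDB hook tests added further below. A quick connectivity check against it, assuming pymongo were available in the test container (this change does not install it); the credentials mirror the compose file:

import pymongo

# authSource=admin matches the MONGO_INITDB_* root credentials above.
client = pymongo.MongoClient('mongodb://root:test@mongodb:27017/?authSource=admin')
print(client.admin.command('ping'))  # {'ok': 1.0} when the service is reachable
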
View File

@@ -20,6 +20,7 @@ def generate_configuration(config_path, repository_path):
        .read()
        .replace('user@backupserver:sourcehostname.borg', repository_path)
        .replace('- user@backupserver:{fqdn}', '')
.replace('- /home/user/path with spaces', '')
        .replace('- /home', '- {}'.format(config_path))
        .replace('- /etc', '')
        .replace('- /var/log/syslog*', '')

View File

@@ -0,0 +1,5 @@
import subprocess
def test_bash_completion_runs_without_error():
subprocess.check_call('borgmatic --bash-completion | bash', shell=True)

View File

@@ -47,13 +47,22 @@ hooks:
          hostname: mysql
          username: root
          password: test
mongodb_databases:
- name: test
hostname: mongodb
username: root
password: test
authentication_database: admin
- name: all
hostname: mongodb
username: root
password: test
    '''.format(
        config_path, repository_path, borgmatic_source_directory, postgresql_dump_format
    )

-    config_file = open(config_path, 'w')
-    config_file.write(config)
-    config_file.close()
    with open(config_path, 'w') as config_file:
        config_file.write(config)


def test_database_dump_and_restore():
@@ -69,15 +78,15 @@ def test_database_dump_and_restore():
        write_configuration(config_path, repository_path, borgmatic_source_directory)
        subprocess.check_call(
-            'borgmatic -v 2 --config {} init --encryption repokey'.format(config_path).split(' ')
            ['borgmatic', '-v', '2', '--config', config_path, 'init', '--encryption', 'repokey']
        )

        # Run borgmatic to generate a backup archive including a database dump.
-        subprocess.check_call('borgmatic create --config {} -v 2'.format(config_path).split(' '))
        subprocess.check_call(['borgmatic', 'create', '--config', config_path, '-v', '2'])

        # Get the created archive name.
        output = subprocess.check_output(
-            'borgmatic --config {} list --json'.format(config_path).split(' ')
            ['borgmatic', '--config', config_path, 'list', '--json']
        ).decode(sys.stdout.encoding)
        parsed_output = json.loads(output)
@@ -87,9 +96,7 @@ def test_database_dump_and_restore():
        # Restore the database from the archive.
        subprocess.check_call(
-            'borgmatic --config {} restore --archive {}'.format(config_path, archive_name).split(
-                ' '
-            )
            ['borgmatic', '--config', config_path, 'restore', '--archive', archive_name]
        )
    finally:
        os.chdir(original_working_directory)
@@ -114,15 +121,15 @@ def test_database_dump_and_restore_with_directory_format():
        )
        subprocess.check_call(
-            'borgmatic -v 2 --config {} init --encryption repokey'.format(config_path).split(' ')
            ['borgmatic', '-v', '2', '--config', config_path, 'init', '--encryption', 'repokey']
        )

        # Run borgmatic to generate a backup archive including a database dump.
-        subprocess.check_call('borgmatic create --config {} -v 2'.format(config_path).split(' '))
        subprocess.check_call(['borgmatic', 'create', '--config', config_path, '-v', '2'])

        # Restore the database from the archive.
        subprocess.check_call(
-            'borgmatic --config {} restore --archive latest'.format(config_path).split(' ')
            ['borgmatic', '--config', config_path, 'restore', '--archive', 'latest']
        )
    finally:
        os.chdir(original_working_directory)
@@ -142,7 +149,7 @@ def test_database_dump_with_error_causes_borgmatic_to_exit():
        write_configuration(config_path, repository_path, borgmatic_source_directory)
        subprocess.check_call(
-            'borgmatic -v 2 --config {} init --encryption repokey'.format(config_path).split(' ')
            ['borgmatic', '-v', '2', '--config', config_path, 'init', '--encryption', 'repokey']
        )

        # Run borgmatic with a config override such that the database dump fails.

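A side note on the switch from str.split(' ') to explicit argument lists in these tests: splitting a formatted command string breaks as soon as an interpolated value contains a space, whereas a list passes each argument through intact. A quick illustration with a hypothetical path:

command = 'borgmatic --config {} list'.format('/etc/borgmatic conf/config.yaml')
# The path splinters into two bogus arguments:
print(command.split(' '))
# ['borgmatic', '--config', '/etc/borgmatic', 'conf/config.yaml', 'list']

# An argument list keeps the path intact, spaces and all:
args = ['borgmatic', '--config', '/etc/borgmatic conf/config.yaml', 'list']
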
View File

@@ -0,0 +1,17 @@
from borgmatic.borg import feature as module
def test_available_true_for_new_enough_borg_version():
assert module.available(module.Feature.COMPACT, '1.3.7')
def test_available_true_for_borg_version_introducing_feature():
assert module.available(module.Feature.COMPACT, '1.2.0a2')
def test_available_true_for_borg_stable_version_introducing_feature():
assert module.available(module.Feature.COMPACT, '1.2.0')
def test_available_false_for_too_old_borg_version():
assert not module.available(module.Feature.COMPACT, '1.1.5')

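These tests pin down version gating for the Borg 1.2 "compact" feature. A minimal sketch of the idea, assuming pkg_resources-style version parsing; the real borgmatic.borg.feature module may differ in detail:

from enum import Enum

from pkg_resources import parse_version


class Feature(Enum):
    COMPACT = parse_version('1.2.0a2')  # first Borg version offering compact


def available(feature, borg_version):
    # A feature is available when the running Borg is at least as new as the
    # version that introduced it.
    return feature.value <= parse_version(borg_version)


assert available(Feature.COMPACT, '1.3.7')
assert not available(Feature.COMPACT, '1.1.5')
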
View File

@@ -163,6 +163,24 @@ def test_parse_arguments_with_help_and_action_shows_action_help(capsys):
    assert 'create arguments:' in captured.out
def test_parse_arguments_with_action_before_global_options_parses_options():
flexmock(module.collect).should_receive('get_default_config_paths').and_return(['default'])
arguments = module.parse_arguments('prune', '--verbosity', '2')
assert 'prune' in arguments
assert arguments['global'].verbosity == 2
def test_parse_arguments_with_global_options_before_action_parses_options():
flexmock(module.collect).should_receive('get_default_config_paths').and_return(['default'])
arguments = module.parse_arguments('--verbosity', '2', 'prune')
assert 'prune' in arguments
assert arguments['global'].verbosity == 2
def test_parse_arguments_with_prune_action_leaves_other_actions_disabled():
    flexmock(module.collect).should_receive('get_default_config_paths').and_return(['default'])
@@ -278,15 +296,6 @@ def test_parse_arguments_disallows_init_and_dry_run():
    )


-def test_parse_arguments_disallows_glob_archives_with_successful():
-    flexmock(module.collect).should_receive('get_default_config_paths').and_return(['default'])
-
-    with pytest.raises(ValueError):
-        module.parse_arguments(
-            '--config', 'myconfig', 'list', '--glob-archives', '*glob*', '--successful'
-        )


def test_parse_arguments_disallows_repository_unless_action_consumes_it():
    flexmock(module.collect).should_receive('get_default_config_paths').and_return(['default'])

View File

@@ -0,0 +1,5 @@
from borgmatic.commands import completion as module
def test_bash_completion_does_not_raise():
assert module.bash_completion()

View File

@@ -1,13 +1,25 @@
from borgmatic.commands import generate_config as module


-def test_parse_arguments_with_no_arguments_uses_defaults():
def test_parse_arguments_with_no_arguments_uses_default_destination():
    parser = module.parse_arguments()

    assert parser.destination_filename == module.DEFAULT_DESTINATION_CONFIG_FILENAME


-def test_parse_arguments_with_filename_argument_overrides_defaults():
def test_parse_arguments_with_destination_argument_overrides_default():
    parser = module.parse_arguments('--destination', 'config.yaml')

    assert parser.destination_filename == 'config.yaml'
def test_parse_arguments_parses_source():
parser = module.parse_arguments('--source', 'source.yaml', '--destination', 'config.yaml')
assert parser.source_filename == 'source.yaml'
def test_parse_arguments_parses_overwrite():
parser = module.parse_arguments('--destination', 'config.yaml', '--overwrite')
assert parser.overwrite

View File

@@ -87,7 +87,7 @@ location:
    assert module._comment_out_optional_configuration(config.strip()) == expected_config.strip()


-def testrender_configuration_converts_configuration_to_yaml_string():
def test_render_configuration_converts_configuration_to_yaml_string():
    yaml_string = module.render_configuration({'foo': 'bar'})

    assert yaml_string == 'foo: bar\n'
@@ -110,6 +110,12 @@ def test_write_configuration_with_already_existing_file_raises():
        module.write_configuration('config.yaml', 'config: yaml')
def test_write_configuration_with_already_existing_file_and_overwrite_does_not_raise():
flexmock(os.path).should_receive('exists').and_return(True)
module.write_configuration('config.yaml', 'config: yaml', overwrite=True)
def test_write_configuration_with_already_existing_directory_does_not_raise():
    flexmock(os.path).should_receive('exists').and_return(False)
    flexmock(os).should_receive('makedirs').and_raise(FileExistsError)
@@ -122,38 +128,44 @@ def test_write_configuration_with_already_existing_directory_does_not_raise():
def test_add_comments_to_configuration_sequence_of_strings_does_not_raise():
    config = module.yaml.comments.CommentedSeq(['foo', 'bar'])
-    schema = {'seq': [{'type': 'str'}]}
    schema = {'type': 'array', 'items': {'type': 'string'}}

    module.add_comments_to_configuration_sequence(config, schema)


def test_add_comments_to_configuration_sequence_of_maps_does_not_raise():
    config = module.yaml.comments.CommentedSeq([module.yaml.comments.CommentedMap([('foo', 'yo')])])
-    schema = {'seq': [{'map': {'foo': {'desc': 'yo'}}}]}
    schema = {
        'type': 'array',
        'items': {'type': 'object', 'properties': {'foo': {'description': 'yo'}}},
    }

    module.add_comments_to_configuration_sequence(config, schema)


def test_add_comments_to_configuration_sequence_of_maps_without_description_does_not_raise():
    config = module.yaml.comments.CommentedSeq([module.yaml.comments.CommentedMap([('foo', 'yo')])])
-    schema = {'seq': [{'map': {'foo': {}}}]}
    schema = {'type': 'array', 'items': {'type': 'object', 'properties': {'foo': {}}}}

    module.add_comments_to_configuration_sequence(config, schema)


-def test_add_comments_to_configuration_map_does_not_raise():
def test_add_comments_to_configuration_object_does_not_raise():
    # Ensure that it can deal with fields both in the schema and missing from the schema.
    config = module.yaml.comments.CommentedMap([('foo', 33), ('bar', 44), ('baz', 55)])
-    schema = {'map': {'foo': {'desc': 'Foo'}, 'bar': {'desc': 'Bar'}}}
    schema = {
        'type': 'object',
        'properties': {'foo': {'description': 'Foo'}, 'bar': {'description': 'Bar'}},
    }

-    module.add_comments_to_configuration_map(config, schema)
    module.add_comments_to_configuration_object(config, schema)


-def test_add_comments_to_configuration_map_with_skip_first_does_not_raise():
def test_add_comments_to_configuration_object_with_skip_first_does_not_raise():
    config = module.yaml.comments.CommentedMap([('foo', 33)])
-    schema = {'map': {'foo': {'desc': 'Foo'}}}
    schema = {'type': 'object', 'properties': {'foo': {'description': 'Foo'}}}

-    module.add_comments_to_configuration_map(config, schema, skip_first=True)
    module.add_comments_to_configuration_object(config, schema, skip_first=True)


def test_remove_commented_out_sentinel_keeps_other_comments():
def test_remove_commented_out_sentinel_keeps_other_comments(): def test_remove_commented_out_sentinel_keeps_other_comments():
@@ -206,6 +218,7 @@ def test_generate_sample_configuration_with_source_filename_does_not_raise():
    builtins.should_receive('open').with_args('schema.yaml').and_return('')
    flexmock(module.yaml).should_receive('round_trip_load')
    flexmock(module.load).should_receive('load_configuration')
    flexmock(module.normalize).should_receive('normalize')
    flexmock(module).should_receive('_schema_to_sample_configuration')
    flexmock(module).should_receive('merge_source_configuration_into_destination')
    flexmock(module).should_receive('render_configuration')

View File

@@ -1,3 +1,4 @@
import io
import sys

import pytest
@@ -14,49 +15,360 @@ def test_load_configuration_parses_contents():
    assert module.load_configuration('config.yaml') == {'key': 'value'}


-def test_load_configuration_inlines_include():
def test_load_configuration_inlines_include_relative_to_current_directory():
    builtins = flexmock(sys.modules['builtins'])
-    builtins.should_receive('open').with_args('include.yaml').and_return('value')
-    builtins.should_receive('open').with_args('config.yaml').and_return(
-        'key: !include include.yaml'
-    )
    flexmock(module.os).should_receive('getcwd').and_return('/tmp')
    flexmock(module.os.path).should_receive('isabs').and_return(False)
    flexmock(module.os.path).should_receive('exists').and_return(True)
    include_file = io.StringIO('value')
    include_file.name = 'include.yaml'
    builtins.should_receive('open').with_args('/tmp/include.yaml').and_return(include_file)
    config_file = io.StringIO('key: !include include.yaml')
    config_file.name = 'config.yaml'
    builtins.should_receive('open').with_args('config.yaml').and_return(config_file)

    assert module.load_configuration('config.yaml') == {'key': 'value'}
def test_load_configuration_inlines_include_relative_to_config_parent_directory():
builtins = flexmock(sys.modules['builtins'])
flexmock(module.os).should_receive('getcwd').and_return('/tmp')
flexmock(module.os.path).should_receive('isabs').with_args('/etc').and_return(True)
flexmock(module.os.path).should_receive('isabs').with_args('/etc/config.yaml').and_return(True)
flexmock(module.os.path).should_receive('isabs').with_args('include.yaml').and_return(False)
flexmock(module.os.path).should_receive('exists').with_args('/tmp/include.yaml').and_return(
False
)
flexmock(module.os.path).should_receive('exists').with_args('/etc/include.yaml').and_return(
True
)
include_file = io.StringIO('value')
include_file.name = 'include.yaml'
builtins.should_receive('open').with_args('/etc/include.yaml').and_return(include_file)
config_file = io.StringIO('key: !include include.yaml')
config_file.name = '/etc/config.yaml'
builtins.should_receive('open').with_args('/etc/config.yaml').and_return(config_file)
assert module.load_configuration('/etc/config.yaml') == {'key': 'value'}
def test_load_configuration_raises_if_relative_include_does_not_exist():
builtins = flexmock(sys.modules['builtins'])
flexmock(module.os).should_receive('getcwd').and_return('/tmp')
flexmock(module.os.path).should_receive('isabs').with_args('/etc').and_return(True)
flexmock(module.os.path).should_receive('isabs').with_args('/etc/config.yaml').and_return(True)
flexmock(module.os.path).should_receive('isabs').with_args('include.yaml').and_return(False)
flexmock(module.os.path).should_receive('exists').and_return(False)
config_file = io.StringIO('key: !include include.yaml')
config_file.name = '/etc/config.yaml'
builtins.should_receive('open').with_args('/etc/config.yaml').and_return(config_file)
with pytest.raises(FileNotFoundError):
module.load_configuration('/etc/config.yaml')
def test_load_configuration_inlines_absolute_include():
builtins = flexmock(sys.modules['builtins'])
flexmock(module.os).should_receive('getcwd').and_return('/tmp')
flexmock(module.os.path).should_receive('isabs').and_return(True)
flexmock(module.os.path).should_receive('exists').never()
include_file = io.StringIO('value')
include_file.name = '/root/include.yaml'
builtins.should_receive('open').with_args('/root/include.yaml').and_return(include_file)
config_file = io.StringIO('key: !include /root/include.yaml')
config_file.name = 'config.yaml'
builtins.should_receive('open').with_args('config.yaml').and_return(config_file)
assert module.load_configuration('config.yaml') == {'key': 'value'}
def test_load_configuration_raises_if_absolute_include_does_not_exist():
builtins = flexmock(sys.modules['builtins'])
flexmock(module.os).should_receive('getcwd').and_return('/tmp')
flexmock(module.os.path).should_receive('isabs').and_return(True)
builtins.should_receive('open').with_args('/root/include.yaml').and_raise(FileNotFoundError)
config_file = io.StringIO('key: !include /root/include.yaml')
config_file.name = 'config.yaml'
builtins.should_receive('open').with_args('config.yaml').and_return(config_file)
with pytest.raises(FileNotFoundError):
assert module.load_configuration('config.yaml')
def test_load_configuration_merges_include():
    builtins = flexmock(sys.modules['builtins'])
-    builtins.should_receive('open').with_args('include.yaml').and_return(
    flexmock(module.os).should_receive('getcwd').and_return('/tmp')
    flexmock(module.os.path).should_receive('isabs').and_return(False)
    flexmock(module.os.path).should_receive('exists').and_return(True)
    include_file = io.StringIO(
        '''
        foo: bar
        baz: quux
        '''
    )
-    builtins.should_receive('open').with_args('config.yaml').and_return(
    include_file.name = 'include.yaml'
    builtins.should_receive('open').with_args('/tmp/include.yaml').and_return(include_file)
    config_file = io.StringIO(
        '''
        foo: override
        <<: !include include.yaml
        '''
    )
    config_file.name = 'config.yaml'
    builtins.should_receive('open').with_args('config.yaml').and_return(config_file)

    assert module.load_configuration('config.yaml') == {'foo': 'override', 'baz': 'quux'}


def test_load_configuration_does_not_merge_include_list():
    builtins = flexmock(sys.modules['builtins'])
-    builtins.should_receive('open').with_args('include.yaml').and_return(
    flexmock(module.os).should_receive('getcwd').and_return('/tmp')
    flexmock(module.os.path).should_receive('isabs').and_return(False)
    flexmock(module.os.path).should_receive('exists').and_return(True)
    include_file = io.StringIO(
        '''
        - one
        - two
        '''
    )
-    builtins.should_receive('open').with_args('config.yaml').and_return(
    include_file.name = 'include.yaml'
    builtins.should_receive('open').with_args('/tmp/include.yaml').and_return(include_file)
    config_file = io.StringIO(
        '''
        foo: bar
        repositories:
            <<: !include include.yaml
        '''
    )
    config_file.name = 'config.yaml'
    builtins.should_receive('open').with_args('config.yaml').and_return(config_file)

    with pytest.raises(ruamel.yaml.error.YAMLError):
        assert module.load_configuration('config.yaml')
def test_deep_merge_nodes_replaces_colliding_scalar_values():
node_values = [
(
ruamel.yaml.nodes.ScalarNode(tag='tag:yaml.org,2002:str', value='retention'),
ruamel.yaml.nodes.MappingNode(
tag='tag:yaml.org,2002:map',
value=[
(
ruamel.yaml.nodes.ScalarNode(
tag='tag:yaml.org,2002:str', value='keep_hourly'
),
ruamel.yaml.nodes.ScalarNode(tag='tag:yaml.org,2002:int', value='24'),
),
(
ruamel.yaml.nodes.ScalarNode(
tag='tag:yaml.org,2002:str', value='keep_daily'
),
ruamel.yaml.nodes.ScalarNode(tag='tag:yaml.org,2002:int', value='7'),
),
],
),
),
(
ruamel.yaml.nodes.ScalarNode(tag='tag:yaml.org,2002:str', value='retention'),
ruamel.yaml.nodes.MappingNode(
tag='tag:yaml.org,2002:map',
value=[
(
ruamel.yaml.nodes.ScalarNode(
tag='tag:yaml.org,2002:str', value='keep_daily'
),
ruamel.yaml.nodes.ScalarNode(tag='tag:yaml.org,2002:int', value='5'),
),
],
),
),
]
result = module.deep_merge_nodes(node_values)
assert len(result) == 1
(section_key, section_value) = result[0]
assert section_key.value == 'retention'
options = section_value.value
assert len(options) == 2
assert options[0][0].value == 'keep_hourly'
assert options[0][1].value == '24'
assert options[1][0].value == 'keep_daily'
assert options[1][1].value == '5'
def test_deep_merge_nodes_keeps_non_colliding_scalar_values():
node_values = [
(
ruamel.yaml.nodes.ScalarNode(tag='tag:yaml.org,2002:str', value='retention'),
ruamel.yaml.nodes.MappingNode(
tag='tag:yaml.org,2002:map',
value=[
(
ruamel.yaml.nodes.ScalarNode(
tag='tag:yaml.org,2002:str', value='keep_hourly'
),
ruamel.yaml.nodes.ScalarNode(tag='tag:yaml.org,2002:int', value='24'),
),
(
ruamel.yaml.nodes.ScalarNode(
tag='tag:yaml.org,2002:str', value='keep_daily'
),
ruamel.yaml.nodes.ScalarNode(tag='tag:yaml.org,2002:int', value='7'),
),
],
),
),
(
ruamel.yaml.nodes.ScalarNode(tag='tag:yaml.org,2002:str', value='retention'),
ruamel.yaml.nodes.MappingNode(
tag='tag:yaml.org,2002:map',
value=[
(
ruamel.yaml.nodes.ScalarNode(
tag='tag:yaml.org,2002:str', value='keep_minutely'
),
ruamel.yaml.nodes.ScalarNode(tag='tag:yaml.org,2002:int', value='10'),
),
],
),
),
]
result = module.deep_merge_nodes(node_values)
assert len(result) == 1
(section_key, section_value) = result[0]
assert section_key.value == 'retention'
options = section_value.value
assert len(options) == 3
assert options[0][0].value == 'keep_hourly'
assert options[0][1].value == '24'
assert options[1][0].value == 'keep_daily'
assert options[1][1].value == '7'
assert options[2][0].value == 'keep_minutely'
assert options[2][1].value == '10'
def test_deep_merge_nodes_keeps_deeply_nested_values():
node_values = [
(
ruamel.yaml.nodes.ScalarNode(tag='tag:yaml.org,2002:str', value='storage'),
ruamel.yaml.nodes.MappingNode(
tag='tag:yaml.org,2002:map',
value=[
(
ruamel.yaml.nodes.ScalarNode(
tag='tag:yaml.org,2002:str', value='lock_wait'
),
ruamel.yaml.nodes.ScalarNode(tag='tag:yaml.org,2002:int', value='5'),
),
(
ruamel.yaml.nodes.ScalarNode(
tag='tag:yaml.org,2002:str', value='extra_borg_options'
),
ruamel.yaml.nodes.MappingNode(
tag='tag:yaml.org,2002:map',
value=[
(
ruamel.yaml.nodes.ScalarNode(
tag='tag:yaml.org,2002:str', value='init'
),
ruamel.yaml.nodes.ScalarNode(
tag='tag:yaml.org,2002:str', value='--init-option'
),
),
],
),
),
],
),
),
(
ruamel.yaml.nodes.ScalarNode(tag='tag:yaml.org,2002:str', value='storage'),
ruamel.yaml.nodes.MappingNode(
tag='tag:yaml.org,2002:map',
value=[
(
ruamel.yaml.nodes.ScalarNode(
tag='tag:yaml.org,2002:str', value='extra_borg_options'
),
ruamel.yaml.nodes.MappingNode(
tag='tag:yaml.org,2002:map',
value=[
(
ruamel.yaml.nodes.ScalarNode(
tag='tag:yaml.org,2002:str', value='prune'
),
ruamel.yaml.nodes.ScalarNode(
tag='tag:yaml.org,2002:str', value='--prune-option'
),
),
],
),
),
],
),
),
]
result = module.deep_merge_nodes(node_values)
assert len(result) == 1
(section_key, section_value) = result[0]
assert section_key.value == 'storage'
options = section_value.value
assert len(options) == 2
assert options[0][0].value == 'lock_wait'
assert options[0][1].value == '5'
assert options[1][0].value == 'extra_borg_options'
nested_options = options[1][1].value
assert len(nested_options) == 2
assert nested_options[0][0].value == 'init'
assert nested_options[0][1].value == '--init-option'
assert nested_options[1][0].value == 'prune'
assert nested_options[1][1].value == '--prune-option'
def test_deep_merge_nodes_appends_colliding_sequence_values():
node_values = [
(
ruamel.yaml.nodes.ScalarNode(tag='tag:yaml.org,2002:str', value='hooks'),
ruamel.yaml.nodes.MappingNode(
tag='tag:yaml.org,2002:map',
value=[
(
ruamel.yaml.nodes.ScalarNode(
tag='tag:yaml.org,2002:str', value='before_backup'
),
ruamel.yaml.nodes.SequenceNode(
tag='tag:yaml.org,2002:int', value=['echo 1', 'echo 2']
),
),
],
),
),
(
ruamel.yaml.nodes.ScalarNode(tag='tag:yaml.org,2002:str', value='hooks'),
ruamel.yaml.nodes.MappingNode(
tag='tag:yaml.org,2002:map',
value=[
(
ruamel.yaml.nodes.ScalarNode(
tag='tag:yaml.org,2002:str', value='before_backup'
),
ruamel.yaml.nodes.SequenceNode(
tag='tag:yaml.org,2002:int', value=['echo 3', 'echo 4']
),
),
],
),
),
]
result = module.deep_merge_nodes(node_values)
assert len(result) == 1
(section_key, section_value) = result[0]
assert section_key.value == 'hooks'
options = section_value.value
assert len(options) == 1
assert options[0][0].value == 'before_backup'
assert options[0][1].value == ['echo 1', 'echo 2', 'echo 3', 'echo 4']

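The deep_merge_nodes tests above assert three behaviors: a later scalar replaces an earlier colliding one, non-colliding options accumulate, and colliding sequences append. A pure-Python analogue of those semantics, for orientation only (the real function operates on ruamel.yaml nodes, not plain values):

def deep_merge(first, second):
    # Mappings merge recursively; later keys win on scalar collisions.
    if isinstance(first, dict) and isinstance(second, dict):
        merged = dict(first)
        for key, value in second.items():
            merged[key] = deep_merge(first[key], value) if key in first else value
        return merged
    # Colliding sequences append rather than replace.
    if isinstance(first, list) and isinstance(second, list):
        return first + second
    # Anything else: the later value replaces the earlier one.
    return second


assert deep_merge({'keep_hourly': 24, 'keep_daily': 7}, {'keep_daily': 5}) == {
    'keep_hourly': 24,
    'keep_daily': 5,
}
assert deep_merge({'before_backup': ['echo 1']}, {'before_backup': ['echo 2']}) == {
    'before_backup': ['echo 1', 'echo 2']
}
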
View File

@@ -21,14 +21,20 @@ def mock_config_and_schema(config_yaml, schema_yaml=None):
    when parsing the configuration.
    '''
    config_stream = io.StringIO(config_yaml)
    config_stream.name = 'config.yaml'

    if schema_yaml is None:
        schema_stream = open(module.schema_filename())
    else:
        schema_stream = io.StringIO(schema_yaml)
    schema_stream.name = 'schema.yaml'

    builtins = flexmock(sys.modules['builtins'])
-    builtins.should_receive('open').with_args('config.yaml').and_return(config_stream)
-    builtins.should_receive('open').with_args('schema.yaml').and_return(schema_stream)
    flexmock(module.os).should_receive('getcwd').and_return('/tmp')
    flexmock(module.os.path).should_receive('isabs').and_return(False)
    flexmock(module.os.path).should_receive('exists').and_return(True)
    builtins.should_receive('open').with_args('/tmp/config.yaml').and_return(config_stream)
    builtins.should_receive('open').with_args('/tmp/schema.yaml').and_return(schema_stream)


def test_parse_configuration_transforms_file_into_mapping():
@@ -49,17 +55,17 @@ def test_parse_configuration_transforms_file_into_mapping():
        consistency:
            checks:
-                - repository
-                - archives
                - name: repository
                - name: archives
        '''
    )

-    result = module.parse_configuration('config.yaml', 'schema.yaml')
    result = module.parse_configuration('/tmp/config.yaml', '/tmp/schema.yaml')

    assert result == {
        'location': {'source_directories': ['/home', '/etc'], 'repositories': ['hostname.borg']},
        'retention': {'keep_daily': 7, 'keep_hourly': 24, 'keep_minutely': 60},
-        'consistency': {'checks': ['repository', 'archives']},
        'consistency': {'checks': [{'name': 'repository'}, {'name': 'archives'}]},
    }
@@ -79,7 +85,7 @@ def test_parse_configuration_passes_through_quoted_punctuation():
        )
    )

-    result = module.parse_configuration('config.yaml', 'schema.yaml')
    result = module.parse_configuration('/tmp/config.yaml', '/tmp/schema.yaml')

    assert result == {
        'location': {
@@ -115,7 +121,7 @@ def test_parse_configuration_with_schema_lacking_examples_does_not_raise():
        ''',
    )

-    module.parse_configuration('config.yaml', 'schema.yaml')
    module.parse_configuration('/tmp/config.yaml', '/tmp/schema.yaml')


def test_parse_configuration_inlines_include():
@@ -133,14 +139,16 @@ def test_parse_configuration_inlines_include():
        '''
    )
    builtins = flexmock(sys.modules['builtins'])
-    builtins.should_receive('open').with_args('include.yaml').and_return(
    include_file = io.StringIO(
        '''
        keep_daily: 7
        keep_hourly: 24
        '''
    )
    include_file.name = 'include.yaml'
    builtins.should_receive('open').with_args('/tmp/include.yaml').and_return(include_file)

-    result = module.parse_configuration('config.yaml', 'schema.yaml')
    result = module.parse_configuration('/tmp/config.yaml', '/tmp/schema.yaml')

    assert result == {
        'location': {'source_directories': ['/home'], 'repositories': ['hostname.borg']},
@@ -164,14 +172,16 @@ def test_parse_configuration_merges_include():
        '''
    )
    builtins = flexmock(sys.modules['builtins'])
-    builtins.should_receive('open').with_args('include.yaml').and_return(
    include_file = io.StringIO(
        '''
        keep_daily: 7
        keep_hourly: 24
        '''
    )
    include_file.name = 'include.yaml'
    builtins.should_receive('open').with_args('/tmp/include.yaml').and_return(include_file)

-    result = module.parse_configuration('config.yaml', 'schema.yaml')
    result = module.parse_configuration('/tmp/config.yaml', '/tmp/schema.yaml')

    assert result == {
        'location': {'source_directories': ['/home'], 'repositories': ['hostname.borg']},
@@ -181,23 +191,23 @@ def test_parse_configuration_merges_include():
def test_parse_configuration_raises_for_missing_config_file():
    with pytest.raises(FileNotFoundError):
-        module.parse_configuration('config.yaml', 'schema.yaml')
        module.parse_configuration('/tmp/config.yaml', '/tmp/schema.yaml')


def test_parse_configuration_raises_for_missing_schema_file():
    mock_config_and_schema('')
    builtins = flexmock(sys.modules['builtins'])
-    builtins.should_receive('open').with_args('schema.yaml').and_raise(FileNotFoundError)
    builtins.should_receive('open').with_args('/tmp/schema.yaml').and_raise(FileNotFoundError)

    with pytest.raises(FileNotFoundError):
-        module.parse_configuration('config.yaml', 'schema.yaml')
        module.parse_configuration('/tmp/config.yaml', '/tmp/schema.yaml')


def test_parse_configuration_raises_for_syntax_error():
    mock_config_and_schema('foo:\nbar')

    with pytest.raises(ValueError):
-        module.parse_configuration('config.yaml', 'schema.yaml')
        module.parse_configuration('/tmp/config.yaml', '/tmp/schema.yaml')


def test_parse_configuration_raises_for_validation_error():
@@ -211,7 +221,7 @@ def test_parse_configuration_raises_for_validation_error():
    )

    with pytest.raises(module.Validation_error):
-        module.parse_configuration('config.yaml', 'schema.yaml')
        module.parse_configuration('/tmp/config.yaml', '/tmp/schema.yaml')


def test_parse_configuration_applies_overrides():
@@ -229,7 +239,7 @@ def test_parse_configuration_applies_overrides():
    )

    result = module.parse_configuration(
-        'config.yaml', 'schema.yaml', overrides=['location.local_path=borg2']
        '/tmp/config.yaml', '/tmp/schema.yaml', overrides=['location.local_path=borg2']
    )

    assert result == {
@@ -255,7 +265,7 @@ def test_parse_configuration_applies_normalization():
        '''
    )

-    result = module.parse_configuration('config.yaml', 'schema.yaml')
    result = module.parse_configuration('/tmp/config.yaml', '/tmp/schema.yaml')

    assert result == {
        'location': {

View File

@@ -1,5 +1,6 @@
import logging
import subprocess
import sys

import pytest
from flexmock import flexmock
@@ -88,13 +89,21 @@ def test_log_outputs_skips_error_output_in_exception_for_process_with_none_stdout():
def test_log_outputs_kills_other_processes_when_one_errors():
    flexmock(module.logger).should_receive('log')
-    flexmock(module).should_receive('exit_code_indicates_error').and_return(True)
    flexmock(module).should_receive('command_for_process').and_return('grep')

    process = subprocess.Popen(['grep'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    flexmock(module).should_receive('exit_code_indicates_error').with_args(
        process, None, 'borg'
    ).and_return(False)
    flexmock(module).should_receive('exit_code_indicates_error').with_args(
        process, 2, 'borg'
    ).and_return(True)
    other_process = subprocess.Popen(
-        ['watch', 'true'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT
        ['sleep', '2'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT
    )
    flexmock(module).should_receive('exit_code_indicates_error').with_args(
        other_process, None, 'borg'
    ).and_return(False)
    flexmock(module).should_receive('output_buffer_for_process').with_args(process, ()).and_return(
        process.stdout
    )
@@ -115,13 +124,87 @@ def test_log_outputs_kills_other_processes_when_one_errors():
    assert error.value.output
def test_log_outputs_vents_other_processes_when_one_exits():
'''
Execute a command to generate a longish random string and pipe it into another command that
exits quickly. The test is basically to ensure we don't hang forever waiting for the exited
process to read the pipe, and that the string-generating process eventually gets vented and
exits.
'''
flexmock(module.logger).should_receive('log')
flexmock(module).should_receive('command_for_process').and_return('grep')
process = subprocess.Popen(
[
sys.executable,
'-c',
"import random, string; print(''.join(random.choice(string.ascii_letters) for _ in range(40000)))",
],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
other_process = subprocess.Popen(
['true'], stdin=process.stdout, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
)
flexmock(module).should_receive('output_buffer_for_process').with_args(
process, (process.stdout,)
).and_return(process.stderr)
flexmock(module).should_receive('output_buffer_for_process').with_args(
other_process, (process.stdout,)
).and_return(other_process.stdout)
flexmock(process.stdout).should_call('readline').at_least().once()
module.log_outputs(
(process, other_process),
exclude_stdouts=(process.stdout,),
output_log_level=logging.INFO,
borg_local_path='borg',
)
def test_log_outputs_does_not_error_when_one_process_exits():
flexmock(module.logger).should_receive('log')
flexmock(module).should_receive('command_for_process').and_return('grep')
process = subprocess.Popen(
[
sys.executable,
'-c',
"import random, string; print(''.join(random.choice(string.ascii_letters) for _ in range(40000)))",
],
stdout=None, # Specifically test the case of a process without stdout captured.
stderr=None,
)
other_process = subprocess.Popen(
['true'], stdin=process.stdout, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
)
flexmock(module).should_receive('output_buffer_for_process').with_args(
process, (process.stdout,)
).and_return(process.stderr)
flexmock(module).should_receive('output_buffer_for_process').with_args(
other_process, (process.stdout,)
).and_return(other_process.stdout)
module.log_outputs(
(process, other_process),
exclude_stdouts=(process.stdout,),
output_log_level=logging.INFO,
borg_local_path='borg',
)
def test_log_outputs_truncates_long_error_output():
    flexmock(module).ERROR_OUTPUT_MAX_LINE_COUNT = 0
    flexmock(module.logger).should_receive('log')
-    flexmock(module).should_receive('exit_code_indicates_error').and_return(True)
    flexmock(module).should_receive('command_for_process').and_return('grep')

    process = subprocess.Popen(['grep'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    flexmock(module).should_receive('exit_code_indicates_error').with_args(
        process, None, 'borg'
    ).and_return(False)
    flexmock(module).should_receive('exit_code_indicates_error').with_args(
        process, 2, 'borg'
    ).and_return(True)
    flexmock(module).should_receive('output_buffer_for_process').and_return(process.stdout)

    with pytest.raises(subprocess.CalledProcessError) as error:

View File

@@ -0,0 +1,167 @@
import logging
from flexmock import flexmock
from borgmatic.borg import borg as module
from ..test_verbosity import insert_logging_mock
def test_run_arbitrary_borg_calls_borg_with_parameters():
flexmock(module).should_receive('execute_command').with_args(
('borg', 'break-lock', 'repo'), output_log_level=logging.WARNING, borg_local_path='borg'
)
module.run_arbitrary_borg(
repository='repo', storage_config={}, options=['break-lock'],
)
def test_run_arbitrary_borg_with_log_info_calls_borg_with_info_parameter():
flexmock(module).should_receive('execute_command').with_args(
('borg', 'break-lock', 'repo', '--info'),
output_log_level=logging.WARNING,
borg_local_path='borg',
)
insert_logging_mock(logging.INFO)
module.run_arbitrary_borg(
repository='repo', storage_config={}, options=['break-lock'],
)
def test_run_arbitrary_borg_with_log_debug_calls_borg_with_debug_parameter():
flexmock(module).should_receive('execute_command').with_args(
('borg', 'break-lock', 'repo', '--debug', '--show-rc'),
output_log_level=logging.WARNING,
borg_local_path='borg',
)
insert_logging_mock(logging.DEBUG)
module.run_arbitrary_borg(
repository='repo', storage_config={}, options=['break-lock'],
)
def test_run_arbitrary_borg_with_lock_wait_calls_borg_with_lock_wait_parameters():
storage_config = {'lock_wait': 5}
flexmock(module).should_receive('execute_command').with_args(
('borg', 'break-lock', 'repo', '--lock-wait', '5'),
output_log_level=logging.WARNING,
borg_local_path='borg',
)
module.run_arbitrary_borg(
repository='repo', storage_config=storage_config, options=['break-lock'],
)
def test_run_arbitrary_borg_with_archive_calls_borg_with_archive_parameter():
storage_config = {}
flexmock(module).should_receive('execute_command').with_args(
('borg', 'break-lock', 'repo::archive'),
output_log_level=logging.WARNING,
borg_local_path='borg',
)
module.run_arbitrary_borg(
repository='repo', storage_config=storage_config, options=['break-lock'], archive='archive',
)
def test_run_arbitrary_borg_with_local_path_calls_borg_via_local_path():
flexmock(module).should_receive('execute_command').with_args(
('borg1', 'break-lock', 'repo'), output_log_level=logging.WARNING, borg_local_path='borg1'
)
module.run_arbitrary_borg(
repository='repo', storage_config={}, options=['break-lock'], local_path='borg1',
)
def test_run_arbitrary_borg_with_remote_path_calls_borg_with_remote_path_parameters():
flexmock(module).should_receive('execute_command').with_args(
('borg', 'break-lock', 'repo', '--remote-path', 'borg1'),
output_log_level=logging.WARNING,
borg_local_path='borg',
)
module.run_arbitrary_borg(
repository='repo', storage_config={}, options=['break-lock'], remote_path='borg1',
)
def test_run_arbitrary_borg_passes_borg_specific_parameters_to_borg():
flexmock(module).should_receive('execute_command').with_args(
('borg', 'list', 'repo', '--progress'),
output_log_level=logging.WARNING,
borg_local_path='borg',
)
module.run_arbitrary_borg(
repository='repo', storage_config={}, options=['list', '--progress'],
)
def test_run_arbitrary_borg_omits_dash_dash_in_parameters_passed_to_borg():
flexmock(module).should_receive('execute_command').with_args(
('borg', 'break-lock', 'repo'), output_log_level=logging.WARNING, borg_local_path='borg',
)
module.run_arbitrary_borg(
repository='repo', storage_config={}, options=['--', 'break-lock'],
)
def test_run_arbitrary_borg_without_borg_specific_parameters_does_not_raise():
flexmock(module).should_receive('execute_command').with_args(
('borg',), output_log_level=logging.WARNING, borg_local_path='borg',
)
module.run_arbitrary_borg(
repository='repo', storage_config={}, options=[],
)
def test_run_arbitrary_borg_passes_key_sub_command_to_borg_before_repository():
flexmock(module).should_receive('execute_command').with_args(
('borg', 'key', 'export', 'repo'), output_log_level=logging.WARNING, borg_local_path='borg',
)
module.run_arbitrary_borg(
repository='repo', storage_config={}, options=['key', 'export'],
)
def test_run_arbitrary_borg_passes_debug_sub_command_to_borg_before_repository():
flexmock(module).should_receive('execute_command').with_args(
('borg', 'debug', 'dump-manifest', 'repo', 'path'),
output_log_level=logging.WARNING,
borg_local_path='borg',
)
module.run_arbitrary_borg(
repository='repo', storage_config={}, options=['debug', 'dump-manifest', 'path'],
)
def test_run_arbitrary_borg_with_debug_info_command_does_not_pass_borg_repository():
flexmock(module).should_receive('execute_command').with_args(
('borg', 'debug', 'info'), output_log_level=logging.WARNING, borg_local_path='borg',
)
module.run_arbitrary_borg(
repository='repo', storage_config={}, options=['debug', 'info'],
)
def test_run_arbitrary_borg_with_debug_convert_profile_command_does_not_pass_borg_repository():
flexmock(module).should_receive('execute_command').with_args(
('borg', 'debug', 'convert-profile', 'in', 'out'),
output_log_level=logging.WARNING,
borg_local_path='borg',
)
module.run_arbitrary_borg(
repository='repo', storage_config={}, options=['debug', 'convert-profile', 'in', 'out'],
)

View File

@@ -17,172 +17,336 @@ def insert_execute_command_never():
def test_parse_checks_returns_them_as_tuple():
-    checks = module._parse_checks({'checks': ['foo', 'disabled', 'bar']})
    checks = module.parse_checks({'checks': [{'name': 'foo'}, {'name': 'bar'}]})

    assert checks == ('foo', 'bar')


def test_parse_checks_with_missing_value_returns_defaults():
-    checks = module._parse_checks({})
    checks = module.parse_checks({})

-    assert checks == module.DEFAULT_CHECKS
    assert checks == ('repository', 'archives')


-def test_parse_checks_with_blank_value_returns_defaults():
def test_parse_checks_with_empty_list_returns_defaults():
-    checks = module._parse_checks({'checks': []})
    checks = module.parse_checks({'checks': []})

-    assert checks == module.DEFAULT_CHECKS
    assert checks == ('repository', 'archives')


def test_parse_checks_with_none_value_returns_defaults():
-    checks = module._parse_checks({'checks': None})
    checks = module.parse_checks({'checks': None})

-    assert checks == module.DEFAULT_CHECKS
    assert checks == ('repository', 'archives')


def test_parse_checks_with_disabled_returns_no_checks():
-    checks = module._parse_checks({'checks': ['disabled']})
    checks = module.parse_checks({'checks': [{'name': 'foo'}, {'name': 'disabled'}]})

    assert checks == ()


def test_parse_checks_with_data_check_also_injects_archives():
-    checks = module._parse_checks({'checks': ['data']})
    checks = module.parse_checks({'checks': [{'name': 'data'}]})

    assert checks == ('data', 'archives')


def test_parse_checks_with_data_check_passes_through_archives():
-    checks = module._parse_checks({'checks': ['data', 'archives']})
    checks = module.parse_checks({'checks': [{'name': 'data'}, {'name': 'archives'}]})

    assert checks == ('data', 'archives')


def test_parse_checks_prefers_override_checks_to_configured_checks():
-    checks = module._parse_checks({'checks': ['archives']}, only_checks=['repository', 'extract'])
    checks = module.parse_checks(
        {'checks': [{'name': 'archives'}]}, only_checks=['repository', 'extract']
    )

    assert checks == ('repository', 'extract')


def test_parse_checks_with_override_data_check_also_injects_archives():
-    checks = module._parse_checks({'checks': ['extract']}, only_checks=['data'])
    checks = module.parse_checks({'checks': [{'name': 'extract'}]}, only_checks=['data'])

    assert checks == ('data', 'archives')
@pytest.mark.parametrize(
'frequency,expected_result',
(
(None, None),
('always', None),
('1 hour', module.datetime.timedelta(hours=1)),
('2 hours', module.datetime.timedelta(hours=2)),
('1 day', module.datetime.timedelta(days=1)),
('2 days', module.datetime.timedelta(days=2)),
('1 week', module.datetime.timedelta(weeks=1)),
('2 weeks', module.datetime.timedelta(weeks=2)),
('1 month', module.datetime.timedelta(days=30)),
('2 months', module.datetime.timedelta(days=60)),
('1 year', module.datetime.timedelta(days=365)),
('2 years', module.datetime.timedelta(days=365 * 2)),
),
)
def test_parse_frequency_parses_into_timedeltas(frequency, expected_result):
assert module.parse_frequency(frequency) == expected_result
@pytest.mark.parametrize(
'frequency', ('sometime', 'x days', '3 decades',),
)
def test_parse_frequency_raises_on_parse_error(frequency):
with pytest.raises(ValueError):
module.parse_frequency(frequency)
def test_filter_checks_on_frequency_without_config_uses_default_checks():
flexmock(module).should_receive('parse_frequency').and_return(
module.datetime.timedelta(weeks=4)
)
flexmock(module).should_receive('make_check_time_path')
flexmock(module).should_receive('read_check_time').and_return(None)
assert module.filter_checks_on_frequency(
location_config={},
consistency_config={},
borg_repository_id='repo',
checks=('repository', 'archives'),
force=False,
) == ('repository', 'archives')
def test_filter_checks_on_frequency_retains_unconfigured_check():
assert module.filter_checks_on_frequency(
location_config={},
consistency_config={},
borg_repository_id='repo',
checks=('data',),
force=False,
) == ('data',)
def test_filter_checks_on_frequency_retains_check_without_frequency():
flexmock(module).should_receive('parse_frequency').and_return(None)
assert module.filter_checks_on_frequency(
location_config={},
consistency_config={'checks': [{'name': 'archives'}]},
borg_repository_id='repo',
checks=('archives',),
force=False,
) == ('archives',)
def test_filter_checks_on_frequency_retains_check_with_elapsed_frequency():
flexmock(module).should_receive('parse_frequency').and_return(
module.datetime.timedelta(hours=1)
)
flexmock(module).should_receive('make_check_time_path')
flexmock(module).should_receive('read_check_time').and_return(
module.datetime.datetime(year=module.datetime.MINYEAR, month=1, day=1)
)
assert module.filter_checks_on_frequency(
location_config={},
consistency_config={'checks': [{'name': 'archives', 'frequency': '1 hour'}]},
borg_repository_id='repo',
checks=('archives',),
force=False,
) == ('archives',)
def test_filter_checks_on_frequency_retains_check_with_missing_check_time_file():
flexmock(module).should_receive('parse_frequency').and_return(
module.datetime.timedelta(hours=1)
)
flexmock(module).should_receive('make_check_time_path')
flexmock(module).should_receive('read_check_time').and_return(None)
assert module.filter_checks_on_frequency(
location_config={},
consistency_config={'checks': [{'name': 'archives', 'frequency': '1 hour'}]},
borg_repository_id='repo',
checks=('archives',),
force=False,
) == ('archives',)
def test_filter_checks_on_frequency_skips_check_with_unelapsed_frequency():
flexmock(module).should_receive('parse_frequency').and_return(
module.datetime.timedelta(hours=1)
)
flexmock(module).should_receive('make_check_time_path')
flexmock(module).should_receive('read_check_time').and_return(module.datetime.datetime.now())
assert (
module.filter_checks_on_frequency(
location_config={},
consistency_config={'checks': [{'name': 'archives', 'frequency': '1 hour'}]},
borg_repository_id='repo',
checks=('archives',),
force=False,
)
== ()
)
def test_filter_checks_on_frequency_retains_check_with_unelapsed_frequency_and_force():
assert module.filter_checks_on_frequency(
location_config={},
consistency_config={'checks': [{'name': 'archives', 'frequency': '1 hour'}]},
borg_repository_id='repo',
checks=('archives',),
force=True,
) == ('archives',)
def test_make_check_flags_with_repository_check_returns_flag():
    flags = module.make_check_flags(('repository',))

    assert flags == ('--repository-only',)


def test_make_check_flags_with_archives_check_returns_flag():
    flags = module.make_check_flags(('archives',))

    assert flags == ('--archives-only',)


def test_make_check_flags_with_data_check_returns_flag():
    flags = module.make_check_flags(('data',))

    assert flags == ('--verify-data',)


def test_make_check_flags_with_extract_omits_extract_flag():
    flags = module.make_check_flags(('extract',))

    assert flags == ()


def test_make_check_flags_with_default_checks_and_default_prefix_returns_default_flags():
    flags = module.make_check_flags(('repository', 'archives'), prefix=module.DEFAULT_PREFIX)

    assert flags == ('--prefix', module.DEFAULT_PREFIX)


def test_make_check_flags_with_all_checks_and_default_prefix_returns_default_flags():
    flags = module.make_check_flags(
        ('repository', 'archives', 'extract'), prefix=module.DEFAULT_PREFIX
    )

    assert flags == ('--prefix', module.DEFAULT_PREFIX)


def test_make_check_flags_with_archives_check_and_last_includes_last_flag():
    flags = module.make_check_flags(('archives',), check_last=3)

    assert flags == ('--archives-only', '--last', '3')


def test_make_check_flags_with_repository_check_and_last_omits_last_flag():
    flags = module.make_check_flags(('repository',), check_last=3)

    assert flags == ('--repository-only',)


def test_make_check_flags_with_default_checks_and_last_includes_last_flag():
    flags = module.make_check_flags(('repository', 'archives'), check_last=3)

    assert flags == ('--last', '3')


def test_make_check_flags_with_archives_check_and_prefix_includes_prefix_flag():
    flags = module.make_check_flags(('archives',), prefix='foo-')

    assert flags == ('--archives-only', '--prefix', 'foo-')


def test_make_check_flags_with_archives_check_and_empty_prefix_omits_prefix_flag():
    flags = module.make_check_flags(('archives',), prefix='')

    assert flags == ('--archives-only',)


def test_make_check_flags_with_archives_check_and_none_prefix_omits_prefix_flag():
    flags = module.make_check_flags(('archives',), prefix=None)

    assert flags == ('--archives-only',)


def test_make_check_flags_with_repository_check_and_prefix_omits_prefix_flag():
    flags = module.make_check_flags(('repository',), prefix='foo-')

    assert flags == ('--repository-only',)


def test_make_check_flags_with_default_checks_and_prefix_includes_prefix_flag():
    flags = module.make_check_flags(('repository', 'archives'), prefix='foo-')

    assert flags == ('--prefix', 'foo-')
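
These assertions fully determine the flag mapping: a lone check gets its narrowing flag (or '--verify-data'), the default repository+archives pair gets no narrowing flag at all, and '--last'/'--prefix' apply only when archives are being checked. A sketch that satisfies every assertion above (an illustration, not the shipped code):

def make_check_flags(checks, check_last=None, prefix=None):
    if 'data' in checks:
        only_flags = ('--verify-data',)
    elif 'repository' in checks and 'archives' in checks:
        # Both default checks selected: borg checks everything by default.
        only_flags = ()
    elif 'repository' in checks:
        only_flags = ('--repository-only',)
    elif 'archives' in checks:
        only_flags = ('--archives-only',)
    else:
        # e.g. the 'extract' check, which runs outside of 'borg check'.
        only_flags = ()

    # --last and --prefix only make sense when archives are checked.
    last_flags = ('--last', str(check_last)) if check_last and 'archives' in checks else ()
    prefix_flags = ('--prefix', prefix) if prefix and 'archives' in checks else ()

    return only_flags + last_flags + prefix_flags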
def test_read_check_time_does_not_raise():
flexmock(module.os).should_receive('stat').and_return(flexmock(st_mtime=123))
assert module.read_check_time('/path')
def test_read_check_time_on_missing_file_does_not_raise():
flexmock(module.os).should_receive('stat').and_raise(FileNotFoundError)
assert module.read_check_time('/path') is None
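
Both behaviors fall out of a tiny wrapper around os.stat: the check-time file's mtime records the last run, and a missing file means the check has never run. A sketch consistent with the two tests:

import datetime
import os

def read_check_time(path):
    # Return the last modification time of the check-time file, or None if
    # the check has never run (no file yet).
    try:
        return datetime.datetime.fromtimestamp(os.stat(path).st_mtime)
    except FileNotFoundError:
        return None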
def test_check_archives_with_progress_calls_borg_with_progress_parameter():
    checks = ('repository',)
    consistency_config = {'check_last': None}
    flexmock(module).should_receive('parse_checks')
    flexmock(module).should_receive('filter_checks_on_frequency').and_return(checks)
    flexmock(module.info).should_receive('display_archives_info').and_return(
        '{"repository": {"id": "repo"}}'
    )
    flexmock(module).should_receive('make_check_flags').and_return(())
    flexmock(module).should_receive('execute_command').never()
    flexmock(module).should_receive('execute_command').with_args(
        ('borg', 'check', '--progress', 'repo'), output_file=module.DO_NOT_CAPTURE
    ).once()
    flexmock(module).should_receive('make_check_time_path')
    flexmock(module).should_receive('write_check_time')

    module.check_archives(
        repository='repo',
        location_config={},
        storage_config={},
        consistency_config=consistency_config,
        progress=True,
    )


def test_check_archives_with_repair_calls_borg_with_repair_parameter():
    checks = ('repository',)
    consistency_config = {'check_last': None}
    flexmock(module).should_receive('parse_checks')
    flexmock(module).should_receive('filter_checks_on_frequency').and_return(checks)
    flexmock(module.info).should_receive('display_archives_info').and_return(
        '{"repository": {"id": "repo"}}'
    )
    flexmock(module).should_receive('make_check_flags').and_return(())
    flexmock(module).should_receive('execute_command').never()
    flexmock(module).should_receive('execute_command').with_args(
        ('borg', 'check', '--repair', 'repo'), output_file=module.DO_NOT_CAPTURE
    ).once()
    flexmock(module).should_receive('make_check_time_path')
    flexmock(module).should_receive('write_check_time')

    module.check_archives(
        repository='repo',
        location_config={},
        storage_config={},
        consistency_config=consistency_config,
        repair=True,
    )
@@ -198,64 +362,142 @@ def test_check_archives_with_repair_calls_borg_with_repair_parameter():
def test_check_archives_calls_borg_with_parameters(checks):
    check_last = flexmock()
    consistency_config = {'check_last': check_last}
    flexmock(module).should_receive('parse_checks')
    flexmock(module).should_receive('filter_checks_on_frequency').and_return(checks)
    flexmock(module.info).should_receive('display_archives_info').and_return(
        '{"repository": {"id": "repo"}}'
    )
    flexmock(module).should_receive('make_check_flags').with_args(
        checks, check_last, module.DEFAULT_PREFIX
    ).and_return(())
    insert_execute_command_mock(('borg', 'check', 'repo'))
    flexmock(module).should_receive('make_check_time_path')
    flexmock(module).should_receive('write_check_time')

    module.check_archives(
        repository='repo',
        location_config={},
        storage_config={},
        consistency_config=consistency_config,
    )
def test_check_archives_with_json_error_raises():
checks = ('archives',)
check_last = flexmock()
consistency_config = {'check_last': check_last}
flexmock(module).should_receive('parse_checks')
flexmock(module).should_receive('filter_checks_on_frequency').and_return(checks)
    flexmock(module.info).should_receive('display_archives_info').and_return(
        '{invalid JSON'
    )
with pytest.raises(ValueError):
module.check_archives(
repository='repo',
location_config={},
storage_config={},
consistency_config=consistency_config,
)
def test_check_archives_with_missing_json_keys_raises():
checks = ('archives',)
check_last = flexmock()
consistency_config = {'check_last': check_last}
flexmock(module).should_receive('parse_checks')
flexmock(module).should_receive('filter_checks_on_frequency').and_return(checks)
    flexmock(module.info).should_receive('display_archives_info').and_return(
        '{"unexpected": {"id": "repo"}}'
    )
with pytest.raises(ValueError):
module.check_archives(
repository='repo',
location_config={},
storage_config={},
consistency_config=consistency_config,
)
def test_check_archives_with_extract_check_calls_extract_only():
    checks = ('extract',)
    check_last = flexmock()
    consistency_config = {'check_last': check_last}
    flexmock(module).should_receive('parse_checks')
    flexmock(module).should_receive('filter_checks_on_frequency').and_return(checks)
    flexmock(module.info).should_receive('display_archives_info').and_return(
        '{"repository": {"id": "repo"}}'
    )
    flexmock(module).should_receive('make_check_flags').never()
    flexmock(module.extract).should_receive('extract_last_archive_dry_run').once()
    flexmock(module).should_receive('write_check_time')
    insert_execute_command_never()

    module.check_archives(
        repository='repo',
        location_config={},
        storage_config={},
        consistency_config=consistency_config,
    )


def test_check_archives_with_log_info_calls_borg_with_info_parameter():
    checks = ('repository',)
    consistency_config = {'check_last': None}
    flexmock(module).should_receive('parse_checks')
    flexmock(module).should_receive('filter_checks_on_frequency').and_return(checks)
    flexmock(module.info).should_receive('display_archives_info').and_return(
        '{"repository": {"id": "repo"}}'
    )
    flexmock(module).should_receive('make_check_flags').and_return(())
    insert_logging_mock(logging.INFO)
    insert_execute_command_mock(('borg', 'check', '--info', 'repo'))
    flexmock(module).should_receive('make_check_time_path')
    flexmock(module).should_receive('write_check_time')

    module.check_archives(
        repository='repo',
        location_config={},
        storage_config={},
        consistency_config=consistency_config,
    )


def test_check_archives_with_log_debug_calls_borg_with_debug_parameter():
    checks = ('repository',)
    consistency_config = {'check_last': None}
    flexmock(module).should_receive('parse_checks')
    flexmock(module).should_receive('filter_checks_on_frequency').and_return(checks)
    flexmock(module.info).should_receive('display_archives_info').and_return(
        '{"repository": {"id": "repo"}}'
    )
    flexmock(module).should_receive('make_check_flags').and_return(())
    insert_logging_mock(logging.DEBUG)
    insert_execute_command_mock(('borg', 'check', '--debug', '--show-rc', 'repo'))
    flexmock(module).should_receive('make_check_time_path')
    flexmock(module).should_receive('write_check_time')

    module.check_archives(
        repository='repo',
        location_config={},
        storage_config={},
        consistency_config=consistency_config,
    )


def test_check_archives_without_any_checks_bails():
    consistency_config = {'check_last': None}
    flexmock(module).should_receive('parse_checks')
    flexmock(module).should_receive('filter_checks_on_frequency').and_return(())
    flexmock(module.info).should_receive('display_archives_info').and_return(
        '{"repository": {"id": "repo"}}'
    )
    insert_execute_command_never()

    module.check_archives(
        repository='repo',
        location_config={},
        storage_config={},
        consistency_config=consistency_config,
    )
@@ -263,14 +505,21 @@ def test_check_archives_with_local_path_calls_borg_via_local_path():
    checks = ('repository',)
    check_last = flexmock()
    consistency_config = {'check_last': check_last}
    flexmock(module).should_receive('parse_checks')
    flexmock(module).should_receive('filter_checks_on_frequency').and_return(checks)
    flexmock(module.info).should_receive('display_archives_info').and_return(
        '{"repository": {"id": "repo"}}'
    )
    flexmock(module).should_receive('make_check_flags').with_args(
        checks, check_last, module.DEFAULT_PREFIX
    ).and_return(())
    insert_execute_command_mock(('borg1', 'check', 'repo'))
    flexmock(module).should_receive('make_check_time_path')
    flexmock(module).should_receive('write_check_time')

    module.check_archives(
        repository='repo',
        location_config={},
        storage_config={},
        consistency_config=consistency_config,
        local_path='borg1',
@@ -281,14 +530,21 @@ def test_check_archives_with_remote_path_calls_borg_with_remote_path_parameters(
    checks = ('repository',)
    check_last = flexmock()
    consistency_config = {'check_last': check_last}
    flexmock(module).should_receive('parse_checks')
    flexmock(module).should_receive('filter_checks_on_frequency').and_return(checks)
    flexmock(module.info).should_receive('display_archives_info').and_return(
        '{"repository": {"id": "repo"}}'
    )
    flexmock(module).should_receive('make_check_flags').with_args(
        checks, check_last, module.DEFAULT_PREFIX
    ).and_return(())
    insert_execute_command_mock(('borg', 'check', '--remote-path', 'borg1', 'repo'))
    flexmock(module).should_receive('make_check_time_path')
    flexmock(module).should_receive('write_check_time')

    module.check_archives(
        repository='repo',
        location_config={},
        storage_config={},
        consistency_config=consistency_config,
        remote_path='borg1',
@@ -299,14 +555,23 @@ def test_check_archives_with_lock_wait_calls_borg_with_lock_wait_parameters():
    checks = ('repository',)
    check_last = flexmock()
    consistency_config = {'check_last': check_last}
    flexmock(module).should_receive('parse_checks')
    flexmock(module).should_receive('filter_checks_on_frequency').and_return(checks)
    flexmock(module.info).should_receive('display_archives_info').and_return(
        '{"repository": {"id": "repo"}}'
    )
    flexmock(module).should_receive('make_check_flags').with_args(
        checks, check_last, module.DEFAULT_PREFIX
    ).and_return(())
    insert_execute_command_mock(('borg', 'check', '--lock-wait', '5', 'repo'))
    flexmock(module).should_receive('make_check_time_path')
    flexmock(module).should_receive('write_check_time')

    module.check_archives(
        repository='repo',
        location_config={},
        storage_config={'lock_wait': 5},
        consistency_config=consistency_config,
    )
@@ -315,26 +580,42 @@ def test_check_archives_with_retention_prefix():
    check_last = flexmock()
    prefix = 'foo-'
    consistency_config = {'check_last': check_last, 'prefix': prefix}
    flexmock(module).should_receive('parse_checks')
    flexmock(module).should_receive('filter_checks_on_frequency').and_return(checks)
    flexmock(module.info).should_receive('display_archives_info').and_return(
        '{"repository": {"id": "repo"}}'
    )
    flexmock(module).should_receive('make_check_flags').with_args(
        checks, check_last, prefix
    ).and_return(())
    insert_execute_command_mock(('borg', 'check', 'repo'))
    flexmock(module).should_receive('make_check_time_path')
    flexmock(module).should_receive('write_check_time')

    module.check_archives(
        repository='repo',
        location_config={},
        storage_config={},
        consistency_config=consistency_config,
    )


def test_check_archives_with_extra_borg_options_calls_borg_with_extra_options():
    checks = ('repository',)
    consistency_config = {'check_last': None}
    flexmock(module).should_receive('parse_checks')
    flexmock(module).should_receive('filter_checks_on_frequency').and_return(checks)
    flexmock(module.info).should_receive('display_archives_info').and_return(
        '{"repository": {"id": "repo"}}'
    )
    flexmock(module).should_receive('make_check_flags').and_return(())
    insert_execute_command_mock(('borg', 'check', '--extra', '--options', 'repo'))
    flexmock(module).should_receive('make_check_time_path')
    flexmock(module).should_receive('write_check_time')

    module.check_archives(
        repository='repo',
        location_config={},
        storage_config={'extra_borg_options': {'check': '--extra --options'}},
        consistency_config=consistency_config,
    )
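
Across all of these tests, check_archives() now consults 'borg info --json' (via the mocked display_archives_info) before doing anything else, so that the repository ID can key the per-repository check times, and it raises ValueError when that output is unusable. A sketch of just that ID extraction, with a hypothetical helper name:

import json

def extract_repository_id(info_json):  # hypothetical helper, for illustration only
    try:
        return json.loads(info_json)['repository']['id']
    except (json.JSONDecodeError, KeyError) as error:
        # Covers both malformed JSON and JSON with missing keys.
        raise ValueError(f'Cannot determine Borg repository ID: {error}')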


@@ -0,0 +1,110 @@
import logging
from flexmock import flexmock
from borgmatic.borg import compact as module
from ..test_verbosity import insert_logging_mock
def insert_execute_command_mock(compact_command, output_log_level):
flexmock(module).should_receive('execute_command').with_args(
compact_command, output_log_level=output_log_level, borg_local_path=compact_command[0]
).once()
COMPACT_COMMAND = ('borg', 'compact')
def test_compact_segments_calls_borg_with_parameters():
insert_execute_command_mock(COMPACT_COMMAND + ('repo',), logging.INFO)
module.compact_segments(dry_run=False, repository='repo', storage_config={})
def test_compact_segments_with_log_info_calls_borg_with_info_parameter():
insert_execute_command_mock(COMPACT_COMMAND + ('--info', 'repo'), logging.INFO)
insert_logging_mock(logging.INFO)
module.compact_segments(repository='repo', storage_config={}, dry_run=False)
def test_compact_segments_with_log_debug_calls_borg_with_debug_parameter():
insert_execute_command_mock(COMPACT_COMMAND + ('--debug', '--show-rc', 'repo'), logging.INFO)
insert_logging_mock(logging.DEBUG)
module.compact_segments(repository='repo', storage_config={}, dry_run=False)
def test_compact_segments_with_dry_run_skips_borg_call():
flexmock(module).should_receive('execute_command').never()
module.compact_segments(repository='repo', storage_config={}, dry_run=True)
def test_compact_segments_with_local_path_calls_borg_via_local_path():
insert_execute_command_mock(('borg1',) + COMPACT_COMMAND[1:] + ('repo',), logging.INFO)
module.compact_segments(
dry_run=False, repository='repo', storage_config={}, local_path='borg1',
)
def test_compact_segments_with_remote_path_calls_borg_with_remote_path_parameters():
insert_execute_command_mock(COMPACT_COMMAND + ('--remote-path', 'borg1', 'repo'), logging.INFO)
module.compact_segments(
dry_run=False, repository='repo', storage_config={}, remote_path='borg1',
)
def test_compact_segments_with_progress_calls_borg_with_progress_parameter():
insert_execute_command_mock(COMPACT_COMMAND + ('--progress', 'repo'), logging.INFO)
module.compact_segments(
dry_run=False, repository='repo', storage_config={}, progress=True,
)
def test_compact_segments_with_cleanup_commits_calls_borg_with_cleanup_commits_parameter():
insert_execute_command_mock(COMPACT_COMMAND + ('--cleanup-commits', 'repo'), logging.INFO)
module.compact_segments(
dry_run=False, repository='repo', storage_config={}, cleanup_commits=True,
)
def test_compact_segments_with_threshold_calls_borg_with_threshold_parameter():
insert_execute_command_mock(COMPACT_COMMAND + ('--threshold', '20', 'repo'), logging.INFO)
module.compact_segments(
dry_run=False, repository='repo', storage_config={}, threshold=20,
)
def test_compact_segments_with_umask_calls_borg_with_umask_parameters():
storage_config = {'umask': '077'}
insert_execute_command_mock(COMPACT_COMMAND + ('--umask', '077', 'repo'), logging.INFO)
module.compact_segments(
dry_run=False, repository='repo', storage_config=storage_config,
)
def test_compact_segments_with_lock_wait_calls_borg_with_lock_wait_parameters():
storage_config = {'lock_wait': 5}
insert_execute_command_mock(COMPACT_COMMAND + ('--lock-wait', '5', 'repo'), logging.INFO)
module.compact_segments(
dry_run=False, repository='repo', storage_config=storage_config,
)
def test_compact_segments_with_extra_borg_options_calls_borg_with_extra_options():
insert_execute_command_mock(COMPACT_COMMAND + ('--extra', '--options', 'repo'), logging.INFO)
module.compact_segments(
dry_run=False,
repository='repo',
storage_config={'extra_borg_options': {'compact': '--extra --options'}},
)
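
Every test in this new file exercises the same command-assembly pattern: start from ('borg', 'compact'), append flags driven by the arguments and storage_config, end with the repository, and skip execution entirely on a dry run. A sketch consistent with these tests; the relative flag order is only pinned down per-flag by the tests, and the logger/executor plumbing is assumed:

import logging

from borgmatic.execute import execute_command

logger = logging.getLogger(__name__)

def compact_segments(
    dry_run,
    repository,
    storage_config,
    local_path='borg',
    remote_path=None,
    progress=False,
    cleanup_commits=False,
    threshold=None,
):
    umask = storage_config.get('umask', None)
    lock_wait = storage_config.get('lock_wait', None)
    extra_borg_options = storage_config.get('extra_borg_options', {}).get('compact', '')

    full_command = (
        (local_path, 'compact')
        + (('--remote-path', remote_path) if remote_path else ())
        + (('--umask', str(umask)) if umask else ())
        + (('--lock-wait', str(lock_wait)) if lock_wait else ())
        + (('--progress',) if progress else ())
        + (('--cleanup-commits',) if cleanup_commits else ())
        + (('--threshold', str(threshold)) if threshold else ())
        + (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
        + (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
        + (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
        + (repository,)
    )

    # A dry run makes no "borg compact" call at all.
    if not dry_run:
        execute_command(full_command, output_log_level=logging.INFO, borg_local_path=local_path)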

File diff suppressed because it is too large


@@ -25,12 +25,14 @@ def test_extract_last_archive_dry_run_calls_borg_with_last_archive():
        ('borg', 'list', '--short', 'repo'), result='archive1\narchive2\n'
    )
    insert_execute_command_mock(('borg', 'extract', '--dry-run', 'repo::archive2'))
    flexmock(module.feature).should_receive('available').and_return(True)

    module.extract_last_archive_dry_run(repository='repo', lock_wait=None)


def test_extract_last_archive_dry_run_without_any_archives_should_not_raise():
    insert_execute_command_output_mock(('borg', 'list', '--short', 'repo'), result='\n')
    flexmock(module.feature).should_receive('available').and_return(True)

    module.extract_last_archive_dry_run(repository='repo', lock_wait=None)

@@ -41,6 +43,7 @@ def test_extract_last_archive_dry_run_with_log_info_calls_borg_with_info_paramet
    )
    insert_execute_command_mock(('borg', 'extract', '--dry-run', '--info', 'repo::archive2'))
    insert_logging_mock(logging.INFO)
    flexmock(module.feature).should_receive('available').and_return(True)

    module.extract_last_archive_dry_run(repository='repo', lock_wait=None)

@@ -53,6 +56,7 @@ def test_extract_last_archive_dry_run_with_log_debug_calls_borg_with_debug_param
        ('borg', 'extract', '--dry-run', '--debug', '--show-rc', '--list', 'repo::archive2')
    )
    insert_logging_mock(logging.DEBUG)
    flexmock(module.feature).should_receive('available').and_return(True)

    module.extract_last_archive_dry_run(repository='repo', lock_wait=None)

@@ -62,6 +66,7 @@ def test_extract_last_archive_dry_run_calls_borg_via_local_path():
        ('borg1', 'list', '--short', 'repo'), result='archive1\narchive2\n'
    )
    insert_execute_command_mock(('borg1', 'extract', '--dry-run', 'repo::archive2'))
    flexmock(module.feature).should_receive('available').and_return(True)

    module.extract_last_archive_dry_run(repository='repo', lock_wait=None, local_path='borg1')

@@ -73,6 +78,7 @@ def test_extract_last_archive_dry_run_calls_borg_with_remote_path_parameters():
    insert_execute_command_mock(
        ('borg', 'extract', '--dry-run', '--remote-path', 'borg1', 'repo::archive2')
    )
    flexmock(module.feature).should_receive('available').and_return(True)

    module.extract_last_archive_dry_run(repository='repo', lock_wait=None, remote_path='borg1')

@@ -84,6 +90,7 @@ def test_extract_last_archive_dry_run_calls_borg_with_lock_wait_parameters():
    insert_execute_command_mock(
        ('borg', 'extract', '--dry-run', '--lock-wait', '5', 'repo::archive2')
    )
    flexmock(module.feature).should_receive('available').and_return(True)

    module.extract_last_archive_dry_run(repository='repo', lock_wait=5)

@@ -91,6 +98,7 @@ def test_extract_last_archive_dry_run_calls_borg_with_lock_wait_parameters():
def test_extract_archive_calls_borg_with_path_parameters():
    flexmock(module.os.path).should_receive('abspath').and_return('repo')
    insert_execute_command_mock(('borg', 'extract', 'repo::archive', 'path1', 'path2'))
    flexmock(module.feature).should_receive('available').and_return(True)

    module.extract_archive(
        dry_run=False,
@@ -99,12 +107,14 @@ def test_extract_archive_calls_borg_with_path_parameters():
        paths=['path1', 'path2'],
        location_config={},
        storage_config={},
        local_borg_version='1.2.3',
    )


def test_extract_archive_calls_borg_with_remote_path_parameters():
    flexmock(module.os.path).should_receive('abspath').and_return('repo')
    insert_execute_command_mock(('borg', 'extract', '--remote-path', 'borg1', 'repo::archive'))
    flexmock(module.feature).should_receive('available').and_return(True)

    module.extract_archive(
        dry_run=False,
@@ -113,13 +123,18 @@ def test_extract_archive_calls_borg_with_remote_path_parameters():
        paths=None,
        location_config={},
        storage_config={},
        local_borg_version='1.2.3',
        remote_path='borg1',
    )


@pytest.mark.parametrize(
    'feature_available,option_flag', ((True, '--numeric-ids'), (False, '--numeric-owner'),),
)
def test_extract_archive_calls_borg_with_numeric_ids_parameter(feature_available, option_flag):
    flexmock(module.os.path).should_receive('abspath').and_return('repo')
    insert_execute_command_mock(('borg', 'extract', option_flag, 'repo::archive'))
    flexmock(module.feature).should_receive('available').and_return(feature_available)

    module.extract_archive(
        dry_run=False,
@@ -128,12 +143,14 @@ def test_extract_archive_calls_borg_with_numeric_owner_parameter():
        paths=None,
        location_config={'numeric_owner': True},
        storage_config={},
        local_borg_version='1.2.3',
    )


def test_extract_archive_calls_borg_with_umask_parameters():
    flexmock(module.os.path).should_receive('abspath').and_return('repo')
    insert_execute_command_mock(('borg', 'extract', '--umask', '0770', 'repo::archive'))
    flexmock(module.feature).should_receive('available').and_return(True)

    module.extract_archive(
        dry_run=False,
@@ -142,12 +159,14 @@ def test_extract_archive_calls_borg_with_umask_parameters():
        paths=None,
        location_config={},
        storage_config={'umask': '0770'},
        local_borg_version='1.2.3',
    )


def test_extract_archive_calls_borg_with_lock_wait_parameters():
    flexmock(module.os.path).should_receive('abspath').and_return('repo')
    insert_execute_command_mock(('borg', 'extract', '--lock-wait', '5', 'repo::archive'))
    flexmock(module.feature).should_receive('available').and_return(True)

    module.extract_archive(
        dry_run=False,
@@ -156,6 +175,7 @@ def test_extract_archive_calls_borg_with_lock_wait_parameters():
        paths=None,
        location_config={},
        storage_config={'lock_wait': '5'},
        local_borg_version='1.2.3',
    )

@@ -163,6 +183,7 @@ def test_extract_archive_with_log_info_calls_borg_with_info_parameter():
    flexmock(module.os.path).should_receive('abspath').and_return('repo')
    insert_execute_command_mock(('borg', 'extract', '--info', 'repo::archive'))
    insert_logging_mock(logging.INFO)
    flexmock(module.feature).should_receive('available').and_return(True)

    module.extract_archive(
        dry_run=False,
@@ -171,6 +192,7 @@ def test_extract_archive_with_log_info_calls_borg_with_info_parameter():
        paths=None,
        location_config={},
        storage_config={},
        local_borg_version='1.2.3',
    )

@@ -180,6 +202,7 @@ def test_extract_archive_with_log_debug_calls_borg_with_debug_parameters():
        ('borg', 'extract', '--debug', '--list', '--show-rc', 'repo::archive')
    )
    insert_logging_mock(logging.DEBUG)
    flexmock(module.feature).should_receive('available').and_return(True)

    module.extract_archive(
        dry_run=False,
@@ -188,12 +211,14 @@ def test_extract_archive_with_log_debug_calls_borg_with_debug_parameters():
        paths=None,
        location_config={},
        storage_config={},
        local_borg_version='1.2.3',
    )


def test_extract_archive_calls_borg_with_dry_run_parameter():
    flexmock(module.os.path).should_receive('abspath').and_return('repo')
    insert_execute_command_mock(('borg', 'extract', '--dry-run', 'repo::archive'))
    flexmock(module.feature).should_receive('available').and_return(True)

    module.extract_archive(
        dry_run=True,
@@ -202,12 +227,14 @@ def test_extract_archive_calls_borg_with_dry_run_parameter():
        paths=None,
        location_config={},
        storage_config={},
        local_borg_version='1.2.3',
    )


def test_extract_archive_calls_borg_with_destination_path():
    flexmock(module.os.path).should_receive('abspath').and_return('repo')
    insert_execute_command_mock(('borg', 'extract', 'repo::archive'), working_directory='/dest')
    flexmock(module.feature).should_receive('available').and_return(True)

    module.extract_archive(
        dry_run=False,
@@ -216,6 +243,7 @@ def test_extract_archive_calls_borg_with_destination_path():
        paths=None,
        location_config={},
        storage_config={},
        local_borg_version='1.2.3',
        destination_path='/dest',
    )

@@ -223,6 +251,7 @@ def test_extract_archive_calls_borg_with_destination_path():
def test_extract_archive_calls_borg_with_strip_components():
    flexmock(module.os.path).should_receive('abspath').and_return('repo')
    insert_execute_command_mock(('borg', 'extract', '--strip-components', '5', 'repo::archive'))
    flexmock(module.feature).should_receive('available').and_return(True)

    module.extract_archive(
        dry_run=False,
@@ -231,6 +260,7 @@ def test_extract_archive_calls_borg_with_strip_components():
        paths=None,
        location_config={},
        storage_config={},
        local_borg_version='1.2.3',
        strip_components=5,
    )

@@ -242,6 +272,7 @@ def test_extract_archive_calls_borg_with_progress_parameter():
        output_file=module.DO_NOT_CAPTURE,
        working_directory=None,
    ).once()
    flexmock(module.feature).should_receive('available').and_return(True)

    module.extract_archive(
        dry_run=False,
@@ -250,6 +281,7 @@ def test_extract_archive_calls_borg_with_progress_parameter():
        paths=None,
        location_config={},
        storage_config={},
        local_borg_version='1.2.3',
        progress=True,
    )

@@ -265,6 +297,7 @@ def test_extract_archive_with_progress_and_extract_to_stdout_raises():
        paths=None,
        location_config={},
        storage_config={},
        local_borg_version='1.2.3',
        progress=True,
        extract_to_stdout=True,
    )

@@ -279,6 +312,7 @@ def test_extract_archive_calls_borg_with_stdout_parameter_and_returns_process():
        working_directory=None,
        run_to_completion=False,
    ).and_return(process).once()
    flexmock(module.feature).should_receive('available').and_return(True)

    assert (
        module.extract_archive(
@@ -288,6 +322,7 @@ def test_extract_archive_calls_borg_with_stdout_parameter_and_returns_process():
            paths=None,
            location_config={},
            storage_config={},
            local_borg_version='1.2.3',
            extract_to_stdout=True,
        )
        == process

@@ -299,6 +334,7 @@ def test_extract_archive_skips_abspath_for_remote_repository():
    flexmock(module).should_receive('execute_command').with_args(
        ('borg', 'extract', 'server:repo::archive'), working_directory=None
    ).once()
    flexmock(module.feature).should_receive('available').and_return(True)

    module.extract_archive(
        dry_run=False,
@@ -307,4 +343,5 @@ def test_extract_archive_skips_abspath_for_remote_repository():
        paths=None,
        location_config={},
        storage_config={},
        local_borg_version='1.2.3',
    )
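
The new local_borg_version parameter threads through every extract test because flag spelling now depends on the Borg version: the parametrized test above expects '--numeric-ids' when the feature is available and the legacy '--numeric-owner' otherwise. A small sketch of that gate; the feature-module call mirrors what the mocks patch, and the enum member name is an assumption:

from borgmatic.borg import feature

def numeric_ids_flags(location_config, local_borg_version):
    # Pick the spelling the local Borg understands; newer Borg renamed
    # --numeric-owner to --numeric-ids. Feature.NUMERIC_IDS is assumed here.
    if not location_config.get('numeric_owner'):
        return ()

    if feature.available(feature.Feature.NUMERIC_IDS, local_borg_version):
        return ('--numeric-ids',)

    return ('--numeric-owner',)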


@@ -13,11 +13,11 @@ INIT_COMMAND = ('borg', 'init', '--encryption', 'repokey')
def insert_info_command_found_mock():
    flexmock(module.info).should_receive('display_archives_info')


def insert_info_command_not_found_mock():
    flexmock(module.info).should_receive('display_archives_info').and_raise(
        subprocess.CalledProcessError(module.INFO_REPOSITORY_NOT_FOUND_EXIT_CODE, [])
    )

@@ -48,13 +48,13 @@ def test_initialize_repository_raises_for_borg_init_error():
def test_initialize_repository_skips_initialization_when_repository_already_exists():
    insert_info_command_found_mock()

    module.initialize_repository(repository='repo', storage_config={}, encryption_mode='repokey')


def test_initialize_repository_raises_for_unknown_info_command_error():
    flexmock(module.info).should_receive('display_archives_info').and_raise(
        subprocess.CalledProcessError(INFO_SOME_UNKNOWN_EXIT_CODE, [])
    )


@@ -1,3 +1,4 @@
import argparse
import logging

import pytest
@@ -8,8 +9,6 @@ from borgmatic.borg import list as module
from ..test_verbosity import insert_logging_mock

BORG_LIST_LATEST_ARGUMENTS = (
    '--last',
    '1',
    '--short',
@@ -108,156 +107,125 @@ def test_resolve_archive_name_with_lock_wait_calls_borg_with_lock_wait_parameter
    )


def test_make_list_command_includes_log_info():
    insert_logging_mock(logging.INFO)

    command = module.make_list_command(
        repository='repo',
        storage_config={},
        list_arguments=flexmock(archive=None, paths=None, json=False),
    )

    assert command == ('borg', 'list', '--info', 'repo')


def test_make_list_command_includes_json_but_not_info():
    insert_logging_mock(logging.INFO)

    command = module.make_list_command(
        repository='repo',
        storage_config={},
        list_arguments=flexmock(archive=None, paths=None, json=True),
    )

    assert command == ('borg', 'list', '--json', 'repo')


def test_make_list_command_includes_log_debug():
    insert_logging_mock(logging.DEBUG)

    command = module.make_list_command(
        repository='repo',
        storage_config={},
        list_arguments=flexmock(archive=None, paths=None, json=False),
    )

    assert command == ('borg', 'list', '--debug', '--show-rc', 'repo')


def test_make_list_command_includes_json_but_not_debug():
    insert_logging_mock(logging.DEBUG)

    command = module.make_list_command(
        repository='repo',
        storage_config={},
        list_arguments=flexmock(archive=None, paths=None, json=True),
    )

    assert command == ('borg', 'list', '--json', 'repo')


def test_make_list_command_includes_json():
    command = module.make_list_command(
        repository='repo',
        storage_config={},
        list_arguments=flexmock(archive=None, paths=None, json=True),
    )

    assert command == ('borg', 'list', '--json', 'repo')


def test_make_list_command_includes_lock_wait():
    command = module.make_list_command(
        repository='repo',
        storage_config={'lock_wait': 5},
        list_arguments=flexmock(archive=None, paths=None, json=False),
    )

    assert command == ('borg', 'list', '--lock-wait', '5', 'repo')


def test_make_list_command_includes_archive():
    command = module.make_list_command(
        repository='repo',
        storage_config={},
        list_arguments=flexmock(archive='archive', paths=None, json=False),
    )

    assert command == ('borg', 'list', 'repo::archive')


def test_make_list_command_includes_archive_and_path():
    command = module.make_list_command(
        repository='repo',
        storage_config={},
        list_arguments=flexmock(archive='archive', paths=['var/lib'], json=False),
    )

    assert command == ('borg', 'list', 'repo::archive', 'var/lib')


def test_make_list_command_includes_local_path():
    command = module.make_list_command(
        repository='repo',
        storage_config={},
        list_arguments=flexmock(archive=None, paths=None, json=False),
        local_path='borg2',
    )

    assert command == ('borg2', 'list', 'repo')


def test_make_list_command_includes_remote_path():
    command = module.make_list_command(
        repository='repo',
        storage_config={},
        list_arguments=flexmock(archive=None, paths=None, json=False),
        remote_path='borg2',
    )

    assert command == ('borg', 'list', '--remote-path', 'borg2', 'repo')


def test_make_list_command_includes_short():
    command = module.make_list_command(
        repository='repo',
        storage_config={},
        list_arguments=flexmock(archive=None, paths=None, json=False, short=True),
    )

    assert command == ('borg', 'list', '--short', 'repo')


@pytest.mark.parametrize(
    'argument_name',
@@ -273,45 +241,156 @@ def test_list_archives_with_short_calls_borg_with_short_parameter():
        'patterns_from',
    ),
)
def test_make_list_command_includes_additional_flags(argument_name):
    command = module.make_list_command(
        repository='repo',
        storage_config={},
        list_arguments=flexmock(
            archive=None,
            paths=None,
            json=False,
            find_paths=None,
            format=None,
            **{argument_name: 'value'}
        ),
    )

    assert command == ('borg', 'list', '--' + argument_name.replace('_', '-'), 'value', 'repo')


def test_make_find_paths_considers_none_as_empty_paths():
    assert module.make_find_paths(None) == ()


def test_make_find_paths_passes_through_patterns():
    find_paths = (
        'fm:*',
        'sh:**/*.txt',
        're:^.*$',
        'pp:root/somedir',
        'pf:root/foo.txt',
        'R /',
        'r /',
        'p /',
        'P /',
        '+ /',
        '- /',
        '! /',
    )

    assert module.make_find_paths(find_paths) == find_paths


def test_make_find_paths_adds_globs_to_path_fragments():
    assert module.make_find_paths(('foo.txt',)) == ('sh:**/*foo.txt*/**',)
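
These three tests characterize make_find_paths() completely: nothing in, nothing out; arguments that already look like Borg patterns (two-letter style prefixes such as 'sh:', or single-character pattern roots like 'R ') pass through untouched; and any bare fragment is wrapped into a shell glob that finds it anywhere in an archive. A sketch matching the assertions, not necessarily the shipped code:

import re

def make_find_paths(find_paths):
    if not find_paths:
        return ()

    return tuple(
        find_path
        # Leave anything that already looks like a Borg pattern alone.
        if re.compile(r'([-!+RrPp] )|(\w\w:)').match(find_path)
        # Otherwise, glob the fragment so it matches anywhere in the archive.
        else f'sh:**/*{find_path}*/**'
        for find_path in find_paths
    )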
def test_list_archives_calls_borg_with_parameters():
list_arguments = argparse.Namespace(archive=None, paths=None, json=False, find_paths=None)
flexmock(module).should_receive('make_list_command').with_args(
repository='repo',
storage_config={},
list_arguments=list_arguments,
local_path='borg',
remote_path=None,
).and_return(('borg', 'list', 'repo'))
flexmock(module).should_receive('make_find_paths').and_return(())
    flexmock(module).should_receive('execute_command').with_args(
        ('borg', 'list', 'repo'), output_log_level=logging.WARNING, borg_local_path='borg'
    ).once()

    module.list_archives(
        repository='repo', storage_config={}, list_arguments=list_arguments,
    )


def test_list_archives_with_json_suppresses_most_borg_output():
    list_arguments = argparse.Namespace(archive=None, paths=None, json=True, find_paths=None)

    flexmock(module).should_receive('make_list_command').with_args(
        repository='repo',
        storage_config={},
        list_arguments=list_arguments,
        local_path='borg',
        remote_path=None,
    ).and_return(('borg', 'list', 'repo'))
    flexmock(module).should_receive('make_find_paths').and_return(())
    flexmock(module).should_receive('execute_command').with_args(
        ('borg', 'list', 'repo'), output_log_level=None, borg_local_path='borg'
    ).once()

    module.list_archives(
        repository='repo', storage_config={}, list_arguments=list_arguments,
    )
def test_list_archives_calls_borg_with_local_path():
list_arguments = argparse.Namespace(archive=None, paths=None, json=False, find_paths=None)
flexmock(module).should_receive('make_list_command').with_args(
repository='repo',
storage_config={},
list_arguments=list_arguments,
local_path='borg2',
remote_path=None,
).and_return(('borg2', 'list', 'repo'))
flexmock(module).should_receive('make_find_paths').and_return(())
flexmock(module).should_receive('execute_command').with_args(
('borg2', 'list', 'repo'), output_log_level=logging.WARNING, borg_local_path='borg2'
).once()
module.list_archives(
repository='repo', storage_config={}, list_arguments=list_arguments, local_path='borg2',
)
def test_list_archives_calls_borg_multiple_times_with_find_paths():
glob_paths = ('**/*foo.txt*/**',)
list_arguments = argparse.Namespace(
archive=None, paths=None, json=False, find_paths=['foo.txt'], format=None
)
flexmock(module).should_receive('make_list_command').and_return(
('borg', 'list', 'repo')
).and_return(('borg', 'list', 'repo::archive1')).and_return(('borg', 'list', 'repo::archive2'))
flexmock(module).should_receive('make_find_paths').and_return(glob_paths)
flexmock(module).should_receive('execute_command').with_args(
('borg', 'list', 'repo'), output_log_level=None, borg_local_path='borg'
).and_return(
'archive1 Sun, 2022-05-29 15:27:04 [abc]\narchive2 Mon, 2022-05-30 19:47:15 [xyz]'
).once()
flexmock(module).should_receive('execute_command').with_args(
('borg', 'list', 'repo::archive1') + glob_paths,
output_log_level=logging.WARNING,
borg_local_path='borg',
).once()
flexmock(module).should_receive('execute_command').with_args(
('borg', 'list', 'repo::archive2') + glob_paths,
output_log_level=logging.WARNING,
borg_local_path='borg',
).once()
module.list_archives(
repository='repo', storage_config={}, list_arguments=list_arguments,
)
def test_list_archives_calls_borg_with_archive():
list_arguments = argparse.Namespace(archive='archive', paths=None, json=False, find_paths=None)
flexmock(module).should_receive('make_list_command').with_args(
repository='repo',
storage_config={},
list_arguments=list_arguments,
local_path='borg',
remote_path=None,
).and_return(('borg', 'list', 'repo::archive'))
flexmock(module).should_receive('make_find_paths').and_return(())
flexmock(module).should_receive('execute_command').with_args(
('borg', 'list', 'repo::archive'), output_log_level=logging.WARNING, borg_local_path='borg'
).once()
module.list_archives(
repository='repo', storage_config={}, list_arguments=list_arguments,
)
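
The multiple-calls test above shows the new --find flow: one quiet repository-level listing to enumerate archives, then one listing per archive with the glob paths appended. Pulling the archive names out of that first listing is simple line parsing; a hypothetical helper illustrating it:

def parse_archive_names(list_output):
    # Hypothetical helper: each line looks like
    # 'archive1  Sun, 2022-05-29 15:27:04 [abc]'; the name is the first field.
    return tuple(line.split()[0] for line in list_output.splitlines() if line.strip())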


@@ -0,0 +1,49 @@
import logging
import pytest
from flexmock import flexmock
from borgmatic.borg import version as module
from ..test_verbosity import insert_logging_mock
VERSION = '1.2.3'
def insert_execute_command_mock(command, borg_local_path='borg', version_output=f'borg {VERSION}'):
flexmock(module).should_receive('execute_command').with_args(
command, output_log_level=None, borg_local_path=borg_local_path
).once().and_return(version_output)
def test_local_borg_version_calls_borg_with_required_parameters():
insert_execute_command_mock(('borg', '--version'))
assert module.local_borg_version() == VERSION
def test_local_borg_version_with_log_info_calls_borg_with_info_parameter():
insert_execute_command_mock(('borg', '--version', '--info'))
insert_logging_mock(logging.INFO)
assert module.local_borg_version() == VERSION
def test_local_borg_version_with_log_debug_calls_borg_with_debug_parameters():
insert_execute_command_mock(('borg', '--version', '--debug', '--show-rc'))
insert_logging_mock(logging.DEBUG)
assert module.local_borg_version() == VERSION
def test_local_borg_version_with_local_borg_path_calls_borg_with_it():
insert_execute_command_mock(('borg1', '--version'), borg_local_path='borg1')
assert module.local_borg_version('borg1') == VERSION
def test_local_borg_version_with_invalid_version_raises():
insert_execute_command_mock(('borg', '--version'), version_output='wtf')
with pytest.raises(ValueError):
module.local_borg_version()
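
The whole new file reduces to one behavior: run 'borg --version' (optionally with verbosity flags), scrape the version number out of output like 'borg 1.2.3', and raise ValueError when no version can be found. A sketch consistent with the tests; the logger and execute_command plumbing are assumed from the rest of the codebase:

import logging
import re

from borgmatic.execute import execute_command

logger = logging.getLogger(__name__)

def local_borg_version(local_path='borg'):
    full_command = (
        (local_path, '--version')
        + (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
        + (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
    )
    output = execute_command(full_command, output_log_level=None, borg_local_path=local_path)

    try:
        # Match a dotted version number anywhere in the output.
        return re.search(r'(\d+\.\d+\.\d+)', output).group(1)
    except AttributeError:
        raise ValueError(f'Borg version unparsable from output: {output}')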


@@ -5,146 +5,140 @@ from borgmatic.commands import arguments as module
def test_parse_subparser_arguments_consumes_subparser_arguments_before_subparser_name():
    action_namespace = flexmock(foo=True)
    subparsers = {
        'action': flexmock(parse_known_args=lambda arguments: (action_namespace, ['action'])),
        'other': flexmock(),
    }

    arguments, remaining_arguments = module.parse_subparser_arguments(
        ('--foo', 'true', 'action'), subparsers
    )

    assert arguments == {'action': action_namespace}
    assert remaining_arguments == []


def test_parse_subparser_arguments_consumes_subparser_arguments_after_subparser_name():
    action_namespace = flexmock(foo=True)
    subparsers = {
        'action': flexmock(parse_known_args=lambda arguments: (action_namespace, ['action'])),
        'other': flexmock(),
    }

    arguments, remaining_arguments = module.parse_subparser_arguments(
        ('action', '--foo', 'true'), subparsers
    )

    assert arguments == {'action': action_namespace}
    assert remaining_arguments == []


def test_parse_subparser_arguments_consumes_subparser_arguments_with_alias():
    action_namespace = flexmock(foo=True)
    action_subparser = flexmock(parse_known_args=lambda arguments: (action_namespace, ['action']))
    subparsers = {
        'action': action_subparser,
        '-a': action_subparser,
        'other': flexmock(),
        '-o': flexmock(),
    }
    flexmock(module).SUBPARSER_ALIASES = {'action': ['-a'], 'other': ['-o']}

    arguments, remaining_arguments = module.parse_subparser_arguments(
        ('-a', '--foo', 'true'), subparsers
    )

    assert arguments == {'action': action_namespace}
    assert remaining_arguments == []


def test_parse_subparser_arguments_consumes_multiple_subparser_arguments():
    action_namespace = flexmock(foo=True)
    other_namespace = flexmock(bar=3)
    subparsers = {
        'action': flexmock(
            parse_known_args=lambda arguments: (action_namespace, ['action', '--bar', '3'])
        ),
        'other': flexmock(parse_known_args=lambda arguments: (other_namespace, [])),
    }

    arguments, remaining_arguments = module.parse_subparser_arguments(
        ('action', '--foo', 'true', 'other', '--bar', '3'), subparsers
    )

    assert arguments == {'action': action_namespace, 'other': other_namespace}
    assert remaining_arguments == []


def test_parse_subparser_arguments_applies_default_subparsers():
    prune_namespace = flexmock()
    compact_namespace = flexmock()
    create_namespace = flexmock(progress=True)
    check_namespace = flexmock()
    subparsers = {
        'prune': flexmock(
            parse_known_args=lambda arguments: (prune_namespace, ['prune', '--progress'])
        ),
        'compact': flexmock(parse_known_args=lambda arguments: (compact_namespace, [])),
        'create': flexmock(parse_known_args=lambda arguments: (create_namespace, [])),
        'check': flexmock(parse_known_args=lambda arguments: (check_namespace, [])),
        'other': flexmock(),
    }

    arguments, remaining_arguments = module.parse_subparser_arguments(('--progress'), subparsers)

    assert arguments == {
        'prune': prune_namespace,
        'compact': compact_namespace,
        'create': create_namespace,
        'check': check_namespace,
    }
    assert remaining_arguments == []


def test_parse_subparser_arguments_passes_through_unknown_arguments_before_subparser_name():
    action_namespace = flexmock()
    subparsers = {
        'action': flexmock(
            parse_known_args=lambda arguments: (action_namespace, ['action', '--verbosity', 'lots'])
        ),
        'other': flexmock(),
    }

    arguments, remaining_arguments = module.parse_subparser_arguments(
        ('--verbosity', 'lots', 'action'), subparsers
    )

    assert arguments == {'action': action_namespace}
    assert remaining_arguments == ['--verbosity', 'lots']


def test_parse_subparser_arguments_passes_through_unknown_arguments_after_subparser_name():
    action_namespace = flexmock()
    subparsers = {
        'action': flexmock(
            parse_known_args=lambda arguments: (action_namespace, ['action', '--verbosity', 'lots'])
        ),
        'other': flexmock(),
    }

    arguments, remaining_arguments = module.parse_subparser_arguments(
        ('action', '--verbosity', 'lots'), subparsers
    )

    assert arguments == {'action': action_namespace}
    assert remaining_arguments == ['--verbosity', 'lots']


def test_parse_subparser_arguments_parses_borg_options_and_skips_other_subparsers():
    action_namespace = flexmock(options=[])
    subparsers = {
        'borg': flexmock(parse_known_args=lambda arguments: (action_namespace, ['borg', 'list'])),
        'list': flexmock(),
    }

    arguments, remaining_arguments = module.parse_subparser_arguments(('borg', 'list'), subparsers)

    assert arguments == {'borg': action_namespace}
    assert arguments['borg'].options == ['list']
    assert remaining_arguments == []
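
This hunk encodes a new contract for parse_subparser_arguments: subparsers now arrive as a plain dict rather than an argparse object with a choices attribute, and the function returns a (parsed namespaces, remaining arguments) pair instead of namespaces alone, with each subparser's own name filtered out of the leftovers. A minimal approximation of that contract, written from these tests rather than borgmatic's source (it deliberately omits alias resolution and the default-action fallback, which the real function also handles):

def parse_subparser_arguments(unparsed_arguments, subparsers):
    # If the "borg" passthrough action is requested, skip all other subparsers,
    # matching the borg-options test above.
    if 'borg' in unparsed_arguments:
        subparsers = {'borg': subparsers['borg']}

    arguments = {}
    unused_per_subparser = []

    for name, subparser in subparsers.items():
        if name not in unparsed_arguments:
            continue

        parsed, unused = subparser.parse_known_args(unparsed_arguments)
        arguments[name] = parsed
        unused_per_subparser.append(unused)

    # Only arguments that every matched subparser left unused remain, minus the
    # subparser names themselves.
    remaining_arguments = [
        argument
        for argument in unparsed_arguments
        if argument not in subparsers
        and all(argument in unused for unused in unused_per_subparser)
    ]

    return arguments, remaining_arguments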

View File

@@ -1,5 +1,6 @@
import logging
import subprocess
import time

from flexmock import flexmock
@@ -9,6 +10,7 @@ from borgmatic.commands import borgmatic as module
def test_run_configuration_runs_actions_for_each_repository():
    flexmock(module.borg_environment).should_receive('initialize')
    flexmock(module.borg_version).should_receive('local_borg_version').and_return(flexmock())
    expected_results = [flexmock(), flexmock()]
    flexmock(module).should_receive('run_actions').and_return(expected_results[:1]).and_return(
        expected_results[1:]
    )
@@ -21,81 +23,26 @@ def test_run_configuration_runs_actions_for_each_repository():
    assert results == expected_results
def test_run_configuration_with_invalid_borg_version_errors():
    flexmock(module.borg_environment).should_receive('initialize')
    flexmock(module.borg_version).should_receive('local_borg_version').and_raise(ValueError)
    flexmock(module.command).should_receive('execute_hook').never()
    flexmock(module.dispatch).should_receive('call_hooks').never()
    flexmock(module).should_receive('run_actions').never()
    config = {'location': {'repositories': ['foo']}}
    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False), 'prune': flexmock()}

    list(module.run_configuration('test.yaml', config, arguments))


def test_run_configuration_logs_monitor_start_error():
    flexmock(module.borg_environment).should_receive('initialize')
    flexmock(module.borg_version).should_receive('local_borg_version').and_return(flexmock())
    flexmock(module.dispatch).should_receive('call_hooks').and_raise(OSError).and_return(
        None
    ).and_return(None)
    expected_results = [flexmock()]
    flexmock(module).should_receive('log_error_records').and_return(expected_results)
    flexmock(module).should_receive('run_actions').never()
    config = {'location': {'repositories': ['foo']}}
    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False), 'create': flexmock()}
@@ -105,11 +52,12 @@ def test_run_configuration_logs_pre_hook_error():
    assert results == expected_results


def test_run_configuration_bails_for_monitor_start_soft_failure():
    flexmock(module.borg_environment).should_receive('initialize')
    flexmock(module.borg_version).should_receive('local_borg_version').and_return(flexmock())
    error = subprocess.CalledProcessError(borgmatic.hooks.command.SOFT_FAIL_EXIT_CODE, 'try again')
    flexmock(module.dispatch).should_receive('call_hooks').and_raise(error)
    flexmock(module).should_receive('log_error_records').never()
    flexmock(module).should_receive('run_actions').never()
    config = {'location': {'repositories': ['foo']}}
    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False), 'create': flexmock()}
@@ -119,14 +67,46 @@ def test_run_configuration_bails_for_pre_hook_soft_failure():
    assert results == []
def test_run_configuration_logs_actions_error():
    flexmock(module.borg_environment).should_receive('initialize')
    flexmock(module.borg_version).should_receive('local_borg_version').and_return(flexmock())
    flexmock(module.command).should_receive('execute_hook')
    flexmock(module.dispatch).should_receive('call_hooks')
    expected_results = [flexmock()]
    flexmock(module).should_receive('log_error_records').and_return(expected_results)
    flexmock(module).should_receive('run_actions').and_raise(OSError)
    config = {'location': {'repositories': ['foo']}}
    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False)}

    results = list(module.run_configuration('test.yaml', config, arguments))

    assert results == expected_results


def test_run_configuration_bails_for_actions_soft_failure():
    flexmock(module.borg_environment).should_receive('initialize')
    flexmock(module.borg_version).should_receive('local_borg_version').and_return(flexmock())
    flexmock(module.dispatch).should_receive('call_hooks')
    error = subprocess.CalledProcessError(borgmatic.hooks.command.SOFT_FAIL_EXIT_CODE, 'try again')
    flexmock(module).should_receive('run_actions').and_raise(error)
    flexmock(module).should_receive('log_error_records').never()
    flexmock(module.command).should_receive('considered_soft_failure').and_return(True)
    config = {'location': {'repositories': ['foo']}}
    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False), 'create': flexmock()}

    results = list(module.run_configuration('test.yaml', config, arguments))

    assert results == []


def test_run_configuration_logs_monitor_finish_error():
    flexmock(module.borg_environment).should_receive('initialize')
    flexmock(module.borg_version).should_receive('local_borg_version').and_return(flexmock())
    flexmock(module.dispatch).should_receive('call_hooks').and_return(None).and_return(
        None
    ).and_raise(OSError)
    expected_results = [flexmock()]
    flexmock(module).should_receive('log_error_records').and_return(expected_results)
    flexmock(module).should_receive('run_actions').and_return([])
    config = {'location': {'repositories': ['foo']}}
    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False), 'create': flexmock()}
@@ -136,15 +116,16 @@ def test_run_configuration_logs_post_hook_error():
    assert results == expected_results


def test_run_configuration_bails_for_monitor_finish_soft_failure():
    flexmock(module.borg_environment).should_receive('initialize')
    flexmock(module.borg_version).should_receive('local_borg_version').and_return(flexmock())
    error = subprocess.CalledProcessError(borgmatic.hooks.command.SOFT_FAIL_EXIT_CODE, 'try again')
    flexmock(module.dispatch).should_receive('call_hooks').and_return(None).and_return(
        None
    ).and_raise(error)
    flexmock(module).should_receive('log_error_records').never()
    flexmock(module).should_receive('run_actions').and_return([])
    flexmock(module.command).should_receive('considered_soft_failure').and_return(True)
    config = {'location': {'repositories': ['foo']}}
    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False), 'create': flexmock()}
@@ -155,9 +136,10 @@ def test_run_configuration_bails_for_post_hook_soft_failure():
def test_run_configuration_logs_on_error_hook_error():
    flexmock(module.borg_environment).should_receive('initialize')
    flexmock(module.borg_version).should_receive('local_borg_version').and_return(flexmock())
    flexmock(module.command).should_receive('execute_hook').and_raise(OSError)
    expected_results = [flexmock(), flexmock()]
    flexmock(module).should_receive('log_error_records').and_return(
        expected_results[:1]
    ).and_return(expected_results[1:])
    flexmock(module).should_receive('run_actions').and_raise(OSError)
@@ -171,10 +153,11 @@ def test_run_configuration_logs_on_error_hook_error():
def test_run_configuration_bails_for_on_error_hook_soft_failure():
    flexmock(module.borg_environment).should_receive('initialize')
    flexmock(module.borg_version).should_receive('local_borg_version').and_return(flexmock())
    error = subprocess.CalledProcessError(borgmatic.hooks.command.SOFT_FAIL_EXIT_CODE, 'try again')
    flexmock(module.command).should_receive('execute_hook').and_raise(error)
    expected_results = [flexmock()]
    flexmock(module).should_receive('log_error_records').and_return(expected_results)
    flexmock(module).should_receive('run_actions').and_raise(OSError)
    config = {'location': {'repositories': ['foo']}}
    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False), 'create': flexmock()}
@@ -184,6 +167,505 @@ def test_run_configuration_bails_for_on_error_hook_soft_failure():
    assert results == expected_results
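
Several of the tests above hinge on borgmatic's "soft failure" convention: a hook that exits with code 75 (EX_TEMPFAIL) makes borgmatic skip the repository quietly instead of logging an error. A tiny self-contained illustration of that check (the helper name mirrors module.command.considered_soft_failure as mocked above, but this is a sketch, not borgmatic's code):

import subprocess

SOFT_FAIL_EXIT_CODE = 75  # EX_TEMPFAIL; the value these tests pass to CalledProcessError


def considered_soft_failure(error):
    # Only a CalledProcessError with the magic exit code counts as a soft failure.
    return (
        isinstance(error, subprocess.CalledProcessError)
        and error.returncode == SOFT_FAIL_EXIT_CODE
    )


assert considered_soft_failure(subprocess.CalledProcessError(SOFT_FAIL_EXIT_CODE, 'try again'))
assert not considered_soft_failure(subprocess.CalledProcessError(1, 'ls'))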
def test_run_configuration_retries_soft_error():
    # Run action first fails, second passes
    flexmock(module.borg_environment).should_receive('initialize')
    flexmock(module.borg_version).should_receive('local_borg_version').and_return(flexmock())
    flexmock(module.command).should_receive('execute_hook')
    flexmock(module).should_receive('run_actions').and_raise(OSError).and_return([])
    flexmock(module).should_receive('log_error_records').and_return([flexmock()]).once()
    config = {'location': {'repositories': ['foo']}, 'storage': {'retries': 1}}
    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False), 'create': flexmock()}
    results = list(module.run_configuration('test.yaml', config, arguments))
    assert results == []


def test_run_configuration_retries_hard_error():
    # Run action fails twice
    flexmock(module.borg_environment).should_receive('initialize')
    flexmock(module.borg_version).should_receive('local_borg_version').and_return(flexmock())
    flexmock(module.command).should_receive('execute_hook')
    flexmock(module).should_receive('run_actions').and_raise(OSError).times(2)
    flexmock(module).should_receive('log_error_records').with_args(
        'foo: Error running actions for repository',
        OSError,
        levelno=logging.WARNING,
        log_command_error_output=True,
    ).and_return([flexmock()])
    error_logs = [flexmock()]
    flexmock(module).should_receive('log_error_records').with_args(
        'foo: Error running actions for repository', OSError,
    ).and_return(error_logs)
    config = {'location': {'repositories': ['foo']}, 'storage': {'retries': 1}}
    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False), 'create': flexmock()}
    results = list(module.run_configuration('test.yaml', config, arguments))
    assert results == error_logs


def test_run_repos_ordered():
    flexmock(module.borg_environment).should_receive('initialize')
    flexmock(module.borg_version).should_receive('local_borg_version').and_return(flexmock())
    flexmock(module.command).should_receive('execute_hook')
    flexmock(module).should_receive('run_actions').and_raise(OSError).times(2)
    expected_results = [flexmock(), flexmock()]
    flexmock(module).should_receive('log_error_records').with_args(
        'foo: Error running actions for repository', OSError
    ).and_return(expected_results[:1]).ordered()
    flexmock(module).should_receive('log_error_records').with_args(
        'bar: Error running actions for repository', OSError
    ).and_return(expected_results[1:]).ordered()
    config = {'location': {'repositories': ['foo', 'bar']}}
    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False), 'create': flexmock()}
    results = list(module.run_configuration('test.yaml', config, arguments))
    assert results == expected_results


def test_run_configuration_retries_round_robbin():
    flexmock(module.borg_environment).should_receive('initialize')
    flexmock(module.borg_version).should_receive('local_borg_version').and_return(flexmock())
    flexmock(module.command).should_receive('execute_hook')
    flexmock(module).should_receive('run_actions').and_raise(OSError).times(4)
    flexmock(module).should_receive('log_error_records').with_args(
        'foo: Error running actions for repository',
        OSError,
        levelno=logging.WARNING,
        log_command_error_output=True,
    ).and_return([flexmock()]).ordered()
    flexmock(module).should_receive('log_error_records').with_args(
        'bar: Error running actions for repository',
        OSError,
        levelno=logging.WARNING,
        log_command_error_output=True,
    ).and_return([flexmock()]).ordered()
    foo_error_logs = [flexmock()]
    flexmock(module).should_receive('log_error_records').with_args(
        'foo: Error running actions for repository', OSError
    ).and_return(foo_error_logs).ordered()
    bar_error_logs = [flexmock()]
    flexmock(module).should_receive('log_error_records').with_args(
        'bar: Error running actions for repository', OSError
    ).and_return(bar_error_logs).ordered()
    config = {'location': {'repositories': ['foo', 'bar']}, 'storage': {'retries': 1}}
    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False), 'create': flexmock()}
    results = list(module.run_configuration('test.yaml', config, arguments))
    assert results == foo_error_logs + bar_error_logs
def test_run_configuration_retries_one_passes():
    flexmock(module.borg_environment).should_receive('initialize')
    flexmock(module.borg_version).should_receive('local_borg_version').and_return(flexmock())
    flexmock(module.command).should_receive('execute_hook')
    flexmock(module).should_receive('run_actions').and_raise(OSError).and_raise(OSError).and_return(
        []
    ).and_raise(OSError).times(4)
    flexmock(module).should_receive('log_error_records').with_args(
        'foo: Error running actions for repository',
        OSError,
        levelno=logging.WARNING,
        log_command_error_output=True,
    ).and_return([flexmock()]).ordered()
    flexmock(module).should_receive('log_error_records').with_args(
        'bar: Error running actions for repository',
        OSError,
        levelno=logging.WARNING,
        log_command_error_output=True,
    ).and_return(flexmock()).ordered()
    error_logs = [flexmock()]
    flexmock(module).should_receive('log_error_records').with_args(
        'bar: Error running actions for repository', OSError
    ).and_return(error_logs).ordered()
    config = {'location': {'repositories': ['foo', 'bar']}, 'storage': {'retries': 1}}
    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False), 'create': flexmock()}
    results = list(module.run_configuration('test.yaml', config, arguments))
    assert results == error_logs


def test_run_configuration_retry_wait():
    flexmock(module.borg_environment).should_receive('initialize')
    flexmock(module.borg_version).should_receive('local_borg_version').and_return(flexmock())
    flexmock(module.command).should_receive('execute_hook')
    flexmock(module).should_receive('run_actions').and_raise(OSError).times(4)
    flexmock(module).should_receive('log_error_records').with_args(
        'foo: Error running actions for repository',
        OSError,
        levelno=logging.WARNING,
        log_command_error_output=True,
    ).and_return([flexmock()]).ordered()
    flexmock(time).should_receive('sleep').with_args(10).and_return().ordered()
    flexmock(module).should_receive('log_error_records').with_args(
        'foo: Error running actions for repository',
        OSError,
        levelno=logging.WARNING,
        log_command_error_output=True,
    ).and_return([flexmock()]).ordered()
    flexmock(time).should_receive('sleep').with_args(20).and_return().ordered()
    flexmock(module).should_receive('log_error_records').with_args(
        'foo: Error running actions for repository',
        OSError,
        levelno=logging.WARNING,
        log_command_error_output=True,
    ).and_return([flexmock()]).ordered()
    flexmock(time).should_receive('sleep').with_args(30).and_return().ordered()
    error_logs = [flexmock()]
    flexmock(module).should_receive('log_error_records').with_args(
        'foo: Error running actions for repository', OSError
    ).and_return(error_logs).ordered()
    config = {'location': {'repositories': ['foo']}, 'storage': {'retries': 3, 'retry_wait': 10}}
    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False), 'create': flexmock()}
    results = list(module.run_configuration('test.yaml', config, arguments))
    assert results == error_logs


def test_run_configuration_retries_timeout_multiple_repos():
    flexmock(module.borg_environment).should_receive('initialize')
    flexmock(module.borg_version).should_receive('local_borg_version').and_return(flexmock())
    flexmock(module.command).should_receive('execute_hook')
    flexmock(module).should_receive('run_actions').and_raise(OSError).and_raise(OSError).and_return(
        []
    ).and_raise(OSError).times(4)
    flexmock(module).should_receive('log_error_records').with_args(
        'foo: Error running actions for repository',
        OSError,
        levelno=logging.WARNING,
        log_command_error_output=True,
    ).and_return([flexmock()]).ordered()
    flexmock(module).should_receive('log_error_records').with_args(
        'bar: Error running actions for repository',
        OSError,
        levelno=logging.WARNING,
        log_command_error_output=True,
    ).and_return([flexmock()]).ordered()
    # Sleep before retrying foo (and passing)
    flexmock(time).should_receive('sleep').with_args(10).and_return().ordered()
    # Sleep before retrying bar (and failing)
    flexmock(time).should_receive('sleep').with_args(10).and_return().ordered()
    error_logs = [flexmock()]
    flexmock(module).should_receive('log_error_records').with_args(
        'bar: Error running actions for repository', OSError
    ).and_return(error_logs).ordered()
    config = {
        'location': {'repositories': ['foo', 'bar']},
        'storage': {'retries': 1, 'retry_wait': 10},
    }
    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False), 'create': flexmock()}
    results = list(module.run_configuration('test.yaml', config, arguments))
    assert results == error_logs
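
The sleep expectations above (10, 20, then 30 seconds for retry_wait: 10 and retries: 3) pin down a linear backoff: each retry waits retry_wait multiplied by the attempt number. In miniature:

def retry_waits(retry_wait, retries):
    # One sleep per retry, growing linearly: retry_wait, 2 * retry_wait, ...
    return [retry_wait * attempt for attempt in range(1, retries + 1)]


assert retry_waits(10, 3) == [10, 20, 30]  # matches test_run_configuration_retry_wait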
def test_run_actions_does_not_raise_for_init_action():
    flexmock(module.borg_init).should_receive('initialize_repository')
    arguments = {
        'global': flexmock(monitoring_verbosity=1, dry_run=False),
        'init': flexmock(
            encryption_mode=flexmock(), append_only=flexmock(), storage_quota=flexmock()
        ),
    }

    list(
        module.run_actions(
            arguments=arguments,
            config_filename='test.yaml',
            location={'repositories': ['repo']},
            storage={},
            retention={},
            consistency={},
            hooks={},
            local_path=None,
            remote_path=None,
            local_borg_version=None,
            repository_path='repo',
        )
    )


def test_run_actions_calls_hooks_for_prune_action():
    flexmock(module.borg_prune).should_receive('prune_archives')
    flexmock(module.command).should_receive('execute_hook').twice()
    arguments = {
        'global': flexmock(monitoring_verbosity=1, dry_run=False),
        'prune': flexmock(stats=flexmock(), files=flexmock()),
    }

    list(
        module.run_actions(
            arguments=arguments,
            config_filename='test.yaml',
            location={'repositories': ['repo']},
            storage={},
            retention={},
            consistency={},
            hooks={},
            local_path=None,
            remote_path=None,
            local_borg_version=None,
            repository_path='repo',
        )
    )


def test_run_actions_calls_hooks_for_compact_action():
    flexmock(module.borg_feature).should_receive('available').and_return(True)
    flexmock(module.borg_compact).should_receive('compact_segments')
    flexmock(module.command).should_receive('execute_hook').twice()
    arguments = {
        'global': flexmock(monitoring_verbosity=1, dry_run=False),
        'compact': flexmock(progress=flexmock(), cleanup_commits=flexmock(), threshold=flexmock()),
    }

    list(
        module.run_actions(
            arguments=arguments,
            config_filename='test.yaml',
            location={'repositories': ['repo']},
            storage={},
            retention={},
            consistency={},
            hooks={},
            local_path=None,
            remote_path=None,
            local_borg_version=None,
            repository_path='repo',
        )
    )


def test_run_actions_executes_and_calls_hooks_for_create_action():
    flexmock(module.borg_create).should_receive('create_archive')
    flexmock(module.command).should_receive('execute_hook').twice()
    flexmock(module.dispatch).should_receive('call_hooks').and_return({}).times(3)
    arguments = {
        'global': flexmock(monitoring_verbosity=1, dry_run=False),
        'create': flexmock(
            progress=flexmock(), stats=flexmock(), json=flexmock(), files=flexmock()
        ),
    }

    list(
        module.run_actions(
            arguments=arguments,
            config_filename='test.yaml',
            location={'repositories': ['repo']},
            storage={},
            retention={},
            consistency={},
            hooks={},
            local_path=None,
            remote_path=None,
            local_borg_version=None,
            repository_path='repo',
        )
    )


def test_run_actions_calls_hooks_for_check_action():
    flexmock(module.checks).should_receive('repository_enabled_for_checks').and_return(True)
    flexmock(module.borg_check).should_receive('check_archives')
    flexmock(module.command).should_receive('execute_hook').twice()
    arguments = {
        'global': flexmock(monitoring_verbosity=1, dry_run=False),
        'check': flexmock(
            progress=flexmock(), repair=flexmock(), only=flexmock(), force=flexmock()
        ),
    }

    list(
        module.run_actions(
            arguments=arguments,
            config_filename='test.yaml',
            location={'repositories': ['repo']},
            storage={},
            retention={},
            consistency={},
            hooks={},
            local_path=None,
            remote_path=None,
            local_borg_version=None,
            repository_path='repo',
        )
    )


def test_run_actions_calls_hooks_for_extract_action():
    flexmock(module.validate).should_receive('repositories_match').and_return(True)
    flexmock(module.borg_extract).should_receive('extract_archive')
    flexmock(module.command).should_receive('execute_hook').twice()
    arguments = {
        'global': flexmock(monitoring_verbosity=1, dry_run=False),
        'extract': flexmock(
            paths=flexmock(),
            progress=flexmock(),
            destination=flexmock(),
            strip_components=flexmock(),
            archive=flexmock(),
            repository='repo',
        ),
    }

    list(
        module.run_actions(
            arguments=arguments,
            config_filename='test.yaml',
            location={'repositories': ['repo']},
            storage={},
            retention={},
            consistency={},
            hooks={},
            local_path=None,
            remote_path=None,
            local_borg_version=None,
            repository_path='repo',
        )
    )


def test_run_actions_does_not_raise_for_export_tar_action():
    flexmock(module.validate).should_receive('repositories_match').and_return(True)
    flexmock(module.borg_export_tar).should_receive('export_tar_archive')
    arguments = {
        'global': flexmock(monitoring_verbosity=1, dry_run=False),
        'export-tar': flexmock(
            repository=flexmock(),
            archive=flexmock(),
            paths=flexmock(),
            destination=flexmock(),
            tar_filter=flexmock(),
            files=flexmock(),
            strip_components=flexmock(),
        ),
    }

    list(
        module.run_actions(
            arguments=arguments,
            config_filename='test.yaml',
            location={'repositories': ['repo']},
            storage={},
            retention={},
            consistency={},
            hooks={},
            local_path=None,
            remote_path=None,
            local_borg_version=None,
            repository_path='repo',
        )
    )


def test_run_actions_does_not_raise_for_mount_action():
    flexmock(module.validate).should_receive('repositories_match').and_return(True)
    flexmock(module.borg_mount).should_receive('mount_archive')
    arguments = {
        'global': flexmock(monitoring_verbosity=1, dry_run=False),
        'mount': flexmock(
            repository=flexmock(),
            archive=flexmock(),
            mount_point=flexmock(),
            paths=flexmock(),
            foreground=flexmock(),
            options=flexmock(),
        ),
    }

    list(
        module.run_actions(
            arguments=arguments,
            config_filename='test.yaml',
            location={'repositories': ['repo']},
            storage={},
            retention={},
            consistency={},
            hooks={},
            local_path=None,
            remote_path=None,
            local_borg_version=None,
            repository_path='repo',
        )
    )


def test_run_actions_does_not_raise_for_list_action():
    flexmock(module.validate).should_receive('repositories_match').and_return(True)
    flexmock(module.borg_list).should_receive('resolve_archive_name').and_return(flexmock())
    flexmock(module.borg_list).should_receive('list_archives')
    arguments = {
        'global': flexmock(monitoring_verbosity=1, dry_run=False),
        'list': flexmock(repository=flexmock(), archive=flexmock(), json=flexmock()),
    }

    list(
        module.run_actions(
            arguments=arguments,
            config_filename='test.yaml',
            location={'repositories': ['repo']},
            storage={},
            retention={},
            consistency={},
            hooks={},
            local_path=None,
            remote_path=None,
            local_borg_version=None,
            repository_path='repo',
        )
    )


def test_run_actions_does_not_raise_for_info_action():
    flexmock(module.validate).should_receive('repositories_match').and_return(True)
    flexmock(module.borg_list).should_receive('resolve_archive_name').and_return(flexmock())
    flexmock(module.borg_info).should_receive('display_archives_info')
    arguments = {
        'global': flexmock(monitoring_verbosity=1, dry_run=False),
        'info': flexmock(repository=flexmock(), archive=flexmock(), json=flexmock()),
    }

    list(
        module.run_actions(
            arguments=arguments,
            config_filename='test.yaml',
            location={'repositories': ['repo']},
            storage={},
            retention={},
            consistency={},
            hooks={},
            local_path=None,
            remote_path=None,
            local_borg_version=None,
            repository_path='repo',
        )
    )


def test_run_actions_does_not_raise_for_borg_action():
    flexmock(module.validate).should_receive('repositories_match').and_return(True)
    flexmock(module.borg_list).should_receive('resolve_archive_name').and_return(flexmock())
    flexmock(module.borg_borg).should_receive('run_arbitrary_borg')
    arguments = {
        'global': flexmock(monitoring_verbosity=1, dry_run=False),
        'borg': flexmock(repository=flexmock(), archive=flexmock(), options=flexmock()),
    }

    list(
        module.run_actions(
            arguments=arguments,
            config_filename='test.yaml',
            location={'repositories': ['repo']},
            storage={},
            retention={},
            consistency={},
            hooks={},
            local_path=None,
            remote_path=None,
            local_borg_version=None,
            repository_path='repo',
        )
    )
def test_load_configurations_collects_parsed_configurations():
    configuration = flexmock()
    other_configuration = flexmock()
@@ -197,6 +679,15 @@ def test_load_configurations_collects_parsed_configurations():
    assert logs == []


def test_load_configurations_logs_warning_for_permission_error():
    flexmock(module.validate).should_receive('parse_configuration').and_raise(PermissionError)

    configs, logs = tuple(module.load_configurations(('test.yaml',)))

    assert configs == {}
    assert {log.levelno for log in logs} == {logging.WARNING}


def test_load_configurations_logs_critical_for_parse_error():
    flexmock(module.validate).should_receive('parse_configuration').and_raise(ValueError)
@@ -214,48 +705,46 @@ def test_log_record_with_suppress_does_not_raise():
    module.log_record(levelno=1, foo='bar', baz='quux', suppress_log=True)


def test_log_error_records_generates_output_logs_for_message_only():
    flexmock(module).should_receive('log_record').replace_with(dict)

    logs = tuple(module.log_error_records('Error'))

    assert {log['levelno'] for log in logs} == {logging.CRITICAL}


def test_log_error_records_generates_output_logs_for_called_process_error():
    flexmock(module).should_receive('log_record').replace_with(dict)
    flexmock(module.logger).should_receive('getEffectiveLevel').and_return(logging.WARNING)

    logs = tuple(
        module.log_error_records('Error', subprocess.CalledProcessError(1, 'ls', 'error output'))
    )

    assert {log['levelno'] for log in logs} == {logging.CRITICAL}
    assert any(log for log in logs if 'error output' in str(log))


def test_log_error_records_generates_logs_for_value_error():
    flexmock(module).should_receive('log_record').replace_with(dict)

    logs = tuple(module.log_error_records('Error', ValueError()))

    assert {log['levelno'] for log in logs} == {logging.CRITICAL}


def test_log_error_records_generates_logs_for_os_error():
    flexmock(module).should_receive('log_record').replace_with(dict)

    logs = tuple(module.log_error_records('Error', OSError()))

    assert {log['levelno'] for log in logs} == {logging.CRITICAL}


def test_log_error_records_generates_nothing_for_other_error():
    flexmock(module).should_receive('log_record').replace_with(dict)

    logs = tuple(module.log_error_records('Error', KeyError()))

    assert logs == ()
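
The renamed log_error_records generator's signature can be read off these call sites: log_error_records(message, error=None, levelno=logging.CRITICAL, log_command_error_output=False). Below is a sketch consistent with the assertions above, not borgmatic's actual implementation; in particular, exactly how log_command_error_output gates the captured output is an assumption here:

import logging
import subprocess


def log_error_records(message, error=None, levelno=logging.CRITICAL, log_command_error_output=False):
    # Yield log records describing an error, or just the message if there is none.
    if error is None:
        yield logging.makeLogRecord(dict(levelno=levelno, msg=message))
        return

    if isinstance(error, subprocess.CalledProcessError):
        yield logging.makeLogRecord(dict(levelno=levelno, msg=message))
        if error.output:
            # Surface the command's captured output in the summary records.
            yield logging.makeLogRecord(dict(levelno=levelno, msg=error.output))
    elif isinstance(error, (ValueError, OSError)):
        yield logging.makeLogRecord(dict(levelno=levelno, msg=message))
        yield logging.makeLogRecord(dict(levelno=levelno, msg=str(error)))
    # Any other exception type intentionally yields nothing, per
    # test_log_error_records_generates_nothing_for_other_error.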
@@ -312,7 +801,7 @@ def test_collect_configuration_run_summary_logs_extract_with_repository_error():
        ValueError
    )
    expected_logs = (flexmock(),)
    flexmock(module).should_receive('log_error_records').and_return(expected_logs)
    arguments = {'extract': flexmock(repository='repo')}

    logs = tuple(
@@ -339,7 +828,7 @@ def test_collect_configuration_run_summary_logs_mount_with_repository_error():
        ValueError
    )
    expected_logs = (flexmock(),)
    flexmock(module).should_receive('log_error_records').and_return(expected_logs)
    arguments = {'mount': flexmock(repository='repo')}

    logs = tuple(
@@ -352,7 +841,7 @@ def test_collect_configuration_run_summary_logs_mount_with_repository_error():
def test_collect_configuration_run_summary_logs_missing_configs_error():
    arguments = {'global': flexmock(config_paths=[])}
    expected_logs = (flexmock(),)
    flexmock(module).should_receive('log_error_records').and_return(expected_logs)

    logs = tuple(module.collect_configuration_run_summary_logs({}, arguments=arguments))
@@ -362,7 +851,7 @@ def test_collect_configuration_run_summary_logs_missing_configs_error():
def test_collect_configuration_run_summary_logs_pre_hook_error():
    flexmock(module.command).should_receive('execute_hook').and_raise(ValueError)
    expected_logs = (flexmock(),)
    flexmock(module).should_receive('log_error_records').and_return(expected_logs)
    arguments = {'create': flexmock(), 'global': flexmock(monitoring_verbosity=1, dry_run=False)}

    logs = tuple(
@@ -376,7 +865,7 @@ def test_collect_configuration_run_summary_logs_post_hook_error():
    flexmock(module.command).should_receive('execute_hook').and_return(None).and_raise(ValueError)
    flexmock(module).should_receive('run_configuration').and_return([])
    expected_logs = (flexmock(),)
    flexmock(module).should_receive('log_error_records').and_return(expected_logs)
    arguments = {'create': flexmock(), 'global': flexmock(monitoring_verbosity=1, dry_run=False)}

    logs = tuple(
@@ -391,7 +880,7 @@ def test_collect_configuration_run_summary_logs_for_list_with_archive_and_reposi
        ValueError
    )
    expected_logs = (flexmock(),)
    flexmock(module).should_receive('log_error_records').and_return(expected_logs)
    arguments = {'list': flexmock(repository='repo', archive='test')}

    logs = tuple(
@@ -417,7 +906,7 @@ def test_collect_configuration_run_summary_logs_run_configuration_error():
    flexmock(module).should_receive('run_configuration').and_return(
        [logging.makeLogRecord(dict(levelno=logging.CRITICAL, levelname='CRITICAL', msg='Error'))]
    )
    flexmock(module).should_receive('log_error_records').and_return([])
    arguments = {}

    logs = tuple(
@@ -431,7 +920,7 @@ def test_collect_configuration_run_summary_logs_run_umount_error():
    flexmock(module.validate).should_receive('guard_configuration_contains_repository')
    flexmock(module).should_receive('run_configuration').and_return([])
    flexmock(module.borg_umount).should_receive('unmount_archive').and_raise(OSError)
    flexmock(module).should_receive('log_error_records').and_return(
        [logging.makeLogRecord(dict(levelno=logging.CRITICAL, levelname='CRITICAL', msg='Error'))]
    )
    arguments = {'umount': flexmock(mount_point='/mnt')}
@@ -447,7 +936,9 @@ def test_collect_configuration_run_summary_logs_outputs_merged_json_results():
    flexmock(module).should_receive('run_configuration').and_return(['foo', 'bar']).and_return(
        ['baz']
    )
    stdout = flexmock()
    stdout.should_receive('write').with_args('["foo", "bar", "baz"]').once()
    flexmock(module.sys).stdout = stdout
    arguments = {}

    tuple(

View File

@@ -12,7 +12,7 @@ Parsed_config = namedtuple('Parsed_config', ('location', 'storage', 'retention',
def test_convert_section_generates_integer_value_for_integer_type_in_schema():
    flexmock(module.yaml.comments).should_receive('CommentedMap').replace_with(OrderedDict)
    source_section_config = OrderedDict([('check_last', '3')])
    section_schema = {'type': 'object', 'properties': {'check_last': {'type': 'integer'}}}

    destination_config = module._convert_section(source_section_config, section_schema)
@@ -21,7 +21,7 @@ def test_convert_section_generates_integer_value_for_integer_type_in_schema():
def test_convert_legacy_parsed_config_transforms_source_config_to_mapping():
    flexmock(module.yaml.comments).should_receive('CommentedMap').replace_with(OrderedDict)
    flexmock(module.generate).should_receive('add_comments_to_configuration_object')
    source_config = Parsed_config(
        location=OrderedDict([('source_directories', '/home'), ('repository', 'hostname.borg')]),
        storage=OrderedDict([('encryption_passphrase', 'supersecret')]),
@@ -29,7 +29,10 @@ def test_convert_legacy_parsed_config_transforms_source_config_to_mapping():
        consistency=OrderedDict([('checks', 'repository')]),
    )
    source_excludes = ['/var']
    schema = {
        'type': 'object',
        'properties': defaultdict(lambda: {'type': 'object', 'properties': {}}),
    }

    destination_config = module.convert_legacy_parsed_config(source_config, source_excludes, schema)
@@ -54,7 +57,7 @@ def test_convert_legacy_parsed_config_transforms_source_config_to_mapping():
def test_convert_legacy_parsed_config_splits_space_separated_values():
    flexmock(module.yaml.comments).should_receive('CommentedMap').replace_with(OrderedDict)
    flexmock(module.generate).should_receive('add_comments_to_configuration_object')
    source_config = Parsed_config(
        location=OrderedDict(
            [('source_directories', '/home /etc'), ('repository', 'hostname.borg')]
@@ -64,7 +67,10 @@ def test_convert_legacy_parsed_config_splits_space_separated_values():
        consistency=OrderedDict([('checks', 'repository archives')]),
    )
    source_excludes = ['/var']
    schema = {
        'type': 'object',
        'properties': defaultdict(lambda: {'type': 'object', 'properties': {}}),
    }

    destination_config = module.convert_legacy_parsed_config(source_config, source_excludes, schema)
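
These conversion-test changes track the configuration schema's migration from pykwalify-style keywords to JSON-Schema-style ones ('map' becomes 'type': 'object' with 'properties', and 'int' becomes 'integer'), alongside the add_comments_to_configuration_map to add_comments_to_configuration_object rename. Side by side, using the fragment from the first test above:

# pykwalify-style schema fragment (before):
old_section_schema = {'map': {'check_last': {'type': 'int'}}}

# JSON-Schema-style equivalent (after):
new_section_schema = {'type': 'object', 'properties': {'check_last': {'type': 'integer'}}}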

Some files were not shown because too many files have changed in this diff.