Compare commits

...

199 Commits
1.8.1 ... main

Author SHA1 Message Date
Dan Helfman 7f735cbe59 Fix a traceback with "check --only spot" when the "spot" check is unconfigured (#857).
build / test (push) Successful in 7m42s Details
build / docs (push) Successful in 2m10s Details
2024-04-24 16:12:58 -07:00
Dan Helfman a690ea4016 Add Healthchecks auto-provisioning to NEWS (#815).
build / test (push) Successful in 5m49s Details
build / docs (push) Successful in 2m16s Details
2024-04-23 09:25:29 -07:00
Dan Helfman 7a110c7acd Add Healthchecks auto-provisioning (#815).
build / docs (push) Blocked by required conditions Details
build / test (push) Has been cancelled Details
Reviewed-on: #852
Reviewed-by: Dan Helfman <witten@torsion.org>
2024-04-23 16:23:26 +00:00
estebanthilliez 407bb33359 Fix schema.yaml to comply with maximum line length 2024-04-22 20:47:03 +02:00
estebanthilliez 4b7f7bba04 Issue warning if using UUID URL scheme with create_slug 2024-04-22 20:45:36 +02:00
estebanthilliez cfdc0a1f2a Fix Healthchecks UUID regex 2024-04-22 20:44:31 +02:00
Dan Helfman f926055e67 Fix a traceback when the "data" consistency check is used (#854).
build / test (push) Successful in 7m36s Details
build / docs (push) Successful in 2m26s Details
2024-04-21 14:55:02 -07:00
Dan Helfman 058af95d70 Document limitation about using database hooks and "one_file_system" (#853).
build / test (push) Successful in 4m20s Details
build / docs (push) Successful in 52s Details
2024-04-20 14:53:41 -07:00
Dan Helfman 54facdc391 Clarify Apprise states configuration.
build / test (push) Successful in 6m2s Details
build / docs (push) Successful in 1m29s Details
2024-04-20 08:26:06 -07:00
estebanthi 2e4c0cc7e7 Support for Healthchecks auto-provisioning 2024-04-19 10:43:45 +02:00
Dan Helfman cb2fd7c5e8 Fix lack of file extraction when using "extract --strip-components all" on a path with a leading slash (#851).
build / test (push) Successful in 6m0s Details
build / docs (push) Successful in 1m30s Details
2024-04-17 16:50:09 -07:00
Dan Helfman 94133cc8b1 Add note about running spot check on a separate schedule (#656).
build / test (push) Successful in 4m15s Details
build / docs (push) Successful in 52s Details
2024-04-16 10:57:34 -07:00
Dan Helfman dcec89be90 Wording tweak (#656).
build / test (push) Successful in 4m17s Details
build / docs (push) Has been cancelled Details
2024-04-16 10:52:56 -07:00
Dan Helfman fefd5d1d0e Wording tweak (#656).
build / docs (push) Blocked by required conditions Details
build / test (push) Has been cancelled Details
2024-04-16 10:50:37 -07:00
Dan Helfman 163c37d77f Bump version for release. 2024-04-16 10:43:35 -07:00
Dan Helfman b0e49ebce0 When "--match-archives *" is used with "check" action, don't skip Borg's orphaned objects check (#779).
build / test (push) Successful in 4m17s Details
build / docs (push) Successful in 49s Details
2024-04-16 10:38:14 -07:00
Dan Helfman 7e51c41ebf Mask the password when logging a MongoDB dump or restore command (#848).
build / test (push) Successful in 5m55s Details
build / docs (push) Successful in 1m28s Details
2024-04-16 10:20:15 -07:00
Dan Helfman f9182514d8 Add spot consistency check (#656).
build / test (push) Successful in 4m18s Details
build / docs (push) Successful in 1m28s Details
Reviewed-on: #849
2024-04-15 21:25:50 +00:00
Dan Helfman 7700b87b60 Test requirements security upgrade.
build / test (push) Failing after 3m19s Details
build / docs (push) Has been skipped Details
2024-04-15 14:21:01 -07:00
Dan Helfman 75bdbe6087 Spot check documentation and edge case tweaks (#656). 2024-04-15 14:18:42 -07:00
Dan Helfman d243a8c836 Add spot check documentation (#656). 2024-04-15 12:51:07 -07:00
Dan Helfman 4c2eb2bfe3 Spot check basically complete other than docs (#656). 2024-04-15 11:02:05 -07:00
Dan Helfman 89ce060dbd Merge branch 'main' into spot-check 2024-04-05 12:25:50 -07:00
Dan Helfman ad7dcb4615 Fix "--json" error when Borg includes non-JSON warnings in JSON output (#847).
build / test (push) Failing after 3m16s Details
build / docs (push) Has been skipped Details
2024-04-05 12:23:50 -07:00
Dan Helfman 6680aece5a Split out (most of) command construction from create_archive() in preparation for reuse in spot check (#656). 2024-04-04 14:23:56 -07:00
Dan Helfman 57eb93760f Merge branch 'main' into spot-check 2024-03-22 11:27:24 -07:00
Dan Helfman f21a2c06e3 Add documentation link to Loki dashboard for borgmatic (#843).
build / test (push) Successful in 6m37s Details
build / docs (push) Successful in 1m29s Details
2024-03-22 11:25:33 -07:00
Dan Helfman 2212539cb0 Merge branch 'main' into spot-check. 2024-03-20 12:01:52 -07:00
Dan Helfman 36d10fecb1 Upgrade black in test requirements.
build / test (push) Successful in 5m56s Details
build / docs (push) Successful in 1m28s Details
2024-03-20 12:01:24 -07:00
Dan Helfman 3ecd0e731e Initial work on spot check schema and preparatory refactoring (#656). 2024-03-20 11:58:59 -07:00
Dan Helfman ecf5a7e294 When a command hook exits with a soft failure, ping the log and finish states for any configured monitoring hooks (#842).
build / test (push) Successful in 6m0s Details
build / docs (push) Successful in 1m28s Details
2024-03-18 23:15:28 -07:00
Dan Helfman 893fbcf9ff Add documentation about backing up containerized databases by configuring borgmatic to exec into a container to run a dump command.
build / test (push) Successful in 5m57s Details
build / docs (push) Successful in 1m29s Details
2024-03-14 18:00:52 -07:00
Dan Helfman f8f6560502 Fix handling of the NO_COLOR environment variable to ignore an empty value (#835).
build / test (push) Successful in 6m0s Details
build / docs (push) Successful in 1m29s Details
2024-03-13 09:35:19 -07:00
Dan Helfman 8c301ba688 Bump version for release.
build / test (push) Successful in 5m15s Details
build / docs (push) Successful in 1m5s Details
2024-03-11 13:27:08 -07:00
Dan Helfman 035e96156a Add an "access_token" option to the ntfy monitoring hook for authenticating without username/password (#811).
build / test (push) Successful in 5m2s Details
build / docs (push) Successful in 1m10s Details
2024-03-11 12:48:58 -07:00
Dan Helfman a08c7fc77a When running the "rcreate" action and the repository already exists but with a different encryption mode than requested, error (#840).
build / test (push) Successful in 4m55s Details
build / docs (push) Successful in 1m0s Details
2024-03-11 11:24:36 -07:00
Dan Helfman cf9e387811 Document a potentially breaking shell quoting edge case within error hooks (#839).
build / test (push) Successful in 6m43s Details
build / docs (push) Successful in 1m44s Details
2024-03-11 10:42:51 -07:00
Dan Helfman e37224606a Clarify dev-CI parity.
build / test (push) Successful in 5m12s Details
build / docs (push) Successful in 1m13s Details
2024-03-10 19:14:18 -07:00
Dan Helfman 9647301b99 Add log sending for the Apprise logging hook, enabled by default.
build / test (push) Successful in 7m4s Details
build / docs (push) Successful in 1m53s Details
2024-03-10 16:18:49 -07:00
Dan Helfman a0e5dbff96 Remove list of command in Bash script.
build / test (push) Successful in 5m5s Details
build / docs (push) Successful in 1m11s Details
2024-03-06 21:24:44 -08:00
Dan Helfman 86117edccf Remove build.torsion.org references from documentation.
build / test (push) Successful in 7m7s Details
build / docs (push) Successful in 1m56s Details
2024-03-06 20:01:32 -08:00
Dan Helfman 440f3eeb63 Remove Drone configuration/tests.
build / test (push) Successful in 5m1s Details
build / docs (push) Successful in 1m2s Details
2024-03-06 19:04:29 -08:00
Dan Helfman 181051eae1 Add new build server to NEWS.
build / test (push) Successful in 4m45s Details
build / docs (push) Successful in 2m5s Details
2024-03-06 18:52:27 -08:00
Dan Helfman ec0ee971ed Attempt to use secrets.
build / test (push) Successful in 4m46s Details
build / docs (push) Failing after 4s Details
2024-03-06 18:38:45 -08:00
Dan Helfman b83ffa0cf6 Attempt to fix trigger.
build / test (push) Successful in 4m45s Details
build / docs (push) Failing after 4s Details
2024-03-06 16:53:41 -08:00
Dan Helfman cf88665d37 Fix typo.
build / docs (push) Blocked by required conditions Details
build / test (push) Has been cancelled Details
2024-03-06 16:52:33 -08:00
Dan Helfman b233adba63 Fix build? 2024-03-06 16:51:49 -08:00
Dan Helfman 018f5e3315 Merge workflows, since Gitea doesn't yet support workflow_run. 2024-03-06 16:49:50 -08:00
Dan Helfman 284f26b49d Only run tests on pushes to main branch. 2024-03-06 16:40:39 -08:00
Dan Helfman 11b437794e Attempt to build documentation.
test / test (push) Successful in 4m53s Details
2024-03-06 16:38:34 -08:00
Dan Helfman 0665b50d57 Fixed debugging.
test / test (push) Successful in 4m40s Details
2024-03-06 16:17:12 -08:00
Dan Helfman 0586b80e5b More debugging.
test / test (push) Failing after 4m44s Details
2024-03-06 15:53:30 -08:00
Dan Helfman 272a7b4866 Actually kill other containers after tests finish.
test / test (push) Successful in 4m45s Details
2024-03-06 15:41:03 -08:00
Dan Helfman 98d4a59459 Another iteration.
test / test (push) Successful in 4m46s Details
2024-03-06 15:29:56 -08:00
Dan Helfman 744139cf97 Disable progress.
test / test (push) Has been cancelled Details
2024-03-06 15:21:45 -08:00
Dan Helfman 1339509e9b Flag order apparently matters to Docker Compose.
test / test (push) Waiting to run Details
2024-03-06 14:55:55 -08:00
Dan Helfman e14f61415b Fix spew in test script.
test / test (push) Failing after 2s Details
2024-03-06 14:54:53 -08:00
Dan Helfman 98cf8f7e20 Another try at exiting tests properly.
test / test (push) Has been cancelled Details
2024-03-06 14:42:06 -08:00
Dan Helfman 5f16b64639 Attempt to exit test containers on tests exit while also showing test output.
test / test (push) Failing after 3s Details
2024-03-06 14:39:23 -08:00
Dan Helfman fe62a81151 Add missing service name to test scripts.
test / test (push) Successful in 4m54s Details
2024-03-06 14:32:26 -08:00
Dan Helfman 585b1573ae Attempt to make containers stop after tests run.
test / test (push) Failing after 7s Details
2024-03-06 14:30:49 -08:00
Dan Helfman 141ba2771d Attempt to fix and debug read-only filesystem issue at build.
test / test (push) Has been cancelled Details
2024-03-06 11:10:20 -08:00
Dan Helfman a527f76d08 Add back checkout now that NodeJS is installed on the host.
continuous-integration/drone/push Build was killed Details
test / test (push) Has been cancelled Details
2024-03-06 08:49:53 -08:00
Dan Helfman a97c68b4c8 Debugging ls.
test / test (push) Failing after 0s Details
continuous-integration/drone/push Build was killed Details
2024-03-06 08:35:50 -08:00
Dan Helfman ef07005a75 Remove duplicative(?) checkout step.
test / test (push) Failing after 0s Details
continuous-integration/drone/push Build was killed Details
2024-03-06 08:35:05 -08:00
Dan Helfman 43c7c3b6be First attempt at using Gitea Actions to run tests.
test / test (push) Failing after 15s Details
continuous-integration/drone/push Build was killed Details
2024-03-06 08:32:55 -08:00
Dan Helfman 2f6ad9d173 Add NO_COLOR support to NEWS (#835).
continuous-integration/drone/push Build was killed Details
2024-03-04 13:49:54 -08:00
Dan Helfman 16bc0de3fb
Support for NO_COLOR environment variable (#835).
Merge pull request #82 from shivansh02/feature/support-no-color-env-var
2024-03-04 13:46:09 -08:00
shivansh02 458d157e62 NO_COLOR set to any value returns false 2024-03-05 00:15:52 +05:30
shivansh02 40c3a28620 support for NO_COLOR env var 2024-03-04 18:21:28 +05:30
Dan Helfman 60107f1ee8 Add custom dump/restore command options for MySQL and MariaDB (#311).
continuous-integration/drone/push Build was killed Details
2024-03-03 14:32:49 -08:00
Dan Helfman a1153a21fa
Custom dump command options for MySQL and MariaDB.
Merge pull request #81 from shivansh02/feature/custom-dump-restore-commands-mysql
2024-03-03 14:27:14 -08:00
shivansh02 b6cb7da98e custom dump commands for mariadb 2024-03-04 00:24:22 +05:30
shivansh02 9e3d19a406 custom commands escaped 2024-03-03 23:31:02 +05:30
shivansh02 2b755d8ade custom show command for mysql and schema description 2024-03-03 23:15:07 +05:30
shivansh02 925f99cfef custom dump command for mysql 2024-03-03 03:47:02 +05:30
Dan Helfman c9f20eb260 Fix "--override" values containing deprecated section headers not actually overriding configuration options under deprecated section headers (#829). 2024-02-15 21:12:42 -08:00
Dan Helfman f4744826fe When the "--json" flag is given, suppress console escape codes so as not to interfere with JSON output (#827).
continuous-integration/drone/push Build is passing Details
2024-02-11 17:44:43 -08:00
Dan Helfman 5586aab967 Clarify documentation about restoring a database: borgmatic does not create the database upon restore.
continuous-integration/drone/push Build is passing Details
2024-02-09 15:35:29 -08:00
Dan Helfman 6fa5dff79b Fix broken escaping logic for "pg_dump_command" (#822) + bonus shell injection fixes.
continuous-integration/drone/push Build is passing Details
continuous-integration/drone/tag Build is passing Details
2024-01-31 10:53:32 -08:00
Dan Helfman 75d11aa9cd Pass the PostgreSQL "PGSSLMODE" environment variable through to Borg (#370).
continuous-integration/drone/push Build is passing Details
2024-01-25 14:18:01 -08:00
Dan Helfman ad1d104d65 Fix broken repository detection in the "rcreate" action with Borg 1.4 (#820).
continuous-integration/drone/push Build is passing Details
2024-01-24 15:45:51 -08:00
Dan Helfman 009062128d Remove Python 3.8+ restriction, as only Python 3.8+ is supported. 2024-01-22 09:41:43 -08:00
Dan Helfman e9813d2539 Allow the "--repository" flag to match across multiple configuration files (#818). 2024-01-21 18:25:44 -08:00
Dan Helfman f9998b50e8 Rephrase documentation and link to docs on exit codes feature (#798).
continuous-integration/drone/push Build is passing Details
continuous-integration/drone/tag Build is passing Details
2024-01-21 14:47:21 -08:00
Dan Helfman 5f921a7f80 Add documentation heading (#798).
continuous-integration/drone/push Build is passing Details
2024-01-21 11:48:23 -08:00
Dan Helfman abf2b3a8c7 Elevate specific Borg warnings to errors or squash errors to warnings (#798).
continuous-integration/drone/push Build is passing Details
2024-01-21 11:34:40 -08:00
Dan Helfman 34f3c2bb16 Clarify "--override" command-line help (#814)
continuous-integration/drone/push Build is passing Details
2024-01-19 11:55:00 -08:00
Dan Helfman 4d79f582df Fix a traceback when providing an invalid "--override" value for a list option (#814).
continuous-integration/drone/push Build is passing Details
2024-01-18 10:39:40 -08:00
Dan Helfman 63198088c4 Store included configuration files within each backup archive in support of the "config bootstrap" action (#736).
continuous-integration/drone/push Build is passing Details
2024-01-09 13:47:20 -08:00
Dan Helfman 3c22a8ec16 Prevent various shell injection attacks (#810).
continuous-integration/drone/push Build is passing Details
2024-01-07 10:21:49 -08:00
Dan Helfman ca49109ce7 Bump version for release.
continuous-integration/drone/tag Build is passing Details
continuous-integration/drone/push Build is passing Details
2024-01-03 10:08:05 -08:00
Dan Helfman 6a7f71f92f Clarify prune action help concerning running compact afterwards (#808).
continuous-integration/drone/push Build is passing Details
2024-01-03 10:03:35 -08:00
Dan Helfman 5f3dc1cfb0 Stream SQLite databases directly to Borg instead of dumping to an intermediate file (#807).
continuous-integration/drone/push Build is passing Details
2023-12-31 11:07:59 -08:00
Dan Helfman f2023aed22 Fix typo.
continuous-integration/drone/push Build is passing Details
2023-12-30 15:48:55 -08:00
Dan Helfman a03c2744e5 Update docs/how-to/provide-your-passwords.md (#805).
continuous-integration/drone/push Build is passing Details
Reviewed-on: #805
Reviewed-by: Dan Helfman <witten@torsion.org>
2023-12-30 23:48:32 +00:00
axel simon 4176532317 Update docs/how-to/provide-your-passwords.md
Provide an explanation of encryption_passcommand.
Also, adjust headers for consistency.
2023-12-30 23:45:56 +00:00
Dan Helfman 9d6025e902 Validate the configured action names in the "skip_actions" option (#804).
continuous-integration/drone/push Build is passing Details
2023-12-28 20:07:57 -08:00
Dan Helfman cf739bc997 The "check --force" flag now runs checks even if "check" is in "skip_actions" (#802).
continuous-integration/drone/push Build is passing Details
2023-12-28 10:22:48 -08:00
Dan Helfman 84823dfb91 Clarify constants/placeholders interaction and improve examples (#763).
continuous-integration/drone/push Build is passing Details
2023-12-24 11:18:17 -08:00
Dan Helfman 20cf0f7089 Add an "--ssh-command" flag to the "config bootstrap" action (#767).
continuous-integration/drone/push Build is passing Details
2023-12-24 10:33:55 -08:00
Dan Helfman 67af0f5734 Document limitation with constant interpolation at the start of a value (#741).
continuous-integration/drone/push Build is passing Details
2023-12-22 21:39:44 -08:00
Dan Helfman e80e0a253c Add configured repository labels to the JSON output for all actions (#800). 2023-12-20 09:17:41 -08:00
Dan Helfman 72587a3b72 Merge branch 'main' of ssh://projects.torsion.org:3022/borgmatic-collective/borgmatic
continuous-integration/drone/push Build is passing Details
2023-12-04 11:17:59 -08:00
Dan Helfman 8b49a59aff Fix a traceback when the "repositories" option contains both strings and key/value pairs (#794). 2023-12-04 11:17:13 -08:00
Dan Helfman e120dff9ff Add debug message that logs borg version for every config (#714).
continuous-integration/drone/push Build is passing Details
Reviewed-on: #792
2023-11-25 03:59:40 +00:00
Tobias Hodapp 257678b66f Fixed borg -> Borg 2023-11-24 18:47:37 +01:00
Tobias Hodapp 422c5e32f4 Added debug message that logs borg version for every config 2023-11-23 11:46:10 +01:00
Dan Helfman c34ad7dde7 Update documentation about configuration includes and constants (#745).
continuous-integration/drone/push Build is passing Details
2023-11-19 21:22:10 -08:00
Dan Helfman fdb353d358 Bump version for release.
continuous-integration/drone/tag Build is passing Details
2023-11-19 21:14:56 -08:00
Dan Helfman 3b99f7c75a Constants support includes and command-line overrides (#745, #782) 2023-11-19 21:13:35 -08:00
Dan Helfman 8b9abc6cf8 Documentation clarifications (#791).
continuous-integration/drone/push Build is passing Details
2023-11-15 09:05:52 -08:00
Dan Helfman da034c316a Add another mention of "skip_actions" to the docs (#701).
continuous-integration/drone/push Build is passing Details
2023-11-08 18:22:17 -08:00
Dan Helfman 08d01d8bcd Documentation formatting.
continuous-integration/drone/push Build is passing Details
2023-11-08 17:57:31 -08:00
Dan Helfman eef69e23ee Document the possible units of times for a configured check frequency.
continuous-integration/drone/push Build is passing Details
2023-11-08 17:53:59 -08:00
Dan Helfman 26bb54a9dd Remove broken link in documentation (#786).
continuous-integration/drone/push Build is passing Details
2023-11-08 10:26:09 -08:00
Dan Helfman 715e2ac127 Add test support for Python 3.12.
continuous-integration/drone/push Build is passing Details
2023-11-07 10:17:55 -08:00
Dan Helfman f39cea4abf Remove additional Python 3.7-isms (#784).
continuous-integration/drone/push Build is passing Details
2023-11-07 10:17:16 -08:00
Dan Helfman 22101bdd49 Drop support for Python 3.7, which has been end-of-lifed (#784).
continuous-integration/drone/push Build is passing Details
2023-11-07 10:11:29 -08:00
Dan Helfman 13cf863d89 Fix tests (#783).
continuous-integration/drone/push Build is passing Details
2023-11-07 10:09:31 -08:00
Dan Helfman dcf25fa041 Upgrade ruamel.yaml dependency to support version 0.18.x (#783).
continuous-integration/drone/push Build is failing Details
2023-11-07 10:00:13 -08:00
Dan Helfman 12b75f9075 Update documentation about logging changes from version 1.8.3 (#665).
continuous-integration/drone/push Build is passing Details
2023-11-06 21:13:46 -08:00
Dan Helfman 9baf06a2f7
Fix typo.
continuous-integration/drone/push Build is passing Details
Typo
2023-11-04 08:56:39 -07:00
tdltdc 56302e22cd
Typo 2023-11-04 15:05:19 +01:00
Dan Helfman 6cc93c4eb9 Fix environment variable interpolation within configured repository paths (#782).
continuous-integration/drone/push Build is passing Details
2023-11-03 21:16:04 -07:00
Dan Helfman 2da43239f6 Fix docs: minor typos (#781).
continuous-integration/drone/push Build is passing Details
Reviewed-on: #781
2023-11-03 00:59:29 +00:00
debuglevel 4beef36d3c Update docs/how-to/inspect-your-backups.md 2023-11-02 23:14:31 +00:00
debuglevel eacfbd742b Typo 2023-11-02 23:13:45 +00:00
debuglevel 82a85986b6 Typo 2023-11-02 22:57:46 +00:00
Dan Helfman ef448e2dd1 Add a "skip_actions" option to skip running particular actions (#701).
continuous-integration/drone/push Build is passing Details
2023-10-31 21:54:41 -07:00
Dan Helfman c3efe1b90e Only parse "--override" values as complex data types when they're for options of those types (#779).
continuous-integration/drone/push Build is passing Details
2023-10-29 19:02:28 -07:00
Dan Helfman d85c1ee216 Correct changelog addition (#779).
continuous-integration/drone/push Build is passing Details
2023-10-29 16:25:40 -07:00
Dan Helfman b47088067c Add a "--match-archives" flag to the "check" action (#779).
continuous-integration/drone/push Build is passing Details
2023-10-29 16:22:39 -07:00
Dan Helfman c5732aa4fc Fix home page CSS layout to prevent overflow at certain window widths (#777).
continuous-integration/drone/push Build is passing Details
2023-10-27 14:12:35 -07:00
Dan Helfman a0323d9d6c Bump version for release.
continuous-integration/drone/tag Build is passing Details
continuous-integration/drone/push Build is passing Details
2023-10-26 22:20:26 -07:00
Dan Helfman 8ad7b473f1 When an archive filter causes no matching archives for the "rlist" or "info" actions, warn (#748).
continuous-integration/drone/push Build is passing Details
2023-10-26 22:12:13 -07:00
Dan Helfman 895a0ccb3c Upgrade to tox 4. (Now a minimum requirement.)
continuous-integration/drone/push Build is passing Details
2023-10-23 17:39:27 -07:00
Dan Helfman 257ab77bea Disallow the "--dry-run" flag with the "borg" action (#774).
continuous-integration/drone/push Build is passing Details
2023-10-23 17:23:04 -07:00
Dan Helfman dccaa4014b
Update systemd service example with better filesystem protection options.
continuous-integration/drone/push Build is passing Details
Merge pull request #78 from Alphix/update-systemd-service
2023-10-15 08:59:39 -07:00
David Härdeman 2f3c0bec5b Update systemd .service example
First, ProtectSystem=strict will make the entire file system hierarchy (except
/dev, /proc/ and /sys) read-only, so separate ReadOnlyPaths= is not necessary.

Second, ProtectHome=tmpfs will not just mount an empty tmpfs on /root, but also
on /home and /run/user. As it's likely quite common to want to back up /home,
this seems like a footgun.

Finally, it's quite likely that borgbackup will want access to root's SSH keys
in order to connect to remote backup servers.

Note that all these options are commented out by default, so this is more of
a documentation change than any real change in functionality.
2023-10-15 11:30:11 +02:00
Dan Helfman 487d8ffd32 Fix normalization of deprecated sections to support empty sections without erroring (#771).
continuous-integration/drone/push Build is passing Details
2023-10-14 13:04:18 -07:00
Dan Helfman 30523a7c89 Update home page example of Healthchecks configuration not to use deprecated config.
continuous-integration/drone/push Build is passing Details
2023-10-11 12:56:21 -07:00
Dan Helfman 77b1907d03 Update Healthchecks deprecation warning message for clarity.
continuous-integration/drone/push Build is passing Details
2023-10-11 12:17:57 -07:00
Dan Helfman 09594c85bf Be more explicit in documentation that you don't have to use an environment variable for passphrases.
continuous-integration/drone/push Build is passing Details
2023-10-10 09:34:55 -07:00
Dan Helfman e07efdf68f Add documentation note about using includes for specifying passphrases (#769).
continuous-integration/drone/push Build is passing Details
2023-10-10 09:16:58 -07:00
Dan Helfman 1fed44f905 Add documentation note about sudo and sudoers "secure_path" option (#757). 2023-10-09 14:15:54 -07:00
Dan Helfman c687dafdd2 Fix a traceback when an invalid command-line flag or action is used (#768).
continuous-integration/drone/push Build is passing Details
2023-10-06 21:00:23 -07:00
Dan Helfman 3eff2c4248 Add Grafana Loki badge to integrations documentation.
continuous-integration/drone/push Build is passing Details
2023-10-05 09:06:06 -07:00
Dan Helfman d94fdb6faf Add apprise logo to integrations in readme (#715).
continuous-integration/drone/push Build is passing Details
Reviewed-on: #765
2023-10-05 15:51:04 +00:00
Pim Kunis a83282faf0 add apprise logo to integrations in readme 2023-10-05 15:38:32 +02:00
Dan Helfman e7169f6fb2 Upgrade certifi test dependency to fix security alert.
continuous-integration/drone/push Build is passing Details
2023-10-04 22:59:15 -07:00
Dan Helfman 9587fc2366 Update Apprise documentation to use sudo for pipx install (#715).
continuous-integration/drone/push Build is passing Details
2023-10-04 22:54:11 -07:00
Dan Helfman 5f06884d5a Fix Apprise/PyYAML end-to-end test breakage (#715).
continuous-integration/drone/push Build encountered an error Details
2023-10-04 22:51:05 -07:00
Dan Helfman f011431463 Apprise hook documentation (#715).
continuous-integration/drone/push Build encountered an error Details
2023-10-04 19:23:53 -07:00
Dan Helfman 9e14f209f1 Merge branch 'main' of ssh://projects.torsion.org:3022/borgmatic-collective/borgmatic
continuous-integration/drone/push Build is failing Details
2023-10-04 14:58:48 -07:00
Dan Helfman 9d34d2eec5 Support for Apprise (#759).
continuous-integration/drone/push Build is failing Details
Reviewed-on: #759
2023-10-04 21:58:20 +00:00
Pim Kunis 7a9625cd44 fix PR comments 2023-10-04 13:19:40 +02:00
Pim Kunis 4763c323d0 add unit tests for apprise hook 2023-10-01 16:59:59 +00:00
Pim Kunis eaa22be3db fix PR comments 2023-10-01 16:59:59 +00:00
Pim Kunis a587e207f9 pin Apprise dependencies for test requirements 2023-10-01 16:59:59 +00:00
Pim Kunis db8079b699 fix typo in setup.py
handle if apprise cannot be imported
2023-10-01 16:59:59 +00:00
Pim Kunis 5a989826a1 convert map to list for apprise function call
fix apprise config schema
remove apprise from required dependencies
2023-10-01 16:59:59 +00:00
Pim Kunis 21f4266273 incorporate PR review comments 2023-10-01 16:59:59 +00:00
Pim Kunis e7252c7545 remove comments about tags 2023-10-01 16:59:59 +00:00
Pim Kunis 86011c8418 default apprise notify type per borgmatic state 2023-10-01 16:59:59 +00:00
Pim Kunis f3295ccb4a add support for apprise 2023-10-01 16:59:59 +00:00
Dan Helfman cacb81f086 Bump version for release. 2023-09-30 13:37:10 -07:00
Dan Helfman 06c2154e6a Build docs regardless of Drone "event" (push, etc.).
continuous-integration/drone/push Build was killed Details
continuous-integration/drone Build is passing Details
2023-09-29 19:49:09 -07:00
Dan Helfman ac1e1a9407 Simplify logging logic (#665).
continuous-integration/drone/push Build was killed Details
continuous-integration/drone Build is passing Details
2023-09-29 14:16:47 -07:00
Dan Helfman 10933fd55b Fix for borgmatic not stopping Borg immediately when the user presses ctrl-C (#761).
continuous-integration/drone/push Build was killed Details
2023-09-27 08:52:00 -07:00
Dan Helfman af422ad705 Add documentation note about upgrading multiple pipx installations of borgmatic.
continuous-integration/drone/push Build was killed Details
2023-09-18 13:46:41 -07:00
Dan Helfman d9d35491fb Fix tense typo.
continuous-integration/drone/push Build is passing Details
2023-09-17 23:25:57 -07:00
Dan Helfman b540e63c0e Updated documentation so "sudo borgmatic" works for pipx borgmatic installations (#757).
continuous-integration/drone Build was killed Details
2023-09-17 22:46:33 -07:00
Dan Helfman 5a56208922 Fix documentation typo.
continuous-integration/drone/push Build is passing Details
2023-09-15 10:18:35 -07:00
Dan Helfman 5912769273 Fix error handling to log command output as one record per line (#754).
continuous-integration/drone/push Build is passing Details
2023-09-14 21:10:52 -07:00
Dan Helfman bac2aabe66 Attempt to unbreak ticket filing.
continuous-integration/drone/push Build is passing Details
2023-09-12 09:50:38 -07:00
Dan Helfman 9f3328781b When "archive_name_format" is not set, filter archives using the default archive name format (#753).
continuous-integration/drone/push Build is passing Details
2023-09-06 23:13:40 -07:00
Dan Helfman 0205748db8 Update documentation to recommend installing/upgrading borgmatic with pipx instead of pip.
continuous-integration/drone/push Build is passing Details
2023-09-04 16:25:10 -07:00
Dan Helfman d0a8251ad2 Add borgmatic version introducing Loki hook to docs (#743).
continuous-integration/drone/push Build is passing Details
2023-08-27 20:30:13 -07:00
Dan Helfman 32019ea8f3 Add documentation for Grafana Loki hook (#743).
continuous-integration/drone/push Build is passing Details
2023-08-25 10:52:00 -07:00
Dan Helfman fa9a061033 Merge branch 'main' of ssh://projects.torsion.org:3022/borgmatic-collective/borgmatic 2023-08-25 09:29:02 -07:00
Dan Helfman b3d2560563 Added support for grafana loki (#743).
continuous-integration/drone/push Build is passing Details
Reviewed-on: #747
2023-08-25 16:28:19 +00:00
Dan Helfman 4b4f56da42 Fix another database -> data source instance (#685). 2023-08-24 14:00:29 -07:00
Dan Helfman b96d1898f7 Prep work for eventual container-dumping hooks: Generalize internal database hook "API" (#685).
continuous-integration/drone/push Build is passing Details
2023-08-24 13:50:10 -07:00
Tobias Hodapp 099a712e53 Added more documentation to the test
Split tests to integration tests
2023-08-24 13:17:42 +02:00
Tobias Hodapp 9e2674ea5a Added unit tests
Removed useless dry run check
2023-08-23 17:17:23 +02:00
Tobias Hodapp 7e419ec995 Fixed spelling errors
Added documentation
Added log messages for dry run
2023-08-22 23:03:14 +02:00
Tobias Hodapp a3edf757ee Added changes of formatting tools 2023-08-22 13:40:05 +02:00
Tobias Hodapp e576403b64 Added support for grafana loki 2023-08-22 03:13:39 +02:00
Dan Helfman 7313430178 Make warning about sections a little more explicit (#721).
continuous-integration/drone/push Build is passing Details
2023-08-19 22:51:20 -07:00
Dan Helfman 962daaa8b9 Bump version for release.
continuous-integration/drone/push Build is passing Details
continuous-integration/drone/tag Build is passing Details
2023-08-14 12:54:38 -07:00
Dan Helfman cd51e9c1ea Fix for database "restore" action not actually restoring anything (#738).
continuous-integration/drone/push Build is passing Details
2023-08-14 12:43:21 -07:00
Dan Helfman 6dca7c1c15 Add "key export" action to export a copy of the repository key (#345).
continuous-integration/drone/push Build is passing Details
2023-08-07 12:28:39 -07:00
Dan Helfman fd8c56c6be Add brief source code reference documentation.
continuous-integration/drone/push Build is passing Details
2023-08-06 23:44:31 -07:00
Dan Helfman 065057c966
Fix typos.
continuous-integration/drone/push Build is passing Details
Merge pull request #77 from hop/main
2023-08-05 17:19:57 -07:00
Christoph Schindler c04517f843 Fix typos. 2023-08-06 02:16:31 +02:00
Dan Helfman 5d80c366fb Fix "borg create" flags/argument interleaving.
continuous-integration/drone/push Build is passing Details
2023-08-04 20:02:09 -07:00
Dan Helfman 193dd93de2 Fork a MariaDB database hook from the MySQL database hook (#727).
continuous-integration/drone/push Build is passing Details
2023-08-04 13:22:44 -07:00
Dan Helfman 8a94b9e2f1 Mention "store_config_files" in docs (#725).
continuous-integration/drone/push Build is passing Details
2023-08-03 22:11:02 -07:00
155 changed files with 11805 additions and 5546 deletions

View File

@@ -1,86 +0,0 @@
---
kind: pipeline
name: python-3-8-alpine-3-13
services:
- name: postgresql
image: docker.io/postgres:13.1-alpine
environment:
POSTGRES_PASSWORD: test
POSTGRES_DB: test
- name: postgresql2
image: docker.io/postgres:13.1-alpine
environment:
POSTGRES_PASSWORD: test2
POSTGRES_DB: test
POSTGRES_USER: postgres2
commands:
- docker-entrypoint.sh -p 5433
- name: mysql
image: docker.io/mariadb:10.5
environment:
MYSQL_ROOT_PASSWORD: test
MYSQL_DATABASE: test
- name: mysql2
image: docker.io/mariadb:10.5
environment:
MYSQL_ROOT_PASSWORD: test2
MYSQL_DATABASE: test
commands:
- docker-entrypoint.sh --port=3307
- name: mongodb
image: docker.io/mongo:5.0.5
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: test
- name: mongodb2
image: docker.io/mongo:5.0.5
environment:
MONGO_INITDB_ROOT_USERNAME: root2
MONGO_INITDB_ROOT_PASSWORD: test2
commands:
- docker-entrypoint.sh --port=27018
clone:
skip_verify: true
steps:
- name: build
image: docker.io/alpine:3.13
environment:
TEST_CONTAINER: true
pull: always
commands:
- scripts/run-full-tests
---
kind: pipeline
name: documentation
type: exec
platform:
os: linux
arch: amd64
clone:
skip_verify: true
steps:
- name: build
environment:
USERNAME:
from_secret: docker_username
PASSWORD:
from_secret: docker_password
IMAGE_NAME: projects.torsion.org/borgmatic-collective/borgmatic:docs
commands:
- podman login --username "$USERNAME" --password "$PASSWORD" projects.torsion.org
- podman build --tag "$IMAGE_NAME" --file docs/Dockerfile --storage-opt "overlay.mount_program=/usr/bin/fuse-overlayfs" .
- podman push "$IMAGE_NAME"
trigger:
repo:
- borgmatic-collective/borgmatic
branch:
- main
event:
- push

View File

@@ -1 +1 @@
blank_issues_enabled: false
blank_issues_enabled: true

View File

@@ -0,0 +1,28 @@
name: build
run-name: ${{ gitea.actor }} is building
on:
push:
branches: [main]
jobs:
test:
runs-on: host
steps:
- uses: actions/checkout@v4
- run: scripts/run-end-to-end-tests
docs:
needs: [test]
runs-on: host
env:
IMAGE_NAME: projects.torsion.org/borgmatic-collective/borgmatic:docs
steps:
- uses: actions/checkout@v4
- run: podman login --username "$USERNAME" --password "$PASSWORD" projects.torsion.org
env:
USERNAME: "${{ secrets.REGISTRY_USERNAME }}"
PASSWORD: "${{ secrets.REGISTRY_PASSWORD }}"
- run: podman build --tag "$IMAGE_NAME" --file docs/Dockerfile --storage-opt "overlay.mount_program=/usr/bin/fuse-overlayfs" .
- run: podman push "$IMAGE_NAME"

NEWS (152 changed lines)
View File

@@ -1,3 +1,147 @@
1.8.11.dev0
* #815: Add optional Healthchecks auto-provisioning via "create_slug" option.
* #851: Fix lack of file extraction when using "extract --strip-components all" on a path with a
leading slash.
* #854: Fix a traceback when the "data" consistency check is used.
* #857: Fix a traceback with "check --only spot" when the "spot" check is unconfigured.
1.8.10
* #656 (beta): Add a "spot" consistency check that compares file counts and contents between your
source files and the latest archive, ensuring they fall within configured tolerances. This can
catch problems like incorrect excludes, inadvertent deletes, files changed by malware, etc. See
the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/deal-with-very-large-backups/#spot-check
* #779: When "--match-archives *" is used with "check" action, don't skip Borg's orphaned objects
check.
* #842: When a command hook exits with a soft failure, ping the log and finish states for any
configured monitoring hooks.
* #843: Add documentation link to Loki dashboard for borgmatic:
https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#loki-hook
* #847: Fix "--json" error when Borg includes non-JSON warnings in JSON output.
* #848: SECURITY: Mask the password when logging a MongoDB dump or restore command.
* Fix handling of the NO_COLOR environment variable to ignore an empty value.
* Add documentation about backing up containerized databases by configuring borgmatic to exec into
a container to run a dump command:
https://torsion.org/borgmatic/docs/how-to/backup-your-databases/#containers
1.8.9
* #311: Add custom dump/restore command options for MySQL and MariaDB.
* #811: Add an "access_token" option to the ntfy monitoring hook for authenticating
without username/password.
* #827: When the "--json" flag is given, suppress console escape codes so as not to
interfere with JSON output.
* #829: Fix "--override" values containing deprecated section headers not actually overriding
configuration options under deprecated section headers.
* #835: Add support for the NO_COLOR environment variable. See the documentation for more
information:
https://torsion.org/borgmatic/docs/how-to/set-up-backups/#colored-output
* #839: Add log sending for the Apprise logging hook, enabled by default. See the documentation for
more information:
https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#apprise-hook
* #839: Document a potentially breaking shell quoting edge case within error hooks:
https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#error-hooks
* #840: When running the "rcreate" action and the repository already exists but with a different
encryption mode than requested, error.
* Switch from Drone to Gitea Actions for continuous integration.
* Rename scripts/run-end-to-end-dev-tests to scripts/run-end-to-end-tests and use it in both dev
and CI for better dev-CI parity.
* Clarify documentation about restoring a database: borgmatic does not create the database upon
restore.
1.8.8
* #370: For the PostgreSQL hook, pass the "PGSSLMODE" environment variable through to Borg when the
database's configuration omits the "ssl_mode" option.
* #818: Allow the "--repository" flag to match across multiple configuration files.
* #820: Fix broken repository detection in the "rcreate" action with Borg 1.4. The issue did not
occur with other versions of Borg.
* #822: Fix broken escaping logic in the PostgreSQL hook's "pg_dump_command" option.
* SECURITY: Prevent additional shell injection attacks within the PostgreSQL hook.
1.8.7
* #736: Store included configuration files within each backup archive in support of the "config
bootstrap" action. Previously, only top-level configuration files were stored.
* #798: Elevate specific Borg warnings to errors or squash errors to warnings. See the
documentation for more information:
https://torsion.org/borgmatic/docs/how-to/customize-warnings-and-errors/
* #810: SECURITY: Prevent shell injection attacks within the PostgreSQL hook, the MongoDB hook, the
SQLite hook, the "borgmatic borg" action, and command hook variable/constant interpolation.
* #814: Fix a traceback when providing an invalid "--override" value for a list option.
1.8.6
* #767: Add an "--ssh-command" flag to the "config bootstrap" action for setting a custom SSH
command, as no configuration is available (including the "ssh_command" option) until
bootstrapping completes.
* #794: Fix a traceback when the "repositories" option contains both strings and key/value pairs.
* #800: Add configured repository labels to the JSON output for all actions.
* #802: The "check --force" flag now runs checks even if "check" is in "skip_actions".
* #804: Validate the configured action names in the "skip_actions" option.
* #807: Stream SQLite databases directly to Borg instead of dumping to an intermediate file.
* When logging commands that borgmatic executes, log the environment variables that
borgmatic sets for those commands. (But don't log their values, since they often contain
passwords.)
1.8.5
* #701: Add a "skip_actions" option to skip running particular actions, handy for append-only or
checkless configurations. See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/set-up-backups/#skipping-actions
* #701: Deprecate the "disabled" value for the "checks" option in favor of the new "skip_actions"
option.
* #745: Constants now apply to included configuration, not just the file doing the includes. As a
side effect of this change, constants no longer apply to option names and only substitute into
configuration values.
* #779: Add a "--match-archives" flag to the "check" action for selecting the archives to check,
overriding the existing "archive_name_format" and "match_archives" options in configuration.
* #779: Only parse "--override" values as complex data types when they're for options of those
types.
* #782: Fix environment variable interpolation within configured repository paths.
* #782: Add configuration constant overriding via the existing "--override" flag.
* #783: Upgrade ruamel.yaml dependency to support version 0.18.x.
* #784: Drop support for Python 3.7, which has been end-of-lifed.
1.8.4
* #715: Add a monitoring hook for sending backup status to a variety of monitoring services via the
Apprise library. See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#apprise-hook
* #748: When an archive filter causes no matching archives for the "rlist" or "info"
actions, warn the user and suggest how to remove the filter.
* #768: Fix a traceback when an invalid command-line flag or action is used.
* #771: Fix normalization of deprecated sections ("location:", "storage:", "hooks:", etc.) to
support empty sections without erroring.
* #774: Disallow the "--dry-run" flag with the "borg" action, as borgmatic can't guarantee the Borg
command won't have side effects.
1.8.3
* #665: BREAKING: Simplify logging logic as follows: Syslog verbosity is now disabled by
default, but setting the "--syslog-verbosity" flag enables it regardless of whether you're at an
interactive console. Additionally, "--log-file-verbosity" and "--monitoring-verbosity" now
default to 1 (info about steps borgmatic is taking) instead of 0. And both syslog logging and
file logging can be enabled simultaneously.
* #743: Add a monitoring hook for sending backup status and logs to Grafana Loki. See the
documentation for more information:
https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#loki-hook
* #753: When "archive_name_format" is not set, filter archives using the default archive name
format.
* #754: Fix error handling to log command output as one record per line instead of truncating
too-long output and swallowing the end of some Borg error messages.
* #757: Update documentation so "sudo borgmatic" works for pipx borgmatic installations.
* #761: Fix for borgmatic not stopping Borg immediately when the user presses ctrl-C.
* Update documentation to recommend installing/upgrading borgmatic with pipx instead of pip. See the
documentation for more information:
https://torsion.org/borgmatic/docs/how-to/set-up-backups/#installation
https://torsion.org/borgmatic/docs/how-to/upgrade/#upgrading-borgmatic
1.8.2
* #345: Add "key export" action to export a copy of the repository key for safekeeping in case
the original goes missing or gets damaged.
* #727: Add a MariaDB database hook that uses native MariaDB commands instead of the deprecated
MySQL ones. Be aware though that any existing backups made with the "mysql_databases:" hook are
only restorable with a "mysql_databases:" configuration.
* #738: Fix for potential data loss (data not getting restored) in which the database "restore"
action didn't actually restore anything and indicated success anyway.
* Remove the deprecated use of the MongoDB hook's "--db" flag for database restoration.
* Add source code reference documentation for getting oriented with the borgmatic code as a
developer: https://torsion.org/borgmatic/docs/reference/source-code/
1.8.1
* #326: Add documentation for restoring a database to an alternate host:
https://torsion.org/borgmatic/docs/how-to/backup-your-databases/#restore-to-an-alternate-host
@@ -26,10 +170,10 @@
"check --repair".
* When merging two configuration files, error gracefully if the two files do not adhere to the same
format.
* #721: Remove configuration sections ("location:", "storage:", "hooks:" etc.), while still keeping
deprecated support for them. Now, all options are at the same level, and you don't need to worry
about commenting/uncommenting section headers when you change an option (if you remove your
sections first).
* #721: Remove configuration sections ("location:", "storage:", "hooks:", etc.), while still
keeping deprecated support for them. Now, all options are at the same level, and you don't need
to worry about commenting/uncommenting section headers when you change an option (if you remove
your sections first).
* #721: BREAKING: The retention prefix and the consistency prefix can no longer have different
values (unless one is not set).
* #721: BREAKING: The storage umask and the hooks umask can no longer have different values (unless
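
To make the Healthchecks auto-provisioning entries above concrete, here is a minimal configuration sketch. Only the "ping_url" and "create_slug" option names come from the changelog entries and commits in this compare; the URL is a placeholder and the exact semantics are described in the borgmatic documentation.

```yaml
# Hypothetical sketch of Healthchecks auto-provisioning: with "create_slug" enabled,
# borgmatic provisions the check for a slug-style ping URL rather than requiring a
# pre-created UUID URL (using a UUID URL with create_slug triggers the warning added above).
healthchecks:
    ping_url: https://hc-ping.com/your-ping-key/your-backup-slug   # placeholder
    create_slug: true
```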

View File

@@ -48,24 +48,27 @@ postgresql_databases:
- name: users
# Third-party services to notify you if backups aren't happening.
healthchecks: https://hc-ping.com/be067061-cf96-4412-8eae-62b0c50d6a8c
healthchecks:
ping_url: https://hc-ping.com/be067061-cf96-4412-8eae-62b0c50d6a8c
```
borgmatic is powered by [Borg Backup](https://www.borgbackup.org/).
## Integrations
<a href="https://www.postgresql.org/"><img src="docs/static/postgresql.png" alt="PostgreSQL" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://www.mysql.com/"><img src="docs/static/mysql.png" alt="MySQL" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://mariadb.com/"><img src="docs/static/mariadb.png" alt="MariaDB" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://www.mongodb.com/"><img src="docs/static/mongodb.png" alt="MongoDB" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://sqlite.org/"><img src="docs/static/sqlite.png" alt="SQLite" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://healthchecks.io/"><img src="docs/static/healthchecks.png" alt="Healthchecks" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://cronitor.io/"><img src="docs/static/cronitor.png" alt="Cronitor" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://cronhub.io/"><img src="docs/static/cronhub.png" alt="Cronhub" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://www.pagerduty.com/"><img src="docs/static/pagerduty.png" alt="PagerDuty" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://ntfy.sh/"><img src="docs/static/ntfy.png" alt="ntfy" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://www.borgbase.com/?utm_source=borgmatic"><img src="docs/static/borgbase.png" alt="BorgBase" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://www.postgresql.org/"><img src="docs/static/postgresql.png" alt="PostgreSQL" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://www.mysql.com/"><img src="docs/static/mysql.png" alt="MySQL" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://mariadb.com/"><img src="docs/static/mariadb.png" alt="MariaDB" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://www.mongodb.com/"><img src="docs/static/mongodb.png" alt="MongoDB" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://sqlite.org/"><img src="docs/static/sqlite.png" alt="SQLite" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://healthchecks.io/"><img src="docs/static/healthchecks.png" alt="Healthchecks" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://cronitor.io/"><img src="docs/static/cronitor.png" alt="Cronitor" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://cronhub.io/"><img src="docs/static/cronhub.png" alt="Cronhub" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://www.pagerduty.com/"><img src="docs/static/pagerduty.png" alt="PagerDuty" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://ntfy.sh/"><img src="docs/static/ntfy.png" alt="ntfy" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://grafana.com/oss/loki/"><img src="docs/static/loki.png" alt="Loki" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://github.com/caronc/apprise/wiki"><img src="docs/static/apprise.png" alt="Apprise" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://www.borgbase.com/?utm_source=borgmatic"><img src="docs/static/borgbase.png" alt="BorgBase" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
## Getting started
@@ -151,6 +154,3 @@ general, contributions are very welcome. We don't bite!
Also, please check out the [borgmatic development
how-to](https://torsion.org/borgmatic/docs/how-to/develop-on-borgmatic/) for
info on cloning source code, running tests, etc.
<a href="https://build.torsion.org/borgmatic-collective/borgmatic" alt="build status">![Build Status](https://build.torsion.org/api/badges/borgmatic-collective/borgmatic/status.svg?ref=refs/heads/main)</a>

View File

@@ -1,12 +1,575 @@
import datetime
import hashlib
import itertools
import logging
import os
import pathlib
import random
import borgmatic.borg.check
import borgmatic.borg.create
import borgmatic.borg.environment
import borgmatic.borg.extract
import borgmatic.borg.list
import borgmatic.borg.rlist
import borgmatic.borg.state
import borgmatic.config.validate
import borgmatic.execute
import borgmatic.hooks.command
import borgmatic.hooks.dispatch
import borgmatic.hooks.dump
DEFAULT_CHECKS = (
{'name': 'repository', 'frequency': '1 month'},
{'name': 'archives', 'frequency': '1 month'},
)
logger = logging.getLogger(__name__)
def parse_checks(config, only_checks=None):
'''
Given a configuration dict with a "checks" sequence of dicts and an optional list of override
checks, return a tuple of named checks to run.
For example, given a config of:
{'checks': ({'name': 'repository'}, {'name': 'archives'})}
This will be returned as:
('repository', 'archives')
If no "checks" option is present in the config, return the DEFAULT_CHECKS. If a checks value
has a name of "disabled", return an empty tuple, meaning that no checks should be run.
'''
checks = only_checks or tuple(
check_config['name'] for check_config in (config.get('checks', None) or DEFAULT_CHECKS)
)
checks = tuple(check.lower() for check in checks)
if 'disabled' in checks:
logger.warning(
'The "disabled" value for the "checks" option is deprecated and will be removed from a future release; use "skip_actions" instead'
)
if len(checks) > 1:
logger.warning(
'Multiple checks are configured, but one of them is "disabled"; not running any checks'
)
return ()
return checks
def parse_frequency(frequency):
'''
Given a frequency string with a number and a unit of time, return a corresponding
datetime.timedelta instance or None if the frequency is None or "always".
For instance, given "3 weeks", return datetime.timedelta(weeks=3)
Raise ValueError if the given frequency cannot be parsed.
'''
if not frequency:
return None
frequency = frequency.strip().lower()
if frequency == 'always':
return None
try:
number, time_unit = frequency.split(' ')
number = int(number)
except ValueError:
raise ValueError(f"Could not parse consistency check frequency '{frequency}'")
if not time_unit.endswith('s'):
time_unit += 's'
if time_unit == 'months':
number *= 30
time_unit = 'days'
elif time_unit == 'years':
number *= 365
time_unit = 'days'
try:
return datetime.timedelta(**{time_unit: number})
except TypeError:
raise ValueError(f"Could not parse consistency check frequency '{frequency}'")
def filter_checks_on_frequency(
config,
borg_repository_id,
checks,
force,
archives_check_id=None,
):
'''
Given a configuration dict with a "checks" sequence of dicts, a Borg repository ID, a sequence
of checks, whether to force checks to run, and an ID for the archives check potentially being
run (if any), filter down those checks based on the configured "frequency" for each check as
compared to its check time file.
In other words, a check whose check time file's timestamp is too new (based on the configured
frequency) will get cut from the returned sequence of checks. Example:
config = {
'checks': [
{
'name': 'archives',
'frequency': '2 weeks',
},
]
}
When this function is called with that config and "archives" in checks, "archives" will get
filtered out of the returned result if its check time file is newer than 2 weeks old, indicating
that it's not yet time to run that check again.
Raise ValueError if a frequency cannot be parsed.
'''
if not checks:
return checks
filtered_checks = list(checks)
if force:
return tuple(filtered_checks)
for check_config in config.get('checks', DEFAULT_CHECKS):
check = check_config['name']
if checks and check not in checks:
continue
frequency_delta = parse_frequency(check_config.get('frequency'))
if not frequency_delta:
continue
check_time = probe_for_check_time(config, borg_repository_id, check, archives_check_id)
if not check_time:
continue
# If we've not yet reached the time when the frequency dictates we're ready for another
# check, skip this check.
if datetime.datetime.now() < check_time + frequency_delta:
remaining = check_time + frequency_delta - datetime.datetime.now()
logger.info(
f'Skipping {check} check due to configured frequency; {remaining} until next check (use --force to check anyway)'
)
filtered_checks.remove(check)
return tuple(filtered_checks)
def make_archives_check_id(archive_filter_flags):
'''
Given a sequence of flags to filter archives, return a unique hash corresponding to those
particular flags. If there are no flags, return None.
'''
if not archive_filter_flags:
return None
return hashlib.sha256(' '.join(archive_filter_flags).encode()).hexdigest()
def make_check_time_path(config, borg_repository_id, check_type, archives_check_id=None):
'''
Given a configuration dict, a Borg repository ID, the name of a check type ("repository",
"archives", etc.), and a unique hash of the archives filter flags, return a path for recording
that check's time (the time of that check last occurring).
'''
borgmatic_source_directory = os.path.expanduser(
config.get(
'borgmatic_source_directory', borgmatic.borg.state.DEFAULT_BORGMATIC_SOURCE_DIRECTORY
)
)
if check_type in ('archives', 'data'):
return os.path.join(
borgmatic_source_directory,
'checks',
borg_repository_id,
check_type,
archives_check_id if archives_check_id else 'all',
)
return os.path.join(
borgmatic_source_directory,
'checks',
borg_repository_id,
check_type,
)
def write_check_time(path): # pragma: no cover
'''
Record a check time of now as the modification time of the given path.
'''
logger.debug(f'Writing check time at {path}')
os.makedirs(os.path.dirname(path), mode=0o700, exist_ok=True)
pathlib.Path(path, mode=0o600).touch()
def read_check_time(path):
'''
Return the check time based on the modification time of the given path. Return None if the path
doesn't exist.
'''
logger.debug(f'Reading check time from {path}')
try:
return datetime.datetime.fromtimestamp(os.stat(path).st_mtime)
except FileNotFoundError:
return None
def probe_for_check_time(config, borg_repository_id, check, archives_check_id):
'''
Given a configuration dict, a Borg repository ID, the name of a check type ("repository",
"archives", etc.), and a unique hash of the archives filter flags, return a the corresponding
check time or None if such a check time does not exist.
When the check type is "archives" or "data", this function probes two different paths to find
the check time, e.g.:
~/.borgmatic/checks/1234567890/archives/9876543210
~/.borgmatic/checks/1234567890/archives/all
... and returns the maximum modification time of the files found (if any). The first path
represents a more specific archives check time (a check on a subset of archives), and the second
is a fallback to the last "all" archives check.
For other check types, this function reads from a single check time path, e.g.:
~/.borgmatic/checks/1234567890/repository
'''
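# When archives_check_id is None, the two candidate paths below are identical, so
# itertools.groupby collapses the adjacent duplicates and the "all" path is only read once.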
check_times = (
read_check_time(group[0])
for group in itertools.groupby(
(
make_check_time_path(config, borg_repository_id, check, archives_check_id),
make_check_time_path(config, borg_repository_id, check),
)
)
)
try:
return max(check_time for check_time in check_times if check_time)
except ValueError:
return None
def upgrade_check_times(config, borg_repository_id):
'''
Given a configuration dict and a Borg repository ID, upgrade any corresponding check times on
disk from old-style paths to new-style paths.
Currently, the only upgrade performed is renaming an archive or data check path that looks like:
~/.borgmatic/checks/1234567890/archives
to:
~/.borgmatic/checks/1234567890/archives/all
'''
for check_type in ('archives', 'data'):
new_path = make_check_time_path(config, borg_repository_id, check_type, 'all')
old_path = os.path.dirname(new_path)
temporary_path = f'{old_path}.temp'
if not os.path.isfile(old_path) and not os.path.isfile(temporary_path):
continue
logger.debug(f'Upgrading archives check time from {old_path} to {new_path}')
try:
os.rename(old_path, temporary_path)
except FileNotFoundError:
pass
os.mkdir(old_path)
os.rename(temporary_path, new_path)
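A rough standalone illustration of that rename sequence, using a throwaway temporary directory instead of ~/.borgmatic:
import os
import tempfile

base = tempfile.mkdtemp()
old_path = os.path.join(base, 'archives')    # old-style check time: a plain file
open(old_path, 'w').close()

new_path = os.path.join(old_path, 'all')     # new-style: a file named "all" inside a directory
temporary_path = f'{old_path}.temp'
os.rename(old_path, temporary_path)          # move the old file out of the way
os.mkdir(old_path)                           # make a directory where the file used to be
os.rename(temporary_path, new_path)          # move the file into it as "all"
print(os.path.isfile(new_path))              # True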
def collect_spot_check_source_paths(
repository, config, local_borg_version, global_arguments, local_path, remote_path
):
'''
Given a repository configuration dict, a configuration dict, the local Borg version, global
arguments as an argparse.Namespace instance, the local Borg path, and the remote Borg path,
collect the source paths that Borg would use in an actual create (but only include files and
symlinks).
'''
stream_processes = any(
borgmatic.hooks.dispatch.call_hooks(
'use_streaming',
config,
repository['path'],
borgmatic.hooks.dump.DATA_SOURCE_HOOK_NAMES,
).values()
)
(create_flags, create_positional_arguments, pattern_file, exclude_file) = (
borgmatic.borg.create.make_base_create_command(
dry_run=True,
repository_path=repository['path'],
config=config,
config_paths=(),
local_borg_version=local_borg_version,
global_arguments=global_arguments,
borgmatic_source_directories=(),
local_path=local_path,
remote_path=remote_path,
list_files=True,
stream_processes=stream_processes,
)
)
borg_environment = borgmatic.borg.environment.make_environment(config)
try:
working_directory = os.path.expanduser(config.get('working_directory'))
except TypeError:
working_directory = None
paths_output = borgmatic.execute.execute_command_and_capture_output(
create_flags + create_positional_arguments,
capture_stderr=True,
working_directory=working_directory,
extra_environment=borg_environment,
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),
)
paths = tuple(
path_line.split(' ', 1)[1]
for path_line in paths_output.split('\n')
if path_line and (path_line.startswith('- ') or path_line.startswith('+ '))
)
return tuple(path for path in paths if os.path.isfile(path) or os.path.islink(path))
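As a sketch of the parsing above, given hypothetical borg create --dry-run --list output, only the "- " and "+ " status lines survive and only the path portion is kept:
paths_output = '- /etc/passwd\n+ /home/user/notes.txt\nx /proc/skipped\n'  # made-up listing

paths = tuple(
    path_line.split(' ', 1)[1]
    for path_line in paths_output.split('\n')
    if path_line and (path_line.startswith('- ') or path_line.startswith('+ '))
)
print(paths)  # ('/etc/passwd', '/home/user/notes.txt')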
BORG_DIRECTORY_FILE_TYPE = 'd'
def collect_spot_check_archive_paths(
repository, archive, config, local_borg_version, global_arguments, local_path, remote_path
):
'''
Given a repository configuration dict, the name of the latest archive, a configuration dict, the
local Borg version, global arguments as an argparse.Namespace instance, the local Borg path, and
the remote Borg path, collect the paths from the given archive (but only include files and
symlinks).
'''
borgmatic_source_directory = os.path.expanduser(
config.get(
'borgmatic_source_directory', borgmatic.borg.state.DEFAULT_BORGMATIC_SOURCE_DIRECTORY
)
)
return tuple(
path
for line in borgmatic.borg.list.capture_archive_listing(
repository['path'],
archive,
config,
local_borg_version,
global_arguments,
path_format='{type} /{path}{NL}', # noqa: FS003
local_path=local_path,
remote_path=remote_path,
)
for (file_type, path) in (line.split(' ', 1),)
if file_type != BORG_DIRECTORY_FILE_TYPE
if pathlib.Path(borgmatic_source_directory) not in pathlib.Path(path).parents
)
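A standalone sketch of how the '{type} /{path}{NL}' listing lines get split, assuming made-up lines where 'd' marks a directory, '-' a regular file, and 'l' a symlink:
BORG_DIRECTORY_FILE_TYPE = 'd'
listing_lines = ('d /home/user', '- /home/user/notes.txt', 'l /home/user/link')  # made-up lines

paths = tuple(
    path
    for line in listing_lines
    for (file_type, path) in (line.split(' ', 1),)
    if file_type != BORG_DIRECTORY_FILE_TYPE
)
print(paths)  # ('/home/user/notes.txt', '/home/user/link')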
def compare_spot_check_hashes(
repository,
archive,
config,
local_borg_version,
global_arguments,
local_path,
remote_path,
log_label,
source_paths,
):
'''
Given a repository configuration dict, the name of the latest archive, a configuration dict, the
local Borg version, global arguments as an argparse.Namespace instance, the local Borg path, the
remote Borg path, a log label, and spot check source paths, compare the hashes for a sampling of
the source paths with hashes from corresponding paths in the given archive. Return a sequence of
the paths that fail that hash comparison.
'''
# Based on the configured sample percentage, come up with a list of random sample files from the
# source directories.
spot_check_config = next(check for check in config['checks'] if check['name'] == 'spot')
sample_count = max(
int(len(source_paths) * (min(spot_check_config['data_sample_percentage'], 100) / 100)), 1
)
source_sample_paths = tuple(random.sample(source_paths, sample_count))
existing_source_sample_paths = {
source_path for source_path in source_sample_paths if os.path.exists(source_path)
}
logger.debug(
f'{log_label}: Sampling {sample_count} source paths (~{spot_check_config["data_sample_percentage"]}%) for spot check'
)
# Hash each file in the sample paths (if it exists).
hash_output = borgmatic.execute.execute_command_and_capture_output(
(spot_check_config.get('xxh64sum_command', 'xxh64sum'),)
+ tuple(path for path in source_sample_paths if path in existing_source_sample_paths)
)
source_hashes = dict(
(reversed(line.split(' ', 1)) for line in hash_output.splitlines()),
**{path: '' for path in source_sample_paths if path not in existing_source_sample_paths},
)
archive_hashes = dict(
reversed(line.split(' ', 1))
for line in borgmatic.borg.list.capture_archive_listing(
repository['path'],
archive,
config,
local_borg_version,
global_arguments,
list_paths=source_sample_paths,
path_format='{xxh64} /{path}{NL}', # noqa: FS003
local_path=local_path,
remote_path=remote_path,
)
if line
)
# Compare the source hashes with the archive hashes to see how many match.
failing_paths = []
for path, source_hash in source_hashes.items():
archive_hash = archive_hashes.get(path)
if archive_hash is not None and archive_hash == source_hash:
continue
failing_paths.append(path)
return tuple(failing_paths)
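A worked example of the sampling arithmetic above, with illustrative numbers: 1000 source paths at a 1% data_sample_percentage yields 10 sampled paths, and the max(..., 1) guarantees at least one sample:
source_path_count = 1000        # illustrative
data_sample_percentage = 1      # illustrative

sample_count = max(int(source_path_count * (min(data_sample_percentage, 100) / 100)), 1)
print(sample_count)             # 10; even a tiny percentage still samples at least one path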
def spot_check(
repository,
config,
local_borg_version,
global_arguments,
local_path,
remote_path,
):
'''
Given a repository dict, a loaded configuration dict, the local Borg version, global arguments
as an argparse.Namespace instance, the local Borg path, and the remote Borg path, perform a spot
check for the latest archive in the given repository.
A spot check compares file counts and also the hashes for a random sampling of source files on
disk to those stored in the latest archive. If any differences are beyond configured tolerances,
then the check fails.
'''
log_label = f'{repository.get("label", repository["path"])}'
logger.debug(f'{log_label}: Running spot check')
try:
spot_check_config = next(
check for check in config.get('checks', ()) if check.get('name') == 'spot'
)
except StopIteration:
raise ValueError('Cannot run spot check because it is unconfigured')
if spot_check_config['data_tolerance_percentage'] > spot_check_config['data_sample_percentage']:
raise ValueError(
'The data_tolerance_percentage must be less than or equal to the data_sample_percentage'
)
source_paths = collect_spot_check_source_paths(
repository,
config,
local_borg_version,
global_arguments,
local_path,
remote_path,
)
logger.debug(f'{log_label}: {len(source_paths)} total source paths for spot check')
archive = borgmatic.borg.rlist.resolve_archive_name(
repository['path'],
'latest',
config,
local_borg_version,
global_arguments,
local_path,
remote_path,
)
logger.debug(f'{log_label}: Using archive {archive} for spot check')
archive_paths = collect_spot_check_archive_paths(
repository,
archive,
config,
local_borg_version,
global_arguments,
local_path,
remote_path,
)
logger.debug(f'{log_label}: {len(archive_paths)} total archive paths for spot check')
# Calculate the percentage delta between the source paths count and the archive paths count, and
# compare that delta to the configured count tolerance percentage.
count_delta_percentage = abs(len(source_paths) - len(archive_paths)) / len(source_paths) * 100
if count_delta_percentage > spot_check_config['count_tolerance_percentage']:
logger.debug(
f'{log_label}: Paths in source paths but not latest archive: {", ".join(set(source_paths) - set(archive_paths)) or "none"}'
)
logger.debug(
f'{log_label}: Paths in latest archive but not source paths: {", ".join(set(archive_paths) - set(source_paths)) or "none"}'
)
raise ValueError(
f'Spot check failed: {count_delta_percentage:.2f}% file count delta between source paths and latest archive (tolerance is {spot_check_config["count_tolerance_percentage"]}%)'
)
failing_paths = compare_spot_check_hashes(
repository,
archive,
config,
local_borg_version,
global_arguments,
local_path,
remote_path,
log_label,
source_paths,
)
# Error if the percentage of failing hashes exceeds the configured tolerance percentage.
logger.debug(f'{log_label}: {len(failing_paths)} non-matching spot check hashes')
data_tolerance_percentage = spot_check_config['data_tolerance_percentage']
failing_percentage = (len(failing_paths) / len(source_paths)) * 100
if failing_percentage > data_tolerance_percentage:
logger.debug(
f'{log_label}: Source paths with data not matching the latest archive: {", ".join(failing_paths)}'
)
raise ValueError(
f'Spot check failed: {failing_percentage:.2f}% of source paths with data not matching the latest archive (tolerance is {data_tolerance_percentage}%)'
)
logger.info(
f'{log_label}: Spot check passed with a {count_delta_percentage:.2f}% file count delta and a {failing_percentage:.2f}% file data delta'
)
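A worked example of the two tolerance comparisons above, with illustrative numbers: 1000 source paths against 985 archive paths is a 1.50% file count delta, and 2 non-matching hashes out of 1000 source paths is a 0.20% data delta:
source_path_count = 1000   # illustrative
archive_path_count = 985   # illustrative
failing_hash_count = 2     # illustrative

count_delta_percentage = abs(source_path_count - archive_path_count) / source_path_count * 100
failing_percentage = (failing_hash_count / source_path_count) * 100

print(f'{count_delta_percentage:.2f}%')  # 1.50%; fails a 1% count tolerance, passes a 2% one
print(f'{failing_percentage:.2f}%')      # 0.20%; passes a 0.5% data tolerance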
def run_check(
config_filename,
repository,
@ -20,6 +583,8 @@ def run_check(
):
'''
Run the "check" action for the given repository.
Raise ValueError if the Borg repository ID cannot be determined.
'''
if check_arguments.repository and not borgmatic.config.validate.repositories_match(
repository, check_arguments.repository
@ -34,19 +599,69 @@ def run_check(
global_arguments.dry_run,
**hook_context,
)
logger.info(f'{repository.get("label", repository["path"])}: Running consistency checks')
borgmatic.borg.check.check_archives(
repository_id = borgmatic.borg.check.get_repository_id(
repository['path'],
config,
local_borg_version,
global_arguments,
local_path=local_path,
remote_path=remote_path,
progress=check_arguments.progress,
repair=check_arguments.repair,
only_checks=check_arguments.only,
force=check_arguments.force,
)
upgrade_check_times(config, repository_id)
configured_checks = parse_checks(config, check_arguments.only_checks)
archive_filter_flags = borgmatic.borg.check.make_archive_filter_flags(
local_borg_version, config, configured_checks, check_arguments
)
archives_check_id = make_archives_check_id(archive_filter_flags)
checks = filter_checks_on_frequency(
config,
repository_id,
configured_checks,
check_arguments.force,
archives_check_id,
)
borg_specific_checks = set(checks).intersection({'repository', 'archives', 'data'})
if borg_specific_checks:
borgmatic.borg.check.check_archives(
repository['path'],
config,
local_borg_version,
check_arguments,
global_arguments,
borg_specific_checks,
archive_filter_flags,
local_path=local_path,
remote_path=remote_path,
)
for check in borg_specific_checks:
write_check_time(make_check_time_path(config, repository_id, check, archives_check_id))
if 'extract' in checks:
borgmatic.borg.extract.extract_last_archive_dry_run(
config,
local_borg_version,
global_arguments,
repository['path'],
config.get('lock_wait'),
local_path,
remote_path,
)
write_check_time(make_check_time_path(config, repository_id, 'extract'))
if 'spot' in checks:
spot_check(
repository,
config,
local_borg_version,
global_arguments,
local_path,
remote_path,
)
write_check_time(make_check_time_path(config, repository_id, 'spot'))
borgmatic.hooks.command.execute_hook(
config.get('after_check'),
config.get('umask'),

View File

@ -13,14 +13,11 @@ logger = logging.getLogger(__name__)
def get_config_paths(bootstrap_arguments, global_arguments, local_borg_version):
'''
Given:
The bootstrap arguments, which include the repository and archive name, borgmatic source directory,
destination directory, and whether to strip components.
The global arguments, which include the dry run flag
and the local borg version,
Return:
The config paths from the manifest.json file in the borgmatic source directory after extracting it from the
repository.
Given the bootstrap arguments as an argparse.Namespace (containing the repository and archive
name, borgmatic source directory, destination directory, and whether to strip components), the
global arguments as an argparse.Namespace (containing the dry run flag and the local borg
version), return the config paths from the manifest.json file in the borgmatic source directory
after extracting it from the repository.
Raise ValueError if the manifest JSON is missing, can't be decoded, or doesn't contain the
expected configuration path data.
@ -31,24 +28,26 @@ def get_config_paths(bootstrap_arguments, global_arguments, local_borg_version):
borgmatic_manifest_path = os.path.expanduser(
os.path.join(borgmatic_source_directory, 'bootstrap', 'manifest.json')
)
config = {'ssh_command': bootstrap_arguments.ssh_command}
extract_process = borgmatic.borg.extract.extract_archive(
global_arguments.dry_run,
bootstrap_arguments.repository,
borgmatic.borg.rlist.resolve_archive_name(
bootstrap_arguments.repository,
bootstrap_arguments.archive,
{},
config,
local_borg_version,
global_arguments,
),
[borgmatic_manifest_path],
{},
config,
local_borg_version,
global_arguments,
extract_to_stdout=True,
)
manifest_json = extract_process.stdout.read()
if not manifest_json:
raise ValueError(
'Cannot read configuration paths from archive due to missing bootstrap manifest'
@ -79,6 +78,7 @@ def run_bootstrap(bootstrap_arguments, global_arguments, local_borg_version):
manifest_config_paths = get_config_paths(
bootstrap_arguments, global_arguments, local_borg_version
)
config = {'ssh_command': bootstrap_arguments.ssh_command}
logger.info(f"Bootstrapping config paths: {', '.join(manifest_config_paths)}")
@ -88,12 +88,12 @@ def run_bootstrap(bootstrap_arguments, global_arguments, local_borg_version):
borgmatic.borg.rlist.resolve_archive_name(
bootstrap_arguments.repository,
bootstrap_arguments.archive,
{},
config,
local_borg_version,
global_arguments,
),
[config_path.lstrip(os.path.sep) for config_path in manifest_config_paths],
{},
config,
local_borg_version,
global_arguments,
extract_to_stdout=False,

View File

@ -1,12 +1,9 @@
import importlib.metadata
import json
import logging
import os
try:
import importlib_metadata
except ModuleNotFoundError: # pragma: nocover
import importlib.metadata as importlib_metadata
import borgmatic.actions.json
import borgmatic.borg.create
import borgmatic.borg.state
import borgmatic.config.validate
@ -39,7 +36,7 @@ def create_borgmatic_manifest(config, config_paths, dry_run):
with open(borgmatic_manifest_path, 'w') as config_list_file:
json.dump(
{
'borgmatic_version': importlib_metadata.version('borgmatic'),
'borgmatic_version': importlib.metadata.version('borgmatic'),
'config_paths': config_paths,
},
config_list_file,
@ -50,6 +47,7 @@ def run_create(
config_filename,
repository,
config,
config_paths,
hook_context,
local_borg_version,
create_arguments,
@ -78,22 +76,24 @@ def run_create(
)
logger.info(f'{repository.get("label", repository["path"])}: Creating archive{dry_run_label}')
borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
'remove_database_dumps',
'remove_data_source_dumps',
config,
repository['path'],
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
borgmatic.hooks.dump.DATA_SOURCE_HOOK_NAMES,
global_arguments.dry_run,
)
active_dumps = borgmatic.hooks.dispatch.call_hooks(
'dump_databases',
'dump_data_sources',
config,
repository['path'],
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
borgmatic.hooks.dump.DATA_SOURCE_HOOK_NAMES,
global_arguments.dry_run,
)
if config.get('store_config_files', True):
create_borgmatic_manifest(
config, global_arguments.used_config_paths, global_arguments.dry_run
config,
config_paths,
global_arguments.dry_run,
)
stream_processes = [process for processes in active_dumps.values() for process in processes]
@ -101,6 +101,7 @@ def run_create(
global_arguments.dry_run,
repository['path'],
config,
config_paths,
local_borg_version,
global_arguments,
local_path=local_path,
@ -111,14 +112,14 @@ def run_create(
list_files=create_arguments.list_files,
stream_processes=stream_processes,
)
if json_output: # pragma: nocover
yield json.loads(json_output)
if json_output:
yield borgmatic.actions.json.parse_json(json_output, repository.get('label'))
borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
'remove_database_dumps',
'remove_data_source_dumps',
config,
config_filename,
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
borgmatic.hooks.dump.DATA_SOURCE_HOOK_NAMES,
global_arguments.dry_run,
)
borgmatic.hooks.command.execute_hook(

View File

@ -0,0 +1,33 @@
import logging
import borgmatic.borg.export_key
import borgmatic.config.validate
logger = logging.getLogger(__name__)
def run_export_key(
repository,
config,
local_borg_version,
export_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "key export" action for the given repository.
'''
if export_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, export_arguments.repository
):
logger.info(f'{repository.get("label", repository["path"])}: Exporting repository key')
borgmatic.borg.export_key.export_key(
repository['path'],
config,
local_borg_version,
export_arguments,
global_arguments,
local_path=local_path,
remote_path=remote_path,
)

View File

@ -1,7 +1,7 @@
import json
import logging
import borgmatic.actions.arguments
import borgmatic.actions.json
import borgmatic.borg.info
import borgmatic.borg.rlist
import borgmatic.config.validate
@ -26,7 +26,7 @@ def run_info(
if info_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, info_arguments.repository
):
if not info_arguments.json: # pragma: nocover
if not info_arguments.json:
logger.answer(
f'{repository.get("label", repository["path"])}: Displaying archive summary information'
)
@ -48,5 +48,5 @@ def run_info(
local_path,
remote_path,
)
if json_output: # pragma: nocover
yield json.loads(json_output)
if json_output:
yield borgmatic.actions.json.parse_json(json_output, repository.get('label'))

borgmatic/actions/json.py Normal file
View File

@ -0,0 +1,30 @@
import json
import logging
logger = logging.getLogger(__name__)
def parse_json(borg_json_output, label):
'''
Given a Borg JSON output string, parse it as JSON into a dict. Inject the given borgmatic
repository label into it and return the dict.
Raise JSONDecodeError if the JSON output cannot be parsed.
'''
lines = borg_json_output.splitlines()
start_line_index = 0
# Scan forward to find the first line starting with "{" and assume that's where the JSON starts.
for line_index, line in enumerate(lines):
if line.startswith('{'):
start_line_index = line_index
break
json_data = json.loads('\n'.join(lines[start_line_index:]))
if 'repository' not in json_data:
return json_data
json_data['repository']['label'] = label or ''
return json_data
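A rough usage sketch of this helper, assuming it is importable as borgmatic.actions.json.parse_json and given made-up Borg output where a warning line precedes the JSON document:
import borgmatic.actions.json

borg_json_output = 'Warning: something harmless\n{"repository": {"id": "abc123"}}'  # made-up output

json_data = borgmatic.actions.json.parse_json(borg_json_output, 'myrepo')
print(json_data['repository']['label'])  # myrepo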

View File

@ -1,7 +1,7 @@
import json
import logging
import borgmatic.actions.arguments
import borgmatic.actions.json
import borgmatic.borg.list
import borgmatic.config.validate
@ -25,10 +25,10 @@ def run_list(
if list_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, list_arguments.repository
):
if not list_arguments.json: # pragma: nocover
if list_arguments.find_paths:
if not list_arguments.json:
if list_arguments.find_paths: # pragma: no cover
logger.answer(f'{repository.get("label", repository["path"])}: Searching archives')
elif not list_arguments.archive:
elif not list_arguments.archive: # pragma: no cover
logger.answer(f'{repository.get("label", repository["path"])}: Listing archives')
archive_name = borgmatic.borg.rlist.resolve_archive_name(
@ -49,5 +49,5 @@ def run_list(
local_path,
remote_path,
)
if json_output: # pragma: nocover
yield json.loads(json_output)
if json_output:
yield borgmatic.actions.json.parse_json(json_output, repository.get('label'))

View File

@ -17,50 +17,51 @@ logger = logging.getLogger(__name__)
UNSPECIFIED_HOOK = object()
def get_configured_database(
config, archive_database_names, hook_name, database_name, configuration_database_name=None
def get_configured_data_source(
config,
archive_data_source_names,
hook_name,
data_source_name,
configuration_data_source_name=None,
):
'''
Find the first database with the given hook name and database name in the configuration dict and
the given archive database names dict (from hook name to database names contained in a
particular backup archive). If UNSPECIFIED_HOOK is given as the hook name, search all database
hooks for the named database. If a configuration database name is given, use that instead of the
database name to lookup the database in the given hooks configuration.
Find the first data source with the given hook name and data source name in the configuration
dict and the given archive data source names dict (from hook name to data source names contained
in a particular backup archive). If UNSPECIFIED_HOOK is given as the hook name, search all data
source hooks for the named data source. If a configuration data source name is given, use that
instead of the data source name to look up the data source in the given hooks configuration.
Return the found database as a tuple of (found hook name, database configuration dict).
Return the found data source as a tuple of (found hook name, data source configuration dict) or
(None, None) if not found.
'''
if not configuration_database_name:
configuration_database_name = database_name
if not configuration_data_source_name:
configuration_data_source_name = data_source_name
if hook_name == UNSPECIFIED_HOOK:
hooks_to_search = {
hook_name: value
for (hook_name, value) in config.items()
if hook_name in borgmatic.hooks.dump.DATABASE_HOOK_NAMES
if hook_name in borgmatic.hooks.dump.DATA_SOURCE_HOOK_NAMES
}
else:
hooks_to_search = {hook_name: config[hook_name]}
try:
hooks_to_search = {hook_name: config[hook_name]}
except KeyError:
return (None, None)
return next(
(
(name, hook_database)
(name, hook_data_source)
for (name, hook) in hooks_to_search.items()
for hook_database in hook
if hook_database['name'] == configuration_database_name
and database_name in archive_database_names.get(name, [])
for hook_data_source in hook
if hook_data_source['name'] == configuration_data_source_name
and data_source_name in archive_data_source_names.get(name, [])
),
(None, None),
)
def get_configured_hook_name_and_database(hooks, database_name):
'''
Find the hook name and first database dict with the given database name in the configured hooks
dict. This searches across all database hooks.
'''
def restore_single_database(
def restore_single_data_source(
repository,
config,
local_borg_version,
@ -69,27 +70,27 @@ def restore_single_database(
remote_path,
archive_name,
hook_name,
database,
data_source,
connection_params,
): # pragma: no cover
'''
Given (among other things) an archive name, a database hook name, the hostname,
port, username and password as connection params, and a configured database
configuration dict, restore that database from the archive.
Given (among other things) an archive name, a data source hook name, the hostname, port,
username/password as connection params, and a configured data source configuration dict, restore
that data source from the archive.
'''
logger.info(
f'{repository.get("label", repository["path"])}: Restoring database {database["name"]}'
f'{repository.get("label", repository["path"])}: Restoring data source {data_source["name"]}'
)
dump_pattern = borgmatic.hooks.dispatch.call_hooks(
'make_database_dump_pattern',
'make_data_source_dump_pattern',
config,
repository['path'],
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
database['name'],
borgmatic.hooks.dump.DATA_SOURCE_HOOK_NAMES,
data_source['name'],
)[hook_name]
# Kick off a single database extract to stdout.
# Kick off a single data source extract to stdout.
extract_process = borgmatic.borg.extract.extract_archive(
dry_run=global_arguments.dry_run,
repository=repository['path'],
@ -103,23 +104,23 @@ def restore_single_database(
destination_path='/',
# A directory format dump isn't a single file, and therefore can't extract
# to stdout. In this case, the extract_process return value is None.
extract_to_stdout=bool(database.get('format') != 'directory'),
extract_to_stdout=bool(data_source.get('format') != 'directory'),
)
# Run a single database restore, consuming the extract stdout (if any).
# Run a single data source restore, consuming the extract stdout (if any).
borgmatic.hooks.dispatch.call_hooks(
'restore_database_dump',
config,
repository['path'],
database['name'],
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
global_arguments.dry_run,
extract_process,
connection_params,
function_name='restore_data_source_dump',
config=config,
log_prefix=repository['path'],
hook_names=[hook_name],
data_source=data_source,
dry_run=global_arguments.dry_run,
extract_process=extract_process,
connection_params=connection_params,
)
def collect_archive_database_names(
def collect_archive_data_source_names(
repository,
archive,
config,
@ -131,60 +132,62 @@ def collect_archive_database_names(
'''
Given a local or remote repository path, a resolved archive name, a configuration dict, the
local Borg version, global_arguments an argparse.Namespace, and local and remote Borg paths,
query the archive for the names of databases it contains and return them as a dict from hook
name to a sequence of database names.
query the archive for the names of data sources it contains as dumps and return them as a dict
from hook name to a sequence of data source names.
'''
borgmatic_source_directory = os.path.expanduser(
config.get(
'borgmatic_source_directory', borgmatic.borg.state.DEFAULT_BORGMATIC_SOURCE_DIRECTORY
)
).lstrip('/')
parent_dump_path = os.path.expanduser(
borgmatic.hooks.dump.make_database_dump_path(borgmatic_source_directory, '*_databases/*/*')
)
dump_paths = borgmatic.borg.list.capture_archive_listing(
repository,
archive,
config,
local_borg_version,
global_arguments,
list_path=parent_dump_path,
list_paths=[
os.path.expanduser(
borgmatic.hooks.dump.make_data_source_dump_path(borgmatic_source_directory, pattern)
)
for pattern in ('*_databases/*/*',)
],
local_path=local_path,
remote_path=remote_path,
)
# Determine the database names corresponding to the dumps found in the archive and
# Determine the data source names corresponding to the dumps found in the archive and
# add them to restore_names.
archive_database_names = {}
archive_data_source_names = {}
for dump_path in dump_paths:
try:
(hook_name, _, database_name) = dump_path.split(
(hook_name, _, data_source_name) = dump_path.split(
borgmatic_source_directory + os.path.sep, 1
)[1].split(os.path.sep)[0:3]
except (ValueError, IndexError):
logger.warning(
f'{repository}: Ignoring invalid database dump path "{dump_path}" in archive {archive}'
f'{repository}: Ignoring invalid data source dump path "{dump_path}" in archive {archive}'
)
else:
if database_name not in archive_database_names.get(hook_name, []):
archive_database_names.setdefault(hook_name, []).extend([database_name])
if data_source_name not in archive_data_source_names.get(hook_name, []):
archive_data_source_names.setdefault(hook_name, []).extend([data_source_name])
return archive_database_names
return archive_data_source_names
def find_databases_to_restore(requested_database_names, archive_database_names):
def find_data_sources_to_restore(requested_data_source_names, archive_data_source_names):
'''
Given a sequence of requested database names to restore and a dict of hook name to the names of
databases found in an archive, return an expanded sequence of database names to restore,
replacing "all" with actual database names as appropriate.
Given a sequence of requested data source names to restore and a dict of hook name to the names
of data sources found in an archive, return an expanded sequence of data source names to
restore, replacing "all" with actual data source names as appropriate.
Raise ValueError if any of the requested database names cannot be found in the archive.
Raise ValueError if any of the requested data source names cannot be found in the archive.
'''
# A map from database hook name to the database names to restore for that hook.
# A map from data source hook name to the data source names to restore for that hook.
restore_names = (
{UNSPECIFIED_HOOK: requested_database_names}
if requested_database_names
{UNSPECIFIED_HOOK: requested_data_source_names}
if requested_data_source_names
else {UNSPECIFIED_HOOK: ['all']}
)
@ -193,56 +196,59 @@ def find_databases_to_restore(requested_database_names, archive_database_names):
if 'all' in restore_names[UNSPECIFIED_HOOK]:
restore_names[UNSPECIFIED_HOOK].remove('all')
for hook_name, database_names in archive_database_names.items():
restore_names.setdefault(hook_name, []).extend(database_names)
for hook_name, data_source_names in archive_data_source_names.items():
restore_names.setdefault(hook_name, []).extend(data_source_names)
# If a database is to be restored as part of "all", then remove it from restore names so
# it doesn't get restored twice.
for database_name in database_names:
if database_name in restore_names[UNSPECIFIED_HOOK]:
restore_names[UNSPECIFIED_HOOK].remove(database_name)
# If a data source is to be restored as part of "all", then remove it from restore names
# so it doesn't get restored twice.
for data_source_name in data_source_names:
if data_source_name in restore_names[UNSPECIFIED_HOOK]:
restore_names[UNSPECIFIED_HOOK].remove(data_source_name)
if not restore_names[UNSPECIFIED_HOOK]:
restore_names.pop(UNSPECIFIED_HOOK)
combined_restore_names = set(
name for database_names in restore_names.values() for name in database_names
name for data_source_names in restore_names.values() for name in data_source_names
)
combined_archive_database_names = set(
name for database_names in archive_database_names.values() for name in database_names
combined_archive_data_source_names = set(
name
for data_source_names in archive_data_source_names.values()
for name in data_source_names
)
missing_names = sorted(set(combined_restore_names) - combined_archive_database_names)
missing_names = sorted(set(combined_restore_names) - combined_archive_data_source_names)
if missing_names:
joined_names = ', '.join(f'"{name}"' for name in missing_names)
raise ValueError(
f"Cannot restore database{'s' if len(missing_names) > 1 else ''} {joined_names} missing from archive"
f"Cannot restore data source{'s' if len(missing_names) > 1 else ''} {joined_names} missing from archive"
)
return restore_names
def ensure_databases_found(restore_names, remaining_restore_names, found_names):
def ensure_data_sources_found(restore_names, remaining_restore_names, found_names):
'''
Given a dict from hook name to database names to restore, a dict from hook name to remaining
database names to restore, and a sequence of found (actually restored) database names, raise
ValueError if requested databases to restore were missing from the archive and/or configuration.
Given a dict from hook name to data source names to restore, a dict from hook name to remaining
data source names to restore, and a sequence of found (actually restored) data source names,
raise ValueError if requested data sources to restore were missing from the archive and/or
configuration.
'''
combined_restore_names = set(
name
for database_names in tuple(restore_names.values())
for data_source_names in tuple(restore_names.values())
+ tuple(remaining_restore_names.values())
for name in database_names
for name in data_source_names
)
if not combined_restore_names and not found_names:
raise ValueError('No databases were found to restore')
raise ValueError('No data sources were found to restore')
missing_names = sorted(set(combined_restore_names) - set(found_names))
if missing_names:
joined_names = ', '.join(f'"{name}"' for name in missing_names)
raise ValueError(
f"Cannot restore database{'s' if len(missing_names) > 1 else ''} {joined_names} missing from borgmatic's configuration"
f"Cannot restore data source{'s' if len(missing_names) > 1 else ''} {joined_names} missing from borgmatic's configuration"
)
@ -259,7 +265,7 @@ def run_restore(
Run the "restore" action for the given repository, but only if the repository matches the
requested repository in restore arguments.
Raise ValueError if a configured database could not be found to restore.
Raise ValueError if a configured data source could not be found to restore.
'''
if restore_arguments.repository and not borgmatic.config.validate.repositories_match(
repository, restore_arguments.repository
@ -267,14 +273,14 @@ def run_restore(
return
logger.info(
f'{repository.get("label", repository["path"])}: Restoring databases from archive {restore_arguments.archive}'
f'{repository.get("label", repository["path"])}: Restoring data sources from archive {restore_arguments.archive}'
)
borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
'remove_database_dumps',
'remove_data_source_dumps',
config,
repository['path'],
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
borgmatic.hooks.dump.DATA_SOURCE_HOOK_NAMES,
global_arguments.dry_run,
)
@ -287,7 +293,7 @@ def run_restore(
local_path,
remote_path,
)
archive_database_names = collect_archive_database_names(
archive_data_source_names = collect_archive_data_source_names(
repository['path'],
archive_name,
config,
@ -296,7 +302,9 @@ def run_restore(
local_path,
remote_path,
)
restore_names = find_databases_to_restore(restore_arguments.databases, archive_database_names)
restore_names = find_data_sources_to_restore(
restore_arguments.data_sources, archive_data_source_names
)
found_names = set()
remaining_restore_names = {}
connection_params = {
@ -307,20 +315,20 @@ def run_restore(
'restore_path': restore_arguments.restore_path,
}
for hook_name, database_names in restore_names.items():
for database_name in database_names:
found_hook_name, found_database = get_configured_database(
config, archive_database_names, hook_name, database_name
for hook_name, data_source_names in restore_names.items():
for data_source_name in data_source_names:
found_hook_name, found_data_source = get_configured_data_source(
config, archive_data_source_names, hook_name, data_source_name
)
if not found_database:
if not found_data_source:
remaining_restore_names.setdefault(found_hook_name or hook_name, []).append(
database_name
data_source_name
)
continue
found_names.add(database_name)
restore_single_database(
found_names.add(data_source_name)
restore_single_data_source(
repository,
config,
local_borg_version,
@ -329,26 +337,26 @@ def run_restore(
remote_path,
archive_name,
found_hook_name or hook_name,
dict(found_database, **{'schemas': restore_arguments.schemas}),
dict(found_data_source, **{'schemas': restore_arguments.schemas}),
connection_params,
)
# For any database that weren't found via exact matches in the configuration, try to fallback
# to "all" entries.
for hook_name, database_names in remaining_restore_names.items():
for database_name in database_names:
found_hook_name, found_database = get_configured_database(
config, archive_database_names, hook_name, database_name, 'all'
# For any data sources that weren't found via exact matches in the configuration, try to
# fall back to "all" entries.
for hook_name, data_source_names in remaining_restore_names.items():
for data_source_name in data_source_names:
found_hook_name, found_data_source = get_configured_data_source(
config, archive_data_source_names, hook_name, data_source_name, 'all'
)
if not found_database:
if not found_data_source:
continue
found_names.add(database_name)
database = copy.copy(found_database)
database['name'] = database_name
found_names.add(data_source_name)
data_source = copy.copy(found_data_source)
data_source['name'] = data_source_name
restore_single_database(
restore_single_data_source(
repository,
config,
local_borg_version,
@ -357,16 +365,16 @@ def run_restore(
remote_path,
archive_name,
found_hook_name or hook_name,
dict(database, **{'schemas': restore_arguments.schemas}),
dict(data_source, **{'schemas': restore_arguments.schemas}),
connection_params,
)
borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
'remove_database_dumps',
'remove_data_source_dumps',
config,
repository['path'],
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
borgmatic.hooks.dump.DATA_SOURCE_HOOK_NAMES,
global_arguments.dry_run,
)
ensure_databases_found(restore_names, remaining_restore_names, found_names)
ensure_data_sources_found(restore_names, remaining_restore_names, found_names)

View File

@ -1,6 +1,6 @@
import json
import logging
import borgmatic.actions.json
import borgmatic.borg.rinfo
import borgmatic.config.validate
@ -24,7 +24,7 @@ def run_rinfo(
if rinfo_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, rinfo_arguments.repository
):
if not rinfo_arguments.json: # pragma: nocover
if not rinfo_arguments.json:
logger.answer(
f'{repository.get("label", repository["path"])}: Displaying repository summary information'
)
@ -38,5 +38,5 @@ def run_rinfo(
local_path=local_path,
remote_path=remote_path,
)
if json_output: # pragma: nocover
yield json.loads(json_output)
if json_output:
yield borgmatic.actions.json.parse_json(json_output, repository.get('label'))

View File

@ -1,6 +1,6 @@
import json
import logging
import borgmatic.actions.json
import borgmatic.borg.rlist
import borgmatic.config.validate
@ -24,7 +24,7 @@ def run_rlist(
if rlist_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, rlist_arguments.repository
):
if not rlist_arguments.json: # pragma: nocover
if not rlist_arguments.json:
logger.answer(f'{repository.get("label", repository["path"])}: Listing repository')
json_output = borgmatic.borg.rlist.list_repository(
@ -36,5 +36,5 @@ def run_rlist(
local_path=local_path,
remote_path=remote_path,
)
if json_output: # pragma: nocover
yield json.loads(json_output)
if json_output:
yield borgmatic.actions.json.parse_json(json_output, repository.get('label'))

View File

@ -1,4 +1,5 @@
import logging
import shlex
import borgmatic.commands.arguments
import borgmatic.logger
@ -56,9 +57,8 @@ def run_arbitrary_borg(
)
return execute_command(
full_command,
tuple(shlex.quote(part) for part in full_command),
output_file=DO_NOT_CAPTURE,
borg_local_path=local_path,
shell=True,
extra_environment=dict(
(environment.make_environment(config) or {}),
@ -67,4 +67,6 @@ def run_arbitrary_borg(
'ARCHIVE': archive if archive else '',
},
),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),
)

View File

@ -34,4 +34,9 @@ def break_lock(
)
borg_environment = environment.make_environment(config)
execute_command(full_command, borg_local_path=local_path, extra_environment=borg_environment)
execute_command(
full_command,
extra_environment=borg_environment,
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),
)

View File

@ -1,163 +1,26 @@
import argparse
import datetime
import hashlib
import itertools
import json
import logging
import os
import pathlib
from borgmatic.borg import environment, extract, feature, flags, rinfo, state
from borgmatic.borg import environment, feature, flags, rinfo
from borgmatic.execute import DO_NOT_CAPTURE, execute_command
DEFAULT_CHECKS = (
{'name': 'repository', 'frequency': '1 month'},
{'name': 'archives', 'frequency': '1 month'},
)
logger = logging.getLogger(__name__)
def parse_checks(config, only_checks=None):
def make_archive_filter_flags(local_borg_version, config, checks, check_arguments):
'''
Given a configuration dict with a "checks" sequence of dicts and an optional list of override
checks, return a tuple of named checks to run.
Given the local Borg version, a configuration dict, a parsed sequence of checks, and check
arguments as an argparse.Namespace instance, transform the checks into a tuple of command-line
flags for filtering archives in a check command.
For example, given a config of:
{'checks': ({'name': 'repository'}, {'name': 'archives'})}
This will be returned as:
('repository', 'archives')
If no "checks" option is present in the config, return the DEFAULT_CHECKS. If a checks value
has a name of "disabled", return an empty tuple, meaning that no checks should be run.
If "check_last" is set in the configuration and "archives" is in checks, then include a "--last"
flag. And if "prefix" is set in configuration and "archives" is in checks, then include a
"--match-archives" flag.
'''
checks = only_checks or tuple(
check_config['name'] for check_config in (config.get('checks', None) or DEFAULT_CHECKS)
)
checks = tuple(check.lower() for check in checks)
if 'disabled' in checks:
if len(checks) > 1:
logger.warning(
'Multiple checks are configured, but one of them is "disabled"; not running any checks'
)
return ()
check_last = config.get('check_last', None)
prefix = config.get('prefix')
return checks
def parse_frequency(frequency):
'''
Given a frequency string with a number and a unit of time, return a corresponding
datetime.timedelta instance or None if the frequency is None or "always".
For instance, given "3 weeks", return datetime.timedelta(weeks=3)
Raise ValueError if the given frequency cannot be parsed.
'''
if not frequency:
return None
frequency = frequency.strip().lower()
if frequency == 'always':
return None
try:
number, time_unit = frequency.split(' ')
number = int(number)
except ValueError:
raise ValueError(f"Could not parse consistency check frequency '{frequency}'")
if not time_unit.endswith('s'):
time_unit += 's'
if time_unit == 'months':
number *= 30
time_unit = 'days'
elif time_unit == 'years':
number *= 365
time_unit = 'days'
try:
return datetime.timedelta(**{time_unit: number})
except TypeError:
raise ValueError(f"Could not parse consistency check frequency '{frequency}'")
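For reference, the frequency strings this function accepts map to plain datetime.timedelta values, with months approximated as 30 days and years as 365 days:
import datetime

print(datetime.timedelta(weeks=3))      # '3 weeks'  -> 21 days, 0:00:00
print(datetime.timedelta(days=2 * 30))  # '2 months' -> 60 days, 0:00:00
print(datetime.timedelta(days=365))     # '1 year'   -> 365 days, 0:00:00
# 'always' and None both yield no timedelta, meaning the check always runs.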
def filter_checks_on_frequency(
config,
borg_repository_id,
checks,
force,
archives_check_id=None,
):
'''
Given a configuration dict with a "checks" sequence of dicts, a Borg repository ID, a sequence
of checks, whether to force checks to run, and an ID for the archives check potentially being
run (if any), filter down those checks based on the configured "frequency" for each check as
compared to its check time file.
In other words, a check whose check time file's timestamp is too new (based on the configured
frequency) will get cut from the returned sequence of checks. Example:
config = {
'checks': [
{
'name': 'archives',
'frequency': '2 weeks',
},
]
}
When this function is called with that config and "archives" in checks, "archives" will get
filtered out of the returned result if its check time file is newer than 2 weeks old, indicating
that it's not yet time to run that check again.
Raise ValueError if a frequency cannot be parsed.
'''
filtered_checks = list(checks)
if force:
return tuple(filtered_checks)
for check_config in config.get('checks', DEFAULT_CHECKS):
check = check_config['name']
if checks and check not in checks:
continue
frequency_delta = parse_frequency(check_config.get('frequency'))
if not frequency_delta:
continue
check_time = probe_for_check_time(config, borg_repository_id, check, archives_check_id)
if not check_time:
continue
# If we've not yet reached the time when the frequency dictates we're ready for another
# check, skip this check.
if datetime.datetime.now() < check_time + frequency_delta:
remaining = check_time + frequency_delta - datetime.datetime.now()
logger.info(
f'Skipping {check} check due to configured frequency; {remaining} until next check (use --force to check anyway)'
)
filtered_checks.remove(check)
return tuple(filtered_checks)
def make_archive_filter_flags(local_borg_version, config, checks, check_last=None, prefix=None):
'''
Given the local Borg version, a configuration dict, a parsed sequence of checks, the check last
value, and a consistency check prefix, transform the checks into tuple of command-line flags for
filtering archives in a check command.
If a check_last value is given and "archives" is in checks, then include a "--last" flag. And if
a prefix value is given and "archives" is in checks, then include a "--match-archives" flag.
'''
if 'archives' in checks or 'data' in checks:
return (('--last', str(check_last)) if check_last else ()) + (
(
@ -168,7 +31,7 @@ def make_archive_filter_flags(local_borg_version, config, checks, check_last=Non
if prefix
else (
flags.make_match_archives_flags(
config.get('match_archives'),
check_arguments.match_archives or config.get('match_archives'),
config.get('archive_name_format'),
local_borg_version,
)
@ -187,21 +50,10 @@ def make_archive_filter_flags(local_borg_version, config, checks, check_last=Non
return ()
def make_archives_check_id(archive_filter_flags):
'''
Given a sequence of flags to filter archives, return a unique hash corresponding to those
particular flags. If there are no flags, return None.
'''
if not archive_filter_flags:
return None
return hashlib.sha256(' '.join(archive_filter_flags).encode()).hexdigest()
def make_check_flags(checks, archive_filter_flags):
'''
Given a parsed sequence of checks and a sequence of flags to filter archives, transform the
checks into tuple of command-line check flags.
Given a parsed checks set and a sequence of flags to filter archives,
transform the checks into a tuple of command-line check flags.
For example, given parsed checks of:
@ -216,13 +68,13 @@ def make_check_flags(checks, archive_filter_flags):
'''
if 'data' in checks:
data_flags = ('--verify-data',)
checks += ('archives',)
checks.update({'archives'})
else:
data_flags = ()
common_flags = (archive_filter_flags if 'archives' in checks else ()) + data_flags
if {'repository', 'archives'}.issubset(set(checks)):
if {'repository', 'archives'}.issubset(checks):
return common_flags
return (
@ -231,147 +83,17 @@ def make_check_flags(checks, archive_filter_flags):
)
def make_check_time_path(config, borg_repository_id, check_type, archives_check_id=None):
'''
Given a configuration dict, a Borg repository ID, the name of a check type ("repository",
"archives", etc.), and a unique hash of the archives filter flags, return a path for recording
that check's time (the time of that check last occurring).
'''
borgmatic_source_directory = os.path.expanduser(
config.get('borgmatic_source_directory', state.DEFAULT_BORGMATIC_SOURCE_DIRECTORY)
)
if check_type in ('archives', 'data'):
return os.path.join(
borgmatic_source_directory,
'checks',
borg_repository_id,
check_type,
archives_check_id if archives_check_id else 'all',
)
return os.path.join(
borgmatic_source_directory,
'checks',
borg_repository_id,
check_type,
)
def write_check_time(path): # pragma: no cover
'''
Record a check time of now as the modification time of the given path.
'''
logger.debug(f'Writing check time at {path}')
os.makedirs(os.path.dirname(path), mode=0o700, exist_ok=True)
pathlib.Path(path, mode=0o600).touch()
def read_check_time(path):
'''
Return the check time based on the modification time of the given path. Return None if the path
doesn't exist.
'''
logger.debug(f'Reading check time from {path}')
try:
return datetime.datetime.fromtimestamp(os.stat(path).st_mtime)
except FileNotFoundError:
return None
def probe_for_check_time(config, borg_repository_id, check, archives_check_id):
'''
Given a configuration dict, a Borg repository ID, the name of a check type ("repository",
"archives", etc.), and a unique hash of the archives filter flags, return the corresponding
check time or None if such a check time does not exist.
When the check type is "archives" or "data", this function probes two different paths to find
the check time, e.g.:
~/.borgmatic/checks/1234567890/archives/9876543210
~/.borgmatic/checks/1234567890/archives/all
... and returns the maximum modification time of the files found (if any). The first path
represents a more specific archives check time (a check on a subset of archives), and the second
is a fallback to the last "all" archives check.
For other check types, this function reads from a single check time path, e.g.:
~/.borgmatic/checks/1234567890/repository
'''
check_times = (
read_check_time(group[0])
for group in itertools.groupby(
(
make_check_time_path(config, borg_repository_id, check, archives_check_id),
make_check_time_path(config, borg_repository_id, check),
)
)
)
try:
return max(check_time for check_time in check_times if check_time)
except ValueError:
return None
def upgrade_check_times(config, borg_repository_id):
'''
Given a configuration dict and a Borg repository ID, upgrade any corresponding check times on
disk from old-style paths to new-style paths.
Currently, the only upgrade performed is renaming an archive or data check path that looks like:
~/.borgmatic/checks/1234567890/archives
to:
~/.borgmatic/checks/1234567890/archives/all
'''
for check_type in ('archives', 'data'):
new_path = make_check_time_path(config, borg_repository_id, check_type, 'all')
old_path = os.path.dirname(new_path)
temporary_path = f'{old_path}.temp'
if not os.path.isfile(old_path) and not os.path.isfile(temporary_path):
continue
logger.debug(f'Upgrading archives check time from {old_path} to {new_path}')
try:
os.rename(old_path, temporary_path)
except FileNotFoundError:
pass
os.mkdir(old_path)
os.rename(temporary_path, new_path)
def check_archives(
repository_path,
config,
local_borg_version,
global_arguments,
local_path='borg',
remote_path=None,
progress=None,
repair=None,
only_checks=None,
force=None,
def get_repository_id(
repository_path, config, local_borg_version, global_arguments, local_path, remote_path
):
'''
Given a local or remote repository path, a configuration dict, local/remote commands to run,
whether to include progress information, whether to attempt a repair, and an optional list of
checks to use instead of configured checks, check the contained Borg archives for consistency.
Given a local or remote repository path, a configuration dict, the local Borg version, global
arguments, and local/remote commands to run, return the corresponding Borg repository ID.
If there are no consistency checks to run, skip running them.
Raises ValueError if the Borg repository ID cannot be determined.
Raise ValueError if the Borg repository ID cannot be determined.
'''
try:
borg_repository_id = json.loads(
return json.loads(
rinfo.display_repository_info(
repository_path,
config,
@ -385,72 +107,63 @@ def check_archives(
except (json.JSONDecodeError, KeyError):
raise ValueError(f'Cannot determine Borg repository ID for {repository_path}')
upgrade_check_times(config, borg_repository_id)
check_last = config.get('check_last', None)
prefix = config.get('prefix')
configured_checks = parse_checks(config, only_checks)
lock_wait = None
def check_archives(
repository_path,
config,
local_borg_version,
check_arguments,
global_arguments,
checks,
archive_filter_flags,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, a configuration dict, the local Borg version, check
arguments as an argparse.Namespace instance, global arguments, a set of named Borg checks to run
(some combination "repository", "archives", and/or "data"), archive filter flags, and
local/remote commands to run, check the contained Borg archives for consistency.
'''
lock_wait = config.get('lock_wait')
extra_borg_options = config.get('extra_borg_options', {}).get('check', '')
archive_filter_flags = make_archive_filter_flags(
local_borg_version, config, configured_checks, check_last, prefix
)
archives_check_id = make_archives_check_id(archive_filter_flags)
checks = filter_checks_on_frequency(
config,
borg_repository_id,
configured_checks,
force,
archives_check_id,
verbosity_flags = ()
if logger.isEnabledFor(logging.INFO):
verbosity_flags = ('--info',)
if logger.isEnabledFor(logging.DEBUG):
verbosity_flags = ('--debug', '--show-rc')
full_command = (
(local_path, 'check')
+ (('--repair',) if check_arguments.repair else ())
+ make_check_flags(checks, archive_filter_flags)
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ verbosity_flags
+ (('--progress',) if check_arguments.progress else ())
+ (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
+ flags.make_repository_flags(repository_path, local_borg_version)
)
if set(checks).intersection({'repository', 'archives', 'data'}):
lock_wait = config.get('lock_wait')
borg_environment = environment.make_environment(config)
borg_exit_codes = config.get('borg_exit_codes')
verbosity_flags = ()
if logger.isEnabledFor(logging.INFO):
verbosity_flags = ('--info',)
if logger.isEnabledFor(logging.DEBUG):
verbosity_flags = ('--debug', '--show-rc')
full_command = (
(local_path, 'check')
+ (('--repair',) if repair else ())
+ make_check_flags(checks, archive_filter_flags)
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ verbosity_flags
+ (('--progress',) if progress else ())
+ (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
+ flags.make_repository_flags(repository_path, local_borg_version)
# The Borg repair option triggers an interactive prompt, which won't work when output is
# captured. And progress messes with the terminal directly.
if check_arguments.repair or check_arguments.progress:
execute_command(
full_command,
output_file=DO_NOT_CAPTURE,
extra_environment=borg_environment,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
)
borg_environment = environment.make_environment(config)
# The Borg repair option triggers an interactive prompt, which won't work when output is
# captured. And progress messes with the terminal directly.
if repair or progress:
execute_command(
full_command, output_file=DO_NOT_CAPTURE, extra_environment=borg_environment
)
else:
execute_command(full_command, extra_environment=borg_environment)
for check in checks:
write_check_time(
make_check_time_path(config, borg_repository_id, check, archives_check_id)
)
if 'extract' in checks:
extract.extract_last_archive_dry_run(
config,
local_borg_version,
global_arguments,
repository_path,
lock_wait,
local_path,
remote_path,
else:
execute_command(
full_command,
extra_environment=borg_environment,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
)
write_check_time(make_check_time_path(config, borg_repository_id, 'extract'))

View File

@ -48,6 +48,7 @@ def compact_segments(
execute_command(
full_command,
output_log_level=logging.INFO,
borg_local_path=local_path,
extra_environment=environment.make_environment(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),
)

View File

@ -215,9 +215,6 @@ def make_list_filter_flags(local_borg_version, dry_run):
return f'{base_flags}-'
DEFAULT_ARCHIVE_NAME_FORMAT = '{hostname}-{now:%Y-%m-%dT%H:%M:%S.%f}' # noqa: FS003
def collect_borgmatic_source_directories(borgmatic_source_directory):
'''
Return a list of borgmatic-specific source directories used for state like database backups.
@ -275,14 +272,14 @@ def any_parent_directories(path, candidate_parents):
def collect_special_file_paths(
create_command, local_path, working_directory, borg_environment, skip_directories
create_command, config, local_path, working_directory, borg_environment, skip_directories
):
'''
Given a Borg create command as a tuple, a local Borg path, a working directory, a dict of
environment variables to pass to Borg, and a sequence of parent directories to skip, collect the
paths for any special files (character devices, block devices, and named pipes / FIFOs) that
Borg would encounter during a create. These are all paths that could cause Borg to hang if its
--read-special flag is used.
Given a Borg create command as a tuple, a configuration dict, a local Borg path, a working
directory, a dict of environment variables to pass to Borg, and a sequence of parent directories
to skip, collect the paths for any special files (character devices, block devices, and named
pipes / FIFOs) that Borg would encounter during a create. These are all paths that could cause
Borg to hang if its --read-special flag is used.
'''
# Omit "--exclude-nodump" from the Borg dry run command, because that flag causes Borg to open
# files including any named pipe we've created.
@ -293,6 +290,7 @@ def collect_special_file_paths(
working_directory=working_directory,
extra_environment=borg_environment,
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),
)
paths = tuple(
@ -322,43 +320,37 @@ def check_all_source_directories_exist(source_directories):
raise ValueError(f"Source directories do not exist: {', '.join(missing_directories)}")
def create_archive(
def make_base_create_command(
dry_run,
repository_path,
config,
config_paths,
local_borg_version,
global_arguments,
borgmatic_source_directories,
local_path='borg',
remote_path=None,
progress=False,
stats=False,
json=False,
list_files=False,
stream_processes=None,
):
'''
Given verbosity/dry-run flags, a local or remote repository path, and a configuration dict,
create a Borg archive and return Borg's JSON output (if any).
If a sequence of stream processes is given (instances of subprocess.Popen), then execute the
create command while also triggering the given processes to produce output.
Given verbosity/dry-run flags, a local or remote repository path, a configuration dict, a
sequence of loaded configuration paths, the local Borg version, global arguments as an
argparse.Namespace instance, and a sequence of borgmatic source directories, return a tuple of
(base Borg create command flags, Borg create command positional arguments, open pattern file
handle, open exclude file handle).
'''
borgmatic.logger.add_custom_log_levels()
borgmatic_source_directories = expand_directories(
collect_borgmatic_source_directories(config.get('borgmatic_source_directory'))
)
if config.get('source_directories_must_exist', False):
check_all_source_directories_exist(config.get('source_directories'))
sources = deduplicate_directories(
map_directories_to_devices(
expand_directories(
tuple(config.get('source_directories', ()))
+ borgmatic_source_directories
+ tuple(
global_arguments.used_config_paths
if config.get('store_config_files', True)
else ()
)
+ tuple(config_paths if config.get('store_config_files', True) else ())
)
),
additional_directory_devices=map_directories_to_devices(
@ -368,11 +360,6 @@ def create_archive(
ensure_files_readable(config.get('patterns_from'), config.get('exclude_from'))
try:
working_directory = os.path.expanduser(config.get('working_directory'))
except TypeError:
working_directory = None
pattern_file = (
write_pattern_file(config.get('patterns'), sources)
if config.get('patterns') or config.get('patterns_from')
@ -388,7 +375,7 @@ def create_archive(
lock_wait = config.get('lock_wait', None)
list_filter_flags = make_list_filter_flags(local_borg_version, dry_run)
files_cache = config.get('files_cache')
archive_name_format = config.get('archive_name_format', DEFAULT_ARCHIVE_NAME_FORMAT)
archive_name_format = config.get('archive_name_format', flags.DEFAULT_ARCHIVE_NAME_FORMAT)
extra_borg_options = config.get('extra_borg_options', {}).get('create', '')
if feature.available(feature.Feature.ATIME, local_borg_version):
@ -415,12 +402,7 @@ def create_archive(
('--remote-ratelimit', str(upload_rate_limit)) if upload_rate_limit else ()
)
if stream_processes and config.get('read_special') is False:
logger.warning(
f'{repository_path}: Ignoring configured "read_special" value of false, as true is needed for database hooks.'
)
create_command = (
create_flags = (
tuple(local_path.split(' '))
+ ('create',)
+ make_pattern_flags(config, pattern_file.name if pattern_file else None)
@ -449,31 +431,29 @@ def create_archive(
)
+ (('--dry-run',) if dry_run else ())
+ (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
+ flags.make_repository_archive_flags(
repository_path, archive_name_format, local_borg_version
)
+ (sources if not pattern_file else ())
)
if json:
output_log_level = None
elif list_files or (stats and not dry_run):
output_log_level = logging.ANSWER
else:
output_log_level = logging.INFO
# The progress output isn't compatible with captured and logged output, as progress messes with
# the terminal directly.
output_file = DO_NOT_CAPTURE if progress else None
borg_environment = environment.make_environment(config)
create_positional_arguments = flags.make_repository_archive_flags(
repository_path, archive_name_format, local_borg_version
) + (sources if not pattern_file else ())
# If database hooks are enabled (as indicated by streaming processes), exclude files that might
# cause Borg to hang. But skip this if the user has explicitly set "read_special" to true.
if stream_processes and not config.get('read_special'):
logger.warning(
f'{repository_path}: Ignoring configured "read_special" value of false, as true is needed for database hooks.'
)
try:
working_directory = os.path.expanduser(config.get('working_directory'))
except TypeError:
working_directory = None
borg_environment = environment.make_environment(config)
logger.debug(f'{repository_path}: Collecting special file paths')
special_file_paths = collect_special_file_paths(
create_command,
create_flags + create_positional_arguments,
config,
local_path,
working_directory,
borg_environment,
@ -490,39 +470,110 @@ def create_archive(
),
pattern_file=exclude_file,
)
create_command += make_exclude_flags(config, exclude_file.name)
create_flags += make_exclude_flags(config, exclude_file.name)
create_command += (
return (create_flags, create_positional_arguments, pattern_file, exclude_file)
def create_archive(
dry_run,
repository_path,
config,
config_paths,
local_borg_version,
global_arguments,
local_path='borg',
remote_path=None,
progress=False,
stats=False,
json=False,
list_files=False,
stream_processes=None,
):
'''
Given verbosity/dry-run flags, a local or remote repository path, a configuration dict, a
sequence of loaded configuration paths, the local Borg version, and global arguments as an
argparse.Namespace instance, create a Borg archive and return Borg's JSON output (if any).
If a sequence of stream processes is given (instances of subprocess.Popen), then execute the
create command while also triggering the given processes to produce output.
'''
borgmatic.logger.add_custom_log_levels()
borgmatic_source_directories = expand_directories(
collect_borgmatic_source_directories(config.get('borgmatic_source_directory'))
)
(create_flags, create_positional_arguments, pattern_file, exclude_file) = (
make_base_create_command(
dry_run,
repository_path,
config,
config_paths,
local_borg_version,
global_arguments,
borgmatic_source_directories,
local_path,
remote_path,
progress,
json,
list_files,
stream_processes,
)
)
if json:
output_log_level = None
elif list_files or (stats and not dry_run):
output_log_level = logging.ANSWER
else:
output_log_level = logging.INFO
# The progress output isn't compatible with captured and logged output, as progress messes with
# the terminal directly.
output_file = DO_NOT_CAPTURE if progress else None
try:
working_directory = os.path.expanduser(config.get('working_directory'))
except TypeError:
working_directory = None
borg_environment = environment.make_environment(config)
create_flags += (
(('--info',) if logger.getEffectiveLevel() == logging.INFO and not json else ())
+ (('--stats',) if stats and not json and not dry_run else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) and not json else ())
+ (('--progress',) if progress else ())
+ (('--json',) if json else ())
)
borg_exit_codes = config.get('borg_exit_codes')
if stream_processes:
return execute_command_with_processes(
create_command,
create_flags + create_positional_arguments,
stream_processes,
output_log_level,
output_file,
borg_local_path=local_path,
working_directory=working_directory,
extra_environment=borg_environment,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
)
elif output_log_level is None:
return execute_command_and_capture_output(
create_command,
create_flags + create_positional_arguments,
working_directory=working_directory,
extra_environment=borg_environment,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
)
else:
execute_command(
create_command,
create_flags + create_positional_arguments,
output_log_level,
output_file,
borg_local_path=local_path,
working_directory=working_directory,
extra_environment=borg_environment,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
)


@ -50,4 +50,8 @@ def make_environment(config):
if value is not None:
environment[environment_variable_name] = 'YES' if value else 'NO'
# On Borg 1.4.0a1+, take advantage of more specific exit codes. No effect on
# older versions of Borg.
environment['BORG_EXIT_CODES'] = 'modern'
return environment
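A minimal sketch of the effect of this hunk, assuming make_environment() accepts a plain configuration dict as the surrounding code suggests: every Borg environment borgmatic builds now opts in to the modern exit codes.

from borgmatic.borg import environment

# Hypothetical call; any configuration dict works for this illustration.
env = environment.make_environment({})
assert env['BORG_EXIT_CODES'] == 'modern'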


@ -0,0 +1,71 @@
import logging
import os
import borgmatic.logger
from borgmatic.borg import environment, flags
from borgmatic.execute import DO_NOT_CAPTURE, execute_command
logger = logging.getLogger(__name__)
def export_key(
repository_path,
config,
local_borg_version,
export_arguments,
global_arguments,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, a configuration dict, the local Borg version, and
optional local and remote Borg paths, export the repository key to the destination path
indicated in the export arguments.
If the destination path is empty or "-", then print the key to stdout instead of to a file.
Raise FileExistsError if a path is given but it already exists on disk.
'''
borgmatic.logger.add_custom_log_levels()
umask = config.get('umask', None)
lock_wait = config.get('lock_wait', None)
if export_arguments.path and export_arguments.path != '-':
if os.path.exists(export_arguments.path):
raise FileExistsError(
f'Destination path {export_arguments.path} already exists. Aborting.'
)
output_file = None
else:
output_file = DO_NOT_CAPTURE
full_command = (
(local_path, 'key', 'export')
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--umask', str(umask)) if umask else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ flags.make_flags('paper', export_arguments.paper)
+ flags.make_flags('qr-html', export_arguments.qr_html)
+ flags.make_repository_flags(
repository_path,
local_borg_version,
)
+ ((export_arguments.path,) if output_file is None else ())
)
if global_arguments.dry_run:
logging.info(f'{repository_path}: Skipping key export (dry run)')
return
execute_command(
full_command,
output_file=output_file,
output_log_level=logging.ANSWER,
extra_environment=environment.make_environment(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),
)
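A hedged usage sketch of the new export_key() function above. The repository path, destination path, and Namespace fields are hypothetical; the fields mirror the "key export" command-line flags added further below.

import argparse

from borgmatic.borg import export_key

# Exports the repository key to /root/repo.key, or raises FileExistsError if
# that file already exists.
export_key.export_key(
    repository_path='/mnt/backups/repo.borg',
    config={'umask': '0077', 'lock_wait': 5},
    local_borg_version='1.2.8',
    export_arguments=argparse.Namespace(paper=False, qr_html=False, path='/root/repo.key'),
    global_arguments=argparse.Namespace(log_json=False, dry_run=False),
)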


@ -69,6 +69,7 @@ def export_tar_archive(
full_command,
output_file=DO_NOT_CAPTURE if destination_path == '-' else None,
output_log_level=output_log_level,
borg_local_path=local_path,
extra_environment=environment.make_environment(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),
)


@ -57,7 +57,11 @@ def extract_last_archive_dry_run(
)
execute_command(
full_extract_command, working_directory=None, extra_environment=borg_environment
full_extract_command,
working_directory=None,
extra_environment=borg_environment,
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),
)
@ -100,8 +104,13 @@ def extract_archive(
if not paths:
raise ValueError('The --strip-components flag with "all" requires at least one --path')
# Calculate the maximum number of leading path components of the given paths.
strip_components = max(0, *(len(path.split(os.path.sep)) - 1 for path in paths))
# Calculate the maximum number of leading path components of the given paths. "if piece"
# ignores empty path components, e.g. those resulting from a leading slash. And the "- 1"
# is so this doesn't count the final path component, e.g. the filename itself.
strip_components = max(
0,
*(len(tuple(piece for piece in path.split(os.path.sep) if piece)) - 1 for path in paths)
)
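# Hypothetical, self-contained illustration of the new calculation (not part of
# this diff): a leading slash no longer inflates the component count, because
# empty path pieces are filtered out before counting.
import os

example_paths = ['/etc/ssh/sshd_config']
assert max(
    0,
    *(len(tuple(piece for piece in path.split(os.path.sep) if piece)) - 1 for path in example_paths)
) == 2  # The old "len(path.split(os.path.sep)) - 1" calculation yielded 3.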
full_command = (
(local_path, 'extract')
@ -127,6 +136,7 @@ def extract_archive(
)
borg_environment = environment.make_environment(config)
borg_exit_codes = config.get('borg_exit_codes')
# The progress output isn't compatible with captured and logged output, as progress messes with
# the terminal directly.
@ -136,6 +146,8 @@ def extract_archive(
output_file=DO_NOT_CAPTURE,
working_directory=destination_path,
extra_environment=borg_environment,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
)
return None
@ -146,10 +158,16 @@ def extract_archive(
working_directory=destination_path,
run_to_completion=False,
extra_environment=borg_environment,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
)
# Don't give Borg local path so as to error on warnings, as "borg extract" only gives a warning
# if the restore paths don't exist in the archive.
execute_command(
full_command, working_directory=destination_path, extra_environment=borg_environment
full_command,
working_directory=destination_path,
extra_environment=borg_environment,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
)


@ -1,8 +1,12 @@
import itertools
import json
import logging
import re
from borgmatic.borg import feature
logger = logging.getLogger(__name__)
def make_flags(name, value):
'''
@ -59,23 +63,28 @@ def make_repository_archive_flags(repository_path, archive, local_borg_version):
)
DEFAULT_ARCHIVE_NAME_FORMAT = '{hostname}-{now:%Y-%m-%dT%H:%M:%S.%f}' # noqa: FS003
def make_match_archives_flags(match_archives, archive_name_format, local_borg_version):
'''
Return match archives flags based on the given match archives value, if any. If it isn't set,
return match archives flags to match archives created with the given archive name format, if
any. This is done by replacing certain archive name format placeholders for ephemeral data (like
"{now}") with globs.
return match archives flags to match archives created with the given (or default) archive name
format. This is done by replacing certain archive name format placeholders for ephemeral data
(like "{now}") with globs.
'''
if match_archives:
if match_archives in {'*', 're:.*', 'sh:*'}:
return ()
if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version):
return ('--match-archives', match_archives)
else:
return ('--glob-archives', re.sub(r'^sh:', '', match_archives))
if not archive_name_format:
return ()
derived_match_archives = re.sub(r'\{(now|utcnow|pid)([:%\w\.-]*)\}', '*', archive_name_format)
derived_match_archives = re.sub(
r'\{(now|utcnow|pid)([:%\w\.-]*)\}', '*', archive_name_format or DEFAULT_ARCHIVE_NAME_FORMAT
)
if derived_match_archives == '*':
return ()
@ -84,3 +93,26 @@ def make_match_archives_flags(match_archives, archive_name_format, local_borg_ve
return ('--match-archives', f'sh:{derived_match_archives}')
else:
return ('--glob-archives', f'{derived_match_archives}')
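A hedged, worked example of the placeholder-to-glob derivation described above, using the default archive name format (values chosen purely for illustration):

import re

archive_name_format = '{hostname}-{now:%Y-%m-%dT%H:%M:%S.%f}'  # noqa: FS003
derived = re.sub(r'\{(now|utcnow|pid)([:%\w\.-]*)\}', '*', archive_name_format)
assert derived == '{hostname}-*'  # noqa: FS003

# If the local Borg supports --match-archives, this becomes
# ('--match-archives', 'sh:{hostname}-*'); otherwise ('--glob-archives', '{hostname}-*').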
def warn_for_aggressive_archive_flags(json_command, json_output):
'''
Given a JSON archives command and the resulting JSON string output from running it, parse the
JSON and warn if the command used an archive flag but the output indicates zero archives were
found.
'''
archive_flags_used = {'--glob-archives', '--match-archives'}.intersection(set(json_command))
if not archive_flags_used:
return
try:
if len(json.loads(json_output)['archives']) == 0:
logger.warning('An archive filter was applied, but no matching archives were found.')
logger.warning(
'Try adding --match-archives "*" or adjusting archive_name_format/match_archives in configuration.'
)
except json.JSONDecodeError as error:
logger.debug(f'Cannot parse JSON output from archive command: {error}')
except (TypeError, KeyError):
logger.debug('Cannot parse JSON output from archive command: No "archives" key found')
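A hypothetical call to the new helper above, showing the case that triggers both warnings (the command tuple and JSON output are illustrative only):

json_command = ('borg', 'rlist', '--json', '--match-archives', 'sh:myhost-*')
json_output = '{"archives": []}'

# An archive filter was used but zero archives matched, so both warnings fire.
warn_for_aggressive_archive_flags(json_command, json_output)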


@ -1,3 +1,4 @@
import argparse
import logging
import borgmatic.logger
@ -7,24 +8,21 @@ from borgmatic.execute import execute_command, execute_command_and_capture_outpu
logger = logging.getLogger(__name__)
def display_archives_info(
def make_info_command(
repository_path,
config,
local_borg_version,
info_arguments,
global_arguments,
local_path='borg',
remote_path=None,
local_path,
remote_path,
):
'''
Given a local or remote repository path, a configuration dict, the local Borg version, global
arguments as an argparse.Namespace, and the arguments to the info action, display summary
information for Borg archives in the repository or return JSON summary information.
Given a local or remote repository path, a configuration dict, the local Borg version, the
arguments to the info action as an argparse.Namespace, and global arguments, return a command
as a tuple to display summary information for archives in the repository.
'''
borgmatic.logger.add_custom_log_levels()
lock_wait = config.get('lock_wait', None)
full_command = (
return (
(local_path, 'info')
+ (
('--info',)
@ -38,7 +36,7 @@ def display_archives_info(
)
+ flags.make_flags('remote-path', remote_path)
+ flags.make_flags('log-json', global_arguments.log_json)
+ flags.make_flags('lock-wait', lock_wait)
+ flags.make_flags('lock-wait', config.get('lock_wait'))
+ (
(
flags.make_flags('match-archives', f'sh:{info_arguments.prefix}*')
@ -62,16 +60,59 @@ def display_archives_info(
+ flags.make_repository_flags(repository_path, local_borg_version)
)
def display_archives_info(
repository_path,
config,
local_borg_version,
info_arguments,
global_arguments,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, a configuration dict, the local Borg version, the
arguments to the info action as an argparse.Namespace, and global arguments, display summary
information for Borg archives in the repository or return JSON summary information.
'''
borgmatic.logger.add_custom_log_levels()
main_command = make_info_command(
repository_path,
config,
local_borg_version,
info_arguments,
global_arguments,
local_path,
remote_path,
)
json_command = make_info_command(
repository_path,
config,
local_borg_version,
argparse.Namespace(**dict(info_arguments.__dict__, json=True)),
global_arguments,
local_path,
remote_path,
)
borg_exit_codes = config.get('borg_exit_codes')
json_info = execute_command_and_capture_output(
json_command,
extra_environment=environment.make_environment(config),
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
)
if info_arguments.json:
return execute_command_and_capture_output(
full_command,
extra_environment=environment.make_environment(config),
borg_local_path=local_path,
)
else:
execute_command(
full_command,
output_log_level=logging.ANSWER,
borg_local_path=local_path,
extra_environment=environment.make_environment(config),
)
return json_info
flags.warn_for_aggressive_archive_flags(json_command, json_info)
execute_command(
main_command,
output_log_level=logging.ANSWER,
extra_environment=environment.make_environment(config),
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
)


@ -79,9 +79,11 @@ def make_find_paths(find_paths):
return ()
return tuple(
find_path
if re.compile(r'([-!+RrPp] )|(\w\w:)').match(find_path)
else f'sh:**/*{find_path}*/**'
(
find_path
if re.compile(r'([-!+RrPp] )|(\w\w:)').match(find_path)
else f'sh:**/*{find_path}*/**'
)
for find_path in find_paths
)
@ -92,15 +94,16 @@ def capture_archive_listing(
config,
local_borg_version,
global_arguments,
list_path=None,
list_paths=None,
path_format=None,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, an archive name, a configuration dict, the local Borg
version, global arguments as an argparse.Namespace, the archive path in which to list files, and
local and remote Borg paths, capture the output of listing that archive and return it as a list
of file paths.
version, global arguments as an argparse.Namespace, the archive paths in which to list files,
the Borg path format to use for the output, and local and remote Borg paths, capture the output
of listing that archive and return it as a list of file paths.
'''
borg_environment = environment.make_environment(config)
@ -113,10 +116,10 @@ def capture_archive_listing(
argparse.Namespace(
repository=repository_path,
archive=archive,
paths=[f'sh:{list_path}'],
paths=[f'sh:{path}' for path in list_paths] if list_paths else None,
find_paths=None,
json=None,
format='{path}{NL}', # noqa: FS003
format=path_format or '{path}{NL}', # noqa: FS003
),
global_arguments,
local_path,
@ -124,6 +127,7 @@ def capture_archive_listing(
),
extra_environment=borg_environment,
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),
)
.strip('\n')
.split('\n')
@ -189,6 +193,7 @@ def list_archive(
)
borg_environment = environment.make_environment(config)
borg_exit_codes = config.get('borg_exit_codes')
# If there are any paths to find (and there's not a single archive already selected), start by
# getting a list of archives to search.
@ -219,6 +224,7 @@ def list_archive(
),
extra_environment=borg_environment,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
)
.strip('\n')
.split('\n')
@ -251,6 +257,7 @@ def list_archive(
execute_command(
main_command,
output_log_level=logging.ANSWER,
borg_local_path=local_path,
extra_environment=borg_environment,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
)


@ -65,9 +65,15 @@ def mount_archive(
execute_command(
full_command,
output_file=DO_NOT_CAPTURE,
borg_local_path=local_path,
extra_environment=borg_environment,
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),
)
return
execute_command(full_command, borg_local_path=local_path, extra_environment=borg_environment)
execute_command(
full_command,
extra_environment=borg_environment,
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),
)


@ -94,6 +94,7 @@ def prune_archives(
execute_command(
full_command,
output_log_level=output_log_level,
borg_local_path=local_path,
extra_environment=environment.make_environment(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),
)


@ -1,4 +1,5 @@
import argparse
import json
import logging
import subprocess
@ -8,7 +9,7 @@ from borgmatic.execute import DO_NOT_CAPTURE, execute_command
logger = logging.getLogger(__name__)
RINFO_REPOSITORY_NOT_FOUND_EXIT_CODE = 2
RINFO_REPOSITORY_NOT_FOUND_EXIT_CODES = {2, 13}
def create_repository(
@ -31,21 +32,34 @@ def create_repository(
version, a Borg encryption mode, the path to another repo whose key material should be reused,
whether the repository should be append-only, and the storage quota to use, create the
repository. If the repository already exists, then log and skip creation.
Raise ValueError if the requested encryption mode does not match that of the repository.
Raise json.decoder.JSONDecodeError if the "borg info" JSON output cannot be decoded.
Raise subprocess.CalledProcessError if "borg info" returns an error exit code.
'''
try:
rinfo.display_repository_info(
repository_path,
config,
local_borg_version,
argparse.Namespace(json=True),
global_arguments,
local_path,
remote_path,
info_data = json.loads(
rinfo.display_repository_info(
repository_path,
config,
local_borg_version,
argparse.Namespace(json=True),
global_arguments,
local_path,
remote_path,
)
)
repository_encryption_mode = info_data.get('encryption', {}).get('mode')
if repository_encryption_mode != encryption_mode:
raise ValueError(
f'Requested encryption mode "{encryption_mode}" does not match existing repository encryption mode "{repository_encryption_mode}"'
)
logger.info(f'{repository_path}: Repository already exists. Skipping creation.')
return
except subprocess.CalledProcessError as error:
if error.returncode != RINFO_REPOSITORY_NOT_FOUND_EXIT_CODE:
if error.returncode not in RINFO_REPOSITORY_NOT_FOUND_EXIT_CODES:
raise
lock_wait = config.get('lock_wait')
@ -81,6 +95,7 @@ def create_repository(
execute_command(
rcreate_command,
output_file=DO_NOT_CAPTURE,
borg_local_path=local_path,
extra_environment=environment.make_environment(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),
)


@ -49,17 +49,20 @@ def display_repository_info(
)
extra_environment = environment.make_environment(config)
borg_exit_codes = config.get('borg_exit_codes')
if rinfo_arguments.json:
return execute_command_and_capture_output(
full_command,
extra_environment=extra_environment,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
)
else:
execute_command(
full_command,
output_log_level=logging.ANSWER,
borg_local_path=local_path,
extra_environment=extra_environment,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
)


@ -1,3 +1,4 @@
import argparse
import logging
import borgmatic.logger
@ -44,6 +45,7 @@ def resolve_archive_name(
full_command,
extra_environment=environment.make_environment(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),
)
try:
latest_archive = output.strip().splitlines()[-1]
@ -137,15 +139,33 @@ def list_repository(
local_path,
remote_path,
)
json_command = make_rlist_command(
repository_path,
config,
local_borg_version,
argparse.Namespace(**dict(rlist_arguments.__dict__, json=True)),
global_arguments,
local_path,
remote_path,
)
borg_exit_codes = config.get('borg_exit_codes')
json_listing = execute_command_and_capture_output(
json_command,
extra_environment=borg_environment,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
)
if rlist_arguments.json:
return execute_command_and_capture_output(
main_command, extra_environment=borg_environment, borg_local_path=local_path
)
else:
execute_command(
main_command,
output_log_level=logging.ANSWER,
borg_local_path=local_path,
extra_environment=borg_environment,
)
return json_listing
flags.warn_for_aggressive_archive_flags(json_command, json_listing)
execute_command(
main_command,
output_log_level=logging.ANSWER,
extra_environment=borg_environment,
borg_local_path=local_path,
borg_exit_codes=borg_exit_codes,
)


@ -56,5 +56,6 @@ def transfer_archives(
output_log_level=logging.ANSWER,
output_file=DO_NOT_CAPTURE if transfer_arguments.progress else None,
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),
extra_environment=environment.make_environment(config),
)


@ -5,7 +5,7 @@ from borgmatic.execute import execute_command
logger = logging.getLogger(__name__)
def unmount_archive(mount_point, local_path='borg'):
def unmount_archive(config, mount_point, local_path='borg'):
'''
Given a configuration dict, a mounted filesystem mount point, and an optional local Borg path, unmount the filesystem
from the mount point.
@ -17,4 +17,6 @@ def unmount_archive(mount_point, local_path='borg'):
+ (mount_point,)
)
execute_command(full_command)
execute_command(
full_command, borg_local_path=local_path, borg_exit_codes=config.get('borg_exit_codes')
)


@ -22,6 +22,7 @@ def local_borg_version(config, local_path='borg'):
full_command,
extra_environment=environment.make_environment(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),
)
try:


@ -23,6 +23,7 @@ ACTION_ALIASES = {
'info': ['-i'],
'transfer': [],
'break-lock': [],
'key': [],
'borg': [],
}
@ -258,28 +259,28 @@ def make_parsers():
type=int,
choices=range(-2, 3),
default=0,
help='Display verbose progress to the console (disabled, errors only, default, some, or lots: -2, -1, 0, 1, or 2)',
help='Display verbose progress to the console: -2 (disabled), -1 (errors only), 0 (responses to actions, the default), 1 (info about steps borgmatic is taking), or 2 (debug)',
)
global_group.add_argument(
'--syslog-verbosity',
type=int,
choices=range(-2, 3),
default=0,
help='Log verbose progress to syslog (disabled, errors only, default, some, or lots: -2, -1, 0, 1, or 2). Ignored when console is interactive or --log-file is given',
default=-2,
help='Log verbose progress to syslog: -2 (disabled, the default), -1 (errors only), 0 (responses to actions), 1 (info about steps borgmatic is taking), or 2 (debug)',
)
global_group.add_argument(
'--log-file-verbosity',
type=int,
choices=range(-2, 3),
default=0,
help='Log verbose progress to log file (disabled, errors only, default, some, or lots: -2, -1, 0, 1, or 2). Only used when --log-file is given',
default=1,
help='When --log-file is given, log verbose progress to file: -2 (disabled), -1 (errors only), 0 (responses to actions), 1 (info about steps borgmatic is taking, the default), or 2 (debug)',
)
global_group.add_argument(
'--monitoring-verbosity',
type=int,
choices=range(-2, 3),
default=0,
help='Log verbose progress to monitoring integrations that support logging (from disabled, errors only, default, some, or lots: -2, -1, 0, 1, or 2)',
default=1,
help='When a monitoring integration supporting logging is configured, log verbose progress to it: -2 (disabled), -1 (errors only), 0 (responses to actions), 1 (info about steps borgmatic is taking, the default), or 2 (debug)',
)
global_group.add_argument(
'--log-file',
@ -301,7 +302,7 @@ def make_parsers():
metavar='OPTION.SUBOPTION=VALUE',
dest='overrides',
action='append',
help='Configuration file option to override with specified value, can specify flag multiple times',
help='Configuration file option to override with specified value, see documentation for overriding list or key/value options, can specify flag multiple times',
)
global_group.add_argument(
'--no-environment-interpolation',
@ -466,8 +467,8 @@ def make_parsers():
prune_parser = action_parsers.add_parser(
'prune',
aliases=ACTION_ALIASES['prune'],
help='Prune archives according to the retention policy (with Borg 1.2+, run compact afterwards to actually free space)',
description='Prune archives according to the retention policy (with Borg 1.2+, run compact afterwards to actually free space)',
help='Prune archives according to the retention policy (with Borg 1.2+, you must run compact afterwards to actually free space)',
description='Prune archives according to the retention policy (with Borg 1.2+, you must run compact afterwards to actually free space)',
add_help=False,
)
prune_group = prune_parser.add_argument_group('prune arguments')
@ -603,13 +604,20 @@ def make_parsers():
action='store_true',
help='Attempt to repair any inconsistencies found (for interactive use)',
)
check_group.add_argument(
'-a',
'--match-archives',
'--glob-archives',
metavar='PATTERN',
help='Only check archives with names matching this pattern',
)
check_group.add_argument(
'--only',
metavar='CHECK',
choices=('repository', 'archives', 'data', 'extract'),
dest='only',
choices=('repository', 'archives', 'data', 'extract', 'spot'),
dest='only_checks',
action='append',
help='Run a particular consistency check (repository, archives, data, or extract) instead of configured checks (subject to configured frequency, can specify flag multiple times)',
help='Run a particular consistency check (repository, archives, data, extract, or spot) instead of configured checks (subject to configured frequency, can specify flag multiple times)',
)
check_group.add_argument(
'--force',
@ -723,6 +731,11 @@ def make_parsers():
action='store_true',
help='Display progress for each file as it is extracted',
)
config_bootstrap_group.add_argument(
'--ssh-command',
metavar='COMMAND',
help='Command to use instead of "ssh"',
)
config_bootstrap_group.add_argument(
'-h', '--help', action='help', help='Show this help message and exit'
)
@ -905,8 +918,8 @@ def make_parsers():
restore_parser = action_parsers.add_parser(
'restore',
aliases=ACTION_ALIASES['restore'],
help='Restore database dumps from a named archive',
description='Restore database dumps from a named archive. (To extract files instead, use "borgmatic extract".)',
help='Restore data source (e.g. database) dumps from a named archive',
description='Restore data source (e.g. database) dumps from a named archive. (To extract files instead, use "borgmatic extract".)',
add_help=False,
)
restore_group = restore_parser.add_argument_group('restore arguments')
@ -918,18 +931,19 @@ def make_parsers():
'--archive', help='Name of archive to restore from (or "latest")', required=True
)
restore_group.add_argument(
'--data-source',
'--database',
metavar='NAME',
dest='databases',
dest='data_sources',
action='append',
help="Name of database to restore from archive, must be defined in borgmatic's configuration, can specify flag multiple times, defaults to all databases",
help="Name of data source (e.g. database) to restore from archive, must be defined in borgmatic's configuration, can specify flag multiple times, defaults to all data sources in the archive",
)
restore_group.add_argument(
'--schema',
metavar='NAME',
dest='schemas',
action='append',
help='Name of schema to restore from the database, can specify flag multiple times, defaults to all schemas. Schemas are only supported for PostgreSQL and MongoDB databases',
help='Name of schema to restore from the data source, can specify flag multiple times, defaults to all schemas. Schemas are only supported for PostgreSQL and MongoDB databases',
)
restore_group.add_argument(
'--hostname',
@ -937,7 +951,7 @@ def make_parsers():
)
restore_group.add_argument(
'--port',
help='Port to restore to. Defaults to the "restore_port" option in borgmatic\'s configuration',
help='Database port to restore to. Defaults to the "restore_port" option in borgmatic\'s configuration',
)
restore_group.add_argument(
'--username',
@ -1176,6 +1190,51 @@ def make_parsers():
'-h', '--help', action='help', help='Show this help message and exit'
)
key_parser = action_parsers.add_parser(
'key',
aliases=ACTION_ALIASES['key'],
help='Perform repository key related operations',
description='Perform repository key related operations',
add_help=False,
)
key_group = key_parser.add_argument_group('key arguments')
key_group.add_argument('-h', '--help', action='help', help='Show this help message and exit')
key_parsers = key_parser.add_subparsers(
title='key sub-actions',
)
key_export_parser = key_parsers.add_parser(
'export',
help='Export a copy of the repository key for safekeeping in case the original goes missing or gets damaged',
description='Export a copy of the repository key for safekeeping in case the original goes missing or gets damaged',
add_help=False,
)
key_export_group = key_export_parser.add_argument_group('key export arguments')
key_export_group.add_argument(
'--paper',
action='store_true',
help='Export the key in a text format suitable for printing and later manual entry',
)
key_export_group.add_argument(
'--qr-html',
action='store_true',
help='Export the key in an HTML format suitable for printing and later manual entry or QR code scanning',
)
key_export_group.add_argument(
'--repository',
help='Path of repository to export the key for, defaults to the configured repository if there is only one',
)
key_export_group.add_argument(
'--path',
metavar='PATH',
help='Path to export the key to, defaults to stdout (but be careful about dirtying the output with --verbosity)',
)
key_export_group.add_argument(
'-h', '--help', action='help', help='Show this help message and exit'
)
borg_parser = action_parsers.add_parser(
'borg',
aliases=ACTION_ALIASES['borg'],
@ -1281,4 +1340,7 @@ def parse_arguments(*unparsed_arguments):
'With the info action, only one of --archive, --prefix, or --match-archives flags can be used.'
)
if 'borg' in arguments and arguments['global'].dry_run:
raise ValueError('With the borg action, --dry-run is not supported.')
return arguments


@ -1,4 +1,5 @@
import collections
import importlib.metadata
import json
import logging
import os
@ -9,11 +10,6 @@ from subprocess import CalledProcessError
import colorama
try:
import importlib_metadata
except ModuleNotFoundError: # pragma: nocover
import importlib.metadata as importlib_metadata
import borgmatic.actions.borg
import borgmatic.actions.break_lock
import borgmatic.actions.check
@ -22,6 +18,7 @@ import borgmatic.actions.config.bootstrap
import borgmatic.actions.config.generate
import borgmatic.actions.config.validate
import borgmatic.actions.create
import borgmatic.actions.export_key
import borgmatic.actions.export_tar
import borgmatic.actions.extract
import borgmatic.actions.info
@ -47,11 +44,25 @@ from borgmatic.verbosity import verbosity_to_log_level
logger = logging.getLogger(__name__)
def run_configuration(config_filename, config, arguments):
def get_skip_actions(config, arguments):
'''
Given a config filename, the corresponding parsed config dict, and command-line arguments as a
dict from subparser name to a namespace of parsed arguments, execute the defined create, prune,
compact, check, and/or other actions.
Given a configuration dict and command-line arguments as an argparse.Namespace, return a list of
the configured action names to skip. Omit "check" from this list though if "check --force" is
part of the command-line arguments.
'''
skip_actions = config.get('skip_actions', [])
if 'check' in arguments and arguments['check'].force:
return [action for action in skip_actions if action != 'check']
return skip_actions
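A hedged illustration of get_skip_actions() above, with a hypothetical configuration and arguments dict:

import argparse

config = {'skip_actions': ['compact', 'check']}

# A plain run skips both configured actions ...
assert get_skip_actions(config, {}) == ['compact', 'check']

# ... but "check --force" on the command line un-skips the check action.
arguments = {'check': argparse.Namespace(force=True)}
assert get_skip_actions(config, arguments) == ['compact']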
def run_configuration(config_filename, config, config_paths, arguments):
'''
Given a config filename, the corresponding parsed config dict, a sequence of loaded
configuration paths, and command-line arguments as a dict from subparser name to a namespace of
parsed arguments, execute the defined create, prune, compact, check, and/or other actions.
Yield a combination of:
@ -69,9 +80,16 @@ def run_configuration(config_filename, config, arguments):
using_primary_action = {'create', 'prune', 'compact', 'check'}.intersection(arguments)
monitoring_log_level = verbosity_to_log_level(global_arguments.monitoring_verbosity)
monitoring_hooks_are_activated = using_primary_action and monitoring_log_level != DISABLED
skip_actions = get_skip_actions(config, arguments)
if skip_actions:
logger.debug(
f"{config_filename}: Skipping {'/'.join(skip_actions)} action{'s' if len(skip_actions) > 1 else ''} due to configured skip_actions"
)
try:
local_borg_version = borg_version.local_borg_version(config, local_path)
logger.debug(f'{config_filename}: Borg {local_borg_version}')
except (OSError, CalledProcessError, ValueError) as error:
yield from log_error_records(f'{config_filename}: Error getting local Borg version', error)
return
@ -126,6 +144,7 @@ def run_configuration(config_filename, config, arguments):
arguments=arguments,
config_filename=config_filename,
config=config,
config_paths=config_paths,
local_path=local_path,
remote_path=remote_path,
local_borg_version=local_borg_version,
@ -150,7 +169,7 @@ def run_configuration(config_filename, config, arguments):
continue
if command.considered_soft_failure(config_filename, error):
return
break
yield from log_error_records(
f'{repository.get("label", repository["path"])}: Error running actions for repository',
@ -161,7 +180,7 @@ def run_configuration(config_filename, config, arguments):
try:
if monitoring_hooks_are_activated:
# send logs irrespective of error
# Send logs irrespective of error.
dispatch.call_hooks(
'ping_monitor',
config,
@ -172,11 +191,9 @@ def run_configuration(config_filename, config, arguments):
global_arguments.dry_run,
)
except (OSError, CalledProcessError) as error:
if command.considered_soft_failure(config_filename, error):
return
encountered_error = error
yield from log_error_records(f'{repository["path"]}: Error pinging monitor', error)
if not command.considered_soft_failure(config_filename, error):
encountered_error = error
yield from log_error_records(f'{repository["path"]}: Error pinging monitor', error)
if not encountered_error:
try:
@ -246,6 +263,7 @@ def run_actions(
arguments,
config_filename,
config,
config_paths,
local_path,
remote_path,
local_borg_version,
@ -253,9 +271,9 @@ def run_actions(
):
'''
Given parsed command-line arguments as an argparse.ArgumentParser instance, the configuration
filename, several different configuration dicts, local and remote paths to Borg, a local Borg
version string, and a repository name, run all actions from the command-line arguments on the
given repository.
filename, a configuration dict, a sequence of loaded configuration paths, local and remote paths
to Borg, a local Borg version string, and a repository name, run all actions from the
command-line arguments on the given repository.
Yield JSON output strings from executing any actions that produce JSON.
@ -273,6 +291,7 @@ def run_actions(
'repositories': ','.join([repo['path'] for repo in config['repositories']]),
'log_file': global_arguments.log_file if global_arguments.log_file else '',
}
skip_actions = set(get_skip_actions(config, arguments))
command.execute_hook(
config.get('before_actions'),
@ -284,7 +303,7 @@ def run_actions(
)
for action_name, action_arguments in arguments.items():
if action_name == 'rcreate':
if action_name == 'rcreate' and action_name not in skip_actions:
borgmatic.actions.rcreate.run_rcreate(
repository,
config,
@ -294,7 +313,7 @@ def run_actions(
local_path,
remote_path,
)
elif action_name == 'transfer':
elif action_name == 'transfer' and action_name not in skip_actions:
borgmatic.actions.transfer.run_transfer(
repository,
config,
@ -304,11 +323,12 @@ def run_actions(
local_path,
remote_path,
)
elif action_name == 'create':
elif action_name == 'create' and action_name not in skip_actions:
yield from borgmatic.actions.create.run_create(
config_filename,
repository,
config,
config_paths,
hook_context,
local_borg_version,
action_arguments,
@ -317,7 +337,7 @@ def run_actions(
local_path,
remote_path,
)
elif action_name == 'prune':
elif action_name == 'prune' and action_name not in skip_actions:
borgmatic.actions.prune.run_prune(
config_filename,
repository,
@ -330,7 +350,7 @@ def run_actions(
local_path,
remote_path,
)
elif action_name == 'compact':
elif action_name == 'compact' and action_name not in skip_actions:
borgmatic.actions.compact.run_compact(
config_filename,
repository,
@ -343,7 +363,7 @@ def run_actions(
local_path,
remote_path,
)
elif action_name == 'check':
elif action_name == 'check' and action_name not in skip_actions:
if checks.repository_enabled_for_checks(repository, config):
borgmatic.actions.check.run_check(
config_filename,
@ -356,7 +376,7 @@ def run_actions(
local_path,
remote_path,
)
elif action_name == 'extract':
elif action_name == 'extract' and action_name not in skip_actions:
borgmatic.actions.extract.run_extract(
config_filename,
repository,
@ -368,7 +388,7 @@ def run_actions(
local_path,
remote_path,
)
elif action_name == 'export-tar':
elif action_name == 'export-tar' and action_name not in skip_actions:
borgmatic.actions.export_tar.run_export_tar(
repository,
config,
@ -378,7 +398,7 @@ def run_actions(
local_path,
remote_path,
)
elif action_name == 'mount':
elif action_name == 'mount' and action_name not in skip_actions:
borgmatic.actions.mount.run_mount(
repository,
config,
@ -388,7 +408,7 @@ def run_actions(
local_path,
remote_path,
)
elif action_name == 'restore':
elif action_name == 'restore' and action_name not in skip_actions:
borgmatic.actions.restore.run_restore(
repository,
config,
@ -398,7 +418,7 @@ def run_actions(
local_path,
remote_path,
)
elif action_name == 'rlist':
elif action_name == 'rlist' and action_name not in skip_actions:
yield from borgmatic.actions.rlist.run_rlist(
repository,
config,
@ -408,7 +428,7 @@ def run_actions(
local_path,
remote_path,
)
elif action_name == 'list':
elif action_name == 'list' and action_name not in skip_actions:
yield from borgmatic.actions.list.run_list(
repository,
config,
@ -418,7 +438,7 @@ def run_actions(
local_path,
remote_path,
)
elif action_name == 'rinfo':
elif action_name == 'rinfo' and action_name not in skip_actions:
yield from borgmatic.actions.rinfo.run_rinfo(
repository,
config,
@ -428,7 +448,7 @@ def run_actions(
local_path,
remote_path,
)
elif action_name == 'info':
elif action_name == 'info' and action_name not in skip_actions:
yield from borgmatic.actions.info.run_info(
repository,
config,
@ -438,7 +458,7 @@ def run_actions(
local_path,
remote_path,
)
elif action_name == 'break-lock':
elif action_name == 'break-lock' and action_name not in skip_actions:
borgmatic.actions.break_lock.run_break_lock(
repository,
config,
@ -448,7 +468,17 @@ def run_actions(
local_path,
remote_path,
)
elif action_name == 'borg':
elif action_name == 'export' and action_name not in skip_actions:
borgmatic.actions.export_key.run_export_key(
repository,
config,
local_borg_version,
action_arguments,
global_arguments,
local_path,
remote_path,
)
elif action_name == 'borg' and action_name not in skip_actions:
borgmatic.actions.borg.run_borg(
repository,
config,
@ -473,13 +503,15 @@ def load_configurations(config_filenames, overrides=None, resolve_env=True):
'''
Given a sequence of configuration filenames, load and validate each configuration file. Return
the results as a tuple of: dict of configuration filename to corresponding parsed configuration,
and sequence of logging.LogRecord instances containing any parse errors.
a sequence of paths for all loaded configuration files (including includes), and a sequence of
logging.LogRecord instances containing any parse errors.
Log records are returned here instead of being logged directly because logging isn't yet
initialized at this point!
'''
# Dict mapping from config filename to corresponding parsed config dict.
configs = collections.OrderedDict()
config_paths = set()
logs = []
# Parse and load each configuration file.
@ -496,9 +528,10 @@ def load_configurations(config_filenames, overrides=None, resolve_env=True):
]
)
try:
configs[config_filename], parse_logs = validate.parse_configuration(
configs[config_filename], paths, parse_logs = validate.parse_configuration(
config_filename, validate.schema_filename(), overrides, resolve_env
)
config_paths.update(paths)
logs.extend(parse_logs)
except PermissionError:
logs.extend(
@ -528,7 +561,7 @@ def load_configurations(config_filenames, overrides=None, resolve_env=True):
]
)
return (configs, logs)
return (configs, sorted(config_paths), logs)
def log_record(suppress_log=False, **kwargs):
@ -544,9 +577,6 @@ def log_record(suppress_log=False, **kwargs):
return record
MAX_CAPTURED_OUTPUT_LENGTH = 1000
def log_error_records(
message, error=None, levelno=logging.CRITICAL, log_command_error_output=False
):
@ -568,20 +598,24 @@ def log_error_records(
raise error
except CalledProcessError as error:
yield log_record(levelno=levelno, levelname=level_name, msg=message)
if error.output:
try:
output = error.output.decode('utf-8')
except (UnicodeDecodeError, AttributeError):
output = error.output
# Suppress these logs for now and save full error output for the log summary at the end.
yield log_record(
levelno=levelno,
levelname=level_name,
msg=output[:MAX_CAPTURED_OUTPUT_LENGTH]
+ ' ...' * (len(output) > MAX_CAPTURED_OUTPUT_LENGTH),
suppress_log=True,
)
# Suppress these logs for now and save the error output for the log summary at the end.
# Log a separate record per line, as some errors can be really verbose and overflow the
# per-record size limits imposed by some logging backends.
for output_line in output.splitlines():
yield log_record(
levelno=levelno,
levelname=level_name,
msg=output_line,
suppress_log=True,
)
yield log_record(levelno=levelno, levelname=level_name, msg=error)
except (ValueError, OSError) as error:
yield log_record(levelno=levelno, levelname=level_name, msg=message)
@ -694,12 +728,12 @@ def collect_highlander_action_summary_logs(configs, arguments, configuration_par
return
def collect_configuration_run_summary_logs(configs, arguments):
def collect_configuration_run_summary_logs(configs, config_paths, arguments):
'''
Given a dict of configuration filename to corresponding parsed configuration and parsed
command-line arguments as a dict from subparser name to a parsed namespace of arguments, run
each configuration file and yield a series of logging.LogRecord instances containing summary
information about each run.
Given a dict of configuration filename to corresponding parsed configuration, a sequence of
loaded configuration paths, and parsed command-line arguments as a dict from subparser name to a
parsed namespace of arguments, run each configuration file and yield a series of
logging.LogRecord instances containing summary information about each run.
As a side effect of running through these configuration files, output their JSON results, if
any, to stdout.
@ -744,7 +778,7 @@ def collect_configuration_run_summary_logs(configs, arguments):
# Execute the actions corresponding to each configuration file.
json_results = []
for config_filename, config in configs.items():
results = list(run_configuration(config_filename, config, arguments))
results = list(run_configuration(config_filename, config, config_paths, arguments))
error_logs = tuple(result for result in results if isinstance(result, logging.LogRecord))
if error_logs:
@ -765,6 +799,7 @@ def collect_configuration_run_summary_logs(configs, arguments):
logger.info(f"Unmounting mount point {arguments['umount'].mount_point}")
try:
borg_umount.unmount_archive(
config,
mount_point=arguments['umount'].mount_point,
local_path=get_local_path(configs),
)
@ -815,7 +850,7 @@ def main(extra_summary_logs=[]): # pragma: no cover
global_arguments = arguments['global']
if global_arguments.version:
print(importlib_metadata.version('borgmatic'))
print(importlib.metadata.version('borgmatic'))
sys.exit(0)
if global_arguments.bash_completion:
print(borgmatic.commands.completion.bash.bash_completion())
@ -825,8 +860,7 @@ def main(extra_summary_logs=[]): # pragma: no cover
sys.exit(0)
config_filenames = tuple(collect.collect_config_filenames(global_arguments.config_paths))
global_arguments.used_config_paths = list(config_filenames)
configs, parse_logs = load_configurations(
configs, config_paths, parse_logs = load_configurations(
config_filenames, global_arguments.overrides, global_arguments.resolve_env
)
configuration_parse_errors = (
@ -836,10 +870,8 @@ def main(extra_summary_logs=[]): # pragma: no cover
any_json_flags = any(
getattr(sub_arguments, 'json', False) for sub_arguments in arguments.values()
)
colorama.init(
autoreset=True,
strip=not should_do_markup(global_arguments.no_color or any_json_flags, configs),
)
color_enabled = should_do_markup(global_arguments.no_color or any_json_flags, configs)
colorama.init(autoreset=color_enabled, strip=not color_enabled)
try:
configure_logging(
verbosity_to_log_level(global_arguments.verbosity),
@ -848,6 +880,7 @@ def main(extra_summary_logs=[]): # pragma: no cover
verbosity_to_log_level(global_arguments.monitoring_verbosity),
global_arguments.log_file,
global_arguments.log_file_format,
color_enabled=color_enabled,
)
except (FileNotFoundError, PermissionError) as error:
configure_logging(logging.CRITICAL)
@ -863,7 +896,7 @@ def main(extra_summary_logs=[]): # pragma: no cover
configs, arguments, configuration_parse_errors
)
)
or list(collect_configuration_run_summary_logs(configs, arguments))
or list(collect_configuration_run_summary_logs(configs, config_paths, arguments))
)
)
summary_logs_max_level = max(log.levelno for log in summary_logs)


@ -1,9 +1,9 @@
def repository_enabled_for_checks(repository, consistency):
def repository_enabled_for_checks(repository, config):
'''
Given a repository name and a consistency configuration dict, return whether the repository
is enabled to have consistency checks run.
Given a repository name and a configuration dict, return whether the
repository is enabled to have consistency checks run.
'''
if not consistency.get('check_repositories'):
if not config.get('check_repositories'):
return True
return repository in consistency['check_repositories']
return repository in config['check_repositories']


@ -0,0 +1,61 @@
import shlex
def coerce_scalar(value):
'''
Given a configuration value, coerce it to an integer or a boolean as appropriate and return the
result.
'''
try:
return int(value)
except (TypeError, ValueError):
pass
if value == 'true' or value == 'True':
return True
if value == 'false' or value == 'False':
return False
return value
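A quick, hypothetical illustration of coerce_scalar() above:

assert coerce_scalar('5') == 5
assert coerce_scalar('true') is True
assert coerce_scalar('hourly') == 'hourly'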
def apply_constants(value, constants, shell_escape=False):
'''
Given a configuration value (bool, dict, int, list, or string) and a dict of named constants,
replace any configuration string values of the form "{constant}" (or containing it) with the
value of the correspondingly named key from the constants. Recurse as necessary into nested
configuration to find values to replace.
For instance, if a configuration value contains "{foo}", replace it with the value of the "foo"
key found within the configuration's "constants".
If shell escape is True, then escape the constant's value before applying it.
Return the configuration value and modify the original.
'''
if not value or not constants:
return value
if isinstance(value, str):
for constant_name, constant_value in constants.items():
value = value.replace(
'{' + constant_name + '}',
shlex.quote(str(constant_value)) if shell_escape else str(constant_value),
)
# Support constants within non-string scalars by coercing the value to its appropriate type.
value = coerce_scalar(value)
elif isinstance(value, list):
for index, list_value in enumerate(value):
value[index] = apply_constants(list_value, constants, shell_escape)
elif isinstance(value, dict):
for option_name, option_value in value.items():
shell_escape = (
shell_escape
or option_name.startswith('before_')
or option_name.startswith('after_')
or option_name == 'on_error'
)
value[option_name] = apply_constants(option_value, constants, shell_escape)
return value
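And a hedged example of apply_constants() above, with hypothetical constants; note the shell escaping applied to the before_backup command:

config = {
    'archive_name_format': '{prefix}-{now}',  # noqa: FS003
    'before_backup': ['echo {greeting}'],
}
apply_constants(config, {'prefix': 'myprefix', 'greeting': 'hello world'})

assert config['archive_name_format'] == 'myprefix-{now}'  # noqa: FS003
assert config['before_backup'] == ["echo 'hello world'"]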


@ -1,21 +1,22 @@
import os
import re
_VARIABLE_PATTERN = re.compile(
VARIABLE_PATTERN = re.compile(
r'(?P<escape>\\)?(?P<variable>\$\{(?P<name>[A-Za-z0-9_]+)((:?-)(?P<default>[^}]+))?\})'
)
def _resolve_string(matcher):
def resolve_string(matcher):
'''
Get the value from environment given a matcher containing a name and an optional default value.
If the variable is not defined in environment and no default value is provided, an Error is raised.
Given a matcher containing a name and an optional default value, get the value from environment.
Raise ValueError if the variable is not defined in environment and no default value is provided.
'''
if matcher.group('escape') is not None:
# in case of escaped envvar, unescape it
# In the case of an escaped environment variable, unescape it.
return matcher.group('variable')
# resolve the env var
# Resolve the environment variable.
name, default = matcher.group('name'), matcher.group('default')
out = os.getenv(name, default=default)
@ -27,19 +28,24 @@ def _resolve_string(matcher):
def resolve_env_variables(item):
'''
Resolves variables like ${FOO} from given configuration with values from process environment
Supported formats:
- ${FOO} will return FOO env variable
- ${FOO-bar} or ${FOO:-bar} will return FOO env variable if it exists, else "bar"
Resolves variables like ${FOO} from given configuration with values from process environment.
If any variable is missing in environment and no default value is provided, an Error is raised.
Supported formats:
* ${FOO} will return FOO env variable
* ${FOO-bar} or ${FOO:-bar} will return FOO env variable if it exists, else "bar"
Raise if any variable is missing in environment and no default value is provided.
'''
if isinstance(item, str):
return _VARIABLE_PATTERN.sub(_resolve_string, item)
return VARIABLE_PATTERN.sub(resolve_string, item)
if isinstance(item, list):
for i, subitem in enumerate(item):
item[i] = resolve_env_variables(subitem)
for index, subitem in enumerate(item):
item[index] = resolve_env_variables(subitem)
if isinstance(item, dict):
for key, value in item.items():
item[key] = resolve_env_variables(value)
return item
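A hedged, self-contained example of resolve_env_variables() above; the environment variable names are hypothetical, and PASSPHRASE_FILE is assumed to be unset so the default kicks in:

import os

os.environ['BACKUP_HOST'] = 'myhostname'

config = {
    'archive_name_format': '${BACKUP_HOST}-{now}',  # noqa: FS003
    'encryption_passcommand': 'cat ${PASSPHRASE_FILE:-/etc/borgmatic/passphrase}',
}
resolve_env_variables(config)

assert config['archive_name_format'] == 'myhostname-{now}'  # noqa: FS003
assert config['encryption_passcommand'] == 'cat /etc/borgmatic/passphrase'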


@ -3,7 +3,7 @@ import io
import os
import re
from ruamel import yaml
import ruamel.yaml
from borgmatic.config import load, normalize
@ -17,10 +17,23 @@ def insert_newline_before_comment(config, field_name):
field and its comments.
'''
config.ca.items[field_name][1].insert(
0, yaml.tokens.CommentToken('\n', yaml.error.CommentMark(0), None)
0, ruamel.yaml.tokens.CommentToken('\n', ruamel.yaml.error.CommentMark(0), None)
)
def get_properties(schema):
'''
Given a schema dict, return its properties. But if it's got sub-schemas with multiple different
potential properties, return their merged properties instead.
'''
if 'oneOf' in schema:
return dict(
collections.ChainMap(*[sub_schema['properties'] for sub_schema in schema['oneOf']])
)
return schema['properties']
def schema_to_sample_configuration(schema, level=0, parent_is_sequence=False):
'''
Given a loaded configuration schema, generate and return sample config for it. Include comments
@ -32,15 +45,15 @@ def schema_to_sample_configuration(schema, level=0, parent_is_sequence=False):
return example
if schema_type == 'array':
config = yaml.comments.CommentedSeq(
config = ruamel.yaml.comments.CommentedSeq(
[schema_to_sample_configuration(schema['items'], level, parent_is_sequence=True)]
)
add_comments_to_configuration_sequence(config, schema, indent=(level * INDENT))
elif schema_type == 'object':
config = yaml.comments.CommentedMap(
config = ruamel.yaml.comments.CommentedMap(
[
(field_name, schema_to_sample_configuration(sub_schema, level + 1))
for field_name, sub_schema in schema['properties'].items()
for field_name, sub_schema in get_properties(schema).items()
]
)
indent = (level * INDENT) + (SEQUENCE_INDENT if parent_is_sequence else 0)
@ -101,7 +114,7 @@ def render_configuration(config):
'''
Given a config data structure of nested OrderedDicts, render the config as YAML and return it.
'''
dumper = yaml.YAML()
dumper = ruamel.yaml.YAML(typ='rt')
dumper.indent(mapping=INDENT, sequence=INDENT + SEQUENCE_INDENT, offset=INDENT)
rendered = io.StringIO()
dumper.dump(config, rendered)
@ -151,7 +164,7 @@ def add_comments_to_configuration_sequence(config, schema, indent=0):
return
for field_name in config[0].keys():
field_schema = schema['items']['properties'].get(field_name, {})
field_schema = get_properties(schema['items']).get(field_name, {})
description = field_schema.get('description')
# No description to use? Skip it.
@ -178,7 +191,7 @@ def add_comments_to_configuration_object(config, schema, indent=0, skip_first=Fa
if skip_first and index == 0:
continue
field_schema = schema['properties'].get(field_name, {})
field_schema = get_properties(schema).get(field_name, {})
description = field_schema.get('description', '').strip()
# If this is an optional key, add an indicator to the comment flagging it to be commented
@ -225,8 +238,7 @@ def merge_source_configuration_into_destination(destination_config, source_confi
favoring values from the source when there are collisions.
The purpose of this is to upgrade configuration files from old versions of borgmatic by adding
new
configuration keys and comments.
new configuration keys and comments.
'''
if not source_config:
return destination_config
@ -236,7 +248,9 @@ def merge_source_configuration_into_destination(destination_config, source_confi
for field_name, source_value in source_config.items():
# Since this key/value is from the source configuration, leave it uncommented and remove any
# sentinel that would cause it to get commented out.
remove_commented_out_sentinel(destination_config, field_name)
remove_commented_out_sentinel(
ruamel.yaml.comments.CommentedMap(destination_config), field_name
)
# This is a mapping. Recurse for this key/value.
if isinstance(source_value, collections.abc.Mapping):
@ -248,7 +262,7 @@ def merge_source_configuration_into_destination(destination_config, source_confi
# This is a sequence. Recurse for each item in it.
if isinstance(source_value, collections.abc.Sequence) and not isinstance(source_value, str):
destination_value = destination_config[field_name]
destination_config[field_name] = yaml.comments.CommentedSeq(
destination_config[field_name] = ruamel.yaml.comments.CommentedSeq(
[
merge_source_configuration_into_destination(
destination_value[index] if index < len(destination_value) else None,
@ -275,7 +289,7 @@ def generate_sample_configuration(
schema. If a source filename is provided, merge the parsed contents of that configuration into
the generated configuration.
'''
schema = yaml.round_trip_load(open(schema_filename))
schema = ruamel.yaml.YAML(typ='safe').load(open(schema_filename))
source_config = None
if source_filename:

View File

@ -1,6 +1,5 @@
import functools
import itertools
import json
import logging
import operator
import os
@ -10,18 +9,18 @@ import ruamel.yaml
logger = logging.getLogger(__name__)
def probe_and_include_file(filename, include_directories):
def probe_and_include_file(filename, include_directories, config_paths):
'''
Given a filename to include and a list of include directories to search for matching files,
probe for the file, load it, and return the loaded configuration as a data structure of nested
dicts, lists, etc.
Given a filename to include, a list of include directories to search for matching files, and a
set of configuration paths, probe for the file, load it, and return the loaded configuration as
a data structure of nested dicts, lists, etc. Add the filename to the given configuration paths.
Raise FileNotFoundError if the included file was not found.
'''
expanded_filename = os.path.expanduser(filename)
if os.path.isabs(expanded_filename):
return load_configuration(expanded_filename)
return load_configuration(expanded_filename, config_paths)
candidate_filenames = {
os.path.join(directory, expanded_filename) for directory in include_directories
@ -29,32 +28,33 @@ def probe_and_include_file(filename, include_directories):
for candidate_filename in candidate_filenames:
if os.path.exists(candidate_filename):
return load_configuration(candidate_filename)
return load_configuration(candidate_filename, config_paths)
raise FileNotFoundError(
f'Could not find include {filename} at {" or ".join(candidate_filenames)}'
)
def include_configuration(loader, filename_node, include_directory):
def include_configuration(loader, filename_node, include_directory, config_paths):
'''
Given a ruamel.yaml.loader.Loader, a ruamel.yaml.nodes.ScalarNode containing the included
filename (or a list containing multiple such filenames), and an include directory path to search
for matching files, load the given YAML filenames (ignoring the given loader so we can use our
own) and return their contents as data structure of nested dicts, lists, etc. If the given
filename (or a list containing multiple such filenames), an include directory path to search for
matching files, and a set of configuration paths, load the given YAML filenames (ignoring the
given loader so we can use our own) and return their contents as data structure of nested dicts,
lists, etc. Add the names of included files to the given configuration paths. If the given
filename node's value is a scalar string, then the return value will be a single value. But if
the given node value is a list, then the return value will be a list of values, one per loaded
configuration file.
If a filename is relative, probe for it within 1. the current working directory and 2. the given
include directory.
If a filename is relative, probe for it within: 1. the current working directory and 2. the
given include directory.
Raise FileNotFoundError if an included file was not found.
'''
include_directories = [os.getcwd(), os.path.abspath(include_directory)]
if isinstance(filename_node.value, str):
return probe_and_include_file(filename_node.value, include_directories)
return probe_and_include_file(filename_node.value, include_directories, config_paths)
if (
isinstance(filename_node.value, list)
@ -64,7 +64,7 @@ def include_configuration(loader, filename_node, include_directory):
# Reversing the values ensures the correct ordering if these includes are subsequently
# merged together.
return [
probe_and_include_file(node.value, include_directories)
probe_and_include_file(node.value, include_directories, config_paths)
for node in reversed(filename_node.value)
]
@ -110,11 +110,17 @@ class Include_constructor(ruamel.yaml.SafeConstructor):
separate YAML configuration files. Example syntax: `option: !include common.yaml`
'''
def __init__(self, preserve_quotes=None, loader=None, include_directory=None):
def __init__(
self, preserve_quotes=None, loader=None, include_directory=None, config_paths=None
):
super(Include_constructor, self).__init__(preserve_quotes, loader)
self.add_constructor(
'!include',
functools.partial(include_configuration, include_directory=include_directory),
functools.partial(
include_configuration,
include_directory=include_directory,
config_paths=config_paths,
),
)
# These are catch-all error handlers for tags that don't get applied and removed by
@ -156,46 +162,36 @@ class Include_constructor(ruamel.yaml.SafeConstructor):
node.value = deep_merge_nodes(node.value)
def load_configuration(filename):
def load_configuration(filename, config_paths=None):
'''
Load the given configuration file and return its contents as a data structure of nested dicts
and lists. Also, replace any "{constant}" strings with the value of the "constant" key in the
"constants" option of the configuration file.
and lists. Add the filename to the given configuration paths set, and also add any included
configuration filenames.
Raise ruamel.yaml.error.YAMLError if something goes wrong parsing the YAML, or RecursionError
if there are too many recursive includes.
'''
if config_paths is None:
config_paths = set()
# Use an embedded derived class for the include constructor so as to capture the filename
# value. (functools.partial doesn't work for this use case because yaml.Constructor has to be
# an actual class.)
class Include_constructor_with_include_directory(Include_constructor):
# Use an embedded derived class for the include constructor so as to capture the include
# directory and configuration paths values. (functools.partial doesn't work for this use case
# because yaml.Constructor has to be an actual class.)
class Include_constructor_with_extras(Include_constructor):
def __init__(self, preserve_quotes=None, loader=None):
super(Include_constructor_with_include_directory, self).__init__(
preserve_quotes, loader, include_directory=os.path.dirname(filename)
super(Include_constructor_with_extras, self).__init__(
preserve_quotes,
loader,
include_directory=os.path.dirname(filename),
config_paths=config_paths,
)
yaml = ruamel.yaml.YAML(typ='safe')
yaml.Constructor = Include_constructor_with_include_directory
yaml.Constructor = Include_constructor_with_extras
config_paths.add(filename)
with open(filename) as file:
file_contents = file.read()
config = yaml.load(file_contents)
try:
has_constants = bool(config and 'constants' in config)
except TypeError:
has_constants = False
if has_constants:
for key, value in config['constants'].items():
value = json.dumps(value)
file_contents = file_contents.replace(f'{{{key}}}', value.strip('"'))
config = yaml.load(file_contents)
del config['constants']
return config
return yaml.load(file.read())
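As a hedged, end-to-end sketch of the "!include" handling and configuration-path tracking described above (the file names and contents are throwaway examples; the borgmatic.config.load module path is taken from the imports shown elsewhere in this diff):

import os
import tempfile

from borgmatic.config.load import load_configuration

# Write a tiny main config plus an included file into a temporary directory.
directory = tempfile.mkdtemp()
with open(os.path.join(directory, 'common.yaml'), 'w') as common:
    common.write('keep_daily: 7\n')
with open(os.path.join(directory, 'config.yaml'), 'w') as main:
    main.write('retention: !include common.yaml\n')

config_paths = set()
config = load_configuration(os.path.join(directory, 'config.yaml'), config_paths)

print(config)        # {'retention': {'keep_daily': 7}}
print(config_paths)  # both files, since included filenames get tracked too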
def filter_omitted_nodes(nodes, values):

View File

@ -39,7 +39,7 @@ def normalize_sections(config_filename, config):
for section_name in ('location', 'storage', 'retention', 'consistency', 'output', 'hooks'):
section_config = config.get(section_name)
if section_config:
if section_config is not None:
any_section_upgraded = True
del config[section_name]
config.update(section_config)
@ -50,7 +50,7 @@ def normalize_sections(config_filename, config):
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: Configuration sections (like location: and storage:) are deprecated and support will be removed from a future release. To prepare for this, move your options out of sections to the global scope.',
msg=f'{config_filename}: Configuration sections (like location:, storage:, retention:, consistency:, and hooks:) are deprecated and support will be removed from a future release. To prepare for this, move your options out of sections to the global scope.',
)
)
]
@ -90,7 +90,7 @@ def normalize(config_filename, config):
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: The healthchecks hook now expects a mapping value. String values for this option are deprecated and support will be removed from a future release.',
msg=f'{config_filename}: The healthchecks hook now expects a key/value pair with "ping_url" as a key. String values for this option are deprecated and support will be removed from a future release.',
)
)
)
@ -192,7 +192,7 @@ def normalize(config_filename, config):
# Upgrade remote repositories to ssh:// syntax, required in Borg 2.
repositories = config.get('repositories')
if repositories:
if isinstance(repositories[0], str):
if any(isinstance(repository, str) for repository in repositories):
logs.append(
logging.makeLogRecord(
dict(
@ -202,7 +202,10 @@ def normalize(config_filename, config):
)
)
)
config['repositories'] = [{'path': repository} for repository in repositories]
config['repositories'] = [
{'path': repository} if isinstance(repository, str) else repository
for repository in repositories
]
repositories = config['repositories']
config['repositories'] = []
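The list-comprehension change above lets configurations mix bare string repositories with already-normalized dicts; as a standalone illustration (the values are made up):

repositories = ['ssh://user@backup.example.org/./repo', {'path': '/mnt/local-repo'}]

normalized = [
    {'path': repository} if isinstance(repository, str) else repository
    for repository in repositories
]
print(normalized)
# [{'path': 'ssh://user@backup.example.org/./repo'}, {'path': '/mnt/local-repo'}]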

View File

@ -13,6 +13,11 @@ def set_values(config, keys, value):
first_key = keys[0]
if len(keys) == 1:
if isinstance(config, list):
raise ValueError(
'When overriding a list option, the value must use list syntax (e.g., "[foo, bar]" or "[{key: value}]" as appropriate)'
)
config[first_key] = value
return
@ -22,13 +27,19 @@ def set_values(config, keys, value):
set_values(config[first_key], keys[1:], value)
def convert_value_type(value):
def convert_value_type(value, option_type):
'''
Given a string value, determine its logical type (string, boolean, integer, etc.), and return it
converted to that type.
Given a string value and its schema type as a string, determine its logical type (string,
boolean, integer, etc.), and return it converted to that type.
If the option type is a string, leave the value as a string so that special characters in it
don't get interpreted as YAML during conversion.
Raise ruamel.yaml.error.YAMLError if there's a parse issue with the YAML.
'''
if option_type == 'string':
return value
return ruamel.yaml.YAML(typ='safe').load(io.StringIO(value))
@ -46,11 +57,32 @@ def strip_section_names(parsed_override_key):
return parsed_override_key
def parse_overrides(raw_overrides):
def type_for_option(schema, option_keys):
'''
Given a sequence of configuration file override strings in the form of "option.suboption=value",
parse and return a sequence of tuples (keys, values), where keys is a sequence of strings. For
instance, given the following raw overrides:
Given a configuration schema and a sequence of keys identifying an option, e.g.
('extra_borg_options', 'init'), return the schema type of that option as a string.
Return None if the option or its type cannot be found in the schema.
'''
option_schema = schema
for key in option_keys:
try:
option_schema = option_schema['properties'][key]
except KeyError:
return None
try:
return option_schema['type']
except KeyError:
return None
def parse_overrides(raw_overrides, schema):
'''
Given a sequence of configuration file override strings in the form of "option.suboption=value"
and a configuration schema dict, parse and return a sequence of tuples (keys, values), where
keys is a sequence of strings. For instance, given the following raw overrides:
['my_option.suboption=value1', 'other_option=value2']
@ -71,10 +103,13 @@ def parse_overrides(raw_overrides):
for raw_override in raw_overrides:
try:
raw_keys, value = raw_override.split('=', 1)
keys = tuple(raw_keys.split('.'))
option_type = type_for_option(schema, keys)
parsed_overrides.append(
(
strip_section_names(tuple(raw_keys.split('.'))),
convert_value_type(value),
keys,
convert_value_type(value, option_type),
)
)
except ValueError:
@ -87,12 +122,18 @@ def parse_overrides(raw_overrides):
return tuple(parsed_overrides)
def apply_overrides(config, raw_overrides):
def apply_overrides(config, schema, raw_overrides):
'''
Given a configuration dict and a sequence of configuration file override strings in the form of
"option.suboption=value", parse each override and set it the configuration dict.
Given a configuration dict, a corresponding configuration schema dict, and a sequence of
configuration file override strings in the form of "option.suboption=value", parse each override
and set it into the configuration dict.
Set the overrides into the configuration both with and without deprecated section names (if
used), so that the overrides work regardless of whether the configuration is also using
deprecated section names.
'''
overrides = parse_overrides(raw_overrides)
overrides = parse_overrides(raw_overrides, schema)
for keys, value in overrides:
set_values(config, keys, value)
set_values(config, strip_section_names(keys), value)
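A hedged usage sketch of the schema-aware override conversion above; the miniature schema here is an illustrative stand-in rather than borgmatic's real schema:

from borgmatic.config.override import apply_overrides

schema = {
    'properties': {
        'compression': {'type': 'string'},
        'keep_daily': {'type': 'integer'},
    }
}
config = {'compression': 'lz4', 'keep_daily': 3}

# "compression" is a string option, so "zstd,5" stays a literal string instead of being
# parsed as YAML, while "keep_daily" is an integer option, so "7" becomes the integer 7.
apply_overrides(config, schema, ['compression=zstd,5', 'keep_daily=7'])
print(config)  # {'compression': 'zstd,5', 'keep_daily': 7}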

View File

@ -6,21 +6,22 @@ properties:
constants:
type: object
description: |
Constants to use in the configuration file. All occurrences of the
constant name within culy braces will be replaced with the value.
For example, if you have a constant named "hostname" with the value
"myhostname", then the string "{hostname}" will be replaced with
"myhostname" in the configuration file.
Constants to use in the configuration file. Within option values,
all occurrences of the constant name in curly braces will be
replaced with the constant value. For example, if you have a
constant named "app_name" with the value "myapp", then the string
"{app_name}" will be replaced with "myapp" in the configuration
file.
example:
hostname: myhostname
prefix: myprefix
app_name: myapp
user: myuser
source_directories:
type: array
items:
type: string
description: |
List of source directories and files to backup. Globs and tildes are
expanded. Do not backslash spaces in path names.
List of source directories and files to back up. Globs and tildes
are expanded. Do not backslash spaces in path names.
example:
- /home
- /etc
@ -29,7 +30,7 @@ properties:
repositories:
type: array
items:
type: object
type: object
required:
- path
properties:
@ -215,8 +216,8 @@ properties:
description: |
Store configuration files used to create a backup in the backup
itself. Defaults to true. Changing this to false prevents "borgmatic
bootstrap" from extracting configuration files from the backup.
example: true
bootstrap" from extracting configuration files from the backup.
example: false
source_directories_must_exist:
type: boolean
description: |
@ -260,7 +261,7 @@ properties:
chunker_params:
type: string
description: |
Specify the parameters passed to then chunker (CHUNK_MIN_EXP,
Specify the parameters passed to the chunker (CHUNK_MIN_EXP,
CHUNK_MAX_EXP, HASH_MASK_BITS, HASH_WINDOW_SIZE). See
https://borgbackup.readthedocs.io/en/stable/internals.html for
details. Defaults to "19,23,21,4095".
@ -287,20 +288,23 @@ properties:
retry_wait:
type: integer
description: |
Wait time between retries (in seconds) to allow transient issues to
pass. Increases after each retry as a form of backoff. Defaults to 0
(no wait).
Wait time between retries (in seconds) to allow transient issues
to pass. Increases after each retry by that same wait time as a
form of backoff. Defaults to 0 (no wait).
example: 10
temporary_directory:
type: string
description: |
Directory where temporary files are stored. Defaults to $TMPDIR.
Directory where temporary Borg files are stored. Defaults to
$TMPDIR. See "Resource Usage" at
https://borgbackup.readthedocs.io/en/stable/usage/general.html for
details.
example: /path/to/tmpdir
ssh_command:
type: string
description: |
Command to use instead of "ssh". This can be used to specify ssh
options. Defaults to not set.
options. Defaults to not set.
example: ssh -i /path/to/private/key
borg_base_directory:
type: string
@ -337,6 +341,37 @@ properties:
Path for Borg encryption key files. Defaults to
$borg_base_directory/.config/borg/keys
example: /path/to/base/config/keys
borg_exit_codes:
type: array
items:
type: object
required: ['code', 'treat_as']
additionalProperties: false
properties:
code:
type: integer
not: {enum: [0]}
description: |
The exit code for an existing Borg warning or error.
example: 100
treat_as:
type: string
enum: ['error', 'warning']
description: |
Whether to consider the exit code as an error or as a
warning in borgmatic.
example: error
description: |
A list of Borg exit codes that should be elevated to errors or
squashed to warnings as indicated. By default, Borg error exit codes
(2 to 99) are treated as errors while warning exit codes (1 and
100+) are treated as warnings. Exit codes other than 1 and 2 are
only present in Borg 1.4.0+.
example:
- code: 13
treat_as: warning
- code: 100
treat_as: error
umask:
type: integer
description: |
@ -423,7 +458,9 @@ properties:
command-line invocation.
keep_within:
type: string
description: Keep all archives within this time interval.
description: |
Keep all archives within this time interval. See "skip_actions" for
disabling pruning altogether.
example: 3H
keep_secondly:
type: integer
@ -466,37 +503,120 @@ properties:
type: array
items:
type: object
required: ['name']
additionalProperties: false
properties:
name:
type: string
enum:
- repository
- archives
- data
- extract
- disabled
description: |
Name of consistency check to run: "repository",
"archives", "data", and/or "extract". Set to "disabled"
to disable all consistency checks. "repository" checks
the consistency of the repository, "archives" checks all
of the archives, "data" verifies the integrity of the
data within the archives, and "extract" does an
extraction dry-run of the most recent archive. Note that
"data" implies "archives".
example: repository
frequency:
type: string
description: |
How frequently to run this type of consistency check (as
a best effort). The value is a number followed by a unit
of time. E.g., "2 weeks" to run this consistency check
no more than every two weeks for a given repository or
"1 month" to run it no more than monthly. Defaults to
"always": running this check every time checks are run.
example: 2 weeks
oneOf:
- required: [name]
additionalProperties: false
properties:
name:
type: string
enum:
- repository
- archives
- data
- extract
- disabled
description: |
Name of consistency check to run: "repository",
"archives", "data", "spot", and/or "extract".
"repository" checks the consistency of the
repository, "archives" checks all of the
archives, "data" verifies the integrity of the
data within the archives, "spot" checks that
some percentage of source files are found in the
most recent archive (with identical contents),
and "extract" does an extraction dry-run of the
most recent archive. Note that "data" implies
"archives". See "skip_actions" for disabling
checks altogether.
example: spot
frequency:
type: string
description: |
How frequently to run this type of consistency
check (as a best effort). The value is a number
followed by a unit of time. E.g., "2 weeks" to
run this consistency check no more than every
two weeks for a given repository or "1 month" to
run it no more than monthly. Defaults to
"always": running this check every time checks
are run.
example: 2 weeks
- required:
- name
- count_tolerance_percentage
- data_sample_percentage
- data_tolerance_percentage
additionalProperties: false
properties:
name:
type: string
enum:
- spot
description: |
Name of consistency check to run: "repository",
"archives", "data", "spot", and/or "extract".
"repository" checks the consistency of the
repository, "archives" checks all of the
archives, "data" verifies the integrity of the
data within the archives, "spot" checks that
some percentage of source files are found in the
most recent archive (with identical contents),
and "extract" does an extraction dry-run of the
most recent archive. Note that "data" implies
"archives". See "skip_actions" for disabling
checks altogether.
example: repository
frequency:
type: string
description: |
How frequently to run this type of consistency
check (as a best effort). The value is a number
followed by a unit of time. E.g., "2 weeks" to
run this consistency check no more than every
two weeks for a given repository or "1 month" to
run it no more than monthly. Defaults to
"always": running this check every time checks
are run.
example: 2 weeks
count_tolerance_percentage:
type: number
description: |
The percentage delta between the source
directories file count and the most recent backup
archive file count that is allowed before the
entire consistency check fails. This can catch
problems like incorrect excludes, inadvertent
deletes, etc. Only applies to the "spot" check.
example: 10
data_sample_percentage:
type: number
description: |
The percentage of total files in the source
directories to randomly sample and compare to
their corresponding files in the most recent
backup archive. Only applies to the "spot" check.
example: 1
data_tolerance_percentage:
type: number
description: |
The percentage of total files in the source
directories that can fail a spot check comparison
without failing the entire consistency check. This
can catch problems like source files that have
been bulk-changed by malware, backups that have
been tampered with, etc. The value must be lower
than or equal to the "data_sample_percentage".
Only applies to the "spot" check.
example: 0.5
xxh64sum_command:
type: string
description: |
Command to use instead of "xxh64sum" to hash
source files, usually found in an OS package named
"xxhash". Do not substitute with a different hash
type (SHA, MD5, etc.) or the check will never
succeed. Only applies to the "spot" check.
example: /usr/local/bin/xxh64sum
description: |
List of one or more consistency checks to run on a periodic basis
(if "frequency" is set) or every time borgmatic runs checks (if
@ -525,6 +645,38 @@ properties:
Apply color to console output. Can be overridden with --no-color
command-line flag. Defaults to true.
example: false
skip_actions:
type: array
items:
type: string
enum:
- rcreate
- transfer
- prune
- compact
- create
- check
- extract
- config
- export-tar
- mount
- umount
- restore
- rlist
- list
- rinfo
- info
- break-lock
- key
- borg
description: |
List of one or more actions to skip running for this configuration
file, even if specified on the command-line (explicitly or
implicitly). This is handy for append-only configurations where you
never want to run "compact" or checkless configurations where you
want to skip "check". Defaults to not skipping any actions.
example:
- compact
before_actions:
type: array
items:
@ -841,10 +993,135 @@ properties:
description: |
List of one or more PostgreSQL databases to dump before creating a
backup, run once per configuration file. The database dumps are
added to your source directories at runtime, backed up, and removed
afterwards. Requires pg_dump/pg_dumpall/pg_restore commands. See
added to your source directories at runtime and streamed directly
to Borg. Requires pg_dump/pg_dumpall/pg_restore commands. See
https://www.postgresql.org/docs/current/app-pgdump.html and
https://www.postgresql.org/docs/current/libpq-ssl.html for details.
https://www.postgresql.org/docs/current/libpq-ssl.html for
details.
mariadb_databases:
type: array
items:
type: object
required: ['name']
additionalProperties: false
properties:
name:
type: string
description: |
Database name (required if using this hook). Or "all" to
dump all databases on the host. Note that using this
database hook implicitly enables both read_special and
one_file_system (see above) to support dump and restore
streaming.
example: users
hostname:
type: string
description: |
Database hostname to connect to. Defaults to connecting
via local Unix socket.
example: database.example.org
restore_hostname:
type: string
description: |
Database hostname to restore to. Defaults to the
"hostname" option.
example: database.example.org
port:
type: integer
description: Port to connect to. Defaults to 3306.
example: 3307
restore_port:
type: integer
description: |
Port to restore to. Defaults to the "port" option.
example: 5433
username:
type: string
description: |
Username with which to connect to the database. Defaults
to the username of the current user.
example: dbuser
restore_username:
type: string
description: |
Username with which to restore the database. Defaults to
the "username" option.
example: dbuser
password:
type: string
description: |
Password with which to connect to the database. Omitting
a password will only work if MariaDB is configured to
trust the configured username without a password.
example: trustsome1
mariadb_dump_command:
type: string
description: |
Command to use instead of "mariadb-dump". This can be
used to run a specific mariadb_dump version (e.g., one
inside a running container). Defaults to "mariadb-dump".
example: docker exec mariadb_container mariadb-dump
mariadb_command:
type: string
description: |
Command to run instead of "mariadb". This can be used to
run a specific mariadb version (e.g., one inside a
running container). Defaults to "mariadb".
example: docker exec mariadb_container mariadb
restore_password:
type: string
description: |
Password with which to connect to the restore database.
Defaults to the "password" option.
example: trustsome1
format:
type: string
enum: ['sql']
description: |
Database dump output format. Currently only "sql" is
supported. Defaults to "sql" for a single database. Or,
when database name is "all" and format is blank, dumps
all databases to a single file. But if a format is
specified with an "all" database name, dumps each
database to a separate file of that format, allowing
more convenient restores of individual databases.
example: directory
add_drop_database:
type: boolean
description: |
Use the "--add-drop-database" flag with mariadb-dump,
causing the database to be dropped right before restore.
Defaults to true.
example: false
options:
type: string
description: |
Additional mariadb-dump options to pass directly to the
dump command, without performing any validation on them.
See mariadb-dump documentation for details.
example: --skip-comments
list_options:
type: string
description: |
Additional options to pass directly to the mariadb
command that lists available databases, without
performing any validation on them. See mariadb command
documentation for details.
example: --defaults-extra-file=mariadb.cnf
restore_options:
type: string
description: |
Additional options to pass directly to the mariadb
command that restores database dumps, without
performing any validation on them. See mariadb command
documentation for details.
example: --defaults-extra-file=mariadb.cnf
description: |
List of one or more MariaDB databases to dump before creating a
backup, run once per configuration file. The database dumps are
added to your source directories at runtime and streamed directly
to Borg. Requires mariadb-dump/mariadb commands. See
https://mariadb.com/kb/en/library/mysqldump/ for details.
mysql_databases:
type: array
items:
@ -893,7 +1170,7 @@ properties:
description: |
Username with which to restore the database. Defaults to
the "username" option.
example: dbuser
example: dbuser
password:
type: string
description: |
@ -906,7 +1183,21 @@ properties:
description: |
Password with which to connect to the restore database.
Defaults to the "password" option.
example: trustsome1
example: trustsome1
mysql_dump_command:
type: string
description: |
Command to use instead of "mysqldump". This can be used
to run a specific mysql_dump version (e.g., one inside a
running container). Defaults to "mysqldump".
example: docker exec mysql_container mysqldump
mysql_command:
type: string
description: |
Command to run instead of "mysql". This can be used to
run a specific mysql version (e.g., one inside a running
container). Defaults to "mysql".
example: docker exec mysql_container mysql
format:
type: string
enum: ['sql']
@ -936,26 +1227,26 @@ properties:
list_options:
type: string
description: |
Additional mysql options to pass directly to the mysql
Additional options to pass directly to the mysql
command that lists available databases, without
performing any validation on them. See mysql
performing any validation on them. See mysql command
documentation for details.
example: --defaults-extra-file=my.cnf
restore_options:
type: string
description: |
Additional mysql options to pass directly to the mysql
command that restores database dumps, without performing
any validation on them. See mysql documentation for
details.
Additional options to pass directly to the mysql
command that restores database dumps, without
performing any validation on them. See mysql command
documentation for details.
example: --defaults-extra-file=my.cnf
description: |
List of one or more MySQL/MariaDB databases to dump before creating
a backup, run once per configuration file. The database dumps are
added to your source directories at runtime, backed up, and removed
afterwards. Requires mysqldump/mysql commands (from either MySQL or
MariaDB). See https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html
or https://mariadb.com/kb/en/library/mysqldump/ for details.
List of one or more MySQL databases to dump before creating a
backup, run once per configuration file. The database dumps are
added to your source directories at runtime and streamed directly
to Borg. Requires mysqldump/mysql commands. See
https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html for
details.
sqlite_databases:
type: array
items:
@ -1021,7 +1312,7 @@ properties:
type: integer
description: |
Port to restore to. Defaults to the "port" option.
example: 5433
example: 5433
username:
type: string
description: |
@ -1033,7 +1324,7 @@ properties:
description: |
Username with which to restore the database. Defaults to
the "username" option.
example: dbuser
example: dbuser
password:
type: string
description: |
@ -1080,8 +1371,8 @@ properties:
description: |
List of one or more MongoDB databases to dump before creating a
backup, run once per configuration file. The database dumps are
added to your source directories at runtime, backed up, and removed
afterwards. Requires mongodump/mongorestore commands. See
added to your source directories at runtime and streamed directly
to Borg. Requires mongodump/mongorestore commands. See
https://docs.mongodb.com/database-tools/mongodump/ and
https://docs.mongodb.com/database-tools/mongorestore/ for details.
ntfy:
@ -1110,6 +1401,12 @@ properties:
description: |
The password used for authentication.
example: fakepassword
access_token:
type: string
description: |
An ntfy access token to authenticate with instead of
username/password.
example: tk_AgQdq7mVBoFD37zQVN29RhuMzNIz2
start:
type: object
properties:
@ -1195,6 +1492,129 @@ properties:
example:
- start
- finish
apprise:
type: object
required: ['services']
additionalProperties: false
properties:
services:
type: array
items:
type: object
required:
- url
- label
properties:
url:
type: string
example: "gotify://hostname/token"
label:
type: string
example: gotify
description: |
A list of Apprise services to publish to with URLs and
labels. The labels are used for logging. A full list of
services and their configuration can be found at
https://github.com/caronc/apprise/wiki.
example:
- url: "kodi://user@hostname"
label: kodi
- url: "line://Token@User"
label: line
send_logs:
type: boolean
description: |
Send borgmatic logs to Apprise services as part of the
"finish", "fail", and "log" states. Defaults to true.
example: false
logs_size_limit:
type: integer
description: |
Number of bytes of borgmatic logs to send to Apprise
services. Set to 0 to send all logs and disable this
truncation. Defaults to 1500.
example: 100000
start:
type: object
required: ['body']
properties:
title:
type: string
description: |
Specify the message title. If left unspecified, no
title is sent.
example: Ping!
body:
type: string
description: |
Specify the message body.
example: Starting backup process.
finish:
type: object
required: ['body']
properties:
title:
type: string
description: |
Specify the message title. If left unspecified, no
title is sent.
example: Ping!
body:
type: string
description: |
Specify the message body.
example: Backups successfully made.
fail:
type: object
required: ['body']
properties:
title:
type: string
description: |
Specify the message title. If left unspecified, no
title is sent.
example: Ping!
body:
type: string
description: |
Specify the message body.
example: Your backups have failed.
log:
type: object
required: ['body']
properties:
title:
type: string
description: |
Specify the message title. If left unspecified, no
title is sent.
example: Ping!
body:
type: string
description: |
Specify the message body.
example: Here is some info about your backups.
states:
type: array
items:
type: string
enum:
- start
- finish
- fail
- log
uniqueItems: true
description: |
List of one or more monitoring states to ping for:
"start", "finish", "fail", and/or "log". Defaults to
pinging for failure only. For each selected state,
corresponding configuration for the message title and body
should be given. If any is left unspecified, a generic
message is emitted instead.
example:
- start
- finish
healthchecks:
type: object
required: ['ping_url']
@ -1242,6 +1662,14 @@ properties:
states.
example:
- finish
create_slug:
type: boolean
description: |
Create the check if it does not exist. Only works with
the slug URL scheme (https://hc-ping.com/<ping-key>/<slug>
as opposed to https://hc-ping.com/<uuid>).
Defaults to false.
example: true
description: |
Configuration for a monitoring integration with Healthchecks. Create
an account at https://healthchecks.io (or self-host Healthchecks) if
@ -1275,7 +1703,7 @@ properties:
example: a177cad45bd374409f78906a810a3074
description: |
Configuration for a monitoring integration with PagerDuty. Create an
account at https://www.pagerduty.com/ if you'd like to use this
account at https://www.pagerduty.com if you'd like to use this
service. See borgmatic monitoring documentation for details.
cronhub:
type: object
@ -1289,6 +1717,36 @@ properties:
ends, or errors.
example: https://cronhub.io/ping/1f5e3410-254c-5587
description: |
Configuration for a monitoring integration with Crunhub. Create an
Configuration for a monitoring integration with Cronhub. Create an
account at https://cronhub.io if you'd like to use this service. See
borgmatic monitoring documentation for details.
loki:
type: object
required: ['url', 'labels']
additionalProperties: false
properties:
url:
type: string
description: |
Grafana loki log URL to notify when a backup begins,
ends, or fails.
example: "http://localhost:3100/loki/api/v1/push"
labels:
type: object
additionalProperties:
type: string
description: |
Allows setting custom labels for the logging stream. At
least one label is required. "__hostname" gets replaced by
the machine hostname automatically. "__config" gets replaced
by just the name of the configuration file. "__config_path"
gets replaced by the full path of the configuration file.
example:
app: "borgmatic"
config: "__config"
hostname: "__hostname"
description: |
Configuration for a monitoring integration with Grafana loki. You
can send the logs to a self-hosted instance or create an account at
https://grafana.com/auth/sign-up/create-user. See borgmatic
monitoring documentation for details.
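Pulling a few of the options added in this schema revision together, here is a hedged sample configuration, loaded with ruamel.yaml just as the code in this diff does; every value below is illustrative rather than a recommendation:

import io

import ruamel.yaml

SAMPLE_CONFIG = '''
repositories:
    - path: ssh://user@backup.example.org/./repo

skip_actions:
    - compact

checks:
    - name: spot
      count_tolerance_percentage: 10
      data_sample_percentage: 1
      data_tolerance_percentage: 0.5

borg_exit_codes:
    - code: 100
      treat_as: error

healthchecks:
    ping_url: https://hc-ping.com/ping-key/backups-slug
    create_slug: true

loki:
    url: http://localhost:3100/loki/api/v1/push
    labels:
        app: borgmatic
        hostname: __hostname
'''

config = ruamel.yaml.YAML(typ='safe').load(io.StringIO(SAMPLE_CONFIG))
print(sorted(config))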

View File

@ -4,7 +4,7 @@ import jsonschema
import ruamel.yaml
import borgmatic.config
from borgmatic.config import environment, load, normalize, override
from borgmatic.config import constants, environment, load, normalize, override
def schema_filename():
@ -97,23 +97,28 @@ def parse_configuration(config_filename, schema_filename, overrides=None, resolv
'checks': ['repository', 'archives'],
}
Also return a sequence of logging.LogRecord instances containing any warnings about the
configuration.
Also return a set of loaded configuration paths and a sequence of logging.LogRecord instances
containing any warnings about the configuration.
Raise FileNotFoundError if the file does not exist, PermissionError if the user does not
have permissions to read the file, or Validation_error if the config does not match the schema.
'''
config_paths = set()
try:
config = load.load_configuration(config_filename)
config = load.load_configuration(config_filename, config_paths)
schema = load.load_configuration(schema_filename)
except (ruamel.yaml.error.YAMLError, RecursionError) as error:
raise Validation_error(config_filename, (str(error),))
override.apply_overrides(config, overrides)
logs = normalize.normalize(config_filename, config)
override.apply_overrides(config, schema, overrides)
constants.apply_constants(config, config.get('constants') if config else {})
if resolve_env:
environment.resolve_env_variables(config)
logs = normalize.normalize(config_filename, config)
try:
validator = jsonschema.Draft7Validator(schema)
except AttributeError: # pragma: no cover
@ -127,7 +132,7 @@ def parse_configuration(config_filename, schema_filename, overrides=None, resolv
apply_logical_validation(config_filename, config)
return config, logs
return config, config_paths, logs
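A hedged sketch of calling the updated parse_configuration(), which now also returns the set of loaded configuration paths (the config file path below is illustrative):

from borgmatic.config import validate

config, config_paths, logs = validate.parse_configuration(
    '/etc/borgmatic/config.yaml', validate.schema_filename()
)

# config_paths covers the main file plus anything pulled in via "!include".
print(sorted(config_paths))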
def normalize_repository_path(repository):
@ -162,11 +167,10 @@ def repositories_match(first, second):
def guard_configuration_contains_repository(repository, configurations):
'''
Given a repository path and a dict mapping from config filename to corresponding parsed config
dict, ensure that the repository is declared exactly once in all of the configurations. If no
dict, ensure that the repository is declared at least once in all of the configurations. If no
repository is given, skip this check.
Raise ValueError if the repository is not found in a configuration, or is declared multiple
times.
Raise ValueError if the repository is not found in any configurations.
'''
if not repository:
return
@ -181,9 +185,7 @@ def guard_configuration_contains_repository(repository, configurations):
)
if count == 0:
raise ValueError(f'Repository {repository} not found in configuration files')
if count > 1:
raise ValueError(f'Repository {repository} found in multiple configuration files')
raise ValueError(f'Repository "{repository}" not found in configuration files')
def guard_single_repository_selected(repository, configurations):

View File

@ -1,29 +1,70 @@
import collections
import enum
import logging
import os
import select
import subprocess
import textwrap
logger = logging.getLogger(__name__)
ERROR_OUTPUT_MAX_LINE_COUNT = 25
BORG_ERROR_EXIT_CODE = 2
BORG_ERROR_EXIT_CODE_START = 2
BORG_ERROR_EXIT_CODE_END = 99
def exit_code_indicates_error(command, exit_code, borg_local_path=None):
class Exit_status(enum.Enum):
STILL_RUNNING = 1
SUCCESS = 2
WARNING = 3
ERROR = 4
def interpret_exit_code(command, exit_code, borg_local_path=None, borg_exit_codes=None):
'''
Return True if the given exit code from running a command corresponds to an error. If a Borg
local path is given and matches the process' command, then treat exit code 1 as a warning
instead of an error.
Return an Exit_status value (e.g. SUCCESS, ERROR, or WARNING) based on interpreting the given
exit code. If a Borg local path is given and matches the process' command, then interpret the
exit code based on Borg's documented exit code semantics. And if Borg exit codes are given as a
sequence of exit code configuration dicts, then take those configured preferences into account.
'''
if exit_code is None:
return False
return Exit_status.STILL_RUNNING
if exit_code == 0:
return Exit_status.SUCCESS
if borg_local_path and command[0] == borg_local_path:
return bool(exit_code < 0 or exit_code >= BORG_ERROR_EXIT_CODE)
# First try looking for the exit code in the borg_exit_codes configuration.
for entry in borg_exit_codes or ():
if entry.get('code') == exit_code:
treat_as = entry.get('treat_as')
return bool(exit_code != 0)
if treat_as == 'error':
logger.error(
f'Treating exit code {exit_code} as an error, as per configuration'
)
return Exit_status.ERROR
elif treat_as == 'warning':
logger.warning(
f'Treating exit code {exit_code} as a warning, as per configuration'
)
return Exit_status.WARNING
# If the exit code doesn't have explicit configuration, then fall back to the default Borg
# behavior.
return (
Exit_status.ERROR
if (
exit_code < 0
or (
exit_code >= BORG_ERROR_EXIT_CODE_START
and exit_code <= BORG_ERROR_EXIT_CODE_END
)
)
else Exit_status.WARNING
)
return Exit_status.ERROR
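A hedged usage sketch combining this function with the "borg_exit_codes" option described in schema.yaml; the Borg path and command are illustrative:

from borgmatic.execute import Exit_status, interpret_exit_code

# Escalate Borg warning exit code 100 to an error, as configured.
borg_exit_codes = [{'code': 100, 'treat_as': 'error'}]

status = interpret_exit_code(
    ('/usr/bin/borg', 'create', '--stats'),
    100,
    borg_local_path='/usr/bin/borg',
    borg_exit_codes=borg_exit_codes,
)
assert status == Exit_status.ERROR

# Without that configuration, 100 falls into Borg's default warning range (1 and 100+).
status = interpret_exit_code(('/usr/bin/borg', 'create'), 100, borg_local_path='/usr/bin/borg')
assert status == Exit_status.WARNING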
def command_for_process(process):
@ -60,7 +101,7 @@ def append_last_lines(last_lines, captured_output, line, output_log_level):
logger.log(output_log_level, line)
def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path):
def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path, borg_exit_codes):
'''
Given a sequence of subprocess.Popen() instances for multiple processes, log the output for each
process with the requested log level. Additionally, raise a CalledProcessError if a process
@ -68,7 +109,8 @@ def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path):
path).
If output log level is None, then instead of logging, capture output for each process and return
it as a dict from the process to its output.
it as a dict from the process to its output. Use the given Borg local path and exit code
configuration to decide what's an error and what's a warning.
For simplicity, it's assumed that the output buffer for each process is its stdout. But if any
stdouts are given to exclude, then for any matching processes, log from their stderr instead.
@ -132,10 +174,13 @@ def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path):
if exit_code is None:
still_running = True
command = process.args.split(' ') if isinstance(process.args, str) else process.args
continue
command = process.args.split(' ') if isinstance(process.args, str) else process.args
# If any process errors, then raise accordingly.
if exit_code_indicates_error(command, exit_code, borg_local_path):
exit_status = interpret_exit_code(command, exit_code, borg_local_path, borg_exit_codes)
if exit_status in (Exit_status.ERROR, Exit_status.WARNING):
# If an error occurs, include its output in the raised exception so that we don't
# inadvertently hide error output.
output_buffer = output_buffer_for_process(process, exclude_stdouts)
@ -161,9 +206,13 @@ def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path):
other_process.stdout.read(0)
other_process.kill()
raise subprocess.CalledProcessError(
exit_code, command_for_process(process), '\n'.join(last_lines)
)
if exit_status == Exit_status.ERROR:
raise subprocess.CalledProcessError(
exit_code, command_for_process(process), '\n'.join(last_lines)
)
still_running = False
break
if captured_outputs:
return {
@ -171,19 +220,47 @@ def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path):
}
def log_command(full_command, input_file=None, output_file=None):
SECRET_COMMAND_FLAG_NAMES = {'--password'}
def mask_command_secrets(full_command):
'''
Given a command as a sequence, mask secret values for flags like "--password" in preparation for
logging.
'''
masked_command = []
previous_piece = None
for piece in full_command:
masked_command.append('***' if previous_piece in SECRET_COMMAND_FLAG_NAMES else piece)
previous_piece = piece
return tuple(masked_command)
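For instance (a hedged sketch; the command and credentials below are made up):

from borgmatic.execute import mask_command_secrets

command = ('mongodump', '--db', 'users', '--username', 'admin', '--password', 'hunter2')
print(mask_command_secrets(command))
# ('mongodump', '--db', 'users', '--username', 'admin', '--password', '***')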
MAX_LOGGED_COMMAND_LENGTH = 1000
def log_command(full_command, input_file=None, output_file=None, environment=None):
'''
Log the given command (a sequence of command/argument strings), along with its input/output file
paths.
paths and extra environment variables (with omitted values in case they contain passwords).
'''
logger.debug(
' '.join(full_command)
textwrap.shorten(
' '.join(
tuple(f'{key}=***' for key in (environment or {}).keys())
+ mask_command_secrets(full_command)
),
width=MAX_LOGGED_COMMAND_LENGTH,
placeholder=' ...',
)
+ (f" < {getattr(input_file, 'name', '')}" if input_file else '')
+ (f" > {getattr(output_file, 'name', '')}" if output_file else '')
)
# An sentinel passed as an output file to execute_command() to indicate that the command's output
# A sentinel passed as an output file to execute_command() to indicate that the command's output
# should be allowed to flow through to stdout without being captured for logging. Useful for
# commands with interactive prompts or those that mess directly with the console.
DO_NOT_CAPTURE = object()
@ -198,6 +275,7 @@ def execute_command(
extra_environment=None,
working_directory=None,
borg_local_path=None,
borg_exit_codes=None,
run_to_completion=True,
):
'''
@ -208,12 +286,13 @@ def execute_command(
augment the current environment, and pass the result into the command. If a working directory is
given, use that as the present working directory when running the command. If a Borg local path
is given, and the command matches it (regardless of arguments), treat exit code 1 as a warning
instead of an error. If run to completion is False, then return the process for the command
without executing it to completion.
instead of an error. But if Borg exit codes are given as a sequence of exit code configuration
dicts, then use that configuration to decide what's an error and what's a warning. If run to
completion is False, then return the process for the command without executing it to completion.
Raise subprocesses.CalledProcessError if an error occurs while running the command.
'''
log_command(full_command, input_file, output_file)
log_command(full_command, input_file, output_file, extra_environment)
environment = {**os.environ, **extra_environment} if extra_environment else None
do_not_capture = bool(output_file is DO_NOT_CAPTURE)
command = ' '.join(full_command) if shell else full_command
@ -231,7 +310,11 @@ def execute_command(
return process
log_outputs(
(process,), (input_file, output_file), output_log_level, borg_local_path=borg_local_path
(process,),
(input_file, output_file),
output_log_level,
borg_local_path,
borg_exit_codes,
)
@ -242,6 +325,7 @@ def execute_command_and_capture_output(
extra_environment=None,
working_directory=None,
borg_local_path=None,
borg_exit_codes=None,
):
'''
Execute the given command (a sequence of command/argument strings), capturing and returning its
@ -250,11 +334,13 @@ def execute_command_and_capture_output(
given, then use it to augment the current environment, and pass the result into the command. If
a working directory is given, use that as the present working directory when running the
command. If a Borg local path is given, and the command matches it (regardless of arguments),
treat exit code 1 as a warning instead of an error.
treat exit code 1 as a warning instead of an error. But if Borg exit codes are given as a
sequence of exit code configuration dicts, then use that configuration to decide what's an error
and what's a warning.
Raise subprocesses.CalledProcessError if an error occurs while running the command.
'''
log_command(full_command)
log_command(full_command, environment=extra_environment)
environment = {**os.environ, **extra_environment} if extra_environment else None
command = ' '.join(full_command) if shell else full_command
@ -267,7 +353,10 @@ def execute_command_and_capture_output(
cwd=working_directory,
)
except subprocess.CalledProcessError as error:
if exit_code_indicates_error(command, error.returncode, borg_local_path):
if (
interpret_exit_code(command, error.returncode, borg_local_path, borg_exit_codes)
== Exit_status.ERROR
):
raise
output = error.output
@ -284,6 +373,7 @@ def execute_command_with_processes(
extra_environment=None,
working_directory=None,
borg_local_path=None,
borg_exit_codes=None,
):
'''
Execute the given command (a sequence of command/argument strings) and log its output at the
@ -298,12 +388,14 @@ def execute_command_with_processes(
use it to augment the current environment, and pass the result into the command. If a working
directory is given, use that as the present working directory when running the command. If a
Borg local path is given, then for any matching command or process (regardless of arguments),
treat exit code 1 as a warning instead of an error.
treat exit code 1 as a warning instead of an error. But if Borg exit codes are given as a
sequence of exit code configuration dicts, then use that configuration to decide what's an error
and what's a warning.
Raise subprocesses.CalledProcessError if an error occurs while running the command or in the
upstream process.
'''
log_command(full_command, input_file, output_file)
log_command(full_command, input_file, output_file, extra_environment)
environment = {**os.environ, **extra_environment} if extra_environment else None
do_not_capture = bool(output_file is DO_NOT_CAPTURE)
command = ' '.join(full_command) if shell else full_command
@ -313,9 +405,9 @@ def execute_command_with_processes(
command,
stdin=input_file,
stdout=None if do_not_capture else (output_file or subprocess.PIPE),
stderr=None
if do_not_capture
else (subprocess.PIPE if output_file else subprocess.STDOUT),
stderr=(
None if do_not_capture else (subprocess.PIPE if output_file else subprocess.STDOUT)
),
shell=shell,
env=environment,
cwd=working_directory,
@ -333,7 +425,8 @@ def execute_command_with_processes(
tuple(processes) + (command_process,),
(input_file, output_file),
output_log_level,
borg_local_path=borg_local_path,
borg_local_path,
borg_exit_codes,
)
if output_log_level is None:

borgmatic/hooks/apprise.py (new file, 109 lines)
View File

@ -0,0 +1,109 @@
import logging
import operator
import borgmatic.hooks.logs
import borgmatic.hooks.monitor
logger = logging.getLogger(__name__)
DEFAULT_LOGS_SIZE_LIMIT_BYTES = 100000
HANDLER_IDENTIFIER = 'apprise'
def initialize_monitor(hook_config, config, config_filename, monitoring_log_level, dry_run):
'''
Add a handler to the root logger that stores in memory the most recent logs emitted. That way,
we can send them all to an Apprise notification service upon a finish or failure state. But skip
this if the "send_logs" option is false.
'''
if hook_config.get('send_logs') is False:
return
logs_size_limit = max(
hook_config.get('logs_size_limit', DEFAULT_LOGS_SIZE_LIMIT_BYTES)
- len(borgmatic.hooks.logs.PAYLOAD_TRUNCATION_INDICATOR),
0,
)
borgmatic.hooks.logs.add_handler(
borgmatic.hooks.logs.Forgetful_buffering_handler(
HANDLER_IDENTIFIER, logs_size_limit, monitoring_log_level
)
)
def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run):
'''
Ping the configured Apprise service URLs. Use the given configuration filename in any log
entries. If this is a dry run, then don't actually ping anything.
'''
try:
import apprise
from apprise import NotifyFormat, NotifyType
except ImportError: # pragma: no cover
logger.warning('Unable to import Apprise in monitoring hook')
return
state_to_notify_type = {
'start': NotifyType.INFO,
'finish': NotifyType.SUCCESS,
'fail': NotifyType.FAILURE,
'log': NotifyType.INFO,
}
run_states = hook_config.get('states', ['fail'])
if state.name.lower() not in run_states:
return
state_config = hook_config.get(
state.name.lower(),
{
'title': f'A borgmatic {state.name} event happened',
'body': f'A borgmatic {state.name} event happened',
},
)
if not hook_config.get('services'):
logger.info(f'{config_filename}: No Apprise services to ping')
return
dry_run_string = ' (dry run; not actually pinging)' if dry_run else ''
labels_string = ', '.join(map(operator.itemgetter('label'), hook_config.get('services')))
logger.info(f'{config_filename}: Pinging Apprise services: {labels_string}{dry_run_string}')
apprise_object = apprise.Apprise()
apprise_object.add(list(map(operator.itemgetter('url'), hook_config.get('services'))))
if dry_run:
return
body = state_config.get('body')
if state in (
borgmatic.hooks.monitor.State.FINISH,
borgmatic.hooks.monitor.State.FAIL,
borgmatic.hooks.monitor.State.LOG,
):
formatted_logs = borgmatic.hooks.logs.format_buffered_logs_for_payload(HANDLER_IDENTIFIER)
if formatted_logs:
body += f'\n\n{formatted_logs}'
result = apprise_object.notify(
title=state_config.get('title', ''),
body=body,
body_format=NotifyFormat.TEXT,
notify_type=state_to_notify_type[state.name.lower()],
)
if result is False:
logger.warning(f'{config_filename}: Error sending some Apprise notifications')
def destroy_monitor(hook_config, config, config_filename, monitoring_log_level, dry_run):
'''
Remove the monitor handler that was added to the root logger. This prevents the handler from
getting reused by other instances of this monitor.
'''
borgmatic.hooks.logs.remove_handler(HANDLER_IDENTIFIER)

View File

@ -1,6 +1,7 @@
import logging
import os
import re
import shlex
from borgmatic import execute
@ -16,7 +17,7 @@ def interpolate_context(config_filename, hook_description, command, context):
names/values, interpolate the values by "{name}" into the command and return the result.
'''
for name, value in context.items():
command = command.replace(f'{{{name}}}', str(value))
command = command.replace(f'{{{name}}}', shlex.quote(str(value)))
for unsupported_variable in re.findall(r'{\w+}', command):
logger.warning(
@ -67,9 +68,9 @@ def execute_hook(commands, umask, config_filename, description, dry_run, **conte
if not dry_run:
execute.execute_command(
[command],
output_log_level=logging.ERROR
if description == 'on-error'
else logging.WARNING,
output_log_level=(
logging.ERROR if description == 'on-error' else logging.WARNING
),
shell=True,
)
finally:

View File

@ -1,9 +1,12 @@
import logging
from borgmatic.hooks import (
apprise,
cronhub,
cronitor,
healthchecks,
loki,
mariadb,
mongodb,
mysql,
ntfy,
@ -15,15 +18,18 @@ from borgmatic.hooks import (
logger = logging.getLogger(__name__)
HOOK_NAME_TO_MODULE = {
'apprise': apprise,
'cronhub': cronhub,
'cronitor': cronitor,
'healthchecks': healthchecks,
'mariadb_databases': mariadb,
'mongodb_databases': mongodb,
'mysql_databases': mysql,
'ntfy': ntfy,
'pagerduty': pagerduty,
'postgresql_databases': postgresql,
'sqlite_databases': sqlite,
'loki': loki,
}

View File

@ -6,34 +6,35 @@ from borgmatic.borg.state import DEFAULT_BORGMATIC_SOURCE_DIRECTORY
logger = logging.getLogger(__name__)
DATABASE_HOOK_NAMES = (
'postgresql_databases',
DATA_SOURCE_HOOK_NAMES = (
'mariadb_databases',
'mysql_databases',
'mongodb_databases',
'postgresql_databases',
'sqlite_databases',
)
def make_database_dump_path(borgmatic_source_directory, database_hook_name):
def make_data_source_dump_path(borgmatic_source_directory, data_source_hook_name):
'''
Given a borgmatic source directory (or None) and a database hook name, construct a database dump
path.
Given a borgmatic source directory (or None) and a data source hook name, construct a data
source dump path.
'''
if not borgmatic_source_directory:
borgmatic_source_directory = DEFAULT_BORGMATIC_SOURCE_DIRECTORY
return os.path.join(borgmatic_source_directory, database_hook_name)
return os.path.join(borgmatic_source_directory, data_source_hook_name)
def make_database_dump_filename(dump_path, name, hostname=None):
def make_data_source_dump_filename(dump_path, name, hostname=None):
'''
Based on the given dump directory path, database name, and hostname, return a filename to use
for the database dump. The hostname defaults to localhost.
Based on the given dump directory path, data source name, and hostname, return a filename to use
for the data source dump. The hostname defaults to localhost.
Raise ValueError if the database name is invalid.
Raise ValueError if the data source name is invalid.
'''
if os.path.sep in name:
raise ValueError(f'Invalid database name {name}')
raise ValueError(f'Invalid data source name {name}')
return os.path.join(os.path.expanduser(dump_path), hostname or 'localhost', name)
@ -53,14 +54,14 @@ def create_named_pipe_for_dump(dump_path):
os.mkfifo(dump_path, mode=0o600)
def remove_database_dumps(dump_path, database_type_name, log_prefix, dry_run):
def remove_data_source_dumps(dump_path, data_source_type_name, log_prefix, dry_run):
'''
Remove all database dumps in the given dump directory path (including the directory itself). If
this is a dry run, then don't actually remove anything.
Remove all data source dumps in the given dump directory path (including the directory itself).
If this is a dry run, then don't actually remove anything.
'''
dry_run_label = ' (dry run; not actually removing anything)' if dry_run else ''
logger.debug(f'{log_prefix}: Removing {database_type_name} database dumps{dry_run_label}')
logger.debug(f'{log_prefix}: Removing {data_source_type_name} data source dumps{dry_run_label}')
expanded_path = os.path.expanduser(dump_path)
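
A small sketch of the renamed dump helpers with hypothetical arguments, assuming the default borgmatic source directory of `~/.borgmatic`:

```python
import os

# The dump path combines the borgmatic source directory and the hook name, and
# the dump filename nests under a hostname directory (defaulting to localhost).
def make_data_source_dump_path(borgmatic_source_directory, data_source_hook_name):
    return os.path.join(borgmatic_source_directory or '~/.borgmatic', data_source_hook_name)

def make_data_source_dump_filename(dump_path, name, hostname=None):
    if os.path.sep in name:
        raise ValueError(f'Invalid data source name {name}')
    return os.path.join(os.path.expanduser(dump_path), hostname or 'localhost', name)

dump_path = make_data_source_dump_path(None, 'postgresql_databases')
print(make_data_source_dump_filename(dump_path, 'users', 'db.example.org'))
# Prints something like: /root/.borgmatic/postgresql_databases/db.example.org/users
```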

View File

@ -1,7 +1,9 @@
import logging
import re
import requests
import borgmatic.hooks.logs
from borgmatic.hooks import monitor
logger = logging.getLogger(__name__)
@ -13,61 +15,8 @@ MONITOR_STATE_TO_HEALTHCHECKS = {
monitor.State.LOG: 'log',
}
PAYLOAD_TRUNCATION_INDICATOR = '...\n'
DEFAULT_PING_BODY_LIMIT_BYTES = 100000
class Forgetful_buffering_handler(logging.Handler):
'''
A buffering log handler that stores log messages in memory, and throws away messages (oldest
first) once a particular capacity in bytes is reached. But if the given byte capacity is zero,
don't throw away any messages.
'''
def __init__(self, byte_capacity, log_level):
super().__init__()
self.byte_capacity = byte_capacity
self.byte_count = 0
self.buffer = []
self.forgot = False
self.setLevel(log_level)
def emit(self, record):
message = record.getMessage() + '\n'
self.byte_count += len(message)
self.buffer.append(message)
if not self.byte_capacity:
return
while self.byte_count > self.byte_capacity and self.buffer:
self.byte_count -= len(self.buffer[0])
self.buffer.pop(0)
self.forgot = True
def format_buffered_logs_for_payload():
'''
Get the handler previously added to the root logger, and slurp buffered logs out of it to
send to Healthchecks.
'''
try:
buffering_handler = next(
handler
for handler in logging.getLogger().handlers
if isinstance(handler, Forgetful_buffering_handler)
)
except StopIteration:
# No handler means no payload.
return ''
payload = ''.join(message for message in buffering_handler.buffer)
if buffering_handler.forgot:
return PAYLOAD_TRUNCATION_INDICATOR + payload
return payload
DEFAULT_PING_BODY_LIMIT_BYTES = 1500
HANDLER_IDENTIFIER = 'healthchecks'
def initialize_monitor(hook_config, config, config_filename, monitoring_log_level, dry_run):
@ -81,12 +30,14 @@ def initialize_monitor(hook_config, config, config_filename, monitoring_log_leve
ping_body_limit = max(
hook_config.get('ping_body_limit', DEFAULT_PING_BODY_LIMIT_BYTES)
- len(PAYLOAD_TRUNCATION_INDICATOR),
- len(borgmatic.hooks.logs.PAYLOAD_TRUNCATION_INDICATOR),
0,
)
logging.getLogger().addHandler(
Forgetful_buffering_handler(ping_body_limit, monitoring_log_level)
borgmatic.hooks.logs.add_handler(
borgmatic.hooks.logs.Forgetful_buffering_handler(
HANDLER_IDENTIFIER, ping_body_limit, monitoring_log_level
)
)
@ -109,15 +60,25 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_lev
)
return
ping_url_is_uuid = re.search(r'\w{8}-\w{4}-\w{4}-\w{4}-\w{12}$', ping_url)
healthchecks_state = MONITOR_STATE_TO_HEALTHCHECKS.get(state)
if healthchecks_state:
ping_url = f'{ping_url}/{healthchecks_state}'
if hook_config.get('create_slug'):
if ping_url_is_uuid:
logger.warning(
f'{config_filename}: Healthchecks UUIDs do not support auto-provisioning; ignoring'
)
else:
ping_url = f'{ping_url}?create=1'
logger.info(f'{config_filename}: Pinging Healthchecks {state.name.lower()}{dry_run_label}')
logger.debug(f'{config_filename}: Using Healthchecks ping URL {ping_url}')
if state in (monitor.State.FINISH, monitor.State.FAIL, monitor.State.LOG):
payload = format_buffered_logs_for_payload()
payload = borgmatic.hooks.logs.format_buffered_logs_for_payload(HANDLER_IDENTIFIER)
else:
payload = ''
@ -138,8 +99,4 @@ def destroy_monitor(hook_config, config, config_filename, monitoring_log_level,
Remove the monitor handler that was added to the root logger. This prevents the handler from
getting reused by other instances of this monitor.
'''
logger = logging.getLogger()
for handler in tuple(logger.handlers):
if isinstance(handler, Forgetful_buffering_handler):
logger.removeHandler(handler)
borgmatic.hooks.logs.remove_handler(HANDLER_IDENTIFIER)
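
A sketch of the `create_slug` behavior above with illustrative ping URLs (the UUID is a placeholder, not a real check): a slug-scheme URL gets `?create=1` appended so Healthchecks auto-provisions the check, while a UUID-scheme URL is left alone with a warning.

```python
import re

for ping_url in (
    'https://hc-ping.com/your-ping-key/my-backups',               # slug scheme
    'https://hc-ping.com/00000000-0000-0000-0000-000000000000',   # UUID scheme
):
    ping_url_is_uuid = re.search(r'\w{8}-\w{4}-\w{4}-\w{4}-\w{12}$', ping_url)

    if ping_url_is_uuid:
        print(f'{ping_url}  (unchanged; UUIDs do not support auto-provisioning)')
    else:
        print(f'{ping_url}?create=1')
```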

91
borgmatic/hooks/logs.py Normal file
View File

@ -0,0 +1,91 @@
import logging
PAYLOAD_TRUNCATION_INDICATOR = '...\n'
class Forgetful_buffering_handler(logging.Handler):
'''
A buffering log handler that stores log messages in memory, and throws away messages (oldest
first) once a particular capacity in bytes is reached. But if the given byte capacity is zero,
don't throw away any messages.
The given identifier is used to distinguish the instance of this handler used for one monitoring
hook from those instances used for other monitoring hooks.
'''
def __init__(self, identifier, byte_capacity, log_level):
super().__init__()
self.identifier = identifier
self.byte_capacity = byte_capacity
self.byte_count = 0
self.buffer = []
self.forgot = False
self.setLevel(log_level)
def emit(self, record):
message = record.getMessage() + '\n'
self.byte_count += len(message)
self.buffer.append(message)
if not self.byte_capacity:
return
while self.byte_count > self.byte_capacity and self.buffer:
self.byte_count -= len(self.buffer[0])
self.buffer.pop(0)
self.forgot = True
def add_handler(handler): # pragma: no cover
'''
Add the given handler to the global logger.
'''
logging.getLogger().addHandler(handler)
def get_handler(identifier):
'''
Given the identifier for an existing Forgetful_buffering_handler instance, return the handler.
Raise ValueError if the handler isn't found.
'''
try:
return next(
handler
for handler in logging.getLogger().handlers
if isinstance(handler, Forgetful_buffering_handler) and handler.identifier == identifier
)
except StopIteration:
raise ValueError(f'A buffering handler for {identifier} was not found')
def format_buffered_logs_for_payload(identifier):
'''
Get the handler previously added to the root logger, and slurp buffered logs out of it to
send to a monitoring service such as Healthchecks.
'''
try:
buffering_handler = get_handler(identifier)
except ValueError:
# No handler means no payload.
return ''
payload = ''.join(message for message in buffering_handler.buffer)
if buffering_handler.forgot:
return PAYLOAD_TRUNCATION_INDICATOR + payload
return payload
def remove_handler(identifier):
'''
Given the identifier for an existing Forgetful_buffering_handler instance, remove it.
'''
logger = logging.getLogger()
try:
logger.removeHandler(get_handler(identifier))
except ValueError:
pass
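
A minimal usage sketch of the new logs module (the identifier and byte capacity are hypothetical): buffer log output under an identifier, slurp it back out for a ping payload, then clean up the handler.

```python
import logging

import borgmatic.hooks.logs

handler = borgmatic.hooks.logs.Forgetful_buffering_handler(
    'example-monitor', 1000, logging.INFO
)
borgmatic.hooks.logs.add_handler(handler)

# Anything logged at or above the handler's level gets buffered.
logging.getLogger(__name__).warning('Something worth including in a ping payload')

payload = borgmatic.hooks.logs.format_buffered_logs_for_payload('example-monitor')
print(payload)

borgmatic.hooks.logs.remove_handler('example-monitor')
```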

154
borgmatic/hooks/loki.py Normal file
View File

@ -0,0 +1,154 @@
import json
import logging
import os
import platform
import time
import requests
from borgmatic.hooks import monitor
logger = logging.getLogger(__name__)
MONITOR_STATE_TO_LOKI = {
monitor.State.START: 'Started',
monitor.State.FINISH: 'Finished',
monitor.State.FAIL: 'Failed',
}
# Threshold at which logs get flushed to loki
MAX_BUFFER_LINES = 100
class Loki_log_buffer:
'''
A log buffer that allows outputting the logs as loki requests in JSON. Allows
adding labels to the log stream and takes care of communication with loki.
'''
def __init__(self, url, dry_run):
self.url = url
self.dry_run = dry_run
self.root = {'streams': [{'stream': {}, 'values': []}]}
def add_value(self, value):
'''
Add a log entry to the stream.
'''
timestamp = str(time.time_ns())
self.root['streams'][0]['values'].append((timestamp, value))
def add_label(self, label, value):
'''
Add a label to the logging stream.
'''
self.root['streams'][0]['stream'][label] = value
def to_request(self):
return json.dumps(self.root)
def __len__(self):
'''
Gets the number of lines currently in the buffer.
'''
return len(self.root['streams'][0]['values'])
def flush(self):
if self.dry_run:
# Just empty the buffer and skip
self.root['streams'][0]['values'] = []
logger.info('Skipped uploading logs to loki due to dry run')
return
if len(self) == 0:
# Skip as there are no logs to send yet
return
request_body = self.to_request()
self.root['streams'][0]['values'] = []
request_header = {'Content-Type': 'application/json'}
try:
result = requests.post(self.url, headers=request_header, data=request_body, timeout=5)
result.raise_for_status()
except requests.RequestException:
logger.warning('Failed to upload logs to loki')
class Loki_log_handler(logging.Handler):
'''
A log handler that sends logs to loki.
'''
def __init__(self, url, dry_run):
super().__init__()
self.buffer = Loki_log_buffer(url, dry_run)
def emit(self, record):
'''
Add a log record from the logging module to the stream.
'''
self.raw(record.getMessage())
def add_label(self, key, value):
'''
Add a label to the logging stream.
'''
self.buffer.add_label(key, value)
def raw(self, msg):
'''
Add an arbitrary string as a log entry to the stream.
'''
self.buffer.add_value(msg)
if len(self.buffer) > MAX_BUFFER_LINES:
self.buffer.flush()
def flush(self):
'''
Send the logs to loki and empty the buffer.
'''
self.buffer.flush()
def initialize_monitor(hook_config, config, config_filename, monitoring_log_level, dry_run):
'''
Add a handler to the root logger to regularly send the logs to loki.
'''
url = hook_config.get('url')
loki = Loki_log_handler(url, dry_run)
for key, value in hook_config.get('labels').items():
if value == '__hostname':
loki.add_label(key, platform.node())
elif value == '__config':
loki.add_label(key, os.path.basename(config_filename))
elif value == '__config_path':
loki.add_label(key, config_filename)
else:
loki.add_label(key, value)
logging.getLogger().addHandler(loki)
def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run):
'''
Add an entry to the loki logger with the current state.
'''
for handler in tuple(logging.getLogger().handlers):
if isinstance(handler, Loki_log_handler):
if state in MONITOR_STATE_TO_LOKI.keys():
handler.raw(f'{config_filename}: {MONITOR_STATE_TO_LOKI[state]} backup')
def destroy_monitor(hook_config, config, config_filename, monitoring_log_level, dry_run):
'''
Remove the monitor handler that was added to the root logger.
'''
logger = logging.getLogger()
for handler in tuple(logger.handlers):
if isinstance(handler, Loki_log_handler):
handler.flush()
logger.removeHandler(handler)
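
A sketch of the request body that `Loki_log_buffer.to_request()` produces: a single stream carrying the configured labels plus (timestamp, message) pairs, in the format Loki's push API expects. The label values here are hypothetical.

```python
import json
import time

body = {
    'streams': [
        {
            'stream': {'hostname': 'backup-host.example.org', 'config': 'config.yaml'},
            'values': [(str(time.time_ns()), 'config.yaml: Started backup')],
        }
    ]
}

print(json.dumps(body))
```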

257
borgmatic/hooks/mariadb.py Normal file
View File

@ -0,0 +1,257 @@
import copy
import logging
import os
import shlex
from borgmatic.execute import (
execute_command,
execute_command_and_capture_output,
execute_command_with_processes,
)
from borgmatic.hooks import dump
logger = logging.getLogger(__name__)
def make_dump_path(config): # pragma: no cover
'''
Make the dump path from the given configuration dict and the name of this hook.
'''
return dump.make_data_source_dump_path(
config.get('borgmatic_source_directory'), 'mariadb_databases'
)
SYSTEM_DATABASE_NAMES = ('information_schema', 'mysql', 'performance_schema', 'sys')
def database_names_to_dump(database, extra_environment, log_prefix, dry_run):
'''
Given a requested database config, return the corresponding sequence of database names to dump.
In the case of "all", query for the names of databases on the configured host and return them,
excluding any system databases that will cause problems during restore.
'''
if database['name'] != 'all':
return (database['name'],)
if dry_run:
return ()
mariadb_show_command = tuple(
shlex.quote(part) for part in shlex.split(database.get('mariadb_command') or 'mariadb')
)
show_command = (
mariadb_show_command
+ (tuple(database['list_options'].split(' ')) if 'list_options' in database else ())
+ (('--host', database['hostname']) if 'hostname' in database else ())
+ (('--port', str(database['port'])) if 'port' in database else ())
+ (('--protocol', 'tcp') if 'hostname' in database or 'port' in database else ())
+ (('--user', database['username']) if 'username' in database else ())
+ ('--skip-column-names', '--batch')
+ ('--execute', 'show schemas')
)
logger.debug(f'{log_prefix}: Querying for "all" MariaDB databases to dump')
show_output = execute_command_and_capture_output(
show_command, extra_environment=extra_environment
)
return tuple(
show_name
for show_name in show_output.strip().splitlines()
if show_name not in SYSTEM_DATABASE_NAMES
)
def execute_dump_command(
database, log_prefix, dump_path, database_names, extra_environment, dry_run, dry_run_label
):
'''
Kick off a dump for the given MariaDB database (provided as a configuration dict) to a named
pipe constructed from the given dump path and database name. Use the given log prefix in any
log entries.
Return a subprocess.Popen instance for the dump process ready to spew to a named pipe. But if
this is a dry run, then don't actually dump anything and return None.
'''
database_name = database['name']
dump_filename = dump.make_data_source_dump_filename(
dump_path, database['name'], database.get('hostname')
)
if os.path.exists(dump_filename):
logger.warning(
f'{log_prefix}: Skipping duplicate dump of MariaDB database "{database_name}" to {dump_filename}'
)
return None
mariadb_dump_command = tuple(
shlex.quote(part)
for part in shlex.split(database.get('mariadb_dump_command') or 'mariadb-dump')
)
dump_command = (
mariadb_dump_command
+ (tuple(database['options'].split(' ')) if 'options' in database else ())
+ (('--add-drop-database',) if database.get('add_drop_database', True) else ())
+ (('--host', database['hostname']) if 'hostname' in database else ())
+ (('--port', str(database['port'])) if 'port' in database else ())
+ (('--protocol', 'tcp') if 'hostname' in database or 'port' in database else ())
+ (('--user', database['username']) if 'username' in database else ())
+ ('--databases',)
+ database_names
+ ('--result-file', dump_filename)
)
logger.debug(
f'{log_prefix}: Dumping MariaDB database "{database_name}" to {dump_filename}{dry_run_label}'
)
if dry_run:
return None
dump.create_named_pipe_for_dump(dump_filename)
return execute_command(
dump_command,
extra_environment=extra_environment,
run_to_completion=False,
)
def use_streaming(databases, config, log_prefix):
'''
Given a sequence of MariaDB database configuration dicts, a configuration dict (ignored), and a
log prefix (ignored), return whether streaming will be used during dumps.
'''
return any(databases)
def dump_data_sources(databases, config, log_prefix, dry_run):
'''
Dump the given MariaDB databases to a named pipe. The databases are supplied as a sequence of
dicts, one dict describing each database as per the configuration schema. Use the given
configuration dict to construct the destination path and the given log prefix in any log
entries.
Return a sequence of subprocess.Popen instances for the dump processes ready to spew to a named
pipe. But if this is a dry run, then don't actually dump anything and return an empty sequence.
'''
dry_run_label = ' (dry run; not actually dumping anything)' if dry_run else ''
processes = []
logger.info(f'{log_prefix}: Dumping MariaDB databases{dry_run_label}')
for database in databases:
dump_path = make_dump_path(config)
extra_environment = {'MYSQL_PWD': database['password']} if 'password' in database else None
dump_database_names = database_names_to_dump(
database, extra_environment, log_prefix, dry_run
)
if not dump_database_names:
if dry_run:
continue
raise ValueError('Cannot find any MariaDB databases to dump.')
if database['name'] == 'all' and database.get('format'):
for dump_name in dump_database_names:
renamed_database = copy.copy(database)
renamed_database['name'] = dump_name
processes.append(
execute_dump_command(
renamed_database,
log_prefix,
dump_path,
(dump_name,),
extra_environment,
dry_run,
dry_run_label,
)
)
else:
processes.append(
execute_dump_command(
database,
log_prefix,
dump_path,
dump_database_names,
extra_environment,
dry_run,
dry_run_label,
)
)
return [process for process in processes if process]
def remove_data_source_dumps(databases, config, log_prefix, dry_run): # pragma: no cover
'''
Remove all database dump files for this hook regardless of the given databases. Use the given
configuration dict to construct the destination path and the log prefix in any log entries. If
this is a dry run, then don't actually remove anything.
'''
dump.remove_data_source_dumps(make_dump_path(config), 'MariaDB', log_prefix, dry_run)
def make_data_source_dump_pattern(databases, config, log_prefix, name=None): # pragma: no cover
'''
Given a sequence of configuration dicts, a configuration dict, a prefix to log with, and a
database name to match, return the corresponding glob patterns to match the database dump in an
archive.
'''
return dump.make_data_source_dump_filename(make_dump_path(config), name, hostname='*')
def restore_data_source_dump(
hook_config, config, log_prefix, data_source, dry_run, extract_process, connection_params
):
'''
Restore a database from the given extract stream. The database is supplied as a data source
configuration dict, but the given hook configuration is ignored. The given configuration dict is
used to construct the destination path, and the given log prefix is used for any log entries. If
this is a dry run, then don't actually restore anything. Trigger the given active extract
process (an instance of subprocess.Popen) to produce output to consume.
'''
dry_run_label = ' (dry run; not actually restoring anything)' if dry_run else ''
hostname = connection_params['hostname'] or data_source.get(
'restore_hostname', data_source.get('hostname')
)
port = str(
connection_params['port'] or data_source.get('restore_port', data_source.get('port', ''))
)
username = connection_params['username'] or data_source.get(
'restore_username', data_source.get('username')
)
password = connection_params['password'] or data_source.get(
'restore_password', data_source.get('password')
)
mariadb_restore_command = tuple(
shlex.quote(part) for part in shlex.split(data_source.get('mariadb_command') or 'mariadb')
)
restore_command = (
mariadb_restore_command
+ ('--batch',)
+ (
tuple(data_source['restore_options'].split(' '))
if 'restore_options' in data_source
else ()
)
+ (('--host', hostname) if hostname else ())
+ (('--port', str(port)) if port else ())
+ (('--protocol', 'tcp') if hostname or port else ())
+ (('--user', username) if username else ())
)
extra_environment = {'MYSQL_PWD': password} if password else None
logger.debug(f"{log_prefix}: Restoring MariaDB database {data_source['name']}{dry_run_label}")
if dry_run:
return
# Don't give Borg local path so as to error on warnings, as "borg extract" only gives a warning
# if the restore paths don't exist in the archive.
execute_command_with_processes(
restore_command,
[extract_process],
output_log_level=logging.DEBUG,
input_file=extract_process.stdout,
extra_environment=extra_environment,
)
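
A sketch (hypothetical configuration values) of the listing command that `database_names_to_dump()` builds when a MariaDB database is configured with the name "all":

```python
database = {'name': 'all', 'hostname': 'db.example.org', 'port': 3307, 'username': 'root'}

show_command = (
    ('mariadb',)
    + ('--host', database['hostname'])
    + ('--port', str(database['port']))
    + ('--protocol', 'tcp')
    + ('--user', database['username'])
    + ('--skip-column-names', '--batch')
    + ('--execute', 'show schemas')
)

print(' '.join(show_command))
```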

View File

@ -1,4 +1,5 @@
import logging
import shlex
from borgmatic.execute import execute_command, execute_command_with_processes
from borgmatic.hooks import dump
@ -10,12 +11,20 @@ def make_dump_path(config): # pragma: no cover
'''
Make the dump path from the given configuration dict and the name of this hook.
'''
return dump.make_database_dump_path(
return dump.make_data_source_dump_path(
config.get('borgmatic_source_directory'), 'mongodb_databases'
)
def dump_databases(databases, config, log_prefix, dry_run):
def use_streaming(databases, config, log_prefix):
'''
Given a sequence of MongoDB database configuration dicts, a configuration dict (ignored), and a
log prefix (ignored), return whether streaming will be used during dumps.
'''
return any(database.get('format') != 'directory' for database in databases)
def dump_data_sources(databases, config, log_prefix, dry_run):
'''
Dump the given MongoDB databases to a named pipe. The databases are supplied as a sequence of
dicts, one dict describing each database as per the configuration schema. Use the configuration
@ -31,7 +40,7 @@ def dump_databases(databases, config, log_prefix, dry_run):
processes = []
for database in databases:
name = database['name']
dump_filename = dump.make_database_dump_filename(
dump_filename = dump.make_data_source_dump_filename(
make_dump_path(config), name, database.get('hostname')
)
dump_format = database.get('format', 'archive')
@ -59,81 +68,69 @@ def build_dump_command(database, dump_filename, dump_format):
Return the mongodump command from a single database configuration.
'''
all_databases = database['name'] == 'all'
command = ['mongodump']
if dump_format == 'directory':
command.extend(('--out', dump_filename))
if 'hostname' in database:
command.extend(('--host', database['hostname']))
if 'port' in database:
command.extend(('--port', str(database['port'])))
if 'username' in database:
command.extend(('--username', database['username']))
if 'password' in database:
command.extend(('--password', database['password']))
if 'authentication_database' in database:
command.extend(('--authenticationDatabase', database['authentication_database']))
if not all_databases:
command.extend(('--db', database['name']))
if 'options' in database:
command.extend(database['options'].split(' '))
if dump_format != 'directory':
command.extend(('--archive', '>', dump_filename))
return command
return (
('mongodump',)
+ (('--out', shlex.quote(dump_filename)) if dump_format == 'directory' else ())
+ (('--host', shlex.quote(database['hostname'])) if 'hostname' in database else ())
+ (('--port', shlex.quote(str(database['port']))) if 'port' in database else ())
+ (('--username', shlex.quote(database['username'])) if 'username' in database else ())
+ (('--password', shlex.quote(database['password'])) if 'password' in database else ())
+ (
('--authenticationDatabase', shlex.quote(database['authentication_database']))
if 'authentication_database' in database
else ()
)
+ (('--db', shlex.quote(database['name'])) if not all_databases else ())
+ (
tuple(shlex.quote(option) for option in database['options'].split(' '))
if 'options' in database
else ()
)
+ (('--archive', '>', shlex.quote(dump_filename)) if dump_format != 'directory' else ())
)
def remove_database_dumps(databases, config, log_prefix, dry_run): # pragma: no cover
def remove_data_source_dumps(databases, config, log_prefix, dry_run): # pragma: no cover
'''
Remove all database dump files for this hook regardless of the given databases. Use the log
prefix in any log entries. Use the given configuration dict to construct the destination path.
If this is a dry run, then don't actually remove anything.
'''
dump.remove_database_dumps(make_dump_path(config), 'MongoDB', log_prefix, dry_run)
dump.remove_data_source_dumps(make_dump_path(config), 'MongoDB', log_prefix, dry_run)
def make_database_dump_pattern(databases, config, log_prefix, name=None): # pragma: no cover
def make_data_source_dump_pattern(databases, config, log_prefix, name=None): # pragma: no cover
'''
Given a sequence of database configuration dicts, a configuration dict, a prefix to log with,
and a database name to match, return the corresponding glob patterns to match the database dump
in an archive.
'''
return dump.make_database_dump_filename(make_dump_path(config), name, hostname='*')
return dump.make_data_source_dump_filename(make_dump_path(config), name, hostname='*')
def restore_database_dump(
databases_config, config, log_prefix, database_name, dry_run, extract_process, connection_params
def restore_data_source_dump(
hook_config, config, log_prefix, data_source, dry_run, extract_process, connection_params
):
'''
Restore the given MongoDB database from an extract stream. The databases are supplied as a
sequence containing one dict describing each database (as per the configuration schema), but
only the database corresponding to the given database name is restored. Use the configuration
dict to construct the destination path and the given log prefix in any log entries. If this is a
dry run, then don't actually restore anything. Trigger the given active extract process (an
instance of subprocess.Popen) to produce output to consume.
Restore a database from the given extract stream. The database is supplied as a data source
configuration dict, but the given hook configuration is ignored. The given configuration dict is
used to construct the destination path, and the given log prefix is used for any log entries. If
this is a dry run, then don't actually restore anything. Trigger the given active extract
process (an instance of subprocess.Popen) to produce output to consume.
If the extract process is None, then restore the dump from the filesystem rather than from an
extract stream.
'''
dry_run_label = ' (dry run; not actually restoring anything)' if dry_run else ''
try:
database = next(
database_config
for database_config in databases_config
if database_config.get('name') == database_name
)
except StopIteration:
raise ValueError(
f'A database named "{database_name}" could not be found in the configuration'
)
dump_filename = dump.make_database_dump_filename(
make_dump_path(config), database['name'], database.get('hostname')
dump_filename = dump.make_data_source_dump_filename(
make_dump_path(config), data_source['name'], data_source.get('hostname')
)
restore_command = build_restore_command(
extract_process, database, dump_filename, connection_params
extract_process, data_source, dump_filename, connection_params
)
logger.debug(f"{log_prefix}: Restoring MongoDB database {database['name']}{dry_run_label}")
logger.debug(f"{log_prefix}: Restoring MongoDB database {data_source['name']}{dry_run_label}")
if dry_run:
return
@ -168,7 +165,7 @@ def build_restore_command(extract_process, database, dump_filename, connection_p
else:
command.extend(('--dir', dump_filename))
if database['name'] != 'all':
command.extend(('--drop', '--db', database['name']))
command.extend(('--drop',))
if hostname:
command.extend(('--host', hostname))
if port:
@ -181,7 +178,8 @@ def build_restore_command(extract_process, database, dump_filename, connection_p
command.extend(('--authenticationDatabase', database['authentication_database']))
if 'restore_options' in database:
command.extend(database['restore_options'].split(' '))
if database['schemas']:
if database.get('schemas'):
for schema in database['schemas']:
command.extend(('--nsInclude', schema))
return command
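
A sketch (hypothetical database config and dump path) of the mongodump command built above for the default "archive" format: each option is shell-quoted and the dump is redirected into a named pipe via `--archive`.

```python
import shlex

database = {'name': 'messages', 'hostname': 'db.example.org', 'port': 27018}
dump_filename = '/root/.borgmatic/mongodb_databases/db.example.org/messages'

command = (
    ('mongodump',)
    + ('--host', shlex.quote(database['hostname']))
    + ('--port', shlex.quote(str(database['port'])))
    + ('--db', shlex.quote(database['name']))
    + ('--archive', '>', shlex.quote(dump_filename))
)

print(' '.join(command))
```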

View File

@ -1,6 +1,6 @@
from enum import Enum
MONITOR_HOOK_NAMES = ('healthchecks', 'cronitor', 'cronhub', 'pagerduty', 'ntfy')
MONITOR_HOOK_NAMES = ('apprise', 'healthchecks', 'cronitor', 'cronhub', 'pagerduty', 'ntfy', 'loki')
class State(Enum):

View File

@ -1,6 +1,7 @@
import copy
import logging
import os
import shlex
from borgmatic.execute import (
execute_command,
@ -16,7 +17,9 @@ def make_dump_path(config): # pragma: no cover
'''
Make the dump path from the given configuration dict and the name of this hook.
'''
return dump.make_database_dump_path(config.get('borgmatic_source_directory'), 'mysql_databases')
return dump.make_data_source_dump_path(
config.get('borgmatic_source_directory'), 'mysql_databases'
)
SYSTEM_DATABASE_NAMES = ('information_schema', 'mysql', 'performance_schema', 'sys')
@ -33,8 +36,11 @@ def database_names_to_dump(database, extra_environment, log_prefix, dry_run):
if dry_run:
return ()
mysql_show_command = tuple(
shlex.quote(part) for part in shlex.split(database.get('mysql_command') or 'mysql')
)
show_command = (
('mysql',)
mysql_show_command
+ (tuple(database['list_options'].split(' ')) if 'list_options' in database else ())
+ (('--host', database['hostname']) if 'hostname' in database else ())
+ (('--port', str(database['port'])) if 'port' in database else ())
@ -60,24 +66,28 @@ def execute_dump_command(
):
'''
Kick off a dump for the given MySQL/MariaDB database (provided as a configuration dict) to a
named pipe constructed from the given dump path and database names. Use the given log prefix in
named pipe constructed from the given dump path and database name. Use the given log prefix in
any log entries.
Return a subprocess.Popen instance for the dump process ready to spew to a named pipe. But if
this is a dry run, then don't actually dump anything and return None.
'''
database_name = database['name']
dump_filename = dump.make_database_dump_filename(
dump_filename = dump.make_data_source_dump_filename(
dump_path, database['name'], database.get('hostname')
)
if os.path.exists(dump_filename):
logger.warning(
f'{log_prefix}: Skipping duplicate dump of MySQL database "{database_name}" to {dump_filename}'
)
return None
mysql_dump_command = tuple(
shlex.quote(part) for part in shlex.split(database.get('mysql_dump_command') or 'mysqldump')
)
dump_command = (
('mysqldump',)
mysql_dump_command
+ (tuple(database['options'].split(' ')) if 'options' in database else ())
+ (('--add-drop-database',) if database.get('add_drop_database', True) else ())
+ (('--host', database['hostname']) if 'hostname' in database else ())
@ -104,7 +114,15 @@ def execute_dump_command(
)
def dump_databases(databases, config, log_prefix, dry_run):
def use_streaming(databases, config, log_prefix):
'''
Given a sequence of MySQL database configuration dicts, a configuration dict (ignored), and a
log prefix (ignored), return whether streaming will be used during dumps.
'''
return any(databases)
def dump_data_sources(databases, config, log_prefix, dry_run):
'''
Dump the given MySQL/MariaDB databases to a named pipe. The databases are supplied as a sequence
of dicts, one dict describing each database as per the configuration schema. Use the given
@ -162,61 +180,59 @@ def dump_databases(databases, config, log_prefix, dry_run):
return [process for process in processes if process]
def remove_database_dumps(databases, config, log_prefix, dry_run): # pragma: no cover
def remove_data_source_dumps(databases, config, log_prefix, dry_run): # pragma: no cover
'''
Remove all database dump files for this hook regardless of the given databases. Use the given
configuration dict to construct the destination path and the log prefix in any log entries. If
this is a dry run, then don't actually remove anything.
'''
dump.remove_database_dumps(make_dump_path(config), 'MySQL', log_prefix, dry_run)
dump.remove_data_source_dumps(make_dump_path(config), 'MySQL', log_prefix, dry_run)
def make_database_dump_pattern(databases, config, log_prefix, name=None): # pragma: no cover
def make_data_source_dump_pattern(databases, config, log_prefix, name=None): # pragma: no cover
'''
Given a sequence of configuration dicts, a configuration dict, a prefix to log with, and a
database name to match, return the corresponding glob patterns to match the database dump in an
archive.
'''
return dump.make_database_dump_filename(make_dump_path(config), name, hostname='*')
return dump.make_data_source_dump_filename(make_dump_path(config), name, hostname='*')
def restore_database_dump(
databases_config, config, log_prefix, database_name, dry_run, extract_process, connection_params
def restore_data_source_dump(
hook_config, config, log_prefix, data_source, dry_run, extract_process, connection_params
):
'''
Restore the given MySQL/MariaDB database from an extract stream. The databases are supplied as a
sequence containing one dict describing each database (as per the configuration schema), but
only the database corresponding to the given database name is restored. Use the given log
prefix in any log entries. If this is a dry run, then don't actually restore anything. Trigger
the given active extract process (an instance of subprocess.Popen) to produce output to consume.
Restore a database from the given extract stream. The database is supplied as a data source
configuration dict, but the given hook configuration is ignored. The given configuration dict is
used to construct the destination path, and the given log prefix is used for any log entries. If
this is a dry run, then don't actually restore anything. Trigger the given active extract
process (an instance of subprocess.Popen) to produce output to consume.
'''
dry_run_label = ' (dry run; not actually restoring anything)' if dry_run else ''
try:
database = next(
database_config
for database_config in databases_config
if database_config.get('name') == database_name
)
except StopIteration:
raise ValueError(
f'A database named "{database_name}" could not be found in the configuration'
)
hostname = connection_params['hostname'] or database.get(
'restore_hostname', database.get('hostname')
hostname = connection_params['hostname'] or data_source.get(
'restore_hostname', data_source.get('hostname')
)
port = str(connection_params['port'] or database.get('restore_port', database.get('port', '')))
username = connection_params['username'] or database.get(
'restore_username', database.get('username')
port = str(
connection_params['port'] or data_source.get('restore_port', data_source.get('port', ''))
)
password = connection_params['password'] or database.get(
'restore_password', database.get('password')
username = connection_params['username'] or data_source.get(
'restore_username', data_source.get('username')
)
password = connection_params['password'] or data_source.get(
'restore_password', data_source.get('password')
)
mysql_restore_command = tuple(
shlex.quote(part) for part in shlex.split(data_source.get('mysql_command') or 'mysql')
)
restore_command = (
('mysql', '--batch')
+ (tuple(database['restore_options'].split(' ')) if 'restore_options' in database else ())
mysql_restore_command
+ ('--batch',)
+ (
tuple(data_source['restore_options'].split(' '))
if 'restore_options' in data_source
else ()
)
+ (('--host', hostname) if hostname else ())
+ (('--port', str(port)) if port else ())
+ (('--protocol', 'tcp') if hostname or port else ())
@ -224,7 +240,7 @@ def restore_database_dump(
)
extra_environment = {'MYSQL_PWD': password} if password else None
logger.debug(f"{log_prefix}: Restoring MySQL database {database['name']}{dry_run_label}")
logger.debug(f"{log_prefix}: Restoring MySQL database {data_source['name']}{dry_run_label}")
if dry_run:
return
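
A sketch of the new `mysql_dump_command` option: the configured command string is split into parts and each part shell-quoted, so a wrapper command (the docker example here is hypothetical) can stand in for plain `mysqldump`.

```python
import shlex

database = {'name': 'posts', 'mysql_dump_command': 'docker exec mysql_container mysqldump'}

mysql_dump_command = tuple(
    shlex.quote(part)
    for part in shlex.split(database.get('mysql_dump_command') or 'mysqldump')
)

print(mysql_dump_command)
# ('docker', 'exec', 'mysql_container', 'mysqldump')
```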

View File

@ -50,9 +50,16 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_lev
username = hook_config.get('username')
password = hook_config.get('password')
access_token = hook_config.get('access_token')
auth = None
if (username and password) is not None:
if access_token is not None:
if username or password:
logger.warning(
f'{config_filename}: ntfy access_token is set but so is username/password; only using access_token'
)
auth = requests.auth.HTTPBasicAuth('', access_token)
elif (username and password) is not None:
auth = requests.auth.HTTPBasicAuth(username, password)
logger.info(f'{config_filename}: Using basic auth with user {username} for ntfy')
elif username is not None:

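A sketch of the ntfy authentication precedence introduced above: an access token (hypothetical value here) takes priority and is sent as HTTP basic auth with a blank username; otherwise a username/password pair is used.

```python
import requests

username, password, access_token = 'alice', 'hunter2', 'tk_exampletoken'

if access_token is not None:
    # Access token wins, sent as basic auth with an empty username.
    auth = requests.auth.HTTPBasicAuth('', access_token)
elif username and password:
    auth = requests.auth.HTTPBasicAuth(username, password)
else:
    auth = None
```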
View File

@ -18,15 +18,15 @@ def make_dump_path(config): # pragma: no cover
'''
Make the dump path from the given configuration dict and the name of this hook.
'''
return dump.make_database_dump_path(
return dump.make_data_source_dump_path(
config.get('borgmatic_source_directory'), 'postgresql_databases'
)
def make_extra_environment(database, restore_connection_params=None):
'''
Make the extra_environment dict from the given database configuration.
If restore connection params are given, this is for a restore operation.
Make the extra_environment dict from the given database configuration. If restore connection
params are given, this is for a restore operation.
'''
extra = dict()
@ -40,7 +40,8 @@ def make_extra_environment(database, restore_connection_params=None):
except (AttributeError, KeyError):
pass
extra['PGSSLMODE'] = database.get('ssl_mode', 'disable')
if 'ssl_mode' in database:
extra['PGSSLMODE'] = database['ssl_mode']
if 'ssl_cert' in database:
extra['PGSSLCERT'] = database['ssl_cert']
if 'ssl_key' in database:
@ -49,6 +50,7 @@ def make_extra_environment(database, restore_connection_params=None):
extra['PGSSLROOTCERT'] = database['ssl_root_cert']
if 'ssl_crl' in database:
extra['PGSSLCRL'] = database['ssl_crl']
return extra
@ -71,9 +73,11 @@ def database_names_to_dump(database, extra_environment, log_prefix, dry_run):
if dry_run:
return ()
psql_command = shlex.split(database.get('psql_command') or 'psql')
psql_command = tuple(
shlex.quote(part) for part in shlex.split(database.get('psql_command') or 'psql')
)
list_command = (
tuple(psql_command)
psql_command
+ ('--list', '--no-password', '--no-psqlrc', '--csv', '--tuples-only')
+ (('--host', database['hostname']) if 'hostname' in database else ())
+ (('--port', str(database['port'])) if 'port' in database else ())
@ -92,7 +96,15 @@ def database_names_to_dump(database, extra_environment, log_prefix, dry_run):
)
def dump_databases(databases, config, log_prefix, dry_run):
def use_streaming(databases, config, log_prefix):
'''
Given a sequence of PostgreSQL database configuration dicts, a configuration dict (ignored), and
a log prefix (ignored), return whether streaming will be used during dumps.
'''
return any(database.get('format') != 'directory' for database in databases)
def dump_data_sources(databases, config, log_prefix, dry_run):
'''
Dump the given PostgreSQL databases to a named pipe. The databases are supplied as a sequence of
dicts, one dict describing each database as per the configuration schema. Use the given
@ -125,8 +137,11 @@ def dump_databases(databases, config, log_prefix, dry_run):
for database_name in dump_database_names:
dump_format = database.get('format', None if database_name == 'all' else 'custom')
default_dump_command = 'pg_dumpall' if database_name == 'all' else 'pg_dump'
dump_command = database.get('pg_dump_command') or default_dump_command
dump_filename = dump.make_database_dump_filename(
dump_command = tuple(
shlex.quote(part)
for part in shlex.split(database.get('pg_dump_command') or default_dump_command)
)
dump_filename = dump.make_data_source_dump_filename(
dump_path, database_name, database.get('hostname')
)
if os.path.exists(dump_filename):
@ -136,24 +151,32 @@ def dump_databases(databases, config, log_prefix, dry_run):
continue
command = (
(
dump_command,
dump_command
+ (
'--no-password',
'--clean',
'--if-exists',
)
+ (('--host', database['hostname']) if 'hostname' in database else ())
+ (('--port', str(database['port'])) if 'port' in database else ())
+ (('--username', database['username']) if 'username' in database else ())
+ (('--host', shlex.quote(database['hostname'])) if 'hostname' in database else ())
+ (('--port', shlex.quote(str(database['port']))) if 'port' in database else ())
+ (
('--username', shlex.quote(database['username']))
if 'username' in database
else ()
)
+ (('--no-owner',) if database.get('no_owner', False) else ())
+ (('--format', dump_format) if dump_format else ())
+ (('--file', dump_filename) if dump_format == 'directory' else ())
+ (tuple(database['options'].split(' ')) if 'options' in database else ())
+ (() if database_name == 'all' else (database_name,))
+ (('--format', shlex.quote(dump_format)) if dump_format else ())
+ (('--file', shlex.quote(dump_filename)) if dump_format == 'directory' else ())
+ (
tuple(shlex.quote(option) for option in database['options'].split(' '))
if 'options' in database
else ()
)
+ (() if database_name == 'all' else (shlex.quote(database_name),))
# Use shell redirection rather than the --file flag to sidestep synchronization issues
# when pg_dump/pg_dumpall tries to write to a named pipe. But for the directory dump
# format in particular, a named destination is required, and redirection doesn't work.
+ (('>', dump_filename) if dump_format != 'directory' else ())
+ (('>', shlex.quote(dump_filename)) if dump_format != 'directory' else ())
)
logger.debug(
@ -183,34 +206,33 @@ def dump_databases(databases, config, log_prefix, dry_run):
return processes
def remove_database_dumps(databases, config, log_prefix, dry_run): # pragma: no cover
def remove_data_source_dumps(databases, config, log_prefix, dry_run): # pragma: no cover
'''
Remove all database dump files for this hook regardless of the given databases. Use the given
configuration dict to construct the destination path and the log prefix in any log entries. If
this is a dry run, then don't actually remove anything.
'''
dump.remove_database_dumps(make_dump_path(config), 'PostgreSQL', log_prefix, dry_run)
dump.remove_data_source_dumps(make_dump_path(config), 'PostgreSQL', log_prefix, dry_run)
def make_database_dump_pattern(databases, config, log_prefix, name=None): # pragma: no cover
def make_data_source_dump_pattern(databases, config, log_prefix, name=None): # pragma: no cover
'''
Given a sequence of configuration dicts, a configuration dict, a prefix to log with, and a
database name to match, return the corresponding glob patterns to match the database dump in an
archive.
'''
return dump.make_database_dump_filename(make_dump_path(config), name, hostname='*')
return dump.make_data_source_dump_filename(make_dump_path(config), name, hostname='*')
def restore_database_dump(
databases_config, config, log_prefix, database_name, dry_run, extract_process, connection_params
def restore_data_source_dump(
hook_config, config, log_prefix, data_source, dry_run, extract_process, connection_params
):
'''
Restore the given PostgreSQL database from an extract stream. The databases are supplied as a
sequence containing one dict describing each database (as per the configuration schema), but
only the database corresponding to the given database name is restored. Use the given
configuration dict to construct the destination path and the given log prefix in any log
entries. If this is a dry run, then don't actually restore anything. Trigger the given active
extract process (an instance of subprocess.Popen) to produce output to consume.
Restore a database from the given extract stream. The database is supplied as a data source
configuration dict, but the given hook configuration is ignored. The given configuration dict is
used to construct the destination path, and the given log prefix is used for any log entries. If
this is a dry run, then don't actually restore anything. Trigger the given active extract
process (an instance of subprocess.Popen) to produce output to consume.
If the extract process is None, then restore the dump from the filesystem rather than from an
extract stream.
@ -219,66 +241,71 @@ def restore_database_dump(
hostname, port, username, and password.
'''
dry_run_label = ' (dry run; not actually restoring anything)' if dry_run else ''
try:
database = next(
database_config
for database_config in databases_config
if database_config.get('name') == database_name
)
except StopIteration:
raise ValueError(
f'A database named "{database_name}" could not be found in the configuration'
)
hostname = connection_params['hostname'] or database.get(
'restore_hostname', database.get('hostname')
hostname = connection_params['hostname'] or data_source.get(
'restore_hostname', data_source.get('hostname')
)
port = str(connection_params['port'] or database.get('restore_port', database.get('port', '')))
username = connection_params['username'] or database.get(
'restore_username', database.get('username')
port = str(
connection_params['port'] or data_source.get('restore_port', data_source.get('port', ''))
)
username = connection_params['username'] or data_source.get(
'restore_username', data_source.get('username')
)
all_databases = bool(database['name'] == 'all')
dump_filename = dump.make_database_dump_filename(
make_dump_path(config), database['name'], database.get('hostname')
all_databases = bool(data_source['name'] == 'all')
dump_filename = dump.make_data_source_dump_filename(
make_dump_path(config), data_source['name'], data_source.get('hostname')
)
psql_command = tuple(
shlex.quote(part) for part in shlex.split(data_source.get('psql_command') or 'psql')
)
psql_command = shlex.split(database.get('psql_command') or 'psql')
analyze_command = (
tuple(psql_command)
psql_command
+ ('--no-password', '--no-psqlrc', '--quiet')
+ (('--host', hostname) if hostname else ())
+ (('--port', port) if port else ())
+ (('--username', username) if username else ())
+ (('--dbname', database['name']) if not all_databases else ())
+ (tuple(database['analyze_options'].split(' ')) if 'analyze_options' in database else ())
+ (('--dbname', data_source['name']) if not all_databases else ())
+ (
tuple(data_source['analyze_options'].split(' '))
if 'analyze_options' in data_source
else ()
)
+ ('--command', 'ANALYZE')
)
use_psql_command = all_databases or database.get('format') == 'plain'
pg_restore_command = shlex.split(database.get('pg_restore_command') or 'pg_restore')
use_psql_command = all_databases or data_source.get('format') == 'plain'
pg_restore_command = tuple(
shlex.quote(part)
for part in shlex.split(data_source.get('pg_restore_command') or 'pg_restore')
)
restore_command = (
tuple(psql_command if use_psql_command else pg_restore_command)
(psql_command if use_psql_command else pg_restore_command)
+ ('--no-password',)
+ (('--no-psqlrc',) if use_psql_command else ('--if-exists', '--exit-on-error', '--clean'))
+ (('--dbname', database['name']) if not all_databases else ())
+ (('--dbname', data_source['name']) if not all_databases else ())
+ (('--host', hostname) if hostname else ())
+ (('--port', port) if port else ())
+ (('--username', username) if username else ())
+ (('--no-owner',) if database.get('no_owner', False) else ())
+ (tuple(database['restore_options'].split(' ')) if 'restore_options' in database else ())
+ (('--no-owner',) if data_source.get('no_owner', False) else ())
+ (
tuple(data_source['restore_options'].split(' '))
if 'restore_options' in data_source
else ()
)
+ (() if extract_process else (dump_filename,))
+ tuple(
itertools.chain.from_iterable(('--schema', schema) for schema in database['schemas'])
if database.get('schemas')
itertools.chain.from_iterable(('--schema', schema) for schema in data_source['schemas'])
if data_source.get('schemas')
else ()
)
)
extra_environment = make_extra_environment(
database, restore_connection_params=connection_params
data_source, restore_connection_params=connection_params
)
logger.debug(f"{log_prefix}: Restoring PostgreSQL database {database['name']}{dry_run_label}")
logger.debug(
f"{log_prefix}: Restoring PostgreSQL database {data_source['name']}{dry_run_label}"
)
if dry_run:
return
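
A sketch (hypothetical data source) of how the restore command above is chosen: an "all" databases dump or a plain-format dump is restored with psql, while any other format goes through pg_restore.

```python
data_source = {'name': 'users', 'format': 'custom'}

all_databases = bool(data_source['name'] == 'all')
use_psql_command = all_databases or data_source.get('format') == 'plain'

print('psql' if use_psql_command else 'pg_restore')  # pg_restore
```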

View File

@ -1,5 +1,6 @@
import logging
import os
import shlex
from borgmatic.execute import execute_command, execute_command_with_processes
from borgmatic.hooks import dump
@ -11,17 +12,27 @@ def make_dump_path(config): # pragma: no cover
'''
Make the dump path from the given configuration dict and the name of this hook.
'''
return dump.make_database_dump_path(
return dump.make_data_source_dump_path(
config.get('borgmatic_source_directory'), 'sqlite_databases'
)
def dump_databases(databases, config, log_prefix, dry_run):
def use_streaming(databases, config, log_prefix):
'''
Dump the given SQLite3 databases to a file. The databases are supplied as a sequence of
Given a sequence of SQLite database configuration dicts, a configuration dict (ignored), and a
log prefix (ignored), return whether streaming will be used during dumps.
'''
return any(databases)
def dump_data_sources(databases, config, log_prefix, dry_run):
'''
Dump the given SQLite databases to a named pipe. The databases are supplied as a sequence of
configuration dicts, as per the configuration schema. Use the given configuration dict to
construct the destination path and the given log prefix in any log entries. If this is a dry
run, then don't actually dump anything.
construct the destination path and the given log prefix in any log entries.
Return a sequence of subprocess.Popen instances for the dump processes ready to spew to a named
pipe. But if this is a dry run, then don't actually dump anything and return an empty sequence.
'''
dry_run_label = ' (dry run; not actually dumping anything)' if dry_run else ''
processes = []
@ -32,14 +43,15 @@ def dump_databases(databases, config, log_prefix, dry_run):
database_path = database['path']
if database['name'] == 'all':
logger.warning('The "all" database name has no meaning for SQLite3 databases')
logger.warning('The "all" database name has no meaning for SQLite databases')
if not os.path.exists(database_path):
logger.warning(
f'{log_prefix}: No SQLite database at {database_path}; An empty database will be created and dumped'
f'{log_prefix}: No SQLite database at {database_path}; an empty database will be created and dumped'
)
dump_path = make_dump_path(config)
dump_filename = dump.make_database_dump_filename(dump_path, database['name'])
dump_filename = dump.make_data_source_dump_filename(dump_path, database['name'])
if os.path.exists(dump_filename):
logger.warning(
f'{log_prefix}: Skipping duplicate dump of SQLite database at {database_path} to {dump_filename}'
@ -48,10 +60,10 @@ def dump_databases(databases, config, log_prefix, dry_run):
command = (
'sqlite3',
database_path,
shlex.quote(database_path),
'.dump',
'>',
dump_filename,
shlex.quote(dump_filename),
)
logger.debug(
f'{log_prefix}: Dumping SQLite database at {database_path} to {dump_filename}{dry_run_label}'
@ -59,55 +71,43 @@ def dump_databases(databases, config, log_prefix, dry_run):
if dry_run:
continue
dump.create_parent_directory_for_dump(dump_filename)
dump.create_named_pipe_for_dump(dump_filename)
processes.append(execute_command(command, shell=True, run_to_completion=False))
return processes
def remove_database_dumps(databases, config, log_prefix, dry_run): # pragma: no cover
def remove_data_source_dumps(databases, config, log_prefix, dry_run): # pragma: no cover
'''
Remove the given SQLite3 database dumps from the filesystem. The databases are supplied as a
Remove the given SQLite database dumps from the filesystem. The databases are supplied as a
sequence of configuration dicts, as per the configuration schema. Use the given configuration
dict to construct the destination path and the given log prefix in any log entries. If this is a
dry run, then don't actually remove anything.
'''
dump.remove_database_dumps(make_dump_path(config), 'SQLite', log_prefix, dry_run)
dump.remove_data_source_dumps(make_dump_path(config), 'SQLite', log_prefix, dry_run)
def make_database_dump_pattern(databases, config, log_prefix, name=None): # pragma: no cover
def make_data_source_dump_pattern(databases, config, log_prefix, name=None): # pragma: no cover
'''
Make a pattern that matches the given SQLite3 databases. The databases are supplied as a
sequence of configuration dicts, as per the configuration schema.
Make a pattern that matches the given SQLite databases. The databases are supplied as a sequence
of configuration dicts, as per the configuration schema.
'''
return dump.make_database_dump_filename(make_dump_path(config), name)
return dump.make_data_source_dump_filename(make_dump_path(config), name)
def restore_database_dump(
databases_config, config, log_prefix, database_name, dry_run, extract_process, connection_params
def restore_data_source_dump(
hook_config, config, log_prefix, data_source, dry_run, extract_process, connection_params
):
'''
Restore the given SQLite3 database from an extract stream. The databases are supplied as a
sequence containing one dict describing each database (as per the configuration schema), but
only the database corresponding to the given database name is restored. Use the given log prefix
in any log entries. If this is a dry run, then don't actually restore anything. Trigger the
given active extract process (an instance of subprocess.Popen) to produce output to consume.
Restore a database from the given extract stream. The database is supplied as a data source
configuration dict, but the given hook configuration is ignored. The given configuration dict is
used to construct the destination path, and the given log prefix is used for any log entries. If
this is a dry run, then don't actually restore anything. Trigger the given active extract
process (an instance of subprocess.Popen) to produce output to consume.
'''
dry_run_label = ' (dry run; not actually restoring anything)' if dry_run else ''
try:
database = next(
database_config
for database_config in databases_config
if database_config.get('name') == database_name
)
except StopIteration:
raise ValueError(
f'A database named "{database_name}" could not be found in the configuration'
)
database_path = connection_params['restore_path'] or database.get(
'restore_path', database.get('path')
database_path = connection_params['restore_path'] or data_source.get(
'restore_path', data_source.get('path')
)
logger.debug(f'{log_prefix}: Restoring SQLite database at {database_path}{dry_run_label}')
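
A sketch (hypothetical paths) of the dump command built above: sqlite3's `.dump` output is now redirected into a named pipe rather than written to a regular file, and both paths are shell-quoted.

```python
import shlex

database_path = '/var/lib/sqlite3/my db.sqlite'
dump_filename = '/root/.borgmatic/sqlite_databases/localhost/mydb'

command = (
    'sqlite3',
    shlex.quote(database_path),
    '.dump',
    '>',
    shlex.quote(dump_filename),
)

print(' '.join(command))
# sqlite3 '/var/lib/sqlite3/my db.sqlite' .dump > /root/.borgmatic/sqlite_databases/localhost/mydb
```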

View File

@ -41,6 +41,9 @@ def should_do_markup(no_color, configs):
if any(config.get('output', {}).get('color') is False for config in configs.values()):
return False
if os.environ.get('NO_COLOR', None):
return False
py_colors = os.environ.get('PY_COLORS', None)
if py_colors is not None:
@ -159,22 +162,23 @@ def configure_logging(
monitoring_log_level=None,
log_file=None,
log_file_format=None,
color_enabled=True,
):
'''
Configure logging to go to both the console and (syslog or log file). Use the given log levels,
respectively.
respectively. If color is enabled, set up log formatting accordingly.
Raise FileNotFoundError or PermissionError if the log file could not be opened for writing.
'''
add_custom_log_levels()
if syslog_log_level is None:
syslog_log_level = console_log_level
syslog_log_level = logging.DISABLED
if log_file_log_level is None:
log_file_log_level = console_log_level
if monitoring_log_level is None:
monitoring_log_level = console_log_level
add_custom_log_levels()
# Log certain log levels to console stderr and others to stdout. This supports use cases like
# grepping (non-error) output.
console_disabled = logging.NullHandler()
@ -191,11 +195,17 @@ def configure_logging(
logging.DEBUG: console_standard_handler,
}
)
console_handler.setFormatter(Console_color_formatter())
if color_enabled:
console_handler.setFormatter(Console_color_formatter())
console_handler.setLevel(console_log_level)
syslog_path = None
if log_file is None and syslog_log_level != logging.DISABLED:
handlers = [console_handler]
if syslog_log_level != logging.DISABLED:
syslog_path = None
if os.path.exists('/dev/log'):
syslog_path = '/dev/log'
elif os.path.exists('/var/run/syslog'):
@ -203,14 +213,15 @@ def configure_logging(
elif os.path.exists('/var/run/log'):
syslog_path = '/var/run/log'
if syslog_path and not interactive_console():
syslog_handler = logging.handlers.SysLogHandler(address=syslog_path)
syslog_handler.setFormatter(
logging.Formatter('borgmatic: {levelname} {message}', style='{') # noqa: FS003
)
syslog_handler.setLevel(syslog_log_level)
handlers = (console_handler, syslog_handler)
elif log_file and log_file_log_level != logging.DISABLED:
if syslog_path:
syslog_handler = logging.handlers.SysLogHandler(address=syslog_path)
syslog_handler.setFormatter(
logging.Formatter('borgmatic: {levelname} {message}', style='{') # noqa: FS003
)
syslog_handler.setLevel(syslog_log_level)
handlers.append(syslog_handler)
if log_file and log_file_log_level != logging.DISABLED:
file_handler = logging.handlers.WatchedFileHandler(log_file)
file_handler.setFormatter(
logging.Formatter(
@ -218,11 +229,9 @@ def configure_logging(
)
)
file_handler.setLevel(log_file_log_level)
handlers = (console_handler, file_handler)
else:
handlers = (console_handler,)
handlers.append(file_handler)
logging.basicConfig(
level=min(console_log_level, syslog_log_level, log_file_log_level, monitoring_log_level),
level=min(handler.level for handler in handlers),
handlers=handlers,
)
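
A sketch of the new level computation: `basicConfig()` now takes the minimum level of the handlers that were actually created (console plus optional syslog and file handlers) rather than the minimum of all configured levels. The log file path here is hypothetical.

```python
import logging
import logging.handlers

console_handler = logging.StreamHandler()
console_handler.setLevel(logging.WARNING)

file_handler = logging.handlers.WatchedFileHandler('/tmp/borgmatic-example.log')
file_handler.setLevel(logging.DEBUG)

handlers = [console_handler, file_handler]

logging.basicConfig(
    level=min(handler.level for handler in handlers),
    handlers=handlers,
)
```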

View File

@ -23,12 +23,20 @@ def handle_signal(signal_number, frame):
if signal_number == signal.SIGTERM:
logger.critical('Exiting due to TERM signal')
sys.exit(EXIT_CODE_FROM_SIGNAL + signal.SIGTERM)
elif signal_number == signal.SIGINT:
raise KeyboardInterrupt()
def configure_signals():
'''
Configure borgmatic's signal handlers to pass relevant signals through to any child processes
like Borg. Note that SIGINT gets passed through even without these changes.
like Borg.
'''
for signal_number in (signal.SIGHUP, signal.SIGTERM, signal.SIGUSR1, signal.SIGUSR2):
for signal_number in (
signal.SIGHUP,
signal.SIGINT,
signal.SIGTERM,
signal.SIGUSR1,
signal.SIGUSR2,
):
signal.signal(signal_number, handle_signal)

View File

@ -2,13 +2,13 @@
font-size: 1rem; /* Reset */
}
.elv-toc details {
--details-force-closed: (max-width: 63.9375em); /* 1023px */
--details-force-closed: (max-width: 79.9375em); /* 1279px */
}
.elv-toc details > summary {
font-size: 1.375rem; /* 22px /16 */
margin-bottom: .5em;
}
@media (min-width: 64em) { /* 1024px */
@media (min-width: 80em) {
.elv-toc {
position: absolute;
left: 3rem;

View File

@ -121,7 +121,7 @@ main h1:first-child,
main .elv-toc + h1 {
border-bottom: 2px dotted #666;
}
@media (min-width: 64em) { /* 1024px */
@media (min-width: 80em) {
main .elv-toc + h1,
main .elv-toc + h2 {
margin-top: 0;
@ -243,10 +243,10 @@ footer.elv-layout {
.elv-layout-full {
max-width: none;
}
@media (min-width: 64em) { /* 1024px */
@media (min-width: 80em) {
.elv-layout-toc {
padding-left: 15rem;
max-width: 60rem;
max-width: 76rem;
margin-right: 1rem;
position: relative;
}

View File

@ -126,7 +126,7 @@ for more information.
## Hook output
Any output produced by your hooks shows up both at the console and in syslog
(when run in a non-interactive console). For more information, read about <a
(when enabled). For more information, read about <a
href="https://torsion.org/borgmatic/docs/how-to/inspect-your-backups/">inspecting
your backups</a>.

View File

@ -15,7 +15,7 @@ consistent snapshot that is more suited for backups.
Fortunately, borgmatic includes built-in support for creating database dumps
prior to running backups. For example, here is everything you need to dump and
backup a couple of local PostgreSQL databases and a MySQL/MariaDB database.
backup a couple of local PostgreSQL databases and a MySQL database.
```yaml
postgresql_databases:
@ -46,6 +46,16 @@ sqlite_databases:
path: /var/lib/sqlite3/mydb.sqlite
```
<span class="minilink minilink-addedin">New in version 1.8.2</span> If you're
using MariaDB, use the MariaDB database hook instead of `mysql_databases:` as
the MariaDB hook calls native MariaDB commands instead of the deprecated MySQL
ones. For instance:
```yaml
mariadb_databases:
- name: comments
```
As part of each backup, borgmatic streams a database dump for each configured
database directly to Borg, so it's included in the backup without consuming
additional disk space. (The exceptions are the PostgreSQL/MongoDB "directory"
@ -75,16 +85,23 @@ postgresql_databases:
password: trustsome1
format: tar
options: "--role=someone"
mariadb_databases:
- name: photos
hostname: database3.example.org
port: 3307
username: root
password: trustsome1
options: "--skip-comments"
mysql_databases:
- name: posts
hostname: database3.example.org
hostname: database4.example.org
port: 3307
username: root
password: trustsome1
options: "--skip-comments"
mongodb_databases:
- name: messages
hostname: database4.example.org
hostname: database5.example.org
port: 27018
username: dbuser
password: trustsome1
@ -108,6 +125,8 @@ If you want to dump all databases on a host, use `all` for the database name:
```yaml
postgresql_databases:
- name: all
mariadb_databases:
- name: all
mysql_databases:
- name: all
mongodb_databases:
@ -123,15 +142,18 @@ The SQLite hook in particular does not consider "all" a special database name.
these options in the `hooks:` section of your configuration.
<span class="minilink minilink-addedin">New in version 1.7.6</span> With
PostgreSQL and MySQL, you can optionally dump "all" databases to separate
files instead of one combined dump file, allowing more convenient restores of
individual databases. Enable this by specifying your desired database dump
`format`:
PostgreSQL, MariaDB, and MySQL, you can optionally dump "all" databases to
separate files instead of one combined dump file, allowing more convenient
restores of individual databases. Enable this by specifying your desired
database dump `format`:
```yaml
postgresql_databases:
- name: all
format: custom
mariadb_databases:
- name: all
format: sql
mysql_databases:
- name: all
format: sql
@ -184,6 +206,36 @@ hooks:
Alter the ports in these examples to suit your particular database system.
Normally, borgmatic dumps a database by running a database dump command (e.g.
`pg_dump`) on the host or wherever borgmatic is running, and this command
connects to your containerized database via the given `hostname` and `port`.
But if you don't have any database dump commands installed on your host and
you'd rather use the commands inside your database container itself, borgmatic
supports that too. Just configure borgmatic to `exec` into your container to
run the dump command.
For instance, if using Docker and PostgreSQL, something like this might work:
```yaml
hooks:
postgresql_databases:
- name: users
hostname: 127.0.0.1
port: 5433
username: postgres
password: trustsome1
pg_dump_command: docker exec my_pg_container pg_dump
```
... where `my_pg_container` is the name of your database container. In this
example, you'd also need to set the `pg_restore_command` and `psql_command`
options.
Similar command override options are available for (some of) the other
supported database types as well. See the [configuration
reference](https://torsion.org/borgmatic/docs/reference/configuration/) for
details.
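For example, here's a comparable sketch for a containerized MySQL database. The container name `my_mysql_container` is hypothetical, and the exact override option names for your borgmatic version are an assumption—verify them against the configuration reference (you'd likely also set the matching `mysql_command` option so restores run inside the container too):
```yaml
hooks:
  mysql_databases:
    - name: posts
      hostname: 127.0.0.1
      port: 3307
      username: root
      password: trustsome1
      # Assumed option name; check the configuration reference for your version.
      mysql_dump_command: docker exec my_mysql_container mysqldump
```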
### No source directories
@ -222,10 +274,16 @@ to prepare for this situation, it's a good idea to include borgmatic's own
configuration files as part of your regular backups. That way, you can always
bring back any missing configuration files in order to restore a database.
<span class="minilink minilink-addedin">New in version 1.7.15</span> borgmatic
automatically includes configuration files in your backup. See [the
documentation on the `config bootstrap`
action](https://torsion.org/borgmatic/docs/how-to/extract-a-backup/#extract-the-configuration-files-used-to-create-an-archive)
for more information.
## Supported databases
As of now, borgmatic supports PostgreSQL, MySQL/MariaDB, MongoDB, and SQLite
As of now, borgmatic supports PostgreSQL, MariaDB, MySQL, MongoDB, and SQLite
databases directly. But see below about general-purpose preparation and
cleanup hooks as a work-around with other database systems. Also, please [file
a ticket](https://torsion.org/borgmatic/#issues) for additional database
@ -234,6 +292,10 @@ systems that you'd like supported.
## Database restoration
When you want to replace an existing database with its backed-up contents, you
can restore it with borgmatic. Note that the database must already exist;
borgmatic does not currently create a database upon restore.
To restore a database dump from an archive, use the `borgmatic restore`
action. But the first step is to figure out which archive to restore from. A
good way to do that is to use the `rlist` action:
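A minimal sketch of that, using `repo.borg` as a placeholder repository path:
```bash
borgmatic rlist --repository repo.borg
```
Pick the name of the archive you want from that output and pass it to the `restore` action's `--archive` flag.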
@ -282,7 +344,8 @@ problem: the `restore` action figures out which repository to use.
But if you have multiple repositories configured, then you'll need to specify
the repository to use via the `--repository` flag. This can be done either
with the repository's path or its label as configured in your borgmatic configuration file.
with the repository's path or its label as configured in your borgmatic
configuration file.
```bash
borgmatic restore --repository repo.borg --archive host-2023-...
@ -374,19 +437,27 @@ borgmatic's own configuration file. So include your configuration file in
backups to avoid getting caught without a way to restore a database.
3. borgmatic does not currently support backing up or restoring multiple
databases that share the exact same name on different hosts.
4. Because database hooks implicitly enable the `read_special` configuration,
any special files are excluded from backups (named pipes, block devices,
character devices, and sockets) to prevent hanging. Try a command like `find
/your/source/path -type b -or -type c -or -type p -or -type s` to find such
files. Common directories to exclude are `/dev` and `/run`, but that may not
be exhaustive. <span class="minilink minilink-addedin">New in version
1.7.3</span> When database hooks are enabled, borgmatic automatically excludes
special files (and symlinks to special files) that may cause Borg to hang, so
generally you no longer need to manually exclude them. There are potential
edge cases though in which applications on your system create new special files
*after* borgmatic constructs its exclude list, resulting in Borg hangs. If that
occurs, you can resort to the manual excludes described above. And to opt out
of the auto-exclude feature entirely, explicitly set `read_special` to true.
4. Because database hooks implicitly enable the `read_special` option, any
special files are excluded from backups (named pipes, block devices,
character devices, and sockets) to prevent hanging. Try a command like
`find /your/source/path -type b -or -type c -or -type p -or -type s` to
find such files. Common directories to exclude are `/dev` and `/run`, but
that may not be exhaustive. <span class="minilink minilink-addedin">New in
version 1.7.3</span> When database hooks are enabled, borgmatic
automatically excludes special files (and symlinks to special files) that
may cause Borg to hang, so generally you no longer need to manually exclude
them. There are potential edge cases though in which applications on your
system create new special files *after* borgmatic constructs its exclude
list, resulting in Borg hangs. If that occurs, you can resort to the manual
excludes described above. And to opt out of the auto-exclude feature
entirely, explicitly set `read_special` to true.
5. Database hooks also implicitly enable the `one_file_system` option, which
means Borg won't cross filesystem boundaries when looking for files to back
up. This is especially important when running borgmatic in a container, as
container volumes are mounted as separate filesystems. One work-around is to
explicitly add each mounted volume you'd like to back up to
`source_directories` (as sketched below) instead of relying on Borg to include
them implicitly via a parent directory.
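For instance, a sketch of that work-around with two hypothetical container volume mount points:
```yaml
source_directories:
  - /var/lib/postgresql/data  # hypothetical mounted volume
  - /var/www/uploads          # hypothetical mounted volume
```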
### Manual restoration
@ -420,9 +491,9 @@ dumps with any database system.
## Troubleshooting
### PostgreSQL/MySQL authentication errors
### Authentication errors
With PostgreSQL and MySQL/MariaDB, if you're getting authentication errors
With PostgreSQL, MariaDB, and MySQL, if you're getting authentication errors
when borgmatic tries to connect to your database, a natural reaction is to
increase your borgmatic verbosity with `--verbosity 2` and go looking in the
logs. You'll notice though that your database password does not show up in the
@ -436,23 +507,24 @@ authenticated. For instance, with PostgreSQL, check your
[pg_hba.conf](https://www.postgresql.org/docs/current/auth-pg-hba-conf.html)
file for that configuration.
Additionally, MySQL/MariaDB may be picking up some of your credentials from a
defaults file like `~/.my.cnf`. If that's the case, then it's possible
MySQL/MariaDB ends up using, say, a username from borgmatic's configuration
and a password from `~/.my.cnf`. This may result in authentication errors if
this combination of credentials is not what you intend.
Additionally, MariaDB or MySQL may be picking up some of your credentials from
a defaults file like `~/mariadb.cnf` or `~/.my.cnf`. If that's the case, then
it's possible MariaDB or MySQL end up using, say, a username from borgmatic's
configuration and a password from `~/mariadb.cnf` or `~/.my.cnf`. This may
result in authentication errors if this combination of credentials is not what
you intend.
### MySQL table lock errors
### MariaDB or MySQL table lock errors
If you encounter table lock errors during a database dump with MySQL/MariaDB,
you may need to [use a
transaction](https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html#option_mysqldump_single-transaction).
If you encounter table lock errors during a database dump with MariaDB or
MySQL, you may need to [use a
transaction](https://mariadb.com/docs/skysql-dbaas/ref/mdb/cli/mariadb-dump/single-transaction/).
You can add any additional flags to the `options:` in your database
configuration. Here's an example:
configuration. Here's an example with MariaDB:
```yaml
mysql_databases:
mariadb_databases:
- name: posts
options: "--single-transaction --quick"
```

View File

@ -0,0 +1,86 @@
---
title: How to customize warnings and errors
eleventyNavigation:
key: 💥 Customize warnings/errors
parent: How-to guides
order: 12
---
## When things go wrong
After Borg runs, it indicates whether it succeeded via its exit code, a
numeric ID indicating success, warning, or error. borgmatic consumes this exit
code to decide how to respond. Normally, a Borg error results in a borgmatic
error, while a Borg warning or success doesn't.
But if that default behavior isn't sufficient for your needs, you can
customize how borgmatic interprets [Borg's exit
codes](https://borgbackup.readthedocs.io/en/stable/usage/general.html#return-codes).
For instance, to elevate Borg warnings to errors, thereby causing borgmatic to
error on them, use the following borgmatic configuration:
```yaml
borg_exit_codes:
- exit_code: 1
treat_as: error
```
Be aware though that Borg exits with a warning code for a variety of benign
situations such as files changing while they're being read, so this example
may not meet your needs. Keep reading though for more granular exit code
configuration.
Here's an example that squashes Borg errors to warnings:
```yaml
borg_exit_codes:
- exit_code: 2
treat_as: warning
```
Be careful with this example though, because it prevents borgmatic from
erroring when Borg errors, which may not be desirable.
### More granular configuration
<span class="minilink minilink-addedin">New in Borg version 1.4</span> Borg
support for [more granular exit
codes](https://borgbackup.readthedocs.io/en/1.4-maint/usage/general.html#return-codes)
means that you can configure borgmatic to respond to specific Borg conditions.
See the full list of [Borg 1.4 error and warning exit
codes](https://borgbackup.readthedocs.io/en/1.4.0b1/internals/frontends.html#message-ids).
The `rc:` numeric value there tells you the exit code for each.
For instance, this borgmatic configuration elevates all Borg backup file
permission warnings (exit code `105`)—and only those warnings—to errors:
```yaml
borg_exit_codes:
- exit_code: 105
treat_as: error
```
The following configuration does that *and* elevates backup file not found
warnings (exit code `107`) to errors as well:
```yaml
borg_exit_codes:
- exit_code: 105
treat_as: error
- exit_code: 107
treat_as: error
```
If you don't know the exit code for a particular Borg error or warning you're
experiencing, you can usually find it in your borgmatic output when
`--verbosity 2` is enabled. For instance, here's a snippet of that output when
a backup file is not found:
```
/noexist: stat: [Errno 2] No such file or directory: '/noexist'
...
terminating with warning status, rc 107
```
So if you want to configure borgmatic to treat this as an error instead of a
warning, the exit status to use is `107`.

View File

@ -51,6 +51,11 @@ cron job), while only running expensive consistency checks with `check` on a
much less frequent basis (e.g. with `borgmatic check` called from a separate
cron job).
<span class="minilink minilink-addedin">New in version 1.8.5</span> Instead of
(or in addition to) specifying actions on the command-line, you can configure
borgmatic to [skip particular
actions](https://torsion.org/borgmatic/docs/how-to/set-up-backups/#skipping-actions).
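For instance, a quick sketch of skipping an action entirely (here, `compact`) via configuration:
```yaml
skip_actions:
  - compact
```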
### Consistency check configuration
@ -86,8 +91,9 @@ Here are the available checks from fastest to slowest:
* `repository`: Checks the consistency of the repository itself.
* `archives`: Checks all of the archives in the repository.
* `extract`: Performs an extraction dry-run of the most recent archive.
* `extract`: Performs an extraction dry-run of the latest archive.
* `data`: Verifies the data integrity of all archives contents, decrypting and decompressing all data.
* `spot`: Compares file counts and contents between your source files and the latest archive.
Note that the `data` check is a more thorough version of the `archives` check,
so enabling the `data` check implicitly enables the `archives` check as well.
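For example, a sketch that enables just the two fastest checks (prior to version 1.8.0, nest `checks:` under the `consistency:` section):
```yaml
checks:
  - name: repository
  - name: archives
```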
@ -97,6 +103,88 @@ documentation](https://borgbackup.readthedocs.io/en/stable/usage/check.html)
for more information.
### Spot check
The various consistency checks all have trade-offs around speed and
thoroughness, but most of them don't even look at your original source
files—arguably one important way to ensure your backups contain the files
you'll want to restore in the case of catastrophe (or just an accidentally
deleted file). Because if something goes wrong with your source files, most
consistency checks will still pass with flying colors and you won't discover
there's a problem until you go to restore.
<span class="minilink minilink-addedin">New in version 1.8.10</span> <span
class="minilink minilink-addedin">Beta feature</span> That's where the spot
check comes in. This check actually compares your source file counts and data
against those in the latest archive, potentially catching problems like
incorrect excludes, inadvertent deletes, files changed by malware, etc.
However, because an exhaustive comparison of all source files against the
latest archive might be too slow, the spot check supports *sampling* a
percentage of your source files for the comparison, ensuring they fall within
configured tolerances.
Here's how it works. Start by installing the `xxhash` OS package if you don't
already have it, so the spot check can run the `xxh64sum` command and
efficiently hash files for comparison. Then add something like the following
to your borgmatic configuration:
```yaml
checks:
- name: spot
count_tolerance_percentage: 10
data_sample_percentage: 1
data_tolerance_percentage: 0.5
```
The `count_tolerance_percentage` is the percentage delta between the source
directories file count and the latest backup archive file count that is
allowed before the entire consistency check fails. For instance, if the spot
check runs and finds 100 source files on disk and 105 files in the latest
archive, that would be within the configured 10% count tolerance and the check
would succeed. But if there were 100 source files and 200 archive files, the
check would fail. (100 source files and only 50 archive files would also
fail.)
The `data_sample_percentage` is the percentage of total files in the source
directories to randomly sample and compare to their corresponding files in the
latest backup archive. A higher value allows a more accurate check—and a
slower one. The comparison is performed by hashing the selected source files
and counting hashes that don't match the latest archive. For instance, if you
have 1,000 source files and your sample percentage is 1%, then only 10 source
files will be compared against the latest archive. These sampled files are
selected randomly each time, so in effect the spot check is probabilistic.
The `data_tolerance_percentage` is the percentage of total files in the source
directories that can fail a spot check data comparison without failing the
entire consistency check. The value must be lower than or equal to the
`data_sample_percentage`.
All three options are required when using the spot check. And because the
check relies on these configured tolerances, it may not be a
set-it-and-forget-it type of consistency check, at least until you get the
tolerances dialed in so there are minimal false positives or negatives. It is
recommended you run `borgmatic check` several times after configuring the spot
check, tweaking your tolerances as needed. For certain workloads where your
source files experience wild swings of file contents or counts, the spot check
may not be suitable at all.
What if you add, delete, or change a bunch of your source files and you don't
want the spot check to fail the next time it's run? Run `borgmatic create` to
create a new backup, thereby allowing the next spot check to run against an
archive that contains your recent changes.
Because the spot check only looks at the most recent archive, you may not want
to run it immediately after a `create` action (borgmatic's default behavior).
Instead, it may make more sense to run the spot check on a separate schedule
from `create`.
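One way to do that, sketched here, is to run just the spot check from its own, less frequent cron job or systemd timer using the check action's `--only` flag:
```bash
# For example, from a weekly cron job separate from your regular backup runs:
borgmatic check --only spot
```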
As long as the spot check feature is in beta, it may be subject to breaking
changes. But feel free to use it in production if you're okay with that
caveat, and please [provide any
feedback](https://torsion.org/borgmatic/#issues) you have on this feature.
### Check frequency
<span class="minilink minilink-addedin">New in version 1.6.2</span> You can
@ -116,8 +204,17 @@ this option in the `consistency:` section of your configuration.
This tells borgmatic to run the `repository` consistency check at most once
every two weeks for a given repository and the `archives` check at most once a
month. The `frequency` value is a number followed by a unit of time, e.g. "3
days", "1 week", "2 months", etc.
month. The `frequency` value is a number followed by a unit of time, e.g. `3
days`, `1 week`, `2 months`, etc. The set of possible time units is as
follows (singular or plural):
* `second`
* `minute`
* `hour`
* `day`
* `week` (7 days)
* `month` (30 days)
* `year` (365 days)
The `frequency` defaults to `always` for a check configured without a
`frequency`, which means run this check every time checks run. But if you omit
@ -139,6 +236,10 @@ though—or the most frequently configured check will apply.
If you want to temporarily ignore your configured frequencies, you can invoke
`borgmatic check --force` to run checks unconditionally.
<span class="minilink minilink-addedin">New in version 1.8.6</span> `borgmatic
check --force` runs `check` even if it's specified in the `skip_actions`
option.
### Running only checks
@ -162,7 +263,16 @@ location:
If that's still too slow, you can disable consistency checks entirely,
either for a single repository or for all repositories.
Disabling all consistency checks looks like this:
<span class="minilink minilink-addedin">New in version 1.8.5</span> Disabling
all consistency checks looks like this:
```yaml
skip_actions:
- check
```
<span class="minilink minilink-addedin">Prior to version 1.8.5</span> Use this
configuration instead:
```yaml
checks:
@ -170,10 +280,10 @@ checks:
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
this option in the `consistency:` section of your configuration.
`checks:` in the `consistency:` section of your configuration.
<span class="minilink minilink-addedin">Prior to version 1.6.2</span> `checks`
was a plain list of strings without the `name:` part. For instance:
<span class="minilink minilink-addedin">Prior to version 1.6.2</span>
`checks:` was a plain list of strings without the `name:` part. For instance:
```yaml
checks:

View File

@ -3,11 +3,16 @@ title: How to develop on borgmatic
eleventyNavigation:
key: 🏗️ Develop on borgmatic
parent: How-to guides
order: 13
order: 14
---
## Source code
To get set up to develop on borgmatic, first clone it via HTTPS or SSH:
To get set up to develop on borgmatic, first [`install
pipx`](https://torsion.org/borgmatic/docs/how-to/set-up-backups/#installation)
to make managing your borgmatic environment easy without impacting other
Python applications on your system.
Then, clone borgmatic via HTTPS or SSH:
```bash
git clone https://projects.torsion.org/borgmatic-collective/borgmatic.git
@ -19,36 +24,42 @@ Or:
git clone ssh://git@projects.torsion.org:3022/borgmatic-collective/borgmatic.git
```
Then, install borgmatic
"[editable](https://pip.pypa.io/en/stable/cli/pip_install/#editable-installs)"
Finally, install borgmatic
"[editable](https://pip.pypa.io/en/stable/topics/local-project-installs/#editable-installs)"
so that you can run borgmatic actions during development to make sure your
changes work.
changes work:
```bash
cd borgmatic
pip3 install --user --editable .
pipx ensurepath
pipx install --editable .
```
Note that this will typically install the borgmatic commands into
`~/.local/bin`, which may or may not be on your PATH. There are other ways to
install borgmatic editable as well, for instance into the system Python
install (so without `--user`, as root), or even into a
[virtualenv](https://virtualenv.pypa.io/en/stable/). How or where you install
borgmatic is up to you, but generally an editable install makes development
and testing easier.
Or to work on the [Apprise
hook](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#apprise-hook),
change that last line to:
```bash
pipx install --editable .[Apprise]
```
To get oriented with the borgmatic source code, have a look at the [source
code reference](https://torsion.org/borgmatic/docs/reference/source-code/).
## Automated tests
Assuming you've cloned the borgmatic source code as described above, and
you're in the `borgmatic/` working copy, install tox, which is used for
setting up testing environments:
Assuming you've cloned the borgmatic source code as described above and you're
in the `borgmatic/` working copy, install tox, which is used for setting up
testing environments. You can either install a system package of tox (likely
called `tox` or `python-tox`) or you can install tox with pipx:
```bash
pip3 install --user tox
pipx install tox
```
Finally, to actually run tests, run:
Finally, to actually run tests, run tox from inside the borgmatic source
directory:
```bash
tox
@ -89,14 +100,14 @@ with Borg and supported databases for a few representative scenarios. These
tests don't run by default when running `tox`, because they're relatively slow
and depend on containers for runtime dependencies. These tests do run on the
continuous integration (CI) server, and running them on your developer machine
is the closest thing to CI-test parity.
is the closest thing to dev-CI parity.
If you would like to run the full test suite, first install Docker (or Podman;
see below) and [Docker Compose](https://docs.docker.com/compose/install/).
Then run:
```bash
scripts/run-end-to-end-dev-tests
scripts/run-end-to-end-tests
```
This script assumes you have permission to run `docker`. If you don't, then
@ -138,6 +149,9 @@ the following deviations from it:
separate from their contents.
* Within multiline constructs, use standard four-space indentation. Don't align
indentation with an opening delimiter.
* In general, spell out words in variable names instead of shortening them.
So, think `index` instead of `idx`. There are some notable exceptions to
this though (like `config`).
borgmatic code uses the [Black](https://black.readthedocs.io/en/stable/) code
formatter, the [Flake8](http://flake8.pycqa.org/en/latest/) code checker, and
@ -145,12 +159,17 @@ the [isort](https://github.com/timothycrosley/isort) import orderer, so
certain code style requirements will be enforced when running automated tests.
See the Black, Flake8, and isort documentation for more information.
## Continuous integration
Each pull request triggers a continuous integration build which runs the test
suite. You can view these builds on
[build.torsion.org](https://build.torsion.org/borgmatic-collective/borgmatic),
and they're also linked from the commits list on each pull request.
Each commit to
[main](https://projects.torsion.org/borgmatic-collective/borgmatic/branches)
triggers [a continuous integration
build](https://projects.torsion.org/borgmatic-collective/borgmatic/actions)
which runs the test suite and updates
[documentation](https://torsion.org/borgmatic/). These builds are also linked
from the [commits for the main
branch](https://projects.torsion.org/borgmatic-collective/borgmatic/commits/branch/main).
## Documentation development

View File

@ -148,25 +148,51 @@ borgmatic umount --mount-point /mnt
## Extract the configuration files used to create an archive
<span class="minilink minilink-addedin">New in version 1.7.15</span> borgmatic
automatically stores all the configuration files used to create an archive inside the
archive itself. This is useful in cases where you've lost a configuration
file or you want to see what configurations were used to create a particular
archive.
automatically stores all the configuration files used to create an archive
inside the archive itself. They are stored in the archive using their full
paths from the machine being backed up. This is useful in cases where you've
lost a configuration file or you want to see what configurations were used to
create a particular archive.
To extract the configuration files from an archive, use the `config bootstrap` action. For example:
To extract the configuration files from an archive, use the `config bootstrap`
action. For example:
```bash
borgmatic config bootstrap --repository repo.borg --destination /tmp
```
This extracts the configuration file from the latest archive in the repository `repo.borg` to `/tmp/etc/borgmatic/config.yaml`, assuming that the only configuration file used to create this archive was located at `/etc/borgmatic/config.yaml` when the archive was created.
This extracts the configuration file from the latest archive in the repository
`repo.borg` to `/tmp/etc/borgmatic/config.yaml`, assuming that the only
configuration file used to create this archive was located at
`/etc/borgmatic/config.yaml` when the archive was created.
Note that to run the `config bootstrap` action, you don't need to have a borgmatic configuration file. You only need to specify the repository to use via the `--repository` flag; borgmatic will figure out the rest.
Note that to run the `config bootstrap` action, you don't need to have a
borgmatic configuration file. You only need to specify the repository to use
via the `--repository` flag; borgmatic will figure out the rest.
If a destination directory is not specified, the configuration files will be extracted to their original locations, silently **overwriting** any configuration files that may already exist. For example, if a configuration file was located at `/etc/borgmatic/config.yaml` when the archive was created, it will be extracted to `/etc/borgmatic/config.yaml` too.
If a destination directory is not specified, the configuration files will be
extracted to their original locations, silently *overwriting* any configuration
files that may already exist. For example, if a configuration file was located
at `/etc/borgmatic/config.yaml` when the archive was created, it will be
extracted to `/etc/borgmatic/config.yaml` too.
If you want to extract the configuration file from a specific archive, use the `--archive` flag:
If you want to extract the configuration file from a specific archive, use the
`--archive` flag:
```bash
borgmatic config bootstrap --repository repo.borg --archive host-2023-01-02T04:06:07.080910 --destination /tmp
```
See the output of `config bootstrap --help` for additional flags you may need
for bootstrapping.
<span class="minilink minilink-addedin">New in version 1.8.1</span> Set the
`store_config_files` option to `false` to disable the automatic backup of
borgmatic configuration files, for instance if they contain sensitive
information you don't want to store even inside your encrypted backups. If you
do this though, the `config bootstrap` action will no longer work.
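For instance, that opt-out looks like this in your configuration file:
```yaml
store_config_files: false
```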
<span class="minilink minilink-addedin">New in version 1.8.7</span> Included
configuration files are stored in each backup archive. This means that the
`config bootstrap` action not only extracts the top-level configuration files
but also the includes they depend upon.

View File

@ -116,27 +116,30 @@ archive, complete with file sizes.
## Logging
By default, borgmatic logs to a local syslog-compatible daemon if one is
present and borgmatic is running in a non-interactive console. Where those
logs show up depends on your particular system. If you're using systemd, try
running `journalctl -xe`. Otherwise, try viewing `/var/log/syslog` or
similar.
You can customize the log level used for syslog logging with the
`--syslog-verbosity` flag, and this is independent from the console logging
`--verbosity` flag described above. For instance, to get additional
information about the progress of the backup as it proceeds:
By default, borgmatic logs to the console. You can enable simultaneous syslog
logging and customize its log level with the `--syslog-verbosity` flag, which
is independent from the console logging `--verbosity` flag described above.
For instance, to enable syslog logging, run:
```bash
borgmatic --syslog-verbosity 1
```
Or to increase syslog logging to include debug spew:
To increase syslog logging further to include debugging information, run:
```bash
borgmatic --syslog-verbosity 2
```
See above for further details about the verbosity levels.
Where these logs show up depends on your particular system. If you're using
systemd, try running `journalctl -xe`. Otherwise, try viewing
`/var/log/syslog` or similar.
<span class="minilink minilink-addedin">Prior to version 1.8.3</span>borgmatic
logged to syslog by default whenever run at a non-interactive console.
### Rate limiting
If you are using rsyslog or systemd's journal, be aware that by default they
@ -165,7 +168,7 @@ Note that if you use the `--log-file` flag, you are responsible for rotating
the log file so it doesn't grow too large, for example with
[logrotate](https://wiki.archlinux.org/index.php/Logrotate).
You can the `--log-file-verbosity` flag to customize the log file's log level:
You can use the `--log-file-verbosity` flag to customize the log file's log level:
```bash
borgmatic --log-file /path/to/file.log --log-file-verbosity 2
@ -197,5 +200,5 @@ See the [Python logging
documentation](https://docs.python.org/3/library/logging.html#logrecord-attributes)
for additional placeholders.
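For instance, a sketch of a custom format using a few of those placeholders (the exact format string is up to you):
```bash
borgmatic --log-file /path/to/file.log --log-file-format "[{asctime}] {levelname} {message}"
```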
Note that this `--log-file-format` flg only applies to the specified
Note that this `--log-file-format` flag only applies to the specified
`--log-file` and not to syslog or other logging.

View File

@ -139,8 +139,8 @@ Some borgmatic command-line actions also have a `--match-archives` flag that
overrides both the auto-matching behavior and the `match_archives`
configuration option.
<span class="minilink minilink-addedin">Prior to 1.7.11</span> The way to
limit the archives used for the `prune` action was a `prefix` option in the
<span class="minilink minilink-addedin">Prior to version 1.7.11</span> The way
to limit the archives used for the `prune` action was a `prefix` option in the
`retention` section for matching against the start of archive names. And the
option for limiting the archives used for the `check` action was a separate
`prefix` in the `consistency` section. Both of these options are deprecated in
@ -151,7 +151,7 @@ in newer versions of borgmatic.
## Configuration includes
Once you have multiple different configuration files, you might want to share
common configuration options across these files with having to copy and paste
common configuration options across these files without having to copy and paste
them. To achieve this, you can put fragments of common configuration options
into a file and then include or inline that file into one or more borgmatic
configuration files.
@ -301,7 +301,7 @@ options via an include and then overrides one of them locally:
<<: !include /etc/borgmatic/common.yaml
constants:
hostname: myhostname
base_directory: /opt
repositories:
- path: repo.borg
@ -311,13 +311,13 @@ This is what `common.yaml` might look like:
```yaml
constants:
prefix: myprefix
hostname: otherhost
app_name: myapp
base_directory: /var/lib
```
Once this include gets merged in, the resulting configuration would have a
`prefix` value of `myprefix` and an overridden `hostname` value of
`myhostname`.
Once this include gets merged in, the resulting configuration would have an
`app_name` value of `myapp` and an overridden `base_directory` value of
`/opt`.
When there's an option collision between the local file and the merged
include, the local file's option takes precedence.
@ -495,21 +495,29 @@ borgmatic create --override parent_option.option1=value1 --override parent_optio
forget to specify the section that an option is in. That looks like a prefix
on the option name, e.g. `location.repositories`.
Note that each value is parsed as an actual YAML string, so you can even set
list values by using brackets. For instance:
Note that each value is parsed as an actual YAML string, so you can set list
values by using brackets. For instance:
```bash
borgmatic create --override repositories=[test1.borg,test2.borg]
```
Or even a single list element:
Or a single list element:
```bash
borgmatic create --override repositories=[/root/test.borg]
```
If your override value contains special YAML characters like colons, then
you'll need quotes for it to parse correctly:
Or a single list element that is a key/value pair:
```bash
borgmatic create --override repositories="[{path: test.borg, label: test}]"
```
If your override value contains characters like colons or spaces, then you'll
need to use quotes for it to parse correctly.
Another example:
```bash
borgmatic create --override repositories="['user@server:test.borg']"
@ -518,16 +526,12 @@ borgmatic create --override repositories="['user@server:test.borg']"
There is not currently a way to override a single element of a list without
replacing the whole list.
Note that if you override an option of the list type (like
`location.repositories`), you do need to use the `[ ]` list syntax. See the
[configuration
Using the `[ ]` list syntax is required when overriding an option of the list
type (like `location.repositories`). See the [configuration
reference](https://torsion.org/borgmatic/docs/reference/configuration/) for
which options are list types. (YAML list values look like `- this` with an
indentation and a leading dash.)
Be sure to quote your overrides if they contain spaces or other characters
that your shell may interpret.
An alternate to command-line overrides is passing in your values via
[environment
variables](https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/).
@ -540,8 +544,7 @@ tool is borgmatic's support for defining custom constants. This is similar to
the [variable interpolation
feature](https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/#variable-interpolation)
for command hooks, but the constants feature lets you substitute your own
custom values into anywhere in the entire configuration file. (Constants don't
work across includes or separate configuration files though.)
custom values into any option values in the entire configuration file.
Here's an example usage:
@ -564,10 +567,15 @@ forget to specify the section (like `location:` or `storage:`) that any option
is in.
In this example, when borgmatic runs, all instances of `{user}` get replaced
with `foo` and all instances of `{archive-prefix}` get replaced with `bar-`.
(And in this particular example, `{now}` doesn't get replaced with anything,
but gets passed directly to Borg.) After substitution, the logical result
looks something like this:
with `foo` and all instances of `{archive_prefix}` get replaced with `bar`.
And `{now}` doesn't get replaced with anything, but gets passed directly to
Borg, which has its own
[placeholders](https://borgbackup.readthedocs.io/en/stable/usage/help.html#borg-help-placeholders)
using the same syntax as borgmatic constants. So borgmatic options like
`archive_name_format` that get passed directly to Borg can use either Borg
placeholders or borgmatic constants or both!
After substitution, the logical result looks something like this:
```yaml
source_directories:
@ -579,5 +587,24 @@ source_directories:
archive_name_format: 'bar-{now}'
```
Note that if you'd like to interpolate a constant into the beginning of a
value, you'll need to quote it. For instance, this won't work:
```yaml
source_directories:
- {my_home_directory}/.config # This will error!
```
Instead, do this:
```yaml
source_directories:
- "{my_home_directory}/.config"
```
<span class="minilink minilink-addedin">New in version 1.8.5</span> Constants
work across includes, meaning you can define a constant and then include a
separate configuration file that uses that constant.
An alternate to constants is passing in your values via [environment
variables](https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/).

View File

@ -36,25 +36,24 @@ below for how to configure this.
### Third-party monitoring services
borgmatic integrates with monitoring services like
[Healthchecks](https://healthchecks.io/), [Cronitor](https://cronitor.io),
[Cronhub](https://cronhub.io), [PagerDuty](https://www.pagerduty.com/), and
[ntfy](https://ntfy.sh/) and pings these services whenever borgmatic runs.
That way, you'll receive an alert when something goes wrong or (for certain
hooks) the service doesn't hear from borgmatic for a configured interval. See
[Healthchecks
hook](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#healthchecks-hook),
[Cronitor
hook](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#cronitor-hook),
[Cronhub
hook](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#cronhub-hook),
[PagerDuty
hook](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#pagerduty-hook),
and [ntfy hook](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#ntfy-hook)
below for how to configure this.
borgmatic integrates with these monitoring services and libraries, pinging
them as backups happen:
* [Healthchecks](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#healthchecks-hook)
* [Cronitor](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#cronitor-hook)
* [Cronhub](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#cronhub-hook)
* [PagerDuty](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#pagerduty-hook)
* [ntfy](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#ntfy-hook)
* [Grafana Loki](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#loki-hook)
* [Apprise](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#apprise-hook)
The idea is that you'll receive an alert when something goes wrong or when the
service doesn't hear from borgmatic for a configured interval (if supported).
See the documentation links above for configuration information.
While these services and libraries offer different features, you probably only
need to use one of them at most.
While these services offer different features, you probably only need to use
one of them at most.
### Third-party monitoring software
@ -102,7 +101,7 @@ script to handle the alerting:
```yaml
on_error:
- send-text-message.sh "{configuration_filename}" "{repository}"
- send-text-message.sh {configuration_filename} {repository}
```
In this example, when the error occurs, borgmatic interpolates runtime values
@ -125,11 +124,32 @@ actions. borgmatic does not run `on_error` hooks if an error occurs within a
documentation](https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/),
especially the security information.
<span class="minilink minilink-addedin">New in version 1.8.7</span> borgmatic
automatically escapes these interpolated values to prevent shell injection
attacks. One implication of this change is that you shouldn't wrap the
interpolated values in your own quotes, as that will interfere with the
quoting performed by borgmatic and result in your command receiving incorrect
arguments. For instance, this won't work:
```yaml
on_error:
# Don't do this! It won't work, as the {error} value is already quoted.
- send-text-message.sh "Uh oh: {error}"
```
Do this instead:
```yaml
on_error:
- send-text-message.sh {error}
```
## Healthchecks hook
[Healthchecks](https://healthchecks.io/) is a service that provides "instant
alerts when your cron jobs fail silently", and borgmatic has built-in
alerts when your cron jobs fail silently," and borgmatic has built-in
integration with it. Once you create a Healthchecks account and project on
their site, all you need to do is configure borgmatic with the unique "Ping
URL" for your project. Here's an example:
@ -143,22 +163,20 @@ healthchecks:
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
this option in the `hooks:` section of your configuration.
With this hook in place, borgmatic pings your Healthchecks project when a
backup begins, ends, or errors. Specifically, after the <a
href="https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/">`before_backup`
hooks</a> run, borgmatic lets Healthchecks know that it has started if any of
the `create`, `prune`, `compact`, or `check` actions are run.
With this configuration, borgmatic pings your Healthchecks project when a
backup begins, ends, or errors, but only when any of the `create`, `prune`,
`compact`, or `check` actions are run.
Then, if the actions complete successfully, borgmatic notifies Healthchecks of
the success after the `after_backup` hooks run and includes borgmatic logs in
the payload data sent to Healthchecks. This means that borgmatic logs show up
in the Healthchecks UI, although be aware that Healthchecks currently has a
10-kilobyte limit for the logs in each ping.
the success and includes borgmatic logs in the payload data sent to
Healthchecks. This means that borgmatic logs show up in the Healthchecks UI,
although be aware that Healthchecks currently has a 100-kilobyte limit for the
logs in each ping.
If an error occurs during any action or hook, borgmatic notifies Healthchecks
after the `on_error` hooks run, also tacking on logs including the error
itself. But the logs are only included for errors that occur when a `create`,
`prune`, `compact`, or `check` action is run.
If an error occurs during any action or hook, borgmatic notifies Healthchecks,
also tacking on logs including the error itself. But the logs are only
included for errors that occur when a `create`, `prune`, `compact`, or `check`
action is run.
You can customize the verbosity of the logs that are sent to Healthchecks with
borgmatic's `--monitoring-verbosity` flag. The `--list` and `--stats` flags
@ -175,7 +193,7 @@ or it doesn't hear from borgmatic for a certain period of time.
## Cronitor hook
[Cronitor](https://cronitor.io/) provides "Cron monitoring and uptime healthchecks
for websites, services and APIs", and borgmatic has built-in
for websites, services and APIs," and borgmatic has built-in
integration with it. Once you create a Cronitor account and cron job monitor on
their site, all you need to do is configure borgmatic with the unique "Ping
API URL" for your monitor. Here's an example:
@ -189,14 +207,10 @@ cronitor:
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
this option in the `hooks:` section of your configuration.
With this hook in place, borgmatic pings your Cronitor monitor when a backup
begins, ends, or errors. Specifically, after the <a
href="https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/">`before_backup`
hooks</a> run, borgmatic lets Cronitor know that it has started if any of the
`prune`, `compact`, `create`, or `check` actions are run. Then, if the actions
complete successfully, borgmatic notifies Cronitor of the success after the
`after_backup` hooks run. And if an error occurs during any action or hook,
borgmatic notifies Cronitor after the `on_error` hooks run.
With this configuration, borgmatic pings your Cronitor monitor when a backup
begins, ends, or errors, but only when any of the `prune`, `compact`,
`create`, or `check` actions are run. Then, whether the actions complete
successfully or error, borgmatic notifies Cronitor accordingly.
You can configure Cronitor to notify you by a [variety of
mechanisms](https://cronitor.io/docs/cron-job-notifications) when backups fail
@ -206,7 +220,7 @@ or it doesn't hear from borgmatic for a certain period of time.
## Cronhub hook
[Cronhub](https://cronhub.io/) provides "instant alerts when any of your
background jobs fail silently or run longer than expected", and borgmatic has
background jobs fail silently or run longer than expected," and borgmatic has
built-in integration with it. Once you create a Cronhub account and monitor on
their site, all you need to do is configure borgmatic with the unique "Ping
URL" for your monitor. Here's an example:
@ -220,14 +234,10 @@ cronhub:
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
this option in the `hooks:` section of your configuration.
With this hook in place, borgmatic pings your Cronhub monitor when a backup
begins, ends, or errors. Specifically, after the <a
href="https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/">`before_backup`
hooks</a> run, borgmatic lets Cronhub know that it has started if any of the
`prune`, `compact`, `create`, or `check` actions are run. Then, if the actions
complete successfully, borgmatic notifies Cronhub of the success after the
`after_backup` hooks run. And if an error occurs during any action or hook,
borgmatic notifies Cronhub after the `on_error` hooks run.
With this configuration, borgmatic pings your Cronhub monitor when a backup
begins, ends, or errors, but only when any of the `prune`, `compact`,
`create`, or `check` actions are run. Then, whether the actions complete
successfully or error, borgmatic notifies Cronhub accordingly.
Note that even though you configure borgmatic with the "start" variant of the
ping URL, borgmatic substitutes the correct state into the URL when pinging
@ -265,11 +275,10 @@ pagerduty:
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
this option in the `hooks:` section of your configuration.
With this hook in place, borgmatic creates a PagerDuty event for your service
whenever backups fail. Specifically, if an error occurs during a `create`,
`prune`, `compact`, or `check` action, borgmatic sends an event to PagerDuty
before the `on_error` hooks run. Note that borgmatic does not contact
PagerDuty when a backup starts or ends without error.
With this configuration, borgmatic creates a PagerDuty event for your service
whenever backups fail, but only when any of the `create`, `prune`, `compact`,
or `check` actions are run. Note that borgmatic does not contact PagerDuty
when a backup starts or when it ends without error.
You can configure PagerDuty to notify you by a [variety of
mechanisms](https://support.pagerduty.com/docs/notifications) when backups
@ -281,28 +290,30 @@ us](https://torsion.org/borgmatic/#support-and-contributing).
## ntfy hook
[ntfy](https://ntfy.sh) is a free, simple, service (either hosted or self-hosted)
which offers simple pub/sub push notifications to multiple platforms including
[web](https://ntfy.sh/stats), [Android](https://play.google.com/store/apps/details?id=io.heckel.ntfy)
and [iOS](https://apps.apple.com/us/app/ntfy/id1625396347).
<span class="minilink minilink-addedin">New in version 1.6.3</span>
[ntfy](https://ntfy.sh) is a free, simple service (either hosted or
self-hosted) which offers simple pub/sub push notifications to multiple
platforms including [web](https://ntfy.sh/stats),
[Android](https://play.google.com/store/apps/details?id=io.heckel.ntfy) and
[iOS](https://apps.apple.com/us/app/ntfy/id1625396347).
Since push notifications for regular events might soon become quite annoying,
this hook only fires on any errors by default in order to instantly alert you to issues.
The `states` list can override this.
this hook only fires on any errors by default in order to instantly alert you
to issues. The `states` list can override this. Each state can have its own
custom messages, priorities and tags or, if none are provided, will use the
default.
As ntfy is unauthenticated, it isn't a suitable channel for any private information
so the default messages are intentionally generic. These can be overridden, depending
on your risk assessment. Each `state` can have its own custom messages, priorities and tags
or, if none are provided, will use the default.
An example configuration is shown here, with all the available options, including
[priorities](https://ntfy.sh/docs/publish/#message-priority) and
An example configuration is shown here with all the available options,
including [priorities](https://ntfy.sh/docs/publish/#message-priority) and
[tags](https://ntfy.sh/docs/publish/#tags-emojis):
```yaml
ntfy:
topic: my-unique-topic
server: https://ntfy.my-domain.com
username: myuser
password: secret
start:
title: A borgmatic backup started
message: Watch this space...
@ -327,6 +338,172 @@ ntfy:
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
the `ntfy:` option in the `hooks:` section of your configuration.
<span class="minilink minilink-addedin">New in version 1.8.9</span> Instead of
`username`/`password`, you can specify an [ntfy access
token](https://docs.ntfy.sh/config/#access-tokens):
```yaml
ntfy:
topic: my-unique-topic
server: https://ntfy.my-domain.com
access_token: tk_AgQdq7mVBoFD37zQVN29RhuMzNIz2
```
## Loki hook
<span class="minilink minilink-addedin">New in version 1.8.3</span> [Grafana
Loki](https://grafana.com/oss/loki/) is a "horizontally scalable, highly
available, multi-tenant log aggregation system inspired by Prometheus."
borgmatic has built-in integration with Loki, sending both backup status and
borgmatic logs.
You can configure borgmatic to use either a [self-hosted Loki
instance](https://grafana.com/docs/loki/latest/installation/) or [a Grafana
Cloud account](https://grafana.com/auth/sign-up/create-user). Start by setting
your Loki API push URL. Here's an example:
```yaml
loki:
url: http://localhost:3100/loki/api/v1/push
```
With this configuration, borgmatic sends its logs to your Loki instance as any
of the `prune`, `compact`, `create`, or `check` actions are run. Then, after
the actions complete, borgmatic notifies Loki of success or failure.
This hook supports sending arbitrary labels to Loki. For instance:
```yaml
loki:
url: http://localhost:3100/loki/api/v1/push
labels:
app: borgmatic
hostname: example.org
```
There are also a few placeholders you can optionally use as label values:
* `__config`: name of the borgmatic configuration file
* `__config_path`: full path of the borgmatic configuration file
* `__hostname`: the local machine hostname
These placeholders are only substituted for the whole label value, not
interpolated into a larger string. For instance:
```yaml
loki:
url: http://localhost:3100/loki/api/v1/push
labels:
app: borgmatic
config: __config
hostname: __hostname
```
Also check out this [Loki dashboard for
borgmatic](https://grafana.com/grafana/dashboards/20736-borgmatic-logs/) if
you'd like to see your backup logs and statistics in one place.
## Apprise hook
<span class="minilink minilink-addedin">New in version 1.8.4</span>
[Apprise](https://github.com/caronc/apprise/wiki) is a local notification library
that "allows you to send a notification to almost all of the most popular
[notification services](https://github.com/caronc/apprise/wiki) available to
us today such as: Telegram, Discord, Slack, Amazon SNS, Gotify, etc."
Depending on how you installed borgmatic, it may not have come with Apprise.
For instance, if you originally [installed borgmatic with
pipx](https://torsion.org/borgmatic/docs/how-to/set-up-backups/#installation),
run the following to install Apprise so borgmatic can use it:
```bash
sudo pipx install --force borgmatic[Apprise]
```
Omit `sudo` if borgmatic is installed as a non-root user.
Once Apprise is installed, configure borgmatic to notify one or more [Apprise
services](https://github.com/caronc/apprise/wiki). For example:
```yaml
apprise:
services:
- url: gotify://hostname/token
label: gotify
- url: mastodons://access_key@hostname/@user
label: mastodon
states:
- start
- finish
- fail
```
With this configuration, borgmatic pings each of the configured Apprise
services when a backup begins, ends, or errors, but only when any of the
`prune`, `compact`, `create`, or `check` actions are run. (By default, if
`states` is not specified, Apprise services are only pinged on error.)
You can optionally customize the contents of the default messages sent to
these services:
```yaml
apprise:
services:
- url: gotify://hostname/token
label: gotify
start:
title: Ping!
body: Starting backup process.
finish:
title: Ping!
body: Backups successfully made.
fail:
title: Ping!
body: Your backups have failed.
states:
- start
- finish
- fail
```
<span class="minilink minilink-addedin">New in version 1.8.9</span> borgmatic
logs are automatically included in the body data sent to your Apprise services
when a backup finishes or fails.
You can customize the verbosity of the logs that are sent with borgmatic's
`--monitoring-verbosity` flag. The `--list` and `--stats` flags may also be of
use. See `borgmatic create --help` for more information.
If you don't want any logs sent, you can disable this feature by setting
`send_logs` to `false`:
```yaml
apprise:
    services:
        - url: gotify://hostname/token
          label: gotify
    send_logs: false
```
Or to limit the size of logs sent to Apprise services:
```yaml
apprise:
    services:
        - url: gotify://hostname/token
          label: gotify
    logs_size_limit: 500
```
This may be necessary for some services that reject large requests.
See the [configuration
reference](https://torsion.org/borgmatic/docs/reference/configuration/) for
details.
## Scripting borgmatic

View File

@ -5,13 +5,31 @@ eleventyNavigation:
parent: How-to guides
order: 2
---
## Environment variable interpolation
## Providing passwords and secrets to borgmatic
If you want to use a Borg repository passphrase or database passwords with
borgmatic, you can set them directly in your borgmatic configuration file,
treating those secrets like any other option value. But if you'd rather store
them outside of borgmatic, whether for convenience or security reasons, read
on.
treating those secrets like any other option value. For instance, you can
specify your Borg passphrase with:
```yaml
encryption_passphrase: yourpassphrase
```
But if you'd rather store them outside of borgmatic, whether for convenience
or security reasons, read on.
### Delegating to another application
borgmatic supports calling another application such as a password manager to
obtain the Borg passphrase to a repository.
For example, to ask the *Pass* password manager to provide the passphrase:
```yaml
encryption_passcommand: pass path/to/borg-repokey
```
### Environment variable interpolation
<span class="minilink minilink-addedin">New in version 1.6.4</span> borgmatic
supports interpolating arbitrary environment variables directly into option
@ -20,14 +38,14 @@ pull your repository passphrase, your database passwords, or any other option
values from environment variables. For instance:
```yaml
encryption_passphrase: ${MY_PASSPHRASE}
encryption_passphrase: ${YOUR_PASSPHRASE}
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
this option in the `storage:` section of your configuration.
This uses the `MY_PASSPHRASE` environment variable as your encryption
passphrase. Note that the `{` `}` brackets are required. `$MY_PASSPHRASE` by
This uses the `YOUR_PASSPHRASE` environment variable as your encryption
passphrase. Note that the `{` `}` brackets are required. `$YOUR_PASSPHRASE` by
itself will not work.
In the case of `encryption_passphrase` in particular, an alternate approach
@ -42,30 +60,31 @@ the same approach applies. For example:
```yaml
postgresql_databases:
- name: users
password: ${MY_DATABASE_PASSWORD}
password: ${YOUR_DATABASE_PASSWORD}
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
this option in the `hooks:` section of your configuration.
This uses the `MY_DATABASE_PASSWORD` environment variable as your database
This uses the `YOUR_DATABASE_PASSWORD` environment variable as your database
password.
### Interpolation defaults
#### Interpolation defaults
If you'd like to set a default for your environment variables, you can do so with the following syntax:
If you'd like to set a default for your environment variables, you can do so
with the following syntax:
```yaml
encryption_passphrase: ${MY_PASSPHRASE:-defaultpass}
encryption_passphrase: ${YOUR_PASSPHRASE:-defaultpass}
```
Here, "`defaultpass`" is the default passphrase if the `MY_PASSPHRASE`
Here, "`defaultpass`" is the default passphrase if the `YOUR_PASSPHRASE`
environment variable is not set. Without a default, if the environment
variable doesn't exist, borgmatic will error.
### Disabling interpolation
#### Disabling interpolation
To disable this environment variable interpolation feature entirely, you can
pass the `--no-environment-interpolation` flag on the command-line.
@ -78,7 +97,7 @@ can escape it with a backslash. For instance, if your password is literally
encryption_passphrase: \${A}@!
```
### Related features
## Related features
Another way to override particular options within a borgmatic configuration
file is to use a [configuration
@ -90,3 +109,9 @@ Additionally, borgmatic action hooks support their own [variable
interpolation](https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/#variable-interpolation),
although in that case it's for particular borgmatic runtime values rather than
(only) environment variables.
Lastly, if you do want to specify your passphrase directly within borgmatic
configuration, but you'd like to keep it in a separate file from your main
configuration, you can [use a configuration include or a merge
include](https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/#configuration-includes)
to pull in an external password.
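As a rough sketch of that approach, assuming a hypothetical
`/etc/borgmatic/passphrase.yaml` file that contains nothing but the passphrase
value, your main configuration could reference it like this:
```yaml
# Hypothetical path; the included file holds just the passphrase string.
encryption_passphrase: !include /etc/borgmatic/passphrase.yaml
```
Keep the permissions on that included file at least as restrictive as on your
main configuration so the secret isn't exposed any more broadly.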

View File

@ -7,74 +7,70 @@ eleventyNavigation:
---
## Installation
Many users need to back up system files that require privileged access, so
these instructions install and run borgmatic as root. If you don't need to
back up such files, then you are welcome to install and run borgmatic as a
non-root user.
### Prerequisites
First, manually [install
First, [install
Borg](https://borgbackup.readthedocs.io/en/stable/installation.html), at least
version 1.1. borgmatic does not install Borg automatically so as to avoid
conflicts with existing Borg installations.
Then, download and install borgmatic as a [user site
installation](https://packaging.python.org/tutorials/installing-packages/#installing-to-the-user-site)
by running the following command:
Then, [install pipx](https://pypa.github.io/pipx/installation/) as the root
user (with `sudo`) to make installing borgmatic easy without impacting other
Python applications on your system. If you have trouble installing pipx with
pip, then you can install a system package instead. E.g. on Ubuntu or Debian,
run:
```bash
sudo pip3 install --user --upgrade borgmatic
sudo apt update
sudo apt install pipx
```
This installs borgmatic and its commands at the `/root/.local/bin` path.
### Root install
Your pip binary may have a different name than "pip3". Make sure you're using
Python 3.7+, as borgmatic does not support older versions of Python.
The next step is to ensure that borgmatic's commands are available on your
system `PATH`, so that you can run borgmatic:
If you want to run borgmatic on a schedule with privileged access to your
files, then you should install borgmatic as the root user by running the
following commands:
```bash
echo export 'PATH="$PATH:/root/.local/bin"' >> ~/.bashrc
source ~/.bashrc
sudo pipx ensurepath
sudo pipx install borgmatic
```
This adds `/root/.local/bin` to your non-root user's system `PATH`.
If you're using a command shell other than Bash, you may need to use different
commands here.
You can check whether all of this worked with:
Check whether this worked with:
```bash
sudo borgmatic --version
sudo su -
borgmatic --version
```
If borgmatic is properly installed, that should output your borgmatic version.
As an alternative to adding the path to `~/.bashrc` file, if you're using sudo
to run borgmatic, you can configure [sudo's
`secure_path` option](https://man.archlinux.org/man/sudoers.5) to include
borgmatic's path.
And if you'd also like `sudo borgmatic` to work, keep reading!
### Global install option
### Non-root install
If you try the user site installation above and have problems making borgmatic
commands runnable on your system `PATH`, an alternate approach is to install
borgmatic globally.
The following uninstalls borgmatic and then reinstalls it such that borgmatic
commands are on the default system `PATH`:
If you only want to run borgmatic as a non-root user (without privileged file
access) *or* you want to make `sudo borgmatic` work so borgmatic runs as root,
then install borgmatic as a non-root user by running the following commands as
that user:
```bash
sudo pip3 uninstall borgmatic
sudo pip3 install --upgrade borgmatic
pipx ensurepath
pipx install borgmatic
```
The main downside of a global install is that borgmatic is less cleanly
separated from the rest of your Python software, and there's the theoretical
possibility of library conflicts. But if you're okay with that, for instance
on a relatively dedicated system, then a global install can work out fine.
This should work even if you've also installed borgmatic as the root user.
Check whether this worked with:
```bash
borgmatic --version
```
If borgmatic is properly installed, that should output your borgmatic version.
You can also try `sudo borgmatic --version` if you intend to run borgmatic
with `sudo`. If that doesn't work, you may need to update your [sudoers
`secure_path` option](https://wiki.archlinux.org/title/Sudo).
### Other ways to install
@ -286,6 +282,21 @@ due to things like file damage. For instance:
sudo borgmatic --verbosity 1 --list --stats
```
### Skipping actions
<span class="minilink minilink-addedin">New in version 1.8.5</span> You can
configure borgmatic to skip running certain actions (default or otherwise).
For instance, to always skip the `compact` action when using [Borg's
append-only
mode](https://borgbackup.readthedocs.io/en/stable/usage/notes.html#append-only-mode-forbid-compaction),
set the `skip_actions` option:
```yaml
skip_actions:
    - compact
```
## Autopilot
Running backups manually is good for validating your configuration, but I'm
@ -395,8 +406,9 @@ source /usr/share/fish/vendor_completions.d/borgmatic.fish
borgmatic produces colored terminal output by default. It is disabled when a
non-interactive terminal is detected (like a cron job), or when you use the
`--json` flag. Otherwise, you can disable it by passing the `--no-color` flag,
setting the environment variable `PY_COLORS=False`, or setting the `color`
option to `false` in the `output` section of configuration.
setting the environment variables `PY_COLORS=False` or `NO_COLOR=True`, or
setting the `color` option to `false` in the `output` section of
configuration.
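For instance, here's a minimal sketch of disabling color via configuration,
assuming the sectioned `output` layout described above:
```yaml
output:
    # Disable colored terminal output.
    color: false
```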
## Troubleshooting

View File

@ -3,30 +3,42 @@ title: How to upgrade borgmatic and Borg
eleventyNavigation:
key: 📦 Upgrade borgmatic/Borg
parent: How-to guides
order: 12
order: 13
---
## Upgrading borgmatic
In general, all you should need to do to upgrade borgmatic is run the
following:
In general, all you should need to do to upgrade borgmatic if you've
[installed it with
pipx](https://torsion.org/borgmatic/docs/how-to/set-up-backups/#installation)
is to run the following:
```bash
sudo pip3 install --user --upgrade borgmatic
sudo pipx upgrade borgmatic
```
See below about special cases with old versions of borgmatic. Additionally, if
you installed borgmatic [without using `pip3 install
--user`](https://torsion.org/borgmatic/docs/how-to/set-up-backups/#other-ways-to-install),
then your upgrade process may be different.
Omit `sudo` if you installed borgmatic as a non-root user. And if you
installed borgmatic *both* as root and as a non-root user, you'll need to
upgrade each installation independently.
If you originally installed borgmatic with `sudo pip3 install --user`, you can
uninstall it first with `sudo pip3 uninstall borgmatic` and then [install it
again with
pipx](https://torsion.org/borgmatic/docs/how-to/set-up-backups/#installation),
which should better isolate borgmatic from your other Python applications.
But if you [installed borgmatic without pipx or
pip3](https://torsion.org/borgmatic/docs/how-to/set-up-backups/#other-ways-to-install),
then your upgrade method may be different.
### Upgrading your configuration
The borgmatic configuration file format is almost always backwards-compatible
from release to release without any changes, but you may still want to update
your configuration file when you upgrade to take advantage of new
configuration options. This is completely optional. If you prefer, you can add
new configuration options manually.
The borgmatic configuration file format is usually backwards-compatible from
release to release without any changes, but you may still want to update your
configuration file when you upgrade to take advantage of new configuration
options or avoid old configuration from eventually becoming unsupported. If
you prefer, you can add new configuration options manually.
If you do want to upgrade your configuration file to include new options, use
the `borgmatic config generate` action with its optional `--source` flag that
@ -64,45 +76,10 @@ and, if desired, replace your original configuration file with it.
borgmatic changed its configuration file format in version 1.1.0 from
INI-style to YAML. This better supports validation and has a more natural way
to express lists of values. To upgrade your existing configuration, first
upgrade to the last version of borgmatic to support converting configuration:
borgmatic 1.7.14.
As of version 1.1.0, borgmatic no longer supports Python 2. If you were
already running borgmatic with Python 3, then you can upgrade borgmatic
in-place:
```bash
sudo pip3 install --user --upgrade borgmatic==1.7.14
```
But if you were running borgmatic with Python 2, uninstall and reinstall instead:
```bash
sudo pip uninstall borgmatic
sudo pip3 install --user borgmatic==1.7.14
```
The pip binary names for different versions of Python can differ, so the above
commands may need some tweaking to work on your machine.
Once borgmatic is upgraded, run:
```bash
sudo upgrade-borgmatic-config
```
That will generate a new YAML configuration file at /etc/borgmatic/config.yaml
(by default) using the values from both your existing configuration and
excludes files. The new version of borgmatic will consume the YAML
configuration file instead of the old one.
Now you can upgrade to a newer version of borgmatic:
```bash
sudo pip3 install --user borgmatic
```
to express lists of values. Modern versions of borgmatic no longer include
support for upgrading configuration files this old, but feel free to [file a
ticket](https://torsion.org/borgmatic/#issues) for help with upgrading any old
INI-style configuration files you may have.
## Upgrading Borg

View File

@ -21,5 +21,3 @@ version](https://torsion.org/borgmatic/docs/how-to/set-up-backups/#configuration
```yaml
{% include borgmatic/config.yaml %}
```
Note that you can also [download this configuration

View File

@ -0,0 +1,29 @@
---
title: Source code reference
eleventyNavigation:
key: 🐍 Source code reference
parent: Reference guides
order: 3
---
## Getting oriented
In case you're interested in [developing on
borgmatic](https://torsion.org/borgmatic/docs/how-to/develop-on-borgmatic/),
here's an abridged primer on how its Python source code is organized to help
you get started. At the top level we have:
* [borgmatic](https://projects.torsion.org/borgmatic-collective/borgmatic/src/branch/main/borgmatic): The main borgmatic source module. Most of the code is here.
* [docs](https://projects.torsion.org/borgmatic-collective/borgmatic/src/branch/main/docs): How-to and reference documentation, including the document you're reading now.
* [sample](https://projects.torsion.org/borgmatic-collective/borgmatic/src/branch/main/sample): Example configurations for cron and systemd.
* [scripts](https://projects.torsion.org/borgmatic-collective/borgmatic/src/branch/main/scripts): Dev-facing scripts for things like building documentation and running end-to-end tests.
* [tests](https://projects.torsion.org/borgmatic-collective/borgmatic/src/branch/main/tests): Automated tests organized by: end-to-end, integration, and unit.
Within the `borgmatic` directory you'll find:
* [actions](https://projects.torsion.org/borgmatic-collective/borgmatic/src/branch/main/borgmatic/actions): Mid-level code for running each borgmatic action (create, list, check, etc.).
* [borg](https://projects.torsion.org/borgmatic-collective/borgmatic/src/branch/main/borgmatic/borg): Lower-level code that actually shells out to Borg for each action.
* [commands](https://projects.torsion.org/borgmatic-collective/borgmatic/src/branch/main/borgmatic/commands): Looking to add a new flag or action? Start here. This contains borgmatic's entry point, argument parsing, and shell completion.
* [config](https://projects.torsion.org/borgmatic-collective/borgmatic/src/branch/main/borgmatic/config): Code responsible for loading, normalizing, and validating borgmatic's configuration.
* [hooks](https://projects.torsion.org/borgmatic-collective/borgmatic/src/branch/main/borgmatic/hooks): Looking to add a new database or monitoring integration? Start here.
So, broadly speaking, the control flow goes: `commands` → `config` followed by `commands` → `actions` → `borg` and `hooks`.

BIN
docs/static/apprise.png vendored Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 157 KiB

BIN
docs/static/loki.png vendored Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 15 KiB

View File

@ -32,16 +32,16 @@ RestrictSUIDSGID=yes
SystemCallArchitectures=native
SystemCallFilter=@system-service
SystemCallErrorNumber=EPERM
# To restrict write access further, change "ProtectSystem" to "strict" and uncomment
# "ReadWritePaths", "ReadOnlyPaths", "ProtectHome", and "BindPaths". Then add any local repository
# paths to the list of "ReadWritePaths" and local backup source paths to "ReadOnlyPaths". This
# leaves most of the filesystem read-only to borgmatic.
# To restrict write access further, change "ProtectSystem" to "strict" and
# uncomment "ReadWritePaths", "TemporaryFileSystem", "BindPaths" and
# "BindReadOnlyPaths". Then add any local repository paths to the list of
# "ReadWritePaths". This leaves most of the filesystem read-only to borgmatic.
ProtectSystem=full
# ReadWritePaths=-/mnt/my_backup_drive
# ReadOnlyPaths=-/var/lib/my_backup_source
# This will mount a tmpfs on top of /root and pass through needed paths
# ProtectHome=tmpfs
# TemporaryFileSystem=/root:ro
# BindPaths=-/root/.cache/borg -/root/.config/borg -/root/.borgmatic
# BindReadOnlyPaths=-/root/.ssh
# May interfere with running external programs within borgmatic hooks.
CapabilityBoundingSet=CAP_DAC_READ_SEARCH CAP_NET_RAW

View File

@ -16,5 +16,7 @@ if [ -e "$USER_PODMAN_SOCKET_PATH" ]; then
export DOCKER_HOST="unix://$USER_PODMAN_SOCKET_PATH"
fi
docker-compose --file tests/end-to-end/docker-compose.yaml up --force-recreate \
--renew-anon-volumes --abort-on-container-exit
docker-compose --file tests/end-to-end/docker-compose.yaml --progress quiet up --force-recreate \
--renew-anon-volumes --detach
docker-compose --file tests/end-to-end/docker-compose.yaml --progress quiet attach tests
docker-compose --file tests/end-to-end/docker-compose.yaml --progress quiet down

View File

@ -3,7 +3,7 @@
# This script installs test dependencies and runs all tests, including end-to-end tests. It
# is designed to run inside a test container, and presumes that other test infrastructure like
# databases are already running. Therefore, on a developer machine, you should not run this script
# directly. Instead, run scripts/run-end-to-end-dev-tests
# directly. Instead, run scripts/run-end-to-end-tests
#
# For more information, see:
# https://torsion.org/borgmatic/docs/how-to/develop-on-borgmatic/
@ -18,15 +18,12 @@ if [ -z "$TEST_CONTAINER" ]; then
fi
apk add --no-cache python3 py3-pip borgbackup postgresql-client mariadb-client mongodb-tools \
py3-ruamel.yaml py3-ruamel.yaml.clib bash sqlite fish
py3-ruamel.yaml py3-ruamel.yaml.clib py3-yaml bash sqlite fish
# If certain dependencies of black are available in this version of Alpine, install them.
apk add --no-cache py3-typed-ast py3-regex || true
python3 -m pip install --no-cache --upgrade pip==22.2.2 setuptools==64.0.1
pip3 install --ignore-installed tox==3.25.1
pip3 install --ignore-installed tox==4.11.3
export COVERAGE_FILE=/tmp/.coverage
if [ "$1" != "--end-to-end-only" ]; then
tox --workdir /tmp/.tox --sitepackages
fi
tox --workdir /tmp/.tox --sitepackages
tox --workdir /tmp/.tox --sitepackages -e end-to-end

View File

@ -1,6 +1,6 @@
from setuptools import find_packages, setup
VERSION = '1.8.1'
VERSION = '1.8.11.dev0'
setup(
@ -33,9 +33,10 @@ setup(
'jsonschema',
'packaging',
'requests',
'ruamel.yaml>0.15.0,<0.18.0',
'ruamel.yaml>0.15.0',
'setuptools',
),
extras_require={"Apprise": ["apprise"]},
include_package_data=True,
python_requires='>=3.7',
python_requires='>=3.8',
)

View File

@ -1,8 +1,10 @@
appdirs==1.4.4; python_version >= '3.8'
attrs==22.2.0; python_version >= '3.8'
black==23.3.0; python_version >= '3.8'
appdirs==1.4.4
apprise==1.3.0
attrs==22.2.0
black==24.3.0
certifi==2023.7.22
chardet==5.1.0
click==8.1.3; python_version >= '3.8'
click==8.1.3
codespell==2.2.4
colorama==0.4.6
coverage==7.2.3
@ -11,23 +13,22 @@ flake8-quotes==3.3.2
flake8-use-fstring==1.4
flake8-variables-names==0.0.5
flexmock==0.11.3
idna==3.4
importlib_metadata==6.3.0; python_version < '3.8'
idna==3.7
isort==5.12.0
jsonschema==4.17.3
Markdown==3.4.1
mccabe==0.7.0
packaging==23.1
pathspec==0.11.1
pluggy==1.0.0
pathspec==0.11.1; python_version >= '3.8'
py==1.11.0
pycodestyle==2.10.0
pyflakes==3.0.1
jsonschema==4.17.3
pytest==7.3.0
pytest-cov==4.0.0
regex; python_version >= '3.8'
PyYAML>5.0.0
regex
requests==2.31.0
ruamel.yaml>0.15.0,<0.18.0
toml==0.10.2; python_version >= '3.8'
typed-ast; python_version >= '3.8'
typing-extensions==4.5.0; python_version < '3.8'
zipp==3.15.0; python_version < '3.8'
ruamel.yaml>0.15.0
toml==0.10.2
typed-ast

View File

@ -10,18 +10,28 @@ services:
environment:
POSTGRES_PASSWORD: test2
POSTGRES_DB: test
POSTGRES_USER: postgres2
command: docker-entrypoint.sh -p 5433
mysql:
image: docker.io/mariadb:10.5
mariadb:
image: docker.io/mariadb:10.11.4
environment:
MYSQL_ROOT_PASSWORD: test
MYSQL_DATABASE: test
mysql2:
image: docker.io/mariadb:10.5
MARIADB_ROOT_PASSWORD: test
MARIADB_DATABASE: test
mariadb2:
image: docker.io/mariadb:10.11.4
environment:
MYSQL_ROOT_PASSWORD: test2
MYSQL_DATABASE: test
MARIADB_ROOT_PASSWORD: test2
MARIADB_DATABASE: test
command: docker-entrypoint.sh --port=3307
not-actually-mysql:
image: docker.io/mariadb:10.11.4
environment:
MARIADB_ROOT_PASSWORD: test
MARIADB_DATABASE: test
not-actually-mysql2:
image: docker.io/mariadb:10.11.4
environment:
MARIADB_ROOT_PASSWORD: test2
MARIADB_DATABASE: test
command: docker-entrypoint.sh --port=3307
mongodb:
image: docker.io/mongo:5.0.5
@ -39,18 +49,17 @@ services:
environment:
TEST_CONTAINER: true
volumes:
- "../..:/app:ro"
- "../..:/app"
tmpfs:
- "/app/borgmatic.egg-info"
- "/app/build"
tty: true
working_dir: /app
entrypoint: /app/scripts/run-full-tests
command: --end-to-end-only
depends_on:
- postgresql
- postgresql2
- mysql
- mysql2
- mariadb
- mariadb2
- mongodb
- mongodb2

View File

@ -5,7 +5,9 @@ import subprocess
import sys
import tempfile
import pymongo
import pytest
import ruamel.yaml
def write_configuration(
@ -21,7 +23,7 @@ def write_configuration(
for testing. This includes injecting the given repository path, borgmatic source directory for
storing database dumps, dump format (for PostgreSQL), and encryption passphrase.
'''
config = f'''
config_yaml = f'''
source_directories:
- {source_directory}
repositories:
@ -45,18 +47,32 @@ postgresql_databases:
hostname: postgresql
username: postgres
password: test
mysql_databases:
mariadb_databases:
- name: test
hostname: mysql
hostname: mariadb
username: root
password: test
- name: all
hostname: mysql
hostname: mariadb
username: root
password: test
- name: all
format: sql
hostname: mysql
hostname: mariadb
username: root
password: test
mysql_databases:
- name: test
hostname: not-actually-mysql
username: root
password: test
- name: all
hostname: not-actually-mysql
username: root
password: test
- name: all
format: sql
hostname: not-actually-mysql
username: root
password: test
mongodb_databases:
@ -76,7 +92,9 @@ sqlite_databases:
'''
with open(config_path, 'w') as config_file:
config_file.write(config)
config_file.write(config_yaml)
return ruamel.yaml.YAML(typ='safe').load(config_yaml)
def write_custom_restore_configuration(
@ -92,7 +110,7 @@ def write_custom_restore_configuration(
for testing with custom restore options. This includes a custom restore_hostname, restore_port,
restore_username, restore_password and restore_path.
'''
config = f'''
config_yaml = f'''
source_directories:
- {source_directory}
repositories:
@ -109,14 +127,22 @@ postgresql_databases:
format: {postgresql_dump_format}
restore_hostname: postgresql2
restore_port: 5433
restore_username: postgres2
restore_password: test2
mariadb_databases:
- name: test
hostname: mariadb
username: root
password: test
restore_hostname: mariadb2
restore_port: 3307
restore_username: root
restore_password: test2
mysql_databases:
- name: test
hostname: mysql
hostname: not-actually-mysql
username: root
password: test
restore_hostname: mysql2
restore_hostname: not-actually-mysql2
restore_port: 3307
restore_username: root
restore_password: test2
@ -138,7 +164,9 @@ sqlite_databases:
'''
with open(config_path, 'w') as config_file:
config_file.write(config)
config_file.write(config_yaml)
return ruamel.yaml.YAML(typ='safe').load(config_yaml)
def write_simple_custom_restore_configuration(
@ -154,7 +182,7 @@ def write_simple_custom_restore_configuration(
custom restore_hostname, restore_port, restore_username and restore_password as we only test
these options for PostgreSQL.
'''
config = f'''
config_yaml = f'''
source_directories:
- {source_directory}
repositories:
@ -172,7 +200,147 @@ postgresql_databases:
'''
with open(config_path, 'w') as config_file:
config_file.write(config)
config_file.write(config_yaml)
return ruamel.yaml.YAML(typ='safe').load(config_yaml)
def get_connection_params(database, use_restore_options=False):
hostname = (database.get('restore_hostname') if use_restore_options else None) or database.get(
'hostname'
)
port = (database.get('restore_port') if use_restore_options else None) or database.get('port')
username = (database.get('restore_username') if use_restore_options else None) or database.get(
'username'
)
password = (database.get('restore_password') if use_restore_options else None) or database.get(
'password'
)
return (hostname, port, username, password)
def run_postgresql_command(command, config, use_restore_options=False):
(hostname, port, username, password) = get_connection_params(
config['postgresql_databases'][0], use_restore_options
)
subprocess.check_call(
[
'/usr/bin/psql',
f'--host={hostname}',
f'--port={port or 5432}',
f"--username={username or 'root'}",
f'--command={command}',
'test',
],
env={'PGPASSWORD': password},
)
def run_mariadb_command(command, config, use_restore_options=False, binary_name='mariadb'):
(hostname, port, username, password) = get_connection_params(
config[f'{binary_name}_databases'][0], use_restore_options
)
subprocess.check_call(
[
f'/usr/bin/{binary_name}',
f'--host={hostname}',
f'--port={port or 3306}',
f'--user={username}',
f'--execute={command}',
'test',
],
env={'MYSQL_PWD': password},
)
def get_mongodb_database_client(config, use_restore_options=False):
(hostname, port, username, password) = get_connection_params(
config['mongodb_databases'][0], use_restore_options
)
return pymongo.MongoClient(f'mongodb://{username}:{password}@{hostname}:{port or 27017}').test
def run_sqlite_command(command, config, use_restore_options=False):
database = config['sqlite_databases'][0]
path = (database.get('restore_path') if use_restore_options else None) or database.get('path')
subprocess.check_call(
[
'/usr/bin/sqlite3',
path,
command,
'.exit',
],
)
DEFAULT_HOOK_NAMES = {'postgresql', 'mariadb', 'mysql', 'mongodb', 'sqlite'}
def create_test_tables(config, use_restore_options=False):
'''
Create test tables for borgmatic to dump and backup.
'''
command = 'create table test{id} (thing int); insert into test{id} values (1);'
if 'postgresql_databases' in config:
run_postgresql_command(command.format(id=1), config, use_restore_options)
if 'mariadb_databases' in config:
run_mariadb_command(command.format(id=2), config, use_restore_options)
if 'mysql_databases' in config:
run_mariadb_command(command.format(id=3), config, use_restore_options, binary_name='mysql')
if 'mongodb_databases' in config:
get_mongodb_database_client(config, use_restore_options)['test4'].insert_one({'thing': 1})
if 'sqlite_databases' in config:
run_sqlite_command(command.format(id=5), config, use_restore_options)
def drop_test_tables(config, use_restore_options=False):
'''
Drop the test tables in preparation for borgmatic restoring them.
'''
command = 'drop table if exists test{id};'
if 'postgresql_databases' in config:
run_postgresql_command(command.format(id=1), config, use_restore_options)
if 'mariadb_databases' in config:
run_mariadb_command(command.format(id=2), config, use_restore_options)
if 'mysql_databases' in config:
run_mariadb_command(command.format(id=3), config, use_restore_options, binary_name='mysql')
if 'mongodb_databases' in config:
get_mongodb_database_client(config, use_restore_options)['test4'].drop()
if 'sqlite_databases' in config:
run_sqlite_command(command.format(id=5), config, use_restore_options)
def select_test_tables(config, use_restore_options=False):
'''
Select the test tables to make sure they exist.
Raise if the expected tables cannot be selected, for instance if a restore hasn't worked as
expected.
'''
command = 'select count(*) from test{id};'
if 'postgresql_databases' in config:
run_postgresql_command(command.format(id=1), config, use_restore_options)
if 'mariadb_databases' in config:
run_mariadb_command(command.format(id=2), config, use_restore_options)
if 'mysql_databases' in config:
run_mariadb_command(command.format(id=3), config, use_restore_options, binary_name='mysql')
if 'mongodb_databases' in config:
assert (
get_mongodb_database_client(config, use_restore_options)['test4'].count_documents(
filter={}
)
> 0
)
if 'sqlite_databases' in config:
run_sqlite_command(command.format(id=5), config, use_restore_options)
def test_database_dump_and_restore():
@ -188,15 +356,17 @@ def test_database_dump_and_restore():
try:
config_path = os.path.join(temporary_directory, 'test.yaml')
write_configuration(
config = write_configuration(
temporary_directory, config_path, repository_path, borgmatic_source_directory
)
create_test_tables(config)
select_test_tables(config)
subprocess.check_call(
['borgmatic', '-v', '2', '--config', config_path, 'rcreate', '--encryption', 'repokey']
)
# Run borgmatic to generate a backup archive including a database dump.
# Run borgmatic to generate a backup archive including database dumps.
subprocess.check_call(['borgmatic', 'create', '--config', config_path, '-v', '2'])
# Get the created archive name.
@ -209,16 +379,21 @@ def test_database_dump_and_restore():
assert len(parsed_output[0]['archives']) == 1
archive_name = parsed_output[0]['archives'][0]['archive']
# Restore the database from the archive.
# Restore the databases from the archive.
drop_test_tables(config)
subprocess.check_call(
['borgmatic', '-v', '2', '--config', config_path, 'restore', '--archive', archive_name]
)
# Ensure the test tables have actually been restored.
select_test_tables(config)
finally:
os.chdir(original_working_directory)
shutil.rmtree(temporary_directory)
drop_test_tables(config)
def test_database_dump_and_restore_with_restore_cli_arguments():
def test_database_dump_and_restore_with_restore_cli_flags():
# Create a Borg repository.
temporary_directory = tempfile.mkdtemp()
repository_path = os.path.join(temporary_directory, 'test.borg')
@ -228,9 +403,11 @@ def test_database_dump_and_restore_with_restore_cli_arguments():
try:
config_path = os.path.join(temporary_directory, 'test.yaml')
write_simple_custom_restore_configuration(
config = write_simple_custom_restore_configuration(
temporary_directory, config_path, repository_path, borgmatic_source_directory
)
create_test_tables(config)
select_test_tables(config)
subprocess.check_call(
['borgmatic', '-v', '2', '--config', config_path, 'rcreate', '--encryption', 'repokey']
@ -250,6 +427,7 @@ def test_database_dump_and_restore_with_restore_cli_arguments():
archive_name = parsed_output[0]['archives'][0]['archive']
# Restore the database from the archive.
drop_test_tables(config)
subprocess.check_call(
[
'borgmatic',
@ -264,15 +442,25 @@ def test_database_dump_and_restore_with_restore_cli_arguments():
'postgresql2',
'--port',
'5433',
'--username',
'postgres2',
'--password',
'test2',
]
)
# Ensure the test tables have actually been restored. But first modify the config to contain
# the altered restore values from the borgmatic command above. This ensures that the test
# tables are selected from the correct database.
database = config['postgresql_databases'][0]
database['restore_hostname'] = 'postgresql2'
database['restore_port'] = '5433'
database['restore_password'] = 'test2'
select_test_tables(config, use_restore_options=True)
finally:
os.chdir(original_working_directory)
shutil.rmtree(temporary_directory)
drop_test_tables(config)
drop_test_tables(config, use_restore_options=True)
def test_database_dump_and_restore_with_restore_configuration_options():
@ -285,9 +473,11 @@ def test_database_dump_and_restore_with_restore_configuration_options():
try:
config_path = os.path.join(temporary_directory, 'test.yaml')
write_custom_restore_configuration(
config = write_custom_restore_configuration(
temporary_directory, config_path, repository_path, borgmatic_source_directory
)
create_test_tables(config)
select_test_tables(config)
subprocess.check_call(
['borgmatic', '-v', '2', '--config', config_path, 'rcreate', '--encryption', 'repokey']
@ -307,12 +497,18 @@ def test_database_dump_and_restore_with_restore_configuration_options():
archive_name = parsed_output[0]['archives'][0]['archive']
# Restore the database from the archive.
drop_test_tables(config)
subprocess.check_call(
['borgmatic', '-v', '2', '--config', config_path, 'restore', '--archive', archive_name]
)
# Ensure the test tables have actually been restored.
select_test_tables(config, use_restore_options=True)
finally:
os.chdir(original_working_directory)
shutil.rmtree(temporary_directory)
drop_test_tables(config)
drop_test_tables(config, use_restore_options=True)
def test_database_dump_and_restore_with_directory_format():
@ -325,7 +521,7 @@ def test_database_dump_and_restore_with_directory_format():
try:
config_path = os.path.join(temporary_directory, 'test.yaml')
write_configuration(
config = write_configuration(
temporary_directory,
config_path,
repository_path,
@ -333,6 +529,8 @@ def test_database_dump_and_restore_with_directory_format():
postgresql_dump_format='directory',
mongodb_dump_format='directory',
)
create_test_tables(config)
select_test_tables(config)
subprocess.check_call(
['borgmatic', '-v', '2', '--config', config_path, 'rcreate', '--encryption', 'repokey']
@ -342,12 +540,17 @@ def test_database_dump_and_restore_with_directory_format():
subprocess.check_call(['borgmatic', 'create', '--config', config_path, '-v', '2'])
# Restore the database from the archive.
drop_test_tables(config)
subprocess.check_call(
['borgmatic', '--config', config_path, 'restore', '--archive', 'latest']
)
# Ensure the test tables have actually been restored.
select_test_tables(config)
finally:
os.chdir(original_working_directory)
shutil.rmtree(temporary_directory)
drop_test_tables(config)
def test_database_dump_with_error_causes_borgmatic_to_exit():

View File

@ -1,38 +0,0 @@
import ruamel.yaml
def test_dev_docker_compose_has_same_services_as_build_server_configuration():
'''
The end-to-end test configuration for local development and the build server's test
configuration use two different mechanisms for configuring and spinning up "services": the
database containers upon which the end-to-end tests are reliant. The dev configuration uses
Docker Compose, while the Drone build server configuration uses its own similar-but-different
configuration file format.
Therefore, to ensure dev-build parity, these tests assert that the services are the same across
the dev and build configurations. This includes service name, container image, environment
variables, and commands.
This test only compares services and does not assert anything else about the respective testing
environments.
'''
yaml = ruamel.yaml.YAML(typ='safe')
dev_services = {
name: service
for name, service in yaml.load(open('tests/end-to-end/docker-compose.yaml').read())[
'services'
].items()
if name != 'tests'
}
build_server_services = tuple(yaml.load_all(open('.drone.yml').read()))[0]['services']
assert len(dev_services) == len(build_server_services)
for build_service in build_server_services:
dev_service = dev_services[build_service['name']]
assert dev_service['image'] == build_service['image']
assert dev_service['environment'] == build_service['environment']
if 'command' in dev_service or 'commands' in build_service:
assert len(build_service['commands']) <= 1
assert dev_service['command'] == build_service['commands'][0]

View File

@ -0,0 +1,11 @@
import subprocess
import sys
def test_borgmatic_command_with_invalid_flag_shows_error_but_not_traceback():
output = subprocess.run(
'borgmatic -v 2 --invalid'.split(' '), stdout=subprocess.PIPE, stderr=subprocess.STDOUT
).stdout.decode(sys.stdout.encoding)
assert 'Unrecognized argument' in output
assert 'Traceback' not in output

View File

@ -32,6 +32,9 @@ def assert_command_does_not_duplicate_flags(command, *args, **kwargs):
flag_name: 1 for flag_name in flag_counts
}, f"Duplicate flags found in: {' '.join(command)}"
if '--json' in command:
return '{}'
def fuzz_argument(arguments, argument_name):
'''

View File

@ -13,8 +13,9 @@ def test_parse_arguments_with_no_arguments_uses_defaults():
global_arguments = arguments['global']
assert global_arguments.config_paths == config_paths
assert global_arguments.verbosity == 0
assert global_arguments.syslog_verbosity == 0
assert global_arguments.log_file_verbosity == 0
assert global_arguments.syslog_verbosity == -2
assert global_arguments.log_file_verbosity == 1
assert global_arguments.monitoring_verbosity == 1
def test_parse_arguments_with_multiple_config_flags_parses_as_list():
@ -25,8 +26,9 @@ def test_parse_arguments_with_multiple_config_flags_parses_as_list():
global_arguments = arguments['global']
assert global_arguments.config_paths == ['myconfig', 'otherconfig']
assert global_arguments.verbosity == 0
assert global_arguments.syslog_verbosity == 0
assert global_arguments.log_file_verbosity == 0
assert global_arguments.syslog_verbosity == -2
assert global_arguments.log_file_verbosity == 1
assert global_arguments.monitoring_verbosity == 1
def test_parse_arguments_with_action_after_config_path_omits_action():
@ -71,8 +73,9 @@ def test_parse_arguments_with_verbosity_overrides_default():
global_arguments = arguments['global']
assert global_arguments.config_paths == config_paths
assert global_arguments.verbosity == 1
assert global_arguments.syslog_verbosity == 0
assert global_arguments.log_file_verbosity == 0
assert global_arguments.syslog_verbosity == -2
assert global_arguments.log_file_verbosity == 1
assert global_arguments.monitoring_verbosity == 1
def test_parse_arguments_with_syslog_verbosity_overrides_default():
@ -85,6 +88,8 @@ def test_parse_arguments_with_syslog_verbosity_overrides_default():
assert global_arguments.config_paths == config_paths
assert global_arguments.verbosity == 0
assert global_arguments.syslog_verbosity == 2
assert global_arguments.log_file_verbosity == 1
assert global_arguments.monitoring_verbosity == 1
def test_parse_arguments_with_log_file_verbosity_overrides_default():
@ -96,8 +101,9 @@ def test_parse_arguments_with_log_file_verbosity_overrides_default():
global_arguments = arguments['global']
assert global_arguments.config_paths == config_paths
assert global_arguments.verbosity == 0
assert global_arguments.syslog_verbosity == 0
assert global_arguments.syslog_verbosity == -2
assert global_arguments.log_file_verbosity == -1
assert global_arguments.monitoring_verbosity == 1
def test_parse_arguments_with_single_override_parses():
@ -616,3 +622,16 @@ def test_parse_arguments_config_with_subaction_and_explicit_config_file_does_not
module.parse_arguments(
'config', 'bootstrap', '--repository', 'repo.borg', '--config', 'test.yaml'
)
def test_parse_arguments_with_borg_action_and_dry_run_raises():
flexmock(module.collect).should_receive('get_default_config_paths').and_return(['default'])
with pytest.raises(ValueError):
module.parse_arguments('--dry-run', 'borg', 'list')
def test_parse_arguments_with_borg_action_and_no_dry_run_does_not_raise():
flexmock(module.collect).should_receive('get_default_config_paths').and_return(['default'])
module.parse_arguments('borg', 'list')

View File

@ -10,7 +10,7 @@ from borgmatic.config import generate as module
def test_insert_newline_before_comment_does_not_raise():
field_name = 'foo'
config = module.yaml.comments.CommentedMap([(field_name, 33)])
config = module.ruamel.yaml.comments.CommentedMap([(field_name, 33)])
config.yaml_set_comment_before_after_key(key=field_name, before='Comment')
module.insert_newline_before_comment(config, field_name)
@ -125,14 +125,16 @@ def test_write_configuration_with_already_existing_directory_does_not_raise():
def test_add_comments_to_configuration_sequence_of_strings_does_not_raise():
config = module.yaml.comments.CommentedSeq(['foo', 'bar'])
config = module.ruamel.yaml.comments.CommentedSeq(['foo', 'bar'])
schema = {'type': 'array', 'items': {'type': 'string'}}
module.add_comments_to_configuration_sequence(config, schema)
def test_add_comments_to_configuration_sequence_of_maps_does_not_raise():
config = module.yaml.comments.CommentedSeq([module.yaml.comments.CommentedMap([('foo', 'yo')])])
config = module.ruamel.yaml.comments.CommentedSeq(
[module.ruamel.yaml.comments.CommentedMap([('foo', 'yo')])]
)
schema = {
'type': 'array',
'items': {'type': 'object', 'properties': {'foo': {'description': 'yo'}}},
@ -142,7 +144,9 @@ def test_add_comments_to_configuration_sequence_of_maps_does_not_raise():
def test_add_comments_to_configuration_sequence_of_maps_without_description_does_not_raise():
config = module.yaml.comments.CommentedSeq([module.yaml.comments.CommentedMap([('foo', 'yo')])])
config = module.ruamel.yaml.comments.CommentedSeq(
[module.ruamel.yaml.comments.CommentedMap([('foo', 'yo')])]
)
schema = {'type': 'array', 'items': {'type': 'object', 'properties': {'foo': {}}}}
module.add_comments_to_configuration_sequence(config, schema)
@ -150,7 +154,7 @@ def test_add_comments_to_configuration_sequence_of_maps_without_description_does
def test_add_comments_to_configuration_object_does_not_raise():
# Ensure that it can deal with fields both in the schema and missing from the schema.
config = module.yaml.comments.CommentedMap([('foo', 33), ('bar', 44), ('baz', 55)])
config = module.ruamel.yaml.comments.CommentedMap([('foo', 33), ('bar', 44), ('baz', 55)])
schema = {
'type': 'object',
'properties': {'foo': {'description': 'Foo'}, 'bar': {'description': 'Bar'}},
@ -160,7 +164,7 @@ def test_add_comments_to_configuration_object_does_not_raise():
def test_add_comments_to_configuration_object_with_skip_first_does_not_raise():
config = module.yaml.comments.CommentedMap([('foo', 33)])
config = module.ruamel.yaml.comments.CommentedMap([('foo', 33)])
schema = {'type': 'object', 'properties': {'foo': {'description': 'Foo'}}}
module.add_comments_to_configuration_object(config, schema, skip_first=True)
@ -168,7 +172,7 @@ def test_add_comments_to_configuration_object_with_skip_first_does_not_raise():
def test_remove_commented_out_sentinel_keeps_other_comments():
field_name = 'foo'
config = module.yaml.comments.CommentedMap([(field_name, 33)])
config = module.ruamel.yaml.comments.CommentedMap([(field_name, 33)])
config.yaml_set_comment_before_after_key(key=field_name, before='Actual comment.\nCOMMENT_OUT')
module.remove_commented_out_sentinel(config, field_name)
@ -180,7 +184,7 @@ def test_remove_commented_out_sentinel_keeps_other_comments():
def test_remove_commented_out_sentinel_without_sentinel_keeps_other_comments():
field_name = 'foo'
config = module.yaml.comments.CommentedMap([(field_name, 33)])
config = module.ruamel.yaml.comments.CommentedMap([(field_name, 33)])
config.yaml_set_comment_before_after_key(key=field_name, before='Actual comment.')
module.remove_commented_out_sentinel(config, field_name)
@ -192,7 +196,7 @@ def test_remove_commented_out_sentinel_without_sentinel_keeps_other_comments():
def test_remove_commented_out_sentinel_on_unknown_field_does_not_raise():
field_name = 'foo'
config = module.yaml.comments.CommentedMap([(field_name, 33)])
config = module.ruamel.yaml.comments.CommentedMap([(field_name, 33)])
config.yaml_set_comment_before_after_key(key=field_name, before='Actual comment.')
module.remove_commented_out_sentinel(config, 'unknown')
@ -201,7 +205,9 @@ def test_remove_commented_out_sentinel_on_unknown_field_does_not_raise():
def test_generate_sample_configuration_does_not_raise():
builtins = flexmock(sys.modules['builtins'])
builtins.should_receive('open').with_args('schema.yaml').and_return('')
flexmock(module.yaml).should_receive('round_trip_load')
flexmock(module.ruamel.yaml).should_receive('YAML').and_return(
flexmock(load=lambda filename: {})
)
flexmock(module).should_receive('schema_to_sample_configuration')
flexmock(module).should_receive('merge_source_configuration_into_destination')
flexmock(module).should_receive('render_configuration')
@ -214,7 +220,9 @@ def test_generate_sample_configuration_does_not_raise():
def test_generate_sample_configuration_with_source_filename_does_not_raise():
builtins = flexmock(sys.modules['builtins'])
builtins.should_receive('open').with_args('schema.yaml').and_return('')
flexmock(module.yaml).should_receive('round_trip_load')
flexmock(module.ruamel.yaml).should_receive('YAML').and_return(
flexmock(load=lambda filename: {})
)
flexmock(module.load).should_receive('load_configuration')
flexmock(module.normalize).should_receive('normalize')
flexmock(module).should_receive('schema_to_sample_configuration')
@ -229,7 +237,9 @@ def test_generate_sample_configuration_with_source_filename_does_not_raise():
def test_generate_sample_configuration_with_dry_run_does_not_write_file():
builtins = flexmock(sys.modules['builtins'])
builtins.should_receive('open').with_args('schema.yaml').and_return('')
flexmock(module.yaml).should_receive('round_trip_load')
flexmock(module.ruamel.yaml).should_receive('YAML').and_return(
flexmock(load=lambda filename: {})
)
flexmock(module).should_receive('schema_to_sample_configuration')
flexmock(module).should_receive('merge_source_configuration_into_destination')
flexmock(module).should_receive('render_configuration')

View File

@ -12,36 +12,10 @@ def test_load_configuration_parses_contents():
config_file = io.StringIO('key: value')
config_file.name = 'config.yaml'
builtins.should_receive('open').with_args('config.yaml').and_return(config_file)
assert module.load_configuration('config.yaml') == {'key': 'value'}
config_paths = {'other.yaml'}
def test_load_configuration_replaces_constants():
builtins = flexmock(sys.modules['builtins'])
config_file = io.StringIO(
'''
constants:
key: value
key: {key}
'''
)
config_file.name = 'config.yaml'
builtins.should_receive('open').with_args('config.yaml').and_return(config_file)
assert module.load_configuration('config.yaml') == {'key': 'value'}
def test_load_configuration_replaces_complex_constants():
builtins = flexmock(sys.modules['builtins'])
config_file = io.StringIO(
'''
constants:
key:
subkey: value
key: {key}
'''
)
config_file.name = 'config.yaml'
builtins.should_receive('open').with_args('config.yaml').and_return(config_file)
assert module.load_configuration('config.yaml') == {'key': {'subkey': 'value'}}
assert module.load_configuration('config.yaml', config_paths) == {'key': 'value'}
assert config_paths == {'config.yaml', 'other.yaml'}
def test_load_configuration_with_only_integer_value_does_not_raise():
@ -49,7 +23,10 @@ def test_load_configuration_with_only_integer_value_does_not_raise():
config_file = io.StringIO('33')
config_file.name = 'config.yaml'
builtins.should_receive('open').with_args('config.yaml').and_return(config_file)
assert module.load_configuration('config.yaml') == 33
config_paths = {'other.yaml'}
assert module.load_configuration('config.yaml', config_paths) == 33
assert config_paths == {'config.yaml', 'other.yaml'}
def test_load_configuration_inlines_include_relative_to_current_directory():
@ -63,8 +40,10 @@ def test_load_configuration_inlines_include_relative_to_current_directory():
config_file = io.StringIO('key: !include include.yaml')
config_file.name = 'config.yaml'
builtins.should_receive('open').with_args('config.yaml').and_return(config_file)
config_paths = {'other.yaml'}
assert module.load_configuration('config.yaml') == {'key': 'value'}
assert module.load_configuration('config.yaml', config_paths) == {'key': 'value'}
assert config_paths == {'config.yaml', '/tmp/include.yaml', 'other.yaml'}
def test_load_configuration_inlines_include_relative_to_config_parent_directory():
@ -85,8 +64,10 @@ def test_load_configuration_inlines_include_relative_to_config_parent_directory(
config_file = io.StringIO('key: !include include.yaml')
config_file.name = '/etc/config.yaml'
builtins.should_receive('open').with_args('/etc/config.yaml').and_return(config_file)
config_paths = {'other.yaml'}
assert module.load_configuration('/etc/config.yaml') == {'key': 'value'}
assert module.load_configuration('/etc/config.yaml', config_paths) == {'key': 'value'}
assert config_paths == {'/etc/config.yaml', '/etc/include.yaml', 'other.yaml'}
def test_load_configuration_raises_if_relative_include_does_not_exist():
@ -99,9 +80,10 @@ def test_load_configuration_raises_if_relative_include_does_not_exist():
config_file = io.StringIO('key: !include include.yaml')
config_file.name = '/etc/config.yaml'
builtins.should_receive('open').with_args('/etc/config.yaml').and_return(config_file)
config_paths = set()
with pytest.raises(FileNotFoundError):
module.load_configuration('/etc/config.yaml')
module.load_configuration('/etc/config.yaml', config_paths)
def test_load_configuration_inlines_absolute_include():
@ -115,8 +97,10 @@ def test_load_configuration_inlines_absolute_include():
config_file = io.StringIO('key: !include /root/include.yaml')
config_file.name = 'config.yaml'
builtins.should_receive('open').with_args('config.yaml').and_return(config_file)
config_paths = {'other.yaml'}
assert module.load_configuration('config.yaml') == {'key': 'value'}
assert module.load_configuration('config.yaml', config_paths) == {'key': 'value'}
assert config_paths == {'config.yaml', '/root/include.yaml', 'other.yaml'}
def test_load_configuration_raises_if_absolute_include_does_not_exist():
@ -127,9 +111,10 @@ def test_load_configuration_raises_if_absolute_include_does_not_exist():
config_file = io.StringIO('key: !include /root/include.yaml')
config_file.name = 'config.yaml'
builtins.should_receive('open').with_args('config.yaml').and_return(config_file)
config_paths = set()
with pytest.raises(FileNotFoundError):
assert module.load_configuration('config.yaml')
assert module.load_configuration('config.yaml', config_paths)
def test_load_configuration_inlines_multiple_file_include_as_list():
@ -146,8 +131,15 @@ def test_load_configuration_inlines_multiple_file_include_as_list():
config_file = io.StringIO('key: !include [/root/include1.yaml, /root/include2.yaml]')
config_file.name = 'config.yaml'
builtins.should_receive('open').with_args('config.yaml').and_return(config_file)
config_paths = {'other.yaml'}
assert module.load_configuration('config.yaml') == {'key': ['value2', 'value1']}
assert module.load_configuration('config.yaml', config_paths) == {'key': ['value2', 'value1']}
assert config_paths == {
'config.yaml',
'/root/include1.yaml',
'/root/include2.yaml',
'other.yaml',
}
def test_load_configuration_include_with_unsupported_filename_type_raises():
@ -158,9 +150,10 @@ def test_load_configuration_include_with_unsupported_filename_type_raises():
config_file = io.StringIO('key: !include {path: /root/include.yaml}')
config_file.name = 'config.yaml'
builtins.should_receive('open').with_args('config.yaml').and_return(config_file)
config_paths = set()
with pytest.raises(ValueError):
module.load_configuration('config.yaml')
module.load_configuration('config.yaml', config_paths)
def test_load_configuration_merges_include():
@ -184,8 +177,13 @@ def test_load_configuration_merges_include():
)
config_file.name = 'config.yaml'
builtins.should_receive('open').with_args('config.yaml').and_return(config_file)
config_paths = {'other.yaml'}
assert module.load_configuration('config.yaml') == {'foo': 'override', 'baz': 'quux'}
assert module.load_configuration('config.yaml', config_paths) == {
'foo': 'override',
'baz': 'quux',
}
assert config_paths == {'config.yaml', '/tmp/include.yaml', 'other.yaml'}
def test_load_configuration_merges_multiple_file_include():
@ -217,12 +215,14 @@ def test_load_configuration_merges_multiple_file_include():
)
config_file.name = 'config.yaml'
builtins.should_receive('open').with_args('config.yaml').and_return(config_file)
config_paths = {'other.yaml'}
assert module.load_configuration('config.yaml') == {
assert module.load_configuration('config.yaml', config_paths) == {
'foo': 'override',
'baz': 'second',
'original': 'yes',
}
assert config_paths == {'config.yaml', '/tmp/include1.yaml', '/tmp/include2.yaml', 'other.yaml'}
def test_load_configuration_with_retain_tag_merges_include_but_keeps_local_values():
@ -255,11 +255,13 @@ def test_load_configuration_with_retain_tag_merges_include_but_keeps_local_value
)
config_file.name = 'config.yaml'
builtins.should_receive('open').with_args('config.yaml').and_return(config_file)
config_paths = {'other.yaml'}
assert module.load_configuration('config.yaml') == {
assert module.load_configuration('config.yaml', config_paths) == {
'stuff': {'foo': 'override'},
'other': {'a': 'override', 'c': 'd'},
}
assert config_paths == {'config.yaml', '/tmp/include.yaml', 'other.yaml'}
def test_load_configuration_with_retain_tag_but_without_merge_include_raises():
@ -285,9 +287,10 @@ def test_load_configuration_with_retain_tag_but_without_merge_include_raises():
)
config_file.name = 'config.yaml'
builtins.should_receive('open').with_args('config.yaml').and_return(config_file)
config_paths = set()
with pytest.raises(ValueError):
module.load_configuration('config.yaml')
module.load_configuration('config.yaml', config_paths)
def test_load_configuration_with_retain_tag_on_scalar_raises():
@ -313,9 +316,10 @@ def test_load_configuration_with_retain_tag_on_scalar_raises():
)
config_file.name = 'config.yaml'
builtins.should_receive('open').with_args('config.yaml').and_return(config_file)
config_paths = set()
with pytest.raises(ValueError):
module.load_configuration('config.yaml')
module.load_configuration('config.yaml', config_paths)
def test_load_configuration_with_omit_tag_merges_include_and_omits_requested_values():
@ -344,8 +348,10 @@ def test_load_configuration_with_omit_tag_merges_include_and_omits_requested_val
)
config_file.name = 'config.yaml'
builtins.should_receive('open').with_args('config.yaml').and_return(config_file)
config_paths = {'other.yaml'}
assert module.load_configuration('config.yaml') == {'stuff': ['a', 'c', 'x', 'y']}
assert module.load_configuration('config.yaml', config_paths) == {'stuff': ['a', 'c', 'x', 'y']}
assert config_paths == {'config.yaml', '/tmp/include.yaml', 'other.yaml'}
def test_load_configuration_with_omit_tag_on_unknown_value_merges_include_and_does_not_raise():
@ -374,8 +380,12 @@ def test_load_configuration_with_omit_tag_on_unknown_value_merges_include_and_do
)
config_file.name = 'config.yaml'
builtins.should_receive('open').with_args('config.yaml').and_return(config_file)
config_paths = {'other.yaml'}
assert module.load_configuration('config.yaml') == {'stuff': ['a', 'b', 'c', 'x', 'y']}
assert module.load_configuration('config.yaml', config_paths) == {
'stuff': ['a', 'b', 'c', 'x', 'y']
}
assert config_paths == {'config.yaml', '/tmp/include.yaml', 'other.yaml'}
def test_load_configuration_with_omit_tag_on_non_list_item_raises():
@ -403,9 +413,10 @@ def test_load_configuration_with_omit_tag_on_non_list_item_raises():
)
config_file.name = 'config.yaml'
builtins.should_receive('open').with_args('config.yaml').and_return(config_file)
config_paths = set()
with pytest.raises(ValueError):
module.load_configuration('config.yaml')
module.load_configuration('config.yaml', config_paths)
def test_load_configuration_with_omit_tag_on_non_scalar_list_item_raises():
@ -432,9 +443,10 @@ def test_load_configuration_with_omit_tag_on_non_scalar_list_item_raises():
)
config_file.name = 'config.yaml'
builtins.should_receive('open').with_args('config.yaml').and_return(config_file)
config_paths = set()
with pytest.raises(ValueError):
module.load_configuration('config.yaml')
module.load_configuration('config.yaml', config_paths)
def test_load_configuration_with_omit_tag_but_without_merge_raises():
@@ -462,9 +474,10 @@ def test_load_configuration_with_omit_tag_but_without_merge_raises():
)
config_file.name = 'config.yaml'
builtins.should_receive('open').with_args('config.yaml').and_return(config_file)
config_paths = set()
with pytest.raises(ValueError):
module.load_configuration('config.yaml')
module.load_configuration('config.yaml', config_paths)
def test_load_configuration_does_not_merge_include_list():
@@ -489,9 +502,10 @@ def test_load_configuration_does_not_merge_include_list():
)
config_file.name = 'config.yaml'
builtins.should_receive('open').with_args('config.yaml').and_return(config_file)
config_paths = set()
with pytest.raises(module.ruamel.yaml.error.YAMLError):
assert module.load_configuration('config.yaml')
assert module.load_configuration('config.yaml', config_paths)
@pytest.mark.parametrize(

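These tests exercise the new config_paths argument: the caller passes in a set, and load_configuration adds every file it reads (the main configuration plus any merged includes) to that set as a side effect. A minimal sketch of the pattern, assuming PyYAML in place of borgmatic's actual ruamel.yaml loader and its !include/!retain/!omit tags:

import yaml  # assumption: PyYAML stands in for borgmatic's ruamel.yaml usage

def load_configuration(filename, config_paths):
    '''
    Parse the YAML file at filename and return it, recording filename (and, in the real
    implementation, every included file as well) in the caller-supplied config_paths set.
    '''
    config_paths.add(filename)

    with open(filename) as config_file:
        return yaml.safe_load(config_file)

# Mirroring the tests: the caller owns the set, so paths accumulate across calls.
# config_paths = {'other.yaml'}
# load_configuration('config.yaml', config_paths)
# assert config_paths == {'config.yaml', 'other.yaml'}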

@@ -4,19 +4,24 @@ from borgmatic.config import override as module
@pytest.mark.parametrize(
'value,expected_result',
'value,expected_result,option_type',
(
('thing', 'thing'),
('33', 33),
('33b', '33b'),
('true', True),
('false', False),
('[foo]', ['foo']),
('[foo, bar]', ['foo', 'bar']),
('thing', 'thing', 'string'),
('33', 33, 'integer'),
('33', '33', 'string'),
('33b', '33b', 'integer'),
('33b', '33b', 'string'),
('true', True, 'boolean'),
('false', False, 'boolean'),
('true', 'true', 'string'),
('[foo]', ['foo'], 'array'),
('[foo]', '[foo]', 'string'),
('[foo, bar]', ['foo', 'bar'], 'array'),
('[foo, bar]', '[foo, bar]', 'string'),
),
)
def test_convert_value_type_coerces_values(value, expected_result):
assert module.convert_value_type(value) == expected_result
def test_convert_value_type_coerces_values(value, expected_result, option_type):
assert module.convert_value_type(value, option_type) == expected_result
def test_apply_overrides_updates_config():
@@ -24,17 +29,28 @@ def test_apply_overrides_updates_config():
'section.key=value1',
'other_section.thing=value2',
'section.nested.key=value3',
'location.no_longer_in_location=value4',
'new.foo=bar',
'new.mylist=[baz]',
'new.nonlist=[quux]',
]
config = {
'section': {'key': 'value', 'other': 'other_value'},
'other_section': {'thing': 'thing_value'},
'no_longer_in_location': 'because_location_is_deprecated',
}
schema = {
'properties': {
'new': {'properties': {'mylist': {'type': 'array'}, 'nonlist': {'type': 'string'}}}
}
}
module.apply_overrides(config, raw_overrides)
module.apply_overrides(config, schema, raw_overrides)
assert config == {
'section': {'key': 'value1', 'other': 'other_value', 'nested': {'key': 'value3'}},
'other_section': {'thing': 'value2'},
'new': {'foo': 'bar'},
'new': {'foo': 'bar', 'mylist': ['baz'], 'nonlist': '[quux]'},
'location': {'no_longer_in_location': 'value4'},
'no_longer_in_location': 'value4',
}

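The override tests now thread the configuration schema through: apply_overrides looks up each dotted option path under schema['properties'] to find its declared type, and convert_value_type keeps string-typed options verbatim while YAML-parsing everything else (so 'new.mylist=[baz]' becomes a list but 'new.nonlist=[quux]' stays a literal string). A hedged sketch of that conversion rule, not the actual function body, again assuming PyYAML for illustration:

import yaml  # assumption: PyYAML standing in for borgmatic's ruamel.yaml usage

def convert_value_type(value, option_type):
    # Schema declares this option as a string: keep the raw override text,
    # so '33', 'true', and '[foo]' pass through unchanged.
    if option_type == 'string':
        return value

    # Otherwise parse the value as YAML: '33' -> 33, 'true' -> True,
    # '[foo, bar]' -> ['foo', 'bar'], while '33b' stays the string '33b'.
    return yaml.safe_load(value)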

@@ -1,3 +1,9 @@
import pkgutil
import borgmatic.actions
import borgmatic.config.load
import borgmatic.config.validate
MAXIMUM_LINE_LENGTH = 80
@@ -6,3 +12,23 @@ def test_schema_line_length_stays_under_limit():
for line in schema_file.readlines():
assert len(line.rstrip('\n')) <= MAXIMUM_LINE_LENGTH
ACTIONS_MODULE_NAMES_TO_OMIT = {'arguments', 'export_key', 'json'}
ACTIONS_MODULE_NAMES_TO_ADD = {'key', 'umount'}
def test_schema_skip_actions_correspond_to_supported_actions():
'''
Ensure that the allowed actions in the schema's "skip_actions" option don't drift from
borgmatic's actual supported actions.
'''
schema = borgmatic.config.load.load_configuration(borgmatic.config.validate.schema_filename())
schema_skip_actions = set(schema['properties']['skip_actions']['items']['enum'])
supported_actions = {
module.name.replace('_', '-')
for module in pkgutil.iter_modules(borgmatic.actions.__path__)
if module.name not in ACTIONS_MODULE_NAMES_TO_OMIT
}.union(ACTIONS_MODULE_NAMES_TO_ADD)
assert schema_skip_actions == supported_actions


@@ -1,4 +1,5 @@
import io
import os
import string
import sys
@@ -57,7 +58,7 @@ def test_parse_configuration_transforms_file_into_mapping():
'''
)
config, logs = module.parse_configuration('/tmp/config.yaml', '/tmp/schema.yaml')
config, config_paths, logs = module.parse_configuration('/tmp/config.yaml', '/tmp/schema.yaml')
assert config == {
'source_directories': ['/home', '/etc'],
@@ -67,6 +68,7 @@ def test_parse_configuration_transforms_file_into_mapping():
'keep_minutely': 60,
'checks': [{'name': 'repository'}, {'name': 'archives'}],
}
assert config_paths == {'/tmp/config.yaml'}
assert logs == []
@@ -83,12 +85,13 @@ def test_parse_configuration_passes_through_quoted_punctuation():
'''
)
config, logs = module.parse_configuration('/tmp/config.yaml', '/tmp/schema.yaml')
config, config_paths, logs = module.parse_configuration('/tmp/config.yaml', '/tmp/schema.yaml')
assert config == {
'source_directories': [f'/home/{string.punctuation}'],
'repositories': [{'path': 'test.borg'}],
}
assert config_paths == {'/tmp/config.yaml'}
assert logs == []
@@ -140,7 +143,7 @@ def test_parse_configuration_inlines_include_inside_deprecated_section():
include_file.name = 'include.yaml'
builtins.should_receive('open').with_args('/tmp/include.yaml').and_return(include_file)
config, logs = module.parse_configuration('/tmp/config.yaml', '/tmp/schema.yaml')
config, config_paths, logs = module.parse_configuration('/tmp/config.yaml', '/tmp/schema.yaml')
assert config == {
'source_directories': ['/home'],
@@ -148,6 +151,7 @@ def test_parse_configuration_inlines_include_inside_deprecated_section():
'keep_daily': 7,
'keep_hourly': 24,
}
assert config_paths == {'/tmp/include.yaml', '/tmp/config.yaml'}
assert len(logs) == 1
@@ -174,7 +178,7 @@ def test_parse_configuration_merges_include():
include_file.name = 'include.yaml'
builtins.should_receive('open').with_args('/tmp/include.yaml').and_return(include_file)
config, logs = module.parse_configuration('/tmp/config.yaml', '/tmp/schema.yaml')
config, config_paths, logs = module.parse_configuration('/tmp/config.yaml', '/tmp/schema.yaml')
assert config == {
'source_directories': ['/home'],
@@ -182,6 +186,7 @@ def test_parse_configuration_merges_include():
'keep_daily': 1,
'keep_hourly': 24,
}
assert config_paths == {'/tmp/include.yaml', '/tmp/config.yaml'}
assert logs == []
@@ -193,6 +198,9 @@ def test_parse_configuration_raises_for_missing_config_file():
def test_parse_configuration_raises_for_missing_schema_file():
mock_config_and_schema('')
builtins = flexmock(sys.modules['builtins'])
builtins.should_receive('open').with_args('/tmp/config.yaml').and_return(
io.StringIO('foo: bar')
)
builtins.should_receive('open').with_args('/tmp/schema.yaml').and_raise(FileNotFoundError)
with pytest.raises(FileNotFoundError):
@@ -232,8 +240,8 @@ def test_parse_configuration_applies_overrides():
'''
)
config, logs = module.parse_configuration(
'/tmp/config.yaml', '/tmp/schema.yaml', overrides=['location.local_path=borg2']
config, config_paths, logs = module.parse_configuration(
'/tmp/config.yaml', '/tmp/schema.yaml', overrides=['local_path=borg2']
)
assert config == {
@@ -241,10 +249,11 @@ def test_parse_configuration_applies_overrides():
'repositories': [{'path': 'hostname.borg'}],
'local_path': 'borg2',
}
assert config_paths == {'/tmp/config.yaml'}
assert logs == []
def test_parse_configuration_applies_normalization():
def test_parse_configuration_applies_normalization_after_environment_variable_interpolation():
mock_config_and_schema(
'''
location:
@@ -252,17 +261,19 @@ def test_parse_configuration_applies_normalization():
- /home
repositories:
- path: hostname.borg
- ${NO_EXIST:-user@hostname:repo}
exclude_if_present: .nobackup
'''
)
flexmock(os).should_receive('getenv').replace_with(lambda variable_name, default: default)
config, logs = module.parse_configuration('/tmp/config.yaml', '/tmp/schema.yaml')
config, config_paths, logs = module.parse_configuration('/tmp/config.yaml', '/tmp/schema.yaml')
assert config == {
'source_directories': ['/home'],
'repositories': [{'path': 'hostname.borg'}],
'repositories': [{'path': 'ssh://user@hostname/./repo'}],
'exclude_if_present': ['.nobackup'],
}
assert config_paths == {'/tmp/config.yaml'}
assert logs

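Throughout this file, parse_configuration now returns a (config, config_paths, logs) triple instead of (config, logs), with config_paths collecting the main configuration file and every include that was read. A rough, self-contained sketch of that return shape; anything beyond the names visible in the tests is illustrative only:

import yaml  # assumption: PyYAML standing in for borgmatic's ruamel.yaml usage

def parse_configuration(config_filename, schema_filename, overrides=None):
    config_paths = set()

    with open(config_filename) as config_file:
        config = yaml.safe_load(config_file)
    config_paths.add(config_filename)  # includes would add their own paths here too

    logs = []  # normalization and deprecation warnings get collected here
    # Schema validation and override application are omitted in this sketch.
    return config, config_paths, logs

# config, config_paths, logs = parse_configuration('/tmp/config.yaml', '/tmp/schema.yaml')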

@@ -0,0 +1,28 @@
import logging
from flexmock import flexmock
from borgmatic.hooks import apprise as module
def test_destroy_monitor_removes_apprise_handler():
logger = logging.getLogger()
original_handlers = list(logger.handlers)
module.borgmatic.hooks.logs.add_handler(
module.borgmatic.hooks.logs.Forgetful_buffering_handler(
identifier=module.HANDLER_IDENTIFIER, byte_capacity=100, log_level=1
)
)
module.destroy_monitor(flexmock(), flexmock(), flexmock(), flexmock(), flexmock())
assert logger.handlers == original_handlers
def test_destroy_monitor_without_apprise_handler_does_not_raise():
logger = logging.getLogger()
original_handlers = list(logger.handlers)
module.destroy_monitor(flexmock(), flexmock(), flexmock(), flexmock(), flexmock())
assert logger.handlers == original_handlers


@@ -8,7 +8,11 @@ from borgmatic.hooks import healthchecks as module
def test_destroy_monitor_removes_healthchecks_handler():
logger = logging.getLogger()
original_handlers = list(logger.handlers)
logger.addHandler(module.Forgetful_buffering_handler(byte_capacity=100, log_level=1))
module.borgmatic.hooks.logs.add_handler(
module.borgmatic.hooks.logs.Forgetful_buffering_handler(
identifier=module.HANDLER_IDENTIFIER, byte_capacity=100, log_level=1
)
)
module.destroy_monitor(flexmock(), flexmock(), flexmock(), flexmock(), flexmock())

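Both the Apprise and Healthchecks tests above now install their buffering handler through a shared borgmatic.hooks.logs module, keyed by a per-hook identifier so each hook can later find and remove only its own handler. A sketch of what such identifier-keyed helpers could look like; everything beyond the names visible in the tests (Forgetful_buffering_handler, add_handler, HANDLER_IDENTIFIER) is an assumption:

import logging

class Forgetful_buffering_handler(logging.Handler):
    '''
    Buffer formatted log records up to roughly byte_capacity bytes, tagged with an
    identifier so the owning monitoring hook can locate this handler later.
    (Capacity enforcement is omitted in this sketch.)
    '''

    def __init__(self, identifier, byte_capacity, log_level):
        super().__init__(level=log_level)
        self.identifier = identifier
        self.byte_capacity = byte_capacity
        self.buffer = []

    def emit(self, record):
        self.buffer.append(self.format(record) + '\n')

def add_handler(handler):
    logging.getLogger().addHandler(handler)

def remove_handler(identifier):  # hypothetical helper a destroy_monitor could call
    logger = logging.getLogger()
    for handler in tuple(logger.handlers):
        if getattr(handler, 'identifier', None) == identifier:
            logger.removeHandler(handler)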

@@ -0,0 +1,89 @@
import logging
import platform
from flexmock import flexmock
from borgmatic.hooks import loki as module
def test_initialize_monitor_replaces_labels():
'''
Assert that label placeholders get replaced.
'''
hook_config = {
'url': 'http://localhost:3100/loki/api/v1/push',
'labels': {'hostname': '__hostname', 'config': '__config', 'config_full': '__config_path'},
}
config_filename = '/mock/path/test.yaml'
dry_run = True
module.initialize_monitor(hook_config, flexmock(), config_filename, flexmock(), dry_run)
for handler in tuple(logging.getLogger().handlers):
if isinstance(handler, module.Loki_log_handler):
assert handler.buffer.root['streams'][0]['stream']['hostname'] == platform.node()
assert handler.buffer.root['streams'][0]['stream']['config'] == 'test.yaml'
assert handler.buffer.root['streams'][0]['stream']['config_full'] == config_filename
return
assert False
def test_initialize_monitor_adds_log_handler():
'''
Assert that calling initialize_monitor adds our logger to the root logger.
'''
hook_config = {'url': 'http://localhost:3100/loki/api/v1/push', 'labels': {'app': 'borgmatic'}}
module.initialize_monitor(
hook_config,
flexmock(),
config_filename='test.yaml',
monitoring_log_level=flexmock(),
dry_run=True,
)
for handler in tuple(logging.getLogger().handlers):
if isinstance(handler, module.Loki_log_handler):
return
assert False
def test_ping_monitor_adds_log_message():
'''
Assert that calling ping_monitor adds a message to our logger.
'''
hook_config = {'url': 'http://localhost:3100/loki/api/v1/push', 'labels': {'app': 'borgmatic'}}
config_filename = 'test.yaml'
dry_run = True
module.initialize_monitor(hook_config, flexmock(), config_filename, flexmock(), dry_run)
module.ping_monitor(
hook_config, flexmock(), config_filename, module.monitor.State.FINISH, flexmock(), dry_run
)
for handler in tuple(logging.getLogger().handlers):
if isinstance(handler, module.Loki_log_handler):
assert any(
map(
lambda log: log
== f'{config_filename}: {module.MONITOR_STATE_TO_LOKI[module.monitor.State.FINISH]} backup',
map(lambda x: x[1], handler.buffer.root['streams'][0]['values']),
)
)
return
assert False
def test_destroy_monitor_removes_log_handler():
'''
Assert that destroy_monitor removes the logger from the root logger.
'''
hook_config = {'url': 'http://localhost:3100/loki/api/v1/push', 'labels': {'app': 'borgmatic'}}
config_filename = 'test.yaml'
dry_run = True
module.initialize_monitor(hook_config, flexmock(), config_filename, flexmock(), dry_run)
module.destroy_monitor(hook_config, flexmock(), config_filename, flexmock(), dry_run)
for handler in tuple(logging.getLogger().handlers):
if isinstance(handler, module.Loki_log_handler):
assert False
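The assertions in these Loki tests reach into handler.buffer.root, which holds a Loki push-API style payload: a 'streams' list whose single entry carries the label set under 'stream' and timestamped log lines under 'values', with the __hostname/__config/__config_path placeholders replaced at initialization. A small illustrative sketch of that payload shape; the function names here are hypothetical, not borgmatic's:

import os
import platform
import time

def build_loki_payload(labels, config_filename):
    stream = {}
    for label, value in labels.items():
        if value == '__hostname':
            value = platform.node()                      # the machine's hostname
        elif value == '__config':
            value = os.path.basename(config_filename)    # e.g. 'test.yaml'
        elif value == '__config_path':
            value = config_filename                      # e.g. '/mock/path/test.yaml'
        stream[label] = value

    return {'streams': [{'stream': stream, 'values': []}]}

def add_log_line(payload, message):
    # Loki expects (nanosecond timestamp, log line) pairs under "values".
    payload['streams'][0]['values'].append((str(time.time_ns()), message))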

Some files were not shown because too many files have changed in this diff.