Compare commits

...

565 Commits

Author SHA1 Message Date
Dan Helfman 08843d51d9 Replace "sequence" with "list" in docs for consistency. 2023-04-12 10:30:23 -07:00
Dan Helfman ea9213cb03 Spelling. 2023-04-11 22:12:57 -07:00
Dan Helfman 1ea4433aa9 Selectively shallow merge certain mappings or sequences when including configuration files (#672). 2023-04-11 21:49:10 -07:00
Dan Helfman 4c0e2cab78 View the results of configuration file merging via "validate-borgmatic-config --show" flag (#673). 2023-04-11 10:49:09 -07:00
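The two merging-related entries above build on borgmatic's YAML include mechanism. A minimal sketch of how they fit together, assuming the documented `<<: !include` merge syntax; the `!retain` tag shown for the selective shallow merge (#672) is my reading of that change, and all paths and values are made up:

```yaml
# /etc/borgmatic/config.yaml
# Running "validate-borgmatic-config --show" prints the post-merge result (#673).
<<: !include /etc/borgmatic/common.yaml

location:
  source_directories:
    - /home

  # Assumption: tagging a local list keeps it as-is instead of deep-merging
  # it with any matching list from the include (#672).
  repositories: !retain
    - path: /mnt/backups/local.borg
```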
Dan Helfman 31a2ac914a Add optional support for running end-to-end tests and building documentation with rootless Podman instead of Docker. 2023-04-10 14:26:54 -07:00
Dan Helfman d6ef0df50d Mention #670 being fixed in NEWS. 2023-04-09 10:01:08 -07:00
Dan Helfman cc60a71210 Clarify "log_file" NEWS (#413). 2023-04-06 14:12:12 -07:00
Dan Helfman 4cd7556a34 Add "log_file" command hook context to NEWS and docs (#413). 2023-04-06 13:58:37 -07:00
Dan Helfman b4b1fa939d
feat: add logfile name to hook context for interpolation
Merge pull request #68 from diivi/feat/add-log-filename-to-hook-context
2023-04-06 13:46:45 -07:00
Divyansh Singh 16d7131fb7 refactor tests 2023-04-07 01:00:38 +05:30
Divyansh Singh 091d60c226 refactor and improve tests 2023-04-06 12:36:10 +05:30
Divyansh Singh 0fbdf8d860 feat: add logfile name to hook context for interpolation 2023-04-06 09:31:24 +05:30
Dan Helfman 192bfe46a9 Fix error when running the "prune" action with both "archive_name_format" and "prefix" options set (#668). 2023-04-05 14:58:05 -07:00
Dan Helfman 080c3afa0d Fix documentation referring to "archive_name_format" in wrong configuration section. 2023-04-05 14:00:21 -07:00
Dan Helfman a9a65ebe54 Fix integration tests to actually assert (#666). 2023-04-04 22:11:36 -07:00
Dan Helfman 616eb6b6da Fix error with "info --match-archives" and fix "--match-archives" overriding logic (#666). 2023-04-04 21:25:10 -07:00
Dan Helfman 00d1dea94e Bump version for release. 2023-04-03 16:11:25 -07:00
Dan Helfman 127ad1dd1f
Add favicon to documentation.
Merge pull request #66 from diivi/add-favicon
2023-04-03 10:22:12 -07:00
Divyansh Singh fc58ba5763 add favicon to documentation 2023-04-03 17:36:24 +05:30
Dan Helfman 7e6bee84b0 Add "--log-file-format" flag for customizing the log message format (#658). 2023-04-02 23:06:36 -07:00
Dan Helfman 01811e03ba Tagged the auto-matching archive behavior as breaking in NEWS. 2023-04-02 14:38:35 -07:00
Dan Helfman 9712d00680 Add "match_archives" option (#588). 2023-04-01 23:57:55 -07:00
Dan Helfman 275e99d0b9 Add codespell link to documentation. 2023-04-01 14:38:52 -07:00
Dan Helfman b9328e6d42 Add spellchecking of source code to NEWS. 2023-04-01 14:09:48 -07:00
Dan Helfman 2934d0902c Code spell checking on every test run! 2023-04-01 11:03:59 -07:00
Dan Helfman 1ad43ad4b5
Fix: run typos to fix various typos in source code.
Merge pull request #65 from diivi/fix/run-typos
2023-04-01 10:44:11 -07:00
Divyansh Singh 32ab17fa46 merge 2023-04-01 22:12:41 +05:30
Divyansh Singh 6054ced931 fix: run typos 2023-04-01 22:10:32 +05:30
Dan Helfman 1412038ed3
Fix randomly failing test: test_log_outputs_kills_other_processes_when_one_errors (#635).
Merge pull request #64 from kxxt/master
2023-03-31 23:19:57 -07:00
kxxt fa8bc285c8 Fix randomly failing test. 2023-04-01 14:02:30 +08:00
Dan Helfman f256908b27 Document wording tweaks (#479). 2023-03-31 15:36:59 -07:00
Dan Helfman 3f78ac4085 Automatically use the "archive_name_format" option to filter which archives get used for borgmatic actions that operate on multiple archives (#479). 2023-03-31 15:21:08 -07:00
Dan Helfman 5f595f7ac3 Fix regression in which the "transfer" action produced a traceback (#663). 2023-03-30 23:21:20 -07:00
Dan Helfman b27e625a77 Update schema comment for check_repositories to mention labels (#635). 2023-03-28 15:44:38 -07:00
Dan Helfman fc2c181b74 Add missing Docker Compose depends. 2023-03-28 15:31:37 -07:00
Dan Helfman 010b82d6d8 Remove unnecessary cd in dev documentation. 2023-03-28 12:45:39 -07:00
Dan Helfman aaf3462d17 Fix Drone indentation. 2023-03-28 12:03:12 -07:00
Dan Helfman f709125110 Error out if run-full-tests is run outside a test container. 2023-03-28 12:02:07 -07:00
Dan Helfman 3512191f3e Add check_repositories regression fix to NEWS (#662). 2023-03-28 11:45:55 -07:00
Dan Helfman 06b5d81baa Merge branch 'master' of github.com:borgmatic-collective/borgmatic 2023-03-28 11:15:31 -07:00
Dan Helfman 9d71bf916e
fix: make check repositories work with dict and str repositories (#662).
Merge pull request #63 from diivi/fix/check-repositories-by-label
2023-03-28 11:15:01 -07:00
Dan Helfman 59fe01b56d Update script comment. 2023-03-28 11:09:25 -07:00
Divyansh Singh 08e358e27f add and update tests 2023-03-28 22:51:35 +05:30
Divyansh Singh ce22d2d302 reformat 2023-03-28 22:29:21 +05:30
Divyansh Singh 2d08a63e60 fix: make check repositories work with dict and str repositories 2023-03-28 22:14:50 +05:30
Dan Helfman d96f2239c1 Update OpenBSD borgmatic link. 2023-03-27 23:43:39 -07:00
Dan Helfman 67a349ae44 I had one job... (#461). 2023-03-27 23:28:36 -07:00
Dan Helfman dcefded0fa Document that most command-line flags are not config-file-able (#461). 2023-03-27 23:21:14 -07:00
Dan Helfman 1bcdebd1cc Fix multiple repositories example. 2023-03-27 23:16:44 -07:00
Dan Helfman 7a8e0e89dd Mention prior versions of borgmatic in repositories schema. 2023-03-27 21:54:01 -07:00
Dan Helfman 489ae080e5 Update docs with a few more "path:" repositories references (#635). 2023-03-27 21:49:31 -07:00
Dan Helfman 0e3da7be63 Fix repository schema description. 2023-03-27 16:15:24 -07:00
Dan Helfman c5ffb76dfa Bump version for release. 2023-03-27 15:56:49 -07:00
Dan Helfman 61c7b8f13c Add optional repository labels so you can select a repository via "--repository yourlabel" at the command-line (#635). 2023-03-27 15:54:55 -07:00
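A sketch of the repository labels feature from the entries above, using the sectioned configuration layout of that era; the paths, host, and label names are invented:

```yaml
location:
  repositories:
    - path: ssh://user@backupserver/./backups.borg
      label: remote
    - path: /mnt/backups/local.borg
      label: local

# A single repository can then be selected for most actions, e.g.:
#   borgmatic compact --repository remote
```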
Dan Helfman 3e8e38011b
Labels for repositories (#635).
Merge pull request #57 from diivi/feat/tag-repos
2023-03-27 15:46:22 -07:00
Dan Helfman d0d3a39833 When a database command errors, display and log the error message instead of swallowing it (#396). 2023-03-27 10:36:39 -07:00
Divyansh Singh 8bef1c698b add feature to docs 2023-03-27 22:16:39 +05:30
Dan Helfman acbbd6670a Removing debugging command output. 2023-03-26 21:26:35 -07:00
Divyansh Singh b336b9bedf add tests for repo labels 2023-03-27 00:19:23 +05:30
Divyansh Singh ec9def4e71 rename repository arg to repository_path in all borg actions 2023-03-26 23:52:25 +05:30
Divyansh Singh a136fda92d check all tests 2023-03-26 23:35:47 +05:30
Divyansh Singh b511e679ae remove optional label for repos from tests 2023-03-26 16:59:29 +05:30
Dan Helfman f56fdab7a9 Add troubleshooting documentation on PostgreSQL/MySQL authentication errors. 2023-03-25 17:08:17 -07:00
Dan Helfman 8c0eea7229 Add additional documentation link to environment variable feature. Rename constants section. 2023-03-25 08:56:25 -07:00
Dan Helfman 19e95628c3 Add documentation and NEWS for custom constants feature (#612). 2023-03-24 23:47:05 -07:00
Dan Helfman 4d01e53414
Fix: replace primitive values in config without quotes (#612).
Merge pull request #62 from diivi/fix/config-json-replacement
2023-03-24 23:45:36 -07:00
Divyansh Singh a082cb87cb fix: replace primitive values in config without quotes 2023-03-25 12:12:56 +05:30
Dan Helfman 1c51a8e229
Allow defining custom variables in config file (#612).
Merge pull request #60 from diivi/feat/constants-support
2023-03-24 22:50:57 -07:00
Dan Helfman d14a8df71a Hide obnoxious ruamel.yaml warnings during test runs. 2023-03-24 22:43:10 -07:00
Dan Helfman 739a58fe47 Rename scripts/run-full-dev-tests to scripts/run-end-to-end-dev-tests and make it run end-to-end tests only. 2023-03-24 16:24:00 -07:00
Dan Helfman af3431d6ae
fix: docs cli reference create spelling
Merge pull request #61 from diivi/docs/cli-reference
2023-03-24 16:09:50 -07:00
Dan Helfman 9851abc2e1 Add documentation on backing up a database running in a container (#649). 2023-03-24 15:18:49 -07:00
Divyansh Singh 61ce6f0473 fix: docs cli reference create spelling 2023-03-25 02:44:56 +05:30
Divyansh Singh 78e8bb6c8c reformat 2023-03-25 02:08:52 +05:30
Divyansh Singh af95134cd2 add test for complex constant 2023-03-25 02:03:36 +05:30
Divyansh Singh d6dfb8753a reformat 2023-03-25 01:50:47 +05:30
Divyansh Singh 1bc003560a Merge branch 'master' of https://github.com/diivi/borgmatic into feat/tag-repos 2023-03-25 01:39:26 +05:30
Divyansh Singh aeaf69f49e pass all tests 2023-03-25 01:34:03 +05:30
Divyansh Singh e83ad9e1e4 use repository["path"] instead of repository 2023-03-25 01:04:57 +05:30
Dan Helfman f42890430c Add code style plugins to enforce use of Python f-strings and prevent single-letter variables. 2023-03-23 23:11:14 -07:00
Divyansh Singh 6f300b0079 feat: constants support 2023-03-24 02:39:37 +05:30
Dan Helfman 9bec029b4f
Fix: remove extra links from docs css.
Merge pull request #59 from diivi/fix/remove-extra-links-from-css
2023-03-23 12:57:55 -07:00
Divyansh Singh 08afad5d81 end with newline 2023-03-24 01:25:15 +05:30
Divyansh Singh a01dc62468 fix: remove extra links from docs css 2023-03-24 01:23:40 +05:30
Dan Helfman 8b61225b13
Copy to clipboard support in documentation.
Merge pull request #58 from diivi/docs/copy-to-clipboard-support
2023-03-23 12:39:41 -07:00
Divyansh Singh 66d2f49f18 docs: copy to clipboard support 2023-03-23 14:45:23 +05:30
Dan Helfman 0a72c67c6c Add missing source directory error fix to NEWS (#655). 2023-03-22 13:02:22 -07:00
Dan Helfman ab64b7ef67
Fix error when a source directory doesn't exist and databases are configured (#655).
Merge pull request #56 from diivi/fix/no-error-on-database-backup-without-source-dirs
2023-03-22 12:59:01 -07:00
Divyansh Singh 1e3a3bf1e7 review 2023-03-23 01:18:06 +05:30
Divyansh Singh 7a2f287918 reformat base 2023-03-23 01:08:30 +05:30
Divyansh Singh 8a63c49498 feat: tag repos 2023-03-23 01:01:26 +05:30
Divyansh Singh 3b5ede8044 remove extra parameter from function call 2023-03-22 23:11:44 +05:30
Divyansh Singh bd235f0426 use exit_code_indicates_error and modify it to accept a command 2023-03-22 16:23:53 +05:30
Divyansh Singh 09183464cd fix: no error on database backups without source dirs 2023-03-22 09:41:39 +05:30
Dan Helfman ca6fd6b061 Add confusing error message fix to NEWS (#623). 2023-03-21 14:25:20 -07:00
Dan Helfman dd9a64f4b6
Fix confusing message when an error occurs running actions for a configuration file (#623).
Merge pull request #55 from diivi/fix/rephrase-error-message
2023-03-21 14:23:09 -07:00
Divyansh Singh 23e7f27ee4 fix: rephrase error when running from config
to avoid confusion, as the user might think the problem is with their config file
2023-03-22 02:22:43 +05:30
Dan Helfman f9ef52f9a5 Remove unused module and outdated test expectations (#576). 2023-03-21 10:29:17 -07:00
Dan Helfman 3f17c355ca Add "file://" paths to NEWS (#576). 2023-03-21 10:24:51 -07:00
Dan Helfman c83fae5e5b
Support file:// paths for repositories (#576).
Merge pull request #54 from diivi/feat/file-urls-support
2023-03-21 10:22:39 -07:00
Divyansh Singh 39ad8f64c4 add tests and remove magic number 2023-03-21 17:06:03 +05:30
Divyansh Singh e86d223bbf Merge branch 'master' of https://github.com/diivi/borgmatic into feat/file-urls-support 2023-03-21 16:55:05 +05:30
Divyansh Singh 86587ab2dc send repo directly to extract and export_tar 2023-03-20 21:51:45 +05:30
Divyansh Singh 58c95d8015 feat: file:// URLs support 2023-03-20 02:43:23 +05:30
Dan Helfman 6351747da5 Add NixOS package link to installation docs. 2023-03-19 09:02:47 -07:00
Dan Helfman 55c153409e Add "source_directories_must_exist" option to NEWS (#501). 2023-03-18 14:07:38 -07:00
Dan Helfman b115fb2fbe Merge branch 'master' of github.com:borgmatic-collective/borgmatic 2023-03-18 14:01:52 -07:00
Dan Helfman 31d04d9ee3
Optionally error if a source directory does not exist.
feat: add optional check for existence of source directories
2023-03-18 13:59:20 -07:00
Divyansh Singh f803836416 reformat 2023-03-18 17:27:33 +05:30
Divyansh Singh 997f60b3e6 add tests 2023-03-18 17:24:21 +05:30
Dan Helfman c84b26499b Add "borg_files_cache_ttl" option to NEWS. 2023-03-17 19:29:10 -07:00
Dan Helfman 214ae81cbb Add option to set borg_files_cache_ttl in config (#618).
Reviewed-on: borgmatic-collective/borgmatic#654
2023-03-18 02:24:41 +00:00
Divyansh Singh d17b2c74db feat: add optional check for existence of source directories 2023-03-18 04:35:55 +05:30
Soumik Dutta fb9677230b add test to ensure integers are converted to strings
before being set as environment variable values

Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-03-18 02:57:56 +05:30
Soumik Dutta 0db137efdf add option to set borg_files_cache_ttl in config
Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-03-18 01:48:24 +05:30
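A guess at how the new option is used, assuming it belongs in the storage section and maps onto Borg's BORG_FILES_CACHE_TTL environment variable; the value is arbitrary:

```yaml
storage:
  # Assumption: passed through to Borg as BORG_FILES_CACHE_TTL.
  borg_files_cache_ttl: 20
```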
Dan Helfman e6605c868d Clarify check frequency default behavior (#653). 2023-03-17 10:09:36 -07:00
Dan Helfman bdfe4b61eb Bump version for release. 2023-03-16 13:42:15 -07:00
Dan Helfman ca4461820d Add support for Python 3.11. 2023-03-16 13:29:37 -07:00
Dan Helfman 7605838bfe Add "--repository" flag to all actions where it makes sense (#564). 2023-03-16 13:27:08 -07:00
Dan Helfman 7a784b8eba Add "--repository" flag to common actions (where it makes sense) (#652).
Reviewed-on: borgmatic-collective/borgmatic#652
2023-03-16 20:21:40 +00:00
Nain 3e22414613 Update tests
Make them more explicit. Also formatting.
2023-03-16 14:01:29 -04:00
Nain 5f87ea3ec5 Add "--repository" flag to the "create" action 2023-03-16 13:15:49 -04:00
Nain a8aeace5b5 Add "--repository" flag to the "compact" action 2023-03-16 11:13:45 -04:00
Nain 480addd7ce Add "--repository" flag to the "check" action 2023-03-16 10:41:13 -04:00
Nain ce0ce4cd1c Merge mostly repetitive tests 2023-03-16 08:23:21 -04:00
Nain 7de9260b0d Remove test now that --repository isn't expected to error
As discussed #652#issuecomment-5579
2023-03-15 14:59:12 -04:00
Nain cdbe6cdf3a Add "--repository" flag to the "prune" action
part of ticket #564
2023-03-15 14:43:17 -04:00
Dan Helfman 95dcc20d5f Better indicate position of additional docs on page (#651).
Reviewed-on: borgmatic-collective/borgmatic#651
2023-03-15 18:13:27 +00:00
Dan Helfman 49e0494924 Fix --editable (mode) option given --user as arg (#648).
Reviewed-on: borgmatic-collective/borgmatic#650
2023-03-15 18:06:46 +00:00
Nain 5fad2bd408 Better indicate position of additional docs on page
On wide screens, the position of the documentation (how-to and reference guide)
is at same level as #it's-your-data.-keep-it-that-way.

So the jump due to the anchor link makes it seem like we're taken to the top, aka the
main content. Indicate that the links are to the left so the reader doesn't recurse.
2023-03-15 07:54:49 -04:00
Nain c6829782a3 Fix --editable (mode) option given --user as arg
The --user option should come before or after `--editable .`, not in between.
Before seems better.
2023-03-15 06:50:47 -04:00
Dan Helfman 8cec7c74d8 Add "--strip-components all" on the "extract" action to remove leading path components (#647). 2023-03-09 10:09:16 -08:00
Dan Helfman d3086788eb Document how to list database dumps in an archive. 2023-03-08 16:09:41 -08:00
Dan Helfman 8d860ea02c
Enhanced docs with info on fetching mysql database size
Merge pull request #46 from Jelle-SamsonIT/patch-3
2023-03-08 15:52:28 -08:00
Dan Helfman b343363bb8 Change the default action order to: "create", "prune", "compact", "check" (#304). 2023-03-08 14:05:06 -08:00
Dan Helfman 9db31bd1e9 Run any command-line actions in the order specified instead of using a fixed ordering (#304). 2023-03-08 13:19:41 -08:00
Dan Helfman d88bcc8be9 Add Healthchecks "log" state feature to NEWS. 2023-03-07 15:45:23 -08:00
Dan Helfman 332f7c4bb6 Add support for healthchecks "log" feature (#628).
Reviewed-on: borgmatic-collective/borgmatic#645
2023-03-07 22:21:30 +00:00
Dan Helfman 5d19d86e4a Add flake8-quotes to complain about incorrect quoting so I don't have to! 2023-03-07 14:08:35 -08:00
Soumik Dutta 044ae7869a fix tests
Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-03-08 03:30:12 +05:30
Dan Helfman 62ae82f2c0 Mention searching for files in the extract a backup guide. 2023-03-06 22:59:34 -08:00
Dan Helfman 66194b7304 Update dates in documentation examples. 2023-03-06 22:41:43 -08:00
Soumik Dutta 98e429594e added tests to make sure unsupported log states are detected
Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-03-06 20:31:00 +05:30
Soumik Dutta 4fcfddbe08 return early if unsupported state is passed
Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-03-06 19:58:57 +05:30
Soumik Dutta f442aeae9c fix logs_monitor_start_error()
Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-03-06 05:21:56 +05:30
Soumik Dutta e211863cba update test_borgmatic.py
Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-03-06 05:12:24 +05:30
Soumik Dutta 45256ae33f add test for healthchecks
Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-03-06 03:38:08 +05:30
Soumik Dutta 1573d68fe2 update schema.yaml description
also add monitor.State.LOG to cronitor.

Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-03-05 21:57:13 +05:30
Soumik Dutta 69f6695253 Add support for healthchecks "log" feature #628
Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-03-05 19:27:32 +05:30
Dan Helfman a7c055264d
Fix incorrect documentation TOC background by removing extra dark mode styles.
Merge pull request #52 from diivi/fix/remove-special-dark-mode-attributes
2023-03-04 16:18:04 -08:00
Divyansh Singh db18364a73 fix: remove extra dark mode styles 2023-03-05 03:16:46 +05:30
Dan Helfman 22498ebd4c In the documentation, mention what version of borgmatic introduced SQLite support. 2023-03-04 10:50:28 -08:00
Dan Helfman e1f02d9fa5 Add SQLite feature to NEWS and also integrations. 2023-03-04 09:59:16 -08:00
Dan Helfman 9ec220c600
Add SQLite database dump/restore hook (#295).
feat: add dump-restore support for sqlite databases
2023-03-04 09:47:21 -08:00
Divyansh Singh cf0275a3ed remove test path 2023-03-04 23:00:57 +05:30
Divyansh Singh c71eb60cd2 mock os.remove instead of actually removing a file 2023-03-04 13:08:30 +05:30
Divyansh Singh 675e54ba9f use os.remove and improve tests 2023-03-04 12:43:07 +05:30
Divyansh Singh 1793ad74bd add sqlite for e2e tests 2023-03-04 02:41:14 +05:30
Divyansh Singh 767a7d900b e2e tests schema update 2023-03-04 01:29:01 +05:30
Divyansh Singh 903507bd03 code review 2023-03-04 01:27:07 +05:30
Dan Helfman b6cf7d2adc Bump version for release. 2023-03-02 15:34:22 -08:00
Dan Helfman a071e02d20 With the "create" action and the "--list" ("--files") flag, only show excluded files at verbosity 2 (#620). 2023-03-02 15:33:42 -08:00
Divyansh Singh 3aa88085ed formatting fix 2023-03-03 00:01:52 +05:30
Divyansh Singh af1cc27988 feat: add dump-restore support for sqlite databases 2023-03-02 23:55:16 +05:30
Dan Helfman dbf8301c19 Add "checkpoint_volume" configuration option to create checkpoints every specified number of bytes. 2023-02-27 10:47:17 -08:00
Dan Helfman 2a306bef12 Fix tests. 2023-02-26 23:34:17 -08:00
Dan Helfman 2a36a2a312 Add "--repository" flag to the "rcreate" action. Add "--progress" flag to the "transfer" action. 2023-02-26 23:22:23 -08:00
Dan Helfman d7a07f0428 Support status character changes in Borg 2.0.0b5 when filtering out special files that cause Borg to hang. 2023-02-26 22:36:13 -08:00
Dan Helfman da321e180d Fix the "create" action with the "--dry-run" flag querying for databases when a PostgreSQL/MySQL "all" database is configured. 2023-02-26 22:15:12 -08:00
Dan Helfman c6582e1171 Internally support new Borg 2.0.0b5 "--filter" status characters / item flags for the "create" action. 2023-02-26 17:17:25 -08:00
Dan Helfman 9b83afe491 With the "create" action, only one of "--list" ("--files") and "--progress" flags can be used. 2023-02-26 17:05:56 -08:00
Dan Helfman 2814ac3642 Update Borg 2.0 documentation links. 2023-02-26 16:44:43 -08:00
Dan Helfman 8a9d5d93f5 Add ntfy authentication to NEWS. 2023-02-25 14:23:42 -08:00
Dan Helfman 783a6d3b45 Add authentication to the ntfy hook (#621).
Reviewed-on: borgmatic-collective/borgmatic#644
2023-02-25 22:04:37 +00:00
Tom Hubrecht 95575c3450 Add auth test for the ntfy hook 2023-02-25 20:04:39 +01:00
Tom Hubrecht 9b071ff92f Make the auth logic more explicit and warn if necessary 2023-02-25 20:04:39 +01:00
Tom Hubrecht d80e716822 Add authentication to the ntfy hook 2023-02-24 17:35:53 +01:00
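The ntfy option names below are an assumption pieced together from the commit titles (a push-notification hook plus basic authentication); a sketch:

```yaml
hooks:
  ntfy:
    topic: my-borgmatic-alerts    # hypothetical topic name
    server: https://ntfy.sh
    # Assumed names for the authentication options added in #644:
    username: ntfy-user
    password: ntfy-password
```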
Dan Helfman 418ebc8843 Add MySQL database hook "add_drop_database" configuration option to control whether dumped MySQL databases get dropped right before restore (#642). 2023-02-20 15:32:47 -08:00
Dan Helfman f5a448c7c2 Fix for potential data loss (data not getting backed up) when dumping large "directory" format PostgreSQL/MongoDB databases (#643). 2023-02-20 15:18:51 -08:00
Dan Helfman 37ac542b31 Merge pull request 'setup: Add link to MacPorts package' (#641) from neverpanic/borgmatic:cal-docs-macports-port into master
Reviewed-on: borgmatic-collective/borgmatic#641
2023-02-15 17:31:03 +00:00
Clemens Lang 8c7d7e3e41 setup: Add link to MacPorts package 2023-02-15 10:47:59 +01:00
Dan Helfman b811f125b2 Clarify "checks" configuration documentation for older versions of borgmatic (#639). 2023-02-12 21:42:43 -08:00
Dan Helfman 061f3e7917 Remove related documentation links. 2023-01-26 16:12:01 -08:00
Dan Helfman 6055918907 Upgrade documentation image dependencies. 2023-01-26 16:11:41 -08:00
Dan Helfman 4a90e090ad Clarify NEWS on database "all" dump feature applying to MySQL as well. 2023-01-26 15:28:17 -08:00
Dan Helfman 301b29ee11 Bump version for release. 2023-01-26 15:17:19 -08:00
Dan Helfman c1eb210253 Fix code style flake issue. 2023-01-26 15:09:35 -08:00
Dan Helfman 30cca62d09 Add configuration options for database command customization (#630). 2023-01-26 14:59:17 -08:00
Dan Helfman 113c0e7616 Update documentation about changes to "all" database restores (#438, #560). 2023-01-26 10:53:58 -08:00
Dan Helfman 0e6b2c6773 Optionally dump "all" PostgreSQL databases to separate files instead of one combined dump file (#438, #560). 2023-01-25 23:31:07 -08:00
Dan Helfman 22c750b949 Mention "before_actions" command hook in soft failure documentation (#631). 2023-01-25 13:01:52 -08:00
Dan Helfman 504cce39a1 Add NEWS entry for #629. 2023-01-14 09:17:27 -08:00
Dan Helfman 6c4abb6803 Merge pull request 'Log warning for excluding special files only if list is not empty' (#629) from palto42/borgmatic:special_files_warn into master
Reviewed-on: borgmatic-collective/borgmatic#629
2023-01-14 17:15:01 +00:00
palto42 fd7ad86daa
conditional warning for excluding special files 2023-01-03 21:53:51 +01:00
Dan Helfman 6f3b23c79d Lowercase borgmatic in documentation. 2022-12-23 14:12:48 -08:00
Dan Helfman 4838f5e810 Add borgmatic minimum version to compact docs (#624).
Reviewed-on: borgmatic-collective/borgmatic#625
2022-12-23 22:11:45 +00:00
Macguire Rintoul 116f1ab989 add borgmatic minimum version to compact docs 2022-12-23 13:32:01 -08:00
Dan Helfman 5e15c9f2bc Fix traceback when include merging on ARM64 (#622). 2022-12-23 10:07:53 -08:00
Dan Helfman 442641f9f6 Update borgmatic social links. 2022-12-16 11:39:05 -08:00
Dan Helfman f67c544be6 Optionally dump "all" PostgreSQL databases to separate files instead of one combined dump file (#438, #560). 2022-12-15 22:59:42 -08:00
Dan Helfman 437fd4dbae Update developer contributing instructions as well. 2022-12-13 23:56:32 -08:00
Dan Helfman 36873252d6 Update developer instructions. 2022-12-13 23:44:27 -08:00
Dan Helfman 1ef82a27fa Clarify data/archives check implicit enabling. 2022-12-12 16:03:05 -08:00
Dan Helfman 6837dcbf42 Clarify documentation about transferring archives between related repositories. 2022-12-10 12:59:44 -08:00
Dan Helfman c657764367 Fix logs that interfere with JSON output by making warnings go to stderr instead of stdout (#602). 2022-12-02 12:12:10 -08:00
Dan Helfman f79286fc91 Bump version for release. 2022-11-27 09:00:40 -08:00
Dan Helfman 694d376d15 Clarify documentation about multiple repositories and separate configuration files (#613). 2022-11-21 13:33:01 -08:00
Dan Helfman ab4c08019c Upgrade pytest test dependency (security). 2022-11-18 11:13:51 -08:00
Dan Helfman fd39f54df7 Code formatting. 2022-11-18 08:35:01 -08:00
Dan Helfman ca7e18bb29
Override PostgreSQL dump/restore commands via configuration options (#311).
Merge pull request #49 from jpaniagualaconich/specify-pg-dump-restore-commands
2022-11-18 08:33:14 -08:00
Dan Helfman 6975a5b155 Fix "data" consistency check to support "check_last" and consistency "prefix" options (#611). 2022-11-17 10:19:48 -08:00
Dan Helfman b627d00595 More consistency checks documentation edits. 2022-11-14 15:13:47 -08:00
Dan Helfman 9bd8f1a6df Clarify consistency check configuration. 2022-11-14 14:58:42 -08:00
Javier Paniagua faf682ca35 specify pg dump/restore commands (#311) 2022-11-06 11:12:53 +01:00
Dan Helfman 6aeb74550d Clarify examples in include merging and deep merging documentation (#607). 2022-10-28 19:33:19 -07:00
Dan Helfman 89500df429 Fix traceback when a configuration section is present but lacking any options (#604). 2022-10-23 13:56:03 -07:00
Dan Helfman 82b072d0b7 Update documentation to mention using blake2 with "transfer" action. 2022-10-17 15:04:30 -07:00
Dan Helfman 018c0296fd Document that special file exclusion also excludes symlinks to special files (#596). 2022-10-15 10:14:46 -07:00
Dan Helfman 9c42e7e817 Fix regression in which "check" action errored on certain systems (#597, #598). 2022-10-14 16:19:26 -07:00
Dan Helfman 953277a066 Fix special file detection when broken symlinks are encountered (#596). 2022-10-14 09:41:08 -07:00
Dan Helfman e2002b5488 Bump version for release. 2022-10-12 10:59:54 -07:00
Dan Helfman c9742e1d04 Code formatting. 2022-10-12 10:52:32 -07:00
Dan Helfman 906da838ef Add missing break-lock action command-line help (#357). 2022-10-12 10:48:10 -07:00
Dan Helfman d7f1c10c8c To prevent Borg hangs, unconditionally delete stale named pipes before dumping databases (#360). 2022-10-12 10:26:09 -07:00
Dan Helfman e8e4d17168 Clean up changelog for the current dev release. 2022-10-06 22:06:03 -07:00
Dan Helfman a31ce337e9 Skip auto-exclusion of special files when user explicitly sets read_special to true (#587). 2022-10-06 11:07:43 -07:00
Dan Helfman 902730df46 Update sample systemd file to allow system idle (#589). 2022-10-05 10:20:25 -07:00
Dan Helfman c969c822ee Do not inhibit idle in borgmatic.service (#589).
Reviewed-on: borgmatic-collective/borgmatic#589
2022-10-05 17:14:19 +00:00
Dan Helfman c31702d092 Fix for potential data loss with "patterns_from". Also, display excluded files (#590). 2022-10-04 22:57:18 -07:00
Dan Helfman ba8fbe7a44 Add "break-lock" action for removing any repository and cache locks leftover from Borg aborting (#357). 2022-10-04 13:42:18 -07:00
Dan Helfman 2774c2e4c0 Add support for Borg 2's "--match-archives" flag (replaces "--glob-archives") (#591). 2022-10-03 22:50:37 -07:00
Dan Helfman ae036aebd7 When the "read_special" option is true or database hooks are enabled, auto-exclude special files for a "create" action to prevent Borg from hanging (#587). 2022-10-03 12:58:13 -07:00
LaserEyess 2e9f70d496 Do not inhibit idle in borgmatic.service
When backing up a machine with a monitor using logind to control idle
timeout and things like DPMS, borgmatic can block the screen from
turning on/off with systemd-inhibit. This is because by default
systemd-inhibit will block "idle:sleep:shutdown". Borgmatic does not
need to care about idle, only about suspend and shutdown. So, add an
explicit `--what` flag for what borgmatic should inhibit.

For more information see systemd-inhibit(1).
2022-10-01 09:33:38 -04:00
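In unit-file terms, the change described above narrows the inhibitor lock. A sketch of the relevant fragment, with the `--what` values per systemd-inhibit(1); the borgmatic path and verbosity flags are placeholders rather than the exact sample service contents:

```ini
# borgmatic.service (fragment)
[Service]
# Block only suspend and shutdown so logind can still manage idle/DPMS.
ExecStart=systemd-inhibit --who="borgmatic" \
    --why="Prevent interrupting scheduled backup" \
    --what="sleep:shutdown" \
    /root/.local/bin/borgmatic --verbosity -1 --syslog-verbosity 1
```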
Dan Helfman 90be5b84b1 Fix changelog development version. 2022-09-20 14:02:48 -07:00
Dan Helfman 80e95f20a3 Add "borgmatic borg" documentation note about interactive prompts. 2022-09-20 14:01:47 -07:00
Dan Helfman ac7c7d4036 Warn when ignoring a configured "read_special" value of false, as true is needed when database hooks are enabled (#587). 2022-09-20 13:52:13 -07:00
Dan Helfman 858b0b9fbe Note version of borgmatic needed for "borgmatic borg" action (#586). 2022-09-13 09:05:18 -07:00
Dan Helfman 9cc043f60e Update "find" command in documentation to work on BSDs and not just Linux (#583). 2022-09-11 20:02:30 -07:00
Dan Helfman 276a27d485 Bump version for release. 2022-09-08 10:29:44 -07:00
Dan Helfman 679bb839d7 Fix hang when database hooks are enabled and "patterns" contains a parent directory of "~/.borgmatic" (#582). 2022-09-08 10:16:42 -07:00
Dan Helfman 9e64d847ef Fix regression in which "borgmatic info --archive ..." showed repository info instead of archive info with Borg 1 (#577). 2022-08-30 20:42:42 -07:00
Dan Helfman 61fb275896 Fix duplicate-appearing log entries for "list" action. 2022-08-30 20:29:26 -07:00
Dan Helfman ca0c79c93c Fix duplicate bind path in sample systemd service. 2022-08-28 14:49:23 -07:00
Dan Helfman 87c97b7568 Fixed spurious, intermittent test failures related to command execution and logging. 2022-08-28 09:06:06 -07:00
Dan Helfman 80b8c25bba Update docs about "source_directories" being optional. 2022-08-25 13:24:26 -07:00
Dan Helfman d1837cd1d3 Bump version for release. 2022-08-25 11:58:06 -07:00
Dan Helfman c46f2b8508 Fix conflict between "patterns" and "source_directories" (#574), make "source_directories" optional (#542). 2022-08-25 11:55:34 -07:00
Dan Helfman a274c0dbf7 In generate-borgmatic-config, indicate that the example options are exhaustive. 2022-08-24 09:53:54 -07:00
Dan Helfman ef7e95e22a Fix end-to-end tests. 2022-08-21 23:29:13 -07:00
Dan Helfman 3be99de5b1 Update "repositories" examples in configuration to use ssh:// style syntax. 2022-08-21 22:40:31 -07:00
Dan Helfman e7b7560477 Bump version for release. 2022-08-21 21:54:13 -07:00
Dan Helfman 317dc7fbce Add "before_actions" and "after_actions" command hooks that run before/after all the actions for each repository, update docs to cover per-repository configurations (#463). 2022-08-21 21:48:37 -07:00
Dan Helfman 97fad15009 Switch to more accessible header permalink anchors in documentation. 2022-08-21 21:48:07 -07:00
Dan Helfman 462326406e Drop only-style actions like "--create", rename "prune --files" to "prune --list", and add "--list" alias to "create" and "export-tar" (#571). 2022-08-21 14:25:16 -07:00
Dan Helfman bbdf4893d1 Clarify --format flag in documentation. 2022-08-19 15:27:03 -07:00
Dan Helfman ef6617cfe6
Add link to Borg list --format documentation. 2022-08-19 15:16:56 -07:00
Dan Helfman dbef0a440f
Merge branch 'master' into patch-2 2022-08-19 15:16:17 -07:00
Dan Helfman 22628ba5d4 Update ssh:// examples in documentation to use relative paths on the remote machine (#557). 2022-08-19 12:00:40 -07:00
Dan Helfman 8576ac86b9 Fix incorrect version in documentation (#557). 2022-08-19 09:44:31 -07:00
Dan Helfman 540f9f6b72 Add missing test for "transfer" action (#557). 2022-08-19 09:40:29 -07:00
Dan Helfman f9d7faf884 Fix mount action to work without archive again (#557). 2022-08-18 23:33:05 -07:00
Dan Helfman 7dee6194a2 Add new "transfer" action for Borg 2 (#557). 2022-08-18 23:06:51 -07:00
Dan Helfman 68f9c1b950 Add generate-borgmatic-config end-to-end test. 2022-08-18 14:28:46 -07:00
Dan Helfman 43d711463c Add additional command-line flags to rcreate action (#557). 2022-08-18 14:28:12 -07:00
Dan Helfman 00255a2437 Various documentation edits for Borg 2 (#557). 2022-08-18 10:19:11 -07:00
Dan Helfman b40e9b7da2 Ignore archive filter parameters passed to list action when --archive is given (#557). 2022-08-18 09:59:48 -07:00
Dan Helfman 89d201c8ff Fleshing out NEWS for the Borg 2 changes. 2022-08-17 21:54:00 -07:00
Dan Helfman f47c98c4a5 Rename several configuration options to match Borg 2 (#557). 2022-08-17 21:14:58 -07:00
Dan Helfman 3b6ed06686 Add --other-repo flag to rcreate action (#557). 2022-08-17 17:33:09 -07:00
Dan Helfman 57009e22b5 Use flag-related utility functions in info action (#557). 2022-08-17 17:11:02 -07:00
Dan Helfman 3ab7a3b64a Replace use of --prefix with --glob-archives in info action (#557). 2022-08-17 15:36:19 -07:00
Dan Helfman 596dd49cf5 Use --glob-archives instead of --prefix on rlist command (#557). 2022-08-17 14:26:35 -07:00
Dan Helfman 28d847b8b1 Warn and transform on non-ssh://-style repositories (#557). 2022-08-17 10:13:11 -07:00
Dan Helfman 2a1c6b1477 Update documentation with newly required ssh:// repository syntax for Borg 2 (#557). 2022-08-16 11:41:35 -07:00
Dan Helfman 30abd0e3de Update borg action for Borg 2 support (#557). 2022-08-16 09:30:00 -07:00
Dan Helfman f36e38ec20 Update mount action for Borg 2 support (#557). 2022-08-15 19:32:37 -07:00
Dan Helfman d807ce095e Update export-tar action for Borg 2 support (#557). 2022-08-15 17:34:12 -07:00
Dan Helfman 7626fe1189 Disallow borg list --json with --archive or --find (#557). 2022-08-15 15:40:28 -07:00
Dan Helfman cc04bf57df Update list action for Borg 2 support, add rinfo action, and update extract consistency check for Borg 2. 2022-08-15 15:04:40 -07:00
Dan Helfman cce6d56661 Update extract action for Borg 2 support (#557). 2022-08-13 23:07:29 -07:00
Dan Helfman a05d0f378e Factor out repository/archive flags formatting code from create action (#557). 2022-08-13 22:50:14 -07:00
Dan Helfman 94321aec7a Update compact action for Borg 2 support (#557). 2022-08-13 22:07:15 -07:00
Dan Helfman 4a55749bd2 Update prune action for Borg 2 support (#557). 2022-08-13 17:26:51 -07:00
Dan Helfman 2898e63166 Update create action for Borg 2 support (#557). 2022-08-12 23:54:13 -07:00
Dan Helfman c7176bd00a Add rinfo action for Borg 2 support (#557). 2022-08-12 23:06:56 -07:00
Dan Helfman 647ecdac29 Borg 2 support in borgmatic check action (#557). 2022-08-12 15:46:33 -07:00
Dan Helfman e7a8acfb96 Add missing rinfo action source files (#557). 2022-08-12 14:59:03 -07:00
Dan Helfman 622caa0c21 Support for Borg 2's rcreate and rinfo sub-commands (#557). 2022-08-12 14:53:20 -07:00
Dan Helfman 22149c6401 Switch to self-hosted container registry for borgmatic documentation image. 2022-08-01 21:17:59 -07:00
Dan Helfman 9aece3936a Modify "mount" and "extract" actions to require the "--repository" flag when multiple repositories are configured (#566). 2022-07-25 11:30:02 -07:00
Dan Helfman c7e4e6f6c9 Add Healthchecks "verify_tls" option to NEWS. 2022-07-23 23:16:06 -07:00
Dan Helfman bcad0de1a4
Add verify_tls option for Healthchecks to optionally disable TLS verification. 2022-07-23 23:11:41 -07:00
Uli 5c6407047f feat: add verify_tls flag for Healthchecks 2022-07-24 07:37:00 +02:00
Dan Helfman 6ddae20fa1 Fix handling of "repository" and "data" consistency checks to prevent invalid Borg flags (#565). 2022-07-23 21:02:21 -07:00
Dan Helfman 23feac2f4c Bump version for release. 2022-07-19 20:32:41 -07:00
Dan Helfman 16066942e3 Fix traceback with "create" action and "--json" flag when a database hook is configured (#563). 2022-07-19 10:25:10 -07:00
Jelle @ Samson-IT 3720f22234
reworded and added 'all' caveat 2022-07-13 22:03:51 +02:00
Jelle @ Samson-IT f7c8e89a9f
update format specifier syntax link to use anchor 2022-07-13 21:52:21 +02:00
Jelle @ Samson-IT ba377952fd
Added link to borgbackup list --format docs
I kept searching for this link, so it's time to add it to official docs.
2022-07-13 13:52:48 +02:00
Jelle @ Samson-IT 1fdec480d6
Added some info about fetching mysql database size 2022-07-13 13:29:45 +02:00
Dan Helfman e85d551eac Fix all database hooks to error when the requested database to restore isn't present in the Borg archive (#560). 2022-07-06 23:21:24 -07:00
Dan Helfman 2b23a63a08 Add end-to-end test for overrides. 2022-07-06 18:20:51 -07:00
Dan Helfman c0f48e1071 Fix command-line "--override" flag to continue supporting old configuration file formats (#561). 2022-07-06 18:14:44 -07:00
Dan Helfman 6005426684 Update documentation about configuring multiple consistency checks or multiple databases (#559). 2022-07-03 22:24:25 -07:00
Dan Helfman 673ed1a2d3 Clarify check frequency documentation in regards to multiple configuration files. 2022-07-02 09:40:49 -07:00
Dan Helfman 992f62edd2 Bump version for release. 2022-06-30 22:14:41 -07:00
Dan Helfman f1ffa1da1d Add another recommended flag to the backup documentation (#554). 2022-06-30 16:54:22 -07:00
Dan Helfman 457ed80744 Fix environment variable plumbing so options in one configuration file aren't used for others (#555). 2022-06-30 13:42:17 -07:00
Dan Helfman 1fc028ffae In documentation, be more explicit about default actions (#554). 2022-06-29 21:32:00 -07:00
Dan Helfman 10723efc68 Fix all monitoring hooks to warn if the server returns an HTTP 4xx error (#554). 2022-06-29 21:19:40 -07:00
Dan Helfman 2e0b2a308f Clarify --files flag action in documentation (#554). 2022-06-29 09:20:13 -07:00
Dan Helfman bd4d109009 Fix logging to include the full traceback when Borg experiences an internal error (#553). 2022-06-28 13:38:24 -07:00
Dan Helfman ae25386336 Update release script to abort if there are local changes. Prevents accidentally tagging a .dev0 changeset for release. 2022-06-25 09:42:05 -07:00
Dan Helfman d929313d45 Bump version. 2022-06-24 10:18:01 -07:00
Dan Helfman d372a86fe6 Code formatting. 2022-06-23 10:41:04 -07:00
Dan Helfman e306f03e1d Merge branch 'master' of ssh://projects.torsion.org:3022/borgmatic-collective/borgmatic 2022-06-23 10:28:09 -07:00
Dan Helfman 8336165f23 Update documentation with environment variable escaping (#546). 2022-06-23 10:25:46 -07:00
Dan Helfman c664c6b17b Fix escaped environment variable in configuration (#546).
Reviewed-on: borgmatic-collective/borgmatic#549
2022-06-23 17:16:09 +00:00
Sébastien MB b63c854509 Fix escaped environment variable in configuration
- when an env variable is escaped in the configuration file, we expect
  it not to be resolved, and the escape char `\` is removed
2022-06-17 09:50:56 +02:00
Dan Helfman aa013af25e Remove some whitespace around "New in version ..." documentation labels. 2022-06-16 20:49:15 -07:00
Dan Helfman cc32f0018b Start formalizing how new features are flagged by version in documentation. 2022-06-16 20:23:16 -07:00
Dan Helfman dfc4db1860 Document environment variable interpolation (#546). 2022-06-16 15:30:53 -07:00
Dan Helfman 35706604ea Upgrade documentation base images. 2022-06-16 15:22:59 -07:00
Dan Helfman 6d76e8e5cb Code formatting. 2022-06-16 14:21:18 -07:00
Dan Helfman aecb6fcd74 Code style, rename command-line flag, and move new code into its own file (#546) 2022-06-16 11:35:24 -07:00
Dan Helfman ea45f6c4c8 Environment variable resolution in configuration file (#546).
Reviewed-on: borgmatic-collective/borgmatic#548
2022-06-16 18:18:12 +00:00
Sébastien MB 97b5cd089d Allow environment variable resolution in configuration file
- all string fields containing an environment variable like ${FOO} will
  be resolved
- supported format ${FOO}, ${FOO:-bar} and ${FOO-bar} to allow default
  values if variable is not present in environment
- add --no-env argument for CLI to disable the feature which is enabled
  by default

Resolves: #546
2022-06-16 18:52:54 +02:00
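Putting the commit body above into a concrete (but hypothetical) configuration, with the escape behavior from the later fix (#549) included; option names follow the sectioned schema of that era:

```yaml
storage:
  # Resolved from the environment when the configuration is loaded:
  encryption_passphrase: ${BORG_PASSPHRASE}

hooks:
  postgresql_databases:
    # ${FOO:-bar} and ${FOO-bar} fall back to "bar" when FOO is unset:
    - name: users
      password: ${DB_PASSWORD:-insecure-default}

  # An escaped reference stays literal; the backslash itself is removed:
  after_backup:
    - echo "\${NOT_INTERPOLATED}"
```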
Dan Helfman f2c2f3139e Add periods to ntfy config descriptions. 2022-06-10 09:42:41 -07:00
Dan Helfman dc4e7093e5 Remove link to related software that hasn't seen updates in the past couple years. 2022-06-09 19:31:50 -07:00
Dan Helfman b6f1025ecb Bump version for release. 2022-06-09 16:38:34 -07:00
Dan Helfman 65b2fe86c6 Fix Bash completion script to no longer alter your shell's settings. 2022-06-09 16:29:54 -07:00
Dan Helfman 0e90a80680 Add links in documentation for ntfy monitoring hook (#543). 2022-06-09 13:41:22 -07:00
Dan Helfman 7648bcff39 Add a hook for sending push notifications via ntfy.sh.
Reviewed-on: borgmatic-collective/borgmatic#543
2022-06-09 20:26:06 +00:00
Gavin Chappell a8b8d507b6
add a hook for sending push notifications via ntfy.sh 2022-06-09 21:10:38 +01:00
Dan Helfman 3561c93d74 Fix Healthchecks tests that leak global state, breaking downstream tests (discovered in #543). 2022-06-09 11:05:44 -07:00
Dan Helfman 331a503a25 Document the borgmatic version in which "borgmatic list --find" is available (#541). 2022-06-03 16:55:54 -07:00
Dan Helfman 9aefb5179f Fix None find paths (#541). 2022-06-03 15:20:05 -07:00
Dan Helfman d14f22e121 Add "borgmatic list --find" flag for searching for files across multiple archives (#541). 2022-06-03 15:12:14 -07:00
Dan Helfman b6893f6455 Exclude deprecated "borg list --successful" flag from getting passed to Borg. 2022-06-02 21:14:25 -07:00
Dan Helfman 80ec3e7d97 Deprecate "borgmatic list --successful" flag, as listing only non-checkpoint (successful) archives is now the default in newer versions of Borg. 2022-06-02 20:35:39 -07:00
Dan Helfman cd834311eb Clarify completion docs. 2022-06-01 10:57:23 -07:00
Dan Helfman d751cceeb0 Merge branch 'master' of ssh://projects.torsion.org:3022/borgmatic-collective/borgmatic 2022-06-01 10:38:05 -07:00
Dan Helfman ce78b07e4b Add macOS to install and Bash completion documentation.
Reviewed-on: borgmatic-collective/borgmatic#540
2022-06-01 17:37:51 +00:00
adidalal 87f3c50931 setup: add macOS 2022-06-01 15:56:40 +00:00
Dan Helfman 8e9e06afe6 Bump version for release. 2022-05-31 09:41:20 -07:00
Dan Helfman 2bc91ac3d2 Add "generate-borgmatic-config --overwrite" flag to replace an existing destination file (#539). 2022-05-29 16:03:55 -07:00
Dan Helfman 5b615d51a4 Add support for "borgmatic borg debug" command (#538). 2022-05-29 15:43:03 -07:00
Dan Helfman c7f5d5fd0b Fix broken Bash completion of filenames, as in "-c config.yaml". 2022-05-29 10:49:33 -07:00
Dan Helfman 6ef7538eb0 Fix typo in Bash completions script. 2022-05-28 19:34:13 -07:00
Dan Helfman 8fa90053cf Add "borgmatic check --force" flag to ignore configured check frequencies (#523). 2022-05-28 19:29:33 -07:00
Dan Helfman b3682b61d1 Add another note about the consistency checks schema in old versions (#523). 2022-05-28 19:03:45 -07:00
Dan Helfman ad0e2e0d7c Tweak default check frequency to 1 month (#523). 2022-05-28 15:49:50 -07:00
Dan Helfman 6629f40cab In bash completion script, warn when script is out of date using script contents instead of version. (Fewer spurious warnings that way.) 2022-05-28 15:27:11 -07:00
Dan Helfman e76bfa555f Reduce the default consistency check frequency and support configuring the frequency independently for each check (#523). 2022-05-28 14:42:19 -07:00
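A sketch of per-check frequencies as introduced in #523, using the consistency section from the schema of that era (the particular frequencies are arbitrary; the tweaked defaults are in the entries above):

```yaml
consistency:
  checks:
    - name: repository
      frequency: 2 weeks
    - name: archives
      frequency: 1 month
# "borgmatic check --force" (also added above) ignores these frequencies.
```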
Dan Helfman 8ddb7268eb Reuse "borg info" function. 2022-05-27 13:51:11 -07:00
Dan Helfman cb5fe02ebd Fix broken Bash completion end-to-end test. 2022-05-26 11:18:46 -07:00
Dan Helfman 77b84f8a48 Add Bash completion script so you can tab-complete the borgmatic command-line. 2022-05-26 10:27:53 -07:00
Dan Helfman 691ec96909 Fix python_requires to support all versions of 3.7 (#537).
Reviewed-on: borgmatic-collective/borgmatic#537
2022-05-26 15:51:46 +00:00
Steve Atwell 29b4666205 Fix python_requires to support all versions of 3.7
This is the standard way to support "Python 3.7 and newer" and it also
fixes use of borgmatic with some tools that do custom dependency
resolution.  E.g., using pex with --platform.
2022-05-26 07:05:04 -07:00
Dan Helfman 316a22701f Add documentation note about multiple merge limitation (#380). 2022-05-25 23:12:42 -07:00
Dan Helfman be59a3e574 Fix generate-borgmatic-config with "--source" flag to support more complex schema changes like the new Healthchecks configuration options (#536). 2022-05-25 10:26:26 -07:00
Dan Helfman 37327379bc Merge branch 'master' of ssh://projects.torsion.org:3022/borgmatic-collective/borgmatic 2022-05-24 17:50:57 -07:00
Dan Helfman 22c2f13611 Remove trailing whitespace (#535).
Reviewed-on: borgmatic-collective/borgmatic#535
2022-05-25 00:50:12 +00:00
polyzen 8708ca07f4 Remove trailing whitespace 2022-05-25 00:43:40 +00:00
Dan Helfman 634d9e4946 Bump version for release. 2022-05-24 16:22:37 -07:00
Dan Helfman 54933ebef5 Change connection failures for monitoring hooks to be warnings instead of errors (#439). 2022-05-24 15:50:04 -07:00
Dan Helfman 157e59ac88 Add Healthchecks monitoring hook "send_logs" option to enable/disable sending borgmatic logs to the Healthchecks server (#460). 2022-05-24 14:44:33 -07:00
Dan Helfman 666f0dd751 Add missing Healthchecks "states" option example in configuration schema (#525). 2022-05-24 14:17:19 -07:00
Dan Helfman 8b179e4647 Reverse logic of Healtchecks "skip_states" option to just "states" (#525). 2022-05-24 14:09:42 -07:00
Dan Helfman 865eff7d98 Add Healthchecks monitoring hook "skip_states" option to disable pinging for particular monitoring states (#525). 2022-05-24 13:59:28 -07:00
Dan Helfman b9741f4d0b Add Healthchecks monitoring hook "ping_body_limit" option to configure how many bytes of logs to send to the Healthchecks server (#294). 2022-05-24 12:23:38 -07:00
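Pulling the Healthchecks options named in the entries above into one hypothetical configuration; the option names come straight from the commit messages, while `ping_url` is my assumption for the named ping URL option mentioned in the following entry:

```yaml
hooks:
  healthchecks:
    ping_url: https://hc-ping.com/your-check-uuid   # hypothetical check UUID
    ping_body_limit: 100000   # bytes of borgmatic logs sent per ping (#294)
    send_logs: true           # enable/disable sending logs at all (#460)
    states:                   # only ping for these monitoring states (#525)
      - start
      - finish
```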
Dan Helfman 02781662f8 Change monitoring hooks to specify the ping URL / integration key as a named option. 2022-05-23 20:02:10 -07:00
Dan Helfman 32a1043468 Remove the error when "archive_name_format" is specified but a retention prefix isn't (#402). 2022-05-23 16:11:24 -07:00
Dan Helfman 3e4aeec649 Warn when an unsupported variable is used in a hook command (#420). 2022-05-23 15:27:54 -07:00
Dan Helfman b98b827594 Remove stale comment. 2022-05-23 10:59:56 -07:00
Dan Helfman 255cc6ec23 When deep merging common configuration, merge colliding list values by appending them (#531). 2022-05-20 15:28:28 -07:00
Dan Helfman 51fc37d57a Improve the error message when a configuration override contains an invalid value (#528). 2022-05-20 13:38:53 -07:00
Dan Helfman 1921f55a9d Add emojis to documentation table of contents to make it easier to find particular how-to and reference guides at a glance. 2022-05-20 11:11:35 -07:00
Dan Helfman fbd381fcc1 Clarify manual database extraction documentation. 2022-05-20 10:06:19 -07:00
Dan Helfman cd88f9f2ea Better explain where to find the dump file when doing a manual restore (#510).
Reviewed-on: borgmatic-collective/borgmatic#510
2022-05-20 16:33:21 +00:00
Dan Helfman 788281cfb9 When a configuration include is a relative path, load it from either the current working directory or from the directory containing the file doing the including (#532). 2022-05-19 17:15:05 -07:00
Dan Helfman cd234b689d Link to additional borgmatic Docker image. 2022-05-12 12:00:12 -07:00
Dan Helfman 92354a77ee Mention that database dumps consumed disk space prior to borgmatic 1.5.3. 2022-05-09 16:08:47 -07:00
Dan Helfman 48ff3e70d1 Clarify documentation about include merging mappings vs. values. 2022-05-08 14:48:42 -07:00
Dan Helfman 7e9adfb899 Add NEWS entry for randomized systemd timer delay. 2022-05-07 23:11:26 -07:00
Dan Helfman e238e256f7
Add randomized delay to systemd timer.
Merge pull request from Daniel15/patch-1
2022-05-07 23:08:02 -07:00
Daniel Lo Nigro 3ecb92a8d2
Add randomized delay to systemd timer 2022-05-07 16:42:06 -07:00
Dan Helfman d58d450628 Remove stale borgmatic binary link. 2022-04-30 09:50:40 -07:00
Dan Helfman dee9c6e293 Remove link to stale borgmatic Docker image. 2022-04-30 09:46:08 -07:00
Dan Helfman 897c4487de Add mention in documentation about multiple backup scheduling needs (#511). 2022-04-28 11:16:31 -07:00
Dan Helfman 48b50b5209 Add documentation link to NEWS. 2022-04-26 10:24:25 -07:00
Dan Helfman 13bae8c23b Typo. 2022-04-26 10:12:02 -07:00
Dan Helfman 4a48e6aa04 Bump version for release. 2022-04-26 10:07:04 -07:00
Dan Helfman 525266ede6 Deep merging when including common configuration (#381). 2022-04-25 21:18:37 -07:00
Dan Helfman d045eb55ac Add mention of sudo's "secure_path" option in borgmatic installation documentation (#513). 2022-04-23 14:29:55 -07:00
Dan Helfman 0e6b425ac5 Fix "borgmatic borg key ..." to pass parameters to Borg in correct order (#515). 2022-04-23 14:03:15 -07:00
Dan Helfman bdc26f2117 Add note about old, pre-1.6.0 hooks behavior. 2022-04-22 19:58:28 -07:00
Dan Helfman ed7fe5c6d0 Instead of executing "before" command hooks before all borgmatic actions run (and "after" hooks after), execute these hooks right before/after the corresponding action (#473). 2022-04-21 22:08:25 -07:00
Dan Helfman cbce6707f4 Clarify one_file_system behavior in schema comment (#520). 2022-04-12 11:05:22 -07:00
Dan Helfman e40e726687 Change Healthchecks logs truncation size from 10k bytes to 100k bytes, corresponding to that same change on Healthchecks.io. 2022-04-06 22:00:18 -07:00
Dan Helfman 0c027a3050 Fix handling of TERM signal to exit borgmatic, not just forward the signal to Borg (#516). 2022-04-03 13:12:48 -07:00
Dan Helfman 9f44bbad65 Fix borgmatic exit code (so it's zero) when initial Borg calls fail but later retries succeed (#517). 2022-04-02 22:28:41 -07:00
Dan Helfman 413a079f51 Clarify Python version support. 2022-03-28 21:57:40 -07:00
gerdneuman 6f3accf691 Better explain where to find the dump file
I really had problems finding the dump file with the explanation as given before. I thought that `~/.borgmatic/` would refer to my current user, so I looked into `/home/gerd/.borgmatic` (wrong). Then I looked into `<EXTRACTED_DESTINATION_PATH>/.borgmatic` (again wrong). Finally (an hour later, after having already prepared a bug ticket) I figured out that the dump file is within `<EXTRACTED_DESTINATION_PATH>/root/.borgmatic`. Hard to find, because of course I don't only have `root` within `<EXTRACTED_DESTINATION_PATH>/` but also all the other backed-up directories (including /etc/, /home/, and so on...)
2022-03-17 04:51:47 +00:00
Dan Helfman 5b3cfc542d Switch to PyPI API token. 2022-03-14 14:00:03 -07:00
Dan Helfman c838c1d11b Fix header placement in documentation guide. 2022-03-14 13:50:22 -07:00
Dan Helfman 4d1d8d7409 Bump version for release. 2022-03-14 13:43:24 -07:00
Dan Helfman db7499db82 Document "repositories" context for "before_*" and "after_*" command action hooks (#469). 2022-03-14 13:34:14 -07:00
Dan Helfman 6b500c2a8b Add repositories context for command hooks.
Reviewed-on: borgmatic-collective/borgmatic#469
2022-03-14 20:13:15 +00:00
Dan Helfman 95c518e59b Documentation tip about dealing with hangs when database hook is enabled. 2022-03-12 13:17:32 -08:00
Dan Helfman 976516d0e1 When loading a configuration file that is unreadable due to file permissions, warn instead of erroring (#444). 2022-03-08 10:19:36 -08:00
Dan Helfman 574eb91921 Fix Borg usage error in the "compact" action when running "borgmatic --dry-run". Now, skip "compact" entirely during a dry run (#507). 2022-03-07 21:46:12 -08:00
Dan Helfman 28fef3264b Fix handling of "patterns_from" and "exclude_from" options to error instead of warning when referencing unreadable files and running "create" action (#486). 2022-03-07 15:32:07 -08:00
Dan Helfman 9161dbcb7d Removing unnecessary leading underscores from functions. 2022-03-07 11:58:29 -08:00
Dan Helfman 4b3027e4fc Add test for new working_directory option (#431). 2022-03-03 11:48:18 -08:00
Dan Helfman 0eb2634f9b Working directory option to support source directories with relative paths (#431).
Reviewed-on: borgmatic-collective/borgmatic#477
2022-03-03 19:28:17 +00:00
Dan Helfman 7c5b68c98f Bump version for release. 2022-02-10 10:29:18 -08:00
Dan Helfman 9317cbaaf0 Code formatting. 2022-02-10 10:23:34 -08:00
Dan Helfman 1b5f04b79f When using the "remote_rate_limit" option, tailor the flags passed to Borg depending on the Borg version (#394). 2022-02-10 10:16:09 -08:00
Dan Helfman 948c86f62c When using the "numeric_owner" option with the "extract" action, tailor the flags passed to Borg depending on the Borg version (#394). 2022-02-10 10:09:18 -08:00
Dan Helfman 7e7209322a When using the "numeric_owner" option, tailor the flags passed to Borg depending on the Borg version (#394). 2022-02-10 09:51:13 -08:00
Dan Helfman 00a57fd947 Code formatting. 2022-02-09 21:20:28 -08:00
Dan Helfman 6bf6ac310b When using the "bsd_flags" option, tailor the flags passed to Borg depending on the Borg version (#394). 2022-02-09 21:11:00 -08:00
Dan Helfman 4b5af2770d When the "atime" option is used, tailor the flags passed to Borg depending on version (#394). 2022-02-09 16:54:35 -08:00
Dan Helfman b525e70e1c Run "compact" action by default when no actions are specified (#394). 2022-02-09 14:33:12 -08:00
Dan Helfman 4498671233 Remove references to removed long-deprecated options (#394). 2022-02-09 11:08:02 -08:00
Dan Helfman 9997aa9a92 Fix capitalization on compact help. 2022-02-08 15:58:09 -08:00
Dan Helfman cbf7284f64 Add compact action to command-line reference documentation. 2022-02-08 15:37:24 -08:00
Dan Helfman ee466f870d Fixing ruamel.yaml.clib breakages harder. 2022-02-08 13:21:11 -08:00
Dan Helfman e3f4bf0293 Build fix for ruamel.yaml.clib error. 2022-02-08 12:52:45 -08:00
Dan Helfman 46688f10b1 Merge branch 'master' of ssh://projects.torsion.org:3022/borgmatic-collective/borgmatic 2022-02-08 12:10:57 -08:00
Dan Helfman 48f44d2f3d Add tests for compact action (#394). 2022-02-08 12:05:02 -08:00
Dan Helfman bff1347ba3 Fix some test failures (#394). 2022-02-08 09:35:03 -08:00
Dan Helfman 9582324c88 Compact repository segments with new "borgmatic compact" action (#394). 2022-02-07 23:29:44 -08:00
Dan Helfman bb0716421d Add comment about systemd service setting that may interfere with external commands in hooks (#492). 2022-01-25 09:26:11 -08:00
Dan Helfman bec73245e9 Fix traceback when a YAML validation error occurs (#480, #482). 2022-01-19 20:39:03 -08:00
Dan Helfman dcead12e86 Attempt to fix documentation build error introduced by Eleventy upgrade. 2022-01-09 14:21:27 -08:00
Dan Helfman 0119514c11 Add Python version requirements to setup.py. 2022-01-09 10:19:53 -08:00
fabianschilling b39f08694d Merge branch 'master' into pr-working-directory 2022-01-05 09:30:27 +00:00
Dan Helfman 80bdf1430b Bump version for release. 2022-01-04 20:20:13 -08:00
Dan Helfman 2ee75546f5 Add MongoDB database hook documentation. 2022-01-04 16:26:38 -08:00
Dan Helfman 07d7ae60d5 Add MongoDB database hook (#288).
Reviewed-on: borgmatic-collective/borgmatic#483
2022-01-04 23:50:25 +00:00
Andrea Ghensi 87001337b4 Merge master into mongodb_hook 2022-01-04 22:20:44 +01:00
Dan Helfman 2e9964c200 Remove references to Lima Labs (shut down their storage business).
Reviewed-on: borgmatic-collective/borgmatic#488
2022-01-03 17:34:38 +00:00
Ian Kerins 3ec3d8d045 Remove references to Lima Labs
From their homepage:
> Lima Labs is shutting down our storage business. We will try to keep data available as long as possible. No promises but we are targeting 3/1/2022 to bring down Archive and Canada.
2022-01-03 02:29:38 -05:00
Dan Helfman 96384d5ee1 Attempt to fix typed-ast build issue by relaxing version requirements in test. 2022-01-02 23:22:24 -08:00
Dan Helfman 8ed5467435 Drop support for Python 3.6. Add support for 3.10. 2022-01-02 23:17:57 -08:00
Andrea Ghensi 7c6ce9399c fix integration tests and mongodb auth 2021-12-29 22:18:50 +01:00
Andrea Ghensi 6b7653484b Add mongodb dump hook 2021-12-26 01:00:58 +01:00
Fabian Schilling 85e0334826 Add missing working_directory arg to pass tests 2021-12-10 18:24:41 +01:00
Fabian Schilling 2a80e48a92 Pass working directory to execute functions 2021-12-10 18:23:44 +01:00
Fabian Schilling 5821c6782e Add defaults to not set in schema 2021-12-10 18:23:08 +01:00
Fabian Schilling f15498f6d9 Add working_directory to borgmatic schema 2021-12-10 17:58:27 +01:00
Dan Helfman a1673d1fa1 Fix unicode error when restoring particular MySQL databases (#476). 2021-12-08 16:40:25 -08:00
Dan Helfman 2e99a1898c Fix f-string with missing expression. 2021-11-29 14:05:36 -08:00
Dan Helfman 7a086d8430 Fix import ordering. 2021-11-29 14:00:14 -08:00
Dan Helfman 0e8e9ced64 When command-line configuration override produces a parse error, error cleanly (#471). 2021-11-29 12:49:21 -08:00
Dan Helfman f34951c088 Add MySQL dump command adjustment to NEWS. 2021-11-29 12:10:04 -08:00
Dan Helfman c6f47d4d56 Move mysqldump options to the beginning of the command due to MySQL bug 30994 (#470).
Reviewed-on: borgmatic-collective/borgmatic#470
2021-11-29 20:08:59 +00:00
nebulon42 c3e76585fc
move mysqldump options to the beginning of the command due to MySQL bug 30994. 2021-11-26 17:16:03 +01:00
Chen Yufei 0014b149f8 remove configuration_filename as it's already set. 2021-11-26 11:38:58 +08:00
Chen Yufei 091c07bbe2 Add context for various hooks. 2021-11-26 11:35:10 +08:00
Dan Helfman 240547102f Enable auto-play on linked asciicast. 2021-11-25 13:09:55 -08:00
Dan Helfman 2bbd53e25a
Merge pull request #43 from acsfer/patch-1
Github doesn't allow script embedding
2021-11-25 13:06:43 -08:00
acsfer 58f2f63977
Switch to HTML 2021-11-25 22:03:26 +01:00
acsfer 7df6a78c30
Github doesn't allow script embedding 2021-11-25 21:36:31 +01:00
Dan Helfman c646edf2c7 Bump version for release. 2021-11-22 13:19:15 -08:00
Dan Helfman bcc820d646 Add list_options setting (#306).
Reviewed-on: borgmatic-collective/borgmatic#464
2021-11-22 21:14:02 +00:00
nebulon42 3729ba5ca3
add list_options setting, fixes #306 2021-11-20 15:43:58 +01:00
Dan Helfman 9c19591768 Revise hosting provider links. 2021-11-15 20:06:09 -08:00
Dan Helfman 38ebfd2969 Rename retry_timeout to retry_wait and standardize log formatting (#28). 2021-11-15 11:51:17 -08:00
Dan Helfman 180018fd81 Retry failing backups (#28, #432).
Reviewed-on: borgmatic-collective/borgmatic#432
2021-11-15 19:34:24 +00:00
Dan Helfman 794ae94ac4 Attempt to limit documentation pushing to commits (so, not pull requests). 2021-11-15 11:08:26 -08:00
Dan Helfman 4eb6359ed3 Remove now-unneeded build image workaround. 2021-11-15 10:56:12 -08:00
cadamswaite 976a877a25 Formatting 2021-11-14 22:37:42 +00:00
cadamswaite b4117916b8 Add timeout and tests 2021-11-14 22:15:22 +00:00
cadamswaite 19cad89978 Add some tests for retry logic 2021-11-14 21:35:23 +00:00
cadamswaite 6b182c9d2d Merge branch 'master' into master 2021-11-14 18:24:17 +00:00
Dan Helfman 4d6ed27f73 Add to changelog: Add support for old version (2.x) of jsonschema library. 2021-10-23 09:49:16 -07:00
Dan Helfman 745a8f9b8a Add support for both jsonschema v3 and old v2 (#459).
Reviewed-on: borgmatic-collective/borgmatic#459
2021-10-23 16:47:53 +00:00
Dan Helfman 6299d8115d Limit documentation build to master of main repo, as it pushes a Docker image. 2021-10-23 09:45:17 -07:00
Kim B. Heino 717cfd2d37 validate: add support for both jsonschema v3 and old v2
RHEL8 and RHEL7 have old jsonschema v2. Try v3 (Draft7) first but
fall back to v2 (Draft4) if needed.
2021-10-23 15:04:07 +03:00
Dan Helfman 7881327004 Upgrade CI test dependencies. 2021-10-22 14:07:14 -07:00
Dan Helfman 549aa9a25f Update editable link. 2021-10-22 14:06:27 -07:00
Dan Helfman 1c6890492b Bump version for release. 2021-10-11 17:02:32 -07:00
Dan Helfman a7c8e7c823 Bump version for release. 2021-10-11 11:13:32 -07:00
Dan Helfman c8fcf6b336 Mention changing borgmatic path in cron documentation (#455). 2021-10-11 11:02:08 -07:00
Dan Helfman 449896f661 Fix error when configured source directories are not present on the filesystem at the time of backup (#387). 2021-10-11 10:40:10 -07:00
Dan Helfman 1004500d65 Update sample systemd service file comments about more granular read-only filesystem settings. 2021-10-11 09:33:07 -07:00
Dan Helfman 0a8d4e5dfb
Add more strict ProtectHome to systemd sample configuration.
Merge pull request #42 from VTimofeenko/systemd_protecthome
2021-10-11 09:26:28 -07:00
Dan Helfman 38e35bdb12 Skip TLS verify in documentation build clone to work around old drone/git CA certs. 2021-10-04 14:31:15 -07:00
Dan Helfman 65503e38b6 Sigh. 2021-10-04 13:14:19 -07:00
Dan Helfman d0c5bf6f6f Another attempt to unbreak build. 2021-10-04 13:13:35 -07:00
Dan Helfman f129e4c301 Attempt to work around outdated CA certificates in drone/git Docker image. 2021-10-04 13:09:44 -07:00
Dan Helfman fbbb096cec Note in documentation that borgmatic requires Python 3.6+. 2021-10-04 11:15:51 -07:00
Dan Helfman 77980511c6 Add another glob pattern example to exclude patterns. 2021-09-16 09:51:40 -07:00
Dan Helfman 4ba206f8f4 Update build server URL to new organization namespace. 2021-09-14 11:35:34 -07:00
Dan Helfman ecc849dd07 Move Gitea hosting from a personal namespace to an organization. 2021-09-14 11:32:01 -07:00
Dan Helfman 7ff6066d47 Move GitHub hosting from a personal namespace to an organization. 2021-09-14 10:18:10 -07:00
Dan Helfman 2bb1fc9826 Mention Docker Compose under installation options. 2021-09-12 13:15:34 -07:00
Vladimir Timofeenko 6df6176f3a
Added more strict ProtectHome to systemd unit
This commit changes the comment in sample systemd service.

Using a combination of 'ProtectHome' and 'BindPaths', it's possible to
hide irrelevant paths inside /root from the borgmatic service when it
runs.

ReadWritePaths should be used only for paths that contain Borg
repositories; backup sources can be specified as ReadOnlyPaths.
2021-08-30 11:20:34 -07:00
Dan Helfman acb2ca79d9 Fix traceback that can occur when dumping a database (#440). 2021-08-06 08:58:11 -07:00
Dan Helfman c9211320e1 Fix dev version in changelog. 2021-08-04 15:32:51 -07:00
Dan Helfman 760286abe1 Dev release bump. 2021-07-30 09:49:07 -07:00
Dan Helfman 5890a1cb48 Fix "message too long" error when logging to rsyslog (#389). 2021-07-30 09:48:13 -07:00
Dan Helfman b3f5a9d18f Fix error when configuration file contains "umask" option (#437). 2021-07-27 10:04:22 -07:00
Dan Helfman 80b33fbf8a Code style reformatting. 2021-07-27 09:39:48 -07:00
Dan Helfman 5389ff6160
Merge pull request #41 from mkszuba/tests_no_xxd
tests/integration/test_execute: use plain Python rather than xxd
2021-07-27 09:39:02 -07:00
Marek Szuba e8b8d86592 tests/integration/test_execute: use plain Python rather than xxd
Removes this test's dependencies on vim and /dev/urandom.

Signed-off-by: Marek Szuba <marek.szuba@cern.ch>
2021-07-27 13:50:16 +01:00
Dan Helfman 92d729a9dd Try temporary workaround for Drone build bug: https://github.com/drone-plugins/drone-docker/pull/327 2021-07-26 16:33:41 -07:00
Dan Helfman c63219936e Wording tweaks to security policy. 2021-07-26 13:44:14 -07:00
Dan Helfman 0aff497430 Bump version for release. 2021-07-26 10:17:49 -07:00
Dan Helfman 1f3907a6a5 Fix for failing PostgreSQL directory format test (#430). 2021-07-26 09:42:14 -07:00
Dan Helfman 2a8692c64f Fix integration test to hopefully work on Alpine (#430). 2021-07-25 22:50:00 -07:00
Dan Helfman 1709f57ff0 Fix hang when restoring a PostgreSQL "tar" format database dump (#430). 2021-07-25 22:30:15 -07:00
cadamswaite 89baf757cf Sort imports 2021-07-14 23:17:35 +01:00
cadamswaite 4f36fe2b9f Run Black on changed file 2021-07-14 22:53:01 +01:00
cadamswaite 510449ce65 Change default retries to 0 2021-07-14 22:49:03 +01:00
cadamswaite 4cc4b8d484 Add queue based retry logic 2021-07-14 22:46:02 +01:00
Dan Helfman 9c972cb0e5 Add documentation note about systemd configuration with alternate install methods (#428). 2021-06-29 21:38:53 -07:00
Dan Helfman 9b1779065e Pin ruamel.yaml.clib to work around docs build issue. 2021-06-29 21:35:46 -07:00
Dan Helfman 057ec3e59b Add NEWS entry for #379: Suppress console output in sample crontab and systemd service files. 2021-06-23 10:35:41 -07:00
Dan Helfman bc2e611a74 Suppress console output in sample crontab/systemd service files (#379).
Reviewed-on: witten/borgmatic#379
2021-06-23 17:32:47 +00:00
Dan Helfman b6d3a1e02f Merge branch 'master' of ssh://projects.torsion.org:3022/witten/borgmatic 2021-06-23 10:22:07 -07:00
Dan Helfman 54d57e1349 Add test for #407: Fix syslog logging on FreeBSD. 2021-06-23 10:21:45 -07:00
Dan Helfman af0b3da8ed Fix syslog logging on FreeBSD (#407).
Reviewed-on: witten/borgmatic#407
2021-06-23 17:21:25 +00:00
Dan Helfman 27d37b606b Better error messages! Switch the library used for validating configuration files (from pykwalify to jsonschema). 2021-06-22 13:27:59 -07:00
Dan Helfman 77a860cc62 Link borgmatic Ansible role from installation documentation. 2021-06-19 19:04:22 -07:00
Dan Helfman 7bd6374751 Bump version for release. 2021-06-17 20:44:54 -07:00
Dan Helfman cf8882f2bc Run arbitrary Borg commands with new "borgmatic borg" action (#425). 2021-06-17 20:41:44 -07:00
Dan Helfman b37dd1a79e Document use case of running backups conditionally based on laptop power level (#419). 2021-06-09 10:03:35 -07:00
Dan Helfman fd59776f91 Bump version for release. 2021-06-08 11:44:53 -07:00
Dan Helfman 9fd28d2eed Fix error handling to error loudly when Borg gets killed due to running out of memory (#423)! 2021-06-08 11:43:55 -07:00
Dan Helfman f5c61c8013 Move #borgmatic IRC channel from Freenode to Libera Chat due to Freenode takeover drama. 2021-06-06 21:09:40 -07:00
Dan Helfman 88cb49dcc4 Fix release script based on GitHub authentication query parameter deprecation. 2021-04-24 20:27:53 -07:00
Dan Helfman 73235e59be Upgrade "py" test dependency (security). 2021-04-20 10:39:49 -07:00
Dan Helfman 7076a7ff86 Add link to Hetzner storage offering from the documentation (#390). 2021-04-18 18:03:43 -07:00
Dan Helfman d6e376d32d Fix end-to-end test broken by change in source directory examples. 2021-04-18 17:54:54 -07:00
Dan Helfman 9016f4be43 Clarify that spaces in path names should not be backslashed (#406). 2021-04-18 17:28:11 -07:00
Jeffery To d1c403999f
Reduce console output in sample crontab/systemd service files.
As borgmatic will log to syslog in the sample crontab/systemd service
files, this makes console output redundant. (cron will mail any console
output to the root user; systemd will log any console output to syslog.)

This adds --verbosity -1 to both files to reduce console output to the
minimum.
2021-04-13 01:40:57 +08:00
Dan Helfman d543109ef4 "Fix" build failure with Alpine Edge by switching from Edge to Alpine 3.13. 2021-04-09 15:58:23 -07:00
Dan Helfman 7085a45649 Fix build so as not to attempt to build and push documentation for a non-master branch. 2021-04-09 15:04:09 -07:00
Dan Helfman cf4c603f1d Clarify canonical home of borgmatic in documentation (#398). 2021-04-09 14:54:21 -07:00
Victor Bouvier-Deleau d2533313bc
Fix syslog logging on FreeBSD
The UNIX domain socket to use on FreeBSD is /var/run/log.
See syslogd FreeBSD man page: https://www.freebsd.org/cgi/man.cgi?query=syslogd&sektion=8
2021-04-02 14:11:50 +02:00
Dan Helfman c43b50b6e6 Upgrade PyYAML. 2021-03-30 22:29:20 -07:00
Dan Helfman c072678936 Add support for ruamel.yaml 0.17.x YAML parsing library (#404). 2021-03-30 15:53:19 -07:00
Dan Helfman 631da1465e Add support for Python 3.9. 2021-03-30 15:36:26 -07:00
Dan Helfman f29519a5cd
Merge pull request #38 from lukehsiao/patch-1
Fix link to issue tracker in documentation
2021-03-20 15:45:15 -07:00
Luke Hsiao 5d82b42ab8
Fix link to issue tracker in documentation
Fixes: a1d986d952
2021-03-18 17:26:37 -07:00
Dan Helfman 4897a78fd3 Fix database tests broken by PostgreSQL upgrade in Alpine Edge. 2020-12-24 22:23:09 -08:00
Dan Helfman a1d986d952 Replace "improve this documentation" form with link to support and ticket tracker. 2020-12-24 14:57:51 -08:00
Dan Helfman 717c90a7d0 Clarify in systemd service file comment that security settings are optional. 2020-12-09 10:08:07 -08:00
Dan Helfman 8fde19a7dc Update systemd service example to return a permission error when a system call isn't permitted. 2020-11-30 22:14:28 -08:00
Dan Helfman ad7198ba66 Tweak to test failing on some machines. 2020-11-26 16:22:42 -08:00
Dan Helfman eb4b4cc92b Fix line length in schema. 2020-11-25 19:21:06 -08:00
Dan Helfman 41bf520585 Document that passphrase is used for Borg keyfile encryption, not just repokey encryption (#373). 2020-11-25 18:36:23 -08:00
Dan Helfman c0ae01f5d5 Code formatting. 2020-11-25 17:46:57 -08:00
Dan Helfman 8b8f92d717 Prevent newer (borgmatic-unsupported) version of Black code formatter installing in Alpine Edge. 2020-11-25 17:42:04 -08:00
Dan Helfman ccd1627175 Fix timing-related test error in Alpine Edge. 2020-11-25 15:48:33 -08:00
Dan Helfman b8a7e23f46 Add missing pip to test script. 2020-11-22 17:42:58 -08:00
Dan Helfman 1f4f28b4dc Drop support for Python 3.5. Only support black code formatter on Python 3.8+. 2020-11-22 17:27:21 -08:00
Dan Helfman ea6cd53067 Update versions of test dependencies (test_requirements.txt and test containers). 2020-11-22 14:48:07 -08:00
Dan Helfman 267138776d Add protection for accidentally releasing a dev version. 2020-11-21 14:03:39 -08:00
Dan Helfman 604b3d5e17 Bump version. 2020-11-21 13:56:19 -08:00
Dan Helfman 667e1e5b15 Update document about new --override behavior (#361). 2020-11-19 11:01:53 -08:00
197 changed files with 19270 additions and 4308 deletions

@@ -1,110 +1,31 @@
---
kind: pipeline
name: python-3-5-alpine-3-10
name: python-3-8-alpine-3-13
services:
- name: postgresql
image: postgres:11.6-alpine
image: postgres:13.1-alpine
environment:
POSTGRES_PASSWORD: test
POSTGRES_DB: test
- name: mysql
image: mariadb:10.3
image: mariadb:10.5
environment:
MYSQL_ROOT_PASSWORD: test
MYSQL_DATABASE: test
- name: mongodb
image: mongo:5.0.5
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: test
clone:
skip_verify: true
steps:
- name: build
image: python:3.5-alpine3.10
pull: always
commands:
- scripts/run-full-tests
---
kind: pipeline
name: python-3-6-alpine-3-10
services:
- name: postgresql
image: postgres:11.6-alpine
environment:
POSTGRES_PASSWORD: test
POSTGRES_DB: test
- name: mysql
image: mariadb:10.3
environment:
MYSQL_ROOT_PASSWORD: test
MYSQL_DATABASE: test
steps:
- name: build
image: python:3.6-alpine3.10
pull: always
commands:
- scripts/run-full-tests
---
kind: pipeline
name: python-3-7-alpine-3-10
services:
- name: postgresql
image: postgres:11.6-alpine
environment:
POSTGRES_PASSWORD: test
POSTGRES_DB: test
- name: mysql
image: mariadb:10.3
environment:
MYSQL_ROOT_PASSWORD: test
MYSQL_DATABASE: test
steps:
- name: build
image: python:3.7-alpine3.10
pull: always
commands:
- scripts/run-full-tests
---
kind: pipeline
name: python-3-7-alpine-3-7
services:
- name: postgresql
image: postgres:10.11-alpine
environment:
POSTGRES_PASSWORD: test
POSTGRES_DB: test
- name: mysql
image: mariadb:10.1
environment:
MYSQL_ROOT_PASSWORD: test
MYSQL_DATABASE: test
steps:
- name: build
image: python:3.7-alpine3.7
pull: always
commands:
- scripts/run-full-tests
---
kind: pipeline
name: python-3-8-alpine-3-10
services:
- name: postgresql
image: postgres:11.6-alpine
environment:
POSTGRES_PASSWORD: test
POSTGRES_DB: test
- name: mysql
image: mariadb:10.3
environment:
MYSQL_ROOT_PASSWORD: test
MYSQL_DATABASE: test
steps:
- name: build
image: python:3.8-alpine3.10
image: alpine:3.13
environment:
TEST_CONTAINER: true
pull: always
commands:
- scripts/run-full-tests
@@ -112,6 +33,9 @@ steps:
kind: pipeline
name: documentation
clone:
skip_verify: true
steps:
- name: build
image: plugins/docker
@@ -120,8 +44,15 @@ steps:
from_secret: docker_username
password:
from_secret: docker_password
repo: witten/borgmatic-docs
registry: projects.torsion.org
repo: projects.torsion.org/borgmatic-collective/borgmatic
tags: docs
dockerfile: docs/Dockerfile
when:
branch:
- master
trigger:
repo:
- borgmatic-collective/borgmatic
branch:
- master
event:
- push

@@ -1,4 +1,5 @@
const pluginSyntaxHighlight = require("@11ty/eleventy-plugin-syntaxhighlight");
const codeClipboard = require("eleventy-plugin-code-clipboard");
const inclusiveLangPlugin = require("@11ty/eleventy-plugin-inclusive-language");
const navigationPlugin = require("@11ty/eleventy-navigation");
@@ -6,6 +7,7 @@ module.exports = function(eleventyConfig) {
eleventyConfig.addPlugin(pluginSyntaxHighlight);
eleventyConfig.addPlugin(inclusiveLangPlugin);
eleventyConfig.addPlugin(navigationPlugin);
eleventyConfig.addPlugin(codeClipboard);
let markdownIt = require("markdown-it");
let markdownItAnchor = require("markdown-it-anchor");
@@ -23,8 +25,7 @@ module.exports = function(eleventyConfig) {
}
};
let markdownItAnchorOptions = {
permalink: true,
permalinkClass: "direct-link"
permalink: markdownItAnchor.permalink.headerLink()
};
eleventyConfig.setLibrary(
@@ -32,10 +33,13 @@ module.exports = function(eleventyConfig) {
markdownIt(markdownItOptions)
.use(markdownItAnchor, markdownItAnchorOptions)
.use(markdownItReplaceLink)
.use(codeClipboard.markdownItCopyButton)
);
eleventyConfig.addPassthroughCopy({"docs/static": "static"});
eleventyConfig.setLiquidOptions({dynamicPartials: false});
return {
templateFormats: [
"md",

.flake8 (new file)

@@ -0,0 +1 @@
select = Q0

.gitignore

@@ -2,7 +2,7 @@
*.pyc
*.swp
.cache
.coverage
.coverage*
.pytest_cache
.tox
__pycache__

NEWS

@@ -1,4 +1,385 @@
1.5.11.dev0
1.7.12.dev0
* #413: Add "log_file" context to command hooks so your scripts can consume the borgmatic log file.
See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/
* #666, #670: Fix error when running the "info" action with the "--match-archives" or "--archive"
flags. Also fix the "--match-archives"/"--archive" flags to correctly override the
"match_archives" configuration option for the "transfer", "list", "rlist", and "info" actions.
* #668: Fix error when running the "prune" action with both "archive_name_format" and "prefix"
options set.
* #672: Selectively shallow merge certain mappings or sequences when including configuration files.
See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/#shallow-merge
* #673: View the results of configuration file merging via "validate-borgmatic-config --show" flag.
See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/#debugging-includes
* Add optional support for running end-to-end tests and building documentation with rootless Podman
instead of Docker.
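To illustrate the #413 entry above, here's a minimal sketch of a hook consuming the interpolated log file path; the grep command is just a placeholder, and it assumes borgmatic was invoked with a log file configured:

```yaml
hooks:
    # {log_file} interpolates to the configured log file's path; this value
    # is presumably only populated when borgmatic runs with --log-file.
    after_backup:
        - grep --ignore-case warning {log_file} || true
```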
1.7.11
* #479, #588: BREAKING: Automatically use the "archive_name_format" option to filter which archives
get used for borgmatic actions that operate on multiple archives. Override this behavior with the
new "match_archives" option in the storage section. This change is "breaking" in that it silently
changes which archives get considered for "rlist", "prune", "check", etc. See the documentation
for more information:
https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/#archive-naming
* #479, #588: The "prefix" options have been deprecated in favor of the new "archive_name_format"
auto-matching behavior and the "match_archives" option.
* #658: Add "--log-file-format" flag for customizing the log message format. See the documentation
for more information:
https://torsion.org/borgmatic/docs/how-to/inspect-your-backups/#logging-to-file
* #662: Fix regression in which the "check_repositories" option failed to match repositories.
* #663: Fix regression in which the "transfer" action produced a traceback.
* Add spellchecking of source code during test runs.
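As a sketch of the #479/#588 auto-matching described above (option names per the entry; the format value itself is a placeholder):

```yaml
storage:
    # Multi-archive actions like "rlist", "prune", and "check" now only
    # consider archives matching this format by default.
    archive_name_format: '{hostname}-user-data-{now}'
    # Hypothetical override: consider every archive regardless of format.
    match_archives: '*'
```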
1.7.10
* #396: When a database command errors, display and log the error message instead of swallowing it.
* #501: Optionally error if a source directory does not exist via "source_directories_must_exist"
option in borgmatic's location configuration.
* #576: Add support for "file://" paths within "repositories" option.
* #612: Define and use custom constants in borgmatic configuration files. See the documentation for
more information:
https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/#constant-interpolation
* #618: Add support for BORG_FILES_CACHE_TTL environment variable via "borg_files_cache_ttl" option
in borgmatic's storage configuration.
* #623: Fix confusing message when an error occurs running actions for a configuration file.
* #635: Add optional repository labels so you can select a repository via "--repository yourlabel"
at the command-line. See the configuration reference for more information:
https://torsion.org/borgmatic/docs/reference/configuration/
* #649: Add documentation on backing up a database running in a container:
https://torsion.org/borgmatic/docs/how-to/backup-your-databases/#containers
* #655: Fix error when databases are configured and a source directory doesn't exist.
* Add code style plugins to enforce use of Python f-strings and prevent single-letter variables.
To join in the pedantry, refresh your test environment with "tox --recreate".
* Rename scripts/run-full-dev-tests to scripts/run-end-to-end-dev-tests and make it run end-to-end
tests only. Continue using tox to run unit and integration tests.
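A rough sketch of the #612 constants feature above; the constant name is invented for illustration:

```yaml
constants:
    app: myapp    # invented constant name, used below as {app}
location:
    source_directories:
        - /var/lib/{app}
```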
1.7.9
* #295: Add a SQLite database dump/restore hook.
* #304: Change the default action order when no actions are specified on the command-line to:
"create", "prune", "compact", "check". If you'd like to retain the old ordering ("prune" and
"compact" first), then specify actions explicitly on the command-line.
* #304: Run any command-line actions in the order specified instead of using a fixed ordering.
* #564: Add "--repository" flag to all actions where it makes sense, so you can run borgmatic on
a single configured repository instead of all of them.
* #628: Add a Healthchecks "log" state to send borgmatic logs to Healthchecks without signalling
success or failure.
* #647: Add "--strip-components all" feature on the "extract" action to remove leading path
components of files you extract. Must be used with the "--path" flag.
* Add support for Python 3.11.
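A hedged sketch of the #628 Healthchecks "log" state above, combined with the pre-existing "states" option; the ping URL is a placeholder:

```yaml
hooks:
    healthchecks:
        ping_url: https://hc-ping.com/your-uuid    # placeholder URL
        # Assumption: "log" is listed alongside other states to send logs
        # without signalling success or failure.
        states:
            - log
            - finish
```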
1.7.8
* #620: With the "create" action and the "--list" ("--files") flag, only show excluded files at
verbosity 2.
* #621: Add optional authentication to the ntfy monitoring hook.
* With the "create" action, only one of "--list" ("--files") and "--progress" flags can be used.
This lines up with the new behavior in Borg 2.0.0b5.
* Internally support new Borg 2.0.0b5 "--filter" status characters / item flags for the "create"
action.
* Fix the "create" action with the "--dry-run" flag querying for databases when a PostgreSQL/MySQL
"all" database is configured. Now, these queries are skipped due to the dry run.
* Add "--repository" flag to the "rcreate" action to optionally select one configured repository to
create.
* Add "--progress" flag to the "transfer" action, new in Borg 2.0.0b5.
* Add "checkpoint_volume" configuration option to create checkpoints every specified number of
bytes during a long-running backup, new in Borg 2.0.0b5.
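A minimal sketch of the "checkpoint_volume" option above, assuming it lives in the storage section alongside other checkpoint settings:

```yaml
storage:
    # Ask Borg to write a checkpoint after every ~1 GiB of backed-up data.
    checkpoint_volume: 1073741824
```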
1.7.7
* #642: Add MySQL database hook "add_drop_database" configuration option to control whether dumped
MySQL databases get dropped right before restore.
* #643: Fix for potential data loss (data not getting backed up) when dumping large "directory"
format PostgreSQL/MongoDB databases. Prior to the fix, these dumps would not finish writing to
disk before Borg consumed them. Now, the dumping process completes before Borg starts. This only
applies to "directory" format databases; other formats still stream to Borg without using
temporary disk space.
* Fix MongoDB "directory" format to work with mongodump/mongorestore without error. Prior to this
fix, only the "archive" format worked.
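A sketch of the #642 option above; the database name is a placeholder:

```yaml
hooks:
    mysql_databases:
        - name: posts    # placeholder database name
          # Omit the "DROP DATABASE" statement from the dump so a restore
          # doesn't drop the existing database first.
          add_drop_database: false
```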
1.7.6
* #393, #438, #560: Optionally dump "all" PostgreSQL/MySQL databases to separate files instead of
one combined dump file, allowing more convenient restores of individual databases. You can enable
this by specifying the database dump "format" option when the database is named "all".
* #602: Fix logs that interfere with JSON output by making warnings go to stderr instead of stdout.
* #622: Fix traceback when include merging configuration files on ARM64.
* #629: Skip warning about excluded special files when no special files have been excluded.
* #630: Add configuration options for database command customization: "list_options",
"restore_options", and "analyze_options" for PostgreSQL, "restore_options" for MySQL, and
"restore_options" for MongoDB.
1.7.5
* #311: Override PostgreSQL dump/restore commands via configuration options.
* #604: Fix traceback when a configuration section is present but lacking any options.
* #607: Clarify documentation examples for include merging and deep merging.
* #611: Fix "data" consistency check to support "check_last" and consistency "prefix" options.
* #613: Clarify documentation about multiple repositories and separate configuration files.
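A hedged sketch of the #311 overrides above; the exact option names shown are assumptions, not confirmed by these notes:

```yaml
hooks:
    postgresql_databases:
        - name: users    # placeholder database name
          # Assumed option names for the dump/restore command overrides:
          pg_dump_command: /usr/local/bin/pg_dump
          pg_restore_command: /usr/local/bin/pg_restore
```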
1.7.4
* #596: Fix special file detection erroring when broken symlinks are encountered.
* #597, #598: Fix regression in which "check" action errored on certain systems ("Cannot determine
Borg repository ID").
1.7.3
* #357: Add "break-lock" action for removing any repository and cache locks leftover from Borg
aborting.
* #360: To prevent Borg hangs, unconditionally delete stale named pipes before dumping databases.
* #587: When database hooks are enabled, auto-exclude special files from a "create" action to
prevent Borg from hanging. You can override/prevent this behavior by explicitly setting the
"read_special" option to true.
* #587: Warn when ignoring a configured "read_special" value of false, as true is needed when
database hooks are enabled.
* #589: Update sample systemd service file to allow system "idle" (e.g. a video monitor turning
off) while borgmatic is running.
* #590: Fix for potential data loss (data not getting backed up) when the "patterns_from" option
was used with "source_directories" (or the "~/.borgmatic" path existed, which got injected into
"source_directories" implicitly). The fix is for borgmatic to convert "source_directories" into
patterns whenever "patterns_from" is used, working around a Borg bug:
https://github.com/borgbackup/borg/issues/6994
* #590: In "borgmatic create --list" output, display which files get excluded from the backup due
to patterns or excludes.
* #591: Add support for Borg 2's "--match-archives" flag. This replaces "--glob-archives", which
borgmatic now treats as an alias for "--match-archives". But note that the two flags have
slightly different syntax. See the Borg 2 changelog for more information:
https://borgbackup.readthedocs.io/en/2.0.0b3/changes.html#version-2-0-0b3-2022-10-02
* Fix for "borgmatic --archive latest" not finding the latest archive when a verbosity is set.
1.7.2
* #577: Fix regression in which "borgmatic info --archive ..." showed repository info instead of
archive info with Borg 1.
* #582: Fix hang when database hooks are enabled and "patterns" contains a parent directory of
"~/.borgmatic".
1.7.1
* #542: Make the "source_directories" option optional. This is useful for "check"-only setups or
using "patterns" exclusively.
* #574: Fix for potential data loss (data not getting backed up) when the "patterns" option was
used with "source_directories" (or the "~/.borgmatic" path existed, which got injected into
"source_directories" implicitly). The fix is for borgmatic to convert "source_directories" into
patterns whenever "patterns" is used, working around a Borg bug:
https://github.com/borgbackup/borg/issues/6994
1.7.0
* #463: Add "before_actions" and "after_actions" command hooks that run before/after all the
actions for each repository. These new hooks are a good place to run per-repository steps like
mounting/unmounting a remote filesystem.
* #463: Update documentation to cover per-repository configurations:
https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/
* #557: Support for Borg 2 while still working with Borg 1. This includes new borgmatic actions
like "rcreate" (replaces "init"), "rlist" (list archives in repository), "rinfo" (show repository
info), and "transfer" (for upgrading Borg repositories). For the most part, borgmatic tries to
smooth over differences between Borg 1 and 2 to make your upgrade process easier. However, there
are still a few cases where Borg made breaking changes. See the Borg 2.0 changelog for more
information: https://www.borgbackup.org/releases/borg-2.0.html
* #557: If you install Borg 2, you'll need to manually upgrade your existing Borg 1 repositories
before use. Note that Borg 2 stable is not yet released as of this borgmatic release, so don't
use Borg 2 for production until it is! See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/upgrade/#upgrading-borg
* #557: Rename several configuration options to match Borg 2: "remote_rate_limit" is now
"upload_rate_limit", "numeric_owner" is "numeric_ids", and "bsd_flags" is "flags". borgmatic
still works with the old options.
* #557: Remote repository paths without the "ssh://" syntax are deprecated but still supported for
now. Remote repository paths containing "~" are deprecated in borgmatic and no longer work in
Borg 2.
* #557: Omitting the "--archive" flag on the "list" action is deprecated when using Borg 2. Use
the new "rlist" action instead.
* #557: The "--dry-run" flag can now be used with the "rcreate"/"init" action.
* #565: Fix handling of "repository" and "data" consistency checks to prevent invalid Borg flags.
* #566: Modify "mount" and "extract" actions to require the "--repository" flag when multiple
repositories are configured.
* #571: BREAKING: Remove old-style command-line action flags like "--create, "--list", etc. If
you're already using actions like "create" and "list" instead, this change should not affect you.
* #571: BREAKING: Rename "--files" flag on "prune" action to "--list", as it lists archives, not
files.
* #571: Add "--list" as alias for "--files" flag on "create" and "export-tar" actions.
* Add support for disabling TLS verification in Healthchecks monitoring hook with "verify_tls"
option.
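A sketch of the "verify_tls" option mentioned above; the ping URL is a placeholder:

```yaml
hooks:
    healthchecks:
        ping_url: https://healthchecks.example.com/ping/your-uuid    # placeholder
        # Skip TLS certificate verification, e.g. for a self-hosted server
        # using a self-signed certificate.
        verify_tls: false
```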
1.6.6
* #559: Update documentation about configuring multiple consistency checks or multiple databases.
* #560: Fix all database hooks to error when the requested database to restore isn't present in the
Borg archive.
* #561: Fix command-line "--override" flag to continue supporting old configuration file formats.
* #563: Fix traceback with "create" action and "--json" flag when a database hook is configured.
1.6.5
* #553: Fix logging to include the full traceback when Borg experiences an internal error, not just
the first few lines.
* #554: Fix all monitoring hooks to warn if the server returns an HTTP 4xx error. This can happen
with Healthchecks, for instance, when using an invalid ping URL.
* #555: Fix environment variable plumbing so options like "encryption_passphrase" and
"encryption_passcommand" in one configuration file aren't used for other configuration files.
1.6.4
* #546, #382: Keep your repository passphrases and database passwords outside of borgmatic's
configuration file with environment variable interpolation. See the documentation for more
information: https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/
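A minimal sketch of the #546/#382 environment variable interpolation above, keeping the passphrase itself out of the configuration file:

```yaml
storage:
    # Resolved from the environment at runtime instead of being stored in
    # the configuration file itself.
    encryption_passphrase: ${YOUR_PASSPHRASE}
```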
1.6.3
* #541: Add "borgmatic list --find" flag for searching for files across multiple archives, useful
for hunting down that file you accidentally deleted so you can extract it. See the documentation
for more information:
https://torsion.org/borgmatic/docs/how-to/inspect-your-backups/#searching-for-a-file
* #543: Add a monitoring hook for sending push notifications via ntfy. See the documentation for
more information: https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#ntfy-hook
* Fix Bash completion script to no longer alter your shell's settings (complain about unset
variables or error on pipe failures).
* Deprecate "borgmatic list --successful" flag, as listing only non-checkpoint (successful)
archives is now the default in newer versions of Borg.
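A sketch of the #543 ntfy hook above; the topic name is a placeholder and the "states" usage is an assumption:

```yaml
hooks:
    ntfy:
        topic: my-borgmatic-backups    # placeholder topic name
        # Assumption: notifications are restricted to failures via "states".
        states:
            - fail
```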
1.6.2
* #523: Reduce the default consistency check frequency and support configuring the frequency
independently for each check. Also add "borgmatic check --force" flag to ignore configured
frequencies. See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/deal-with-very-large-backups/#check-frequency
* #536: Fix generate-borgmatic-config to support more complex schema changes like the new
Healthchecks configuration options when the "--source" flag is used.
* #538: Add support for "borgmatic borg debug" command.
* #539: Add "generate-borgmatic-config --overwrite" flag to replace an existing destination file.
* Add Bash completion script so you can tab-complete the borgmatic command-line. See the
documentation for more information:
https://torsion.org/borgmatic/docs/how-to/set-up-backups/#shell-completion
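The #523 per-check frequency above is the same shape shown in the README diff later in this changeset; a minimal sketch:

```yaml
consistency:
    checks:
        - name: repository
        - name: archives
          frequency: 2 weeks
```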
1.6.1
* #294: Add Healthchecks monitoring hook "ping_body_limit" option to configure how many bytes of
logs to send to the Healthchecks server.
* #402: Remove the error when "archive_name_format" is specified but a retention prefix isn't.
* #420: Warn when an unsupported variable is used in a hook command.
* #439: Change connection failures for monitoring hooks (Healthchecks, Cronitor, PagerDuty, and
Cronhub) to be warnings instead of errors. This way, the monitoring system failing does not block
backups.
* #460: Add Healthchecks monitoring hook "send_logs" option to enable/disable sending borgmatic
logs to the Healthchecks server.
* #525: Add Healthchecks monitoring hook "states" option to only enable pinging for particular
monitoring states (start, finish, fail).
* #528: Improve the error message when a configuration override contains an invalid value.
* #531: BREAKING: When deep merging common configuration, merge colliding list values by appending
them. Previously, one list replaced the other.
* #532: When a configuration include is a relative path, load it from either the current working
directory or from the directory containing the file doing the including. Previously, only the
working directory was used.
* Add a randomized delay to the sample systemd timer to spread out the load on a server.
* Change the configuration format for borgmatic monitoring hooks (Healthchecks, Cronitor,
PagerDuty, and Cronhub) to specify the ping URL / integration key as a named option. The intent
is to support additional options (some in this release). This change is backwards-compatible.
* Add emojis to documentation table of contents to make it easier to find particular how-to and
reference guides at a glance.
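A sketch of the #294 option above; the URL is a placeholder and the byte count mirrors the 100k truncation mentioned elsewhere in these notes:

```yaml
hooks:
    healthchecks:
        ping_url: https://hc-ping.com/your-uuid    # placeholder URL
        # Cap the bytes of borgmatic logs sent with each ping.
        ping_body_limit: 100000
```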
1.6.0
* #381: BREAKING: Greatly simplify configuration file reuse by deep merging when including common
configuration. See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/#include-merging
* #473: BREAKING: Instead of executing "before" command hooks before all borgmatic actions run (and
"after" hooks after), execute these hooks right before/after the corresponding action. E.g.,
"before_check" now runs immediately before the "check" action. This better supports running
timing-sensitive tasks like pausing containers. Side effect: before/after command hooks now run
once for each configured repository instead of once per configuration file. Additionally, the
"repositories" interpolated variable has been changed to "repository", containing the path to the
current repository for the hook. See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/
* #513: Add mention of sudo's "secure_path" option to borgmatic installation documentation.
* #515: Fix "borgmatic borg key ..." to pass parameters to Borg in the correct order.
* #516: Fix handling of TERM signal to exit borgmatic, not just forward the signal to Borg.
* #517: Fix borgmatic exit code (so it's zero) when initial Borg calls fail but later retries
succeed.
* Change Healthchecks logs truncation size from 10k bytes to 100k bytes, corresponding to that
same change on Healthchecks.io.
1.5.24
* #431: Add "working_directory" option to support source directories with relative paths.
* #444: When loading a configuration file that is unreadable due to file permissions, warn instead
of erroring. This supports running borgmatic as a non-root user with configuration in ~/.config
even if there is an unreadable global configuration file in /etc.
* #469: Add "repositories" context to "before_*" and "after_*" command action hooks. See the
documentation for more information:
https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/
* #486: Fix handling of "patterns_from" and "exclude_from" options to error instead of warning when
referencing unreadable files and "create" action is run.
* #507: Fix Borg usage error in the "compact" action when running "borgmatic --dry-run". Now, skip
"compact" entirely during a dry run.
1.5.23
* #394: Compact repository segments and free space with new "borgmatic compact" action. Borg 1.2+
only. Also run "compact" by default when no actions are specified, as "prune" in Borg 1.2 no
longer frees up space unless "compact" is run.
* #394: When using the "atime", "bsd_flags", "numeric_owner", or "remote_rate_limit" options,
tailor the flags passed to Borg depending on the Borg version.
* #480, #482: Fix traceback when a YAML validation error occurs.
1.5.22
* #288: Add database dump hook for MongoDB.
* #470: Move mysqldump options to the beginning of the command due to MySQL bug 30994.
* #471: When command-line configuration override produces a parse error, error cleanly instead of
tracebacking.
* #476: Fix unicode error when restoring particular MySQL databases.
* Drop support for Python 3.6, which has been end-of-lifed.
* Add support for Python 3.10.
1.5.21
* #28: Optionally retry failing backups via "retries" and "retry_wait" configuration options.
* #306: Add "list_options" MySQL configuration option for passing additional arguments to MySQL
list command.
* #459: Add support for old version (2.x) of jsonschema library.
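A hedged sketch of the #28 retry options above, assuming they live in the storage section and that "retry_wait" is measured in seconds:

```yaml
storage:
    retries: 3        # retry a failing repository's actions up to 3 times
    retry_wait: 10    # assumed to be seconds between retry attempts
```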
1.5.20
* Re-release with correct version without dev0 tag.
1.5.19
* #387: Fix error when configured source directories are not present on the filesystem at the time
of backup. Now, Borg will complain, but the backup will still continue.
* #455: Mention changing borgmatic path in cron documentation.
* Update sample systemd service file with more granular read-only filesystem settings.
* Move Gitea and GitHub hosting from a personal namespace to an organization for better
collaboration with related projects.
* 1k ★s on GitHub!
1.5.18
* #389: Fix "message too long" error when logging to rsyslog.
* #440: Fix traceback that can occur when dumping a database.
1.5.17
* #437: Fix error when configuration file contains "umask" option.
* Remove test dependency on vim and /dev/urandom.
1.5.16
* #379: Suppress console output in sample crontab and systemd service files.
* #407: Fix syslog logging on FreeBSD.
* #430: Fix hang when restoring a PostgreSQL "tar" format database dump.
* Better error messages! Switch the library used for validating configuration files (from pykwalify
to jsonschema).
* Link borgmatic Ansible role from installation documentation:
https://torsion.org/borgmatic/docs/how-to/set-up-backups/#other-ways-to-install
1.5.15
* #419: Document use case of running backups conditionally based on laptop power level:
https://torsion.org/borgmatic/docs/how-to/backup-to-a-removable-drive-or-an-intermittent-server/
* #425: Run arbitrary Borg commands with new "borgmatic borg" action. See the documentation for
more information: https://torsion.org/borgmatic/docs/how-to/run-arbitrary-borg-commands/
1.5.14
* #390: Add link to Hetzner storage offering from the documentation.
* #398: Clarify canonical home of borgmatic in documentation.
* #406: Clarify that spaces in path names should not be backslashed.
* #423: Fix error handling to error loudly when Borg gets killed due to running out of memory!
* Fix build so as not to attempt to build and push documentation for a non-master branch.
* "Fix" build failure with Alpine Edge by switching from Edge to Alpine 3.13.
* Move #borgmatic IRC channel from Freenode to Libera Chat due to Freenode takeover drama.
IRC connection info: https://torsion.org/borgmatic/#issues
1.5.13
* #373: Document that passphrase is used for Borg keyfile encryption, not just repokey encryption.
* #404: Add support for ruamel.yaml 0.17.x YAML parsing library.
* Update systemd service example to return a permission error when a system call isn't permitted
(instead of terminating borgmatic outright).
* Drop support for Python 3.5, which has been end-of-lifed.
* Add support for Python 3.9.
* Update versions of test dependencies (test_requirements.txt and test containers).
* Only support black code formatter on Python 3.8+. New black dependencies make installation
difficult on older versions of Python.
* Replace "improve this documentation" form with link to support and ticket tracker.
1.5.12
* Fix for previous release with incorrect version suffix in setup.py. No other changes.
1.5.11
* #341: Add "temporary_directory" option for changing Borg's temporary directory.
* #352: Lock down systemd security settings in sample systemd service file.
* #355: Fix traceback when a database hook value is null in a configuration file.
@@ -41,7 +422,7 @@
configuration schema descriptions.
1.5.6
* #292: Allow before_backup and similiar hooks to exit with a soft failure without altering the
* #292: Allow before_backup and similar hooks to exit with a soft failure without altering the
monitoring status on Healthchecks or other providers. Support this by waiting to ping monitoring
services with a "start" status until after before_* hooks finish. Failures in before_* hooks
still trigger a monitoring "fail" status.
@@ -110,7 +491,7 @@
* For "list" and "info" actions, show repository names even at verbosity 0.
1.4.22
* #276, #285: Disable colored output when "--json" flag is used, so as to produce valid JSON ouput.
* #276, #285: Disable colored output when "--json" flag is used, so as to produce valid JSON output.
* After a backup of a database dump in directory format, properly remove the dump directory.
* In "borgmatic --help", don't expand $HOME in listing of default "--config" paths.
@@ -482,7 +863,7 @@
* #77: Skip non-"*.yaml" config filenames in /etc/borgmatic.d/ so as not to parse backup files,
editor swap files, etc.
* #81: Document user-defined hooks run before/after backup, or on error.
* Add code style guidelines to the documention.
* Add code style guidelines to the documentation.
1.2.0
* #61: Support for Borg --list option via borgmatic command-line to list all archives.
@@ -520,7 +901,7 @@
* #49: Support for Borg experimental --patterns-from and --patterns options for specifying mixed
includes/excludes.
* Moved issue tracker from Taiga to integrated Gitea tracker at
https://projects.torsion.org/witten/borgmatic/issues
https://projects.torsion.org/borgmatic-collective/borgmatic/issues
1.1.12
* #46: Declare dependency on pykwalify 1.6 or above, as older versions yield "Unknown key: version"

@@ -11,6 +11,8 @@ borgmatic is simple, configuration-driven backup software for servers and
workstations. Protect your files with client-side encryption. Backup your
databases too. Monitor it all with integrated third-party services.
The canonical home of borgmatic is at <a href="https://torsion.org/borgmatic">https://torsion.org/borgmatic</a>.
Here's an example configuration file:
```yaml
@@ -22,10 +24,10 @@ location:
# Paths of local or remote repositories to backup to.
repositories:
- 1234@usw-s001.rsync.net:backups.borg
- k8pDxu32@k8pDxu32.repo.borgbase.com:repo
- user1@scp2.cdn.lima-labs.com:repo
- /var/lib/backups/local.borg
- path: ssh://k8pDxu32@k8pDxu32.repo.borgbase.com/./repo
label: borgbase
- path: /var/lib/backups/local.borg
label: local
retention:
# Retention policy for how many backups to keep.
@@ -36,8 +38,9 @@ retention:
consistency:
# List of checks to run to validate your backups.
checks:
- repository
- archives
- name: repository
- name: archives
frequency: 2 weeks
hooks:
# Custom preparation scripts to run.
@@ -53,9 +56,9 @@
```
Want to see borgmatic in action? Check out the <a
href="https://asciinema.org/a/203761" target="_blank">screencast</a>.
href="https://asciinema.org/a/203761?autoplay=1" target="_blank">screencast</a>.
<script src="https://asciinema.org/a/203761.js" id="asciicast-203761" async></script>
<a href="https://asciinema.org/a/203761?autoplay=1" target="_blank"><img src="https://asciinema.org/a/203761.png" width="480"></a>
borgmatic is powered by [Borg Backup](https://www.borgbackup.org/).
@@ -64,11 +67,13 @@ borgmatic is powered by [Borg Backup](https://www.borgbackup.org/).
<a href="https://www.postgresql.org/"><img src="docs/static/postgresql.png" alt="PostgreSQL" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://www.mysql.com/"><img src="docs/static/mysql.png" alt="MySQL" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://mariadb.com/"><img src="docs/static/mariadb.png" alt="MariaDB" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://www.mongodb.com/"><img src="docs/static/mongodb.png" alt="MongoDB" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://sqlite.org/"><img src="docs/static/sqlite.png" alt="SQLite" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://healthchecks.io/"><img src="docs/static/healthchecks.png" alt="Healthchecks" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://cronitor.io/"><img src="docs/static/cronitor.png" alt="Cronitor" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://cronhub.io/"><img src="docs/static/cronhub.png" alt="Cronhub" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://www.pagerduty.com/"><img src="docs/static/pagerduty.png" alt="PagerDuty" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://www.rsync.net/cgi-bin/borg.cgi?campaign=borg&adgroup=borgmatic"><img src="docs/static/rsyncnet.png" alt="rsync.net" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://ntfy.sh/"><img src="docs/static/ntfy.png" alt="ntfy" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://www.borgbase.com/?utm_source=borgmatic"><img src="docs/static/borgbase.png" alt="BorgBase" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
@@ -77,65 +82,88 @@ borgmatic is powered by [Borg Backup](https://www.borgbackup.org/).
Your first step is to [install and configure
borgmatic](https://torsion.org/borgmatic/docs/how-to/set-up-backups/).
For additional documentation, check out the links above for <a
href="https://torsion.org/borgmatic/#documentation">borgmatic how-to and
For additional documentation, check out the links above (left panel on wide screens)
for <a href="https://torsion.org/borgmatic/#documentation">borgmatic how-to and
reference guides</a>.
## Hosting providers
Need somewhere to store your encrypted offsite backups? The following hosting
providers include specific support for Borg/borgmatic. Using these links and
services helps support borgmatic development and hosting. (These are referral
links, but without any tracking scripts or cookies.)
Need somewhere to store your encrypted off-site backups? The following hosting
providers include specific support for Borg/borgmatic—and fund borgmatic
development and hosting when you use these links to sign up. (These are
referral links, but without any tracking scripts or cookies.)
<ul>
<li class="referral"><a href="https://www.rsync.net/cgi-bin/borg.cgi?campaign=borg&adgroup=borgmatic">rsync.net</a>: Cloud Storage provider with full support for borg and any other SSH/SFTP tool</li>
<li class="referral"><a href="https://www.borgbase.com/?utm_source=borgmatic">BorgBase</a>: Borg hosting service with support for monitoring, 2FA, and append-only repos</li>
<li class="referral"><a href="https://storage.lima-labs.com/special-pricing-offer-for-borgmatic-users/">Lima-Labs</a>: Affordable, reliable cloud data storage accessable via SSH/SCP/FTP for Borg backups or any other bulk storage needs</li>
</ul>
Additionally, [rsync.net](https://www.rsync.net/products/borg.html) and
[Hetzner](https://www.hetzner.com/storage/storage-box) have compatible storage
offerings, but do not currently fund borgmatic development or hosting.
## Support and contributing
### Issues
You've got issues? Or an idea for a feature enhancement? We've got an [issue
tracker](https://projects.torsion.org/witten/borgmatic/issues). In order to
create a new issue or comment on an issue, you'll need to [login
first](https://projects.torsion.org/user/login). Note that you can login with
an existing GitHub account if you prefer.
If you'd like to chat with borgmatic developers or users, head on over to the
`#borgmatic` IRC channel on Freenode, either via <a
href="https://webchat.freenode.net/?channels=borgmatic">web chat</a> or a
native <a href="irc://chat.freenode.net:6697">IRC client</a>.
Are you experiencing an issue with borgmatic? Or do you have an idea for a
feature enhancement? Head on over to our [issue
tracker](https://projects.torsion.org/borgmatic-collective/borgmatic/issues).
In order to create a new issue or add a comment, you'll need to
[register](https://projects.torsion.org/user/sign_up?invite_code=borgmatic)
first. If you prefer to use an existing GitHub account, you can skip account
creation and [login directly](https://projects.torsion.org/user/login).
Also see the [security
policy](https://torsion.org/borgmatic/docs/security-policy/) for any security
issues.
### Social
Check out the [Borg subreddit](https://www.reddit.com/r/BorgBackup/) for
general Borg and borgmatic discussion and support.
Also follow [borgmatic on Mastodon](https://fosstodon.org/@borgmatic).
### Chat
To chat with borgmatic developers or users, check out the `#borgmatic`
IRC channel on Libera Chat, either via <a
href="https://web.libera.chat/#borgmatic">web chat</a> or a native <a
href="ircs://irc.libera.chat:6697">IRC client</a>. If you don't get a response
right away, please hang around a while—or file a ticket instead.
### Other
Other questions or comments? Contact
[witten@torsion.org](mailto:witten@torsion.org).
### Contributing
borgmatic is hosted at <https://torsion.org/borgmatic> with [source code
available](https://projects.torsion.org/witten/borgmatic), and is also
mirrored on [GitHub](https://github.com/witten/borgmatic) for convenience.
borgmatic [source code is
available](https://projects.torsion.org/borgmatic-collective/borgmatic) and is also mirrored
on [GitHub](https://github.com/borgmatic-collective/borgmatic) for convenience.
borgmatic is licensed under the GNU General Public License version 3 or any
later version.
If you'd like to contribute to borgmatic development, please feel free to
submit a [Pull Request](https://projects.torsion.org/witten/borgmatic/pulls)
or open an [issue](https://projects.torsion.org/witten/borgmatic/issues) first
to discuss your idea. We also accept Pull Requests on GitHub, if that's more
your thing. In general, contributions are very welcome. We don't bite!
submit a [Pull
Request](https://projects.torsion.org/borgmatic-collective/borgmatic/pulls) or
open an
[issue](https://projects.torsion.org/borgmatic-collective/borgmatic/issues) to
discuss your idea. Note that you'll need to
[register](https://projects.torsion.org/user/sign_up?invite_code=borgmatic)
first. We also accept Pull Requests on GitHub, if that's more your thing. In
general, contributions are very welcome. We don't bite!
Also, please check out the [borgmatic development
how-to](https://torsion.org/borgmatic/docs/how-to/develop-on-borgmatic/) for
info on cloning source code, running tests, etc.
<a href="https://build.torsion.org/witten/borgmatic" alt="build status">![Build Status](https://build.torsion.org/api/badges/witten/borgmatic/status.svg?ref=refs/heads/master)</a>
<a href="https://build.torsion.org/borgmatic-collective/borgmatic" alt="build status">![Build Status](https://build.torsion.org/api/badges/borgmatic-collective/borgmatic/status.svg?ref=refs/heads/master)</a>

@@ -6,14 +6,13 @@ permalink: security-policy/index.html
## Supported versions
While we want to hear about security vulnerabilities in all versions of
borgmatic, security fixes will only be made to the most recently released
version. It's not practical for our small volunteer effort to maintain
multiple different release branches and put out separate security patches for
each.
borgmatic, security fixes are only made to the most recently released version.
It's simply not practical for our small volunteer effort to maintain multiple
release branches and put out separate security patches for each.
## Reporting a vulnerability
If you find a security vulnerability, please [file a
ticket](https://torsion.org/borgmatic/#issues) or [send email
directly](mailto:witten@torsion.org) as appropriate. You should expect to hear
back within a few days at most, and generally sooner.
back within a few days at most and generally sooner.

borgmatic/actions/borg.py (new file)

@@ -0,0 +1,36 @@
import logging
import borgmatic.borg.borg
import borgmatic.borg.rlist
import borgmatic.config.validate
logger = logging.getLogger(__name__)
def run_borg(
repository, storage, local_borg_version, borg_arguments, local_path, remote_path,
):
'''
Run the "borg" action for the given repository.
'''
if borg_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, borg_arguments.repository
):
logger.info(f'{repository["path"]}: Running arbitrary Borg command')
archive_name = borgmatic.borg.rlist.resolve_archive_name(
repository['path'],
borg_arguments.archive,
storage,
local_borg_version,
local_path,
remote_path,
)
borgmatic.borg.borg.run_arbitrary_borg(
repository['path'],
storage,
local_borg_version,
options=borg_arguments.options,
archive=archive_name,
local_path=local_path,
remote_path=remote_path,
)

@@ -0,0 +1,25 @@
import logging
import borgmatic.borg.break_lock
import borgmatic.config.validate
logger = logging.getLogger(__name__)
def run_break_lock(
repository, storage, local_borg_version, break_lock_arguments, local_path, remote_path,
):
'''
Run the "break-lock" action for the given repository.
'''
if break_lock_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, break_lock_arguments.repository
):
logger.info(f'{repository["path"]}: Breaking repository and cache locks')
borgmatic.borg.break_lock.break_lock(
repository['path'],
storage,
local_borg_version,
local_path=local_path,
remote_path=remote_path,
)

@@ -0,0 +1,61 @@
import logging
import borgmatic.borg.check
import borgmatic.config.validate
import borgmatic.hooks.command
logger = logging.getLogger(__name__)
def run_check(
config_filename,
repository,
location,
storage,
consistency,
hooks,
hook_context,
local_borg_version,
check_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "check" action for the given repository.
'''
if check_arguments.repository and not borgmatic.config.validate.repositories_match(
repository, check_arguments.repository
):
return
borgmatic.hooks.command.execute_hook(
hooks.get('before_check'),
hooks.get('umask'),
config_filename,
'pre-check',
global_arguments.dry_run,
**hook_context,
)
logger.info(f'{repository["path"]}: Running consistency checks')
borgmatic.borg.check.check_archives(
repository['path'],
location,
storage,
consistency,
local_borg_version,
local_path=local_path,
remote_path=remote_path,
progress=check_arguments.progress,
repair=check_arguments.repair,
only_checks=check_arguments.only,
force=check_arguments.force,
)
borgmatic.hooks.command.execute_hook(
hooks.get('after_check'),
hooks.get('umask'),
config_filename,
'post-check',
global_arguments.dry_run,
**hook_context,
)

@@ -0,0 +1,63 @@
import logging
import borgmatic.borg.compact
import borgmatic.borg.feature
import borgmatic.config.validate
import borgmatic.hooks.command
logger = logging.getLogger(__name__)
def run_compact(
config_filename,
repository,
storage,
retention,
hooks,
hook_context,
local_borg_version,
compact_arguments,
global_arguments,
dry_run_label,
local_path,
remote_path,
):
'''
Run the "compact" action for the given repository.
'''
if compact_arguments.repository and not borgmatic.config.validate.repositories_match(
repository, compact_arguments.repository
):
return
borgmatic.hooks.command.execute_hook(
hooks.get('before_compact'),
hooks.get('umask'),
config_filename,
'pre-compact',
global_arguments.dry_run,
**hook_context,
)
if borgmatic.borg.feature.available(borgmatic.borg.feature.Feature.COMPACT, local_borg_version):
logger.info(f'{repository["path"]}: Compacting segments{dry_run_label}')
borgmatic.borg.compact.compact_segments(
global_arguments.dry_run,
repository['path'],
storage,
local_borg_version,
local_path=local_path,
remote_path=remote_path,
progress=compact_arguments.progress,
cleanup_commits=compact_arguments.cleanup_commits,
threshold=compact_arguments.threshold,
)
else: # pragma: nocover
logger.info(f'{repository["path"]}: Skipping compact (only available/needed in Borg 1.2+)')
borgmatic.hooks.command.execute_hook(
hooks.get('after_compact'),
hooks.get('umask'),
config_filename,
'post-compact',
global_arguments.dry_run,
**hook_context,
)

@@ -0,0 +1,96 @@
import json
import logging
import borgmatic.borg.create
import borgmatic.config.validate
import borgmatic.hooks.command
import borgmatic.hooks.dispatch
import borgmatic.hooks.dump
logger = logging.getLogger(__name__)
def run_create(
config_filename,
repository,
location,
storage,
hooks,
hook_context,
local_borg_version,
create_arguments,
global_arguments,
dry_run_label,
local_path,
remote_path,
):
'''
Run the "create" action for the given repository.
If create_arguments.json is True, yield the JSON output from creating the archive.
'''
if create_arguments.repository and not borgmatic.config.validate.repositories_match(
repository, create_arguments.repository
):
return
borgmatic.hooks.command.execute_hook(
hooks.get('before_backup'),
hooks.get('umask'),
config_filename,
'pre-backup',
global_arguments.dry_run,
**hook_context,
)
logger.info(f'{repository["path"]}: Creating archive{dry_run_label}')
borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
'remove_database_dumps',
hooks,
repository['path'],
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
location,
global_arguments.dry_run,
)
active_dumps = borgmatic.hooks.dispatch.call_hooks(
'dump_databases',
hooks,
repository['path'],
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
location,
global_arguments.dry_run,
)
stream_processes = [process for processes in active_dumps.values() for process in processes]
json_output = borgmatic.borg.create.create_archive(
global_arguments.dry_run,
repository['path'],
location,
storage,
local_borg_version,
local_path=local_path,
remote_path=remote_path,
progress=create_arguments.progress,
stats=create_arguments.stats,
json=create_arguments.json,
list_files=create_arguments.list_files,
stream_processes=stream_processes,
)
if json_output: # pragma: nocover
yield json.loads(json_output)
borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
'remove_database_dumps',
hooks,
config_filename,
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
location,
global_arguments.dry_run,
)
borgmatic.hooks.command.execute_hook(
hooks.get('after_backup'),
hooks.get('umask'),
config_filename,
'post-backup',
global_arguments.dry_run,
**hook_context,
)

@@ -0,0 +1,48 @@
import logging
import borgmatic.borg.export_tar
import borgmatic.borg.rlist
import borgmatic.config.validate
logger = logging.getLogger(__name__)
def run_export_tar(
repository,
storage,
local_borg_version,
export_tar_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "export-tar" action for the given repository.
'''
if export_tar_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, export_tar_arguments.repository
):
logger.info(
f'{repository["path"]}: Exporting archive {export_tar_arguments.archive} as tar file'
)
borgmatic.borg.export_tar.export_tar_archive(
global_arguments.dry_run,
repository['path'],
borgmatic.borg.rlist.resolve_archive_name(
repository['path'],
export_tar_arguments.archive,
storage,
local_borg_version,
local_path,
remote_path,
),
export_tar_arguments.paths,
export_tar_arguments.destination,
storage,
local_borg_version,
local_path=local_path,
remote_path=remote_path,
tar_filter=export_tar_arguments.tar_filter,
list_files=export_tar_arguments.list_files,
strip_components=export_tar_arguments.strip_components,
)

@@ -0,0 +1,67 @@
import logging
import borgmatic.borg.extract
import borgmatic.borg.rlist
import borgmatic.config.validate
import borgmatic.hooks.command
logger = logging.getLogger(__name__)
def run_extract(
config_filename,
repository,
location,
storage,
hooks,
hook_context,
local_borg_version,
extract_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "extract" action for the given repository.
'''
borgmatic.hooks.command.execute_hook(
hooks.get('before_extract'),
hooks.get('umask'),
config_filename,
'pre-extract',
global_arguments.dry_run,
**hook_context,
)
if extract_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, extract_arguments.repository
):
logger.info(f'{repository["path"]}: Extracting archive {extract_arguments.archive}')
borgmatic.borg.extract.extract_archive(
global_arguments.dry_run,
repository['path'],
borgmatic.borg.rlist.resolve_archive_name(
repository['path'],
extract_arguments.archive,
storage,
local_borg_version,
local_path,
remote_path,
),
extract_arguments.paths,
location,
storage,
local_borg_version,
local_path=local_path,
remote_path=remote_path,
destination_path=extract_arguments.destination,
strip_components=extract_arguments.strip_components,
progress=extract_arguments.progress,
)
borgmatic.hooks.command.execute_hook(
hooks.get('after_extract'),
hooks.get('umask'),
config_filename,
'post-extract',
global_arguments.dry_run,
**hook_context,
)

borgmatic/actions/info.py Normal file
@@ -0,0 +1,41 @@
import json
import logging
import borgmatic.borg.info
import borgmatic.borg.rlist
import borgmatic.config.validate
logger = logging.getLogger(__name__)
def run_info(
repository, storage, local_borg_version, info_arguments, local_path, remote_path,
):
'''
Run the "info" action for the given repository and archive.
If info_arguments.json is True, yield the JSON output from the info for the archive.
'''
if info_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, info_arguments.repository
):
if not info_arguments.json: # pragma: nocover
logger.answer(f'{repository["path"]}: Displaying archive summary information')
info_arguments.archive = borgmatic.borg.rlist.resolve_archive_name(
repository['path'],
info_arguments.archive,
storage,
local_borg_version,
local_path,
remote_path,
)
json_output = borgmatic.borg.info.display_archives_info(
repository['path'],
storage,
local_borg_version,
info_arguments=info_arguments,
local_path=local_path,
remote_path=remote_path,
)
if json_output: # pragma: nocover
yield json.loads(json_output)

borgmatic/actions/list.py Normal file
@@ -0,0 +1,43 @@
import json
import logging
import borgmatic.borg.list
import borgmatic.borg.rlist
import borgmatic.config.validate
logger = logging.getLogger(__name__)
def run_list(
repository, storage, local_borg_version, list_arguments, local_path, remote_path,
):
'''
Run the "list" action for the given repository and archive.
If list_arguments.json is True, yield the JSON output from listing the archive.
'''
if list_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, list_arguments.repository
):
if not list_arguments.json: # pragma: nocover
if list_arguments.find_paths:
logger.answer(f'{repository["path"]}: Searching archives')
elif not list_arguments.archive:
logger.answer(f'{repository["path"]}: Listing archives')
list_arguments.archive = borgmatic.borg.rlist.resolve_archive_name(
repository['path'],
list_arguments.archive,
storage,
local_borg_version,
local_path,
remote_path,
)
json_output = borgmatic.borg.list.list_archive(
repository['path'],
storage,
local_borg_version,
list_arguments=list_arguments,
local_path=local_path,
remote_path=remote_path,
)
if json_output: # pragma: nocover
yield json.loads(json_output)

@@ -0,0 +1,42 @@
import logging
import borgmatic.borg.mount
import borgmatic.borg.rlist
import borgmatic.config.validate
logger = logging.getLogger(__name__)
def run_mount(
repository, storage, local_borg_version, mount_arguments, local_path, remote_path,
):
'''
Run the "mount" action for the given repository.
'''
if mount_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, mount_arguments.repository
):
if mount_arguments.archive:
logger.info(f'{repository["path"]}: Mounting archive {mount_arguments.archive}')
else: # pragma: nocover
logger.info(f'{repository["path"]}: Mounting repository')
borgmatic.borg.mount.mount_archive(
repository['path'],
borgmatic.borg.rlist.resolve_archive_name(
repository['path'],
mount_arguments.archive,
storage,
local_borg_version,
local_path,
remote_path,
),
mount_arguments.mount_point,
mount_arguments.paths,
mount_arguments.foreground,
mount_arguments.options,
storage,
local_borg_version,
local_path=local_path,
remote_path=remote_path,
)

@@ -0,0 +1,59 @@
import logging
import borgmatic.borg.prune
import borgmatic.config.validate
import borgmatic.hooks.command
logger = logging.getLogger(__name__)
def run_prune(
config_filename,
repository,
storage,
retention,
hooks,
hook_context,
local_borg_version,
prune_arguments,
global_arguments,
dry_run_label,
local_path,
remote_path,
):
'''
Run the "prune" action for the given repository.
'''
if prune_arguments.repository and not borgmatic.config.validate.repositories_match(
repository, prune_arguments.repository
):
return
borgmatic.hooks.command.execute_hook(
hooks.get('before_prune'),
hooks.get('umask'),
config_filename,
'pre-prune',
global_arguments.dry_run,
**hook_context,
)
logger.info(f'{repository["path"]}: Pruning archives{dry_run_label}')
borgmatic.borg.prune.prune_archives(
global_arguments.dry_run,
repository['path'],
storage,
retention,
local_borg_version,
local_path=local_path,
remote_path=remote_path,
stats=prune_arguments.stats,
list_archives=prune_arguments.list_archives,
)
borgmatic.hooks.command.execute_hook(
hooks.get('after_prune'),
hooks.get('umask'),
config_filename,
'post-prune',
global_arguments.dry_run,
**hook_context,
)

@@ -0,0 +1,40 @@
import logging
import borgmatic.borg.rcreate
import borgmatic.config.validate
logger = logging.getLogger(__name__)
def run_rcreate(
repository,
storage,
local_borg_version,
rcreate_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "rcreate" action for the given repository.
'''
if rcreate_arguments.repository and not borgmatic.config.validate.repositories_match(
repository, rcreate_arguments.repository
):
return
logger.info(f'{repository["path"]}: Creating repository')
borgmatic.borg.rcreate.create_repository(
global_arguments.dry_run,
repository['path'],
storage,
local_borg_version,
rcreate_arguments.encryption_mode,
rcreate_arguments.source_repository,
rcreate_arguments.copy_crypt_key,
rcreate_arguments.append_only,
rcreate_arguments.storage_quota,
rcreate_arguments.make_parent_dirs,
local_path=local_path,
remote_path=remote_path,
)

@@ -0,0 +1,357 @@
import copy
import logging
import os
import borgmatic.borg.extract
import borgmatic.borg.list
import borgmatic.borg.mount
import borgmatic.borg.rlist
import borgmatic.borg.state
import borgmatic.config.validate
import borgmatic.hooks.dispatch
import borgmatic.hooks.dump
logger = logging.getLogger(__name__)
UNSPECIFIED_HOOK = object()
def get_configured_database(
hooks, archive_database_names, hook_name, database_name, configuration_database_name=None
):
'''
Find the first database with the given hook name and database name in the configured hooks
dict and the given archive database names dict (from hook name to database names contained in
a particular backup archive). If UNSPECIFIED_HOOK is given as the hook name, search all database
hooks for the named database. If a configuration database name is given, use that instead of the
database name to look up the database in the given hooks configuration.
Return the found database as a tuple of (found hook name, database configuration dict).
'''
if not configuration_database_name:
configuration_database_name = database_name
if hook_name == UNSPECIFIED_HOOK:
hooks_to_search = hooks
else:
hooks_to_search = {hook_name: hooks[hook_name]}
return next(
(
(name, hook_database)
for (name, hook) in hooks_to_search.items()
for hook_database in hook
if hook_database['name'] == configuration_database_name
and database_name in archive_database_names.get(name, [])
),
(None, None),
)
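A minimal sketch of this lookup, using hypothetical values ('postgresql_databases' is one of borgmatic's database hook names):

    hooks = {'postgresql_databases': [{'name': 'users'}]}
    archive_database_names = {'postgresql_databases': ['users']}

    get_configured_database(hooks, archive_database_names, UNSPECIFIED_HOOK, 'users')
    # -> ('postgresql_databases', {'name': 'users'})
    get_configured_database(hooks, archive_database_names, UNSPECIFIED_HOOK, 'missing')
    # -> (None, None)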
def get_configured_hook_name_and_database(hooks, database_name):
'''
Find the hook name and first database dict with the given database name in the configured hooks
dict. This searches across all database hooks.
'''
def restore_single_database(
repository,
location,
storage,
hooks,
local_borg_version,
global_arguments,
local_path,
remote_path,
archive_name,
hook_name,
database,
): # pragma: no cover
'''
Given (among other things) an archive name, a database hook name, and a configured database
configuration dict, restore that database from the archive.
'''
logger.info(f'{repository}: Restoring database {database["name"]}')
dump_pattern = borgmatic.hooks.dispatch.call_hooks(
'make_database_dump_pattern',
hooks,
repository,
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
location,
database['name'],
)[hook_name]
# Kick off a single database extract to stdout.
extract_process = borgmatic.borg.extract.extract_archive(
dry_run=global_arguments.dry_run,
repository=repository,
archive=archive_name,
paths=borgmatic.hooks.dump.convert_glob_patterns_to_borg_patterns([dump_pattern]),
location_config=location,
storage_config=storage,
local_borg_version=local_borg_version,
local_path=local_path,
remote_path=remote_path,
destination_path='/',
# A directory format dump isn't a single file, and therefore can't extract
# to stdout. In this case, the extract_process return value is None.
extract_to_stdout=bool(database.get('format') != 'directory'),
)
# Run a single database restore, consuming the extract stdout (if any).
borgmatic.hooks.dispatch.call_hooks(
'restore_database_dump',
{hook_name: [database]},
repository,
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
location,
global_arguments.dry_run,
extract_process,
)
def collect_archive_database_names(
repository, archive, location, storage, local_borg_version, local_path, remote_path,
):
'''
Given a local or remote repository path, a resolved archive name, a location configuration dict,
a storage configuration dict, the local Borg version, and local and remote Borg paths, query the
archive for the names of databases it contains and return them as a dict from hook name to a
sequence of database names.
'''
borgmatic_source_directory = os.path.expanduser(
location.get(
'borgmatic_source_directory', borgmatic.borg.state.DEFAULT_BORGMATIC_SOURCE_DIRECTORY
)
).lstrip('/')
parent_dump_path = os.path.expanduser(
borgmatic.hooks.dump.make_database_dump_path(borgmatic_source_directory, '*_databases/*/*')
)
dump_paths = borgmatic.borg.list.capture_archive_listing(
repository,
archive,
storage,
local_borg_version,
list_path=parent_dump_path,
local_path=local_path,
remote_path=remote_path,
)
# Determine the database names corresponding to the dumps found in the archive, and
# group them by the database hook that produced them.
archive_database_names = {}
for dump_path in dump_paths:
try:
(hook_name, _, database_name) = dump_path.split(
borgmatic_source_directory + os.path.sep, 1
)[1].split(os.path.sep)[0:3]
except (ValueError, IndexError):
logger.warning(
f'{repository}: Ignoring invalid database dump path "{dump_path}" in archive {archive}'
)
else:
if database_name not in archive_database_names.get(hook_name, []):
archive_database_names.setdefault(hook_name, []).extend([database_name])
return archive_database_names
def find_databases_to_restore(requested_database_names, archive_database_names):
'''
Given a sequence of requested database names to restore and a dict of hook name to the names of
databases found in an archive, return an expanded sequence of database names to restore,
replacing "all" with actual database names as appropriate.
Raise ValueError if any of the requested database names cannot be found in the archive.
'''
# A map from database hook name to the database names to restore for that hook.
restore_names = (
{UNSPECIFIED_HOOK: requested_database_names}
if requested_database_names
else {UNSPECIFIED_HOOK: ['all']}
)
# If "all" is in restore_names, then replace it with the names of dumps found within the
# archive.
if 'all' in restore_names[UNSPECIFIED_HOOK]:
restore_names[UNSPECIFIED_HOOK].remove('all')
for (hook_name, database_names) in archive_database_names.items():
restore_names.setdefault(hook_name, []).extend(database_names)
# If a database is to be restored as part of "all", then remove it from restore names so
# it doesn't get restored twice.
for database_name in database_names:
if database_name in restore_names[UNSPECIFIED_HOOK]:
restore_names[UNSPECIFIED_HOOK].remove(database_name)
if not restore_names[UNSPECIFIED_HOOK]:
restore_names.pop(UNSPECIFIED_HOOK)
combined_restore_names = set(
name for database_names in restore_names.values() for name in database_names
)
combined_archive_database_names = set(
name for database_names in archive_database_names.values() for name in database_names
)
missing_names = sorted(set(combined_restore_names) - combined_archive_database_names)
if missing_names:
joined_names = ', '.join(f'"{name}"' for name in missing_names)
raise ValueError(
f"Cannot restore database{'s' if len(missing_names) > 1 else ''} {joined_names} missing from archive"
)
return restore_names
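For instance, with hypothetical archive contents, requesting no databases at all defaults to "all" and expands to everything found in the archive:

    archive_database_names = {'postgresql_databases': ['users', 'orders']}

    find_databases_to_restore(None, archive_database_names)
    # -> {'postgresql_databases': ['users', 'orders']}
    find_databases_to_restore(['users'], archive_database_names)
    # -> {UNSPECIFIED_HOOK: ['users']}
    find_databases_to_restore(['nope'], archive_database_names)
    # raises ValueError: Cannot restore database "nope" missing from archive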
def ensure_databases_found(restore_names, remaining_restore_names, found_names):
'''
Given a dict from hook name to database names to restore, a dict from hook name to remaining
database names to restore, and a sequence of found (actually restored) database names, raise
ValueError if requested databases to restore were missing from the archive and/or configuration.
'''
combined_restore_names = set(
name
for database_names in tuple(restore_names.values())
+ tuple(remaining_restore_names.values())
for name in database_names
)
if not combined_restore_names and not found_names:
raise ValueError('No databases were found to restore')
missing_names = sorted(set(combined_restore_names) - set(found_names))
if missing_names:
joined_names = ', '.join(f'"{name}"' for name in missing_names)
raise ValueError(
f"Cannot restore database{'s' if len(missing_names) > 1 else ''} {joined_names} missing from borgmatic's configuration"
)
def run_restore(
repository,
location,
storage,
hooks,
local_borg_version,
restore_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "restore" action for the given repository, but only if the repository matches the
requested repository in restore arguments.
Raise ValueError if a configured database could not be found to restore.
'''
if restore_arguments.repository and not borgmatic.config.validate.repositories_match(
repository, restore_arguments.repository
):
return
logger.info(
f'{repository["path"]}: Restoring databases from archive {restore_arguments.archive}'
)
borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
'remove_database_dumps',
hooks,
repository['path'],
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
location,
global_arguments.dry_run,
)
archive_name = borgmatic.borg.rlist.resolve_archive_name(
repository['path'],
restore_arguments.archive,
storage,
local_borg_version,
local_path,
remote_path,
)
archive_database_names = collect_archive_database_names(
repository['path'],
archive_name,
location,
storage,
local_borg_version,
local_path,
remote_path,
)
restore_names = find_databases_to_restore(restore_arguments.databases, archive_database_names)
found_names = set()
remaining_restore_names = {}
for hook_name, database_names in restore_names.items():
for database_name in database_names:
found_hook_name, found_database = get_configured_database(
hooks, archive_database_names, hook_name, database_name
)
if not found_database:
remaining_restore_names.setdefault(found_hook_name or hook_name, []).append(
database_name
)
continue
found_names.add(database_name)
restore_single_database(
repository['path'],
location,
storage,
hooks,
local_borg_version,
global_arguments,
local_path,
remote_path,
archive_name,
found_hook_name or hook_name,
found_database,
)
# For any databases that weren't found via exact matches in the hooks configuration, try to
# fall back to "all" entries.
for hook_name, database_names in remaining_restore_names.items():
for database_name in database_names:
found_hook_name, found_database = get_configured_database(
hooks, archive_database_names, hook_name, database_name, 'all'
)
if not found_database:
continue
found_names.add(database_name)
database = copy.copy(found_database)
database['name'] = database_name
restore_single_database(
repository['path'],
location,
storage,
hooks,
local_borg_version,
global_arguments,
local_path,
remote_path,
archive_name,
found_hook_name or hook_name,
database,
)
borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
'remove_database_dumps',
hooks,
repository['path'],
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
location,
global_arguments.dry_run,
)
ensure_databases_found(restore_names, remaining_restore_names, found_names)

@@ -0,0 +1,33 @@
import json
import logging
import borgmatic.borg.rinfo
import borgmatic.config.validate
logger = logging.getLogger(__name__)
def run_rinfo(
repository, storage, local_borg_version, rinfo_arguments, local_path, remote_path,
):
'''
Run the "rinfo" action for the given repository.
If rinfo_arguments.json is True, yield the JSON output from the info for the repository.
'''
if rinfo_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, rinfo_arguments.repository
):
if not rinfo_arguments.json: # pragma: nocover
logger.answer(f'{repository["path"]}: Displaying repository summary information')
json_output = borgmatic.borg.rinfo.display_repository_info(
repository['path'],
storage,
local_borg_version,
rinfo_arguments=rinfo_arguments,
local_path=local_path,
remote_path=remote_path,
)
if json_output: # pragma: nocover
yield json.loads(json_output)

@@ -0,0 +1,33 @@
import json
import logging
import borgmatic.borg.rlist
import borgmatic.config.validate
logger = logging.getLogger(__name__)
def run_rlist(
repository, storage, local_borg_version, rlist_arguments, local_path, remote_path,
):
'''
Run the "rlist" action for the given repository.
If rlist_arguments.json is True, yield the JSON output from listing the repository.
'''
if rlist_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, rlist_arguments.repository
):
if not rlist_arguments.json: # pragma: nocover
logger.answer(f'{repository["path"]}: Listing repository')
json_output = borgmatic.borg.rlist.list_repository(
repository['path'],
storage,
local_borg_version,
rlist_arguments=rlist_arguments,
local_path=local_path,
remote_path=remote_path,
)
if json_output: # pragma: nocover
yield json.loads(json_output)

@@ -0,0 +1,29 @@
import logging
import borgmatic.borg.transfer
logger = logging.getLogger(__name__)
def run_transfer(
repository,
storage,
local_borg_version,
transfer_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "transfer" action for the given repository.
'''
logger.info(f'{repository["path"]}: Transferring archives to repository')
borgmatic.borg.transfer.transfer_archives(
global_arguments.dry_run,
repository['path'],
storage,
local_borg_version,
transfer_arguments,
local_path=local_path,
remote_path=remote_path,
)

borgmatic/borg/borg.py Normal file
@@ -0,0 +1,68 @@
import logging
import borgmatic.logger
from borgmatic.borg import environment, flags
from borgmatic.execute import execute_command
logger = logging.getLogger(__name__)
REPOSITORYLESS_BORG_COMMANDS = {'serve', None}
BORG_SUBCOMMANDS_WITH_SUBCOMMANDS = {'key', 'debug'}
BORG_SUBCOMMANDS_WITHOUT_REPOSITORY = (('debug', 'info'), ('debug', 'convert-profile'), ())
def run_arbitrary_borg(
repository_path,
storage_config,
local_borg_version,
options,
archive=None,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, a storage config dict, the local Borg version, a
sequence of arbitrary command-line Borg options, and an optional archive name, run an arbitrary
Borg command on the given repository/archive.
'''
borgmatic.logger.add_custom_log_levels()
lock_wait = storage_config.get('lock_wait', None)
try:
options = options[1:] if options[0] == '--' else options
# Borg commands like "key" have a sub-command ("export", etc.) that must follow it.
command_options_start_index = 2 if options[0] in BORG_SUBCOMMANDS_WITH_SUBCOMMANDS else 1
borg_command = tuple(options[:command_options_start_index])
command_options = tuple(options[command_options_start_index:])
except IndexError:
borg_command = ()
command_options = ()
if borg_command in BORG_SUBCOMMANDS_WITHOUT_REPOSITORY:
repository_archive_flags = ()
elif archive:
repository_archive_flags = flags.make_repository_archive_flags(
repository_path, archive, local_borg_version
)
else:
repository_archive_flags = flags.make_repository_flags(repository_path, local_borg_version)
full_command = (
(local_path,)
+ borg_command
+ repository_archive_flags
+ command_options
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ flags.make_flags('remote-path', remote_path)
+ flags.make_flags('lock-wait', lock_wait)
)
return execute_command(
full_command,
output_log_level=logging.ANSWER,
borg_local_path=local_path,
extra_environment=environment.make_environment(storage_config),
)
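To sketch the option parsing above with a few hypothetical invocations:

    # options=['list']           -> borg_command=('list',), command_options=()
    # options=['key', 'export']  -> borg_command=('key', 'export'), since "key"
    #                               takes a sub-command of its own
    # options=['--', 'serve']    -> the leading '--' is stripped before parsing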

@@ -0,0 +1,31 @@
import logging
from borgmatic.borg import environment, flags
from borgmatic.execute import execute_command
logger = logging.getLogger(__name__)
def break_lock(
repository_path, storage_config, local_borg_version, local_path='borg', remote_path=None,
):
'''
Given a local or remote repository path, a storage configuration dict, the local Borg version,
and optional local and remote Borg paths, break any repository and cache locks left over from Borg
aborting.
'''
umask = storage_config.get('umask', None)
lock_wait = storage_config.get('lock_wait', None)
full_command = (
(local_path, 'break-lock')
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--umask', str(umask)) if umask else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ flags.make_repository_flags(repository_path, local_borg_version)
)
borg_environment = environment.make_environment(storage_config)
execute_command(full_command, borg_local_path=local_path, extra_environment=borg_environment)

@@ -1,48 +1,155 @@
import argparse
import datetime
import json
import logging
import os
import pathlib
from borgmatic.borg import extract
from borgmatic.borg import environment, extract, feature, flags, rinfo, state
from borgmatic.execute import DO_NOT_CAPTURE, execute_command
DEFAULT_CHECKS = ('repository', 'archives')
DEFAULT_PREFIX = '{hostname}-'
DEFAULT_CHECKS = (
{'name': 'repository', 'frequency': '1 month'},
{'name': 'archives', 'frequency': '1 month'},
)
logger = logging.getLogger(__name__)
def _parse_checks(consistency_config, only_checks=None):
def parse_checks(consistency_config, only_checks=None):
'''
Given a consistency config with a "checks" list, and an optional list of override checks,
transform them a tuple of named checks to run.
Given a consistency config with a "checks" sequence of dicts and an optional list of override
checks, return a tuple of named checks to run.
For example, given a retention config of:
{'checks': ['repository', 'archives']}
{'checks': ({'name': 'repository'}, {'name': 'archives'})}
This will be returned as:
('repository', 'archives')
If no "checks" option is present in the config, return the DEFAULT_CHECKS. If the checks value
is the string "disabled", return an empty tuple, meaning that no checks should be run.
If the "data" option is present, then make sure the "archives" option is included as well.
If no "checks" option is present in the config, return the DEFAULT_CHECKS. If a checks value
has a name of "disabled", return an empty tuple, meaning that no checks should be run.
'''
checks = [
check.lower() for check in (only_checks or consistency_config.get('checks', []) or [])
]
if checks == ['disabled']:
checks = only_checks or tuple(
check_config['name']
for check_config in (consistency_config.get('checks', None) or DEFAULT_CHECKS)
)
checks = tuple(check.lower() for check in checks)
if 'disabled' in checks:
if len(checks) > 1:
logger.warning(
'Multiple checks are configured, but one of them is "disabled"; not running any checks'
)
return ()
if 'data' in checks and 'archives' not in checks:
checks.append('archives')
return tuple(check for check in checks if check not in ('disabled', '')) or DEFAULT_CHECKS
return checks
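A quick sketch of the parsing behavior implied above:

    parse_checks({'checks': [{'name': 'repository'}, {'name': 'archives'}]})
    # -> ('repository', 'archives')
    parse_checks({})
    # -> ('repository', 'archives'), via DEFAULT_CHECKS
    parse_checks({'checks': [{'name': 'disabled'}]})
    # -> (), i.e. no checks run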
def _make_check_flags(checks, check_last=None, prefix=None):
def parse_frequency(frequency):
'''
Given a parsed sequence of checks, transform it into tuple of command-line flags.
Given a frequency string with a number and a unit of time, return a corresponding
datetime.timedelta instance or None if the frequency is None or "always".
For instance, given "3 weeks", return datetime.timedelta(weeks=3).
Raise ValueError if the given frequency cannot be parsed.
'''
if not frequency:
return None
frequency = frequency.strip().lower()
if frequency == 'always':
return None
try:
number, time_unit = frequency.split(' ')
number = int(number)
except ValueError:
raise ValueError(f"Could not parse consistency check frequency '{frequency}'")
if not time_unit.endswith('s'):
time_unit += 's'
if time_unit == 'months':
number *= 30
time_unit = 'days'
elif time_unit == 'years':
number *= 365
time_unit = 'days'
try:
return datetime.timedelta(**{time_unit: number})
except TypeError:
raise ValueError(f"Could not parse consistency check frequency '{frequency}'")
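For example, a few frequencies as this parser would handle them:

    parse_frequency('3 weeks')    # -> datetime.timedelta(weeks=3)
    parse_frequency('1 month')    # -> datetime.timedelta(days=30)
    parse_frequency('2 years')    # -> datetime.timedelta(days=730)
    parse_frequency('always')     # -> None
    parse_frequency('fortnight')  # raises ValueError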
def filter_checks_on_frequency(
location_config, consistency_config, borg_repository_id, checks, force
):
'''
Given a location config, a consistency config with a "checks" sequence of dicts, a Borg
repository ID, a sequence of checks, and whether to force checks to run, filter down those
checks based on the configured "frequency" for each check as compared to its check time file.
In other words, a check whose check time file's timestamp is too new (based on the configured
frequency) will get cut from the returned sequence of checks. Example:
consistency_config = {
'checks': [
{
'name': 'archives',
'frequency': '2 weeks',
},
]
}
When this function is called with that consistency_config and "archives" in checks, "archives"
will get filtered out of the returned result if its check time file is newer than 2 weeks old,
indicating that it's not yet time to run that check again.
Raise ValueError if a frequency cannot be parsed.
'''
filtered_checks = list(checks)
if force:
return tuple(filtered_checks)
for check_config in consistency_config.get('checks', DEFAULT_CHECKS):
check = check_config['name']
if checks and check not in checks:
continue
frequency_delta = parse_frequency(check_config.get('frequency'))
if not frequency_delta:
continue
check_time = read_check_time(
make_check_time_path(location_config, borg_repository_id, check)
)
if not check_time:
continue
# If we've not yet reached the time when the frequency dictates we're ready for another
# check, skip this check.
if datetime.datetime.now() < check_time + frequency_delta:
remaining = check_time + frequency_delta - datetime.datetime.now()
logger.info(
f'Skipping {check} check due to configured frequency; {remaining} until next check'
)
filtered_checks.remove(check)
return tuple(filtered_checks)
def make_check_flags(local_borg_version, storage_config, checks, check_last=None, prefix=None):
'''
Given the local Borg version, a storage configuration dict, a parsed sequence of checks, the
check last value, and a consistency check prefix, transform the checks into a tuple of
command-line flags.
For example, given parsed checks of:
@@ -53,47 +160,111 @@ def _make_check_flags(checks, check_last=None, prefix=None):
('--repository-only',)
However, if both "repository" and "archives" are in checks, then omit them from the returned
flags because Borg does both checks by default.
flags because Borg does both checks by default. If "data" is in checks, that implies "archives".
Additionally, if a check_last value is given and "archives" is in checks, then include a
"--last" flag. And if a prefix value is given and "archives" is in checks, then include a
"--prefix" flag.
"--match-archives" flag.
'''
if 'data' in checks:
data_flags = ('--verify-data',)
checks += ('archives',)
else:
data_flags = ()
if 'archives' in checks:
last_flags = ('--last', str(check_last)) if check_last else ()
prefix_flags = ('--prefix', prefix) if prefix else ()
match_archives_flags = (
(
('--match-archives', f'sh:{prefix}*')
if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version)
else ('--glob-archives', f'{prefix}*')
)
if prefix
else (
flags.make_match_archives_flags(
storage_config.get('match_archives'),
storage_config.get('archive_name_format'),
local_borg_version,
)
)
)
else:
last_flags = ()
prefix_flags = ()
match_archives_flags = ()
if check_last:
logger.warning(
'Ignoring check_last option, as "archives" is not in consistency checks.'
'Ignoring check_last option, as "archives" or "data" are not in consistency checks'
)
if prefix:
logger.warning(
'Ignoring consistency prefix option, as "archives" is not in consistency checks.'
'Ignoring consistency prefix option, as "archives" or "data" are not in consistency checks'
)
common_flags = last_flags + prefix_flags + (('--verify-data',) if 'data' in checks else ())
common_flags = last_flags + match_archives_flags + data_flags
if set(DEFAULT_CHECKS).issubset(set(checks)):
if {'repository', 'archives'}.issubset(set(checks)):
return common_flags
return (
tuple('--{}-only'.format(check) for check in checks if check in DEFAULT_CHECKS)
tuple(f'--{check}-only' for check in checks if check in ('repository', 'archives'))
+ common_flags
)
def make_check_time_path(location_config, borg_repository_id, check_type):
'''
Given a location configuration dict, a Borg repository ID, and the name of a check type
("repository", "archives", etc.), return a path for recording that check's time (the time of
that check last occurring).
'''
return os.path.join(
os.path.expanduser(
location_config.get(
'borgmatic_source_directory', state.DEFAULT_BORGMATIC_SOURCE_DIRECTORY
)
),
'checks',
borg_repository_id,
check_type,
)
def write_check_time(path): # pragma: no cover
'''
Record a check time of now as the modification time of the given path.
'''
logger.debug(f'Writing check time at {path}')
os.makedirs(os.path.dirname(path), mode=0o700, exist_ok=True)
pathlib.Path(path).touch(mode=0o600)
def read_check_time(path):
'''
Return the check time based on the modification time of the given path. Return None if the path
doesn't exist.
'''
logger.debug(f'Reading check time from {path}')
try:
return datetime.datetime.fromtimestamp(os.stat(path).st_mtime)
except FileNotFoundError:
return None
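Taken together, these helpers give each check type its own timestamp file under the borgmatic source directory, keyed by repository ID (illustrative values):

    make_check_time_path({}, '1234abcd', 'archives')
    # -> '/home/user/.borgmatic/checks/1234abcd/archives', given that home directory

write_check_time() touches that path after a successful check, and read_check_time() reads its mtime so that filter_checks_on_frequency() can decide whether the configured frequency has elapsed.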
def check_archives(
repository,
repository_path,
location_config,
storage_config,
consistency_config,
local_borg_version,
local_path='borg',
remote_path=None,
progress=None,
repair=None,
only_checks=None,
force=None,
):
'''
Given a local or remote repository path, a storage config dict, a consistency config dict,
@@ -102,14 +273,36 @@ def check_archives(
Borg archives for consistency.
If there are no consistency checks to run, skip running them.
Raises ValueError if the Borg repository ID cannot be determined.
'''
checks = _parse_checks(consistency_config, only_checks)
try:
borg_repository_id = json.loads(
rinfo.display_repository_info(
repository_path,
storage_config,
local_borg_version,
argparse.Namespace(json=True),
local_path,
remote_path,
)
)['repository']['id']
except (json.JSONDecodeError, KeyError):
raise ValueError(f'Cannot determine Borg repository ID for {repository_path}')
checks = filter_checks_on_frequency(
location_config,
consistency_config,
borg_repository_id,
parse_checks(consistency_config, only_checks),
force,
)
check_last = consistency_config.get('check_last', None)
lock_wait = None
extra_borg_options = storage_config.get('extra_borg_options', {}).get('check', '')
if set(checks).intersection(set(DEFAULT_CHECKS + ('data',))):
lock_wait = storage_config.get('lock_wait', None)
if set(checks).intersection({'repository', 'archives', 'data'}):
lock_wait = storage_config.get('lock_wait')
verbosity_flags = ()
if logger.isEnabledFor(logging.INFO):
@@ -117,26 +310,36 @@ def check_archives(
if logger.isEnabledFor(logging.DEBUG):
verbosity_flags = ('--debug', '--show-rc')
prefix = consistency_config.get('prefix', DEFAULT_PREFIX)
prefix = consistency_config.get('prefix')
full_command = (
(local_path, 'check')
+ (('--repair',) if repair else ())
+ _make_check_flags(checks, check_last, prefix)
+ make_check_flags(local_borg_version, storage_config, checks, check_last, prefix)
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ verbosity_flags
+ (('--progress',) if progress else ())
+ (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
+ (repository,)
+ flags.make_repository_flags(repository_path, local_borg_version)
)
# The Borg repair option trigger an interactive prompt, which won't work when output is
borg_environment = environment.make_environment(storage_config)
# The Borg repair option triggers an interactive prompt, which won't work when output is
# captured. And progress messes with the terminal directly.
if repair or progress:
execute_command(full_command, output_file=DO_NOT_CAPTURE)
execute_command(
full_command, output_file=DO_NOT_CAPTURE, extra_environment=borg_environment
)
else:
execute_command(full_command)
execute_command(full_command, extra_environment=borg_environment)
for check in checks:
write_check_time(make_check_time_path(location_config, borg_repository_id, check))
if 'extract' in checks:
extract.extract_last_archive_dry_run(repository, lock_wait, local_path, remote_path)
extract.extract_last_archive_dry_run(
storage_config, local_borg_version, repository_path, lock_wait, local_path, remote_path
)
write_check_time(make_check_time_path(location_config, borg_repository_id, 'extract'))

borgmatic/borg/compact.py Normal file
@@ -0,0 +1,51 @@
import logging
from borgmatic.borg import environment, flags
from borgmatic.execute import execute_command
logger = logging.getLogger(__name__)
def compact_segments(
dry_run,
repository_path,
storage_config,
local_borg_version,
local_path='borg',
remote_path=None,
progress=False,
cleanup_commits=False,
threshold=None,
):
'''
Given a dry-run flag, a local or remote repository path, a storage config dict, and the local
Borg version, compact the segments in a repository.
'''
umask = storage_config.get('umask', None)
lock_wait = storage_config.get('lock_wait', None)
extra_borg_options = storage_config.get('extra_borg_options', {}).get('compact', '')
full_command = (
(local_path, 'compact')
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--umask', str(umask)) if umask else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (('--progress',) if progress else ())
+ (('--cleanup-commits',) if cleanup_commits else ())
+ (('--threshold', str(threshold)) if threshold else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
+ flags.make_repository_flags(repository_path, local_borg_version)
)
if dry_run:
logger.info(f'{repository_path}: Skipping compact (dry run)')
return
execute_command(
full_command,
output_log_level=logging.INFO,
borg_local_path=local_path,
extra_environment=environment.make_environment(storage_config),
)

@@ -3,14 +3,22 @@ import itertools
import logging
import os
import pathlib
import stat
import tempfile
from borgmatic.execute import DO_NOT_CAPTURE, execute_command, execute_command_with_processes
import borgmatic.logger
from borgmatic.borg import environment, feature, flags, state
from borgmatic.execute import (
DO_NOT_CAPTURE,
execute_command,
execute_command_and_capture_output,
execute_command_with_processes,
)
logger = logging.getLogger(__name__)
def _expand_directory(directory):
def expand_directory(directory):
'''
Given a directory path, expand any tilde (representing a user's home directory) and any globs
therein. Return a list of one or more resulting paths.
@@ -20,7 +28,7 @@ def _expand_directory(directory):
return glob.glob(expanded_directory) or [expanded_directory]
def _expand_directories(directories):
def expand_directories(directories):
'''
Given a sequence of directory paths, expand tildes and globs in each one. Return all the
resulting directories as a single flattened tuple.
@@ -29,11 +37,11 @@ def _expand_directories(directories):
return ()
return tuple(
itertools.chain.from_iterable(_expand_directory(directory) for directory in directories)
itertools.chain.from_iterable(expand_directory(directory) for directory in directories)
)
def _expand_home_directories(directories):
def expand_home_directories(directories):
'''
Given a sequence of directory paths, expand tildes in each one. Do not perform any globbing.
Return the results as a tuple.
@@ -44,16 +52,21 @@ def _expand_home_directories(directories):
return tuple(os.path.expanduser(directory) for directory in directories)
def map_directories_to_devices(directories): # pragma: no cover
def map_directories_to_devices(directories):
'''
Given a sequence of directories, return a map from directory to an identifier for the device on
which that directory resides. This is handy for determining whether two different directories
are on the same filesystem (have the same device identifier).
which that directory resides or None if the path doesn't exist.
This is handy for determining whether two different directories are on the same filesystem (have
the same device identifier).
'''
return {directory: os.stat(directory).st_dev for directory in directories}
return {
directory: os.stat(directory).st_dev if os.path.exists(directory) else None
for directory in directories
}
def deduplicate_directories(directory_devices):
def deduplicate_directories(directory_devices, additional_directory_devices):
'''
Given a map from directory to the identifier for the device on which that directory resides,
return the directories as a sorted tuple with all duplicate child directories removed. For
@@ -68,21 +81,28 @@ def deduplicate_directories(directory_devices):
there are cases where Borg coming across the same file twice will result in duplicate reads and
even hangs, e.g. when a database hook is using a named pipe for streaming database dumps to
Borg.
If any additional directory devices are given, also deduplicate against them, but don't include
them in the returned directories.
'''
deduplicated = set()
directories = sorted(directory_devices.keys())
additional_directories = sorted(additional_directory_devices.keys())
all_devices = {**directory_devices, **additional_directory_devices}
for directory in directories:
deduplicated.add(directory)
parents = pathlib.PurePath(directory).parents
# If another directory in the given list is a parent of current directory (even n levels
# up) and both are on the same filesystem, then the current directory is a duplicate.
for other_directory in directories:
# If another directory in the given list (or the additional list) is a parent of current
# directory (even n levels up) and both are on the same filesystem, then the current
# directory is a duplicate.
for other_directory in directories + additional_directories:
for parent in parents:
if (
pathlib.PurePath(other_directory) == parent
and directory_devices[other_directory] == directory_devices[directory]
and all_devices[directory] is not None
and all_devices[other_directory] == all_devices[directory]
):
if directory in deduplicated:
deduplicated.remove(directory)
@@ -91,22 +111,42 @@ def deduplicate_directories(directory_devices):
return tuple(sorted(deduplicated))
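As a sketch with made-up device identifiers:

    deduplicate_directories({'/home': 55, '/home/user': 55, '/mnt': 56}, {})
    # -> ('/home', '/mnt'): /home/user is dropped because its parent /home
    # sits on the same device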
def _write_pattern_file(patterns=None):
def write_pattern_file(patterns=None, sources=None, pattern_file=None):
'''
Given a sequence of patterns, write them to a named temporary file and return it. Return None
if no patterns are provided.
Given a sequence of patterns and an optional sequence of source directories, write them to a
named temporary file (with the source directories as additional roots) and return the file.
If an optional open pattern file is given, overwrite it instead of making a new temporary file.
Return None if no patterns are provided.
'''
if not patterns:
if not patterns and not sources:
return None
pattern_file = tempfile.NamedTemporaryFile('w')
pattern_file.write('\n'.join(patterns))
if pattern_file is None:
pattern_file = tempfile.NamedTemporaryFile('w')
else:
pattern_file.seek(0)
pattern_file.write(
'\n'.join(tuple(patterns or ()) + tuple(f'R {source}' for source in (sources or [])))
)
pattern_file.flush()
return pattern_file
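For example (hypothetical patterns), combining patterns with source roots:

    pattern_file = write_pattern_file(
        ['+ /home/user/photos', '- /home/*/tmp'], sources=['/home']
    )
    # The temporary file then contains, one line each:
    #     + /home/user/photos
    #     - /home/*/tmp
    #     R /home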
def _make_pattern_flags(location_config, pattern_filename=None):
def ensure_files_readable(*filename_lists):
'''
Given a sequence of filename sequences, ensure that each filename is openable. This prevents
unreadable files from being passed to Borg, which in certain situations only warns instead of
erroring.
'''
for file_object in itertools.chain.from_iterable(
filename_list for filename_list in filename_lists if filename_list
):
open(file_object).close()
def make_pattern_flags(location_config, pattern_filename=None):
'''
Given a location config dict with a potential patterns_from option, and a filename containing
any additional patterns, return the corresponding Borg flags for those files as a tuple.
@@ -122,7 +162,7 @@ def _make_pattern_flags(location_config, pattern_filename=None):
)
def _make_exclude_flags(location_config, exclude_filename=None):
def make_exclude_flags(location_config, exclude_filename=None):
'''
Given a location config dict with various exclude options, and a filename containing any exclude
patterns, return the corresponding Borg flags as a tuple.
@@ -156,15 +196,36 @@ def _make_exclude_flags(location_config, exclude_filename=None):
)
DEFAULT_BORGMATIC_SOURCE_DIRECTORY = '~/.borgmatic'
def make_list_filter_flags(local_borg_version, dry_run):
'''
Given the local Borg version and whether this is a dry run, return the corresponding flags for
passing to "--list --filter". The general idea is that excludes are shown for a dry run or when
the verbosity is debug.
'''
base_flags = 'AME'
show_excludes = logger.isEnabledFor(logging.DEBUG)
if feature.available(feature.Feature.EXCLUDED_FILES_MINUS, local_borg_version):
if show_excludes or dry_run:
return f'{base_flags}+-'
else:
return base_flags
if show_excludes:
return f'{base_flags}x-'
else:
return f'{base_flags}-'
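In other words: on a Borg version with the "excluded files minus" feature, a dry run or debug verbosity yields 'AME+-' and the default is 'AME'; on older versions, debug verbosity yields 'AMEx-' and the default is 'AME-'.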
def borgmatic_source_directories(borgmatic_source_directory):
DEFAULT_ARCHIVE_NAME_FORMAT = '{hostname}-{now:%Y-%m-%dT%H:%M:%S.%f}' # noqa: FS003
def collect_borgmatic_source_directories(borgmatic_source_directory):
'''
Return a list of borgmatic-specific source directories used for state like database backups.
'''
if not borgmatic_source_directory:
borgmatic_source_directory = DEFAULT_BORGMATIC_SOURCE_DIRECTORY
borgmatic_source_directory = state.DEFAULT_BORGMATIC_SOURCE_DIRECTORY
return (
[borgmatic_source_directory]
@@ -173,20 +234,104 @@ def borgmatic_source_directories(borgmatic_source_directory):
)
DEFAULT_ARCHIVE_NAME_FORMAT = '{hostname}-{now:%Y-%m-%dT%H:%M:%S.%f}'
ROOT_PATTERN_PREFIX = 'R '
def pattern_root_directories(patterns=None):
'''
Given a sequence of patterns, parse out and return just the root directories.
'''
if not patterns:
return []
return [
pattern.split(ROOT_PATTERN_PREFIX, maxsplit=1)[1]
for pattern in patterns
if pattern.startswith(ROOT_PATTERN_PREFIX)
]
def special_file(path):
'''
Return whether the given path is a special file (character device, block device, or named pipe
/ FIFO).
'''
try:
mode = os.stat(path).st_mode
except (FileNotFoundError, OSError):
return False
return stat.S_ISCHR(mode) or stat.S_ISBLK(mode) or stat.S_ISFIFO(mode)
def any_parent_directories(path, candidate_parents):
'''
Return whether any of the given candidate parent directories are an actual parent of the given
path. This includes grandparents, etc.
'''
for parent in candidate_parents:
if pathlib.PurePosixPath(parent) in pathlib.PurePath(path).parents:
return True
return False
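A couple of illustrative calls:

    special_file('/dev/null')     # -> True: character device
    special_file('/etc/hostname') # -> False: regular file
    special_file('/no/such/path') # -> False: stat fails, so treated as not special

    any_parent_directories(
        '/root/.borgmatic/postgresql_databases/localhost/users',
        ('/root/.borgmatic',),
    )
    # -> True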
def collect_special_file_paths(
create_command, local_path, working_directory, borg_environment, skip_directories
):
'''
Given a Borg create command as a tuple, a local Borg path, a working directory, a dict of
environment variables to pass to Borg, and a sequence of parent directories to skip, collect the
paths for any special files (character devices, block devices, and named pipes / FIFOs) that
Borg would encounter during a create. These are all paths that could cause Borg to hang if its
--read-special flag is used.
'''
paths_output = execute_command_and_capture_output(
create_command + ('--dry-run', '--list'),
capture_stderr=True,
working_directory=working_directory,
extra_environment=borg_environment,
)
paths = tuple(
path_line.split(' ', 1)[1]
for path_line in paths_output.split('\n')
if path_line and (path_line.startswith('- ') or path_line.startswith('+ '))
)
return tuple(
path
for path in paths
if special_file(path) and not any_parent_directories(path, skip_directories)
)
def check_all_source_directories_exist(source_directories):
'''
Given a sequence of source directories, check that they all exist. If any do not, raise an
exception.
'''
missing_directories = [
source_directory
for source_directory in source_directories
if not os.path.exists(source_directory)
]
if missing_directories:
raise ValueError(f"Source directories do not exist: {', '.join(missing_directories)}")
def create_archive(
dry_run,
repository,
repository_path,
location_config,
storage_config,
local_borg_version,
local_path='borg',
remote_path=None,
progress=False,
stats=False,
json=False,
files=False,
list_files=False,
stream_processes=None,
):
'''
@@ -196,72 +341,121 @@ def create_archive(
If a sequence of stream processes is given (instances of subprocess.Popen), then execute the
create command while also triggering the given processes to produce output.
'''
borgmatic.logger.add_custom_log_levels()
borgmatic_source_directories = expand_directories(
collect_borgmatic_source_directories(location_config.get('borgmatic_source_directory'))
)
if location_config.get('source_directories_must_exist', False):
check_all_source_directories_exist(location_config.get('source_directories'))
sources = deduplicate_directories(
map_directories_to_devices(
_expand_directories(
location_config['source_directories']
+ borgmatic_source_directories(location_config.get('borgmatic_source_directory'))
expand_directories(
tuple(location_config.get('source_directories', ())) + borgmatic_source_directories
)
)
),
additional_directory_devices=map_directories_to_devices(
expand_directories(pattern_root_directories(location_config.get('patterns')))
),
)
pattern_file = _write_pattern_file(location_config.get('patterns'))
exclude_file = _write_pattern_file(
_expand_home_directories(location_config.get('exclude_patterns'))
ensure_files_readable(location_config.get('patterns_from'), location_config.get('exclude_from'))
try:
working_directory = os.path.expanduser(location_config.get('working_directory'))
except TypeError:
working_directory = None
pattern_file = (
write_pattern_file(location_config.get('patterns'), sources)
if location_config.get('patterns') or location_config.get('patterns_from')
else None
)
exclude_file = write_pattern_file(
expand_home_directories(location_config.get('exclude_patterns'))
)
checkpoint_interval = storage_config.get('checkpoint_interval', None)
checkpoint_volume = storage_config.get('checkpoint_volume', None)
chunker_params = storage_config.get('chunker_params', None)
compression = storage_config.get('compression', None)
remote_rate_limit = storage_config.get('remote_rate_limit', None)
upload_rate_limit = storage_config.get('upload_rate_limit', None)
umask = storage_config.get('umask', None)
lock_wait = storage_config.get('lock_wait', None)
list_filter_flags = make_list_filter_flags(local_borg_version, dry_run)
files_cache = location_config.get('files_cache')
archive_name_format = storage_config.get('archive_name_format', DEFAULT_ARCHIVE_NAME_FORMAT)
extra_borg_options = storage_config.get('extra_borg_options', {}).get('create', '')
full_command = (
(local_path, 'create')
+ _make_pattern_flags(location_config, pattern_file.name if pattern_file else None)
+ _make_exclude_flags(location_config, exclude_file.name if exclude_file else None)
if feature.available(feature.Feature.ATIME, local_borg_version):
atime_flags = ('--atime',) if location_config.get('atime') is True else ()
else:
atime_flags = ('--noatime',) if location_config.get('atime') is False else ()
if feature.available(feature.Feature.NOFLAGS, local_borg_version):
noflags_flags = ('--noflags',) if location_config.get('flags') is False else ()
else:
noflags_flags = ('--nobsdflags',) if location_config.get('flags') is False else ()
if feature.available(feature.Feature.NUMERIC_IDS, local_borg_version):
numeric_ids_flags = ('--numeric-ids',) if location_config.get('numeric_ids') else ()
else:
numeric_ids_flags = ('--numeric-owner',) if location_config.get('numeric_ids') else ()
if feature.available(feature.Feature.UPLOAD_RATELIMIT, local_borg_version):
upload_ratelimit_flags = (
('--upload-ratelimit', str(upload_rate_limit)) if upload_rate_limit else ()
)
else:
upload_ratelimit_flags = (
('--remote-ratelimit', str(upload_rate_limit)) if upload_rate_limit else ()
)
if stream_processes and location_config.get('read_special') is False:
logger.warning(
f'{repository_path}: Ignoring configured "read_special" value of false, as true is needed for database hooks.'
)
create_command = (
tuple(local_path.split(' '))
+ ('create',)
+ make_pattern_flags(location_config, pattern_file.name if pattern_file else None)
+ make_exclude_flags(location_config, exclude_file.name if exclude_file else None)
+ (('--checkpoint-interval', str(checkpoint_interval)) if checkpoint_interval else ())
+ (('--checkpoint-volume', str(checkpoint_volume)) if checkpoint_volume else ())
+ (('--chunker-params', chunker_params) if chunker_params else ())
+ (('--compression', compression) if compression else ())
+ (('--remote-ratelimit', str(remote_rate_limit)) if remote_rate_limit else ())
+ upload_ratelimit_flags
+ (
('--one-file-system',)
if location_config.get('one_file_system') or stream_processes
else ()
)
+ (('--numeric-owner',) if location_config.get('numeric_owner') else ())
+ (('--noatime',) if location_config.get('atime') is False else ())
+ numeric_ids_flags
+ atime_flags
+ (('--noctime',) if location_config.get('ctime') is False else ())
+ (('--nobirthtime',) if location_config.get('birthtime') is False else ())
+ (('--read-special',) if (location_config.get('read_special') or stream_processes) else ())
+ (('--nobsdflags',) if location_config.get('bsd_flags') is False else ())
+ (('--read-special',) if location_config.get('read_special') or stream_processes else ())
+ noflags_flags
+ (('--files-cache', files_cache) if files_cache else ())
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--umask', str(umask)) if umask else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (('--list', '--filter', 'AME-') if files and not json and not progress else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO and not json else ())
+ (('--stats',) if stats and not json and not dry_run else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) and not json else ())
+ (('--dry-run',) if dry_run else ())
+ (('--progress',) if progress else ())
+ (('--json',) if json else ())
+ (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
+ (
'{repository}::{archive_name_format}'.format(
repository=repository, archive_name_format=archive_name_format
),
('--list', '--filter', list_filter_flags)
if list_files and not json and not progress
else ()
)
+ sources
+ (('--dry-run',) if dry_run else ())
+ (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
+ flags.make_repository_archive_flags(
repository_path, archive_name_format, local_borg_version
)
+ (sources if not pattern_file else ())
)
if json:
output_log_level = None
elif (stats or files) and logger.getEffectiveLevel() == logging.WARNING:
output_log_level = logging.WARNING
elif list_files or (stats and not dry_run):
output_log_level = logging.ANSWER
else:
output_log_level = logging.INFO
@@ -269,13 +463,60 @@ def create_archive(
# the terminal directly.
output_file = DO_NOT_CAPTURE if progress else None
borg_environment = environment.make_environment(storage_config)
# If database hooks are enabled (as indicated by streaming processes), exclude files that might
# cause Borg to hang. But skip this if the user has explicitly set "read_special" to true.
if stream_processes and not location_config.get('read_special'):
logger.debug(f'{repository_path}: Collecting special file paths')
special_file_paths = collect_special_file_paths(
create_command,
local_path,
working_directory,
borg_environment,
skip_directories=borgmatic_source_directories,
)
if special_file_paths:
logger.warning(
f'{repository_path}: Excluding special files to prevent Borg from hanging: {", ".join(special_file_paths)}'
)
exclude_file = write_pattern_file(
expand_home_directories(
tuple(location_config.get('exclude_patterns') or ()) + special_file_paths
),
pattern_file=exclude_file,
)
create_command += make_exclude_flags(location_config, exclude_file.name)
create_command += (
(('--info',) if logger.getEffectiveLevel() == logging.INFO and not json else ())
+ (('--stats',) if stats and not json and not dry_run else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) and not json else ())
+ (('--progress',) if progress else ())
+ (('--json',) if json else ())
)
if stream_processes:
return execute_command_with_processes(
full_command,
create_command,
stream_processes,
output_log_level,
output_file,
borg_local_path=local_path,
working_directory=working_directory,
extra_environment=borg_environment,
)
elif output_log_level is None:
return execute_command_and_capture_output(
create_command, working_directory=working_directory, extra_environment=borg_environment,
)
else:
execute_command(
create_command,
output_log_level,
output_file,
borg_local_path=local_path,
working_directory=working_directory,
extra_environment=borg_environment,
)
return execute_command(full_command, output_log_level, output_file, borg_local_path=local_path)
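# A minimal sketch of the kind of check that collecting special file paths relies on
# (hypothetical helper, not necessarily the real implementation): character devices,
# block devices, and named pipes (FIFOs) can block reads and hang "borg create".
import os
import stat

def special_file(path):
    # Return whether the given path is a special file likely to hang Borg.
    mode = os.stat(path).st_mode
    return stat.S_ISCHR(mode) or stat.S_ISBLK(mode) or stat.S_ISFIFO(mode)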

View File

@ -1,9 +1,8 @@
import os
OPTION_TO_ENVIRONMENT_VARIABLE = {
'borg_base_directory': 'BORG_BASE_DIR',
'borg_config_directory': 'BORG_CONFIG_DIR',
'borg_cache_directory': 'BORG_CACHE_DIR',
'borg_files_cache_ttl': 'BORG_FILES_CACHE_TTL',
'borg_security_directory': 'BORG_SECURITY_DIR',
'borg_keys_directory': 'BORG_KEYS_DIR',
'encryption_passcommand': 'BORG_PASSCOMMAND',
@ -18,21 +17,24 @@ DEFAULT_BOOL_OPTION_TO_ENVIRONMENT_VARIABLE = {
}
def initialize(storage_config):
for option_name, environment_variable_name in OPTION_TO_ENVIRONMENT_VARIABLE.items():
def make_environment(storage_config):
'''
Given a borgmatic storage configuration dict, return its options converted to a Borg environment
variable dict.
'''
environment = {}
# Options from borgmatic configuration take precedence over already set BORG_* environment
# variables.
value = storage_config.get(option_name) or os.environ.get(environment_variable_name)
for option_name, environment_variable_name in OPTION_TO_ENVIRONMENT_VARIABLE.items():
value = storage_config.get(option_name)
if value:
os.environ[environment_variable_name] = value
else:
os.environ.pop(environment_variable_name, None)
environment[environment_variable_name] = str(value)
for (
option_name,
environment_variable_name,
) in DEFAULT_BOOL_OPTION_TO_ENVIRONMENT_VARIABLE.items():
value = storage_config.get(option_name, False)
os.environ[environment_variable_name] = 'yes' if value else 'no'
environment[environment_variable_name] = 'yes' if value else 'no'
return environment
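# A quick usage sketch based on the option-to-variable mapping above (boolean options
# absent from the storage config default to 'no'):
from borgmatic.borg import environment

env = environment.make_environment({'encryption_passcommand': 'pass show borg'})
assert env['BORG_PASSCOMMAND'] == 'pass show borg'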

View File

@ -1,6 +1,7 @@
import logging
import os
import borgmatic.logger
from borgmatic.borg import environment, flags
from borgmatic.execute import DO_NOT_CAPTURE, execute_command
logger = logging.getLogger(__name__)
@ -8,26 +9,28 @@ logger = logging.getLogger(__name__)
def export_tar_archive(
dry_run,
repository,
repository_path,
archive,
paths,
destination_path,
storage_config,
local_borg_version,
local_path='borg',
remote_path=None,
tar_filter=None,
files=False,
list_files=False,
strip_components=None,
):
'''
Given a dry-run flag, a local or remote repository path, an archive name, zero or more paths to
export from the archive, a destination path to export to, a storage configuration dict, optional
local and remote Borg paths, an optional filter program, whether to include per-file details,
and an optional number of path components to strip, export the archive into the given
destination path as a tar-formatted file.
export from the archive, a destination path to export to, a storage configuration dict, the
local Borg version, optional local and remote Borg paths, an optional filter program, whether to
include per-file details, and an optional number of path components to strip, export the archive
into the given destination path as a tar-formatted file.
If the destination path is "-", then stream the output to stdout instead of to a file.
'''
borgmatic.logger.add_custom_log_levels()
umask = storage_config.get('umask', None)
lock_wait = storage_config.get('lock_wait', None)
@ -37,23 +40,23 @@ def export_tar_archive(
+ (('--umask', str(umask)) if umask else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--list',) if files else ())
+ (('--list',) if list_files else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ (('--dry-run',) if dry_run else ())
+ (('--tar-filter', tar_filter) if tar_filter else ())
+ (('--strip-components', str(strip_components)) if strip_components else ())
+ ('::'.join((repository if ':' in repository else os.path.abspath(repository), archive)),)
+ flags.make_repository_archive_flags(repository_path, archive, local_borg_version,)
+ (destination_path,)
+ (tuple(paths) if paths else ())
)
if files and logger.getEffectiveLevel() == logging.WARNING:
output_log_level = logging.WARNING
if list_files:
output_log_level = logging.ANSWER
else:
output_log_level = logging.INFO
if dry_run:
logger.info('{}: Skipping export to tar file (dry run)'.format(repository))
logger.info(f'{repository_path}: Skipping export to tar file (dry run)')
return
execute_command(
@ -61,4 +64,5 @@ def export_tar_archive(
output_file=DO_NOT_CAPTURE if destination_path == '-' else None,
output_log_level=output_log_level,
borg_local_path=local_path,
extra_environment=environment.make_environment(storage_config),
)

View File

@ -2,12 +2,20 @@ import logging
import os
import subprocess
from borgmatic.borg import environment, feature, flags, rlist
from borgmatic.execute import DO_NOT_CAPTURE, execute_command
logger = logging.getLogger(__name__)
def extract_last_archive_dry_run(repository, lock_wait=None, local_path='borg', remote_path=None):
def extract_last_archive_dry_run(
storage_config,
local_borg_version,
repository_path,
lock_wait=None,
local_path='borg',
remote_path=None,
):
'''
Perform an extraction dry-run of the most recent archive. If there are no archives, skip the
dry-run.
@ -20,38 +28,30 @@ def extract_last_archive_dry_run(repository, lock_wait=None, local_path='borg',
elif logger.isEnabledFor(logging.INFO):
verbosity_flags = ('--info',)
full_list_command = (
(local_path, 'list', '--short')
+ remote_path_flags
+ lock_wait_flags
+ verbosity_flags
+ (repository,)
)
list_output = execute_command(
full_list_command, output_log_level=None, borg_local_path=local_path
)
try:
last_archive_name = list_output.strip().splitlines()[-1]
except IndexError:
last_archive_name = rlist.resolve_archive_name(
repository_path, 'latest', storage_config, local_borg_version, local_path, remote_path
)
except ValueError:
logger.warning('No archives found. Skipping extract consistency check.')
return
list_flag = ('--list',) if logger.isEnabledFor(logging.DEBUG) else ()
borg_environment = environment.make_environment(storage_config)
full_extract_command = (
(local_path, 'extract', '--dry-run')
+ remote_path_flags
+ lock_wait_flags
+ verbosity_flags
+ list_flag
+ (
'{repository}::{last_archive_name}'.format(
repository=repository, last_archive_name=last_archive_name
),
+ flags.make_repository_archive_flags(
repository_path, last_archive_name, local_borg_version
)
)
execute_command(full_extract_command, working_directory=None)
execute_command(
full_extract_command, working_directory=None, extra_environment=borg_environment
)
def extract_archive(
@ -61,6 +61,7 @@ def extract_archive(
paths,
location_config,
storage_config,
local_borg_version,
local_path='borg',
remote_path=None,
destination_path=None,
@ -70,9 +71,9 @@ def extract_archive(
):
'''
Given a dry-run flag, a local or remote repository path, an archive name, zero or more paths to
restore from the archive, location/storage configuration dicts, optional local and remote Borg
paths, and an optional destination path to extract to, extract the archive into the current
directory.
restore from the archive, the local Borg version string, location/storage configuration dicts,
optional local and remote Borg paths, and an optional destination path to extract to, extract
the archive into the current directory.
If extract to stdout is True, then start the extraction streaming to stdout, and return that
extract process as an instance of subprocess.Popen.
@ -83,10 +84,22 @@ def extract_archive(
if progress and extract_to_stdout:
raise ValueError('progress and extract_to_stdout cannot both be set')
if feature.available(feature.Feature.NUMERIC_IDS, local_borg_version):
numeric_ids_flags = ('--numeric-ids',) if location_config.get('numeric_ids') else ()
else:
numeric_ids_flags = ('--numeric-owner',) if location_config.get('numeric_ids') else ()
if strip_components == 'all':
if not paths:
raise ValueError('The --strip-components flag with "all" requires at least one --path')
# Calculate the maximum number of leading path components of the given paths.
strip_components = max(0, *(len(path.split(os.path.sep)) - 1 for path in paths))
full_command = (
(local_path, 'extract')
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--numeric-owner',) if location_config.get('numeric_owner') else ())
+ numeric_ids_flags
+ (('--umask', str(umask)) if umask else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
@ -95,15 +108,20 @@ def extract_archive(
+ (('--strip-components', str(strip_components)) if strip_components else ())
+ (('--progress',) if progress else ())
+ (('--stdout',) if extract_to_stdout else ())
+ ('::'.join((repository if ':' in repository else os.path.abspath(repository), archive)),)
+ flags.make_repository_archive_flags(repository, archive, local_borg_version,)
+ (tuple(paths) if paths else ())
)
borg_environment = environment.make_environment(storage_config)
# The progress output isn't compatible with captured and logged output, as progress messes with
# the terminal directly.
if progress:
return execute_command(
full_command, output_file=DO_NOT_CAPTURE, working_directory=destination_path
full_command,
output_file=DO_NOT_CAPTURE,
working_directory=destination_path,
extra_environment=borg_environment,
)
return None
@ -113,8 +131,11 @@ def extract_archive(
output_file=subprocess.PIPE,
working_directory=destination_path,
run_to_completion=False,
extra_environment=borg_environment,
)
# Don't give Borg local path, so as to error on warnings, as Borg only gives a warning if the
# restore paths don't exist in the archive!
execute_command(full_command, working_directory=destination_path)
# Don't give Borg local path so as to error on warnings, as "borg extract" only gives a warning
# if the restore paths don't exist in the archive.
execute_command(
full_command, working_directory=destination_path, extra_environment=borg_environment
)

40
borgmatic/borg/feature.py Normal file
View File

@ -0,0 +1,40 @@
from enum import Enum
from pkg_resources import parse_version
class Feature(Enum):
COMPACT = 1
ATIME = 2
NOFLAGS = 3
NUMERIC_IDS = 4
UPLOAD_RATELIMIT = 5
SEPARATE_REPOSITORY_ARCHIVE = 6
RCREATE = 7
RLIST = 8
RINFO = 9
MATCH_ARCHIVES = 10
EXCLUDED_FILES_MINUS = 11
FEATURE_TO_MINIMUM_BORG_VERSION = {
Feature.COMPACT: parse_version('1.2.0a2'), # borg compact
Feature.ATIME: parse_version('1.2.0a7'), # borg create --atime
Feature.NOFLAGS: parse_version('1.2.0a8'), # borg create --noflags
Feature.NUMERIC_IDS: parse_version('1.2.0b3'), # borg create/extract/mount --numeric-ids
Feature.UPLOAD_RATELIMIT: parse_version('1.2.0b3'), # borg create --upload-ratelimit
Feature.SEPARATE_REPOSITORY_ARCHIVE: parse_version('2.0.0a2'), # --repo with separate archive
Feature.RCREATE: parse_version('2.0.0a2'), # borg rcreate
Feature.RLIST: parse_version('2.0.0a2'), # borg rlist
Feature.RINFO: parse_version('2.0.0a2'), # borg rinfo
Feature.MATCH_ARCHIVES: parse_version('2.0.0b3'), # borg --match-archives
Feature.EXCLUDED_FILES_MINUS: parse_version('2.0.0b5'), # --list --filter uses "-" for excludes
}
def available(feature, borg_version):
'''
Given a Borg Feature constant and a Borg version string, return whether that feature is
available in that version of Borg.
'''
return FEATURE_TO_MINIMUM_BORG_VERSION[feature] <= parse_version(borg_version)
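# For example, "borg compact" arrived in Borg 1.2.0a2, so as a quick sanity check:
from borgmatic.borg import feature

assert feature.available(feature.Feature.COMPACT, '1.2.3')
assert not feature.available(feature.Feature.COMPACT, '1.1.17')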

View File

@ -1,4 +1,7 @@
import itertools
import re
from borgmatic.borg import feature
def make_flags(name, value):
@ -8,7 +11,7 @@ def make_flags(name, value):
if not value:
return ()
flag = '--{}'.format(name.replace('_', '-'))
flag = f"--{name.replace('_', '-')}"
if value is True:
return (flag,)
@ -29,3 +32,52 @@ def make_flags_from_arguments(arguments, excludes=()):
if name not in excludes and not name.startswith('_')
)
)
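# A sketch of make_flags behavior: the non-boolean branch (elided in this hunk) pairs
# the flag with its stringified value, so for instance:
assert make_flags('lock_wait', 5) == ('--lock-wait', '5')
assert make_flags('progress', True) == ('--progress',)
assert make_flags('json', False) == ()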
def make_repository_flags(repository_path, local_borg_version):
'''
Given the path of a Borg repository and the local Borg version, return Borg-version-appropriate
command-line flags (as a tuple) for selecting that repository.
'''
return (
('--repo',)
if feature.available(feature.Feature.SEPARATE_REPOSITORY_ARCHIVE, local_borg_version)
else ()
) + (repository_path,)
def make_repository_archive_flags(repository_path, archive, local_borg_version):
'''
Given the path of a Borg repository, an archive name or pattern, and the local Borg version,
return Borg-version-appropriate command-line flags (as a tuple) for selecting that repository
and archive.
'''
return (
('--repo', repository_path, archive)
if feature.available(feature.Feature.SEPARATE_REPOSITORY_ARCHIVE, local_borg_version)
else (f'{repository_path}::{archive}',)
)
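# With Borg 1.x, the repository and archive are joined with "::"; with Borg 2.x, they
# become separate arguments after --repo:
assert make_repository_archive_flags('repo', 'foo', '1.2.3') == ('repo::foo',)
assert make_repository_archive_flags('repo', 'foo', '2.0.0b5') == ('--repo', 'repo', 'foo')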
def make_match_archives_flags(match_archives, archive_name_format, local_borg_version):
'''
Return match archives flags based on the given match archives value, if any. If it isn't set,
return match archives flags to match archives created with the given archive name format, if
any. This is done by replacing certain archive name format placeholders for ephemeral data (like
"{now}") with globs.
'''
if match_archives:
if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version):
return ('--match-archives', match_archives)
else:
return ('--glob-archives', re.sub(r'^sh:', '', match_archives))
if not archive_name_format:
return ()
derived_match_archives = re.sub(r'\{(now|utcnow|pid)([:%\w\.-]*)\}', '*', archive_name_format)
if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version):
return ('--match-archives', f'sh:{derived_match_archives}')
else:
return ('--glob-archives', f'{derived_match_archives}')
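# For example, the ephemeral "{now}" placeholder becomes a glob while other
# placeholders are preserved, and an explicit match_archives value passes through:
assert make_match_archives_flags(None, '{hostname}-{now}', '2.0.0b5') == (
    '--match-archives',
    'sh:{hostname}-*',
)
assert make_match_archives_flags('sh:foo-*', '{hostname}-{now}', '2.0.0b5') == (
    '--match-archives',
    'sh:foo-*',
)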

View File

@ -1,19 +1,26 @@
import logging
from borgmatic.borg.flags import make_flags, make_flags_from_arguments
from borgmatic.execute import execute_command
import borgmatic.logger
from borgmatic.borg import environment, feature, flags
from borgmatic.execute import execute_command, execute_command_and_capture_output
logger = logging.getLogger(__name__)
def display_archives_info(
repository, storage_config, info_arguments, local_path='borg', remote_path=None
repository_path,
storage_config,
local_borg_version,
info_arguments,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, a storage config dict, and the arguments to the info
action, display summary information for Borg archives in the repository or return JSON summary
information.
Given a local or remote repository path, a storage config dict, the local Borg version, and the
arguments to the info action, display summary information for Borg archives in the repository or
return JSON summary information.
'''
borgmatic.logger.add_custom_log_levels()
lock_wait = storage_config.get('lock_wait', None)
full_command = (
@ -28,18 +35,39 @@ def display_archives_info(
if logger.isEnabledFor(logging.DEBUG) and not info_arguments.json
else ()
)
+ make_flags('remote-path', remote_path)
+ make_flags('lock-wait', lock_wait)
+ make_flags_from_arguments(info_arguments, excludes=('repository', 'archive'))
+ flags.make_flags('remote-path', remote_path)
+ flags.make_flags('lock-wait', lock_wait)
+ (
'::'.join((repository, info_arguments.archive))
if info_arguments.archive
else repository,
(
flags.make_flags('match-archives', f'sh:{info_arguments.prefix}*')
if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version)
else flags.make_flags('glob-archives', f'{info_arguments.prefix}*')
)
if info_arguments.prefix
else (
flags.make_match_archives_flags(
info_arguments.match_archives
or info_arguments.archive
or storage_config.get('match_archives'),
storage_config.get('archive_name_format'),
local_borg_version,
)
)
)
+ flags.make_flags_from_arguments(
info_arguments, excludes=('repository', 'archive', 'prefix', 'match_archives')
)
+ flags.make_repository_flags(repository_path, local_borg_version)
)
return execute_command(
full_command,
output_log_level=None if info_arguments.json else logging.WARNING,
borg_local_path=local_path,
)
if info_arguments.json:
return execute_command_and_capture_output(
full_command, extra_environment=environment.make_environment(storage_config),
)
else:
execute_command(
full_command,
output_log_level=logging.ANSWER,
borg_local_path=local_path,
extra_environment=environment.make_environment(storage_config),
)

View File

@ -1,58 +0,0 @@
import logging
import subprocess
from borgmatic.execute import DO_NOT_CAPTURE, execute_command
logger = logging.getLogger(__name__)
INFO_REPOSITORY_NOT_FOUND_EXIT_CODE = 2
def initialize_repository(
repository,
storage_config,
encryption_mode,
append_only=None,
storage_quota=None,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, a storage configuration dict, a Borg encryption mode,
whether the repository should be append-only, and the storage quota to use, initialize the
repository. If the repository already exists, then log and skip initialization.
'''
info_command = (
(local_path, 'info')
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug',) if logger.isEnabledFor(logging.DEBUG) else ())
+ (('--remote-path', remote_path) if remote_path else ())
+ (repository,)
)
logger.debug(' '.join(info_command))
try:
execute_command(info_command, output_log_level=None)
logger.info('Repository already exists. Skipping initialization.')
return
except subprocess.CalledProcessError as error:
if error.returncode != INFO_REPOSITORY_NOT_FOUND_EXIT_CODE:
raise
extra_borg_options = storage_config.get('extra_borg_options', {}).get('init', '')
init_command = (
(local_path, 'init')
+ (('--encryption', encryption_mode) if encryption_mode else ())
+ (('--append-only',) if append_only else ())
+ (('--storage-quota', storage_quota) if storage_quota else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug',) if logger.isEnabledFor(logging.DEBUG) else ())
+ (('--remote-path', remote_path) if remote_path else ())
+ (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
+ (repository,)
)
# Do not capture output here, so as to support interactive prompts.
execute_command(init_command, output_file=DO_NOT_CAPTURE, borg_local_path=local_path)

View File

@ -1,63 +1,41 @@
import argparse
import copy
import logging
import re
from borgmatic.borg.flags import make_flags, make_flags_from_arguments
from borgmatic.execute import execute_command
import borgmatic.logger
from borgmatic.borg import environment, feature, flags, rlist
from borgmatic.execute import execute_command, execute_command_and_capture_output
logger = logging.getLogger(__name__)
# A hack to convince Borg to exclude archives ending in ".checkpoint". This assumes that a
# non-checkpoint archive name ends in a digit (e.g. from a timestamp).
BORG_EXCLUDE_CHECKPOINTS_GLOB = '*[0123456789]'
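# For example, "host-2023-04-01T12:00:00" ends in a digit and matches this glob, while
# "host-2023-04-01T12:00:00.checkpoint" does not.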
ARCHIVE_FILTER_FLAGS_MOVED_TO_RLIST = ('prefix', 'match_archives', 'sort_by', 'first', 'last')
MAKE_FLAGS_EXCLUDES = (
'repository',
'archive',
'successful',
'paths',
'find_paths',
) + ARCHIVE_FILTER_FLAGS_MOVED_TO_RLIST
def resolve_archive_name(repository, archive, storage_config, local_path='borg', remote_path=None):
def make_list_command(
repository_path,
storage_config,
local_borg_version,
list_arguments,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, an archive name, a storage config dict, a local Borg
path, and a remote Borg path, simply return the archive name. But if the archive name is
"latest", then instead introspect the repository for the latest successful (non-checkpoint)
archive, and return its name.
Raise ValueError if "latest" is given but there are no archives in the repository.
'''
if archive != "latest":
return archive
lock_wait = storage_config.get('lock_wait', None)
full_command = (
(local_path, 'list')
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ make_flags('remote-path', remote_path)
+ make_flags('lock-wait', lock_wait)
+ make_flags('glob-archives', BORG_EXCLUDE_CHECKPOINTS_GLOB)
+ make_flags('last', 1)
+ ('--short', repository)
)
output = execute_command(full_command, output_log_level=None, borg_local_path=local_path)
try:
latest_archive = output.strip().splitlines()[-1]
except IndexError:
raise ValueError('No archives found in the repository')
logger.debug('{}: Latest archive is {}'.format(repository, latest_archive))
return latest_archive
def list_archives(repository, storage_config, list_arguments, local_path='borg', remote_path=None):
'''
Given a local or remote repository path, a storage config dict, and the arguments to the list
action, display the output of listing Borg archives in the repository or return JSON output. Or,
if an archive name is given, list the files in that archive.
Given a local or remote repository path, a storage config dict, the arguments to the list
action, and local and remote Borg paths, return a command as a tuple to list archives or paths
within an archive.
'''
lock_wait = storage_config.get('lock_wait', None)
if list_arguments.successful:
list_arguments.glob_archives = BORG_EXCLUDE_CHECKPOINTS_GLOB
full_command = (
return (
(local_path, 'list')
+ (
('--info',)
@ -69,21 +47,199 @@ def list_archives(repository, storage_config, list_arguments, local_path='borg',
if logger.isEnabledFor(logging.DEBUG) and not list_arguments.json
else ()
)
+ make_flags('remote-path', remote_path)
+ make_flags('lock-wait', lock_wait)
+ make_flags_from_arguments(
list_arguments, excludes=('repository', 'archive', 'paths', 'successful')
)
+ flags.make_flags('remote-path', remote_path)
+ flags.make_flags('lock-wait', lock_wait)
+ flags.make_flags_from_arguments(list_arguments, excludes=MAKE_FLAGS_EXCLUDES)
+ (
'::'.join((repository, list_arguments.archive))
flags.make_repository_archive_flags(
repository_path, list_arguments.archive, local_borg_version
)
if list_arguments.archive
else repository,
else flags.make_repository_flags(repository_path, local_borg_version)
)
+ (tuple(list_arguments.paths) if list_arguments.paths else ())
)
return execute_command(
full_command,
output_log_level=None if list_arguments.json else logging.WARNING,
borg_local_path=local_path,
def make_find_paths(find_paths):
'''
Given a sequence of path fragments or patterns as passed to `--find`, transform all path
fragments into glob patterns. Pass through existing patterns untouched.
For example, given find_paths of:
['foo.txt', 'pp:root/somedir']
... transform that into:
['sh:**/*foo.txt*/**', 'pp:root/somedir']
'''
if not find_paths:
return ()
return tuple(
find_path
if re.compile(r'([-!+RrPp] )|(\w\w:)').match(find_path)
else f'sh:**/*{find_path}*/**'
for find_path in find_paths
)
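# The docstring example above, as a runnable check:
assert make_find_paths(['foo.txt', 'pp:root/somedir']) == (
    'sh:**/*foo.txt*/**',
    'pp:root/somedir',
)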
def capture_archive_listing(
repository_path,
archive,
storage_config,
local_borg_version,
list_path=None,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, an archive name, a storage config dict, the local Borg
version, the archive path in which to list files, and local and remote Borg paths, capture the
output of listing that archive and return it as a list of file paths.
'''
borg_environment = environment.make_environment(storage_config)
return tuple(
execute_command_and_capture_output(
make_list_command(
repository_path,
storage_config,
local_borg_version,
argparse.Namespace(
repository=repository_path,
archive=archive,
paths=[f'sh:{list_path}'],
find_paths=None,
json=None,
format='{path}{NL}', # noqa: FS003
),
local_path,
remote_path,
),
extra_environment=borg_environment,
)
.strip('\n')
.split('\n')
)
def list_archive(
repository_path,
storage_config,
local_borg_version,
list_arguments,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, a storage config dict, the local Borg version, the
arguments to the list action, and local and remote Borg paths, display the output of listing
the files of a Borg archive (or return JSON output). If list_arguments.find_paths are given,
list the files by searching across multiple archives. If neither find_paths nor an archive name
is given, instead list the archives in the given repository.
'''
borgmatic.logger.add_custom_log_levels()
if not list_arguments.archive and not list_arguments.find_paths:
if feature.available(feature.Feature.RLIST, local_borg_version):
logger.warning(
'Omitting the --archive flag on the list action is deprecated when using Borg 2.x+. Use the rlist action instead.'
)
rlist_arguments = argparse.Namespace(
repository=repository_path,
short=list_arguments.short,
format=list_arguments.format,
json=list_arguments.json,
prefix=list_arguments.prefix,
match_archives=list_arguments.match_archives,
sort_by=list_arguments.sort_by,
first=list_arguments.first,
last=list_arguments.last,
)
return rlist.list_repository(
repository_path,
storage_config,
local_borg_version,
rlist_arguments,
local_path,
remote_path,
)
if list_arguments.archive:
for name in ARCHIVE_FILTER_FLAGS_MOVED_TO_RLIST:
if getattr(list_arguments, name, None):
logger.warning(
f"The --{name.replace('_', '-')} flag on the list action is ignored when using the --archive flag."
)
if list_arguments.json:
raise ValueError(
'The --json flag on the list action is not supported when using the --archive/--find flags.'
)
borg_environment = environment.make_environment(storage_config)
# If there are any paths to find (and there's not a single archive already selected), start by
# getting a list of archives to search.
if list_arguments.find_paths and not list_arguments.archive:
rlist_arguments = argparse.Namespace(
repository=repository_path,
short=True,
format=None,
json=None,
prefix=list_arguments.prefix,
match_archives=list_arguments.match_archives,
sort_by=list_arguments.sort_by,
first=list_arguments.first,
last=list_arguments.last,
)
# Ask Borg to list archives. Capture its output for use below.
archive_lines = tuple(
execute_command_and_capture_output(
rlist.make_rlist_command(
repository_path,
storage_config,
local_borg_version,
rlist_arguments,
local_path,
remote_path,
),
extra_environment=borg_environment,
)
.strip('\n')
.split('\n')
)
else:
archive_lines = (list_arguments.archive,)
# For each archive listed by Borg, run list on the contents of that archive.
for archive in archive_lines:
logger.answer(f'{repository_path}: Listing archive {archive}')
archive_arguments = copy.copy(list_arguments)
archive_arguments.archive = archive
# This list call is to show the files in a single archive, not list multiple archives. So
# blank out any archive filtering flags. They'll break anyway in Borg 2.
for name in ARCHIVE_FILTER_FLAGS_MOVED_TO_RLIST:
setattr(archive_arguments, name, None)
main_command = make_list_command(
repository_path,
storage_config,
local_borg_version,
archive_arguments,
local_path,
remote_path,
) + make_find_paths(list_arguments.find_paths)
execute_command(
main_command,
output_log_level=logging.ANSWER,
borg_local_path=local_path,
extra_environment=borg_environment,
)

View File

@ -1,25 +1,28 @@
import logging
from borgmatic.borg import environment, feature, flags
from borgmatic.execute import DO_NOT_CAPTURE, execute_command
logger = logging.getLogger(__name__)
def mount_archive(
repository,
repository_path,
archive,
mount_point,
paths,
foreground,
options,
storage_config,
local_borg_version,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, an optional archive name, a filesystem mount point,
zero or more paths to mount from the archive, extra Borg mount options, a storage configuration
dict, and optional local and remote Borg paths, mount the archive onto the mount point.
dict, the local Borg version, and optional local and remote Borg paths, mount the archive onto
the mount point.
'''
umask = storage_config.get('umask', None)
lock_wait = storage_config.get('lock_wait', None)
@ -33,14 +36,36 @@ def mount_archive(
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ (('--foreground',) if foreground else ())
+ (('-o', options) if options else ())
+ (('::'.join((repository, archive)),) if archive else (repository,))
+ (
(
flags.make_repository_flags(repository_path, local_borg_version)
+ (
('--match-archives', archive)
if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version)
else ('--glob-archives', archive)
)
)
if feature.available(feature.Feature.SEPARATE_REPOSITORY_ARCHIVE, local_borg_version)
else (
flags.make_repository_archive_flags(repository_path, archive, local_borg_version)
if archive
else flags.make_repository_flags(repository_path, local_borg_version)
)
)
+ (mount_point,)
+ (tuple(paths) if paths else ())
)
borg_environment = environment.make_environment(storage_config)
# Don't capture the output when foreground mode is used so that ctrl-C can work properly.
if foreground:
execute_command(full_command, output_file=DO_NOT_CAPTURE, borg_local_path=local_path)
execute_command(
full_command,
output_file=DO_NOT_CAPTURE,
borg_local_path=local_path,
extra_environment=borg_environment,
)
return
execute_command(full_command, borg_local_path=local_path)
execute_command(full_command, borg_local_path=local_path, extra_environment=borg_environment)

View File

@ -1,14 +1,16 @@
import logging
import borgmatic.logger
from borgmatic.borg import environment, feature, flags
from borgmatic.execute import execute_command
logger = logging.getLogger(__name__)
def _make_prune_flags(retention_config):
def make_prune_flags(storage_config, retention_config, local_borg_version):
'''
Given a retention config dict mapping from option name to value, transform it into an iterable of
command-line name-value flag pairs.
Given a retention config dict mapping from option name to value, transform it into a sequence of
command-line flags.
For example, given a retention config of:
@ -22,54 +24,73 @@ def _make_prune_flags(retention_config):
)
'''
config = retention_config.copy()
prefix = config.pop('prefix', None)
if 'prefix' not in config:
config['prefix'] = '{hostname}-'
elif not config['prefix']:
config.pop('prefix')
return (
flag_pairs = (
('--' + option_name.replace('_', '-'), str(value)) for option_name, value in config.items()
)
return tuple(element for pair in flag_pairs for element in pair) + (
(
('--match-archives', f'sh:{prefix}*')
if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version)
else ('--glob-archives', f'{prefix}*')
)
if prefix
else (
flags.make_match_archives_flags(
storage_config.get('match_archives'),
storage_config.get('archive_name_format'),
local_borg_version,
)
)
)
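# A sketch of the flag flattening: with no prefix and no match_archives or
# archive_name_format configured, only the retention flags remain:
assert make_prune_flags({}, {'keep_daily': 7, 'keep_weekly': 4}, '1.2.3') == (
    '--keep-daily',
    '7',
    '--keep-weekly',
    '4',
)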
def prune_archives(
dry_run,
repository,
repository_path,
storage_config,
retention_config,
local_borg_version,
local_path='borg',
remote_path=None,
stats=False,
files=False,
list_archives=False,
):
'''
Given a dry-run flag, a local or remote repository path, a storage config dict, a retention
config dict, and the local Borg version, prune Borg archives according to the retention policy
specified in that configuration.
'''
borgmatic.logger.add_custom_log_levels()
umask = storage_config.get('umask', None)
lock_wait = storage_config.get('lock_wait', None)
extra_borg_options = storage_config.get('extra_borg_options', {}).get('prune', '')
full_command = (
(local_path, 'prune')
+ tuple(element for pair in _make_prune_flags(retention_config) for element in pair)
+ make_prune_flags(storage_config, retention_config, local_borg_version)
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--umask', str(umask)) if umask else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (('--stats',) if stats and not dry_run else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--list',) if files else ())
+ (('--list',) if list_archives else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ (('--dry-run',) if dry_run else ())
+ (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
+ (repository,)
+ flags.make_repository_flags(repository_path, local_borg_version)
)
if (stats or files) and logger.getEffectiveLevel() == logging.WARNING:
output_log_level = logging.WARNING
if stats or list_archives:
output_log_level = logging.ANSWER
else:
output_log_level = logging.INFO
execute_command(full_command, output_log_level=output_log_level, borg_local_path=local_path)
execute_command(
full_command,
output_log_level=output_log_level,
borg_local_path=local_path,
extra_environment=environment.make_environment(storage_config),
)

81
borgmatic/borg/rcreate.py Normal file
View File

@ -0,0 +1,81 @@
import argparse
import logging
import subprocess
from borgmatic.borg import environment, feature, flags, rinfo
from borgmatic.execute import DO_NOT_CAPTURE, execute_command
logger = logging.getLogger(__name__)
RINFO_REPOSITORY_NOT_FOUND_EXIT_CODE = 2
def create_repository(
dry_run,
repository_path,
storage_config,
local_borg_version,
encryption_mode,
source_repository=None,
copy_crypt_key=False,
append_only=None,
storage_quota=None,
make_parent_dirs=False,
local_path='borg',
remote_path=None,
):
'''
Given a dry-run flag, a local or remote repository path, a storage configuration dict, the local
Borg version, a Borg encryption mode, the path to another repo whose key material should be
reused, whether the repository should be append-only, and the storage quota to use, create the
repository. If the repository already exists, then log and skip creation.
'''
try:
rinfo.display_repository_info(
repository_path,
storage_config,
local_borg_version,
argparse.Namespace(json=True),
local_path,
remote_path,
)
logger.info(f'{repository_path}: Repository already exists. Skipping creation.')
return
except subprocess.CalledProcessError as error:
if error.returncode != RINFO_REPOSITORY_NOT_FOUND_EXIT_CODE:
raise
extra_borg_options = storage_config.get('extra_borg_options', {}).get('rcreate', '')
rcreate_command = (
(local_path,)
+ (
('rcreate',)
if feature.available(feature.Feature.RCREATE, local_borg_version)
else ('init',)
)
+ (('--encryption', encryption_mode) if encryption_mode else ())
+ (('--other-repo', source_repository) if source_repository else ())
+ (('--copy-crypt-key',) if copy_crypt_key else ())
+ (('--append-only',) if append_only else ())
+ (('--storage-quota', storage_quota) if storage_quota else ())
+ (('--make-parent-dirs',) if make_parent_dirs else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug',) if logger.isEnabledFor(logging.DEBUG) else ())
+ (('--remote-path', remote_path) if remote_path else ())
+ (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
+ flags.make_repository_flags(repository_path, local_borg_version)
)
if dry_run:
logger.info(f'{repository_path}: Skipping repository creation (dry run)')
return
# Do not capture output here, so as to support interactive prompts.
execute_command(
rcreate_command,
output_file=DO_NOT_CAPTURE,
borg_local_path=local_path,
extra_environment=environment.make_environment(storage_config),
)

61
borgmatic/borg/rinfo.py Normal file
View File

@ -0,0 +1,61 @@
import logging
import borgmatic.logger
from borgmatic.borg import environment, feature, flags
from borgmatic.execute import execute_command, execute_command_and_capture_output
logger = logging.getLogger(__name__)
def display_repository_info(
repository_path,
storage_config,
local_borg_version,
rinfo_arguments,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, a storage config dict, the local Borg version, and the
arguments to the rinfo action, display summary information for the Borg repository or return
JSON summary information.
'''
borgmatic.logger.add_custom_log_levels()
lock_wait = storage_config.get('lock_wait', None)
full_command = (
(local_path,)
+ (
('rinfo',)
if feature.available(feature.Feature.RINFO, local_borg_version)
else ('info',)
)
+ (
('--info',)
if logger.getEffectiveLevel() == logging.INFO and not rinfo_arguments.json
else ()
)
+ (
('--debug', '--show-rc')
if logger.isEnabledFor(logging.DEBUG) and not rinfo_arguments.json
else ()
)
+ flags.make_flags('remote-path', remote_path)
+ flags.make_flags('lock-wait', lock_wait)
+ (('--json',) if rinfo_arguments.json else ())
+ flags.make_repository_flags(repository_path, local_borg_version)
)
extra_environment = environment.make_environment(storage_config)
if rinfo_arguments.json:
return execute_command_and_capture_output(
full_command, extra_environment=extra_environment,
)
else:
execute_command(
full_command,
output_log_level=logging.ANSWER,
borg_local_path=local_path,
extra_environment=extra_environment,
)

143
borgmatic/borg/rlist.py Normal file
View File

@ -0,0 +1,143 @@
import logging
import borgmatic.logger
from borgmatic.borg import environment, feature, flags
from borgmatic.execute import execute_command, execute_command_and_capture_output
logger = logging.getLogger(__name__)
def resolve_archive_name(
repository_path,
archive,
storage_config,
local_borg_version,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, an archive name, a storage config dict, a local Borg
path, and a remote Borg path, simply return the archive name. But if the archive name is
"latest", then instead introspect the repository for the latest archive and return its name.
Raise ValueError if "latest" is given but there are no archives in the repository.
'''
if archive != 'latest':
return archive
lock_wait = storage_config.get('lock_wait', None)
full_command = (
(
local_path,
'rlist' if feature.available(feature.Feature.RLIST, local_borg_version) else 'list',
)
+ flags.make_flags('remote-path', remote_path)
+ flags.make_flags('lock-wait', lock_wait)
+ flags.make_flags('last', 1)
+ ('--short',)
+ flags.make_repository_flags(repository_path, local_borg_version)
)
output = execute_command_and_capture_output(
full_command, extra_environment=environment.make_environment(storage_config),
)
try:
latest_archive = output.strip().splitlines()[-1]
except IndexError:
raise ValueError('No archives found in the repository')
logger.debug(f'{repository_path}: Latest archive is {latest_archive}')
return latest_archive
MAKE_FLAGS_EXCLUDES = ('repository', 'prefix', 'match_archives')
def make_rlist_command(
repository_path,
storage_config,
local_borg_version,
rlist_arguments,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, a storage config dict, the local Borg version, the
arguments to the rlist action, and local and remote Borg paths, return a command as a tuple to
list archives within a repository.
'''
lock_wait = storage_config.get('lock_wait', None)
return (
(
local_path,
'rlist' if feature.available(feature.Feature.RLIST, local_borg_version) else 'list',
)
+ (
('--info',)
if logger.getEffectiveLevel() == logging.INFO and not rlist_arguments.json
else ()
)
+ (
('--debug', '--show-rc')
if logger.isEnabledFor(logging.DEBUG) and not rlist_arguments.json
else ()
)
+ flags.make_flags('remote-path', remote_path)
+ flags.make_flags('lock-wait', lock_wait)
+ (
(
flags.make_flags('match-archives', f'sh:{rlist_arguments.prefix}*')
if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version)
else flags.make_flags('glob-archives', f'{rlist_arguments.prefix}*')
)
if rlist_arguments.prefix
else (
flags.make_match_archives_flags(
rlist_arguments.match_archives or storage_config.get('match_archives'),
storage_config.get('archive_name_format'),
local_borg_version,
)
)
)
+ flags.make_flags_from_arguments(rlist_arguments, excludes=MAKE_FLAGS_EXCLUDES)
+ flags.make_repository_flags(repository_path, local_borg_version)
)
def list_repository(
repository_path,
storage_config,
local_borg_version,
rlist_arguments,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, a storage config dict, the local Borg version, the
arguments to the list action, and local and remote Borg paths, display the output of listing
Borg archives in the given repository (or return JSON output).
'''
borgmatic.logger.add_custom_log_levels()
borg_environment = environment.make_environment(storage_config)
main_command = make_rlist_command(
repository_path,
storage_config,
local_borg_version,
rlist_arguments,
local_path,
remote_path,
)
if rlist_arguments.json:
return execute_command_and_capture_output(main_command, extra_environment=borg_environment)
else:
execute_command(
main_command,
output_log_level=logging.ANSWER,
borg_local_path=local_path,
extra_environment=borg_environment,
)

1
borgmatic/borg/state.py Normal file
View File

@ -0,0 +1 @@
DEFAULT_BORGMATIC_SOURCE_DIRECTORY = '~/.borgmatic'

View File

@ -0,0 +1,57 @@
import logging
import borgmatic.logger
from borgmatic.borg import environment, flags
from borgmatic.execute import DO_NOT_CAPTURE, execute_command
logger = logging.getLogger(__name__)
def transfer_archives(
dry_run,
repository_path,
storage_config,
local_borg_version,
transfer_arguments,
local_path='borg',
remote_path=None,
):
'''
Given a dry-run flag, a local or remote repository path, a storage config dict, the local Borg
version, and the arguments to the transfer action, transfer archives to the given repository.
'''
borgmatic.logger.add_custom_log_levels()
full_command = (
(local_path, 'transfer')
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ flags.make_flags('remote-path', remote_path)
+ flags.make_flags('lock-wait', storage_config.get('lock_wait', None))
+ (
flags.make_flags_from_arguments(
transfer_arguments,
excludes=('repository', 'source_repository', 'archive', 'match_archives'),
)
or (
flags.make_match_archives_flags(
transfer_arguments.match_archives
or transfer_arguments.archive
or storage_config.get('match_archives'),
storage_config.get('archive_name_format'),
local_borg_version,
)
)
)
+ flags.make_repository_flags(repository_path, local_borg_version)
+ flags.make_flags('other-repo', transfer_arguments.source_repository)
+ flags.make_flags('dry-run', dry_run)
)
return execute_command(
full_command,
output_log_level=logging.ANSWER,
output_file=DO_NOT_CAPTURE if transfer_arguments.progress else None,
borg_local_path=local_path,
extra_environment=environment.make_environment(storage_config),
)

28
borgmatic/borg/version.py Normal file
View File

@ -0,0 +1,28 @@
import logging
from borgmatic.borg import environment
from borgmatic.execute import execute_command_and_capture_output
logger = logging.getLogger(__name__)
def local_borg_version(storage_config, local_path='borg'):
'''
Given a storage configuration dict and a local Borg binary path, return a version string for it.
Raise OSError or CalledProcessError if there is a problem running Borg.
Raise ValueError if the version cannot be parsed.
'''
full_command = (
(local_path, '--version')
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
)
output = execute_command_and_capture_output(
full_command, extra_environment=environment.make_environment(storage_config),
)
try:
return output.split(' ')[1].strip()
except IndexError:
raise ValueError('Could not parse Borg version string')
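# "borg --version" prints output like "borg 1.2.4", so the version is the second
# whitespace-separated token:
assert 'borg 1.2.4'.split(' ')[1].strip() == '1.2.4'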

View File

@ -4,28 +4,34 @@ from argparse import Action, ArgumentParser
from borgmatic.config import collect
SUBPARSER_ALIASES = {
'init': ['--init', '-I'],
'prune': ['--prune', '-p'],
'create': ['--create', '-C'],
'check': ['--check', '-k'],
'extract': ['--extract', '-x'],
'export-tar': ['--export-tar'],
'mount': ['--mount', '-m'],
'umount': ['--umount', '-u'],
'restore': ['--restore', '-r'],
'list': ['--list', '-l'],
'info': ['--info', '-i'],
'rcreate': ['init', '-I'],
'prune': ['-p'],
'compact': [],
'create': ['-C'],
'check': ['-k'],
'extract': ['-x'],
'export-tar': [],
'mount': ['-m'],
'umount': ['-u'],
'restore': ['-r'],
'rlist': [],
'list': ['-l'],
'rinfo': [],
'info': ['-i'],
'transfer': [],
'break-lock': [],
'borg': [],
}
def parse_subparser_arguments(unparsed_arguments, subparsers):
'''
Given a sequence of arguments, and a subparsers object as returned by
argparse.ArgumentParser().add_subparsers(), give each requested action's subparser a shot at
parsing all arguments. This allows common arguments like "--repository" to be shared across
multiple subparsers.
Given a sequence of arguments and a dict from subparser name to argparse.ArgumentParser
instance, give each requested action's subparser a shot at parsing all arguments. This allows
common arguments like "--repository" to be shared across multiple subparsers.
Return the result as a dict mapping from subparser name to a parsed namespace of arguments.
Return the result as a tuple of (a dict mapping from subparser name to a parsed namespace of
arguments, a list of remaining arguments not claimed by any subparser).
'''
arguments = collections.OrderedDict()
remaining_arguments = list(unparsed_arguments)
@ -35,11 +41,17 @@ def parse_subparser_arguments(unparsed_arguments, subparsers):
for alias in aliases
}
for subparser_name, subparser in subparsers.choices.items():
if subparser_name not in remaining_arguments:
continue
# If the "borg" action is used, skip all other subparsers. This avoids confusion like
# "borg list" triggering borgmatic's own list action.
if 'borg' in unparsed_arguments:
subparsers = {'borg': subparsers['borg']}
canonical_name = alias_to_subparser_name.get(subparser_name, subparser_name)
for argument in remaining_arguments:
canonical_name = alias_to_subparser_name.get(argument, argument)
subparser = subparsers.get(canonical_name)
if not subparser:
continue
# If a parsed value happens to be the same as the name of a subparser, remove it from the
# remaining arguments. This prevents, for instance, "check --only extract" from triggering
@ -47,59 +59,45 @@ def parse_subparser_arguments(unparsed_arguments, subparsers):
parsed, unused_remaining = subparser.parse_known_args(unparsed_arguments)
for value in vars(parsed).values():
if isinstance(value, str):
if value in subparsers.choices:
if value in subparsers:
remaining_arguments.remove(value)
elif isinstance(value, list):
for item in value:
if item in subparsers.choices:
if item in subparsers:
remaining_arguments.remove(item)
arguments[canonical_name] = parsed
# If no actions are explicitly requested, assume defaults: prune, create, and check.
# If no actions are explicitly requested, assume defaults.
if not arguments and '--help' not in unparsed_arguments and '-h' not in unparsed_arguments:
for subparser_name in ('prune', 'create', 'check'):
subparser = subparsers.choices[subparser_name]
for subparser_name in ('create', 'prune', 'compact', 'check'):
subparser = subparsers[subparser_name]
parsed, unused_remaining = subparser.parse_known_args(unparsed_arguments)
arguments[subparser_name] = parsed
return arguments
def parse_global_arguments(unparsed_arguments, top_level_parser, subparsers):
'''
Given a sequence of arguments, a top-level parser (containing subparsers), and a subparsers
object as returned by argparse.ArgumentParser().add_subparsers(), parse and return any global
arguments as a parsed argparse.Namespace instance.
'''
# Ask each subparser, one by one, to greedily consume arguments. Any arguments that remain
# are global arguments.
remaining_arguments = list(unparsed_arguments)
present_subparser_names = set()
for subparser_name, subparser in subparsers.choices.items():
if subparser_name not in remaining_arguments:
# Now ask each subparser, one by one, to greedily consume arguments.
for subparser_name, subparser in subparsers.items():
if subparser_name not in arguments.keys():
continue
present_subparser_names.add(subparser_name)
subparser = subparsers[subparser_name]
unused_parsed, remaining_arguments = subparser.parse_known_args(remaining_arguments)
# If no actions are explicitly requested, assume defaults: prune, create, and check.
if (
not present_subparser_names
and '--help' not in unparsed_arguments
and '-h' not in unparsed_arguments
):
for subparser_name in ('prune', 'create', 'check'):
subparser = subparsers.choices[subparser_name]
unused_parsed, remaining_arguments = subparser.parse_known_args(remaining_arguments)
# Special case: If "borg" is present in the arguments, consume all arguments after (+1) the
# "borg" action.
if 'borg' in arguments:
borg_options_index = remaining_arguments.index('borg') + 1
arguments['borg'].options = remaining_arguments[borg_options_index:]
remaining_arguments = remaining_arguments[:borg_options_index]
# Remove the subparser names themselves.
for subparser_name in present_subparser_names:
for subparser_name, subparser in subparsers.items():
if subparser_name in remaining_arguments:
remaining_arguments.remove(subparser_name)
return top_level_parser.parse_args(remaining_arguments)
return (arguments, remaining_arguments)
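# A sketch of the new contract (hypothetical invocation, assuming a subparsers dict as
# built elsewhere in this module): each requested action claims its own flags, and
# unclaimed arguments are returned for global parsing:
#
#     arguments, remaining = parse_subparser_arguments(
#         ('prune', '--stats', 'list', '--json'), subparsers
#     )
#     assert set(arguments) == {'prune', 'list'}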
class Extend_action(Action):
@ -116,10 +114,9 @@ class Extend_action(Action):
setattr(namespace, self.dest, list(values))
def parse_arguments(*unparsed_arguments):
def make_parsers():
'''
Given command-line arguments with which this script was invoked, parse the arguments and return
them as a dict mapping from subparser name (or "global") to an argparse.Namespace instance.
Build a top-level parser and its subparsers and return them as a tuple.
'''
config_paths = collect.get_default_config_paths(expand_home=True)
unexpanded_config_paths = collect.get_default_config_paths(expand_home=False)
@ -134,9 +131,7 @@ def parse_arguments(*unparsed_arguments):
nargs='*',
dest='config_paths',
default=config_paths,
help='Configuration filenames or directories, defaults to: {}'.format(
' '.join(unexpanded_config_paths)
),
help=f"Configuration filenames or directories, defaults to: {' '.join(unexpanded_config_paths)}",
)
global_group.add_argument(
'--excludes',
@ -183,10 +178,12 @@ def parse_arguments(*unparsed_arguments):
help='Log verbose progress to monitoring integrations that support logging (from only errors to very verbose: -1, 0, 1, or 2)',
)
global_group.add_argument(
'--log-file',
'--log-file', type=str, help='Write log messages to this file instead of syslog',
)
global_group.add_argument(
'--log-file-format',
type=str,
default=None,
help='Write log messages to this file instead of syslog',
help='Log format string used for log messages written to the log file',
)
global_group.add_argument(
'--override',
@ -196,6 +193,18 @@ def parse_arguments(*unparsed_arguments):
action='extend',
help='One or more configuration file options to override with specified values',
)
global_group.add_argument(
'--no-environment-interpolation',
dest='resolve_env',
action='store_false',
help='Do not resolve environment variables in configuration file',
)
global_group.add_argument(
'--bash-completion',
default=False,
action='store_true',
help='Show bash completion script and exit',
)
global_group.add_argument(
'--version',
dest='version',
@ -207,8 +216,8 @@ def parse_arguments(*unparsed_arguments):
top_level_parser = ArgumentParser(
description='''
Simple, configuration-driven backup software for servers and workstations. If none of
the action options are given, then borgmatic defaults to: prune, create, and check
archives.
the action options are given, then borgmatic defaults to: create, prune, compact, and
check.
''',
parents=[global_parser],
)
@ -216,44 +225,118 @@ def parse_arguments(*unparsed_arguments):
subparsers = top_level_parser.add_subparsers(
title='actions',
metavar='',
help='Specify zero or more actions. Defaults to prune, create, and check. Use --help with action for details:',
help='Specify zero or more actions. Defaults to create, prune, compact, and check. Use --help with action for details:',
)
init_parser = subparsers.add_parser(
'init',
aliases=SUBPARSER_ALIASES['init'],
help='Initialize an empty Borg repository',
description='Initialize an empty Borg repository',
rcreate_parser = subparsers.add_parser(
'rcreate',
aliases=SUBPARSER_ALIASES['rcreate'],
help='Create a new, empty Borg repository',
description='Create a new, empty Borg repository',
add_help=False,
)
init_group = init_parser.add_argument_group('init arguments')
init_group.add_argument(
rcreate_group = rcreate_parser.add_argument_group('rcreate arguments')
rcreate_group.add_argument(
'-e',
'--encryption',
dest='encryption_mode',
help='Borg repository encryption mode',
required=True,
)
init_group.add_argument(
'--append-only',
dest='append_only',
rcreate_group.add_argument(
'--source-repository',
'--other-repo',
metavar='KEY_REPOSITORY',
help='Path to an existing Borg repository whose key material should be reused (Borg 2.x+ only)',
)
rcreate_group.add_argument(
'--repository',
help='Path of the new repository to create (must be already specified in a borgmatic configuration file), defaults to the configured repository if there is only one',
)
rcreate_group.add_argument(
'--copy-crypt-key',
action='store_true',
help='Create an append-only repository',
help='Copy the crypt key used for authenticated encryption from the source repository, defaults to a new random key (Borg 2.x+ only)',
)
init_group.add_argument(
'--storage-quota',
dest='storage_quota',
help='Create a repository with a fixed storage quota',
rcreate_group.add_argument(
'--append-only', action='store_true', help='Create an append-only repository',
)
rcreate_group.add_argument(
'--storage-quota', help='Create a repository with a fixed storage quota',
)
rcreate_group.add_argument(
'--make-parent-dirs',
action='store_true',
help='Create any missing parent directories of the repository directory',
)
rcreate_group.add_argument(
'-h', '--help', action='help', help='Show this help message and exit'
)
transfer_parser = subparsers.add_parser(
'transfer',
aliases=SUBPARSER_ALIASES['transfer'],
help='Transfer archives from one repository to another, optionally upgrading the transferred data (Borg 2.0+ only)',
description='Transfer archives from one repository to another, optionally upgrading the transferred data (Borg 2.0+ only)',
add_help=False,
)
transfer_group = transfer_parser.add_argument_group('transfer arguments')
transfer_group.add_argument(
'--repository',
help='Path of existing destination repository to transfer archives to, defaults to the configured repository if there is only one',
)
transfer_group.add_argument(
'--source-repository',
help='Path of existing source repository to transfer archives from',
required=True,
)
transfer_group.add_argument(
'--archive',
help='Name of single archive to transfer (or "latest"), defaults to transferring all archives',
)
transfer_group.add_argument(
'--upgrader',
help='Upgrader type used to convert the transferred data, e.g. "From12To20" to upgrade data from Borg 1.2 to 2.0 format, defaults to no conversion',
)
transfer_group.add_argument(
'--progress',
default=False,
action='store_true',
help='Display progress as each archive is transferred',
)
transfer_group.add_argument(
'-a',
'--match-archives',
'--glob-archives',
metavar='PATTERN',
help='Only transfer archives with names matching this pattern',
)
transfer_group.add_argument(
'--sort-by', metavar='KEYS', help='Comma-separated list of sorting keys'
)
transfer_group.add_argument(
'--first',
metavar='N',
help='Only transfer first N archives after other filters are applied',
)
transfer_group.add_argument(
'--last', metavar='N', help='Only transfer last N archives after other filters are applied'
)
transfer_group.add_argument(
'-h', '--help', action='help', help='Show this help message and exit'
)
init_group.add_argument('-h', '--help', action='help', help='Show this help message and exit')
prune_parser = subparsers.add_parser(
'prune',
aliases=SUBPARSER_ALIASES['prune'],
help='Prune archives according to the retention policy',
description='Prune archives according to the retention policy',
help='Prune archives according to the retention policy (with Borg 1.2+, run compact afterwards to actually free space)',
description='Prune archives according to the retention policy (with Borg 1.2+, run compact afterwards to actually free space)',
add_help=False,
)
prune_group = prune_parser.add_argument_group('prune arguments')
prune_group.add_argument(
'--repository',
help='Path of specific existing repository to prune (must be already specified in a borgmatic configuration file)',
)
prune_group.add_argument(
'--stats',
dest='stats',
@ -262,18 +345,58 @@ def parse_arguments(*unparsed_arguments):
help='Display statistics of archive',
)
prune_group.add_argument(
'--files', dest='files', default=False, action='store_true', help='Show per-file details'
'--list', dest='list_archives', action='store_true', help='List archives kept/pruned'
)
prune_group.add_argument('-h', '--help', action='help', help='Show this help message and exit')
compact_parser = subparsers.add_parser(
'compact',
aliases=SUBPARSER_ALIASES['compact'],
help='Compact segments to free space (Borg 1.2+, borgmatic 1.5.23+ only)',
description='Compact segments to free space (Borg 1.2+, borgmatic 1.5.23+ only)',
add_help=False,
)
compact_group = compact_parser.add_argument_group('compact arguments')
compact_group.add_argument(
'--repository',
help='Path of specific existing repository to compact (must be already specified in a borgmatic configuration file)',
)
compact_group.add_argument(
'--progress',
dest='progress',
default=False,
action='store_true',
help='Display progress as each segment is compacted',
)
compact_group.add_argument(
'--cleanup-commits',
dest='cleanup_commits',
default=False,
action='store_true',
help='Clean up commit-only 17-byte segment files left behind by Borg 1.1 (flag in Borg 1.2 only)',
)
compact_group.add_argument(
'--threshold',
type=int,
dest='threshold',
help='Minimum saved space percentage threshold for compacting a segment, defaults to 10',
)
compact_group.add_argument(
'-h', '--help', action='help', help='Show this help message and exit'
)
create_parser = subparsers.add_parser(
'create',
aliases=SUBPARSER_ALIASES['create'],
help='Create an archive (actually perform a backup)',
description='Create an archive (actually perform a backup)',
add_help=False,
)
create_group = create_parser.add_argument_group('create arguments')
create_group.add_argument(
'--repository',
help='Path of specific existing repository to backup to (must be already specified in a borgmatic configuration file)',
)
create_group.add_argument(
'--progress',
dest='progress',
@ -289,7 +412,7 @@ def parse_arguments(*unparsed_arguments):
help='Display statistics of archive',
)
create_group.add_argument(
'--list', '--files', dest='list_files', action='store_true', help='Show per-file details'
)
create_group.add_argument(
'--json', dest='json', default=False, action='store_true', help='Output results as JSON'
@ -304,6 +427,10 @@ def parse_arguments(*unparsed_arguments):
add_help=False,
)
check_group = check_parser.add_argument_group('check arguments')
check_group.add_argument(
'--repository',
help='Path of specific existing repository to check (must be already specified in a borgmatic configuration file)',
)
check_group.add_argument(
'--progress',
dest='progress',
@ -316,7 +443,7 @@ def parse_arguments(*unparsed_arguments):
dest='repair',
default=False,
action='store_true',
help='Attempt to repair any inconsistencies found (for interactive use)',
)
check_group.add_argument(
'--only',
@ -324,7 +451,13 @@ def parse_arguments(*unparsed_arguments):
choices=('repository', 'archives', 'data', 'extract'),
dest='only',
action='append',
help='Run a particular consistency check (repository, archives, data, or extract) instead of configured checks (subject to configured frequency, can specify flag multiple times)',
)
check_group.add_argument(
'--force',
default=False,
action='store_true',
help='Ignore configured check frequencies and run checks unconditionally',
)
check_group.add_argument('-h', '--help', action='help', help='Show this help message and exit')
@ -359,10 +492,9 @@ def parse_arguments(*unparsed_arguments):
)
extract_group.add_argument(
'--strip-components',
type=lambda number: number if number == 'all' else int(number),
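# The literal string "all" passes through unchanged; any other value must parse as an integer.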
metavar='NUMBER',
dest='strip_components',
help='Number of leading path components to remove from each extracted path or "all" to strip all leading path components. Skip paths with fewer elements',
)
extract_group.add_argument(
'--progress',
@ -401,14 +533,14 @@ def parse_arguments(*unparsed_arguments):
'--destination',
metavar='PATH',
dest='destination',
help='Path to destination export tar file, or "-" for stdout (but be careful about dirtying output with --verbosity or --files)',
help='Path to destination export tar file, or "-" for stdout (but be careful about dirtying output with --verbosity or --list)',
required=True,
)
export_tar_group.add_argument(
'--tar-filter', help='Name of filter program to pipe data through'
)
export_tar_group.add_argument(
'--list', '--files', dest='list_files', action='store_true', help='Show per-file details'
)
export_tar_group.add_argument(
'--strip-components',
@ -495,50 +627,100 @@ def parse_arguments(*unparsed_arguments):
metavar='NAME',
nargs='+',
dest='databases',
help="Names of databases to restore from archive, defaults to all databases. Note that any databases to restore must be defined in borgmatic's configuration",
)
restore_group.add_argument(
'-h', '--help', action='help', help='Show this help message and exit'
)
rlist_parser = subparsers.add_parser(
'rlist',
aliases=SUBPARSER_ALIASES['rlist'],
help='List repository',
description='List the archives in a repository',
add_help=False,
)
rlist_group = rlist_parser.add_argument_group('rlist arguments')
rlist_group.add_argument(
'--repository', help='Path of repository to list, defaults to the configured repositories',
)
rlist_group.add_argument(
'--short', default=False, action='store_true', help='Output only archive names'
)
rlist_group.add_argument('--format', help='Format for archive listing')
rlist_group.add_argument(
'--json', default=False, action='store_true', help='Output results as JSON'
)
rlist_group.add_argument(
'-P', '--prefix', help='Deprecated. Only list archive names starting with this prefix'
)
rlist_group.add_argument(
'-a',
'--match-archives',
'--glob-archives',
metavar='PATTERN',
help='Only list archive names matching this pattern',
)
rlist_group.add_argument(
'--sort-by', metavar='KEYS', help='Comma-separated list of sorting keys'
)
rlist_group.add_argument(
'--first', metavar='N', help='List first N archives after other filters are applied'
)
rlist_group.add_argument(
'--last', metavar='N', help='List last N archives after other filters are applied'
)
rlist_group.add_argument('-h', '--help', action='help', help='Show this help message and exit')
list_parser = subparsers.add_parser(
'list',
aliases=SUBPARSER_ALIASES['list'],
help='List archive',
description='List the files in an archive or search for a file across archives',
add_help=False,
)
list_group = list_parser.add_argument_group('list arguments')
list_group.add_argument(
'--repository',
help='Path of repository containing archive to list, defaults to the configured repositories',
)
list_group.add_argument('--archive', help='Name of the archive to list (or "latest")')
list_group.add_argument(
'--path',
metavar='PATH',
nargs='+',
dest='paths',
help='Paths or patterns to list from a single selected archive (via "--archive"), defaults to listing the entire archive',
)
list_group.add_argument(
'--find',
metavar='PATH',
nargs='+',
dest='find_paths',
help='Partial paths or patterns to search for and list across multiple archives',
)
list_group.add_argument(
'--short', default=False, action='store_true', help='Output only path names'
)
list_group.add_argument('--format', help='Format for file listing')
list_group.add_argument(
'--json', default=False, action='store_true', help='Output results as JSON'
)
list_group.add_argument(
'-P', '--prefix', help='Deprecated. Only list archive names starting with this prefix'
)
list_group.add_argument(
'-a',
'--match-archives',
'--glob-archives',
metavar='PATTERN',
help='Only list archive names matching this pattern',
)
list_group.add_argument(
'--successful',
default=True,
action='store_true',
help='Deprecated; no effect. Newer versions of Borg show successful (non-checkpoint) archives by default.',
)
list_group.add_argument(
'--sort-by', metavar='KEYS', help='Comma-separated list of sorting keys'
@ -563,30 +745,50 @@ def parse_arguments(*unparsed_arguments):
)
list_group.add_argument('-h', '--help', action='help', help='Show this help message and exit')
rinfo_parser = subparsers.add_parser(
'rinfo',
aliases=SUBPARSER_ALIASES['rinfo'],
help='Show repository summary information such as disk space used',
description='Show repository summary information such as disk space used',
add_help=False,
)
rinfo_group = rinfo_parser.add_argument_group('rinfo arguments')
rinfo_group.add_argument(
'--repository',
help='Path of repository to show info for, defaults to the configured repository if there is only one',
)
rinfo_group.add_argument(
'--json', dest='json', default=False, action='store_true', help='Output results as JSON'
)
rinfo_group.add_argument('-h', '--help', action='help', help='Show this help message and exit')
info_parser = subparsers.add_parser(
'info',
aliases=SUBPARSER_ALIASES['info'],
help='Show archive summary information such as disk space used',
description='Show archive summary information such as disk space used',
add_help=False,
)
info_group = info_parser.add_argument_group('info arguments')
info_group.add_argument(
'--repository',
help='Path of repository containing archive to show info for, defaults to the configured repository if there is only one',
)
info_group.add_argument('--archive', help='Name of archive to show info for (or "latest")')
info_group.add_argument(
'--json', dest='json', default=False, action='store_true', help='Output results as JSON'
)
info_group.add_argument(
'-P',
'--prefix',
help='Deprecated. Only show info for archive names starting with this prefix',
)
info_group.add_argument(
'-a',
'--match-archives',
'--glob-archives',
metavar='PATTERN',
help='Only show info for archive names matching this pattern',
)
info_group.add_argument(
'--sort-by', metavar='KEYS', help='Comma-separated list of sorting keys'
@ -601,26 +803,102 @@ def parse_arguments(*unparsed_arguments):
)
info_group.add_argument('-h', '--help', action='help', help='Show this help message and exit')
break_lock_parser = subparsers.add_parser(
'break-lock',
aliases=SUBPARSER_ALIASES['break-lock'],
help='Break the repository and cache locks left behind by Borg aborting',
description='Break Borg repository and cache locks left behind by Borg aborting',
add_help=False,
)
break_lock_group = break_lock_parser.add_argument_group('break-lock arguments')
break_lock_group.add_argument(
'--repository',
help='Path of repository to break the lock for, defaults to the configured repository if there is only one',
)
break_lock_group.add_argument(
'-h', '--help', action='help', help='Show this help message and exit'
)
borg_parser = subparsers.add_parser(
'borg',
aliases=SUBPARSER_ALIASES['borg'],
help='Run an arbitrary Borg command',
description="Run an arbitrary Borg command based on borgmatic's configuration",
add_help=False,
)
borg_group = borg_parser.add_argument_group('borg arguments')
borg_group.add_argument(
'--repository',
help='Path of repository to pass to Borg, defaults to the configured repositories',
)
borg_group.add_argument('--archive', help='Name of archive to pass to Borg (or "latest")')
borg_group.add_argument(
'--',
metavar='OPTION',
dest='options',
nargs='+',
help='Options to pass to Borg, command first ("create", "list", etc). "--" is optional. To specify the repository or the archive, you must use --repository or --archive instead of providing them here.',
)
borg_group.add_argument('-h', '--help', action='help', help='Show this help message and exit')
return top_level_parser, subparsers
def parse_arguments(*unparsed_arguments):
'''
Given command-line arguments with which this script was invoked, parse the arguments and return
them as a dict mapping from subparser name (or "global") to an argparse.Namespace instance.
'''
top_level_parser, subparsers = make_parsers()
arguments, remaining_arguments = parse_subparser_arguments(
unparsed_arguments, subparsers.choices
)
arguments['global'] = top_level_parser.parse_args(remaining_arguments)
if arguments['global'].excludes_filename:
raise ValueError(
'The --excludes flag has been replaced with exclude_patterns in configuration.'
)
if 'init' in arguments and arguments['global'].dry_run:
raise ValueError('The init action cannot be used with the --dry-run option')
if 'list' in arguments and arguments['list'].glob_archives and arguments['list'].successful:
raise ValueError('The --glob-archives and --successful options cannot be used together')
if 'create' in arguments and arguments['create'].list_files and arguments['create'].progress:
raise ValueError(
'With the create action, only one of --list (--files) and --progress flags can be used.'
)
if (
('list' in arguments and 'rinfo' in arguments and arguments['list'].json)
or ('list' in arguments and 'info' in arguments and arguments['list'].json)
or ('rinfo' in arguments and 'info' in arguments and arguments['rinfo'].json)
):
raise ValueError('With the --json flag, multiple actions cannot be used together.')
if (
'transfer' in arguments
and arguments['transfer'].archive
and arguments['transfer'].match_archives
):
raise ValueError(
'With the transfer action, only one of --archive and --match-archives flags can be used.'
)
if 'list' in arguments and (arguments['list'].prefix and arguments['list'].match_archives):
raise ValueError(
'With the list action, only one of --prefix or --match-archives flags can be used.'
)
if 'rlist' in arguments and (arguments['rlist'].prefix and arguments['rlist'].match_archives):
raise ValueError(
'With the rlist action, only one of --prefix or --match-archives flags can be used.'
)
if 'info' in arguments and (
(arguments['info'].archive and arguments['info'].prefix)
or (arguments['info'].archive and arguments['info'].match_archives)
or (arguments['info'].prefix and arguments['info'].match_archives)
):
raise ValueError(
'With the info action, only one of --archive, --prefix, or --match-archives flags can be used.'
)
return arguments
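# Illustrative usage sketch (not part of the original source; defaults assumed from the
# top-level parser, which isn't shown here):
#
#     arguments = parse_arguments('create', '--stats')
#     'create' in arguments        # -> True
#     arguments['create'].stats    # -> True
#     arguments['global'].dry_run  # -> False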


@ -1,29 +1,38 @@
import collections
import copy
import json
import logging
import os
import sys
import time
from queue import Queue
from subprocess import CalledProcessError
import colorama
import pkg_resources
from borgmatic.borg import check as borg_check
from borgmatic.borg import create as borg_create
from borgmatic.borg import environment as borg_environment
from borgmatic.borg import export_tar as borg_export_tar
from borgmatic.borg import extract as borg_extract
from borgmatic.borg import info as borg_info
from borgmatic.borg import init as borg_init
from borgmatic.borg import list as borg_list
from borgmatic.borg import mount as borg_mount
from borgmatic.borg import prune as borg_prune
import borgmatic.actions.borg
import borgmatic.actions.break_lock
import borgmatic.actions.check
import borgmatic.actions.compact
import borgmatic.actions.create
import borgmatic.actions.export_tar
import borgmatic.actions.extract
import borgmatic.actions.info
import borgmatic.actions.list
import borgmatic.actions.mount
import borgmatic.actions.prune
import borgmatic.actions.rcreate
import borgmatic.actions.restore
import borgmatic.actions.rinfo
import borgmatic.actions.rlist
import borgmatic.actions.transfer
import borgmatic.commands.completion
from borgmatic.borg import umount as borg_umount
from borgmatic.borg import version as borg_version
from borgmatic.commands.arguments import parse_arguments
from borgmatic.config import checks, collect, convert, validate
from borgmatic.hooks import command, dispatch, monitor
from borgmatic.logger import add_custom_log_levels, configure_logging, should_do_markup
from borgmatic.signals import configure_signals
from borgmatic.verbosity import verbosity_to_log_level
@ -35,8 +44,8 @@ LEGACY_CONFIG_PATH = '/etc/borgmatic/config'
def run_configuration(config_filename, config, arguments):
'''
Given a config filename, the corresponding parsed config dict, and command-line arguments as a
dict from subparser name to a namespace of parsed arguments, execute the defined create, prune,
compact, check, and/or other actions.
Yield a combination of:
@ -51,14 +60,21 @@ def run_configuration(config_filename, config, arguments):
local_path = location.get('local_path', 'borg')
remote_path = location.get('remote_path')
borg_environment.initialize(storage)
retries = storage.get('retries', 0)
retry_wait = storage.get('retry_wait', 0)
encountered_error = None
error_repository = ''
using_primary_action = {'create', 'prune', 'compact', 'check'}.intersection(arguments)
monitoring_log_level = verbosity_to_log_level(global_arguments.monitoring_verbosity)
try:
local_borg_version = borg_version.local_borg_version(storage, local_path)
except (OSError, CalledProcessError, ValueError) as error:
yield from log_error_records(f'{config_filename}: Error getting local Borg version', error)
return
try:
if using_primary_action:
dispatch.call_hooks(
'initialize_monitor',
hooks,
@ -67,39 +83,7 @@ def run_configuration(config_filename, config, arguments):
monitoring_log_level,
global_arguments.dry_run,
)
if 'prune' in arguments:
command.execute_hook(
hooks.get('before_prune'),
hooks.get('umask'),
config_filename,
'pre-prune',
global_arguments.dry_run,
)
if 'create' in arguments:
command.execute_hook(
hooks.get('before_backup'),
hooks.get('umask'),
config_filename,
'pre-backup',
global_arguments.dry_run,
)
if 'check' in arguments:
command.execute_hook(
hooks.get('before_check'),
hooks.get('umask'),
config_filename,
'pre-check',
global_arguments.dry_run,
)
if 'extract' in arguments:
command.execute_hook(
hooks.get('before_extract'),
hooks.get('umask'),
config_filename,
'pre-extract',
global_arguments.dry_run,
)
if using_primary_action:
dispatch.call_hooks(
'ping_monitor',
hooks,
@ -114,15 +98,24 @@ def run_configuration(config_filename, config, arguments):
return
encountered_error = error
yield from log_error_records(f'{config_filename}: Error pinging monitor', error)
if not encountered_error:
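# Each queue entry is a (repository, retry_count) pair; failed repositories are re-queued
# below until the configured "retries" count is exhausted.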
repo_queue = Queue()
for repo in location['repositories']:
repo_queue.put((repo, 0),)
while not repo_queue.empty():
repository, retry_num = repo_queue.get()
logger.debug(f'{repository["path"]}: Running actions for repository')
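# Linear backoff: each retry waits retry_wait seconds multiplied by the retry number.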
timeout = retry_num * retry_wait
if timeout:
logger.warning(f'{config_filename}: Sleeping {timeout}s before next retry')
time.sleep(timeout)
try:
yield from run_actions(
arguments=arguments,
config_filename=config_filename,
location=location,
storage=storage,
retention=retention,
@ -130,58 +123,56 @@ def run_configuration(config_filename, config, arguments):
hooks=hooks,
local_path=local_path,
remote_path=remote_path,
local_borg_version=local_borg_version,
repository=repository,
)
except (OSError, CalledProcessError, ValueError) as error:
if retry_num < retries:
repo_queue.put((repository, retry_num + 1),)
tuple( # Consume the generator so as to trigger logging.
log_error_records(
f'{repository["path"]}: Error running actions for repository',
error,
levelno=logging.WARNING,
log_command_error_output=True,
)
)
logger.warning(
f'{config_filename}: Retrying... attempt {retry_num + 1}/{retries}'
)
continue
if command.considered_soft_failure(config_filename, error):
return
yield from log_error_records(
f'{repository["path"]}: Error running actions for repository', error
)
encountered_error = error
error_repository = repository['path']
try:
if using_primary_action:
# Send logs to the monitoring services irrespective of error.
dispatch.call_hooks(
'ping_monitor',
hooks,
config_filename,
monitor.MONITOR_HOOK_NAMES,
monitor.State.LOG,
monitoring_log_level,
global_arguments.dry_run,
)
except (OSError, CalledProcessError) as error:
if command.considered_soft_failure(config_filename, error):
return
encountered_error = error
yield from log_error_records(f'{repository["path"]}: Error pinging monitor', error)
if not encountered_error:
try:
if 'prune' in arguments:
command.execute_hook(
hooks.get('after_prune'),
hooks.get('umask'),
config_filename,
'post-prune',
global_arguments.dry_run,
)
if 'create' in arguments:
dispatch.call_hooks(
'remove_database_dumps',
hooks,
config_filename,
dump.DATABASE_HOOK_NAMES,
location,
global_arguments.dry_run,
)
command.execute_hook(
hooks.get('after_backup'),
hooks.get('umask'),
config_filename,
'post-backup',
global_arguments.dry_run,
)
if 'check' in arguments:
command.execute_hook(
hooks.get('after_check'),
hooks.get('umask'),
config_filename,
'post-check',
global_arguments.dry_run,
)
if 'extract' in arguments:
command.execute_hook(
hooks.get('after_extract'),
hooks.get('umask'),
config_filename,
'post-extract',
global_arguments.dry_run,
)
if using_primary_action:
dispatch.call_hooks(
'ping_monitor',
hooks,
@ -204,11 +195,9 @@ def run_configuration(config_filename, config, arguments):
return
encountered_error = error
yield from log_error_records(f'{config_filename}: Error pinging monitor', error)
if encountered_error and using_primary_action:
try:
command.execute_hook(
hooks.get('on_error'),
@ -241,14 +230,13 @@ def run_configuration(config_filename, config, arguments):
if command.considered_soft_failure(config_filename, error):
return
yield from log_error_records(f'{config_filename}: Error running on-error hook', error)
def run_actions(
*,
arguments,
config_filename,
location,
storage,
retention,
@ -256,296 +244,209 @@ def run_actions(
hooks,
local_path,
remote_path,
local_borg_version,
repository,
):
'''
Given parsed command-line arguments as an argparse.ArgumentParser instance, the configuration
filename, several different configuration dicts, local and remote paths to Borg, a local Borg
version string, and a repository name, run all actions from the command-line arguments on the
given repository.
Yield JSON output strings from executing any actions that produce JSON.
Raise OSError or subprocess.CalledProcessError if an error occurs running a command for an
action or a hook. Raise ValueError if the arguments or configuration passed to action are
invalid.
'''
add_custom_log_levels()
repository_path = os.path.expanduser(repository['path'])
global_arguments = arguments['global']
dry_run_label = ' (dry run; not making any changes)' if global_arguments.dry_run else ''
if 'init' in arguments:
logger.info('{}: Initializing repository'.format(repository))
borg_init.initialize_repository(
repository,
storage,
arguments['init'].encryption_mode,
arguments['init'].append_only,
arguments['init'].storage_quota,
local_path=local_path,
remote_path=remote_path,
)
if 'prune' in arguments:
logger.info('{}: Pruning archives{}'.format(repository, dry_run_label))
borg_prune.prune_archives(
global_arguments.dry_run,
repository,
storage,
retention,
local_path=local_path,
remote_path=remote_path,
stats=arguments['prune'].stats,
files=arguments['prune'].files,
)
if 'create' in arguments:
logger.info('{}: Creating archive{}'.format(repository, dry_run_label))
dispatch.call_hooks(
'remove_database_dumps',
hooks,
repository,
dump.DATABASE_HOOK_NAMES,
location,
global_arguments.dry_run,
)
active_dumps = dispatch.call_hooks(
'dump_databases',
hooks,
repository,
dump.DATABASE_HOOK_NAMES,
location,
global_arguments.dry_run,
)
stream_processes = [process for processes in active_dumps.values() for process in processes]
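# Context variables made available for interpolation in before/after command hooks.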
hook_context = {
'repository': repository_path,
# Deprecated: For backwards compatibility with borgmatic < 1.6.0.
'repositories': ','.join([repo['path'] for repo in location['repositories']]),
'log_file': global_arguments.log_file if global_arguments.log_file else '',
}
json_output = borg_create.create_archive(
global_arguments.dry_run,
repository,
location,
storage,
local_path=local_path,
remote_path=remote_path,
progress=arguments['create'].progress,
stats=arguments['create'].stats,
json=arguments['create'].json,
files=arguments['create'].files,
stream_processes=stream_processes,
)
if json_output:
yield json.loads(json_output)
command.execute_hook(
hooks.get('before_actions'),
hooks.get('umask'),
config_filename,
'pre-actions',
global_arguments.dry_run,
**hook_context,
)
if 'check' in arguments and checks.repository_enabled_for_checks(repository, consistency):
logger.info('{}: Running consistency checks'.format(repository))
borg_check.check_archives(
repository,
storage,
consistency,
local_path=local_path,
remote_path=remote_path,
progress=arguments['check'].progress,
repair=arguments['check'].repair,
only_checks=arguments['check'].only,
)
if 'extract' in arguments:
if arguments['extract'].repository is None or validate.repositories_match(
repository, arguments['extract'].repository
):
logger.info(
'{}: Extracting archive {}'.format(repository, arguments['extract'].archive)
)
borg_extract.extract_archive(
global_arguments.dry_run,
for (action_name, action_arguments) in arguments.items():
if action_name == 'rcreate':
borgmatic.actions.rcreate.run_rcreate(
repository,
storage,
local_borg_version,
action_arguments,
global_arguments,
local_path,
remote_path,
)
elif action_name == 'transfer':
borgmatic.actions.transfer.run_transfer(
repository,
storage,
local_borg_version,
action_arguments,
global_arguments,
local_path,
remote_path,
)
elif action_name == 'create':
yield from borgmatic.actions.create.run_create(
config_filename,
repository,
borg_list.resolve_archive_name(
repository, arguments['extract'].archive, storage, local_path, remote_path
),
arguments['extract'].paths,
location,
storage,
local_path=local_path,
remote_path=remote_path,
destination_path=arguments['extract'].destination,
strip_components=arguments['extract'].strip_components,
progress=arguments['extract'].progress,
)
if 'export-tar' in arguments:
if arguments['export-tar'].repository is None or validate.repositories_match(
repository, arguments['export-tar'].repository
):
logger.info(
'{}: Exporting archive {} as tar file'.format(
repository, arguments['export-tar'].archive
)
)
borg_export_tar.export_tar_archive(
global_arguments.dry_run,
repository,
borg_list.resolve_archive_name(
repository, arguments['export-tar'].archive, storage, local_path, remote_path
),
arguments['export-tar'].paths,
arguments['export-tar'].destination,
storage,
local_path=local_path,
remote_path=remote_path,
tar_filter=arguments['export-tar'].tar_filter,
files=arguments['export-tar'].files,
strip_components=arguments['export-tar'].strip_components,
)
if 'mount' in arguments:
if arguments['mount'].repository is None or validate.repositories_match(
repository, arguments['mount'].repository
):
if arguments['mount'].archive:
logger.info(
'{}: Mounting archive {}'.format(repository, arguments['mount'].archive)
)
else:
logger.info('{}: Mounting repository'.format(repository))
borg_mount.mount_archive(
repository,
borg_list.resolve_archive_name(
repository, arguments['mount'].archive, storage, local_path, remote_path
),
arguments['mount'].mount_point,
arguments['mount'].paths,
arguments['mount'].foreground,
arguments['mount'].options,
storage,
local_path=local_path,
remote_path=remote_path,
)
if 'restore' in arguments:
if arguments['restore'].repository is None or validate.repositories_match(
repository, arguments['restore'].repository
):
logger.info(
'{}: Restoring databases from archive {}'.format(
repository, arguments['restore'].archive
)
)
dispatch.call_hooks(
'remove_database_dumps',
hooks,
hook_context,
local_borg_version,
action_arguments,
global_arguments,
dry_run_label,
local_path,
remote_path,
)
elif action_name == 'prune':
borgmatic.actions.prune.run_prune(
config_filename,
repository,
dump.DATABASE_HOOK_NAMES,
location,
global_arguments.dry_run,
)
restore_names = arguments['restore'].databases or []
if 'all' in restore_names:
restore_names = []
archive_name = borg_list.resolve_archive_name(
repository, arguments['restore'].archive, storage, local_path, remote_path
)
found_names = set()
for hook_name, per_hook_restore_databases in hooks.items():
if hook_name not in dump.DATABASE_HOOK_NAMES:
continue
for restore_database in per_hook_restore_databases:
database_name = restore_database['name']
if restore_names and database_name not in restore_names:
continue
found_names.add(database_name)
dump_pattern = dispatch.call_hooks(
'make_database_dump_pattern',
hooks,
repository,
dump.DATABASE_HOOK_NAMES,
location,
database_name,
)[hook_name]
# Kick off a single database extract to stdout.
extract_process = borg_extract.extract_archive(
dry_run=global_arguments.dry_run,
repository=repository,
archive=archive_name,
paths=dump.convert_glob_patterns_to_borg_patterns([dump_pattern]),
location_config=location,
storage_config=storage,
local_path=local_path,
remote_path=remote_path,
destination_path='/',
# A directory format dump isn't a single file, and therefore can't extract
# to stdout. In this case, the extract_process return value is None.
extract_to_stdout=bool(restore_database.get('format') != 'directory'),
)
# Run a single database restore, consuming the extract stdout (if any).
dispatch.call_hooks(
'restore_database_dump',
{hook_name: [restore_database]},
repository,
dump.DATABASE_HOOK_NAMES,
location,
global_arguments.dry_run,
extract_process,
)
dispatch.call_hooks(
'remove_database_dumps',
storage,
retention,
hooks,
repository,
dump.DATABASE_HOOK_NAMES,
location,
global_arguments.dry_run,
hook_context,
local_borg_version,
action_arguments,
global_arguments,
dry_run_label,
local_path,
remote_path,
)
if not restore_names and not found_names:
raise ValueError('No databases were found to restore')
missing_names = sorted(set(restore_names) - found_names)
if missing_names:
raise ValueError(
'Cannot restore database(s) {} missing from borgmatic\'s configuration'.format(
', '.join(missing_names)
)
elif action_name == 'compact':
borgmatic.actions.compact.run_compact(
config_filename,
repository,
storage,
retention,
hooks,
hook_context,
local_borg_version,
action_arguments,
global_arguments,
dry_run_label,
local_path,
remote_path,
)
elif action_name == 'check':
if checks.repository_enabled_for_checks(repository, consistency):
borgmatic.actions.check.run_check(
config_filename,
repository,
location,
storage,
consistency,
hooks,
hook_context,
local_borg_version,
action_arguments,
global_arguments,
local_path,
remote_path,
)
if 'list' in arguments:
if arguments['list'].repository is None or validate.repositories_match(
repository, arguments['list'].repository
):
list_arguments = copy.copy(arguments['list'])
if not list_arguments.json:
logger.warning('{}: Listing archives'.format(repository))
list_arguments.archive = borg_list.resolve_archive_name(
repository, list_arguments.archive, storage, local_path, remote_path
elif action_name == 'extract':
borgmatic.actions.extract.run_extract(
config_filename,
repository,
location,
storage,
hooks,
hook_context,
local_borg_version,
action_arguments,
global_arguments,
local_path,
remote_path,
)
json_output = borg_list.list_archives(
elif action_name == 'export-tar':
borgmatic.actions.export_tar.run_export_tar(
repository,
storage,
list_arguments=list_arguments,
local_path=local_path,
remote_path=remote_path,
local_borg_version,
action_arguments,
global_arguments,
local_path,
remote_path,
)
if json_output:
yield json.loads(json_output)
if 'info' in arguments:
if arguments['info'].repository is None or validate.repositories_match(
repository, arguments['info'].repository
):
info_arguments = copy.copy(arguments['info'])
if not info_arguments.json:
logger.warning('{}: Displaying summary info for archives'.format(repository))
info_arguments.archive = borg_list.resolve_archive_name(
repository, info_arguments.archive, storage, local_path, remote_path
)
json_output = borg_info.display_archives_info(
elif action_name == 'mount':
borgmatic.actions.mount.run_mount(
repository,
storage,
info_arguments=info_arguments,
local_path=local_path,
remote_path=remote_path,
local_borg_version,
arguments['mount'],
local_path,
remote_path,
)
if json_output:
yield json.loads(json_output)
elif action_name == 'restore':
borgmatic.actions.restore.run_restore(
repository,
location,
storage,
hooks,
local_borg_version,
action_arguments,
global_arguments,
local_path,
remote_path,
)
elif action_name == 'rlist':
yield from borgmatic.actions.rlist.run_rlist(
repository, storage, local_borg_version, action_arguments, local_path, remote_path,
)
elif action_name == 'list':
yield from borgmatic.actions.list.run_list(
repository, storage, local_borg_version, action_arguments, local_path, remote_path,
)
elif action_name == 'rinfo':
yield from borgmatic.actions.rinfo.run_rinfo(
repository, storage, local_borg_version, action_arguments, local_path, remote_path,
)
elif action_name == 'info':
yield from borgmatic.actions.info.run_info(
repository, storage, local_borg_version, action_arguments, local_path, remote_path,
)
elif action_name == 'break-lock':
borgmatic.actions.break_lock.run_break_lock(
repository,
storage,
local_borg_version,
arguments['break-lock'],
local_path,
remote_path,
)
elif action_name == 'borg':
borgmatic.actions.borg.run_borg(
repository, storage, local_borg_version, action_arguments, local_path, remote_path,
)
command.execute_hook(
hooks.get('after_actions'),
hooks.get('umask'),
config_filename,
'post-actions',
global_arguments.dry_run,
**hook_context,
)
def load_configurations(config_filenames, overrides=None, resolve_env=True):
'''
Given a sequence of configuration filenames, load and validate each configuration file. Return
the results as a tuple of: dict of configuration filename to corresponding parsed configuration,
@ -558,8 +459,21 @@ def load_configurations(config_filenames, overrides=None):
# Parse and load each configuration file.
for config_filename in config_filenames:
try:
configs[config_filename], parse_logs = validate.parse_configuration(
config_filename, validate.schema_filename(), overrides, resolve_env
)
logs.extend(parse_logs)
except PermissionError:
logs.extend(
[
logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: Insufficient permissions to read configuration file',
)
),
]
)
except (ValueError, OSError, validate.Validation_error) as error:
logs.extend(
@ -568,7 +482,7 @@ def load_configurations(config_filenames, overrides=None):
dict(
levelno=logging.CRITICAL,
levelname='CRITICAL',
msg=f'{config_filename}: Error parsing configuration file',
)
),
logging.makeLogRecord(
@ -593,28 +507,39 @@ def log_record(suppress_log=False, **kwargs):
return record
def log_error_records(
message, error=None, levelno=logging.CRITICAL, log_command_error_output=False
):
'''
Given error message text, an optional exception object, an optional log level, and whether to
log the error output of a CalledProcessError (if any), log error summary information and also
yield it as a series of logging.LogRecord instances.
Note that because the logs are yielded as a generator, logs won't get logged unless you consume
the generator output.
'''
level_name = logging._levelToName[levelno]
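# logging._levelToName maps a numeric level back to its name, e.g. logging.WARNING -> 'WARNING'.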
if not error:
yield log_record(levelno=levelno, levelname=level_name, msg=message)
return
try:
raise error
except CalledProcessError as error:
yield log_record(levelno=levelno, levelname=level_name, msg=message)
if error.output:
# Suppress these logs for now and save full error output for the log summary at the end.
yield log_record(
levelno=levelno,
levelname=level_name,
msg=error.output,
suppress_log=not log_command_error_output,
)
yield log_record(levelno=levelno, levelname=level_name, msg=error)
except (ValueError, OSError) as error:
yield log_record(levelno=levelno, levelname=level_name, msg=message)
yield log_record(levelno=levelno, levelname=level_name, msg=error)
except: # noqa: E722
# Raising above only as a means of determining the error type. Swallow the exception here
# because we don't want the exception to propagate out of this function.
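# Illustrative usage sketch (not part of the original source): callers that only want the
# logging side effect must consume the generator, e.g.:
#
#     tuple(log_error_records('Error running actions for repository', error))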
@ -640,27 +565,25 @@ def collect_configuration_run_summary_logs(configs, arguments):
any, to stdout.
'''
# Run cross-file validation checks.
repository = None
for action_name, action_arguments in arguments.items():
if hasattr(action_arguments, 'repository'):
repository = getattr(action_arguments, 'repository')
break
try:
if 'extract' in arguments or 'mount' in arguments:
validate.guard_single_repository_selected(repository, configs)
validate.guard_configuration_contains_repository(repository, configs)
except ValueError as error:
yield from log_error_records(str(error))
return
if not configs:
yield from log_error_records(
f"{' '.join(arguments['global'].config_paths)}: No valid configuration files found",
)
return
@ -676,7 +599,7 @@ def collect_configuration_run_summary_logs(configs, arguments):
arguments['global'].dry_run,
)
except (CalledProcessError, ValueError, OSError) as error:
yield from log_error_records('Error running pre-everything hook', error)
return
# Execute the actions corresponding to each configuration file.
@ -686,29 +609,27 @@ def collect_configuration_run_summary_logs(configs, arguments):
error_logs = tuple(result for result in results if isinstance(result, logging.LogRecord))
if error_logs:
yield from log_error_records(f'{config_filename}: An error occurred')
yield from error_logs
else:
yield logging.makeLogRecord(
dict(
levelno=logging.INFO,
levelname='INFO',
msg=f'{config_filename}: Successfully ran configuration file',
)
)
if results:
json_results.extend(results)
if 'umount' in arguments:
logger.info(f"Unmounting mount point {arguments['umount'].mount_point}")
try:
borg_umount.unmount_archive(
mount_point=arguments['umount'].mount_point, local_path=get_local_path(configs),
)
except (CalledProcessError, OSError) as error:
yield from log_error_records('Error unmounting mount point', error)
if json_results:
sys.stdout.write(json.dumps(json_results))
@ -725,7 +646,7 @@ def collect_configuration_run_summary_logs(configs, arguments):
arguments['global'].dry_run,
)
except (CalledProcessError, ValueError, OSError) as error:
yield from log_error_records('Error running post-everything hook', error)
def exit_with_help_link(): # pragma: no cover
@ -750,16 +671,21 @@ def main(): # pragma: no cover
if error.code == 0:
raise error
configure_logging(logging.CRITICAL)
logger.critical(f"Error parsing arguments: {' '.join(sys.argv)}")
exit_with_help_link()
global_arguments = arguments['global']
if global_arguments.version:
print(pkg_resources.require('borgmatic')[0].version)
sys.exit(0)
if global_arguments.bash_completion:
print(borgmatic.commands.completion.bash_completion())
sys.exit(0)
config_filenames = tuple(collect.collect_config_filenames(global_arguments.config_paths))
configs, parse_logs = load_configurations(
config_filenames, global_arguments.overrides, global_arguments.resolve_env
)
any_json_flags = any(
getattr(sub_arguments, 'json', False) for sub_arguments in arguments.values()
@ -775,10 +701,11 @@ def main(): # pragma: no cover
verbosity_to_log_level(global_arguments.log_file_verbosity),
verbosity_to_log_level(global_arguments.monitoring_verbosity),
global_arguments.log_file,
global_arguments.log_file_format,
)
except (FileNotFoundError, PermissionError) as error:
configure_logging(logging.CRITICAL)
logger.critical(f'Error configuring logging: {error}')
exit_with_help_link()
logger.debug('Ensuring legacy configuration is upgraded')


@ -0,0 +1,57 @@
from borgmatic.commands import arguments
UPGRADE_MESSAGE = '''
Your bash completions script is from a different version of borgmatic than is
currently installed. Please upgrade your script so your completions match the
command-line flags in your installed borgmatic! Try this to upgrade:
sudo sh -c "borgmatic --bash-completion > $BASH_SOURCE"
source $BASH_SOURCE
'''
def parser_flags(parser):
'''
Given an argparse.ArgumentParser instance, return its argument flags in a space-separated
string.
'''
return ' '.join(option for action in parser._actions for option in action.option_strings)
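# Illustrative sketch (not part of the original source), assuming a toy parser:
#
#     parser = argparse.ArgumentParser(add_help=False)
#     parser.add_argument('-v', '--verbosity')
#     parser_flags(parser)  # -> '-v --verbosity'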
def bash_completion():
'''
Return a bash completion script for the borgmatic command. Produce this by introspecting
borgmatic's command-line argument parsers.
'''
top_level_parser, subparsers = arguments.make_parsers()
global_flags = parser_flags(top_level_parser)
actions = ' '.join(subparsers.choices.keys())
# Avert your eyes.
return '\n'.join(
(
'check_version() {',
' local this_script="$(cat "$BASH_SOURCE" 2> /dev/null)"',
' local installed_script="$(borgmatic --bash-completion 2> /dev/null)"',
' if [ "$this_script" != "$installed_script" ] && [ "$installed_script" != "" ];'
f' then cat << EOF\n{UPGRADE_MESSAGE}\nEOF',
' fi',
'}',
'complete_borgmatic() {',
)
+ tuple(
''' if [[ " ${COMP_WORDS[*]} " =~ " %s " ]]; then
COMPREPLY=($(compgen -W "%s %s %s" -- "${COMP_WORDS[COMP_CWORD]}"))
return 0
fi'''
% (action, parser_flags(subparser), actions, global_flags)
for action, subparser in subparsers.choices.items()
)
+ (
' COMPREPLY=($(compgen -W "%s %s" -- "${COMP_WORDS[COMP_CWORD]}"))' # noqa: FS003
% (actions, global_flags),
' (check_version &)',
'}',
'\ncomplete -o bashdefault -o default -F complete_borgmatic borgmatic',
)
)


@ -28,9 +28,7 @@ def parse_arguments(*arguments):
'--source-config',
dest='source_config_filename',
default=DEFAULT_SOURCE_CONFIG_FILENAME,
help=f'Source INI-style configuration filename. Default: {DEFAULT_SOURCE_CONFIG_FILENAME}',
)
parser.add_argument(
'-e',
@ -46,9 +44,7 @@ def parse_arguments(*arguments):
'--destination-config',
dest='destination_config_filename',
default=DEFAULT_DESTINATION_CONFIG_FILENAME,
help=f'Destination YAML configuration filename. Default: {DEFAULT_DESTINATION_CONFIG_FILENAME}',
)
return parser.parse_args(arguments)
@ -59,19 +55,15 @@ TEXT_WRAP_CHARACTERS = 80
def display_result(args): # pragma: no cover
result_lines = textwrap.wrap(
f'Your borgmatic configuration has been upgraded. Please review the result in {args.destination_config_filename}.',
TEXT_WRAP_CHARACTERS,
)
excludes_phrase = (
f' and {args.source_excludes_filename}' if args.source_excludes_filename else ''
)
delete_lines = textwrap.wrap(
f'Once you are satisfied, you can safely delete {args.source_config_filename}{excludes_phrase}.',
TEXT_WRAP_CHARACTERS,
)


@ -23,9 +23,13 @@ def parse_arguments(*arguments):
'--destination',
dest='destination_filename',
default=DEFAULT_DESTINATION_CONFIG_FILENAME,
help=f'Destination YAML configuration file, default: {DEFAULT_DESTINATION_CONFIG_FILENAME}',
)
parser.add_argument(
'--overwrite',
default=False,
action='store_true',
help='Whether to overwrite any existing destination file, defaults to false',
)
return parser.parse_args(arguments)
@ -36,23 +40,22 @@ def main(): # pragma: no cover
args = parse_arguments(*sys.argv[1:])
generate.generate_sample_configuration(
args.source_filename,
args.destination_filename,
validate.schema_filename(),
overwrite=args.overwrite,
)
print(f'Generated a sample configuration file at {args.destination_filename}.')
print()
if args.source_filename:
print(f'Merged in the contents of configuration file at {args.source_filename}.')
print('To review the changes made, run:')
print()
print(f' diff --unified {args.source_filename} {args.destination_filename}')
print()
print('This includes all available configuration options with example values. The few')
print('required options are indicated. Please edit the file to suit your needs.')
print()
print('If you ever need help: https://torsion.org/borgmatic/#issues')
except (ValueError, OSError) as error:


@ -2,6 +2,7 @@ import logging
import sys
from argparse import ArgumentParser
import borgmatic.config.generate
from borgmatic.config import collect, validate
logger = logging.getLogger(__name__)
@ -21,20 +22,24 @@ def parse_arguments(*arguments):
nargs='+',
dest='config_paths',
default=config_paths,
help=f'Configuration filenames or directories, defaults to: {config_paths}',
)
parser.add_argument(
'-s',
'--show',
action='store_true',
help='Show the validated configuration after all include merging has occurred',
)
return parser.parse_args(arguments)
def main(): # pragma: no cover
arguments = parse_arguments(*sys.argv[1:])
logging.basicConfig(level=logging.INFO, format='%(message)s')
config_filenames = tuple(collect.collect_config_filenames(arguments.config_paths))
if len(config_filenames) == 0:
logger.critical('No files to validate found')
sys.exit(1)
@ -42,15 +47,22 @@ def main(): # pragma: no cover
found_issues = False
for config_filename in config_filenames:
try:
config, parse_logs = validate.parse_configuration(
config_filename, validate.schema_filename()
)
except (ValueError, OSError, validate.Validation_error) as error:
logging.critical(f'{config_filename}: Error parsing configuration file')
logging.critical(error)
found_issues = True
else:
for log in parse_logs:
logger.handle(log)
if arguments.show:
print('---')
print(borgmatic.config.generate.render_configuration(config))
if found_issues:
sys.exit(1)
else:
logger.info(f"All given configuration files are valid: {', '.join(config_filenames)}")


@ -16,8 +16,8 @@ def get_default_config_paths(expand_home=True):
return [
'/etc/borgmatic/config.yaml',
'/etc/borgmatic.d',
os.path.join(user_config_directory, 'borgmatic/config.yaml'),
os.path.join(user_config_directory, 'borgmatic.d'),
]


@ -17,7 +17,7 @@ def _convert_section(source_section_config, section_schema):
(
option_name,
int(option_value)
if section_schema['properties'].get(option_name, {}).get('type') == 'integer'
else option_value,
)
for option_name, option_value in source_section_config.items()
@ -38,12 +38,12 @@ def convert_legacy_parsed_config(source_config, source_excludes, schema):
'''
destination_config = yaml.comments.CommentedMap(
[
(section_name, _convert_section(section_config, schema['properties'][section_name]))
for section_name, section_config in source_config._asdict().items()
]
)
# Split space-separated values into actual lists, make "repository" into a list, and merge in
# excludes.
location = destination_config['location']
location['source_directories'] = source_config.location['source_directories'].split(' ')
@ -54,11 +54,11 @@ def convert_legacy_parsed_config(source_config, source_excludes, schema):
destination_config['consistency']['checks'] = source_config.consistency['checks'].split(' ')
# Add comments to each section, and then add comments to the fields in each section.
generate.add_comments_to_configuration_object(destination_config, schema)
for section_name, section_config in destination_config.items():
generate.add_comments_to_configuration_object(
section_config, schema['properties'][section_name], indent=generate.INDENT
)
return destination_config


@ -0,0 +1,45 @@
import os
import re
_VARIABLE_PATTERN = re.compile(
r'(?P<escape>\\)?(?P<variable>\$\{(?P<name>[A-Za-z0-9_]+)((:?-)(?P<default>[^}]+))?\})'
)
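# Matches an optional backslash escape followed by ${NAME}, ${NAME-default}, or
# ${NAME:-default}, capturing the variable name and any default value.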
def _resolve_string(matcher):
'''
Get the value from the environment given a matcher containing a name and an optional default value.
If the variable is not defined in the environment and no default value is provided, raise ValueError.
'''
if matcher.group('escape') is not None:
# in case of escaped envvar, unescape it
return matcher.group('variable')
# resolve the env var
name, default = matcher.group('name'), matcher.group('default')
out = os.getenv(name, default=default)
if out is None:
raise ValueError(f'Cannot find variable {name} in environment')
return out
def resolve_env_variables(item):
'''
Resolve variables like ${FOO} in the given configuration with values from the process environment.
Supported formats:
- ${FOO} will return the FOO env variable
- ${FOO-bar} or ${FOO:-bar} will return the FOO env variable if it exists, else "bar"
If any variable is missing from the environment and no default value is provided, a ValueError is raised.
'''
if isinstance(item, str):
return _VARIABLE_PATTERN.sub(_resolve_string, item)
if isinstance(item, list):
for i, subitem in enumerate(item):
item[i] = resolve_env_variables(subitem)
if isinstance(item, dict):
for key, value in item.items():
item[key] = resolve_env_variables(value)
return item
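# Illustrative usage sketch (not part of the original source): with FOO=xyzzy exported,
#
#     resolve_env_variables({'repo': '${FOO}', 'passphrase': '${BAR:-hunter2}'})
#     # -> {'repo': 'xyzzy', 'passphrase': 'hunter2'}
#
# An escaped '\${FOO}' is left as the literal '${FOO}', and a missing variable with no
# default raises ValueError.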


@ -5,7 +5,7 @@ import re
from ruamel import yaml
from borgmatic.config import load
from borgmatic.config import load, normalize
INDENT = 4
SEQUENCE_INDENT = 2
@ -24,33 +24,31 @@ def _insert_newline_before_comment(config, field_name):
def _schema_to_sample_configuration(schema, level=0, parent_is_sequence=False):
'''
Given a loaded configuration schema, generate and return sample config for it. Include comments
for each section based on the schema "desc" description.
for each section based on the schema "description".
'''
schema_type = schema.get('type')
example = schema.get('example')
if example is not None:
return example
if schema_type == 'array':
config = yaml.comments.CommentedSeq(
[_schema_to_sample_configuration(schema['items'], level, parent_is_sequence=True)]
)
add_comments_to_configuration_sequence(config, schema, indent=(level * INDENT))
elif schema_type == 'object':
config = yaml.comments.CommentedMap(
[
(field_name, _schema_to_sample_configuration(sub_schema, level + 1))
for field_name, sub_schema in schema['properties'].items()
]
)
indent = (level * INDENT) + (SEQUENCE_INDENT if parent_is_sequence else 0)
add_comments_to_configuration_object(
config, schema, indent=indent, skip_first=parent_is_sequence
)
else:
raise ValueError(f'Schema at level {level} is unsupported: {schema}')
return config
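As a rough sketch of what this generator produces, a tiny schema fragment like the following (field name and description invented for the example) would render into a commented sample:

```python
schema = {
    'type': 'object',
    'properties': {
        'keep_daily': {
            'type': 'integer',
            'description': 'Number of daily archives to keep.',
            'example': 7,
        },
    },
}

# _schema_to_sample_configuration(schema) returns a CommentedMap that renders as:
#
#     # Number of daily archives to keep.
#     keep_daily: 7
```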
@@ -86,7 +84,7 @@ def _comment_out_optional_configuration(rendered_config):
for line in rendered_config.split('\n'):
# Upon encountering an optional configuration option, comment out lines until the next blank
# line.
if line.strip().startswith('# {}'.format(COMMENTED_OUT_SENTINEL)):
if line.strip().startswith(f'# {COMMENTED_OUT_SENTINEL}'):
optional = True
continue
@@ -111,13 +109,16 @@ def render_configuration(config):
return rendered.getvalue()
def write_configuration(config_filename, rendered_config, mode=0o600):
def write_configuration(config_filename, rendered_config, mode=0o600, overwrite=False):
'''
Given a target config filename and rendered config YAML, write it out to file. Create any
containing directories as needed.
containing directories as needed. But if the file already exists and overwrite is False,
abort before writing anything.
'''
if os.path.exists(config_filename):
raise FileExistsError('{} already exists. Aborting.'.format(config_filename))
if not overwrite and os.path.exists(config_filename):
raise FileExistsError(
f'{config_filename} already exists. Aborting. Use --overwrite to replace the file.'
)
try:
os.makedirs(os.path.dirname(config_filename), mode=0o700)
@@ -132,8 +133,8 @@ def write_configuration(config_filename, rendered_config, mode=0o600):
def add_comments_to_configuration_sequence(config, schema, indent=0):
'''
If the given config sequence's items are maps, then mine the schema for the description of the
map's first item, and slap that atop the sequence. Indent the comment the given number of
If the given config sequence's items are objects, then mine the schema for the description of the
object's first item, and slap that atop the sequence. Indent the comment the given number of
characters.
Doing this for sequences of maps results in nice comments that look like:
@@ -142,16 +143,16 @@ def add_comments_to_configuration_sequence(config, schema, indent=0):
things:
# First key description. Added by this function.
- key: foo
# Second key description. Added by add_comments_to_configuration_map().
# Second key description. Added by add_comments_to_configuration_object().
other: bar
```
'''
if 'map' not in schema['seq'][0]:
if schema['items'].get('type') != 'object':
return
for field_name in config[0].keys():
field_schema = schema['seq'][0]['map'].get(field_name, {})
description = field_schema.get('desc')
field_schema = schema['items']['properties'].get(field_name, {})
description = field_schema.get('description')
# No description to use? Skip it.
if not field_schema or not description:
@@ -160,7 +161,7 @@ def add_comments_to_configuration_sequence(config, schema, indent=0):
config[0].yaml_set_start_comment(description, indent=indent)
# We only want the first key's description here, as the rest of the keys get commented by
# add_comments_to_configuration_map().
# add_comments_to_configuration_object().
return
@@ -169,7 +170,7 @@ REQUIRED_KEYS = {'source_directories', 'repositories', 'keep_daily'}
COMMENTED_OUT_SENTINEL = 'COMMENT_OUT'
def add_comments_to_configuration_map(config, schema, indent=0, skip_first=False):
def add_comments_to_configuration_object(config, schema, indent=0, skip_first=False):
'''
Using descriptions from a schema as a source, add those descriptions as comments to the given
config mapping, before each field. Indent the comment the given number of characters.
@@ -178,8 +179,8 @@ def add_comments_to_configuration_map(config, schema, indent=0, skip_first=False
if skip_first and index == 0:
continue
field_schema = schema['map'].get(field_name, {})
description = field_schema.get('desc', '').strip()
field_schema = schema['properties'].get(field_name, {})
description = field_schema.get('description', '').strip()
# If this is an optional key, add an indicator to the comment flagging it to be commented
# out from the sample configuration. This sentinel is consumed by downstream processing that
@@ -215,7 +216,7 @@ def remove_commented_out_sentinel(config, field_name):
except KeyError:
return
if last_comment_value == '# {}\n'.format(COMMENTED_OUT_SENTINEL):
if last_comment_value == f'# {COMMENTED_OUT_SENTINEL}\n':
config.ca.items[field_name][RUAMEL_YAML_COMMENTS_INDEX].pop()
@@ -265,18 +266,22 @@ def merge_source_configuration_into_destination(destination_config, source_confi
return destination_config
def generate_sample_configuration(source_filename, destination_filename, schema_filename):
def generate_sample_configuration(
source_filename, destination_filename, schema_filename, overwrite=False
):
'''
Given an optional source configuration filename, and a required destination configuration
filename, and the path to a schema filename in pykwalify YAML schema format, write out a
sample configuration file based on that schema. If a source filename is provided, merge the
parsed contents of that configuration into the generated configuration.
filename, the path to a schema filename in a YAML rendition of the JSON Schema format, and
whether to overwrite a destination file, write out a sample configuration file based on that
schema. If a source filename is provided, merge the parsed contents of that configuration into
the generated configuration.
'''
schema = yaml.round_trip_load(open(schema_filename))
source_config = None
if source_filename:
source_config = load.load_configuration(source_filename)
normalize.normalize(source_filename, source_config)
destination_config = merge_source_configuration_into_destination(
_schema_to_sample_configuration(schema), source_config
@@ -285,4 +290,5 @@ def generate_sample_configuration(source_filename, destination_filename, schema_
write_configuration(
destination_filename,
_comment_out_optional_configuration(render_configuration(destination_config)),
overwrite=overwrite,
)

View File

@@ -70,13 +70,11 @@ def validate_configuration_format(parser, config_format):
section_format.name for section_format in config_format
)
if unknown_section_names:
raise ValueError(
'Unknown config sections found: {}'.format(', '.join(unknown_section_names))
)
raise ValueError(f"Unknown config sections found: {', '.join(unknown_section_names)}")
missing_section_names = set(required_section_names) - section_names
if missing_section_names:
raise ValueError('Missing config sections: {}'.format(', '.join(missing_section_names)))
raise ValueError(f"Missing config sections: {', '.join(missing_section_names)}")
for section_format in config_format:
if section_format.name not in section_names:
@@ -91,9 +89,7 @@ def validate_configuration_format(parser, config_format):
if unexpected_option_names:
raise ValueError(
'Unexpected options found in config section {}: {}'.format(
section_format.name, ', '.join(sorted(unexpected_option_names))
)
f"Unexpected options found in config section {section_format.name}: {', '.join(sorted(unexpected_option_names))}",
)
missing_option_names = tuple(
@@ -105,9 +101,7 @@ def validate_configuration_format(parser, config_format):
if missing_option_names:
raise ValueError(
'Required options missing from config section {}: {}'.format(
section_format.name, ', '.join(missing_option_names)
)
f"Required options missing from config section {section_format.name}: {', '.join(missing_option_names)}",
)
@@ -137,7 +131,7 @@ def parse_configuration(config_filename, config_format):
'''
parser = RawConfigParser()
if not parser.read(config_filename):
raise ValueError('Configuration file cannot be opened: {}'.format(config_filename))
raise ValueError(f'Configuration file cannot be opened: {config_filename}')
validate_configuration_format(parser, config_format)

View File

@@ -1,3 +1,5 @@
import functools
import json
import logging
import os
@@ -6,26 +8,52 @@ import ruamel.yaml
logger = logging.getLogger(__name__)
def load_configuration(filename):
def include_configuration(loader, filename_node, include_directory):
'''
Load the given configuration file and return its contents as a data structure of nested dicts
and lists.
Given a ruamel.yaml.loader.Loader, a ruamel.yaml.serializer.ScalarNode containing the included
filename, and an include directory path to search for matching files, load the given YAML
filename (ignoring the given loader so we can use our own) and return its contents as a data
structure of nested dicts and lists. If the filename is relative, probe for it within 1. the
current working directory and 2. the given include directory.
Raise ruamel.yaml.error.YAMLError if something goes wrong parsing the YAML, or RecursionError
if there are too many recursive includes.
Raise FileNotFoundError if an included file was not found.
'''
yaml = ruamel.yaml.YAML(typ='safe')
yaml.Constructor = Include_constructor
include_directories = [os.getcwd(), os.path.abspath(include_directory)]
include_filename = os.path.expanduser(filename_node.value)
return yaml.load(open(filename))
if not os.path.isabs(include_filename):
candidate_filenames = [
os.path.join(directory, include_filename) for directory in include_directories
]
for candidate_filename in candidate_filenames:
if os.path.exists(candidate_filename):
include_filename = candidate_filename
break
else:
raise FileNotFoundError(
f'Could not find include {filename_node.value} at {" or ".join(candidate_filenames)}'
)
return load_configuration(include_filename)
def include_configuration(loader, filename_node):
def retain_node_error(loader, node):
'''
Load the given YAML filename (ignoring the given loader so we can use our own), and return its
contents as a data structure of nested dicts and lists.
Given a ruamel.yaml.loader.Loader and a YAML node, raise an error.
Raise ValueError if a mapping or sequence node is given, as that indicates that "!retain" was
used in a configuration file without a merge. In configuration files with a merge, mapping and
sequence nodes with "!retain" tags are handled by deep_merge_nodes() below.
Also raise ValueError if a scalar node is given, as "!retain" is not supported on scalar nodes.
'''
return load_configuration(os.path.expanduser(filename_node.value))
if isinstance(node, (ruamel.yaml.nodes.MappingNode, ruamel.yaml.nodes.SequenceNode)):
raise ValueError(
'The !retain tag may only be used within a configuration file containing a merged !include tag.'
)
raise ValueError('The !retain tag may only be used on a YAML mapping or sequence.')
class Include_constructor(ruamel.yaml.SafeConstructor):
@@ -34,20 +62,29 @@ class Include_constructor(ruamel.yaml.SafeConstructor):
separate YAML configuration files. Example syntax: `retention: !include common.yaml`
'''
def __init__(self, preserve_quotes=None, loader=None):
def __init__(self, preserve_quotes=None, loader=None, include_directory=None):
super(Include_constructor, self).__init__(preserve_quotes, loader)
self.add_constructor('!include', include_configuration)
self.add_constructor(
'!include',
functools.partial(include_configuration, include_directory=include_directory),
)
self.add_constructor('!retain', retain_node_error)
def flatten_mapping(self, node):
'''
Support the special case of shallow merging included configuration into an existing mapping
Support the special case of deep merging included configuration into an existing mapping
using the YAML '<<' merge key. Example syntax:
```
retention:
keep_daily: 1
<<: !include common.yaml
<<: !include common.yaml
```
These includes are deep merged into the current configuration file. For instance, in this
example, any "retention" options in common.yaml will get merged into the "retention" section
in the example configuration file.
'''
representer = ruamel.yaml.representer.SafeRepresenter()
@@ -57,3 +94,168 @@ class Include_constructor(ruamel.yaml.SafeConstructor):
node.value[index] = (key_node, included_value)
super(Include_constructor, self).flatten_mapping(node)
node.value = deep_merge_nodes(node.value)
def load_configuration(filename):
'''
Load the given configuration file and return its contents as a data structure of nested dicts
and lists. Also, replace any "{constant}" strings with the value of the "constant" key in the
"constants" section of the configuration file.
Raise ruamel.yaml.error.YAMLError if something goes wrong parsing the YAML, or RecursionError
if there are too many recursive includes.
'''
# Use an embedded derived class for the include constructor so as to capture the filename
# value. (functools.partial doesn't work for this use case because yaml.Constructor has to be
# an actual class.)
class Include_constructor_with_include_directory(Include_constructor):
def __init__(self, preserve_quotes=None, loader=None):
super(Include_constructor_with_include_directory, self).__init__(
preserve_quotes, loader, include_directory=os.path.dirname(filename)
)
yaml = ruamel.yaml.YAML(typ='safe')
yaml.Constructor = Include_constructor_with_include_directory
with open(filename) as file:
file_contents = file.read()
config = yaml.load(file_contents)
if config and 'constants' in config:
for key, value in config['constants'].items():
value = json.dumps(value)
file_contents = file_contents.replace(f'{{{key}}}', value.strip('"'))
config = yaml.load(file_contents)
del config['constants']
return config
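A sketch of the constants behavior above, assuming a configuration file along these lines (names invented):

```python
# config.yaml:
#
#     constants:
#       app: myapp
#     location:
#       source_directories:
#         - /var/lib/{app}
#
# load_configuration('config.yaml') string-replaces "{app}" in the raw file
# contents, re-parses the YAML, and drops the "constants" section, yielding
# roughly:
#
#     {'location': {'source_directories': ['/var/lib/myapp']}}
```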
DELETED_NODE = object()
def deep_merge_nodes(nodes):
'''
Given a nested borgmatic configuration data structure as a list of tuples in the form of:
(
ruamel.yaml.nodes.ScalarNode as a key,
ruamel.yaml.nodes.MappingNode or other Node as a value,
),
... deep merge any node values corresponding to duplicate keys and return the result. If
there are colliding keys with non-MappingNode values (e.g., integers or strings), the last
of the values wins.
For instance, given node values of:
[
(
ScalarNode(tag='tag:yaml.org,2002:str', value='retention'),
MappingNode(tag='tag:yaml.org,2002:map', value=[
(
ScalarNode(tag='tag:yaml.org,2002:str', value='keep_hourly'),
ScalarNode(tag='tag:yaml.org,2002:int', value='24')
),
(
ScalarNode(tag='tag:yaml.org,2002:str', value='keep_daily'),
ScalarNode(tag='tag:yaml.org,2002:int', value='7')
),
]),
),
(
ScalarNode(tag='tag:yaml.org,2002:str', value='retention'),
MappingNode(tag='tag:yaml.org,2002:map', value=[
(
ScalarNode(tag='tag:yaml.org,2002:str', value='keep_daily'),
ScalarNode(tag='tag:yaml.org,2002:int', value='5')
),
]),
),
]
... the returned result would be:
[
(
ScalarNode(tag='tag:yaml.org,2002:str', value='retention'),
MappingNode(tag='tag:yaml.org,2002:map', value=[
(
ScalarNode(tag='tag:yaml.org,2002:str', value='keep_hourly'),
ScalarNode(tag='tag:yaml.org,2002:int', value='24')
),
(
ScalarNode(tag='tag:yaml.org,2002:str', value='keep_daily'),
ScalarNode(tag='tag:yaml.org,2002:int', value='5')
),
]),
),
]
If a mapping or sequence node has a YAML "!retain" tag, then that node is not merged.
The purpose of deep merging like this is to support, for instance, merging one borgmatic
configuration file into another for reuse, such that a configuration section ("retention",
etc.) does not completely replace the corresponding section in a merged file.
'''
# Map from original node key/value to the replacement merged node. DELETED_NODE as a replacement
# node indicates deletion.
replaced_nodes = {}
# To find nodes that require merging, compare each node with each other node.
for a_key, a_value in nodes:
for b_key, b_value in nodes:
# If we've already considered one of the nodes for merging, skip it.
if (a_key, a_value) in replaced_nodes or (b_key, b_value) in replaced_nodes:
continue
# If the keys match and the values are different, we need to merge these two A and B nodes.
if a_key.tag == b_key.tag and a_key.value == b_key.value and a_value != b_value:
# Since we're merging into the B node, consider the A node a duplicate and remove it.
replaced_nodes[(a_key, a_value)] = DELETED_NODE
# If we're dealing with MappingNodes, recurse and merge its values as well.
if isinstance(b_value, ruamel.yaml.nodes.MappingNode):
# A "!retain" tag says to skip deep merging for this node. Replace the tag so
# downstream schema validation doesn't break on our application-specific tag.
if b_value.tag == '!retain':
b_value.tag = 'tag:yaml.org,2002:map'
else:
replaced_nodes[(b_key, b_value)] = (
b_key,
ruamel.yaml.nodes.MappingNode(
tag=b_value.tag,
value=deep_merge_nodes(a_value.value + b_value.value),
start_mark=b_value.start_mark,
end_mark=b_value.end_mark,
flow_style=b_value.flow_style,
comment=b_value.comment,
anchor=b_value.anchor,
),
)
# If we're dealing with SequenceNodes, merge by appending one sequence to the other.
elif isinstance(b_value, ruamel.yaml.nodes.SequenceNode):
# A "!retain" tag says to skip deep merging for this node. Replace the tag so
# downstream schema validation doesn't break on our application-specific tag.
if b_value.tag == '!retain':
b_value.tag = 'tag:yaml.org,2002:seq'
else:
replaced_nodes[(b_key, b_value)] = (
b_key,
ruamel.yaml.nodes.SequenceNode(
tag=b_value.tag,
value=a_value.value + b_value.value,
start_mark=b_value.start_mark,
end_mark=b_value.end_mark,
flow_style=b_value.flow_style,
comment=b_value.comment,
anchor=b_value.anchor,
),
)
return [
replaced_nodes.get(node, node) for node in nodes if replaced_nodes.get(node) != DELETED_NODE
]
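In configuration file terms, the "!retain" escape hatch handled above looks roughly like this (filenames invented):

```python
# config.yaml:
#
#     location:
#         repositories: !retain
#             - local.borg
#         <<: !include common.yaml
#
# Without "!retain", deep merging would append the included file's
# repositories to the local list. With it, the merger keeps the tagged list
# as-is, and the tag is rewritten to a plain YAML sequence tag so that
# downstream schema validation doesn't choke on it.
```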

View File

@@ -1,10 +1,105 @@
def normalize(config):
'''
Given a configuration dict, apply particular hard-coded rules to normalize its contents to
adhere to the configuration schema.
'''
exclude_if_present = config.get('location', {}).get('exclude_if_present')
import logging
import os
# "Upgrade" exclude_if_present from a string to a list.
def normalize(config_filename, config):
'''
Given a configuration filename and a configuration dict of its loaded contents, apply particular
hard-coded rules to normalize the configuration to adhere to the current schema. Return any log
message warnings produced based on the normalization performed.
'''
logs = []
location = config.get('location') or {}
storage = config.get('storage') or {}
consistency = config.get('consistency') or {}
hooks = config.get('hooks') or {}
# Upgrade exclude_if_present from a string to a list.
exclude_if_present = location.get('exclude_if_present')
if isinstance(exclude_if_present, str):
config['location']['exclude_if_present'] = [exclude_if_present]
# Upgrade various monitoring hooks from a string to a dict.
healthchecks = hooks.get('healthchecks')
if isinstance(healthchecks, str):
config['hooks']['healthchecks'] = {'ping_url': healthchecks}
cronitor = hooks.get('cronitor')
if isinstance(cronitor, str):
config['hooks']['cronitor'] = {'ping_url': cronitor}
pagerduty = hooks.get('pagerduty')
if isinstance(pagerduty, str):
config['hooks']['pagerduty'] = {'integration_key': pagerduty}
cronhub = hooks.get('cronhub')
if isinstance(cronhub, str):
config['hooks']['cronhub'] = {'ping_url': cronhub}
# Upgrade consistency checks from a list of strings to a list of dicts.
checks = consistency.get('checks')
if isinstance(checks, list) and len(checks) and isinstance(checks[0], str):
config['consistency']['checks'] = [{'name': check_type} for check_type in checks]
# Rename various configuration options.
numeric_owner = location.pop('numeric_owner', None)
if numeric_owner is not None:
config['location']['numeric_ids'] = numeric_owner
bsd_flags = location.pop('bsd_flags', None)
if bsd_flags is not None:
config['location']['flags'] = bsd_flags
remote_rate_limit = storage.pop('remote_rate_limit', None)
if remote_rate_limit is not None:
config['storage']['upload_rate_limit'] = remote_rate_limit
# Upgrade remote repositories to ssh:// syntax, required in Borg 2.
repositories = location.get('repositories')
if repositories:
if isinstance(repositories[0], str):
config['location']['repositories'] = [
{'path': repository} for repository in repositories
]
repositories = config['location']['repositories']
config['location']['repositories'] = []
for repository_dict in repositories:
repository_path = repository_dict['path']
if '~' in repository_path:
logs.append(
logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: Repository paths containing "~" are deprecated in borgmatic and no longer work in Borg 2.x+.',
)
)
)
if ':' in repository_path:
if repository_path.startswith('file://'):
updated_repository_path = os.path.abspath(
repository_path.partition('file://')[-1]
)
config['location']['repositories'].append(
dict(repository_dict, path=updated_repository_path,)
)
elif repository_path.startswith('ssh://'):
config['location']['repositories'].append(repository_dict)
else:
rewritten_repository_path = f"ssh://{repository_path.replace(':~', '/~').replace(':/', '/').replace(':', '/./')}"
logs.append(
logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: Remote repository paths without ssh:// syntax are deprecated. Interpreting "{repository_path}" as "{rewritten_repository_path}"',
)
)
)
config['location']['repositories'].append(
dict(repository_dict, path=rewritten_repository_path,)
)
else:
config['location']['repositories'].append(repository_dict)
return logs
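The replace() chain above converts legacy SCP-style remote paths into the ssh:// syntax Borg 2 requires. A few illustrative inputs and outputs (hostnames invented):

```python
# 'user@host:backup.borg' -> 'ssh://user@host/./backup.borg'  (relative path)
# 'user@host:/var/backup' -> 'ssh://user@host/var/backup'     (absolute path)
# 'user@host:~/backup'    -> 'ssh://user@host/~/backup'       (home-relative path)
```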

View File

@@ -26,6 +26,8 @@ def convert_value_type(value):
'''
Given a string value, determine its logical type (string, boolean, integer, etc.), and return it
converted to that type.
Raise ruamel.yaml.error.YAMLError if there's a parse issue with the YAML.
'''
return ruamel.yaml.YAML(typ='safe').load(io.StringIO(value))
@@ -50,20 +52,26 @@ def parse_overrides(raw_overrides):
if not raw_overrides:
return ()
try:
return tuple(
(tuple(raw_keys.split('.')), convert_value_type(value))
for raw_override in raw_overrides
for raw_keys, value in (raw_override.split('=', 1),)
)
except ValueError:
raise ValueError('Invalid override. Make sure you use the form: SECTION.OPTION=VALUE')
parsed_overrides = []
for raw_override in raw_overrides:
try:
raw_keys, value = raw_override.split('=', 1)
parsed_overrides.append((tuple(raw_keys.split('.')), convert_value_type(value),))
except ValueError:
raise ValueError(
f"Invalid override '{raw_override}'. Make sure you use the form: SECTION.OPTION=VALUE"
)
except ruamel.yaml.error.YAMLError as error:
raise ValueError(f"Invalid override '{raw_override}': {error.problem}")
return tuple(parsed_overrides)
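A quick sketch of the parsing above, as driven by borgmatic's --override flag:

```python
parse_overrides(('retention.keep_daily=5', 'location.one_file_system=true'))
# -> (
#        (('retention', 'keep_daily'), 5),
#        (('location', 'one_file_system'), True),
#    )
#
# Values are parsed as YAML, so numbers, booleans, and even lists convert to
# their logical types rather than remaining strings.
```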
def apply_overrides(config, raw_overrides):
'''
Given a sequence of configuration file override strings in the form of "section.option=value"
and a configuration dict, parse each override and set it the configuration dict.
Given a configuration dict and a sequence of configuration file override strings in the form of
"section.option=value", parse each override and set it the configuration dict.
'''
overrides = parse_overrides(raw_overrides)

File diff suppressed because it is too large

View File

@@ -1,12 +1,10 @@
import logging
import os
import jsonschema
import pkg_resources
import pykwalify.core
import pykwalify.errors
import ruamel.yaml
from borgmatic.config import load, normalize, override
from borgmatic.config import environment, load, normalize, override
def schema_filename():
@@ -17,23 +15,49 @@ def schema_filename():
return pkg_resources.resource_filename('borgmatic', 'config/schema.yaml')
def format_json_error_path_element(path_element):
'''
Given a path element into a JSON data structure, format it for display as a string.
'''
if isinstance(path_element, int):
return str(f'[{path_element}]')
return str(f'.{path_element}')
def format_json_error(error):
'''
Given an instance of jsonschema.exceptions.ValidationError, format it for display as a string.
'''
if not error.path:
return f'At the top level: {error.message}'
formatted_path = ''.join(format_json_error_path_element(element) for element in error.path)
return f"At '{formatted_path.lstrip('.')}': {error.message}"
class Validation_error(ValueError):
'''
A collection of error message strings generated when attempting to validate a particular
configurartion file.
A collection of error messages generated when attempting to validate a particular
configuration file.
'''
def __init__(self, config_filename, error_messages):
def __init__(self, config_filename, errors):
'''
Given a configuration filename path and a sequence of string error messages, create a
Validation_error.
'''
self.config_filename = config_filename
self.error_messages = error_messages
self.errors = errors
def __str__(self):
'''
Render a validation error as a user-facing string.
'''
return 'An error occurred while parsing a configuration file at {}:\n'.format(
self.config_filename
) + '\n'.join(self.error_messages)
return (
f'An error occurred while parsing a configuration file at {self.config_filename}:\n'
+ '\n'.join(error for error in self.errors)
)
def apply_logical_validation(config_filename, parsed_configuration):
@@ -42,61 +66,37 @@ def apply_logical_validation(config_filename, parsed_configuration):
below), run through any additional logical validation checks. If there are any such validation
problems, raise a Validation_error.
'''
archive_name_format = parsed_configuration.get('storage', {}).get('archive_name_format')
prefix = parsed_configuration.get('retention', {}).get('prefix')
if archive_name_format and not prefix:
raise Validation_error(
config_filename,
('If you provide an archive_name_format, you must also specify a retention prefix.',),
)
location_repositories = parsed_configuration.get('location', {}).get('repositories')
check_repositories = parsed_configuration.get('consistency', {}).get('check_repositories', [])
for repository in check_repositories:
if repository not in location_repositories:
if not any(
repositories_match(repository, config_repository)
for config_repository in location_repositories
):
raise Validation_error(
config_filename,
(
'Unknown repository in the consistency section\'s check_repositories: {}'.format(
repository
),
f'Unknown repository in the "consistency" section\'s "check_repositories": {repository}',
),
)
def remove_examples(schema):
def parse_configuration(config_filename, schema_filename, overrides=None, resolve_env=True):
'''
pykwalify gets angry if the example field is not a string. So rather than bend to its will,
remove all examples from the given schema before passing the schema to pykwalify.
'''
if 'map' in schema:
for item_name, item_schema in schema['map'].items():
item_schema.pop('example', None)
remove_examples(item_schema)
elif 'seq' in schema:
for item_schema in schema['seq']:
item_schema.pop('example', None)
remove_examples(item_schema)
return schema
def parse_configuration(config_filename, schema_filename, overrides=None):
'''
Given the path to a config filename in YAML format, the path to a schema filename in pykwalify
YAML schema format, a sequence of configuration file override strings in the form of
"section.option=value", return the parsed configuration as a data structure of nested dicts and
lists corresponding to the schema. Example return value:
Given the path to a config filename in YAML format, the path to a schema filename in a YAML
rendition of JSON Schema format, a sequence of configuration file override strings in the form
of "section.option=value", return the parsed configuration as a data structure of nested dicts
and lists corresponding to the schema. Example return value:
{'location': {'source_directories': ['/home', '/etc'], 'repository': 'hostname.borg'},
'retention': {'keep_daily': 7}, 'consistency': {'checks': ['repository', 'archives']}}
Also return a sequence of logging.LogRecord instances containing any warnings about the
configuration.
Raise FileNotFoundError if the file does not exist, PermissionError if the user does not
have permissions to read the file, or Validation_error if the config does not match the schema.
'''
logging.getLogger('pykwalify').setLevel(logging.ERROR)
try:
config = load.load_configuration(config_filename)
schema = load.load_configuration(schema_filename)
@@ -104,61 +104,65 @@ def parse_configuration(config_filename, schema_filename, overrides=None):
raise Validation_error(config_filename, (str(error),))
override.apply_overrides(config, overrides)
normalize.normalize(config)
logs = normalize.normalize(config_filename, config)
if resolve_env:
environment.resolve_env_variables(config)
validator = pykwalify.core.Core(source_data=config, schema_data=remove_examples(schema))
parsed_result = validator.validate(raise_exception=False)
try:
validator = jsonschema.Draft7Validator(schema)
except AttributeError: # pragma: no cover
validator = jsonschema.Draft4Validator(schema)
validation_errors = tuple(validator.iter_errors(config))
if validator.validation_errors:
raise Validation_error(config_filename, validator.validation_errors)
if validation_errors:
raise Validation_error(
config_filename, tuple(format_json_error(error) for error in validation_errors)
)
apply_logical_validation(config_filename, parsed_result)
apply_logical_validation(config_filename, config)
return parsed_result
return config, logs
def normalize_repository_path(repository):
'''
Given a repository path, return the absolute path of it (for local repositories).
'''
# A colon in the repository indicates it's a remote repository. Bail.
if ':' in repository:
# A colon in the repository could mean that it's either a file:// URL or a remote repository.
# If it's a remote repository, we don't want to normalize it. If it's a file:// URL, we do.
if ':' not in repository:
return os.path.abspath(repository)
elif repository.startswith('file://'):
return os.path.abspath(repository.partition('file://')[-1])
else:
return repository
return os.path.abspath(repository)
def repositories_match(first, second):
'''
Given two repository paths (relative and/or absolute), return whether they match.
Given two repository dicts with keys 'path' (relative and/or absolute) and 'label', or two
repository paths, return whether they match.
'''
return normalize_repository_path(first) == normalize_repository_path(second)
if isinstance(first, str):
first = {'path': first, 'label': first}
if isinstance(second, str):
second = {'path': second, 'label': second}
return (first.get('label') == second.get('label')) or (
normalize_repository_path(first.get('path'))
== normalize_repository_path(second.get('path'))
)
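A sketch of the matching logic (paths and labels invented):

```python
repositories_match({'path': 'ssh://host/./repo', 'label': 'offsite'}, 'offsite')
# True: a bare string is treated as both path and label, so the labels match.

repositories_match('repo.borg', {'path': './repo.borg', 'label': 'local'})
# True: the labels differ, but both paths normalize to the same absolute path.
```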
def guard_configuration_contains_repository(repository, configurations):
'''
Given a repository path and a dict mapping from config filename to corresponding parsed config
dict, ensure that the repository is declared exactly once in all of the configurations.
If no repository is given, then error if there are multiple configured repositories.
dict, ensure that the repository is declared exactly once in all of the configurations. If no
repository is given, skip this check.
Raise ValueError if the repository is not found in a configuration, or is declared multiple
times.
'''
if not repository:
count = len(
tuple(
config_repository
for config in configurations.values()
for config_repository in config['location']['repositories']
)
)
if count > 1:
raise ValueError(
'Can\'t determine which repository to use. Use --repository option to disambiguate'
)
return
count = len(
@@ -166,11 +170,34 @@ def guard_configuration_contains_repository(repository, configurations):
config_repository
for config in configurations.values()
for config_repository in config['location']['repositories']
if repositories_match(repository, config_repository)
if repositories_match(config_repository, repository)
)
)
if count == 0:
raise ValueError('Repository {} not found in configuration files'.format(repository))
raise ValueError(f'Repository {repository} not found in configuration files')
if count > 1:
raise ValueError('Repository {} found in multiple configuration files'.format(repository))
raise ValueError(f'Repository {repository} found in multiple configuration files')
def guard_single_repository_selected(repository, configurations):
'''
Given a repository path and a dict mapping from config filename to corresponding parsed config
dict, ensure either a single repository exists across all configuration files or a repository
path was given.
'''
if repository:
return
count = len(
tuple(
config_repository
for config in configurations.values()
for config_repository in config['location']['repositories']
)
)
if count != 1:
raise ValueError(
"Can't determine which repository to use. Use --repository to disambiguate"
)

View File

@@ -11,7 +11,7 @@ ERROR_OUTPUT_MAX_LINE_COUNT = 25
BORG_ERROR_EXIT_CODE = 2
def exit_code_indicates_error(process, exit_code, borg_local_path=None):
def exit_code_indicates_error(command, exit_code, borg_local_path=None):
'''
Return True if the given exit code from running a command corresponds to an error. If a Borg
local path is given and matches the process' command, then treat exit code 1 as a warning
@@ -20,10 +20,8 @@ def exit_code_indicates_error(process, exit_code, borg_local_path=None):
if exit_code is None:
return False
command = process.args.split(' ') if isinstance(process.args, str) else process.args
if borg_local_path and command[0] == borg_local_path:
return bool(exit_code >= BORG_ERROR_EXIT_CODE)
return bool(exit_code < 0 or exit_code >= BORG_ERROR_EXIT_CODE)
return bool(exit_code != 0)
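The net effect of that policy, illustrated (commands invented):

```python
exit_code_indicates_error(['borg', 'prune'], 1, borg_local_path='borg')   # False: Borg warning
exit_code_indicates_error(['borg', 'prune'], -9, borg_local_path='borg')  # True: killed by signal
exit_code_indicates_error(['pg_dump'], 1)  # True: for non-Borg commands, any non-zero exit fails
```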
@@ -45,11 +43,32 @@ def output_buffer_for_process(process, exclude_stdouts):
return process.stderr if process.stdout in exclude_stdouts else process.stdout
def append_last_lines(last_lines, captured_output, line, output_log_level):
'''
Given a rolling list of last lines, a list of captured output, a line to append, and an output
log level, append the line to the last lines and (if necessary) the captured output. Then log
the line at the requested output log level.
'''
last_lines.append(line)
if len(last_lines) > ERROR_OUTPUT_MAX_LINE_COUNT:
last_lines.pop(0)
if output_log_level is None:
captured_output.append(line)
else:
logger.log(output_log_level, line)
def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path):
'''
Given a sequence of subprocess.Popen() instances for multiple processes, log the output for each
process with the requested log level. Additionally, raise a CalledProcessError if a process
exits with an error (or a warning for exit code 1, if that process matches the Borg local path).
exits with an error (or a warning for exit code 1, if that process does not match the Borg local
path).
If output log level is None, then instead of logging, capture output for each process and return
it as a dict from the process to its output.
For simplicity, it's assumed that the output buffer for each process is its stdout. But if any
stdouts are given to exclude, then for any matching processes, log from their stderr instead.
@@ -59,11 +78,14 @@ def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path):
'''
# Map from output buffer to sequence of last lines.
buffer_last_lines = collections.defaultdict(list)
output_buffers = [
output_buffer_for_process(process, exclude_stdouts)
process_for_output_buffer = {
output_buffer_for_process(process, exclude_stdouts): process
for process in processes
if process.stdout or process.stderr
]
}
output_buffers = list(process_for_output_buffer.keys())
captured_outputs = collections.defaultdict(list)
still_running = True
# Log output for each process until they all exit.
while True:
@@ -71,18 +93,37 @@ def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path):
(ready_buffers, _, _) = select.select(output_buffers, [], [])
for ready_buffer in ready_buffers:
line = ready_buffer.readline().rstrip().decode()
if not line:
continue
ready_process = process_for_output_buffer.get(ready_buffer)
# Keep the last few lines of output in case the process errors, and we need the output for
# the exception below.
last_lines = buffer_last_lines[ready_buffer]
last_lines.append(line)
if len(last_lines) > ERROR_OUTPUT_MAX_LINE_COUNT:
last_lines.pop(0)
# The "ready" process has exited, but it might be a pipe destination with other
# processes (pipe sources) waiting to be read from. So as a measure to prevent
# hangs, vent all processes when one exits.
if ready_process and ready_process.poll() is not None:
for other_process in processes:
if (
other_process.poll() is None
and other_process.stdout
and other_process.stdout not in output_buffers
):
# Add the process's output to output_buffers to ensure it'll get read.
output_buffers.append(other_process.stdout)
logger.log(output_log_level, line)
while True:
line = ready_buffer.readline().rstrip().decode()
if not line or not ready_process:
break
# Keep the last few lines of output in case the process errors, and we need the output for
# the exception below.
append_last_lines(
buffer_last_lines[ready_buffer],
captured_outputs[ready_process],
line,
output_log_level,
)
if not still_running:
break
still_running = False
@@ -92,13 +133,24 @@ def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path):
if exit_code is None:
still_running = True
command = process.args.split(' ') if isinstance(process.args, str) else process.args
# If any process errors, then raise accordingly.
if exit_code_indicates_error(process, exit_code, borg_local_path):
if exit_code_indicates_error(command, exit_code, borg_local_path):
# If an error occurs, include its output in the raised exception so that we don't
# inadvertently hide error output.
output_buffer = output_buffer_for_process(process, exclude_stdouts)
last_lines = buffer_last_lines[output_buffer] if output_buffer else []
# Collect any straggling output lines that came in since we last gathered output.
while output_buffer: # pragma: no cover
line = output_buffer.readline().rstrip().decode()
if not line:
break
append_last_lines(
last_lines, captured_outputs[process], line, output_log_level=logging.ERROR
)
if len(last_lines) == ERROR_OUTPUT_MAX_LINE_COUNT:
last_lines.insert(0, '...')
@@ -113,31 +165,21 @@ def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path):
exit_code, command_for_process(process), '\n'.join(last_lines)
)
if not still_running:
break
# Consume any remaining output that we missed (if any).
for process in processes:
output_buffer = output_buffer_for_process(process, exclude_stdouts)
if not output_buffer:
continue
remaining_output = output_buffer.read().rstrip().decode()
if remaining_output: # pragma: no cover
logger.log(output_log_level, remaining_output)
if captured_outputs:
return {
process: '\n'.join(output_lines) for process, output_lines in captured_outputs.items()
}
def log_command(full_command, input_file, output_file):
def log_command(full_command, input_file=None, output_file=None):
'''
Log the given command (a sequence of command/argument strings), along with its input/output file
paths.
'''
logger.debug(
' '.join(full_command)
+ (' < {}'.format(getattr(input_file, 'name', '')) if input_file else '')
+ (' > {}'.format(getattr(output_file, 'name', '')) if output_file else '')
+ (f" < {getattr(input_file, 'name', '')}" if input_file else '')
+ (f" > {getattr(output_file, 'name', '')}" if output_file else '')
)
@@ -160,15 +202,14 @@ def execute_command(
):
'''
Execute the given command (a sequence of command/argument strings) and log its output at the
given log level. If output log level is None, instead capture and return the output. (Implies
run_to_completion.) If an open output file object is given, then write stdout to the file and
only log stderr (but only if an output log level is set). If an open input file object is given,
then read stdin from the file. If shell is True, execute the command within a shell. If an extra
environment dict is given, then use it to augment the current environment, and pass the result
into the command. If a working directory is given, use that as the present working directory
when running the command. If a Borg local path is given, and the command matches it (regardless
of arguments), treat exit code 1 as a warning instead of an error. If run to completion is
False, then return the process for the command without executing it to completion.
given log level. If an open output file object is given, then write stdout to the file and only
log stderr. If an open input file object is given, then read stdin from the file. If shell is
True, execute the command within a shell. If an extra environment dict is given, then use it to
augment the current environment, and pass the result into the command. If a working directory is
given, use that as the present working directory when running the command. If a Borg local path
is given, and the command matches it (regardless of arguments), treat exit code 1 as a warning
instead of an error. If run to completion is False, then return the process for the command
without executing it to completion.
Raise subprocess.CalledProcessError if an error occurs while running the command.
'''
@@ -177,12 +218,6 @@ def execute_command(
do_not_capture = bool(output_file is DO_NOT_CAPTURE)
command = ' '.join(full_command) if shell else full_command
if output_log_level is None:
output = subprocess.check_output(
command, shell=shell, env=environment, cwd=working_directory
)
return output.decode() if output is not None else None
process = subprocess.Popen(
command,
stdin=input_file,
@@ -200,6 +235,38 @@
)
def execute_command_and_capture_output(
full_command, capture_stderr=False, shell=False, extra_environment=None, working_directory=None,
):
'''
Execute the given command (a sequence of command/argument strings), capturing and returning its
output (stdout). If capture stderr is True, then capture and return stderr in addition to
stdout. If shell is True, execute the command within a shell. If an extra environment dict is
given, then use it to augment the current environment, and pass the result into the command. If
a working directory is given, use that as the present working directory when running the command.
Raise subprocess.CalledProcessError if an error occurs while running the command.
'''
log_command(full_command)
environment = {**os.environ, **extra_environment} if extra_environment else None
command = ' '.join(full_command) if shell else full_command
try:
output = subprocess.check_output(
command,
stderr=subprocess.STDOUT if capture_stderr else None,
shell=shell,
env=environment,
cwd=working_directory,
)
except subprocess.CalledProcessError as error:
if exit_code_indicates_error(command, error.returncode):
raise
output = error.output
return output.decode() if output is not None else None
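A hypothetical invocation of the new helper:

```python
# Capture "borg --version" output as a string, with an extra environment
# variable merged into the inherited environment.
version = execute_command_and_capture_output(
    ('borg', '--version'),
    extra_environment={'BORG_RELOCATED_REPO_ACCESS_IS_OK': 'yes'},
)
```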
def execute_command_with_processes(
full_command,
processes,
@@ -217,13 +284,14 @@ def execute_command_with_processes(
run as well. This is useful, for instance, for processes that are streaming output to a named
pipe that the given command is consuming from.
If an open output file object is given, then write stdout to the file and only log stderr (but
only if an output log level is set). If an open input file object is given, then read stdin from
the file. If shell is True, execute the command within a shell. If an extra environment dict is
given, then use it to augment the current environment, and pass the result into the command. If
a working directory is given, use that as the present working directory when running the
command. If a Borg local path is given, then for any matching command or process (regardless of
arguments), treat exit code 1 as a warning instead of an error.
If an open output file object is given, then write stdout to the file and only log stderr. But
if output log level is None, instead suppress logging and return the captured output for (only)
the given command. If an open input file object is given, then read stdin from the file. If
shell is True, execute the command within a shell. If an extra environment dict is given, then
use it to augment the current environment, and pass the result into the command. If a working
directory is given, use that as the present working directory when running the command. If a
Borg local path is given, then for any matching command or process (regardless of arguments),
treat exit code 1 as a warning instead of an error.
Raise subprocess.CalledProcessError if an error occurs while running the command or in the
upstream process.
@@ -254,9 +322,12 @@ def execute_command_with_processes(
process.kill()
raise
log_outputs(
captured_outputs = log_outputs(
tuple(processes) + (command_process,),
(input_file, output_file),
output_log_level,
borg_local_path=borg_local_path,
)
if output_log_level is None:
return captured_outputs.get(command_process)

View File

@@ -1,5 +1,6 @@
import logging
import os
import re
from borgmatic import execute
@@ -9,13 +10,18 @@ logger = logging.getLogger(__name__)
SOFT_FAIL_EXIT_CODE = 75
def interpolate_context(command, context):
def interpolate_context(config_filename, hook_description, command, context):
'''
Given a single hook command and a dict of context names/values, interpolate the values by
"{name}" into the command and return the result.
Given a config filename, a hook description, a single hook command, and a dict of context
names/values, interpolate the values by "{name}" into the command and return the result.
'''
for name, value in context.items():
command = command.replace('{%s}' % name, str(value))
command = command.replace(f'{{{name}}}', str(value))
for unsupported_variable in re.findall(r'{\w+}', command):
logger.warning(
f"{config_filename}: Variable '{unsupported_variable}' is not supported in {hook_description} hook"
)
return command
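For instance (hook command invented for the example):

```python
interpolate_context(
    'config.yaml',
    'on-error',
    'notify-send "borgmatic: {error}" && log {nonexistent}',
    {'error': 'repository not found'},
)
# -> 'notify-send "borgmatic: repository not found" && log {nonexistent}'
# ... plus a logged warning:
#     config.yaml: Variable '{nonexistent}' is not supported in on-error hook
```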
@@ -26,35 +32,32 @@ def execute_hook(commands, umask, config_filename, description, dry_run, **conte
a hook description, and whether this is a dry run, run the given commands. Or, don't run them
if this is a dry run.
The context contains optional values interpolated by name into the hook commands. Currently,
this only applies to the on_error hook.
The context contains optional values interpolated by name into the hook commands.
Raise ValueError if the umask cannot be parsed.
Raise subprocess.CalledProcessError if an error occurs in a hook.
'''
if not commands:
logger.debug('{}: No commands to run for {} hook'.format(config_filename, description))
logger.debug(f'{config_filename}: No commands to run for {description} hook')
return
dry_run_label = ' (dry run; not actually running hooks)' if dry_run else ''
context['configuration_filename'] = config_filename
commands = [interpolate_context(command, context) for command in commands]
commands = [
interpolate_context(config_filename, description, command, context) for command in commands
]
if len(commands) == 1:
logger.info(
'{}: Running command for {} hook{}'.format(config_filename, description, dry_run_label)
)
logger.info(f'{config_filename}: Running command for {description} hook{dry_run_label}')
else:
logger.info(
'{}: Running {} commands for {} hook{}'.format(
config_filename, len(commands), description, dry_run_label
)
f'{config_filename}: Running {len(commands)} commands for {description} hook{dry_run_label}',
)
if umask:
parsed_umask = int(str(umask), 8)
logger.debug('{}: Set hook umask to {}'.format(config_filename, oct(parsed_umask)))
logger.debug(f'{config_filename}: Set hook umask to {oct(parsed_umask)}')
original_umask = os.umask(parsed_umask)
else:
original_umask = None
@@ -86,9 +89,7 @@ def considered_soft_failure(config_filename, error):
if exit_code == SOFT_FAIL_EXIT_CODE:
logger.info(
'{}: Command hook exited with soft failure exit code ({}); skipping remaining actions'.format(
config_filename, SOFT_FAIL_EXIT_CODE
)
f'{config_filename}: Command hook exited with soft failure exit code ({SOFT_FAIL_EXIT_CODE}); skipping remaining actions',
)
return True

View File

@@ -22,23 +22,36 @@ def initialize_monitor(
pass
def ping_monitor(ping_url, config_filename, state, monitoring_log_level, dry_run):
def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run):
'''
Ping the given Cronhub URL, modified with the monitor.State. Use the given configuration
Ping the configured Cronhub URL, modified with the monitor.State. Use the given configuration
filename in any log entries. If this is a dry run, then don't actually ping anything.
'''
dry_run_label = ' (dry run; not actually pinging)' if dry_run else ''
formatted_state = '/{}/'.format(MONITOR_STATE_TO_CRONHUB[state])
ping_url = ping_url.replace('/start/', formatted_state).replace('/ping/', formatted_state)
if state not in MONITOR_STATE_TO_CRONHUB:
logger.debug(
f'{config_filename}: Ignoring unsupported monitoring {state.name.lower()} in Cronhub hook'
)
return
logger.info(
'{}: Pinging Cronhub {}{}'.format(config_filename, state.name.lower(), dry_run_label)
dry_run_label = ' (dry run; not actually pinging)' if dry_run else ''
formatted_state = f'/{MONITOR_STATE_TO_CRONHUB[state]}/'
ping_url = (
hook_config['ping_url']
.replace('/start/', formatted_state)
.replace('/ping/', formatted_state)
)
logger.debug('{}: Using Cronhub ping URL {}'.format(config_filename, ping_url))
logger.info(f'{config_filename}: Pinging Cronhub {state.name.lower()}{dry_run_label}')
logger.debug(f'{config_filename}: Using Cronhub ping URL {ping_url}')
if not dry_run:
logging.getLogger('urllib3').setLevel(logging.ERROR)
requests.get(ping_url)
try:
response = requests.get(ping_url)
if not response.ok:
response.raise_for_status()
except requests.exceptions.RequestException as error:
logger.warning(f'{config_filename}: Cronhub error: {error}')
def destroy_monitor(

View File

@@ -22,22 +22,31 @@ def initialize_monitor(
pass
def ping_monitor(ping_url, config_filename, state, monitoring_log_level, dry_run):
def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run):
'''
Ping the given Cronitor URL, modified with the monitor.State. Use the given configuration
Ping the configured Cronitor URL, modified with the monitor.State. Use the given configuration
filename in any log entries. If this is a dry run, then don't actually ping anything.
'''
dry_run_label = ' (dry run; not actually pinging)' if dry_run else ''
ping_url = '{}/{}'.format(ping_url, MONITOR_STATE_TO_CRONITOR[state])
if state not in MONITOR_STATE_TO_CRONITOR:
logger.debug(
f'{config_filename}: Ignoring unsupported monitoring {state.name.lower()} in Cronitor hook'
)
return
logger.info(
'{}: Pinging Cronitor {}{}'.format(config_filename, state.name.lower(), dry_run_label)
)
logger.debug('{}: Using Cronitor ping URL {}'.format(config_filename, ping_url))
dry_run_label = ' (dry run; not actually pinging)' if dry_run else ''
ping_url = f"{hook_config['ping_url']}/{MONITOR_STATE_TO_CRONITOR[state]}"
logger.info(f'{config_filename}: Pinging Cronitor {state.name.lower()}{dry_run_label}')
logger.debug(f'{config_filename}: Using Cronitor ping URL {ping_url}')
if not dry_run:
logging.getLogger('urllib3').setLevel(logging.ERROR)
requests.get(ping_url)
try:
response = requests.get(ping_url)
if not response.ok:
response.raise_for_status()
except requests.exceptions.RequestException as error:
logger.warning(f'{config_filename}: Cronitor error: {error}')
def destroy_monitor(

View File

@@ -1,16 +1,29 @@
import logging
from borgmatic.hooks import cronhub, cronitor, healthchecks, mysql, pagerduty, postgresql
from borgmatic.hooks import (
cronhub,
cronitor,
healthchecks,
mongodb,
mysql,
ntfy,
pagerduty,
postgresql,
sqlite,
)
logger = logging.getLogger(__name__)
HOOK_NAME_TO_MODULE = {
'healthchecks': healthchecks,
'cronitor': cronitor,
'cronhub': cronhub,
'cronitor': cronitor,
'healthchecks': healthchecks,
'mongodb_databases': mongodb,
'mysql_databases': mysql,
'ntfy': ntfy,
'pagerduty': pagerduty,
'postgresql_databases': postgresql,
'mysql_databases': mysql,
'sqlite_databases': sqlite,
}
@@ -18,26 +31,21 @@ def call_hook(function_name, hooks, log_prefix, hook_name, *args, **kwargs):
'''
Given the hooks configuration dict and a prefix to use in log entries, call the requested
function of the Python module corresponding to the given hook name. Supply that call with the
configuration for this hook, the log prefix, and any given args and kwargs. Return any return
value.
If the hook name is not present in the hooks configuration, then bail without calling anything.
configuration for this hook (if any), the log prefix, and any given args and kwargs. Return any
return value.
Raise ValueError if the hook name is unknown.
Raise AttributeError if the function name is not found in the module.
Raise anything else that the called function raises.
'''
config = hooks.get(hook_name)
if not config:
logger.debug('{}: No {} hook configured.'.format(log_prefix, hook_name))
return
config = hooks.get(hook_name, {})
try:
module = HOOK_NAME_TO_MODULE[hook_name]
except KeyError:
raise ValueError('Unknown hook name: {}'.format(hook_name))
raise ValueError(f'Unknown hook name: {hook_name}')
logger.debug('{}: Calling {} hook function {}'.format(log_prefix, hook_name, function_name))
logger.debug(f'{log_prefix}: Calling {hook_name} hook function {function_name}')
return getattr(module, function_name)(config, log_prefix, *args, **kwargs)
@@ -48,7 +56,7 @@ def call_hooks(function_name, hooks, log_prefix, hook_names, *args, **kwargs):
configuration for that hook, the log prefix, and any given args and kwargs. Collect any return
values into a dict from hook name to return value.
If the hook name is not present in the hooks configuration, then don't call the function for it,
If the hook name is not present in the hooks configuration, then don't call the function for it
and omit it from the return values.
Raise ValueError if the hook name is unknown.
@@ -60,3 +68,19 @@ def call_hooks(function_name, hooks, log_prefix, hook_names, *args, **kwargs):
for hook_name in hook_names
if hooks.get(hook_name)
}
def call_hooks_even_if_unconfigured(function_name, hooks, log_prefix, hook_names, *args, **kwargs):
'''
Given the hooks configuration dict and a prefix to use in log entries, call the requested
function of the Python module corresponding to each given hook name. Supply each call with the
configuration for that hook, the log prefix, and any given args and kwargs. Collect any return
values into a dict from hook name to return value.
Raise AttributeError if the function name is not found in the module.
Raise anything else that a called function raises. An error stops calls to subsequent functions.
'''
return {
hook_name: call_hook(function_name, hooks, log_prefix, hook_name, *args, **kwargs)
for hook_name in hook_names
}
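A sketch of how the dispatcher is typically driven (configuration values invented):

```python
import logging

from borgmatic.hooks import dispatch, monitor

# The parsed "hooks:" section of a configuration file.
hooks = {'healthchecks': {'ping_url': 'https://hc-ping.com/abc123'}}

# Ping each listed monitoring hook about a failed backup. call_hooks() skips
# hooks absent from the configuration; the *_even_if_unconfigured variant
# above would call them anyway, passing an empty config dict.
dispatch.call_hooks(
    'ping_monitor',
    hooks,
    'config.yaml',  # log prefix
    ('healthchecks', 'cronitor', 'cronhub'),
    monitor.State.FAIL,
    logging.ERROR,
    False,  # dry run?
)
```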

View File

@@ -2,11 +2,16 @@ import logging
import os
import shutil
from borgmatic.borg.create import DEFAULT_BORGMATIC_SOURCE_DIRECTORY
from borgmatic.borg.state import DEFAULT_BORGMATIC_SOURCE_DIRECTORY
logger = logging.getLogger(__name__)
DATABASE_HOOK_NAMES = ('postgresql_databases', 'mysql_databases')
DATABASE_HOOK_NAMES = (
'postgresql_databases',
'mysql_databases',
'mongodb_databases',
'sqlite_databases',
)
def make_database_dump_path(borgmatic_source_directory, database_hook_name):
@@ -28,7 +33,7 @@ def make_database_dump_filename(dump_path, name, hostname=None):
Raise ValueError if the database name is invalid.
'''
if os.path.sep in name:
raise ValueError('Invalid database name {}'.format(name))
raise ValueError(f'Invalid database name {name}')
return os.path.join(os.path.expanduser(dump_path), hostname or 'localhost', name)
@@ -55,9 +60,7 @@ def remove_database_dumps(dump_path, database_type_name, log_prefix, dry_run):
'''
dry_run_label = ' (dry run; not actually removing anything)' if dry_run else ''
logger.info(
'{}: Removing {} database dumps{}'.format(log_prefix, database_type_name, dry_run_label)
)
logger.debug(f'{log_prefix}: Removing {database_type_name} database dumps{dry_run_label}')
expanded_path = os.path.expanduser(dump_path)
@@ -73,4 +76,4 @@ def convert_glob_patterns_to_borg_patterns(patterns):
Convert a sequence of shell glob patterns like "/etc/*" to the corresponding Borg archive
patterns like "sh:etc/*".
'''
return ['sh:{}'.format(pattern.lstrip(os.path.sep)) for pattern in patterns]
return [f'sh:{pattern.lstrip(os.path.sep)}' for pattern in patterns]

View File

@@ -10,16 +10,18 @@ MONITOR_STATE_TO_HEALTHCHECKS = {
monitor.State.START: 'start',
monitor.State.FINISH: None, # Healthchecks doesn't append to the URL for the finished state.
monitor.State.FAIL: 'fail',
monitor.State.LOG: 'log',
}
PAYLOAD_TRUNCATION_INDICATOR = '...\n'
PAYLOAD_LIMIT_BYTES = 10 * 1024 - len(PAYLOAD_TRUNCATION_INDICATOR)
DEFAULT_PING_BODY_LIMIT_BYTES = 100000
class Forgetful_buffering_handler(logging.Handler):
'''
A buffering log handler that stores log messages in memory, and throws away messages (oldest
first) once a particular capacity in bytes is reached.
first) once a particular capacity in bytes is reached. But if the given byte capacity is zero,
don't throw away any messages.
'''
def __init__(self, byte_capacity, log_level):
@@ -36,6 +38,9 @@ class Forgetful_buffering_handler(logging.Handler):
self.byte_count += len(message)
self.buffer.append(message)
if not self.byte_capacity:
return
while self.byte_count > self.byte_capacity and self.buffer:
self.byte_count -= len(self.buffer[0])
self.buffer.pop(0)
@@ -65,51 +70,70 @@ def format_buffered_logs_for_payload():
return payload
def initialize_monitor(
ping_url_or_uuid, config_filename, monitoring_log_level, dry_run
): # pragma: no cover
def initialize_monitor(hook_config, config_filename, monitoring_log_level, dry_run):
'''
Add a handler to the root logger that stores in memory the most recent logs emitted. That
way, we can send them all to Healthchecks upon a finish or failure state.
Add a handler to the root logger that stores in memory the most recent logs emitted. That way,
we can send them all to Healthchecks upon a finish or failure state. But skip this if the
"send_logs" option is false.
'''
if hook_config.get('send_logs') is False:
return
ping_body_limit = max(
hook_config.get('ping_body_limit', DEFAULT_PING_BODY_LIMIT_BYTES)
- len(PAYLOAD_TRUNCATION_INDICATOR),
0,
)
logging.getLogger().addHandler(
Forgetful_buffering_handler(PAYLOAD_LIMIT_BYTES, monitoring_log_level)
Forgetful_buffering_handler(ping_body_limit, monitoring_log_level)
)
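A quick worked example of the limit math above, using the constants from this module:

```python
PAYLOAD_TRUNCATION_INDICATOR = '...\n'
DEFAULT_PING_BODY_LIMIT_BYTES = 100000

# The 4-byte truncation indicator is reserved out of the configured limit,
# leaving 99996 bytes of buffered logs by default.
assert max(DEFAULT_PING_BODY_LIMIT_BYTES - len(PAYLOAD_TRUNCATION_INDICATOR), 0) == 99996

# A configured "ping_body_limit" of 0 clamps the capacity to zero, which the
# buffering handler above interprets as "never throw messages away".
assert max(0 - len(PAYLOAD_TRUNCATION_INDICATOR), 0) == 0
```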
def ping_monitor(ping_url_or_uuid, config_filename, state, monitoring_log_level, dry_run):
def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run):
'''
Ping the given Healthchecks URL or UUID, modified with the monitor.State. Use the given
Ping the configured Healthchecks URL or UUID, modified with the monitor.State. Use the given
configuration filename in any log entries, and log to Healthchecks with the given log level.
If this is a dry run, then don't actually ping anything.
'''
ping_url = (
ping_url_or_uuid
if ping_url_or_uuid.startswith('http')
else 'https://hc-ping.com/{}'.format(ping_url_or_uuid)
hook_config['ping_url']
if hook_config['ping_url'].startswith('http')
else f"https://hc-ping.com/{hook_config['ping_url']}"
)
dry_run_label = ' (dry run; not actually pinging)' if dry_run else ''
if 'states' in hook_config and state.name.lower() not in hook_config['states']:
logger.info(
f'{config_filename}: Skipping Healthchecks {state.name.lower()} ping due to configured states'
)
return
healthchecks_state = MONITOR_STATE_TO_HEALTHCHECKS.get(state)
if healthchecks_state:
ping_url = '{}/{}'.format(ping_url, healthchecks_state)
ping_url = f'{ping_url}/{healthchecks_state}'
logger.info(
'{}: Pinging Healthchecks {}{}'.format(config_filename, state.name.lower(), dry_run_label)
)
logger.debug('{}: Using Healthchecks ping URL {}'.format(config_filename, ping_url))
logger.info(f'{config_filename}: Pinging Healthchecks {state.name.lower()}{dry_run_label}')
logger.debug(f'{config_filename}: Using Healthchecks ping URL {ping_url}')
if state in (monitor.State.FINISH, monitor.State.FAIL):
if state in (monitor.State.FINISH, monitor.State.FAIL, monitor.State.LOG):
payload = format_buffered_logs_for_payload()
else:
payload = ''
if not dry_run:
logging.getLogger('urllib3').setLevel(logging.ERROR)
requests.post(ping_url, data=payload.encode('utf-8'))
try:
response = requests.post(
ping_url, data=payload.encode('utf-8'), verify=hook_config.get('verify_tls', True)
)
if not response.ok:
response.raise_for_status()
except requests.exceptions.RequestException as error:
logger.warning(f'{config_filename}: Healthchecks error: {error}')
def destroy_monitor(ping_url_or_uuid, config_filename, monitoring_log_level, dry_run):
def destroy_monitor(hook_config, config_filename, monitoring_log_level, dry_run):
'''
Remove the monitor handler that was added to the root logger. This prevents the handler from
getting reused by other instances of this monitor.

borgmatic/hooks/mongodb.py (new file, 164 lines)

@ -0,0 +1,164 @@
import logging
from borgmatic.execute import execute_command, execute_command_with_processes
from borgmatic.hooks import dump
logger = logging.getLogger(__name__)
def make_dump_path(location_config): # pragma: no cover
'''
Make the dump path from the given location configuration and the name of this hook.
'''
return dump.make_database_dump_path(
location_config.get('borgmatic_source_directory'), 'mongodb_databases'
)
def dump_databases(databases, log_prefix, location_config, dry_run):
'''
Dump the given MongoDB databases to a named pipe. The databases are supplied as a sequence of
dicts, one dict describing each database as per the configuration schema. Use the given log
prefix in any log entries. Use the given location configuration dict to construct the
destination path.
Return a sequence of subprocess.Popen instances for the dump processes ready to spew to a named
pipe. But if this is a dry run, then don't actually dump anything and return an empty sequence.
'''
dry_run_label = ' (dry run; not actually dumping anything)' if dry_run else ''
logger.info(f'{log_prefix}: Dumping MongoDB databases{dry_run_label}')
processes = []
for database in databases:
name = database['name']
dump_filename = dump.make_database_dump_filename(
make_dump_path(location_config), name, database.get('hostname')
)
dump_format = database.get('format', 'archive')
logger.debug(
f'{log_prefix}: Dumping MongoDB database {name} to {dump_filename}{dry_run_label}',
)
if dry_run:
continue
command = build_dump_command(database, dump_filename, dump_format)
if dump_format == 'directory':
dump.create_parent_directory_for_dump(dump_filename)
execute_command(command, shell=True)
else:
dump.create_named_pipe_for_dump(dump_filename)
processes.append(execute_command(command, shell=True, run_to_completion=False))
return processes
def build_dump_command(database, dump_filename, dump_format):
'''
Return the mongodump command from a single database configuration.
'''
all_databases = database['name'] == 'all'
command = ['mongodump']
if dump_format == 'directory':
command.extend(('--out', dump_filename))
if 'hostname' in database:
command.extend(('--host', database['hostname']))
if 'port' in database:
command.extend(('--port', str(database['port'])))
if 'username' in database:
command.extend(('--username', database['username']))
if 'password' in database:
command.extend(('--password', database['password']))
if 'authentication_database' in database:
command.extend(('--authenticationDatabase', database['authentication_database']))
if not all_databases:
command.extend(('--db', database['name']))
if 'options' in database:
command.extend(database['options'].split(' '))
if dump_format != 'directory':
command.extend(('--archive', '>', dump_filename))
return command
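To make the flag assembly concrete, here's roughly what this builds for a hypothetical configuration (the database values and dump filename below are invented for illustration):

```python
database = {'name': 'messages', 'hostname': 'db4.example.org', 'port': 27018}
dump_filename = '/root/.borgmatic/mongodb_databases/db4.example.org/messages'

command = build_dump_command(database, dump_filename, dump_format='archive')
# command == ['mongodump', '--host', 'db4.example.org', '--port', '27018',
#             '--db', 'messages', '--archive', '>', dump_filename]
```

Note the trailing `'>', dump_filename`: the command runs with `shell=True` above, so the archive-format dump gets shell-redirected into the named pipe.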
def remove_database_dumps(databases, log_prefix, location_config, dry_run): # pragma: no cover
'''
Remove all database dump files for this hook regardless of the given databases. Use the log
prefix in any log entries. Use the given location configuration dict to construct the
destination path. If this is a dry run, then don't actually remove anything.
'''
dump.remove_database_dumps(make_dump_path(location_config), 'MongoDB', log_prefix, dry_run)
def make_database_dump_pattern(
databases, log_prefix, location_config, name=None
): # pragma: no cover
'''
Given a sequence of configuration dicts, a prefix to log with, a location configuration dict,
and a database name to match, return the corresponding glob patterns to match the database dump
in an archive.
'''
return dump.make_database_dump_filename(make_dump_path(location_config), name, hostname='*')
def restore_database_dump(database_config, log_prefix, location_config, dry_run, extract_process):
'''
Restore the given MongoDB database from an extract stream. The database is supplied as a
one-element sequence containing a dict describing the database, as per the configuration schema.
Use the given log prefix in any log entries. If this is a dry run, then don't actually restore
anything. Trigger the given active extract process (an instance of subprocess.Popen) to produce
output to consume.
If the extract process is None, then restore the dump from the filesystem rather than from an
extract stream.
'''
dry_run_label = ' (dry run; not actually restoring anything)' if dry_run else ''
if len(database_config) != 1:
raise ValueError('The database configuration value is invalid')
database = database_config[0]
dump_filename = dump.make_database_dump_filename(
make_dump_path(location_config), database['name'], database.get('hostname')
)
restore_command = build_restore_command(extract_process, database, dump_filename)
logger.debug(f"{log_prefix}: Restoring MongoDB database {database['name']}{dry_run_label}")
if dry_run:
return
# Don't give Borg local path so as to error on warnings, as "borg extract" only gives a warning
# if the restore paths don't exist in the archive.
execute_command_with_processes(
restore_command,
[extract_process] if extract_process else [],
output_log_level=logging.DEBUG,
input_file=extract_process.stdout if extract_process else None,
)
def build_restore_command(extract_process, database, dump_filename):
'''
Return the mongorestore command from a single database configuration.
'''
command = ['mongorestore']
if extract_process:
command.append('--archive')
else:
command.extend(('--dir', dump_filename))
if database['name'] != 'all':
command.extend(('--drop', '--db', database['name']))
if 'hostname' in database:
command.extend(('--host', database['hostname']))
if 'port' in database:
command.extend(('--port', str(database['port'])))
if 'username' in database:
command.extend(('--username', database['username']))
if 'password' in database:
command.extend(('--password', database['password']))
if 'authentication_database' in database:
command.extend(('--authenticationDatabase', database['authentication_database']))
if 'restore_options' in database:
command.extend(database['restore_options'].split(' '))
return command
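And the mirror image for restores, using the same hypothetical database (any truthy extract process selects the `--archive` streaming path):

```python
database = {'name': 'messages', 'hostname': 'db4.example.org'}
extract_process = True  # stand-in for a subprocess.Popen instance
dump_filename = '/root/.borgmatic/mongodb_databases/db4.example.org/messages'

command = build_restore_command(extract_process, database, dump_filename)
# command == ['mongorestore', '--archive', '--drop', '--db', 'messages',
#             '--host', 'db4.example.org']
```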


@ -1,9 +1,10 @@
from enum import Enum
MONITOR_HOOK_NAMES = ('healthchecks', 'cronitor', 'cronhub', 'pagerduty')
MONITOR_HOOK_NAMES = ('healthchecks', 'cronitor', 'cronhub', 'pagerduty', 'ntfy')
class State(Enum):
START = 1
FINISH = 2
FAIL = 3
LOG = 4


@ -1,6 +1,12 @@
import copy
import logging
import os
from borgmatic.execute import execute_command, execute_command_with_processes
from borgmatic.execute import (
execute_command,
execute_command_and_capture_output,
execute_command_with_processes,
)
from borgmatic.hooks import dump
logger = logging.getLogger(__name__)
@ -18,19 +24,20 @@ def make_dump_path(location_config): # pragma: no cover
SYSTEM_DATABASE_NAMES = ('information_schema', 'mysql', 'performance_schema', 'sys')
def database_names_to_dump(database, extra_environment, log_prefix, dry_run_label):
def database_names_to_dump(database, extra_environment, log_prefix, dry_run):
'''
Given a requested database name, return the corresponding sequence of database names to dump.
Given a requested database config, return the corresponding sequence of database names to dump.
In the case of "all", query for the names of databases on the configured host and return them,
excluding any system databases that will cause problems during restore.
'''
requested_name = database['name']
if requested_name != 'all':
return (requested_name,)
if database['name'] != 'all':
return (database['name'],)
if dry_run:
return ()
show_command = (
('mysql',)
+ (tuple(database['list_options'].split(' ')) if 'list_options' in database else ())
+ (('--host', database['hostname']) if 'hostname' in database else ())
+ (('--port', str(database['port'])) if 'port' in database else ())
+ (('--protocol', 'tcp') if 'hostname' in database or 'port' in database else ())
@ -38,11 +45,9 @@ def database_names_to_dump(database, extra_environment, log_prefix, dry_run_labe
+ ('--skip-column-names', '--batch')
+ ('--execute', 'show schemas')
)
logger.debug(
'{}: Querying for "all" MySQL databases to dump{}'.format(log_prefix, dry_run_label)
)
show_output = execute_command(
show_command, output_log_level=None, extra_environment=extra_environment
logger.debug(f'{log_prefix}: Querying for "all" MySQL databases to dump')
show_output = execute_command_and_capture_output(
show_command, extra_environment=extra_environment
)
return tuple(
@ -52,6 +57,53 @@ def database_names_to_dump(database, extra_environment, log_prefix, dry_run_labe
)
def execute_dump_command(
database, log_prefix, dump_path, database_names, extra_environment, dry_run, dry_run_label
):
'''
Kick off a dump for the given MySQL/MariaDB database (provided as a configuration dict) to a
named pipe constructed from the given dump path and database names. Use the given log prefix in
any log entries.
Return a subprocess.Popen instance for the dump process ready to spew to a named pipe. But if
this is a dry run, then don't actually dump anything and return None.
'''
database_name = database['name']
dump_filename = dump.make_database_dump_filename(
dump_path, database['name'], database.get('hostname')
)
if os.path.exists(dump_filename):
logger.warning(
f'{log_prefix}: Skipping duplicate dump of MySQL database "{database_name}" to {dump_filename}'
)
return None
dump_command = (
('mysqldump',)
+ (tuple(database['options'].split(' ')) if 'options' in database else ())
+ (('--add-drop-database',) if database.get('add_drop_database', True) else ())
+ (('--host', database['hostname']) if 'hostname' in database else ())
+ (('--port', str(database['port'])) if 'port' in database else ())
+ (('--protocol', 'tcp') if 'hostname' in database or 'port' in database else ())
+ (('--user', database['username']) if 'username' in database else ())
+ ('--databases',)
+ database_names
+ ('--result-file', dump_filename)
)
logger.debug(
f'{log_prefix}: Dumping MySQL database "{database_name}" to {dump_filename}{dry_run_label}'
)
if dry_run:
return None
dump.create_named_pipe_for_dump(dump_filename)
return execute_command(
dump_command, extra_environment=extra_environment, run_to_completion=False,
)
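For a sense of the resulting command line, here's a hypothetical configuration and the tuple the code above would assemble (values invented for illustration):

```python
database = {'name': 'posts', 'hostname': 'db3.example.org', 'port': 3307, 'username': 'root'}
dump_filename = '/root/.borgmatic/mysql_databases/db3.example.org/posts'

# With the default add_drop_database of True and no extra options, the
# assembled dump_command comes out as:
expected = (
    'mysqldump', '--add-drop-database',
    '--host', 'db3.example.org', '--port', '3307', '--protocol', 'tcp',
    '--user', 'root', '--databases', 'posts', '--result-file', dump_filename,
)
```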
def dump_databases(databases, log_prefix, location_config, dry_run):
'''
Dump the given MySQL/MariaDB databases to a named pipe. The databases are supplied as a sequence
@ -65,55 +117,50 @@ def dump_databases(databases, log_prefix, location_config, dry_run):
dry_run_label = ' (dry run; not actually dumping anything)' if dry_run else ''
processes = []
logger.info('{}: Dumping MySQL databases{}'.format(log_prefix, dry_run_label))
logger.info(f'{log_prefix}: Dumping MySQL databases{dry_run_label}')
for database in databases:
requested_name = database['name']
dump_filename = dump.make_database_dump_filename(
make_dump_path(location_config), requested_name, database.get('hostname')
)
dump_path = make_dump_path(location_config)
extra_environment = {'MYSQL_PWD': database['password']} if 'password' in database else None
dump_database_names = database_names_to_dump(
database, extra_environment, log_prefix, dry_run_label
database, extra_environment, log_prefix, dry_run
)
if not dump_database_names:
if dry_run:
continue
raise ValueError('Cannot find any MySQL databases to dump.')
dump_command = (
('mysqldump',)
+ ('--add-drop-database',)
+ (('--host', database['hostname']) if 'hostname' in database else ())
+ (('--port', str(database['port'])) if 'port' in database else ())
+ (('--protocol', 'tcp') if 'hostname' in database or 'port' in database else ())
+ (('--user', database['username']) if 'username' in database else ())
+ (tuple(database['options'].split(' ')) if 'options' in database else ())
+ ('--databases',)
+ dump_database_names
# Use shell redirection rather than execute_command(output_file=open(...)) to prevent
# the open() call on a named pipe from hanging the main borgmatic process.
+ ('>', dump_filename)
)
logger.debug(
'{}: Dumping MySQL database {} to {}{}'.format(
log_prefix, requested_name, dump_filename, dry_run_label
if database['name'] == 'all' and database.get('format'):
for dump_name in dump_database_names:
renamed_database = copy.copy(database)
renamed_database['name'] = dump_name
processes.append(
execute_dump_command(
renamed_database,
log_prefix,
dump_path,
(dump_name,),
extra_environment,
dry_run,
dry_run_label,
)
)
else:
processes.append(
execute_dump_command(
database,
log_prefix,
dump_path,
dump_database_names,
extra_environment,
dry_run,
dry_run_label,
)
)
)
if dry_run:
continue
dump.create_named_pipe_for_dump(dump_filename)
processes.append(
execute_command(
dump_command,
shell=True,
extra_environment=extra_environment,
run_to_completion=False,
)
)
return processes
return [process for process in processes if process]
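The "all"-plus-format branch above fans a single configured database out into one dump per discovered database. A rough illustration of that renaming step (the names here are made up):

```python
import copy

database = {'name': 'all', 'format': 'sql'}
dump_database_names = ('users', 'posts')  # as returned by database_names_to_dump()

for dump_name in dump_database_names:
    renamed_database = copy.copy(database)
    renamed_database['name'] = dump_name
    # Each copy then gets its own execute_dump_command() call, producing
    # separate dump files for 'users' and 'posts'.
    print(renamed_database['name'])
```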
def remove_database_dumps(databases, log_prefix, location_config, dry_run): # pragma: no cover
@ -151,7 +198,8 @@ def restore_database_dump(database_config, log_prefix, location_config, dry_run,
database = database_config[0]
restore_command = (
('mysql', '--batch', '--verbose')
('mysql', '--batch')
+ (tuple(database['restore_options'].split(' ')) if 'restore_options' in database else ())
+ (('--host', database['hostname']) if 'hostname' in database else ())
+ (('--port', str(database['port'])) if 'port' in database else ())
+ (('--protocol', 'tcp') if 'hostname' in database or 'port' in database else ())
@ -159,17 +207,16 @@ def restore_database_dump(database_config, log_prefix, location_config, dry_run,
)
extra_environment = {'MYSQL_PWD': database['password']} if 'password' in database else None
logger.debug(
'{}: Restoring MySQL database {}{}'.format(log_prefix, database['name'], dry_run_label)
)
logger.debug(f"{log_prefix}: Restoring MySQL database {database['name']}{dry_run_label}")
if dry_run:
return
# Don't give Borg local path so as to error on warnings, as "borg extract" only gives a warning
# if the restore paths don't exist in the archive.
execute_command_with_processes(
restore_command,
[extract_process],
output_log_level=logging.DEBUG,
input_file=extract_process.stdout,
extra_environment=extra_environment,
borg_local_path=location_config.get('local_path', 'borg'),
)

borgmatic/hooks/ntfy.py (new file, 83 lines)

@ -0,0 +1,83 @@
import logging
import requests
logger = logging.getLogger(__name__)
def initialize_monitor(
ping_url, config_filename, monitoring_log_level, dry_run
): # pragma: no cover
'''
No initialization is necessary for this monitor.
'''
pass
def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run):
'''
Ping the configured Ntfy topic. Use the given configuration filename in any log entries.
If this is a dry run, then don't actually ping anything.
'''
run_states = hook_config.get('states', ['fail'])
if state.name.lower() in run_states:
dry_run_label = ' (dry run; not actually pinging)' if dry_run else ''
state_config = hook_config.get(
state.name.lower(),
{
'title': f'A Borgmatic {state.name} event happened',
'message': f'A Borgmatic {state.name} event happened',
'priority': 'default',
'tags': 'borgmatic',
},
)
base_url = hook_config.get('server', 'https://ntfy.sh')
topic = hook_config.get('topic')
logger.info(f'{config_filename}: Pinging ntfy topic {topic}{dry_run_label}')
logger.debug(f'{config_filename}: Using Ntfy ping URL {base_url}/{topic}')
headers = {
'X-Title': state_config.get('title'),
'X-Message': state_config.get('message'),
'X-Priority': state_config.get('priority'),
'X-Tags': state_config.get('tags'),
}
username = hook_config.get('username')
password = hook_config.get('password')
auth = None
if (username and password) is not None:
auth = requests.auth.HTTPBasicAuth(username, password)
logger.info(f'{config_filename}: Using basic auth with user {username} for ntfy')
elif username is not None:
logger.warning(
f'{config_filename}: Password missing for ntfy authentication, defaulting to no auth'
)
elif password is not None:
logger.warning(
f'{config_filename}: Username missing for ntfy authentication, defaulting to no auth'
)
if not dry_run:
logging.getLogger('urllib3').setLevel(logging.ERROR)
try:
response = requests.post(f'{base_url}/{topic}', headers=headers, auth=auth)
if not response.ok:
response.raise_for_status()
except requests.exceptions.RequestException as error:
logger.warning(f'{config_filename}: ntfy error: {error}')
def destroy_monitor(
ping_url_or_uuid, config_filename, monitoring_log_level, dry_run
): # pragma: no cover
'''
No destruction is necessary for this monitor.
'''
pass


@ -21,22 +21,20 @@ def initialize_monitor(
pass
def ping_monitor(integration_key, config_filename, state, monitoring_log_level, dry_run):
def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run):
'''
If this is an error state, create a PagerDuty event with the given integration key. Use the
given configuration filename in any log entries. If this is a dry run, then don't actually
If this is an error state, create a PagerDuty event with the configured integration key. Use
the given configuration filename in any log entries. If this is a dry run, then don't actually
create an event.
'''
if state != monitor.State.FAIL:
logger.debug(
'{}: Ignoring unsupported monitoring {} in PagerDuty hook'.format(
config_filename, state.name.lower()
)
f'{config_filename}: Ignoring unsupported monitoring {state.name.lower()} in PagerDuty hook',
)
return
dry_run_label = ' (dry run; not actually sending)' if dry_run else ''
logger.info('{}: Sending failure event to PagerDuty {}'.format(config_filename, dry_run_label))
logger.info(f'{config_filename}: Sending failure event to PagerDuty {dry_run_label}')
if dry_run:
return
@ -47,10 +45,10 @@ def ping_monitor(integration_key, config_filename, state, monitoring_log_level,
)
payload = json.dumps(
{
'routing_key': integration_key,
'routing_key': hook_config['integration_key'],
'event_action': 'trigger',
'payload': {
'summary': 'backup failed on {}'.format(hostname),
'summary': f'backup failed on {hostname}',
'severity': 'error',
'source': hostname,
'timestamp': local_timestamp,
@ -65,10 +63,15 @@ def ping_monitor(integration_key, config_filename, state, monitoring_log_level,
},
}
)
logger.debug('{}: Using PagerDuty payload: {}'.format(config_filename, payload))
logger.debug(f'{config_filename}: Using PagerDuty payload: {payload}')
logging.getLogger('urllib3').setLevel(logging.ERROR)
requests.post(EVENTS_API_URL, data=payload.encode('utf-8'))
try:
response = requests.post(EVENTS_API_URL, data=payload.encode('utf-8'))
if not response.ok:
response.raise_for_status()
except requests.exceptions.RequestException as error:
logger.warning(f'{config_filename}: PagerDuty error: {error}')
def destroy_monitor(


@ -1,6 +1,12 @@
import csv
import logging
import os
from borgmatic.execute import execute_command, execute_command_with_processes
from borgmatic.execute import (
execute_command,
execute_command_and_capture_output,
execute_command_with_processes,
)
from borgmatic.hooks import dump
logger = logging.getLogger(__name__)
@ -34,6 +40,44 @@ def make_extra_environment(database):
return extra
EXCLUDED_DATABASE_NAMES = ('template0', 'template1')
def database_names_to_dump(database, extra_environment, log_prefix, dry_run):
'''
Given a requested database config, return the corresponding sequence of database names to dump.
In the case of "all" when a database format is given, query for the names of databases on the
configured host and return them. For "all" without a database format, just return a sequence
containing "all".
'''
requested_name = database['name']
if requested_name != 'all':
return (requested_name,)
if not database.get('format'):
return ('all',)
if dry_run:
return ()
list_command = (
('psql', '--list', '--no-password', '--csv', '--tuples-only')
+ (('--host', database['hostname']) if 'hostname' in database else ())
+ (('--port', str(database['port'])) if 'port' in database else ())
+ (('--username', database['username']) if 'username' in database else ())
+ (tuple(database['list_options'].split(' ')) if 'list_options' in database else ())
)
logger.debug(f'{log_prefix}: Querying for "all" PostgreSQL databases to dump')
list_output = execute_command_and_capture_output(
list_command, extra_environment=extra_environment
)
return tuple(
row[0]
for row in csv.reader(list_output.splitlines(), delimiter=',', quotechar='"')
if row[0] not in EXCLUDED_DATABASE_NAMES
)
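The parsing step lends itself to a quick standalone check. Here's a rough simulation with made-up `psql --list --csv --tuples-only` output:

```python
import csv

list_output = 'users,postgres,UTF8\nposts,postgres,UTF8\ntemplate0,postgres,UTF8\ntemplate1,postgres,UTF8\n'
EXCLUDED_DATABASE_NAMES = ('template0', 'template1')

names = tuple(
    row[0]
    for row in csv.reader(list_output.splitlines(), delimiter=',', quotechar='"')
    if row[0] not in EXCLUDED_DATABASE_NAMES
)
assert names == ('users', 'posts')
```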
def dump_databases(databases, log_prefix, location_config, dry_run):
'''
Dump the given PostgreSQL databases to a named pipe. The databases are supplied as a sequence of
@ -43,58 +87,76 @@ def dump_databases(databases, log_prefix, location_config, dry_run):
Return a sequence of subprocess.Popen instances for the dump processes ready to spew to a named
pipe. But if this is a dry run, then don't actually dump anything and return an empty sequence.
Raise ValueError if the databases to dump cannot be determined.
'''
dry_run_label = ' (dry run; not actually dumping anything)' if dry_run else ''
processes = []
logger.info('{}: Dumping PostgreSQL databases{}'.format(log_prefix, dry_run_label))
logger.info(f'{log_prefix}: Dumping PostgreSQL databases{dry_run_label}')
for database in databases:
name = database['name']
dump_filename = dump.make_database_dump_filename(
make_dump_path(location_config), name, database.get('hostname')
)
all_databases = bool(name == 'all')
dump_format = database.get('format', 'custom')
command = (
(
'pg_dumpall' if all_databases else 'pg_dump',
'--no-password',
'--clean',
'--if-exists',
)
+ (('--host', database['hostname']) if 'hostname' in database else ())
+ (('--port', str(database['port'])) if 'port' in database else ())
+ (('--username', database['username']) if 'username' in database else ())
+ (() if all_databases else ('--format', dump_format))
+ (('--file', dump_filename) if dump_format == 'directory' else ())
+ (tuple(database['options'].split(' ')) if 'options' in database else ())
+ (() if all_databases else (name,))
# Use shell redirection rather than the --file flag to sidestep synchronization issues
# when pg_dump/pg_dumpall tries to write to a named pipe. But for the directory dump
format in particular, a named destination is required, and redirection doesn't work.
+ (('>', dump_filename) if dump_format != 'directory' else ())
)
extra_environment = make_extra_environment(database)
logger.debug(
'{}: Dumping PostgreSQL database {} to {}{}'.format(
log_prefix, name, dump_filename, dry_run_label
)
dump_path = make_dump_path(location_config)
dump_database_names = database_names_to_dump(
database, extra_environment, log_prefix, dry_run
)
if dry_run:
continue
if dump_format == 'directory':
dump.create_parent_directory_for_dump(dump_filename)
else:
dump.create_named_pipe_for_dump(dump_filename)
if not dump_database_names:
if dry_run:
continue
processes.append(
execute_command(
command, shell=True, extra_environment=extra_environment, run_to_completion=False
raise ValueError('Cannot find any PostgreSQL databases to dump.')
for database_name in dump_database_names:
dump_format = database.get('format', None if database_name == 'all' else 'custom')
default_dump_command = 'pg_dumpall' if database_name == 'all' else 'pg_dump'
dump_command = database.get('pg_dump_command') or default_dump_command
dump_filename = dump.make_database_dump_filename(
dump_path, database_name, database.get('hostname')
)
)
if os.path.exists(dump_filename):
logger.warning(
f'{log_prefix}: Skipping duplicate dump of PostgreSQL database "{database_name}" to {dump_filename}'
)
continue
command = (
(dump_command, '--no-password', '--clean', '--if-exists',)
+ (('--host', database['hostname']) if 'hostname' in database else ())
+ (('--port', str(database['port'])) if 'port' in database else ())
+ (('--username', database['username']) if 'username' in database else ())
+ (('--format', dump_format) if dump_format else ())
+ (('--file', dump_filename) if dump_format == 'directory' else ())
+ (tuple(database['options'].split(' ')) if 'options' in database else ())
+ (() if database_name == 'all' else (database_name,))
# Use shell redirection rather than the --file flag to sidestep synchronization issues
# when pg_dump/pg_dumpall tries to write to a named pipe. But for the directory dump
format in particular, a named destination is required, and redirection doesn't work.
+ (('>', dump_filename) if dump_format != 'directory' else ())
)
logger.debug(
f'{log_prefix}: Dumping PostgreSQL database "{database_name}" to {dump_filename}{dry_run_label}'
)
if dry_run:
continue
if dump_format == 'directory':
dump.create_parent_directory_for_dump(dump_filename)
execute_command(
command, shell=True, extra_environment=extra_environment,
)
else:
dump.create_named_pipe_for_dump(dump_filename)
processes.append(
execute_command(
command,
shell=True,
extra_environment=extra_environment,
run_to_completion=False,
)
)
return processes
@ -140,16 +202,19 @@ def restore_database_dump(database_config, log_prefix, location_config, dry_run,
dump_filename = dump.make_database_dump_filename(
make_dump_path(location_config), database['name'], database.get('hostname')
)
psql_command = database.get('psql_command') or 'psql'
analyze_command = (
('psql', '--no-password', '--quiet')
(psql_command, '--no-password', '--quiet')
+ (('--host', database['hostname']) if 'hostname' in database else ())
+ (('--port', str(database['port'])) if 'port' in database else ())
+ (('--username', database['username']) if 'username' in database else ())
+ (('--dbname', database['name']) if not all_databases else ())
+ (tuple(database['analyze_options'].split(' ')) if 'analyze_options' in database else ())
+ ('--command', 'ANALYZE')
)
pg_restore_command = database.get('pg_restore_command') or 'pg_restore'
restore_command = (
('psql' if all_databases else 'pg_restore', '--no-password')
(psql_command if all_databases else pg_restore_command, '--no-password')
+ (
('--if-exists', '--exit-on-error', '--clean', '--dbname', database['name'])
if not all_databases
@ -158,22 +223,22 @@ def restore_database_dump(database_config, log_prefix, location_config, dry_run,
+ (('--host', database['hostname']) if 'hostname' in database else ())
+ (('--port', str(database['port'])) if 'port' in database else ())
+ (('--username', database['username']) if 'username' in database else ())
+ (tuple(database['restore_options'].split(' ')) if 'restore_options' in database else ())
+ (() if extract_process else (dump_filename,))
)
extra_environment = make_extra_environment(database)
logger.debug(
'{}: Restoring PostgreSQL database {}{}'.format(log_prefix, database['name'], dry_run_label)
)
logger.debug(f"{log_prefix}: Restoring PostgreSQL database {database['name']}{dry_run_label}")
if dry_run:
return
# Don't give Borg local path so as to error on warnings, as "borg extract" only gives a warning
# if the restore paths don't exist in the archive.
execute_command_with_processes(
restore_command,
[extract_process] if extract_process else [],
output_log_level=logging.DEBUG,
input_file=extract_process.stdout if extract_process else None,
extra_environment=extra_environment,
borg_local_path=location_config.get('local_path', 'borg'),
)
execute_command(analyze_command, extra_environment=extra_environment)

borgmatic/hooks/sqlite.py (new file, 125 lines)

@ -0,0 +1,125 @@
import logging
import os
from borgmatic.execute import execute_command, execute_command_with_processes
from borgmatic.hooks import dump
logger = logging.getLogger(__name__)
def make_dump_path(location_config): # pragma: no cover
'''
Make the dump path from the given location configuration and the name of this hook.
'''
return dump.make_database_dump_path(
location_config.get('borgmatic_source_directory'), 'sqlite_databases'
)
def dump_databases(databases, log_prefix, location_config, dry_run):
'''
Dump the given SQLite3 databases to a file. The databases are supplied as a sequence of
configuration dicts, as per the configuration schema. Use the given log prefix in any log
entries. Use the given location configuration dict to construct the destination path. If this
is a dry run, then don't actually dump anything.
'''
dry_run_label = ' (dry run; not actually dumping anything)' if dry_run else ''
processes = []
logger.info(f'{log_prefix}: Dumping SQLite databases{dry_run_label}')
for database in databases:
database_path = database['path']
if database['name'] == 'all':
logger.warning('The "all" database name has no meaning for SQLite3 databases')
if not os.path.exists(database_path):
logger.warning(
f'{log_prefix}: No SQLite database at {database_path}; an empty database will be created and dumped'
)
dump_path = make_dump_path(location_config)
dump_filename = dump.make_database_dump_filename(dump_path, database['name'])
if os.path.exists(dump_filename):
logger.warning(
f'{log_prefix}: Skipping duplicate dump of SQLite database at {database_path} to {dump_filename}'
)
continue
command = (
'sqlite3',
database_path,
'.dump',
'>',
dump_filename,
)
logger.debug(
f'{log_prefix}: Dumping SQLite database at {database_path} to {dump_filename}{dry_run_label}'
)
if dry_run:
continue
dump.create_parent_directory_for_dump(dump_filename)
processes.append(execute_command(command, shell=True, run_to_completion=False))
return processes
def remove_database_dumps(databases, log_prefix, location_config, dry_run): # pragma: no cover
'''
Remove the given SQLite3 database dumps from the filesystem. The databases are supplied as a
sequence of configuration dicts, as per the configuration schema. Use the given log prefix in
any log entries. Use the given location configuration dict to construct the destination path.
If this is a dry run, then don't actually remove anything.
'''
dump.remove_database_dumps(make_dump_path(location_config), 'SQLite', log_prefix, dry_run)
def make_database_dump_pattern(
databases, log_prefix, location_config, name=None
): # pragma: no cover
'''
Make a pattern that matches the given SQLite3 databases. The databases are supplied as a
sequence of configuration dicts, as per the configuration schema.
'''
return dump.make_database_dump_filename(make_dump_path(location_config), name)
def restore_database_dump(database_config, log_prefix, location_config, dry_run, extract_process):
'''
Restore the given SQLite3 database from an extract stream. The database is supplied as a
one-element sequence containing a dict describing the database, as per the configuration schema.
Use the given log prefix in any log entries. If this is a dry run, then don't actually restore
anything. Trigger the given active extract process (an instance of subprocess.Popen) to produce
output to consume.
'''
dry_run_label = ' (dry run; not actually restoring anything)' if dry_run else ''
if len(database_config) != 1:
raise ValueError('The database configuration value is invalid')
database_path = database_config[0]['path']
logger.debug(f'{log_prefix}: Restoring SQLite database at {database_path}{dry_run_label}')
if dry_run:
return
try:
os.remove(database_path)
logger.warning(f'{log_prefix}: Removed existing SQLite database at {database_path}')
except FileNotFoundError: # pragma: no cover
pass
restore_command = (
'sqlite3',
database_path,
)
# Don't give Borg local path so as to error on warnings, as "borg extract" only gives a warning
# if the restore paths don't exist in the archive.
execute_command_with_processes(
restore_command,
[extract_process],
output_log_level=logging.DEBUG,
input_file=extract_process.stdout,
)


@ -1,4 +1,5 @@
import logging
import logging.handlers
import os
import sys
@ -67,7 +68,7 @@ class Multi_stream_handler(logging.Handler):
def emit(self, record):
'''
Dispatch the log record to the approriate stream handler for the record's log level.
Dispatch the log record to the appropriate stream handler for the record's log level.
'''
self.log_level_to_handler[record.levelno].emit(record)
@ -84,18 +85,19 @@ class Multi_stream_handler(logging.Handler):
handler.setLevel(level)
LOG_LEVEL_TO_COLOR = {
logging.CRITICAL: colorama.Fore.RED,
logging.ERROR: colorama.Fore.RED,
logging.WARN: colorama.Fore.YELLOW,
logging.INFO: colorama.Fore.GREEN,
logging.DEBUG: colorama.Fore.CYAN,
}
class Console_color_formatter(logging.Formatter):
def format(self, record):
color = LOG_LEVEL_TO_COLOR.get(record.levelno)
add_custom_log_levels()
color = {
logging.CRITICAL: colorama.Fore.RED,
logging.ERROR: colorama.Fore.RED,
logging.WARN: colorama.Fore.YELLOW,
logging.ANSWER: colorama.Fore.MAGENTA,
logging.INFO: colorama.Fore.GREEN,
logging.DEBUG: colorama.Fore.CYAN,
}.get(record.levelno)
return color_text(color, record.msg)
@ -106,7 +108,46 @@ def color_text(color, message):
if not color:
return message
return '{}{}{}'.format(color, message, colorama.Style.RESET_ALL)
return f'{color}{message}{colorama.Style.RESET_ALL}'
def add_logging_level(level_name, level_number):
'''
Globally add a custom logging level based on the given (all uppercase) level name and number.
Do this idempotently.
Inspired by https://stackoverflow.com/questions/2183233/how-to-add-a-custom-loglevel-to-pythons-logging-facility/35804945#35804945
'''
method_name = level_name.lower()
if not hasattr(logging, level_name):
logging.addLevelName(level_number, level_name)
setattr(logging, level_name, level_number)
if not hasattr(logging, method_name):
def log_for_level(self, message, *args, **kwargs): # pragma: no cover
if self.isEnabledFor(level_number):
self._log(level_number, message, args, **kwargs)
setattr(logging.getLoggerClass(), method_name, log_for_level)
if not hasattr(logging.getLoggerClass(), method_name):
def log_to_root(message, *args, **kwargs): # pragma: no cover
logging.log(level_number, message, *args, **kwargs)
setattr(logging, method_name, log_to_root)
ANSWER = logging.WARN - 5
def add_custom_log_levels(): # pragma: no cover
'''
Add a custom log level between WARN and INFO for user-requested answers.
'''
add_logging_level('ANSWER', ANSWER)
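A minimal usage sketch (assuming these helpers are imported from borgmatic's logger module): once registered, the custom level slots in at 25, between WARNING (30) and INFO (20).

```python
import logging

add_custom_log_levels()  # registers logging.ANSWER and Logger.answer()

logging.basicConfig(level=logging.ANSWER)
logger = logging.getLogger(__name__)

logger.answer('archive listing goes here')  # emitted at the ANSWER level (25)
logger.info('suppressed at this verbosity')  # INFO (20) falls below the threshold
```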
def configure_logging(
@ -115,6 +156,7 @@ def configure_logging(
log_file_log_level=None,
monitoring_log_level=None,
log_file=None,
log_file_format=None,
):
'''
Configure logging to go to both the console and (syslog or log file). Use the given log levels,
@ -129,6 +171,8 @@ def configure_logging(
if monitoring_log_level is None:
monitoring_log_level = console_log_level
add_custom_log_levels()
# Log certain log levels to console stderr and others to stdout. This supports use cases like
# grepping (non-error) output.
console_error_handler = logging.StreamHandler(sys.stderr)
@ -137,7 +181,8 @@ def configure_logging(
{
logging.CRITICAL: console_error_handler,
logging.ERROR: console_error_handler,
logging.WARN: console_standard_handler,
logging.WARN: console_error_handler,
logging.ANSWER: console_standard_handler,
logging.INFO: console_standard_handler,
logging.DEBUG: console_standard_handler,
}
@ -151,15 +196,23 @@ def configure_logging(
syslog_path = '/dev/log'
elif os.path.exists('/var/run/syslog'):
syslog_path = '/var/run/syslog'
elif os.path.exists('/var/run/log'):
syslog_path = '/var/run/log'
if syslog_path and not interactive_console():
syslog_handler = logging.handlers.SysLogHandler(address=syslog_path)
syslog_handler.setFormatter(logging.Formatter('borgmatic: %(levelname)s %(message)s'))
syslog_handler.setFormatter(
logging.Formatter('borgmatic: {levelname} {message}', style='{') # noqa: FS003
)
syslog_handler.setLevel(syslog_log_level)
handlers = (console_handler, syslog_handler)
elif log_file:
file_handler = logging.handlers.WatchedFileHandler(log_file)
file_handler.setFormatter(logging.Formatter('[%(asctime)s] %(levelname)s: %(message)s'))
file_handler.setFormatter(
logging.Formatter(
log_file_format or '[{asctime}] {levelname}: {message}', style='{' # noqa: FS003
)
)
file_handler.setLevel(log_file_log_level)
handlers = (console_handler, file_handler)
else:


@ -1,23 +1,34 @@
import logging
import os
import signal
import sys
logger = logging.getLogger(__name__)
def _handle_signal(signal_number, frame): # pragma: no cover
EXIT_CODE_FROM_SIGNAL = 128
def handle_signal(signal_number, frame):
'''
Send the signal to all processes in borgmatic's process group, which includes child processes.
'''
# Prevent infinite signal handler recursion. If the parent frame is this very same handler
# function, we know we're recursing.
if frame.f_back.f_code.co_name == _handle_signal.__name__:
if frame.f_back.f_code.co_name == handle_signal.__name__:
return
os.killpg(os.getpgrp(), signal_number)
if signal_number == signal.SIGTERM:
logger.critical('Exiting due to TERM signal')
sys.exit(EXIT_CODE_FROM_SIGNAL + signal.SIGTERM)
def configure_signals(): # pragma: no cover
def configure_signals():
'''
Configure borgmatic's signal handlers to pass relevant signals through to any child processes
like Borg. Note that SIGINT gets passed through even without these changes.
'''
for signal_number in (signal.SIGHUP, signal.SIGTERM, signal.SIGUSR1, signal.SIGUSR2):
signal.signal(signal_number, _handle_signal)
signal.signal(signal_number, handle_signal)
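One detail worth spelling out: the TERM exit code follows the shell convention of 128 plus the signal number, as a quick check confirms:

```python
import signal

EXIT_CODE_FROM_SIGNAL = 128

# SIGTERM is signal 15, so borgmatic exits with 143 on TERM, the same value a
# shell reports for any process killed by that signal.
assert EXIT_CODE_FROM_SIGNAL + signal.SIGTERM == 143
```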


@ -1,7 +1,9 @@
import logging
import borgmatic.logger
VERBOSITY_ERROR = -1
VERBOSITY_WARNING = 0
VERBOSITY_ANSWER = 0
VERBOSITY_SOME = 1
VERBOSITY_LOTS = 2
@ -10,9 +12,11 @@ def verbosity_to_log_level(verbosity):
'''
Given a borgmatic verbosity value, return the corresponding Python log level.
'''
borgmatic.logger.add_custom_log_levels()
return {
VERBOSITY_ERROR: logging.ERROR,
VERBOSITY_WARNING: logging.WARNING,
VERBOSITY_ANSWER: logging.ANSWER,
VERBOSITY_SOME: logging.INFO,
VERBOSITY_LOTS: logging.DEBUG,
}.get(verbosity, logging.WARNING)


@ -1,13 +1,14 @@
FROM python:3.8.1-alpine3.11 as borgmatic
FROM docker.io/alpine:3.17.1 as borgmatic
COPY . /app
RUN apk add --no-cache py3-pip py3-ruamel.yaml py3-ruamel.yaml.clib
RUN pip install --no-cache /app && generate-borgmatic-config && chmod +r /etc/borgmatic/config.yaml
RUN borgmatic --help > /command-line.txt \
&& for action in init prune create check extract mount umount restore list info; do \
&& for action in rcreate transfer create prune compact check extract export-tar mount umount restore rlist list rinfo info break-lock borg; do \
echo -e "\n--------------------------------------------------------------------------------\n" >> /command-line.txt \
&& borgmatic "$action" --help >> /command-line.txt; done
FROM node:13.7.0-alpine as html
FROM docker.io/node:19.5.0-alpine as html
ARG ENVIRONMENT=production
@ -17,6 +18,7 @@ RUN npm install @11ty/eleventy \
@11ty/eleventy-plugin-syntaxhighlight \
@11ty/eleventy-plugin-inclusive-language \
@11ty/eleventy-navigation \
eleventy-plugin-code-clipboard \
markdown-it \
markdown-it-anchor \
markdown-it-replace-link
@ -26,7 +28,7 @@ COPY . /source
RUN NODE_ENV=${ENVIRONMENT} npx eleventy --input=/source/docs --output=/output/docs \
&& mv /output/docs/index.html /output/index.html
FROM nginx:1.16.1-alpine
FROM docker.io/nginx:1.22.1-alpine
COPY --from=html /output /usr/share/nginx/html
COPY --from=borgmatic /etc/borgmatic/config.yaml /usr/share/nginx/html/docs/reference/config.yaml


@ -63,11 +63,6 @@
top: -2px;
bottom: 2px;
}
@media (prefers-color-scheme: dark) {
.inlinelist .inlinelist-item code:before {
border-left-color: rgba(0,0,0,.8);
}
}
}
a.buzzword {
text-decoration: underline;
@ -91,26 +86,9 @@ a.buzzword {
.buzzword {
background-color: #f7f7f7;
}
@media (prefers-color-scheme: dark) {
.buzzword-list li,
.buzzword {
background-color: #080808;
}
}
.inlinelist .inlinelist-item {
background-color: #e9e9e9;
}
@media (prefers-color-scheme: dark) {
.inlinelist .inlinelist-item {
background-color: #000;
}
.inlinelist .inlinelist-item a {
color: #fff;
}
.inlinelist .inlinelist-item code {
color: inherit;
}
}
.inlinelist .inlinelist-item:hover,
.inlinelist .inlinelist-item:focus,
.buzzword-list li:hover,
@ -217,12 +195,6 @@ main p a.buzzword {
height: 1.75em;
font-weight: 600;
}
@media (prefers-color-scheme: dark) {
.numberflag {
background-color: #00bcd4;
color: #222;
}
}
h1 .numberflag,
h2 .numberflag,
h3 .numberflag,
@ -244,11 +216,6 @@ h2 .numberflag:after {
background-color: #fff;
width: calc(100% + 0.4em); /* 16px /40 */
}
@media (prefers-color-scheme: dark) {
h2 .numberflag:after {
background-color: #222;
}
}
/* Super featured list on home page */
.list-superfeatured .avatar {


@ -12,16 +12,6 @@
line-height: 1.285714285714; /* 18px /14 */
font-family: system-ui, -apple-system, sans-serif;
}
@media (prefers-color-scheme: dark) {
.minilink {
background-color: #222;
/*
!important to override .elv-callout a
see _includes/components/callout.css
*/
color: #fff !important;
}
}
table .minilink {
margin-top: 6px;
}
@ -32,12 +22,6 @@ table .minilink {
.minilink[href]:focus {
background-color: #bbb;
}
@media (prefers-color-scheme: dark) {
.minilink[href]:hover,
.minilink[href]:focus {
background-color: #444;
}
}
pre + .minilink {
color: #fff;
border-radius: 0 0 0.2857142857143em 0.2857142857143em; /* 4px /14 */
@ -74,11 +58,6 @@ h4 .minilink {
text-transform: none;
box-shadow: 0 0 0 1px rgba(0,0,0,0.3);
}
@media (prefers-color-scheme: dark) {
.minilink-addedin {
box-shadow: 0 0 0 1px rgba(255,255,255,0.3);
}
}
.minilink-addedin:not(:first-child) {
margin-left: .5em;
}


@ -1,18 +0,0 @@
#suggestion-form textarea {
font-family: sans-serif;
width: 100%;
}
#suggestion-form label {
font-weight: bold;
}
#suggestion-form input[type=email] {
font-size: 16px;
width: 100%;
}
#suggestion-form .form-error {
color: red;
}


@ -1,33 +0,0 @@
<h2>Improve this documentation</h2>
<p>Have an idea on how to make this documentation even better? Send your
feedback below! But if you need help with borgmatic, or have an idea for a
borgmatic feature, please use our <a href="https://torsion.org/borgmatic/#issues">issue
tracker</a> instead.</p>
<form id="suggestion-form">
<div><label for="suggestion">Documentation suggestion</label></div>
<textarea id="suggestion" rows="8" cols="60" name="suggestion"></textarea>
<div data-sk-error="suggestion" class="form-error"></div>
<input id="_page" type="hidden" name="_page">
<input id="_subject" type="hidden" name="_subject" value="borgmatic documentation suggestion">
<br />
<label for="email">Email address</label>
<div><input id="email" type="email" name="email" placeholder="Only required if you want a response!"></div>
<div data-sk-error="email" class="form-error"></div>
<br />
<div><button type="submit">Send</button></div>
<br />
</form>
<script>
document.getElementById('_page').value = window.location.href;
window.sk=window.sk||function(){(sk.q=sk.q||[]).push(arguments)};
sk('form', 'init', {
id: '1d536680ab96',
element: '#suggestion-form'
});
</script>
<script defer src="https://js.statickit.com/statickit.js"></script>


@ -0,0 +1,5 @@
<h2>Improve this documentation</h2>
<p>Have an idea on how to make this documentation even better? Use our <a
href="https://projects.torsion.org/borgmatic-collective/borgmatic/issues">issue tracker</a> to send your
feedback!</p>


@ -79,22 +79,11 @@
border-bottom: 1px solid #ddd;
margin-bottom: 0.25em; /* 4px /16 */
}
@media (prefers-color-scheme: dark) {
.elv-toc-list > li > a {
color: #fff;
border-color: #444;
}
}
/* Active links */
.elv-toc-list li.elv-toc-active > a {
background-color: #dff7ff;
}
@media (prefers-color-scheme: dark) {
.elv-toc-list li.elv-toc-active > a {
background-color: #353535;
}
}
.elv-toc-list ul .elv-toc-active > a:after {
content: "";
}
@ -105,7 +94,7 @@
display: block;
}
/* Footer catgory navigation */
/* Footer category navigation */
.elv-cat-list-active {
font-weight: 600;
}


@ -258,6 +258,7 @@ footer.elv-layout {
/* Header */
.elv-header {
position: relative;
text-align: center;
}
.elv-header-default {
display: flex;
@ -284,11 +285,6 @@ footer.elv-layout {
.elv-hero {
background-color: #222;
}
@media (prefers-color-scheme: dark) {
.elv-hero {
background-color: #292929;
}
}
.elv-hero img,
.elv-hero svg {
width: 42.95774646vh;
@ -529,3 +525,26 @@ main .elv-toc + h1 .direct-link {
display: none ;
}
}
.header-anchor {
text-decoration: none;
}
.header-anchor:hover::after {
content: " 🔗";
}
.mdi {
display: inline-block;
width: 1em;
height: 1em;
background-color: currentColor;
-webkit-mask: no-repeat center / 100%;
mask: no-repeat center / 100%;
-webkit-mask-image: var(--svg);
mask-image: var(--svg);
}
.mdi.mdi-content-copy {
--svg: url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 24 24' width='24' height='24'%3E%3Cpath fill='black' d='M19 21H8V7h11m0-2H8a2 2 0 0 0-2 2v14a2 2 0 0 0 2 2h11a2 2 0 0 0 2-2V7a2 2 0 0 0-2-2m-3-4H4a2 2 0 0 0-2 2v14h2V3h12V1Z'/%3E%3C/svg%3E");
}


@ -3,6 +3,7 @@
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="icon" href="docs/static/borgmatic.png" type="image/x-icon">
<title>{{ subtitle + ' - ' if subtitle}}{{ title }}</title>
{%- set css %}
{% include 'index.css' %}
@ -11,7 +12,6 @@
{% include 'components/minilink.css' %}
{% include 'components/toc.css' %}
{% include 'components/info-blocks.css' %}
{% include 'components/suggestion-form.css' %}
{% include 'prism-theme.css' %}
{% include 'asciinema.css' %}
{% endset %}
@ -23,6 +23,6 @@
<body>
{{ content | safe }}
{% initClipboardJS %}
</body>
</html>


@ -28,5 +28,5 @@ headerClass: elv-header-default
{{ content | safe }}
{% include 'components/suggestion-form.html' %}
{% include 'components/suggestion-link.html' %}
</main>


@ -1,17 +1,18 @@
---
title: How to add preparation and cleanup steps to backups
eleventyNavigation:
key: Add preparation and cleanup steps
key: 🧹 Add preparation and cleanup steps
parent: How-to guides
order: 8
order: 9
---
## Preparation and cleanup hooks
If you find yourself performing prepraration tasks before your backup runs, or
If you find yourself performing preparation tasks before your backup runs, or
cleanup work afterwards, borgmatic hooks may be of interest. Hooks are shell
commands that borgmatic executes for you at various points, and they're
configured in the `hooks` section of your configuration file. But if you're
looking to backup a database, it's probably easier to use the [database backup
commands that borgmatic executes for you at various points as it runs, and
they're configured in the `hooks` section of your configuration file. But if
you're looking to backup a database, it's probably easier to use the [database
backup
feature](https://torsion.org/borgmatic/docs/how-to/backup-your-databases/)
instead.
@ -27,15 +28,55 @@ hooks:
- umount /some/filesystem
```
The `before_backup` and `after_backup` hooks each run once per configuration
file. `before_backup` hooks run prior to backups of all repositories in a
configuration file, right before the `create` action. `after_backup` hooks run
afterwards, but not if an error occurs in a previous hook or in the backups
themselves.
<span class="minilink minilink-addedin">New in version 1.6.0</span> The
`before_backup` and `after_backup` hooks each run once per repository in a
configuration file. `before_backup` hooks run right before the `create`
action for a particular repository, and `after_backup` hooks run afterwards,
but not if an error occurs in a previous hook or in the backups themselves.
(Prior to borgmatic 1.6.0, these hooks instead ran once per configuration file
rather than once per repository.)
There are additional hooks for the `prune` and `check` actions as well.
`before_prune` and `after_prune` run if there are any `prune` actions, while
`before_check` and `after_check` run if there are any `check` actions.
There are additional hooks that run before/after other actions as well. For
instance, `before_prune` runs before a `prune` action for a repository, while
`after_prune` runs after it.
<span class="minilink minilink-addedin">New in version 1.7.0</span> The
`before_actions` and `after_actions` hooks run before/after all the actions
(like `create`, `prune`, etc.) for each repository. These hooks are a good
place to run per-repository steps like mounting/unmounting a remote
filesystem.
## Variable interpolation
The before and after action hooks support interpolating particular runtime
variables into the hook command. Here's an example that assumes you provide a
separate shell script:
```yaml
hooks:
after_prune:
- record-prune.sh "{configuration_filename}" "{repository}"
```
In this example, when the hook is triggered, borgmatic interpolates runtime
values into the hook command: the borgmatic configuration filename and the
path of the current Borg repository. Here's the full set of supported
variables you can use here:
* `configuration_filename`: borgmatic configuration filename in which the
hook was defined
* `log_file`
<span class="minilink minilink-addedin">New in version 1.7.12</span>:
path of the borgmatic log file, only set when the `--log-file` flag is used
* `repository`: path of the current repository as configured in the current
borgmatic configuration file
Note that you can also interpolate in [arbitrary environment
variables](https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/).
## Global hooks
You can also use `before_everything` and `after_everything` hooks to perform
global setup or cleanup:
@ -58,6 +99,8 @@ but only if there is a `create` action. It runs even if an error occurs during
a backup or a backup hook, but not if an error occurs during a
`before_everything` hook.
## Error hooks
borgmatic also runs `on_error` hooks if an error occurs, either when creating
a backup or running a backup hook. See the [monitoring and alerting
documentation](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/)


@ -1,9 +1,9 @@
---
title: How to backup to a removable drive or an intermittent server
eleventyNavigation:
key: Backup to a removable drive or server
key: 💾 Backup to a removable drive/server
parent: How-to guides
order: 9
order: 10
---
## Occasional backups
@ -16,9 +16,14 @@ But if you run borgmatic and your hard drive isn't plugged in, or your buddy's
server is offline, then you'll get an annoying error message and the overall
borgmatic run will fail (even if individual repositories still complete).
Another variant is when the source machine is only sometimes available for
backups, e.g. a laptop where you want to skip backups when the battery falls
below a certain level.
So what if you want borgmatic to swallow the error of a missing drive
or an offline server, and continue trucking along? That's where the concept of
"soft failure" come in.
or an offline server or a low battery—and exit gracefully? That's where the
concept of "soft failure" come in.
## Soft failure command hooks
@ -44,9 +49,12 @@ location:
- /home
repositories:
- /mnt/removable/backup.borg
- path: /mnt/removable/backup.borg
```
<span class="minilink minilink-addedin">Prior to version 1.7.10</span> Omit
the `path:` portion of the `repositories` list.
Then, write a `before_backup` hook in that same configuration file that uses
the external `findmnt` utility to see whether the drive is mounted before
proceeding.
@ -63,6 +71,9 @@ borgmatic. borgmatic logs the soft failure, skips all further actions in that
configuration file, and proceeds onward to any other borgmatic configuration
files you may have.
Note that `before_backup` only runs on the `create` action. See below about
optionally using `before_actions` instead.
You can imagine a similar check for the sometimes-online server case:
```yaml
@ -71,13 +82,33 @@ location:
- /home
repositories:
- me@buddys-server.org:backup.borg
- path: ssh://me@buddys-server.org/./backup.borg
hooks:
before_backup:
- ping -q -c 1 buddys-server.org > /dev/null || exit 75
```
<span class="minilink minilink-addedin">Prior to version 1.7.10</span> Omit
the `path:` portion of the `repositories` list.
Or to only run backups if the battery level is high enough:
```yaml
hooks:
before_backup:
- is_battery_percent_at_least.sh 25
```
(Writing the battery script is left as an exercise to the reader.)
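That said, here's one possible minimal sketch, assuming a Linux system that exposes the battery level at `/sys/class/power_supply/BAT0/capacity` (the script name and path are illustrative, not part of borgmatic):

```python
#!/usr/bin/env python3
# Hypothetical is_battery_percent_at_least script: exit with borgmatic's
# soft failure code (75) when the battery is below the given percentage.
import sys

SOFT_FAILURE_EXIT_CODE = 75

threshold = int(sys.argv[1])

with open('/sys/class/power_supply/BAT0/capacity') as capacity_file:
    capacity = int(capacity_file.read().strip())

sys.exit(0 if capacity >= threshold else SOFT_FAILURE_EXIT_CODE)
```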
<span class="minilink minilink-addedin">New in version 1.7.0</span> The
`before_actions` and `after_actions` hooks run before/after all the actions
(like `create`, `prune`, etc.) for each repository. So if you'd like your soft
failure command hook to run regardless of action, consider using
`before_actions` instead of `before_backup`.
## Caveats and details
There are some caveats you should be aware of with this feature.
@ -85,8 +116,8 @@ There are some caveats you should be aware of with this feature.
* You'll generally want to put a soft failure command in the `before_backup`
hook, so as to gate whether the backup action occurs. While a soft failure is
also supported in the `after_backup` hook, returning a soft failure there
won't prevent any actions from occuring, because they've already occurred!
Similiarly, you can return a soft failure from an `on_error` hook, but at
won't prevent any actions from occurring, because they've already occurred!
Similarly, you can return a soft failure from an `on_error` hook, but at
that point it's too late to prevent the error.
* Returning a soft failure does prevent further commands in the same hook from
executing. So, like a standard error, it is an "early out". Unlike a standard
@ -99,6 +130,6 @@ There are some caveats you should be aware of with this feature.
* The soft failure doesn't have to apply to a repository. You can even perform
a test to make sure that individual source directories are mounted and
available. Use your imagination!
* The soft failure feature also works for `before_prune`, `after_prune`,
`before_check`, and `after_check` hooks. But it is not implemented for
`before_everything` or `after_everything`.
* The soft failure feature also works for before/after hooks for other
actions as well. But it is not implemented for `before_everything` or
`after_everything`.

View File

@ -1,9 +1,9 @@
---
title: How to backup your databases
eleventyNavigation:
key: Backup your databases
key: 🗄️ Backup your databases
parent: How-to guides
order: 7
order: 8
---
## Database dump hooks
@ -15,7 +15,7 @@ consistent snapshot that is more suited for backups.
Fortunately, borgmatic includes built-in support for creating database dumps
prior to running backups. For example, here is everything you need to dump and
backup a couple of local PostgreSQL databases and a MySQL/MariaDB database:
backup a couple of local PostgreSQL databases and a MySQL/MariaDB database.
```yaml
hooks:
@ -26,10 +26,31 @@ hooks:
- name: posts
```
<span class="minilink minilink-addedin">New in version 1.5.22</span> You can
also dump MongoDB databases. For example:
```yaml
hooks:
    mongodb_databases:
        - name: messages
```
<span class="minilink minilink-addedin">New in version 1.7.9</span>
Additionally, you can dump SQLite databases. For example:
```yaml
hooks:
    sqlite_databases:
        - name: mydb
          path: /var/lib/sqlite3/mydb.sqlite
```
As part of each backup, borgmatic streams a database dump for each configured
database directly to Borg, so it's included in the backup without consuming
additional disk space. (The one exception is PostgreSQL's "directory" dump
format, which can't stream and therefore does consume temporary disk space.)
additional disk space. (The exceptions are the PostgreSQL/MongoDB "directory"
dump formats, which can't stream and therefore do consume temporary disk
space. Additionally, prior to borgmatic 1.5.3, all database dumps consumed
temporary disk space.)
To support this, borgmatic creates temporary named pipes in `~/.borgmatic` by
default. To customize this path, set the `borgmatic_source_directory` option
in the `location` section of your configuration file.
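For instance, a minimal sketch (the path here is just an example):

```yaml
location:
    borgmatic_source_directory: /tmp/borgmatic
```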
@ -47,6 +68,8 @@ hooks:
    postgresql_databases:
        - name: users
          hostname: database1.example.org
        - name: orders
          hostname: database2.example.org
          port: 5433
          username: postgres
          password: trustsome1
@ -54,13 +77,32 @@ hooks:
options: "--role=someone"
mysql_databases:
- name: posts
hostname: database2.example.org
hostname: database3.example.org
port: 3307
username: root
password: trustsome1
options: "--skip-comments"
mongodb_databases:
- name: messages
hostname: database4.example.org
port: 27018
username: dbuser
password: trustsome1
authentication_database: mongousers
options: "--ssl"
sqlite_databases:
- name: mydb
path: /var/lib/sqlite3/mydb.sqlite
```
See your [borgmatic configuration
file](https://torsion.org/borgmatic/docs/reference/configuration/) for
additional customization of the options passed to database commands (when
listing databases, restoring databases, etc.).
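For instance, here's a sketch of per-database option overrides. The
`list_options` and `restore_options` names here come from the generated
configuration file's database hook schema, so double-check them against your
borgmatic version:

```yaml
hooks:
    postgresql_databases:
        - name: users
          # Passed to pg_dump when dumping.
          options: "--no-owner"
          # Passed to psql when listing databases.
          list_options: "--quiet"
          # Passed to pg_restore/psql when restoring.
          restore_options: "--jobs=4"
```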
### All databases
If you want to dump all databases on a host, use `all` for the database name:
```yaml
@ -69,13 +111,86 @@ hooks:
        - name: all
    mysql_databases:
        - name: all
    mongodb_databases:
        - name: all
```
Note that you may need to use a `username` of the `postgres` superuser for
this to work with PostgreSQL.
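For instance, a minimal sketch:

```yaml
hooks:
    postgresql_databases:
        - name: all
          username: postgres
```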
If you would like to backup databases only and not source directories, you can
specify an empty `source_directories` value because it is a mandatory field:
The SQLite hook in particular does not consider "all" a special database name.
<span class="minilink minilink-addedin">New in version 1.7.6</span> With
PostgreSQL and MySQL, you can optionally dump "all" databases to separate
files instead of one combined dump file, allowing more convenient restores of
individual databases. Enable this by specifying your desired database dump
`format`:
```yaml
hooks:
    postgresql_databases:
        - name: all
          format: custom
    mysql_databases:
        - name: all
          format: sql
```
### Containers
If your database is running within a Docker container and borgmatic is too, no
problem—simply configure borgmatic to connect to the container's name on its
exposed port. For instance:
```yaml
hooks:
    postgresql_databases:
        - name: users
          hostname: your-database-container-name
          port: 5433
          username: postgres
          password: trustsome1
```
But what if borgmatic is running on the host? You can still connect to a
database container if its ports are properly exposed to the host. For
instance, when running the database container with Docker, you can specify
`--publish 127.0.0.1:5433:5432` so that it exposes the container's port 5432
to port 5433 on the host (only reachable on localhost, in this case). Or the
same thing with Docker Compose:
```yaml
services:
    your-database-container-name:
        image: postgres
        ports:
            - 127.0.0.1:5433:5432
```
And then you can connect to the database from borgmatic running on the host:
```yaml
hooks:
    postgresql_databases:
        - name: users
          hostname: 127.0.0.1
          port: 5433
          username: postgres
          password: trustsome1
```
Of course, alter the ports in these examples to suit your particular database
system.
### No source directories
<span class="minilink minilink-addedin">New in version 1.7.1</span> If you
would like to backup databases only and not source directories, you can omit
`source_directories` entirely.
In older versions of borgmatic, instead specify an empty `source_directories`
value, as it is a mandatory option prior to version 1.7.1:
```yaml
location:
@ -86,6 +201,15 @@ hooks:
```
### External passwords
If you don't want to keep your database passwords in your borgmatic
configuration file, you can instead pass them in via [environment
variables](https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/)
or command-line [configuration
overrides](https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/#configuration-overrides).
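For instance, here's a sketch that reads the password from an environment
variable at runtime, assuming you've exported `POSTGRESQL_PASSWORD` into
borgmatic's environment:

```yaml
hooks:
    postgresql_databases:
        - name: users
          password: ${POSTGRESQL_PASSWORD}
```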
### Configuration backups
An important note about this database configuration: You'll need the
@ -97,38 +221,37 @@ bring back any missing configuration files in order to restore a database.
## Supported databases
As of now, borgmatic supports PostgreSQL and MySQL/MariaDB databases
directly. But see below about general-purpose preparation and cleanup hooks as
a work-around with other database systems. Also, please [file a
ticket](https://torsion.org/borgmatic/#issues) for additional database systems
that you'd like supported.
As of now, borgmatic supports PostgreSQL, MySQL/MariaDB, MongoDB, and SQLite
databases directly. But see below about general-purpose preparation and
cleanup hooks as a work-around with other database systems. Also, please [file
a ticket](https://torsion.org/borgmatic/#issues) for additional database
systems that you'd like supported.
## Database restoration
To restore a database dump from an archive, use the `borgmatic restore`
action. But the first step is to figure out which archive to restore from. A
good way to do that is to use the `list` action:
good way to do that is to use the `rlist` action:
```bash
borgmatic list
borgmatic rlist
```
(No borgmatic `list` action? Try the old-style `--list`, or upgrade
borgmatic!)
(No borgmatic `rlist` action? Try `list` instead or upgrade borgmatic!)
That should yield output looking something like:
```text
host-2019-01-01T04:05:06.070809 Tue, 2019-01-01 04:05:06 [...]
host-2019-01-02T04:06:07.080910 Wed, 2019-01-02 04:06:07 [...]
host-2023-01-01T04:05:06.070809 Tue, 2023-01-01 04:05:06 [...]
host-2023-01-02T04:06:07.080910 Wed, 2023-01-02 04:06:07 [...]
```
Assuming that you want to restore all database dumps from the archive with the
most up-to-date files and therefore the latest timestamp, run a command like:
```bash
borgmatic restore --archive host-2019-01-02T04:06:07.080910
borgmatic restore --archive host-2023-01-02T04:06:07.080910
```
(No borgmatic `restore` action? Upgrade borgmatic!)
@ -154,10 +277,11 @@ If you have a single repository in your borgmatic configuration file(s), no
problem: the `restore` action figures out which repository to use.
But if you have multiple repositories configured, then you'll need to specify
the repository path containing the archive to restore. Here's an example:
the repository to use via the `--repository` flag. This can be done either
with the repository's path or its label as configured in your borgmatic configuration file.
```bash
borgmatic restore --repository repo.borg --archive host-2019-...
borgmatic restore --repository repo.borg --archive host-2023-...
```
### Restore particular databases
@ -167,9 +291,39 @@ restore one of them, use the `--database` flag to select one or more
databases. For instance:
```bash
borgmatic restore --archive host-2019-... --database users
borgmatic restore --archive host-2023-... --database users
```
<span class="minilink minilink-addedin">New in version 1.7.6</span> You can
also restore individual databases even if you dumped them as "all"—as long as
you dumped them into separate files via use of the "format" option. See above
for more information.
### Restore all databases
To restore all databases:
```bash
borgmatic restore --archive host-2023-... --database all
```
Or omit the `--database` flag entirely:
```bash
borgmatic restore --archive host-2023-...
```
Prior to borgmatic version 1.7.6, this restores a combined "all" database
dump from the archive.
<span class="minilink minilink-addedin">New in version 1.7.6</span> Restoring
"all" databases restores each database found in the selected archive. That
includes any combined dump file named "all" and any other individual database
dumps found in the archive.
### Limitations
There are a few important limitations with borgmatic's current database
@ -185,19 +339,34 @@ backups to avoid getting caught without a way to restore a database.
databases that share the exact same name on different hosts.
4. Because database hooks implicitly enable the `read_special` configuration
setting to support dump and restore streaming, you'll need to ensure that any
special files are excluded from backups (named pipes, block devices, and
character devices) to prevent hanging. Try a command like `find / -type c,b,p`
to find such files. Common directories to exclude are `/dev` and `/run`, but
that may not be exhaustive.
special files are excluded from backups (named pipes, block devices,
character devices, and sockets) to prevent hanging. Try a command like
`find /your/source/path -type b -or -type c -or -type p -or -type s` to find
such files. Common directories to exclude are `/dev` and `/run`, but that may
not be exhaustive. <span class="minilink minilink-addedin">New in version
1.7.3</span> When database hooks are enabled, borgmatic automatically excludes
special files that may cause Borg to hang, so you no longer need to manually
exclude them. (This includes symlinks with special files as a destination.) You
can override/prevent this behavior by explicitly setting `read_special` to true.
### Manual restoration
If you prefer to restore a database without the help of borgmatic, first
[extract](https://torsion.org/borgmatic/docs/how-to/extract-a-backup/) an
archive containing a database dump, and then manually restore the dump file
found within the extracted `~/.borgmatic/` path (e.g. with `pg_restore` or
`mysql` commands).
archive containing a database dump.
borgmatic extracts the dump file into the *`username`*`/.borgmatic/` directory
within the extraction destination path, where *`username`* is the user that
created the backup. For example, if you created the backup with the `root`
user and you're extracting to `/tmp`, then the dump will be in
`/tmp/root/.borgmatic`.
After extraction, you can manually restore the dump file using native database
commands like `pg_restore`, `mysql`, `mongorestore`, `sqlite3`, or similar.
Also see the documentation on [listing database
dumps](https://torsion.org/borgmatic/docs/how-to/inspect-your-backups/#listing-database-dumps).
## Preparation and cleanup hooks
@ -212,6 +381,23 @@ dumps with any database system.
## Troubleshooting
### PostgreSQL/MySQL authentication errors
With PostgreSQL and MySQL/MariaDB, if you're getting authentication errors
when borgmatic tries to connect to your database, a natural reaction is to
increase your borgmatic verbosity with `--verbosity 2` and go looking in the
logs. You'll notice, however, that your database password does not show up in
the logs. Unless you mistyped your password, that's probably not the cause of
the authentication problem; borgmatic passes your password to the database via
an environment variable that never appears in the logs.
The cause of an authentication error is often on the database side—in the
configuration of which users are allowed to connect and how they are
authenticated. For instance, with PostgreSQL, check your
[pg_hba.conf](https://www.postgresql.org/docs/current/auth-pg-hba-conf.html)
file for that configuration.
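As one hedged example, a `pg_hba.conf` line permitting password-authenticated
TCP connections from localhost might look like this (adjust to your own
security requirements):

```text
# TYPE  DATABASE  USER  ADDRESS       METHOD
host    all       all   127.0.0.1/32  md5
```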
### MySQL table lock errors
If you encounter table lock errors during a database dump with MySQL/MariaDB,
@ -230,5 +416,14 @@ hooks:
### borgmatic hangs during backup
See Limitations above about `read_special`. You may need to exclude certain
paths with named pipes, block devices, or character devices on which borgmatic
is hanging.
paths with named pipes, block devices, character devices, or sockets on which
borgmatic is hanging.
Alternatively, if excluding special files is too onerous, you can create two
separate borgmatic configuration files—one for your source files and a
separate one for backing up databases. That way, the database `read_special`
option will not be active when backing up special files.
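For instance, assuming hypothetical file names, you could run each
configuration separately:

```bash
# Backs up ordinary files; read_special stays off here.
borgmatic --config /etc/borgmatic.d/files.yaml

# Backs up databases only; database hooks enable read_special here.
borgmatic --config /etc/borgmatic.d/databases.yaml
```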
<span class="minilink minilink-addedin">New in version 1.7.3</span> See
Limitations above about borgmatic's automatic exclusion of special files to
prevent Borg hangs.

View File

@ -1,86 +1,164 @@
---
title: How to deal with very large backups
eleventyNavigation:
key: Deal with very large backups
key: 📏 Deal with very large backups
parent: How-to guides
order: 3
order: 4
---
## Biggish data
Borg itself is great for efficiently de-duplicating data across successive
backup archives, even when dealing with very large repositories. But you may
find that while borgmatic's default mode of "prune, create, and check" works
well on small repositories, it's not so great on larger ones. That's because
running the default pruning and consistency checks take a long time on large
repositories.
find that while borgmatic's default actions of `create`, `prune`, `compact`,
and `check` work well on small repositories, they're not so great on larger
ones. That's because running the default pruning, compaction, and consistency
checks takes a long time on large repositories.
<span class="minilink minilink-addedin">Prior to version 1.7.9</span> The
default action ordering was `prune`, `compact`, `create`, and `check`.
### A la carte actions
If you find yourself in this situation, you have some options. First, you can
run borgmatic's pruning, creating, or checking actions separately. For
instance, the following optional actions are available:
If you find yourself wanting to customize the actions, you have some options.
First, you can run borgmatic's `prune`, `compact`, `create`, or `check`
actions separately. For instance, the following optional actions are
available (among others):
```bash
borgmatic prune
borgmatic create
borgmatic prune
borgmatic compact
borgmatic check
```
(No borgmatic `prune`, `create`, or `check` actions? Try the old-style
`--prune`, `--create`, or `--check`. Or upgrade borgmatic!)
You can run with only one of these actions provided, or you can mix and match
any number of them in a single borgmatic run. This supports approaches like
skipping certain actions while running others. For instance, this skips
`prune` and only runs `create` and `check`:
You can run borgmatic with only one of these actions provided, or you can mix
and match any number of them in a single borgmatic run. This supports
approaches like skipping certain actions while running others. For instance,
this skips `prune` and `compact` and only runs `create` and `check`:
```bash
borgmatic create check
```
Or, you can make backups with `create` on a frequent schedule (e.g. with
`borgmatic create` called from one cron job), while only running expensive
consistency checks with `check` on a much less frequent basis (e.g. with
`borgmatic check` called from a separate cron job).
<span class="minilink minilink-addedin">New in version 1.7.9</span> borgmatic
now respects your specified command-line action order, running actions in the
order you specify. In previous versions, borgmatic ran your specified actions
in a fixed ordering regardless of the order they appeared on the command-line.
But instead of running actions together, another option is to run backups with
`create` on a frequent schedule (e.g. with `borgmatic create` called from one
cron job), while only running expensive consistency checks with `check` on a
much less frequent basis (e.g. with `borgmatic check` called from a separate
cron job).
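For instance, a sketch of system crontab entries (the schedules and the use of
`/etc/crontab` are just examples):

```bash
# /etc/crontab sketch: hourly backups, weekly consistency checks.
0 * * * *  root  borgmatic create
30 3 * * 0 root  borgmatic check
```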
### Consistency check configuration
Another option is to customize your consistency checks. The default
consistency checks run both full-repository checks and per-archive checks
within each repository.
Another option is to customize your consistency checks. By default, if you
omit consistency checks from configuration, borgmatic runs full-repository
checks (`repository`) and per-archive checks (`archives`) within each
repository. (Although see below about check frequency.) This is equivalent to
what `borg check` does if run without options.
But if you find that archive checks are too slow, for example, you can
configure borgmatic to run repository checks only. Configure this in the
`consistency` section of borgmatic configuration:
```yaml
consistency:
    checks:
        - name: repository
```
<span class="minilink minilink-addedin">Prior to version 1.6.2</span> The
`checks` option was a plain list of strings without the `name:` part, and
borgmatic ran each configured check every time checks were run. For example:
```yaml
consistency:
    checks:
        - repository
```
Here are the available checks from fastest to slowest:
* `repository`: Checks the consistency of the repository itself.
* `archives`: Checks all of the archives in the repository.
* `extract`: Performs an extraction dry-run of the most recent archive.
* `data`: Verifies the data integrity of all archives contents, decrypting and decompressing all data (implies `archives` as well).
* `data`: Verifies the data integrity of all archives contents, decrypting and decompressing all data.
See [Borg's check documentation](https://borgbackup.readthedocs.io/en/stable/usage/check.html) for more information.
Note that the `data` check is a more thorough version of the `archives` check,
so enabling the `data` check implicitly enables the `archives` check as well.
See [Borg's check
documentation](https://borgbackup.readthedocs.io/en/stable/usage/check.html)
for more information.
### Check frequency
<span class="minilink minilink-addedin">New in version 1.6.2</span> You can
optionally configure checks to run on a periodic basis rather than every time
borgmatic runs checks. For instance:
```yaml
consistency:
    checks:
        - name: repository
          frequency: 2 weeks
        - name: archives
          frequency: 1 month
```
This tells borgmatic to run the `repository` consistency check at most once
every two weeks for a given repository and the `archives` check at most once a
month. The `frequency` value is a number followed by a unit of time, e.g. "3
days", "1 week", "2 months", etc.
The `frequency` defaults to `always` for a check configured without a
`frequency`, which means run this check every time checks run. But if you omit
consistency checks from configuration entirely, borgmatic runs full-repository
checks (`repository`) and per-archive checks (`archives`) within each
repository, at most once a month.
Unlike a real scheduler like cron, borgmatic only makes a best effort to run
checks on the configured frequency. It compares that frequency with how long
it's been since the last check for a given repository (as recorded in a file
within `~/.borgmatic/checks`). If it hasn't been long enough, the check is
skipped. And you still have to run `borgmatic check` (or `borgmatic` without
actions) in order for checks to run, even when a `frequency` is configured!
This also applies *across* configuration files that have the same repository
configured. Make sure you have the same check frequency configured in each
though—or the most frequently configured check will apply.
If you want to temporarily ignore your configured frequencies, you can invoke
`borgmatic check --force` to run checks unconditionally.
### Disabling checks
If that's still too slow, you can disable consistency checks entirely,
either for a single repository or for all repositories.
Disabling all consistency checks looks like this:
```yaml
consistency:
    checks:
        - name: disabled
```
<span class="minilink minilink-addedin">Prior to version 1.6.2</span> `checks`
was a plain list of strings without the `name:` part. For instance:
```yaml
consistency:
    checks:
        - disabled
```
Or, if you have multiple repositories in your borgmatic configuration file,
If you have multiple repositories in your borgmatic configuration file,
you can keep running consistency checks, but only against a subset of the
repositories:
@ -98,7 +176,8 @@ borgmatic check --only data --only extract
```
This is useful for running slow consistency checks on an infrequent basis,
separate from your regular checks.
separate from your regular checks. It is still subject to any configured
check frequencies unless the `--force` flag is used.
## Troubleshooting

View File

@ -1,32 +1,32 @@
---
title: How to develop on borgmatic
eleventyNavigation:
key: Develop on borgmatic
key: 🏗️ Develop on borgmatic
parent: How-to guides
order: 11
order: 13
---
## Source code
To get set up to hack on borgmatic, first clone master via HTTPS or SSH:
```bash
git clone https://projects.torsion.org/witten/borgmatic.git
git clone https://projects.torsion.org/borgmatic-collective/borgmatic.git
```
Or:
```bash
git clone ssh://git@projects.torsion.org:3022/witten/borgmatic.git
git clone ssh://git@projects.torsion.org:3022/borgmatic-collective/borgmatic.git
```
Then, install borgmatic
"[editable](https://pip.pypa.io/en/stable/reference/pip_install/#editable-installs)"
"[editable](https://pip.pypa.io/en/stable/cli/pip_install/#editable-installs)"
so that you can run borgmatic commands while you're hacking on them to
make sure your changes work.
```bash
cd borgmatic/
pip3 install --editable --user .
cd borgmatic
pip3 install --user --editable .
```
Note that this will typically install the borgmatic commands into
@ -51,7 +51,6 @@ pip3 install --user tox
Finally, to actually run tests, run:
```bash
cd borgmatic
tox
```
@ -66,8 +65,6 @@ following:
tox -e black
```
Note that Black requires at minimum Python 3.6.
And if you get a complaint from the
[isort](https://github.com/timothycrosley/isort) Python import orderer, you
can ask isort to order your imports for you:
@ -76,6 +73,15 @@ can ask isort to order your imports for you:
tox -e isort
```
Similarly, if you get errors about spelling mistakes in source code, you can
ask [codespell](https://github.com/codespell-project/codespell) to correct
them:
```bash
tox -e codespell
```
### End-to-end tests
borgmatic additionally includes some end-to-end tests that integration test
@ -89,12 +95,36 @@ If you would like to run the full test suite, first install Docker and [Docker
Compose](https://docs.docker.com/compose/install/). Then run:
```bash
scripts/run-full-dev-tests
scripts/run-end-to-end-dev-tests
```
Note that this script assumes you have permission to run Docker. If you
don't, then you may need to run with `sudo`.
#### Podman
<span class="minilink minilink-addedin">New in version 1.7.12</span>
borgmatic's end-to-end tests optionally support using
[rootless](https://github.com/containers/podman/blob/main/docs/tutorials/rootless_tutorial.md)
[Podman](https://podman.io/) instead of Docker.
Setting up Podman is outside the scope of this documentation, but here are
some key points to double-check:
* Install Podman along with `podman-docker` and your desired networking
support.
* Configure `/etc/subuid` and `/etc/subgid` to map users/groups for the
non-root user who will run tests.
* Create a non-root Podman socket for that user:
```bash
systemctl --user enable --now podman.socket
```
Then you'll be able to run end-to-end tests as per normal, and the test script
will automatically use your non-root Podman socket instead of a Docker socket.
## Code style
Start with [PEP 8](https://www.python.org/dev/peps/pep-0008/). But then, apply
@ -103,10 +133,10 @@ the following deviations from it:
* For strings, prefer single quotes over double quotes.
* Limit all lines to a maximum of 100 characters.
* Use trailing commas within multiline values or argument lists.
* For multiline constructs, put opening and closing delimeters on lines
* For multiline constructs, put opening and closing delimiters on lines
separate from their contents.
* Within multiline constructs, use standard four-space indentation. Don't align
indentation with an opening delimeter.
indentation with an opening delimiter.
borgmatic code uses the [Black](https://black.readthedocs.io/en/stable/) code
formatter, the [Flake8](http://flake8.pycqa.org/en/latest/) code checker, and
@ -118,7 +148,7 @@ See the Black, Flake8, and isort documentation for more information.
Each pull request triggers a continuous integration build which runs the test
suite. You can view these builds on
[build.torsion.org](https://build.torsion.org/witten/borgmatic), and they're
[build.torsion.org](https://build.torsion.org/borgmatic-collective/borgmatic), and they're
also linked from the commits list on each pull request.
## Documentation development
@ -143,3 +173,15 @@ http://localhost:8080 to view the documentation with your changes.
To close the documentation server, ctrl-C the script. Note that it does not
currently auto-reload, so you'll need to stop it and re-run it for any
additional documentation changes to take effect.
#### Podman
<span class="minilink minilink-addedin">New in version 1.7.12</span>
borgmatic's developer build for documentation optionally supports using
[rootless](https://github.com/containers/podman/blob/main/docs/tutorials/rootless_tutorial.md)
[Podman](https://podman.io/) instead of Docker.
Setting up Podman is outside the scope of this documentation. But once you
install `podman-docker`, then `scripts/dev-docs` should automatically use
Podman instead of Docker.

View File

@ -1,41 +1,39 @@
---
title: How to extract a backup
eleventyNavigation:
key: Extract a backup
key: 📤 Extract a backup
parent: How-to guides
order: 6
order: 7
---
## Extract
When the worst happens—or you want to test your backups—the first step is
to figure out which archive to extract. A good way to do that is to use the
`list` action:
`rlist` action:
```bash
borgmatic list
borgmatic rlist
```
(No borgmatic `list` action? Try the old-style `--list`, or upgrade
borgmatic!)
(No borgmatic `rlist` action? Try `list` instead or upgrade borgmatic!)
That should yield output looking something like:
```text
host-2019-01-01T04:05:06.070809 Tue, 2019-01-01 04:05:06 [...]
host-2019-01-02T04:06:07.080910 Wed, 2019-01-02 04:06:07 [...]
host-2023-01-01T04:05:06.070809 Tue, 2023-01-01 04:05:06 [...]
host-2023-01-02T04:06:07.080910 Wed, 2023-01-02 04:06:07 [...]
```
Assuming that you want to extract the archive with the most up-to-date files
and therefore the latest timestamp, run a command like:
```bash
borgmatic extract --archive host-2019-01-02T04:06:07.080910
borgmatic extract --archive host-2023-01-02T04:06:07.080910
```
(No borgmatic `extract` action? Try the old-style `--extract`, or upgrade
borgmatic!)
(No borgmatic `extract` action? Upgrade borgmatic!)
With newer versions of borgmatic, you can simplify this to:
Or simplify this to:
```bash
borgmatic extract --archive latest
@ -43,7 +41,8 @@ borgmatic extract --archive latest
The `--archive` value is the name of the archive to extract. This extracts the
entire contents of the archive to the current directory, so make sure you're
in the right place before running the command.
in the right place before running the command—or see below about the
`--destination` flag.
## Repository selection
@ -52,10 +51,11 @@ If you have a single repository in your borgmatic configuration file(s), no
problem: the `extract` action figures out which repository to use.
But if you have multiple repositories configured, then you'll need to specify
the repository path containing the archive to extract. Here's an example:
the repository to use via the `--repository` flag. This can be done either
with the repository's path or its label as configured in your borgmatic configuration file.
```bash
borgmatic extract --repository repo.borg --archive host-2019-...
borgmatic extract --repository repo.borg --archive host-2023-...
```
## Extract particular files
@ -65,13 +65,22 @@ everything from an archive. To do that, tack on one or more `--path` values.
For instance:
```bash
borgmatic extract --archive host-2019-... --path path/1 path/2
borgmatic extract --archive latest --path path/1 path/2
```
Note that the specified restore paths should not have a leading slash. Like a
whole-archive extract, this also extracts into the current directory. So for
example, if you happen to be in the directory `/var` and you run the `extract`
command above, borgmatic will extract `/var/path/1` and `/var/path/2`.
whole-archive extract, this also extracts into the current directory by
default. So for example, if you happen to be in the directory `/var` and you
run the `extract` command above, borgmatic will extract `/var/path/1` and
`/var/path/2`.
### Searching for files
If you're not sure which archive contains the files you're looking for, you
can [search across
archives](https://torsion.org/borgmatic/docs/how-to/inspect-your-backups/#searching-for-a-file).
## Extract to a particular destination
@ -80,7 +89,7 @@ extract files to a particular destination directory, use the `--destination`
flag:
```bash
borgmatic extract --archive host-2019-... --destination /tmp
borgmatic extract --archive latest --destination /tmp
```
When using the `--destination` flag, be careful not to overwrite your system's
@ -104,7 +113,7 @@ archive as a [FUSE](https://en.wikipedia.org/wiki/Filesystem_in_Userspace)
filesystem, you can use the `borgmatic mount` action. Here's an example:
```bash
borgmatic mount --archive host-2019-... --mount-point /mnt
borgmatic mount --archive latest --mount-point /mnt
```
This mounts the entire archive on the given mount point `/mnt`, so that you
@ -116,7 +125,7 @@ Omit the `--archive` flag to mount all archives (lazy-loaded):
borgmatic mount --mount-point /mnt
```
Or use the "latest" value for the archive to mount the latest successful archive:
Or use the "latest" value for the archive to mount the latest archive:
```bash
borgmatic mount --archive latest --mount-point /mnt
@ -127,7 +136,7 @@ your archive, use the `--path` flag, similar to the `extract` action above.
For instance:
```bash
borgmatic mount --archive host-2019-... --mount-point /mnt --path var/lib
borgmatic mount --archive latest --mount-point /mnt --path var/lib
```
When you're all done exploring your files, unmount your mount point. No

View File

@ -1,9 +1,9 @@
---
title: How to inspect your backups
eleventyNavigation:
key: Inspect your backups
key: 🔎 Inspect your backups
parent: How-to guides
order: 4
order: 5
---
## Backup progress
@ -37,18 +37,72 @@ borgmatic --stats
## Existing backups
borgmatic provides convenient actions for Borg's
[list](https://borgbackup.readthedocs.io/en/stable/usage/list.html) and
[info](https://borgbackup.readthedocs.io/en/stable/usage/info.html)
[`list`](https://borgbackup.readthedocs.io/en/stable/usage/list.html) and
[`info`](https://borgbackup.readthedocs.io/en/stable/usage/info.html)
functionality:
```bash
borgmatic list
borgmatic info
```
(No borgmatic `list` or `info` actions? Try the old-style `--list` or
`--info`. Or upgrade borgmatic!)
You can change the output format of `borgmatic list` by specifying your own
with `--format`. Refer to the [borg list --format
documentation](https://borgbackup.readthedocs.io/en/stable/usage/list.html#the-format-specifier-syntax)
for available values.
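For instance, a sketch using a couple of Borg's archive-listing placeholders
(check the linked documentation for what your Borg version supports):

```bash
borgmatic list --format "{archive} {time}{NL}"
```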
*(No borgmatic `list` or `info` actions? Upgrade borgmatic!)*
<span class="minilink minilink-addedin">New in borgmatic version 1.7.0</span>
There are also `rlist` and `rinfo` actions for displaying repository
information with Borg 2.x:
```bash
borgmatic rlist
borgmatic rinfo
```
See the [borgmatic command-line
reference](https://torsion.org/borgmatic/docs/reference/command-line/) for
more information.
### Searching for a file
<span class="minilink minilink-addedin">New in version 1.6.3</span> Let's say
you've accidentally deleted a file and want to find the backup archive(s)
containing it. `borgmatic list` provides a `--find` flag for exactly this
purpose. For instance, if you're looking for a `foo.txt`:
```bash
borgmatic list --find foo.txt
```
This will list your archives and indicate those with files matching
`*foo.txt*` anywhere in the archive. The `--find` parameter can alternatively
be a [Borg
pattern](https://borgbackup.readthedocs.io/en/stable/usage/help.html#borg-patterns).
To limit the archives searched, use the standard `list` parameters for
filtering archives such as `--last`, `--archive`, `--match-archives`, etc. For
example, to search only the last five archives:
```bash
borgmatic list --find foo.txt --last 5
```
## Listing database dumps
If you have enabled borgmatic's [database
hooks](https://torsion.org/borgmatic/docs/how-to/backup-your-databases/), you
can list backed up database dumps via borgmatic. For example:
```bash
borgmatic list --archive latest --find .borgmatic/*_databases
```
This gives you a listing of all database dump files contained in the latest
archive, complete with file sizes.
## Logging
@ -57,7 +111,7 @@ By default, borgmatic logs to a local syslog-compatible daemon if one is
present and borgmatic is running in a non-interactive console. Where those
logs show up depends on your particular system. If you're using systemd, try
running `journalctl -xe`. Otherwise, try viewing `/var/log/syslog` or
similiar.
similar.
You can customize the log level used for syslog logging with the
`--syslog-verbosity` flag, and this is independent from the console logging
@ -100,5 +154,39 @@ borgmatic --log-file /path/to/file.log
Note that if you use the `--log-file` flag, you are responsible for rotating
the log file so it doesn't grow too large, for example with
[logrotate](https://wiki.archlinux.org/index.php/Logrotate). Also, there is a
`--log-file-verbosity` flag to customize the log file's log level.
[logrotate](https://wiki.archlinux.org/index.php/Logrotate).
You can use the `--log-file-verbosity` flag to customize the log file's log level:
```bash
borgmatic --log-file /path/to/file.log --log-file-verbosity 2
```
<span class="minilink minilink-addedin">New in version 1.7.11</span> Use the
`--log-file-format` flag to override the default log message format. This
format string can contain a series of named placeholders wrapped in curly
brackets. For instance, the default log format is: `[{asctime}] {levelname}:
{message}`. This means each log message is recorded as the log time (in square
brackets), a logging level name, a colon, and the actual log message.
So if you just want each log message to get logged *without* a timestamp or a
logging level name:
```bash
borgmatic --log-file /path/to/file.log --log-file-format "{message}"
```
Here is a list of available placeholders:
* `{asctime}`: time the log message was created
* `{levelname}`: level of the log message (`INFO`, `DEBUG`, etc.)
* `{lineno}`: line number in the source file where the log message originated
* `{message}`: actual log message
* `{pathname}`: path of the source file where the log message originated
See the [Python logging
documentation](https://docs.python.org/3/library/logging.html#logrecord-attributes)
for additional placeholders.
Note that this `--log-file-format` flag only applies to the specified
`--log-file` and not to syslog or other logging.

View File

@ -1,9 +1,9 @@
---
title: How to make backups redundant
eleventyNavigation:
key: Make backups redundant
key: ☁️ Make backups redundant
parent: How-to guides
order: 2
order: 3
---
## Multiple repositories
@ -20,12 +20,13 @@ location:
    # Paths of local or remote repositories to backup to.
    repositories:
        - 1234@usw-s001.rsync.net:backups.borg
        - k8pDxu32@k8pDxu32.repo.borgbase.com:repo
        - user1@scp2.cdn.lima-labs.com:repo
        - /var/lib/backups/local.borg
        - path: ssh://k8pDxu32@k8pDxu32.repo.borgbase.com/./repo
        - path: /var/lib/backups/local.borg
```
<span class="minilink minilink-addedin">Prior to version 1.7.10</span> Omit
the `path:` portion of the `repositories` list.
When you run borgmatic with this configuration, it invokes Borg once for each
configured repository in sequence. (So, not in parallel.) That means—in each
repository—borgmatic creates a single new backup archive containing all of
@ -33,10 +34,8 @@ your source directories.
Here's a way of visualizing what borgmatic does with the above configuration:
1. Backup `/home` and `/etc` to `1234@usw-s001.rsync.net:backups.borg`
2. Backup `/home` and `/etc` to `k8pDxu32@k8pDxu32.repo.borgbase.com:repo`
3. Backup `/home` and `/etc` to `user1@scp2.cdn.lima-labs.com:repo`
4. Backup `/home` and `/etc` to `/var/lib/backups/local.borg`
1. Backup `/home` and `/etc` to `k8pDxu32@k8pDxu32.repo.borgbase.com:repo`
2. Backup `/home` and `/etc` to `/var/lib/backups/local.borg`
This gives you redundancy of your data across repositories and even
potentially across providers.
@ -44,3 +43,13 @@ potentially across providers.
See [Borg repository URLs
documentation](https://borgbackup.readthedocs.io/en/stable/usage/general.html#repository-urls)
for more information on how to specify local and remote repository paths.
### Different options per repository
What if you want borgmatic to backup to multiple repositories—while also
setting different options for each one? In that case, you'll need to use
[a separate borgmatic configuration file for each
repository](https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/)
instead of the multiple repositories in one configuration file as described
above. That's because all of the repositories in a particular configuration
file get the same options applied.

View File

@ -1,20 +1,22 @@
---
title: How to make per-application backups
eleventyNavigation:
key: Make per-application backups
key: 🔀 Make per-application backups
parent: How-to guides
order: 1
---
## Multiple backup configurations
You may find yourself wanting to create different backup policies for
different applications on your system. For instance, you may want one backup
configuration for your database data directory, and a different configuration
for your user home directories.
different applications on your system or even for different backup
repositories. For instance, you might want one backup configuration for your
database data directory and a different configuration for your user home
directories. Or one backup configuration for your local backups with a
different configuration for your remote repository.
The way to accomplish that is pretty simple: Create multiple separate
configuration files and place each one in a `/etc/borgmatic.d/` directory. For
instance:
instance, for applications:
```bash
sudo mkdir /etc/borgmatic.d
@ -22,6 +24,14 @@ sudo generate-borgmatic-config --destination /etc/borgmatic.d/app1.yaml
sudo generate-borgmatic-config --destination /etc/borgmatic.d/app2.yaml
```
Or, for repositories:
```bash
sudo mkdir /etc/borgmatic.d
sudo generate-borgmatic-config --destination /etc/borgmatic.d/repo1.yaml
sudo generate-borgmatic-config --destination /etc/borgmatic.d/repo2.yaml
```
When you set up multiple configuration files like this, borgmatic will run
each one in turn from a single borgmatic invocation. This includes, by
default, the traditional `/etc/borgmatic/config.yaml` as well.
@ -29,12 +39,106 @@ default, the traditional `/etc/borgmatic/config.yaml` as well.
Each configuration file is interpreted independently, as if you ran borgmatic
for each configuration file one at a time. In other words, borgmatic does not
perform any merging of configuration files by default. If you'd like borgmatic
to merge your configuration files, see below about configuration includes.
to merge your configuration files, for instance to avoid duplication of
settings, see below about configuration includes.
Additionally, the `~/.config/borgmatic.d/` directory works the same way as
`/etc/borgmatic.d`. If you need even more customizability, you can specify
alternate configuration paths on the command-line with borgmatic's `--config`
flag. See `borgmatic --help` for more information.
`/etc/borgmatic.d`.
If you need even more customizability, you can specify alternate configuration
paths on the command-line with borgmatic's `--config` flag. (See `borgmatic
--help` for more information.) For instance, if you want to schedule your
various borgmatic backups to run at different times, you'll need multiple
entries in your [scheduling software of
choice](https://torsion.org/borgmatic/docs/how-to/set-up-backups/#autopilot),
each entry using borgmatic's `--config` flag instead of relying on
`/etc/borgmatic.d`.
## Archive naming
If you've got multiple borgmatic configuration files, you might want to create
archives with different naming schemes for each one. This is especially handy
if each configuration file is backing up to the same Borg repository but you
still want to be able to distinguish backup archives for one application from
another.
borgmatic supports this use case with an `archive_name_format` option. The
idea is that you define a string format containing a number of [Borg
placeholders](https://borgbackup.readthedocs.io/en/stable/usage/help.html#borg-placeholders),
and borgmatic uses that format to name any new archive it creates. For
instance:
```yaml
storage:
    ...
    archive_name_format: home-directories-{now}
```
This means that when borgmatic creates an archive, its name will start with
the string `home-directories-` and end with a timestamp for its creation time.
If `archive_name_format` is unspecified, the default is
`{hostname}-{now:%Y-%m-%dT%H:%M:%S.%f}`, meaning your system hostname plus a
timestamp in a particular format.
<span class="minilink minilink-addedin">New in version 1.7.11</span> borgmatic
uses the `archive_name_format` option to automatically limit which archives
get used for actions operating on multiple archives. This prevents, for
instance, duplicate archives from showing up in `rlist` or `info` results—even
if the same repository appears in multiple borgmatic configuration files. To
take advantage of this feature, simply use a different `archive_name_format`
in each configuration file.
Under the hood, borgmatic accomplishes this by substituting globs for certain
ephemeral data placeholders in your `archive_name_format`—and using the result
to filter archives when running supported actions.
For instance, let's say that you have this in your configuration:
```yaml
storage:
    ...
    archive_name_format: '{hostname}-user-data-{now}'
```
borgmatic considers `{now}` an ephemeral data placeholder that will probably
change per archive, while `{hostname}` won't. So it turns the example value
into `{hostname}-user-data-*` and applies it to filter down the set of
archives used for actions like `rlist`, `info`, `prune`, `check`, etc.
The end result is that when borgmatic runs the actions for a particular
application-specific configuration file, it only operates on the archives
created for that application. Of course, this doesn't apply to actions like
`compact` that operate on an entire repository.
If this behavior isn't quite smart enough for your needs, you can use the
`match_archives` option to override the pattern that borgmatic uses for
filtering archives. For example:
```yaml
storage:
    ...
    archive_name_format: '{hostname}-user-data-{now}'
    match_archives: sh:myhost-user-data-*
```
For Borg 1.x, use a shell pattern for the `match_archives` value and see the
[Borg patterns
documentation](https://borgbackup.readthedocs.io/en/stable/usage/help.html#borg-help-patterns)
for more information. For Borg 2.x, see the [match archives
documentation](https://borgbackup.readthedocs.io/en/2.0.0b5/usage/help.html#borg-help-match-archives).
Some borgmatic command-line actions also have a `--match-archives` flag that
overrides both the auto-matching behavior and the `match_archives`
configuration option.
<span class="minilink minilink-addedin">Prior to 1.7.11</span> The way to
limit the archives used for the `prune` action was a `prefix` option in the
`retention` section for matching against the start of archive names. And the
option for limiting the archives used for the `check` action was a separate
`prefix` in the `consistency` section. Both of these options are deprecated in
favor of the auto-matching behavior (or `match_archives`/`--match-archives`)
in newer versions of borgmatic.
## Configuration includes
@ -69,6 +173,10 @@ themselves and complaining that they are not valid configuration files, you
should put them in a directory other than `/etc/borgmatic.d/`. (A subdirectory
is fine.)
When a configuration include is a relative path, borgmatic loads it from either
the current working directory or from the directory containing the file doing
the including.
Note that this form of include must be a YAML value rather than a key. For
example, this will not work:
@ -80,44 +188,167 @@ location:
!include /etc/borgmatic/common_retention.yaml
```
But if you do want to merge in a YAML key and its values, keep reading!
But if you do want to merge in a YAML key *and* its values, keep reading!
## Include merging
If you need to get even fancier and pull in common configuration options while
potentially overriding individual options, you can perform a YAML merge of
included configuration using the YAML `<<` key. For instance, here's an
example of a main configuration file that pulls in two retention options via
an include, and then overrides one of them locally:
If you need to get even fancier and merge in common configuration options, you
can perform a YAML merge of included configuration using the YAML `<<` key.
For instance, here's an example of a main configuration file that pulls in
retention and consistency options via a single include:
```yaml
<<: !include /etc/borgmatic/common.yaml
location:
...
```
This is what `common.yaml` might look like:
```yaml
retention:
    keep_hourly: 24
    keep_daily: 7

consistency:
    checks:
        - name: repository
```
Once this include gets merged in, the resulting configuration would have all
of the `location` options from the original configuration file *and* the
`retention` and `consistency` options from the include.
Prior to borgmatic version 1.6.0, when there's a section collision between the
local file and the merged include, the local file's section takes precedence.
So if the `retention` section appears in both the local file and the include
file, the included `retention` is ignored in favor of the local `retention`.
But see below about deep merge in version 1.6.0+.
Note that this `<<` include merging syntax is only for merging in mappings
(configuration options and their values). But if you'd like to include a
single value directly, please see the section above about standard includes.
Additionally, there is a limitation preventing multiple `<<` include merges
per section. So, for instance, you can do one `<<` merge at the global level,
another `<<` within each configuration section, etc. (This is a YAML
limitation.)
### Deep merge
<span class="minilink minilink-addedin">New in version 1.6.0</span> borgmatic
performs a deep merge of merged include files, meaning that values are merged
at all levels in the two configuration files. This allows you to include
common configuration—up to full borgmatic configuration files—while overriding
only the parts you want to customize.
For instance, here's an example of a main configuration file that pulls in two
retention options via an include and then overrides one of them locally:
```yaml
<<: !include /etc/borgmatic/common.yaml
location:
    ...

retention:
    keep_daily: 5
    <<: !include /etc/borgmatic/common_retention.yaml
```
This is what `common_retention.yaml` might look like:
This is what `common.yaml` might look like:
```yaml
keep_hourly: 24
keep_daily: 7
retention:
    keep_hourly: 24
    keep_daily: 7
```
Once this include gets merged in, the resulting configuration would have a
`keep_hourly` value of `24` and an overridden `keep_daily` value of `5`.
When there is a collision of an option between the local file and the merged
include, the local file's option takes precedent. And note that this is a
shallow merge rather than a deep merge, so the merging does not descend into
nested values.
When there's an option collision between the local file and the merged
include, the local file's option takes precedence.
Note that this `<<` include merging syntax is only for merging in mappings
(keys/values). If you'd like to include other types like scalars or lists
directly, please see the section above about standard includes.
<span class="minilink minilink-addedin">New in version 1.6.1</span> Colliding
list values are appended together.
### Shallow merge
Even though deep merging is generally pretty handy for included files,
sometimes you want specific sections in the local file to take precedence over
included sections—without any merging occurring for them.
<span class="minilink minilink-addedin">New in version 1.7.12</span> That's
where the `!retain` tag comes in. Whenever you're merging an included file
into your configuration file, you can optionally add the `!retain` tag to
particular local mappings or lists to retain the local values and ignore
included values.
For instance, start with this configuration file containing the `!retain` tag
on the `retention` mapping:
```yaml
<<: !include /etc/borgmatic/common.yaml
location:
    repositories:
        - repo.borg

retention: !retain
    keep_daily: 5
```
And `common.yaml` like this:
```yaml
location:
    repositories:
        - common.borg

retention:
    keep_hourly: 24
    keep_daily: 7
```
Once this include gets merged in, the resulting configuration will have a
`keep_daily` value of `5` and nothing else in the `retention` section. That's
because the `!retain` tag says to retain the local version of `retention` and
ignore any values coming in from the include. But because the `repositories`
list doesn't have a `!retain` tag, it still gets merged together to contain
both `common.borg` and `repo.borg`.
The `!retain` tag can only be placed on mappings and lists, and it goes right
after the name of the option (and its colon) on the same line. The effects of
`!retain` are recursive, meaning that if you place a `!retain` tag on a
top-level mapping, even deeply nested values within it will not be merged.
Additionally, the `!retain` tag only works in a configuration file that also
performs a merge include with `<<: !include`. It doesn't make sense within,
for instance, an included configuration file itself (unless it in turn
performs its own merge include). That's because `!retain` only applies to the
file doing the include; it doesn't work in reverse or propagate through
includes.
## Debugging includes
<span class="minilink minilink-addedin">New in version 1.7.12</span> If you'd
like to see what the loaded configuration looks like after includes get merged
in, run `validate-borgmatic-config` on your configuration file:
```bash
sudo validate-borgmatic-config --show
```
You'll need to specify your configuration file with `--config` if it's not in
a default location.
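For instance:

```bash
sudo validate-borgmatic-config --config /etc/borgmatic.d/app1.yaml --show
```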
This will output the merged configuration as borgmatic sees it, which can be
helpful for understanding how your includes work in practice.
## Configuration overrides
@ -140,7 +371,19 @@ What this does is load your configuration files, and for each one, disregard
the configured value for the `remote_path` option in the `location` section,
and use the value of `/usr/local/bin/borg1` instead.
Note that the value is parsed as an actual YAML string, so you can even set
You can even override multiple values at once. For instance:
```bash
borgmatic create --override section.option1=value1 section.option2=value2
```
This will accomplish the same thing:
```bash
borgmatic create --override section.option1=value1 --override section.option2=value2
```
Note that each value is parsed as an actual YAML string, so you can even set
list values by using brackets. For instance:
```bash
@ -150,7 +393,14 @@ borgmatic create --override location.repositories=[test1.borg,test2.borg]
Or even a single list element:
```bash
borgmatic create --override location.repositories=[/root/test1.borg]
borgmatic create --override location.repositories=[/root/test.borg]
```
If your override value contains special YAML characters like colons, then
you'll need quotes for it to parse correctly:
```bash
borgmatic create --override location.repositories="['user@server:test.borg']"
```
There is not currently a way to override a single element of a list without
@ -165,3 +415,53 @@ indentation and a leading dash.)
Be sure to quote your overrides if they contain spaces or other characters
that your shell may interpret.
An alternate to command-line overrides is passing in your values via [environment variables](https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/).
## Constant interpolation
<span class="minilink minilink-addedin">New in version 1.7.10</span> Another
tool is borgmatic's support for defining custom constants. This is similar to
the [variable interpolation
feature](https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/#variable-interpolation)
for command hooks, but the constants feature lets you substitute your own
custom values into anywhere in the entire configuration file. (Constants don't
work across includes or separate configuration files though.)
Here's an example usage:
```yaml
constants:
    user: foo
    archive_prefix: bar

location:
    source_directories:
        - /home/{user}/.config
        - /home/{user}/.ssh
    ...

storage:
    archive_name_format: '{archive_prefix}-{now}'
```
In this example, when borgmatic runs, all instances of `{user}` get replaced
with `foo` and all instances of `{archive_prefix}` get replaced with `bar`.
(And in this particular example, `{now}` doesn't get replaced with anything,
but gets passed directly to Borg.) After substitution, the logical result
looks something like this:
```yaml
location:
    source_directories:
        - /home/foo/.config
        - /home/foo/.ssh
    ...

storage:
    archive_name_format: 'bar-{now}'
```
An alternate to constants is passing in your values via [environment
variables](https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/).

View File

@ -1,9 +1,9 @@
---
title: How to monitor your backups
eleventyNavigation:
key: Monitor your backups
key: 🚨 Monitor your backups
parent: How-to guides
order: 5
order: 6
---
## Monitoring and alerting
@ -38,17 +38,19 @@ below for how to configure this.
borgmatic integrates with monitoring services like
[Healthchecks](https://healthchecks.io/), [Cronitor](https://cronitor.io),
[Cronhub](https://cronhub.io), and [PagerDuty](https://www.pagerduty.com/) and
pings these services whenever borgmatic runs. That way, you'll receive an
alert when something goes wrong or (for certain hooks) the service doesn't
hear from borgmatic for a configured interval. See [Healthchecks
[Cronhub](https://cronhub.io), [PagerDuty](https://www.pagerduty.com/), and
[ntfy](https://ntfy.sh/) and pings these services whenever borgmatic runs.
That way, you'll receive an alert when something goes wrong or (for certain
hooks) the service doesn't hear from borgmatic for a configured interval. See
[Healthchecks
hook](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#healthchecks-hook),
[Cronitor
hook](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#cronitor-hook),
[Cronhub
hook](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#cronhub-hook),
and [PagerDuty
hook](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#pagerduty-hook)
[PagerDuty
hook](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#pagerduty-hook),
and [ntfy hook](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#ntfy-hook)
below for how to configure this.
While these services offer different features, you probably only need to use
@ -59,8 +61,6 @@ one of them at most.
You can use traditional monitoring software to consume borgmatic JSON output
and track when the last successful backup occurred. See [scripting
borgmatic](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#scripting-borgmatic)
below for how to configure this.
### Borg hosting providers
## Error hooks
When an error occurs during a `create`, `prune`, `compact`, or `check` action,
borgmatic can run configurable shell commands to fire off custom error
notifications or take other actions, so you can get alerted as soon as
something goes wrong. Here's a not-so-useful example:
```yaml
hooks:
    on_error:
        - send-text-message.sh "{configuration_filename}" "{repository}"
```
In this example, when the error occurs, borgmatic interpolates runtime values
into the hook command: the borgmatic configuration filename, and the path of
the repository. Here's the full set of supported variables you can use here:
* `configuration_filename`: borgmatic configuration filename in which the
error occurred
* `output`: output of the command that failed (may be blank if an error
occurred without running a command)
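For instance, here's a sketch of a hook making use of a couple of these
variables; the log file path is illustrative, and `{repository}` is the
repository-path variable mentioned above:

```yaml
hooks:
    on_error:
        # borgmatic interpolates "{repository}" and "{output}" at runtime.
        - echo "Backup of {repository} failed: {output}" >> /var/log/borgmatic-errors.log
```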
Note that borgmatic runs the `on_error` hooks only for `create`, `prune`,
`compact`, or `check` actions or hooks in which an error occurs, and not other
actions. borgmatic does not run `on_error` hooks if an error occurs within a
`before_everything` or `after_everything` hook. For more about hooks, see the
[borgmatic hooks
documentation](https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/),
## Healthchecks hook

To use this hook, configure borgmatic with the unique "Ping URL" for your
Healthchecks project. Here's an example:
```yaml
hooks:
    healthchecks:
        ping_url: https://hc-ping.com/addffa72-da17-40ae-be9c-ff591afb942a
```
With this hook in place, borgmatic pings your Healthchecks project when a
backup begins, ends, or errors. Specifically, after the <a
href="https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/">`before_backup`
hooks</a> run, borgmatic lets Healthchecks know that it has started if any of
the `create`, `prune`, `compact`, or `check` actions are run.
Then, if the actions complete successfully, borgmatic notifies Healthchecks of
the success after the `after_backup` hooks run, and includes borgmatic logs in
the payload data sent to this service. These logs show up in the Healthchecks
UI, although be aware that Healthchecks currently has a limit on how much log
data it accepts per ping.
If an error occurs during any action or hook, borgmatic notifies Healthchecks
after the `on_error` hooks run, also tacking on logs including the error
itself. But the logs are only included for errors that occur when a `create`,
`prune`, `compact`, or `check` action is run.
You can customize the verbosity of the logs that are sent to Healthchecks with
borgmatic's `--monitoring-verbosity` flag. The `--list` and `--stats` flags
may also be of use. See `borgmatic create --help` for more information.
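For instance, here's a sketch of an invocation that sends detailed logs along
to Healthchecks (the verbosity values are illustrative):

```bash
# Send info-level logs to the monitoring service, and also list backed-up
# files and archive statistics.
borgmatic create --monitoring-verbosity 1 --list --stats
```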
Additionally, see the [borgmatic configuration
file](https://torsion.org/borgmatic/docs/reference/configuration/) for more
Healthchecks options.
You can configure Healthchecks to notify you by a [variety of
mechanisms](https://healthchecks.io/#welcome-integrations) when backups fail.
## Cronitor hook

To use this hook, configure borgmatic with the unique "Ping API URL" for your
Cronitor monitor. Here's an example:
```yaml
hooks:
    cronitor:
        ping_url: https://cronitor.link/d3x0c1
```
With this hook in place, borgmatic pings your Cronitor monitor when a backup
begins, ends, or errors. Specifically, after the <a
href="https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/">`before_backup`
hooks</a> run, borgmatic lets Cronitor know that it has started if any of the
`prune`, `compact`, `create`, or `check` actions are run. Then, if the actions
complete successfully, borgmatic notifies Cronitor of the success after the
`after_backup` hooks run. And if an error occurs during any action or hook,
borgmatic notifies Cronitor after the `on_error` hooks run.
## Cronhub hook

To use this hook, configure borgmatic with the unique "Ping URL" for your
Cronhub monitor. Here's an example:
```yaml
hooks:
    cronhub:
        ping_url: https://cronhub.io/start/1f5e3410-254c-11e8-b61d-55875966d031
```
With this hook in place, borgmatic pings your Cronhub monitor when a backup
begins, ends, or errors. Specifically, after the <a
href="https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/">`before_backup`
hooks</a> run, borgmatic lets Cronhub know that it has started if any of the
`prune`, `compact`, `create`, or `check` actions are run. Then, if the actions
complete successfully, borgmatic notifies Cronhub of the success after the
`after_backup` hooks run. And if an error occurs during any action or hook,
borgmatic notifies Cronhub after the `on_error` hooks run.
## PagerDuty hook

To use this hook, configure borgmatic with your PagerDuty integration key.
Here's an example:
```yaml
hooks:
    pagerduty:
        integration_key: a177cad45bd374409f78906a810a3074
```
With this hook in place, borgmatic creates a PagerDuty event for your service
whenever backups fail. Specifically, if an error occurs during a `create`,
`prune`, `compact`, or `check` action, borgmatic sends an event to PagerDuty
before the `on_error` hooks run. Note that borgmatic does not contact
PagerDuty when a backup starts or ends without error.
You can configure PagerDuty to notify you by a [variety of
mechanisms](https://support.pagerduty.com/docs/notifications) when backups fail.
If you have any issues with the integration, [please contact
us](https://torsion.org/borgmatic/#support-and-contributing).
## ntfy hook
[ntfy](https://ntfy.sh) is a free, simple service (either hosted or
self-hosted) that offers pub/sub push notifications to multiple platforms,
including [web](https://ntfy.sh/stats),
[Android](https://play.google.com/store/apps/details?id=io.heckel.ntfy),
and [iOS](https://apps.apple.com/us/app/ntfy/id1625396347).
Since push notifications for regular events can quickly become annoying, this
hook fires only on errors by default, in order to alert you to issues right
away. The `states` list can override this.
As ntfy is unauthenticated, it isn't a suitable channel for any private
information, so the default messages are intentionally generic. You can
override them, depending on your risk assessment. Each `state` can have its
own custom messages, priorities, and tags; if none are provided for a state,
the defaults are used.
An example configuration is shown here, with all the available options, including
[priorities](https://ntfy.sh/docs/publish/#message-priority) and
[tags](https://ntfy.sh/docs/publish/#tags-emojis):
```yaml
hooks:
    ntfy:
        topic: my-unique-topic
        server: https://ntfy.my-domain.com
        start:
            title: A Borgmatic backup started
            message: Watch this space...
            tags: borgmatic
            priority: min
        finish:
            title: A Borgmatic backup completed successfully
            message: Nice!
            tags: borgmatic,+1
            priority: min
        fail:
            title: A Borgmatic backup failed
            message: You should probably fix it
            tags: borgmatic,-1,skull
            priority: max
        states:
            - start
            - finish
            - fail
```
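Conversely, if the default error-only behavior and generic messages suit you,
a minimal sketch can be as short as this (the topic name is illustrative):

```yaml
hooks:
    ntfy:
        topic: my-unique-topic
```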
## Scripting borgmatic
To consume the output of borgmatic in other software, you can include an
optional `--json` flag with `create`, `rlist`, `rinfo`, or `info` to get the
output formatted as JSON.
Note that when you specify the `--json` flag, Borg's other non-JSON output is
suppressed so as not to interfere with the captured JSON. Also note that JSON
output only shows up at the console, and not in syslog.
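For instance, here's a sketch of extracting the name of the most recent
archive from that JSON. It assumes the `jq` utility is installed and a single
configured repository, and the exact JSON structure may vary with your Borg
version:

```bash
# List archives as JSON, then pull out the name of the newest archive in
# the first configured repository.
borgmatic rlist --json | jq --raw-output '.[0].archives[-1].name'
```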
## Related software
* [Borgmacator GNOME AppIndicator](https://github.com/N-Coder/borgmacator/)
### Successful backups
`borgmatic list` includes support for a `--successful` flag that only lists
successful (non-checkpoint) backups. This flag works via a basic heuristic: It
assumes that non-checkpoint archive names end with a digit (e.g. from a
timestamp), while checkpoint archive names do not. This means that if you're
using custom archive names that do not end in a digit, the `--successful` flag
will not work as expected.
Combined with a built-in Borg flag like `--last`, you can list the last
successful backup for use in your monitoring scripts. Here's an example
combined with `--json`:
```bash
borgmatic list --successful --last 1 --json
```
Note that this particular combination will only work if you've got a single
backup "series" in your repository. If you're instead backing up, say, from
multiple different hosts into a single repository, then you'll need to get
fancier with your archive listing. See `borg list --help` for more flags.
### Latest backups
All borgmatic actions that accept an `--archive` flag allow you to specify an
archive name of `latest`. This lets you get the latest archive without having
to first run `borgmatic rlist` manually, which can be handy in automated
scripts. Here's an example:
```bash
borgmatic info --archive latest
```
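The `latest` name works with other archive-accepting actions too. For
instance, here's a sketch that mounts the most recent archive for browsing
(the mount point path is illustrative):

```bash
borgmatic mount --archive latest --mount-point /mnt/backup
```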

---
title: How to provide your passwords
eleventyNavigation:
  key: 🔒 Provide your passwords
  parent: How-to guides
  order: 2
---
## Environment variable interpolation
If you want to use a Borg repository passphrase or database passwords with
borgmatic, you can set them directly in your borgmatic configuration file,
treating those secrets like any other option value. But if you'd rather store
them outside of borgmatic, whether for convenience or security reasons, read
on.
<span class="minilink minilink-addedin">New in version 1.6.4</span> borgmatic
supports interpolating arbitrary environment variables directly into option
values in your configuration file. That means you can instruct borgmatic to
pull your repository passphrase, your database passwords, or any other option
values from environment variables. For instance:
```yaml
storage:
    encryption_passphrase: ${MY_PASSPHRASE}
```
This uses the `MY_PASSPHRASE` environment variable as your encryption
passphrase. Note that the `{` `}` brackets are required. `$MY_PASSPHRASE` by
itself will not work.
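As a minimal sketch of supplying that variable at runtime (the passphrase
value is illustrative; if you schedule borgmatic via cron or systemd, set the
variable there instead so the borgmatic process actually inherits it):

```bash
# Export the variable so the borgmatic process inherits it, then run borgmatic.
export MY_PASSPHRASE='my secret passphrase'
borgmatic create
```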
In the case of `encryption_passphrase` in particular, an alternate approach
is to use Borg's `BORG_PASSPHRASE` environment variable, which doesn't even
require setting an explicit `encryption_passphrase` value in borgmatic's
configuration file.
For [database
configuration](https://torsion.org/borgmatic/docs/how-to/backup-your-databases/),
the same approach applies. For example:
```yaml
hooks:
    postgresql_databases:
        - name: users
          password: ${MY_DATABASE_PASSWORD}
```
This uses the `MY_DATABASE_PASSWORD` environment variable as your database
password.
### Interpolation defaults
If you'd like to set a default for your environment variables, you can do so with the following syntax:
```yaml
storage:
    encryption_passphrase: ${MY_PASSPHRASE:-defaultpass}
```
Here, "`defaultpass`" is the default passphrase if the `MY_PASSPHRASE`
environment variable is not set. Without a default, if the environment
variable doesn't exist, borgmatic will error.
### Disabling interpolation
To disable this environment variable interpolation feature entirely, you can
pass the `--no-environment-interpolation` flag on the command-line.
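For instance:

```bash
# Run backups with environment variable interpolation turned off entirely.
borgmatic create --no-environment-interpolation
```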
Or if you'd like to disable interpolation within a single option value, you
can escape it with a backslash. For instance, if your password is literally
`${A}@!`:
```yaml
storage:
    encryption_passphrase: \${A}@!
```
### Related features
Another way to override particular options within a borgmatic configuration
file is to use a [configuration
override](https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/#configuration-overrides)
on the command-line. But please be aware of the security implications of
specifying secrets on the command-line.
Additionally, borgmatic action hooks support their own [variable
interpolation](https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/#variable-interpolation),
although in that case it's for particular borgmatic runtime values rather than
(only) environment variables.

---
title: How to run arbitrary Borg commands
eleventyNavigation:
  key: 🔧 Run arbitrary Borg commands
  parent: How-to guides
  order: 11
---
## Running Borg with borgmatic
Borg has several commands (and options) that borgmatic does not currently
support. Sometimes though, as a borgmatic user, you may find yourself wanting
to take advantage of these off-the-beaten-path Borg features. You could of
course drop down to running Borg directly. But then you'd give up all the
niceties of your borgmatic configuration. You could file a [borgmatic
ticket](https://torsion.org/borgmatic/#issues) or even a [pull
request](https://torsion.org/borgmatic/#contributing) to add the feature. But
what if you need it *now*?
That's where borgmatic's support for running "arbitrary" Borg commands comes
in. Running Borg commands with borgmatic takes advantage of the following, all
based on your borgmatic configuration files or command-line arguments:
* configured repositories (automatically runs your Borg command once for each
one)
* local and remote Borg binary paths
* SSH settings and Borg environment variables
* lock wait settings
* verbosity
### borg action
<span class="minilink minilink-addedin">New in version 1.5.15</span> The way
you run Borg with borgmatic is via the `borg` action. Here's a simple example:
```bash
borgmatic borg break-lock
```
(No `borg` action in borgmatic? Time to upgrade!)
This runs Borg's `break-lock` command once on each configured borgmatic
repository. Notice how the repository isn't present in the specified Borg
options, as that part is provided by borgmatic.
You can also specify Borg options for relevant commands:
```bash
borgmatic borg rlist --short
```
This runs Borg's `rlist` command once on each configured borgmatic repository.
(The native `borgmatic rlist` action should be preferred for most use.)
What if you only want to run Borg on a single configured borgmatic repository
when you've got several configured? Not a problem. The `--repository` argument
lets you specify the repository to use, either by its path or its label:
```bash
borgmatic borg --repository repo.borg break-lock
```
And what about a single archive?
```bash
borgmatic borg --archive your-archive-name rlist
```
### Limitations
borgmatic's `borg` action is not without limitations:
* The Borg command you want to run (`create`, `list`, etc.) *must* come first
after the `borg` action. If you have any other Borg options to specify,
provide them after. For instance, `borgmatic borg list --progress` will work,
but `borgmatic borg --progress list` will not.
* borgmatic supplies the repository/archive name to Borg for you (based on
your borgmatic configuration or the `borgmatic borg --repository`/`--archive`
arguments), so do not specify the repository/archive otherwise.
* The `borg` action will not currently work for any Borg commands like `borg
serve` that do not accept a repository/archive name.
* Do not specify any global borgmatic arguments to the right of the `borg`
action. (They will be passed to Borg instead of borgmatic.) If you have
global borgmatic arguments, specify them *before* the `borg` action.
* Unlike other borgmatic actions, you cannot combine the `borg` action with
other borgmatic actions. This is to prevent ambiguity in commands like
`borgmatic borg list`, in which `list` is both a valid Borg command and a
borgmatic action. In this case, only the Borg command is run.
* Unlike normal borgmatic actions that support JSON, the `borg` action will
not disable certain borgmatic logs to avoid interfering with JSON output.
* Unlike other borgmatic actions, the `borg` action captures (and logs) all
output, so interactive prompts or flags like `--progress` will not work as
expected.
In general, this `borgmatic borg` feature should be considered an escape
valve—a feature of second resort. In the long run, it's preferable to wrap
Borg commands with borgmatic actions that can support them fully.
