Compare commits


779 Commits

Author SHA1 Message Date
Yoann Laissus 061a74cbc2 Add a config entry for BORG_CHECK_I_KNOW_WHAT_I_AM_DOING env var 2023-07-09 12:48:01 +02:00
Dan Helfman 9cafc16052 For "borgmatic borg", pass the repository to Borg via a Borg-supported environment variable (#575).
2023-07-03 00:08:54 -07:00
Dan Helfman fbbfc684ce Add referral link for Hetzner.
2023-07-02 22:14:36 -07:00
Dan Helfman 13a37a1d9b Reddit is dead.
2023-06-30 22:55:47 -07:00
Dan Helfman 9cf27fa4ba Deprecated configuration options warning logging.
2023-06-29 10:03:36 -07:00
Dan Helfman e2c95327fb Fix an error when dumping a MySQL database and the "exclude_nodump" option is set (#720).
2023-06-28 09:15:11 -07:00
Dan Helfman f60e97d5bf When merging two configuration files, error gracefully if the two files do not adhere to the same format.
2023-06-26 16:46:09 -07:00
Dan Helfman 44f9ab95f9 Fix typos (#575).
2023-06-26 14:37:23 -07:00
Dan Helfman bb6004fc4f Revamp "borg" action to support REPOSITORY and ARCHIVE env vars instead of implicitly injecting repository/archive into the Borg command (#575). 2023-06-26 14:35:07 -07:00
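
(A minimal sketch of the approach the two #575 commits above describe, assuming Borg's documented BORG_REPO environment variable; this is an illustration, not borgmatic's actual code. The repository is handed to an arbitrary "borg" command through the environment instead of being spliced into its arguments.)

```python
import os
import subprocess

def run_borg_command(command, repository_path):
    """Run an arbitrary Borg command, passing the repository via the
    environment rather than injecting it into the command's arguments."""
    environment = dict(os.environ)
    # BORG_REPO is a documented Borg environment variable: Borg falls back
    # to it whenever a command omits the repository location.
    environment['BORG_REPO'] = repository_path

    # The command runs exactly as given, so interactive prompts and flags
    # like --progress keep working.
    subprocess.run(command, env=environment, check=True)

# Example: list archives without hard-coding the repository in the command.
run_borg_command(['borg', 'list'], '/mnt/backups/repo.borg')
```
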
Dan Helfman b242078f54 Fix an error when running "borg key export" through borgmatic (#719).
2023-06-26 09:30:46 -07:00
Dan Helfman c3004c6090 Some brief documentation on running only checks (#364).
2023-06-25 22:49:36 -07:00
Dan Helfman b9a11e860d Remove legacy configuration parsing code, no longer needed with upgrade-borgmatic-config gone (#529). 2023-06-25 15:36:25 -07:00
Dan Helfman 37a0a0c421 Bump version for release.
2023-06-24 22:23:01 -07:00
Dan Helfman 325b561296 Switch from "init" to "rcreate" for creating repos in end-to-end tests.
2023-06-24 15:52:20 -07:00
Dan Helfman b62017be4b Fix edge case in which "--config somepath.yaml" followed by an action alias (e.g. init for rcreate) wasn't parsed correctly (#716).
2023-06-24 15:35:10 -07:00
Dan Helfman 8debcbeaba Remove duplicated tests (#716).
2023-06-24 14:28:50 -07:00
Dan Helfman 35a11559ac Fix error parsing arguments with multiple verbosity flags (#716).
2023-06-24 14:10:47 -07:00
Dan Helfman e4e455ee45 Deprecate validate-borgmatic-config in favor of new "config validate" action (#529).
2023-06-23 10:11:41 -07:00
Dan Helfman 23809e9060 More Docker build fun (#326).
2023-06-22 15:11:49 -07:00
Dan Helfman bb0dd14f69 Attempt to fix CI test failures (#326).
2023-06-22 14:55:32 -07:00
Dan Helfman 308c96aeb5 Add comment describing need for dev-CI parity test. 2023-06-22 14:37:08 -07:00
Dan Helfman 62a2f5a1d0 Code formatting.
2023-06-22 14:25:26 -07:00
Dan Helfman e8c862659c Add missing services to build service configuration and add a test to catch this in the future (#326).
2023-06-22 14:20:42 -07:00
Dan Helfman 69611681e2 Add database restore overrides to NEWS, add a test, and move some tests (#326).
2023-06-22 12:40:57 -07:00
Dan Helfman 9e0df595c8 Merge branch 'main' of github.com:borgmatic-collective/borgmatic 2023-06-22 12:29:32 -07:00
Dan Helfman 68d90e1e40
feat: allow restoring to different port/host/username (#326).
Merge pull request #73 from diivi/feat/restore-with-different-hostname-port-username
2023-06-22 12:28:34 -07:00
Dan Helfman 248500c7be Accidentally a word.
2023-06-22 09:21:06 -07:00
Dan Helfman 3addb60fb8 Actually link to the most recent version.
2023-06-22 09:13:45 -07:00
Dan Helfman 01fffab898 Clarify that references docs are only for the most recent version of borgmatic.
2023-06-22 09:12:06 -07:00
Dan Helfman bc93401a70 Codespell fixes.
2023-06-21 13:14:54 -07:00
Dan Helfman 1b90da5bf1 Deprecate generate-borgmatic-config in favor of new "config generate" action (#529).
2023-06-21 12:19:49 -07:00
Dan Helfman 803fc25848 Add a test for another edge case (#712).
2023-06-21 10:47:53 -07:00
Dan Helfman 248f82d6f6 Fix for another subaction argument-parsing edge case (#712).
2023-06-21 10:41:32 -07:00
Divyansh Singh 87c6e5b349 make sure restore params in config aren't used when cli args are supplied 2023-06-21 00:03:07 +05:30
Dan Helfman 147516ae3f Remove additional upgrade-borgmatic-config code (#529).
2023-06-20 09:41:26 -07:00
Dan Helfman b10aee3070 Remove upgrade-borgmatic-config command for upgrading borgmatic 1.1.0 INI-style configuration (#529).
2023-06-19 23:17:59 -07:00
Dan Helfman 6098005f5d Fix an error when "data" check time files are accessed without getting upgraded first (#711, #713).
2023-06-19 23:07:57 -07:00
Dan Helfman 7b8be800a4 Refactor arguments parsing to fix bootstrap action CLI issues (#712).
2023-06-19 16:18:47 -07:00
Divyansh Singh 1a21eb03cd add tests for all databases 2023-06-20 00:52:01 +05:30
Divyansh Singh e2d82e9bba actually test port restores 2023-06-19 01:10:01 +05:30
Divyansh Singh 384182172a add unit tests for cases when cli/config restore args are used 2023-06-18 06:29:11 +05:30
Divyansh Singh 9016dcc418 all e2e tests 2023-06-18 05:47:35 +05:30
Divyansh Singh e53dd3da87 fix witten-reported MySQL error 2023-06-17 22:58:59 +05:30
Divyansh Singh 6c87608548 add tests for password logic 2023-06-17 00:47:15 +05:30
Dan Helfman ee2ebb79b8 Find sub-actions for an action without an isinstance() check.
2023-06-16 10:57:01 -07:00
Divyansh Singh 89602d1614 pass all existing tests (and formatting) 2023-06-16 15:14:00 +05:30
Dan Helfman c294e78715 Use absolute paths when storing configuration files in an archive for later bootstrapping (#697).
2023-06-15 21:45:43 -07:00
Dan Helfman 9152fed249 Add a documentation troubleshooting note for MySQL/MariaDB authentication errors (#399).
2023-06-15 14:55:57 -07:00
Divyansh Singh 8389851f2f fix bug where port becomes truthy when none is converted to str 2023-06-15 23:34:50 +05:30
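
(The bug fixed above is a classic Python pitfall: str(None) is the truthy string 'None'. A minimal illustration with hypothetical variable names:)

```python
port = None  # no port configured

# Buggy: str(None) == 'None', which is truthy, so the check always passes
# and the literal string 'None' gets used as a port.
port_string = str(port)
if port_string:
    print(f'connecting on port {port_string}')  # runs even with no port!

# Fixed: only convert when a port is actually set.
port_string = str(port) if port is not None else None
if port_string:
    print(f'connecting on port {port_string}')  # correctly skipped
```
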
Dan Helfman bbc7f0596c Fix Bash completion for sub-actions like "borgmatic config bootstrap" (#697 follow-on work).
2023-06-15 10:55:31 -07:00
Divyansh Singh 82d851d891 add argument for restore path 2023-06-15 23:05:53 +05:30
Divyansh Singh 62b6f13299 add restore-path support for sqlite 2023-06-15 23:02:09 +05:30
Divyansh Singh b7423c488e refactor password assignment logic 2023-06-15 22:54:06 +05:30
Dan Helfman 1d7c7eaaa7 Add sample systemd user service for running borgmatic as a non-root user (#669).
2023-06-14 14:57:57 -07:00
Divyansh Singh a9386b7a87 add mongodb support, and sqlite restore path (config option only) 2023-06-15 02:18:24 +05:30
Divyansh Singh 205e5b1524 mysql support 2023-06-15 01:47:46 +05:30
Divyansh Singh 67f4d43aec witten review 2023-06-15 01:37:18 +05:30
Dan Helfman e15bec30e6 Mention some hang edge cases in database limitations (#710).
2023-06-13 23:34:58 -07:00
Divyansh Singh 230cf6adc4 support command line args for hostname port username password 2023-06-14 00:11:19 +05:30
Divyansh Singh 8e8e64d920 add no-owner and refactor 2023-06-13 23:42:50 +05:30
Divyansh Singh f558cb3156 feat: allow restoring to different port/host/username 2023-06-12 21:54:39 +05:30
Dan Helfman 41924f2400 A little activism.
2023-06-11 09:50:57 -07:00
Dan Helfman 670bdffb3c Code formatting.
2023-06-10 19:25:49 -07:00
Dan Helfman 691d4f887a Fix incorrect log message (#697).
2023-06-10 16:02:03 -07:00
Dan Helfman beb899d6fb Make user-facing manifest loading error messages a little friendlier (#697).
2023-06-10 15:50:11 -07:00
Dan Helfman 0f9756e739 Fix failing test and add "bootstrap" action to CLI reference docs (#697).
2023-06-10 15:17:18 -07:00
Dan Helfman d84f1ec616 Add bootstrap action to NEWS and make post-PR tweaks (#697).
2023-06-10 14:52:00 -07:00
Dan Helfman ef409ad23c
Store configs used to create an archive in the archive and add borgmatic bootstrap (#697).
Merge pull request #71 from diivi/feat/store-config-in-archive
2023-06-10 14:39:53 -07:00
Divyansh Singh d370ff958d mock expand directories thrice 2023-06-10 01:05:34 +05:30
Divyansh Singh 197920d9ef improve tests and some docstrings. 2023-06-09 17:31:57 +05:30
Divyansh Singh 425f260a22 test parser merging 2023-06-09 04:15:18 +05:30
Divyansh Singh 3315555d06 cleaner test 2023-06-09 00:21:41 +05:30
Divyansh Singh 6475345a8f attempt to test parse_subparser_arguments 2023-06-08 01:02:43 +05:30
Divyansh Singh f90d30e0e1 remove duplicate comments 2023-06-08 00:08:39 +05:30
Divyansh Singh 8384eaefb1 reformat 2023-06-08 00:07:36 +05:30
Divyansh Singh dcb90bba50 some tests remaining 2023-06-07 23:56:02 +05:30
Divyansh Singh dc56fd33a0 formatting 2023-06-07 01:47:16 +05:30
Divyansh Singh 2d761dd86b coverage at 100 2023-06-07 01:43:01 +05:30
Divyansh Singh f82631e3bb tests for arguments.py 2023-06-07 00:56:19 +05:30
Divyansh Singh 4b024daae0 pass all tests with witten's recommendation 2023-06-06 23:37:09 +05:30
Divyansh Singh 6a1d1a2e59 fix indentation error that caused too many test failures 2023-06-05 20:31:09 +05:30
Divyansh Singh 206a9c9607 edit schema comments and work on witten review 2023-06-05 20:05:10 +05:30
Dan Helfman a6425b8867 Fix moved Arch Linux borgmatic URL.
2023-06-04 22:21:16 -07:00
Dan Helfman b5d9398910 Stop uploading GPG signatures to pypi since it no longer supports them.
2023-06-03 22:37:46 -07:00
Dan Helfman a185eb73b0 Fix GitHub release script now that "master" has been renamed to "main".
2023-06-03 22:26:49 -07:00
Dan Helfman e80f27f922 Bump version for release.
2023-06-03 22:14:21 -07:00
Dan Helfman 1a5b3c9e4e Add Fedora schema loading fix to NEWS (#703).
2023-06-03 22:07:24 -07:00
Dan Helfman b3f70434df Fix error loading configuration schema on Fedora Linux (#703).
Reviewed-on: #702
2023-06-04 05:04:41 +00:00
Felix Kaechele c61d63b235 Use open() to test for file existence and readability
Signed-off-by: Felix Kaechele <felix@kaechele.ca>
2023-06-04 00:54:29 -04:00
Felix Kaechele ba0899660d Verify that schema path exists before returning it
Signed-off-by: Felix Kaechele <felix@kaechele.ca>
2023-06-03 23:42:20 -04:00
Felix Kaechele 15cabb93ca Drop importlib_metadata entirely
The fallback option using the dirname of the config module location
seems to be more robust in a number of cases.

Signed-off-by: Felix Kaechele <felix@kaechele.ca>
2023-06-03 23:42:20 -04:00
Felix Kaechele ce6daff12f Fix importlib.metadata.files workaround
Some distributions, such as Fedora, do not install the RECORDS file as
part of a package's dist-info. As a result importlib.metadata.files will
return None.

Use the workaround for these cases as well.

Signed-off-by: Felix Kaechele <felix@kaechele.ca>
2023-06-03 23:42:20 -04:00
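
(The three Felix Kaechele commits above all concern locating borgmatic's bundled configuration schema. A minimal sketch of the general pattern, with illustrative rather than actual function and file names: importlib.metadata.files() can return None on distributions such as Fedora that omit a package's RECORD metadata, so deriving the path from an imported module's location is more robust.)

```python
import os

def find_schema_path():
    """Locate a schema.yaml shipped alongside a package's config module."""
    # Assumption for this sketch: the schema file lives next to the
    # borgmatic.config module. This avoids importlib.metadata.files(),
    # which returns None when a package's RECORD file isn't installed.
    import borgmatic.config

    schema_path = os.path.join(
        os.path.dirname(borgmatic.config.__file__), 'schema.yaml'
    )

    # Per the "Use open() ..." commit above: opening the file checks both
    # existence and readability in a single step.
    with open(schema_path):
        pass

    return schema_path
```
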
Dan Helfman caf654366c Document work-around for colons in YAML strings (#708).
2023-06-03 10:19:34 -07:00
Divyansh Singh bb60b25399 merge subparsers and refactor 2023-06-02 02:04:35 +05:30
Divyansh Singh 74aa28e027 support more flags 2023-06-01 16:53:34 +05:30
Dan Helfman 4f49b345af NEWS wording fix for clarity (#706).
2023-05-30 23:21:55 -07:00
Dan Helfman 1784ca5910 Fix "check" action error when repository and archive checks are configured but the archive check gets skipped due to the configured frequency (#704).
2023-05-30 23:19:33 -07:00
Dan Helfman 8f4cce5fa5 Make dev docs message stand out a little more.
2023-05-30 22:30:06 -07:00
Dan Helfman 518aeabb2a Document verbosity levels (#484).
2023-05-30 22:25:27 -07:00
Dan Helfman 341bd4118d Fix "--archive latest" on "list" and "info" actions only working on the first of multiple configured repositories (#706).
2023-05-30 16:53:55 -07:00
Dan Helfman b222f6a60b Mention new verbosity level to NEWS (#484).
2023-05-30 15:52:49 -07:00
Dan Helfman c0aaba6891 Add option to disable syslog output (#484).
Reviewed-on: #675
2023-05-30 20:03:56 +00:00
Soumik Dutta a7f81d538d nit changes
- help strings in borgmatic commands
- test fixes in test_logger and test_borgmatic

Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-05-29 01:09:00 +05:30
Divyansh Singh 4c60bf84d7 extract config files 2023-05-28 01:36:32 +05:30
Divyansh Singh dbb778a4d6 finish parsing and add error for empty config subcommand 2023-05-26 22:44:31 +05:30
Divyansh Singh f4a169fdf3
Merge pull request #2 from witten/feat/store-config-in-archive 2023-05-26 21:29:18 +05:30
Soumik Dutta 3d41ed3a34 add test to check that log_file is disabled
if logging is disabled

Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-05-26 09:58:53 +05:30
Soumik Dutta 0283f9ae2a fix help string
Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-05-26 08:57:12 +05:30
Soumik Dutta d556a23f97 update borgmatic tests
Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-05-26 08:57:12 +05:30
Soumik Dutta f98d07e8d8 fix logger test
Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-05-26 08:57:12 +05:30
Soumik Dutta 09f59ad97d disable monitoring hooks if monitoring_log_level is set to DISABLED
Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-05-26 08:57:12 +05:30
Soumik Dutta 24be6272ed add test for logger
Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-05-26 08:57:12 +05:30
Soumik Dutta 5a9bb4b97f update help strings
Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-05-26 08:57:12 +05:30
Soumik Dutta 6a2eb1f157 make value of disabled level higher
so that no other log has higher priority

Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-05-26 08:57:12 +05:30
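
(A minimal sketch of the technique this commit describes, with illustrative names: registering a DISABLED level above logging.CRITICAL, so that no log record can clear the threshold once it's selected.)

```python
import logging

# CRITICAL is 50. Registering DISABLED above it means no standard log
# record has high enough priority, which effectively silences the logger.
DISABLED = logging.CRITICAL + 10
logging.addLevelName(DISABLED, 'DISABLED')

logger = logging.getLogger('example')
logger.addHandler(logging.StreamHandler())

logger.setLevel(DISABLED)
logger.critical('suppressed')  # 50 < 60, so nothing is emitted

logger.setLevel(logging.WARNING)
logger.critical('shown')
```
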
Soumik Dutta 99473c30a8 disable sending logs in Healthchecks
Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-05-26 08:57:12 +05:30
Soumik Dutta f512d1e460 add verbosity level -2
Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-05-26 08:57:12 +05:30
Dan Helfman 96adee444b Potential fix for nested subparsers not parsing correctly. 2023-05-25 15:03:15 -07:00
Divyansh Singh 8b7996dfda removed parents and used reversed remaining_args 2023-05-26 01:07:11 +05:30
Divyansh Singh 2241de11c0 start work on borgmatic config bootstrap command 2023-05-26 00:26:13 +05:30
Dan Helfman 84c21b062f Fix incorrect argument ordering (#659).
2023-05-23 16:55:40 -07:00
Dan Helfman 76138faaf3 Add integration test for mount action (#659). 2023-05-23 14:49:04 -07:00
Dan Helfman 9299841a5b Add date-based matching flags to NEWS (#659).
2023-05-23 14:30:16 -07:00
Dan Helfman 35b5c62ca6 Add Borg 2 date-based matching flags for archive selection (#659).
Reviewed-on: #661
2023-05-23 21:26:17 +00:00
Dan Helfman 05b989347c Upgrade requests test requirement (security).
2023-05-23 08:43:45 -07:00
Chirag Aggarwal 00e9bb011a test should mock out make_flags_from_arguments
Signed-off-by: Chirag Aggarwal <thechiragaggarwal@gmail.com>
2023-05-20 09:23:09 -04:00
Dan Helfman 833796d1c4 Add archive check probing logic tweak to NEWS (#688).
2023-05-17 08:48:54 -07:00
Divyansh Singh ee32b076eb update tests and formatting 2023-05-16 23:17:35 +05:30
Dan Helfman e3425f48be Instead of taking the first check time found, take the maximum value (#688)
2023-05-16 10:20:52 -07:00
Dan Helfman 79b094d035 Bump version for release.
2023-05-16 09:59:09 -07:00
Dan Helfman b45e45f161 Partial conversion of showing repository labels in logs instead of paths (part of #635).
2023-05-16 09:36:50 -07:00
Divyansh Singh b10148844b change config_paths var name to used_config_paths to avoid collisions 2023-05-16 14:00:23 +05:30
Dan Helfman ba845d4008 Codespell saves the day.
2023-05-15 23:25:13 -07:00
Dan Helfman 645d29b040 Fix archive checks being skipped even when particular archives haven't been checked recently (#688).
2023-05-15 23:17:45 -07:00
Divyansh Singh 49b4d371ce create and add content to borgmatic-manifest.json 2023-05-16 00:24:19 +05:30
Divyansh Singh 1bc7bb4971 feat: store configs used to create an archive in the archive 2023-05-15 23:04:42 +05:30
Dan Helfman e66e449c3b Merge branch 'main' of ssh://projects.torsion.org:3022/borgmatic-collective/borgmatic
2023-05-14 12:51:23 -07:00
Dan Helfman 8eb05b840a Log a warning when "borgmatic borg" is run with an action that borgmatic natively supports (#694). 2023-05-14 09:59:28 -07:00
Dan Helfman f0fc638284 Docs: add Gentoo Linux to other ways to install (#696).
Reviewed-on: #696
2023-05-13 16:33:11 +00:00
ennui c6126a9226 Docs: add Gentoo Linux to other ways to install 2023-05-13 11:22:47 +00:00
ennui 62b11ba16b Docs: add Gentoo Linux to other ways to install 2023-05-13 11:20:47 +00:00
Dan Helfman 403ae0f698 Clarify configuration comment about source_directories also accepting files (#693).
2023-05-09 10:14:03 -07:00
Dan Helfman 92a2230a07 Add support for logging each log line as a JSON object via global "--log-json" flag (#680).
2023-05-08 23:00:49 -07:00
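
(A minimal sketch of one way to log each line as a JSON object, as the "--log-json" flag above does; this is the generic pattern, not borgmatic's exact output format.)

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render every log record as one JSON object per line."""

    def format(self, record):
        return json.dumps({
            'name': record.name,
            'level': record.levelname,
            'message': record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.INFO, handlers=[handler])

logging.getLogger('example').info('hello')
# {"name": "example", "level": "INFO", "message": "hello"}
```
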
Dan Helfman b3b08ee6d7 Fix error in "borgmatic restore" action when the configured repository path is relative (#691).
2023-05-07 21:21:35 -07:00
Dan Helfman 15ef37d89f Add test coverage for exact_options_completion() raising (#686).
2023-05-06 16:25:26 -07:00
Dan Helfman e84bac29e5 Remove value type for compatibility with Python 3.8 (#686). 2023-05-06 16:18:37 -07:00
Dan Helfman 1a956e8b05 Add fish shell completions to NEWS (#686).
2023-05-06 16:04:15 -07:00
Dan Helfman 4aae7968b8
Add fish shell completions support (#686).
Merge pull request #70 from isaec/feat/fish-completions
2023-05-06 16:00:25 -07:00
Isaac 66964f613c
formatting! 2023-05-06 15:56:50 -07:00
Isaac 614c1bf2e4
rename test to make function under test clearer 2023-05-06 15:52:42 -07:00
Isaac aa770b98f9
follow unit test module convention 2023-05-06 15:50:37 -07:00
Isaac 453b78c852
drop messages 2023-05-06 15:49:07 -07:00
Isaac 0657106893
clarify dedent test name 2023-05-06 15:46:15 -07:00
Isaac 43c532bc57
add test for dedent strip 2023-05-06 11:51:35 -07:00
Isaac efb81fc2c1
rename last arg helper function to current arg for clarity 2023-05-06 11:42:32 -07:00
Isaac c8f4344f89
add more justification to checks 2023-05-06 11:39:02 -07:00
Isaac a047f856a1
tweak docstring, add comment 2023-05-06 11:37:38 -07:00
Isaac d732059979
fix rotted comments 2023-05-06 11:32:10 -07:00
Isaac ccfdd6806f
test the value of completions 2023-05-06 11:29:14 -07:00
Isaac aa564ac5fe
fix the error thrown, unit test for it, and add string explanations 2023-05-06 11:25:15 -07:00
Isaac 77dbb5c499
create way for test cases to be shared 2023-05-06 11:16:45 -07:00
Isaac e623f401b9
write more unit tests 2023-05-06 10:56:54 -07:00
Isaac 372622fbb1
add more doccomments, drop a check 2023-05-06 10:46:27 -07:00
Isaac 469e0ccace
create doccomments, start writing unit tests 2023-05-06 10:42:06 -07:00
Isaac 59a6ce1462
replace double quotes with single quotes 2023-05-05 00:03:43 -07:00
Isaac 5a7a1747f2
add safety check to avoid infinite cat hang 2023-05-05 00:01:45 -07:00
Isaac b557d635fd
async validity check 2023-05-04 23:57:37 -07:00
Isaac d59b9b817f
support required actions 2023-05-04 23:44:54 -07:00
Isaac 16ac4824a5
handle typed without default params 2023-05-04 23:42:04 -07:00
Isaac 3592ec3ddf
don't show deprecated options 2023-05-04 23:32:09 -07:00
Isaac 8f3039be23
handle the expanding filters better 2023-05-04 23:23:29 -07:00
Isaac b4a38d8be9
fix flag showing up for paths 2023-05-04 23:06:11 -07:00
Isaac d962376a9d
refactor to only show specific options if possible 2023-05-04 21:58:30 -07:00
Isaac 193731a017
rename function 2023-05-04 21:14:48 -07:00
Isaac bbc3e9d717
show possible choices 2023-05-04 21:12:24 -07:00
Isaac 639e88262e
create working file completion 2023-05-04 20:17:26 -07:00
Isaac f12a10d888
start work on conditional file completion 2023-05-04 19:50:49 -07:00
Isaac 28efc85660
rearrange to improve legibility of the file 2023-05-04 18:11:13 -07:00
Isaac f1fd2e88dd
drop blank completion 2023-05-04 13:49:29 -07:00
Isaac 700f8e9d9c
replace .format with fstring 2023-05-04 13:39:48 -07:00
Isaac f04036e4a7
use fstring to produce completion lines 2023-05-04 13:33:21 -07:00
Isaac 062453af51
replace actionStr with action_name 2023-05-04 13:29:25 -07:00
Isaac b7fe2a5031
lowercase fish in docs 2023-05-04 13:27:57 -07:00
Isaac ca689505e5
add e2e fish test 2023-05-04 13:27:00 -07:00
Isaac 9ff5ea5240
add a unit test, fix isort and black 2023-05-04 13:22:09 -07:00
Dan Helfman 359afe5318 Error if --list is used with --json for create action (#680).
2023-05-03 17:16:36 -07:00
Dan Helfman 0b397a5bf9 Fix borgmatic error when not finding the configuration schema for certain "pip install --editable" development installs (#687).
2023-04-30 16:24:10 -07:00
Dan Helfman a60d7fd173 Run "borgmatic borg" action without capturing output so interactive prompts and flags like "--progress" still work.
2023-04-30 15:43:41 -07:00
Isaac f7e4024fca
add to readme 2023-04-28 14:02:06 -07:00
Isaac 98e3a81fcf
allow file completions as applicable 2023-04-28 12:42:26 -07:00
Isaac 9c77ebb016
continue deduping 2023-04-28 12:15:01 -07:00
Isaac 23f478ce74
use less completion lines 2023-04-28 12:13:08 -07:00
Isaac d265b6ed6f
add comments in generated files 2023-04-28 11:57:16 -07:00
Dan Helfman 77c3161c77 Fix canonical home link in README.
2023-04-28 08:36:03 -07:00
Isaac 2e658cfa56
only allow one parser 2023-04-27 21:57:50 -07:00
Isaac 412d18f218
show sub options 2023-04-27 21:31:53 -07:00
Isaac 8060586d8b
fix the script and drop unneeded options 2023-04-27 20:05:17 -07:00
Isaac 25b3db72a0
make more precise, fix the version check fn 2023-04-27 19:58:22 -07:00
Isaac 5678f3a96e
basic working version 2023-04-27 19:44:11 -07:00
Isaac 28b152aedd
make upgrade message a template 2023-04-27 19:31:42 -07:00
Isaac 0009471f67
start work on completion 2023-04-27 18:46:13 -07:00
jetchirag a62ac42cca Merge branch 'main' into borg2-archive-flags 2023-04-27 16:57:29 +00:00
Chirag Aggarwal 68ee9687f5 Added tests for all subcommands and used black formatter
Signed-off-by: Chirag Aggarwal <thechiragaggarwal@gmail.com>
2023-04-27 22:27:23 +05:30
Chirag Aggarwal 32395e47f9 Added duplicate flags test for prune
Signed-off-by: Chirag Aggarwal <thechiragaggarwal@gmail.com>
2023-04-24 20:49:41 +05:30
Chirag Aggarwal 8aaba9bb0a Added new flags to prune test for review
Signed-off-by: Chirag Aggarwal <thechiragaggarwal@gmail.com>
2023-04-24 20:43:34 +05:30
Chirag Aggarwal 96aca4f446 Updated existing tests to use new parameters
Signed-off-by: Chirag Aggarwal <thechiragaggarwal@gmail.com>
2023-04-24 20:24:41 +05:30
Dan Helfman 22b84a2fea Switch to Docker Compose for dev-docs script, so podman-docker is no longer needed for Podman users.
2023-04-22 10:07:40 -07:00
Dan Helfman 5962fd473e Another try. Backing out psql error changes (#678).
2023-04-21 10:34:50 -07:00
Dan Helfman 7e64f415ba Attempt to fix failing end-to-end database test that only fails in CI.
2023-04-21 10:03:29 -07:00
Dan Helfman ae12ccd8e6 And fixing again...
2023-04-21 09:31:37 -07:00
Dan Helfman 3cefeaa229 Fix end-to-end test command-line syntax.
2023-04-21 09:30:08 -07:00
Dan Helfman 71b75800cd Get more verbose in the end-to-end test restore.
2023-04-20 23:32:57 -07:00
Dan Helfman 9ca31530a0 Add missing test for check_all_source_directories_exist() raising.
2023-04-20 23:15:22 -07:00
Dan Helfman b555fcb956 Add "source_directories_must_exist" expansion fix to NEWS (#682). 2023-04-20 23:08:21 -07:00
Dan Helfman 5829196b70 Expand source directories when checking for existence (#682).
Reviewed-on: #683
2023-04-21 06:05:59 +00:00
Jesse Johnson a14870ce48 Expand source directories when checking for existence (#682). 2023-04-21 05:52:04 +00:00
Dan Helfman ee5c25f3bd Add additional tests for PostgreSQL hook fixes (#678).
2023-04-20 21:44:42 -07:00
Dan Helfman da0f5a34f2 Fix multiple bugs in PostgreSQL hook (#678).
Reviewed-on: #677
2023-04-21 04:05:22 +00:00
Dan Helfman 065be1d9d4 More inclusive language.
2023-04-20 14:28:04 -07:00
Dan Helfman f2f6fb537a !!!
2023-04-20 14:19:34 -07:00
Dan Helfman 7ff994a964 🤦
2023-04-20 13:56:12 -07:00
Dan Helfman 08edecacae WTF?!
2023-04-20 13:55:37 -07:00
Dan Helfman 1e03046d9a *Seriously?*
2023-04-20 13:50:26 -07:00
Dan Helfman c9bf52ee45 Sigh again.
2023-04-20 13:46:49 -07:00
Dan Helfman f947525fca ?
2023-04-20 13:45:26 -07:00
Dan Helfman 7f7b89d79c Trying a different approach: Ditching Podman-in-Podman.
2023-04-20 12:03:51 -07:00
Dan Helfman 499e42df35 😭
2023-04-20 11:58:06 -07:00
Dan Helfman 4302a07c9b WTF.
2023-04-20 11:53:52 -07:00
Dan Helfman 1721c05d2e Yet more.
2023-04-20 11:52:23 -07:00
Dan Helfman 8a31c27078 To see what sticks.
2023-04-20 11:50:25 -07:00
Dan Helfman d6e1cef356 Throwing stuff at the wall.
2023-04-20 11:49:43 -07:00
Dan Helfman f82bf619ff More.
2023-04-20 11:41:35 -07:00
Dan Helfman 02eeca1fc2 Hmm.
2023-04-20 11:36:30 -07:00
Dan Helfman 4e78cf1b95 ಠ_ಠ
2023-04-20 11:33:15 -07:00
Dan Helfman 9e9a7c50e5 😊🔫
2023-04-20 11:30:30 -07:00
Dan Helfman 51bc53e5ca Whee.
2023-04-20 11:24:59 -07:00
Dan Helfman b85538c54c Double sigh.
2023-04-20 11:11:49 -07:00
Dan Helfman bb5028e484 Sigh.
2023-04-20 11:11:08 -07:00
Dan Helfman 53ee0fcfad Another attempt at Podman-in-Podman incantations.
2023-04-20 11:06:15 -07:00
Dan Helfman 5f8c79dd16 Attempt to get Podman-in-Podman builds working.
2023-04-20 10:50:44 -07:00
Dan Helfman 0a6f5452f4 Fix broken Podman image name.
2023-04-19 23:16:15 -07:00
Dan Helfman 269fac074b Attempt to use Podman-in-Podman for building docs instead of Docker-in-Podman.
2023-04-19 23:14:51 -07:00
Dan Helfman 3b21ce4ce8 Rename "master" development branch to "main" to use more inclusive language (#684).
2023-04-19 21:43:08 -07:00
Dan Helfman 8bb7631f50 Fix missing mock in unit test.
2023-04-19 21:22:51 -07:00
Dan Helfman 9f5769f87b Make docs/schema a little more container agnostic / less Docker specific.
2023-04-16 15:41:17 -07:00
Dan Helfman 991e08f16d Add Unraid borgmatic installation link to docs.
2023-04-15 09:13:13 -07:00
Chirag Aggarwal 1ee56805f1 Merge remote-tracking branch 'upstream/master' into borg2-archive-flags 2023-04-15 17:29:20 +05:30
Dan Helfman 25506b8d2c Backing out upgrade of end-to-end test packages, because apparently we can't have nice things.
2023-04-14 23:47:51 -07:00
Dan Helfman 28e62d824b Upgrade end-to-end test packages.
2023-04-14 23:28:07 -07:00
Dan Helfman 7ee37a890e Fix broken end-to-end tests by no longer using an editable package there, a work-around for https://github.com/pypa/packaging-problems/issues/609
2023-04-14 23:22:07 -07:00
Dan Helfman 8cb5a42a9e Drop deprecated pkg_resources in favor of importlib.metadata and packaging.
2023-04-14 21:21:25 -07:00
Dan Helfman 5dbb71709c Upgrade test requirements and code style requirements. Auto-reformat code accordingly.
2023-04-14 19:35:24 -07:00
Dan Helfman 1c67db5d62 Add documentation for "borgmatic restore --schema" (#375).
2023-04-14 16:40:58 -07:00
Dan Helfman 96d4a8ee45 Add "borgmatic restore --schema" flag to NEWS (#375).
2023-04-14 16:33:06 -07:00
Dan Helfman 81e167959b
feat: restore specific schemas (#375).
Merge pull request #67 from diivi/feat/restore-specific-schemas
2023-04-14 16:26:25 -07:00
Divyansh Singh f273e82d74 add tests 2023-04-15 02:57:51 +05:30
Jakub Jirutka 17f122bfe5 Use psql instead of pg_restore when format is "plain"
pg_restore: error: input file appears to be a text format dump. Please use psql.
2023-04-14 17:38:19 +02:00
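
(The error quoted in this commit comes from pg_restore itself: a plain-format dump is an SQL script and must go through psql, while pg_restore only reads custom, directory, and tar archives. A sketch of the dispatch, using a hypothetical config key:)

```python
def build_restore_command(database):
    """Pick the right PostgreSQL restore tool for the dump format."""
    if database.get('format') == 'plain':  # hypothetical config key
        # Plain dumps are SQL text; feed them to psql.
        return ['psql', '--dbname', database['name']]

    # Custom/directory/tar archives go through pg_restore.
    return ['pg_restore', '--dbname', database['name']]

print(build_restore_command({'name': 'mydb', 'format': 'plain'}))
print(build_restore_command({'name': 'mydb', 'format': 'custom'}))
```
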
Jakub Jirutka f0f43174c6 Swap if-else in restore_database_dump in postgresql hook for cleanliness 2023-04-14 17:38:19 +02:00
Jakub Jirutka dfccc1b94a Exit on error when restoring all PostgreSQL databases
"--set ON_ERROR_STOP=on" is equivalent to "--exit-on-error" in
pg_restore.
2023-04-14 17:38:18 +02:00
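
(To make the equivalence concrete, a hedged sketch of both invocations; psql has no "--exit-on-error" flag, so the ON_ERROR_STOP variable provides the same abort-on-first-error behavior.)

```python
# pg_restore aborts on the first error via an explicit flag:
pg_restore_command = ['pg_restore', '--exit-on-error', '--dbname', 'mydb']

# psql lacks that flag; setting ON_ERROR_STOP=on is the equivalent:
psql_command = ['psql', '--set', 'ON_ERROR_STOP=on', '--dbname', 'mydb']
```
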
Jakub Jirutka 195024e505 Fix psql_command and pg_restore_command to accept command with arguments
These commands are executed without `shell=True`, so the subprocess
module treats e.g. "docker exec my_pg_container psql" as a single command
(resulting in Errno 2 "No such file or directory") instead of a command
with arguments.
2023-04-14 17:37:38 +02:00
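
(A minimal illustration of the fix: without shell=True, subprocess treats the whole configured string as a single executable name, so it must be split into arguments first; shlex.split also respects quoting.)

```python
import shlex

psql_command = 'docker exec my_pg_container psql'

# Buggy: subprocess.run([psql_command]) looks for an executable literally
# named "docker exec my_pg_container psql" and fails with
# "[Errno 2] No such file or directory".

# Fixed: split the configured command into an argument list first.
print(shlex.split(psql_command))
# ['docker', 'exec', 'my_pg_container', 'psql']
```
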
Jakub Jirutka 19a00371f5 Run "psql" with "--no-psqlrc"
Some settings in user's .psqlrc, e.g. "linestyle unicode", may break the
CSV output. "--no-psqlrc" tells psql not to read the startup file.

This is not necessary for the analyze_command and restore_command (with
all_databases), but it's generally recommended when running psql from a
script.
2023-04-14 17:37:37 +02:00
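
(A sketch of the kind of invocation this commit hardens; the query is illustrative. "--no-psqlrc" keeps user startup settings such as "linestyle unicode" from corrupting the machine-parsed CSV output.)

```python
import subprocess

list_command = [
    'psql',
    '--no-psqlrc',  # skip ~/.psqlrc so display settings can't break CSV
    '--csv',
    '--command',
    'SELECT datname FROM pg_database;',
]

output = subprocess.check_output(list_command, text=True)
print(output.splitlines())
```
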
Jakub Jirutka 874fba7672 Fix PostgreSQL hook not using "psql_command" for list when dumping "all" 2023-04-14 15:13:49 +02:00
Dan Helfman 50b0a9ce38 Remove newline at end of file.
2023-04-13 19:13:50 -07:00
Dan Helfman 8802f6888e Fix "TypeError: 'module' object is not callable" in test_commands.py' (#676).
Reviewed-on: #676
2023-04-14 02:12:58 +00:00
polyzen ebe5c5e839 Fix "TypeError: 'module' object is not callable" in test_commands.py 2023-04-14 01:01:31 +00:00
Dan Helfman 613f6c602c Bump version for release.
2023-04-13 15:12:19 -07:00
Dan Helfman 4a94c2c9bf Selectively omit list values when including configuration files (#672).
2023-04-13 14:39:36 -07:00
Dan Helfman 08843d51d9 Replace "sequence" with "list" in docs for consistency.
2023-04-12 10:30:23 -07:00
Dan Helfman ea9213cb03 Spelling.
2023-04-11 22:12:57 -07:00
Dan Helfman 1ea4433aa9 Selectively shallow merge certain mappings or sequences when including configuration files (#672).
2023-04-11 21:49:10 -07:00
Divyansh Singh 2fea429d78 collection restore for mongodb 2023-04-12 09:34:19 +05:30
Divyansh Singh 264cebd2b1 complete psql multi schema backup 2023-04-11 23:19:49 +05:30
Dan Helfman 4c0e2cab78 View the results of configuration file merging via "validate-borgmatic-config --show" flag (#673).
2023-04-11 10:49:09 -07:00
Dan Helfman 31a2ac914a Add optional support for running end-to-end tests and building documentation with rootless Podman instead of Docker.
2023-04-10 14:26:54 -07:00
Dan Helfman d6ef0df50d Mention #670 being fixed in NEWS.
2023-04-09 10:01:08 -07:00
Dan Helfman cc60a71210 Clarify "log_file" NEWS (#413).
2023-04-06 14:12:12 -07:00
Dan Helfman 4cd7556a34 Add "log_file" command hook context to NEWS and docs (#413).
2023-04-06 13:58:37 -07:00
Dan Helfman b4b1fa939d
feat: add logfile name to hook context for interpolation
Merge pull request #68 from diivi/feat/add-log-filename-to-hook-context
2023-04-06 13:46:45 -07:00
Divyansh Singh 16d7131fb7 refactor tests 2023-04-07 01:00:38 +05:30
Divyansh Singh 091d60c226 refactor and improve tests 2023-04-06 12:36:10 +05:30
Divyansh Singh 0fbdf8d860 feat: add logfile name to hook context for interpolation 2023-04-06 09:31:24 +05:30
Dan Helfman 192bfe46a9 Fix error when running the "prune" action with both "archive_name_format" and "prefix" options set (#668).
2023-04-05 14:58:05 -07:00
Dan Helfman 080c3afa0d Fix documentation referring to "archive_name_format" in wrong configuration section.
2023-04-05 14:00:21 -07:00
Divyansh Singh 9bc2322f9a feat: restore specific schemas 2023-04-06 02:10:36 +05:30
Dan Helfman a9a65ebe54 Fix integration tests to actually assert (#666).
2023-04-04 22:11:36 -07:00
Dan Helfman 616eb6b6da Fix error with "info --match-archives" and fix "--match-archives" overriding logic (#666).
2023-04-04 21:25:10 -07:00
Dan Helfman 00d1dea94e Bump version for release.
2023-04-03 16:11:25 -07:00
Dan Helfman 127ad1dd1f
Add favicon to documentation.
Merge pull request #66 from diivi/add-favicon
2023-04-03 10:22:12 -07:00
Divyansh Singh fc58ba5763 add favicon to documentation 2023-04-03 17:36:24 +05:30
Dan Helfman 7e6bee84b0 Add "--log-file-format" flag for customizing the log message format (#658).
2023-04-02 23:06:36 -07:00
Dan Helfman 01811e03ba Tagged the auto-matching archive behavior as breaking in NEWS.
2023-04-02 14:38:35 -07:00
Dan Helfman 9712d00680 Add "match_archives" option (#588).
2023-04-01 23:57:55 -07:00
Dan Helfman 275e99d0b9 Add codespell link to documentation.
2023-04-01 14:38:52 -07:00
Dan Helfman b9328e6d42 Add spellchecking of source code to NEWS.
2023-04-01 14:09:48 -07:00
Dan Helfman 2934d0902c Code spell checking on every test run!
2023-04-01 11:03:59 -07:00
Dan Helfman 1ad43ad4b5
Fix: run typos to fix various typos in source code.
Merge pull request #65 from diivi/fix/run-typos
2023-04-01 10:44:11 -07:00
Divyansh Singh 32ab17fa46 merge 2023-04-01 22:12:41 +05:30
Divyansh Singh 6054ced931 fix: run typos 2023-04-01 22:10:32 +05:30
Dan Helfman 1412038ed3
Fix randomly failing test: test_log_outputs_kills_other_processes_when_one_errors (#635).
Merge pull request #64 from kxxt/master
2023-03-31 23:19:57 -07:00
kxxt fa8bc285c8 Fix randomly failing test. 2023-04-01 14:02:30 +08:00
Dan Helfman f256908b27 Document wording tweaks (#479).
2023-03-31 15:36:59 -07:00
Dan Helfman 3f78ac4085 Automatically use the "archive_name_format" option to filter which archives get used for borgmatic actions that operate on multiple archives (#479).
2023-03-31 15:21:08 -07:00
Dan Helfman 5f595f7ac3 Fix regression in which the "transfer" action produced a traceback (#663).
2023-03-30 23:21:20 -07:00
Dan Helfman b27e625a77 Update schema comment for check_repositories to mention labels (#635).
2023-03-28 15:44:38 -07:00
Dan Helfman fc2c181b74 Add missing Docker Compose depends.
2023-03-28 15:31:37 -07:00
Dan Helfman 010b82d6d8 Remove unnecessary cd in dev documentation.
2023-03-28 12:45:39 -07:00
Dan Helfman aaf3462d17 Fix Drone indentation.
2023-03-28 12:03:12 -07:00
Dan Helfman f709125110 Error out if run-full-tests is run outside a test container.
2023-03-28 12:02:07 -07:00
Dan Helfman 3512191f3e Add check_repositories regression fix to NEWS (#662).
2023-03-28 11:45:55 -07:00
Dan Helfman 06b5d81baa Merge branch 'master' of github.com:borgmatic-collective/borgmatic 2023-03-28 11:15:31 -07:00
Dan Helfman 9d71bf916e
fix: make check repositories work with dict and str repositories (#662).
Merge pull request #63 from diivi/fix/check-repositories-by-label
2023-03-28 11:15:01 -07:00
Dan Helfman 59fe01b56d Update script comment.
2023-03-28 11:09:25 -07:00
Divyansh Singh 08e358e27f add and update tests 2023-03-28 22:51:35 +05:30
Divyansh Singh ce22d2d302 reformat 2023-03-28 22:29:21 +05:30
Divyansh Singh 2d08a63e60 fix: make check repositories work with dict and str repositories 2023-03-28 22:14:50 +05:30
Chirag Aggarwal 98c6aa6443 Use Square brackets to denote version specific flag
Signed-off-by: Chirag Aggarwal <thechiragaggarwal@gmail.com>
2023-03-28 18:15:49 +05:30
Chirag Aggarwal edd79ed86c removed individual action parameters, and used make_flags_from_arguments
Signed-off-by: Chirag Aggarwal <thechiragaggarwal@gmail.com>
2023-03-28 18:10:42 +05:30
Dan Helfman d96f2239c1 Update OpenBSD borgmatic link.
2023-03-27 23:43:39 -07:00
Dan Helfman 67a349ae44 I had one job... (#461).
2023-03-27 23:28:36 -07:00
Dan Helfman dcefded0fa Document that most command-line flags are not config-file-able (#461).
2023-03-27 23:21:14 -07:00
Dan Helfman 1bcdebd1cc Fix multiple repositories example.
2023-03-27 23:16:44 -07:00
Dan Helfman 7a8e0e89dd Mention prior versions of borgmatic in repositories schema.
2023-03-27 21:54:01 -07:00
Dan Helfman 489ae080e5 Update docs with a few more "path:" repositories references (#635).
2023-03-27 21:49:31 -07:00
Dan Helfman 0e3da7be63 Fix repository schema description.
2023-03-27 16:15:24 -07:00
Dan Helfman c5ffb76dfa Bump version for release.
2023-03-27 15:56:49 -07:00
Dan Helfman 61c7b8f13c Add optional repository labels so you can select a repository via "--repository yourlabel" at the command-line (#635).
2023-03-27 15:54:55 -07:00
Dan Helfman 3e8e38011b
Labels for repositories (#635).
Merge pull request #57 from diivi/feat/tag-repos
2023-03-27 15:46:22 -07:00
Chirag Aggarwal 4fa4fccab7 Use make_flags_from_arguments on mount; Pending test fixes
Signed-off-by: Chirag Aggarwal <thechiragaggarwal@gmail.com>
2023-03-27 23:24:17 +05:30
Dan Helfman d0d3a39833 When a database command errors, display and log the error message instead of swallowing it (#396).
2023-03-27 10:36:39 -07:00
Divyansh Singh 8bef1c698b add feature to docs 2023-03-27 22:16:39 +05:30
Dan Helfman acbbd6670a Removing debugging command output.
2023-03-26 21:26:35 -07:00
Divyansh Singh b336b9bedf add tests for repo labels 2023-03-27 00:19:23 +05:30
Divyansh Singh ec9def4e71 rename repository arg to repository_path in all borg actions 2023-03-26 23:52:25 +05:30
Divyansh Singh a136fda92d check all tests 2023-03-26 23:35:47 +05:30
Divyansh Singh b511e679ae remove optional label for repos from tests 2023-03-26 16:59:29 +05:30
Dan Helfman f56fdab7a9 Add troubleshooting documentation on PostgreSQL/MySQL authentication errors.
2023-03-25 17:08:17 -07:00
jetchirag ff1f4dc09c minor fixes to prune argument help text 2023-03-26 02:06:46 +05:30
jetchirag 141474ff07 Added TIMESPAN flags to match archives in various commands (Borg2 feature)
Signed-off-by: jetchirag <thechiragaggarwal@gmail.com>
2023-03-26 01:58:03 +05:30
Dan Helfman 8c0eea7229 Add additional documentation link to environment variable feature. Rename constants section.
2023-03-25 08:56:25 -07:00
Dan Helfman 19e95628c3 Add documentation and NEWS for custom constants feature (#612).
2023-03-24 23:47:05 -07:00
Dan Helfman 4d01e53414
Fix: replace primitive values in config without quotes (#612).
Merge pull request #62 from diivi/fix/config-json-replacement
2023-03-24 23:45:36 -07:00
Divyansh Singh a082cb87cb fix: replace primitive values in config without quotes 2023-03-25 12:12:56 +05:30
Dan Helfman 1c51a8e229
Allow defining custom variables in config file (#612).
Merge pull request #60 from diivi/feat/constants-support
2023-03-24 22:50:57 -07:00
Dan Helfman d14a8df71a Hide obnoxious ruamel.yaml warnings during test runs.
2023-03-24 22:43:10 -07:00
Dan Helfman 739a58fe47 Rename scripts/run-full-dev-tests to scripts/run-end-to-end-dev-tests and make it run end-to-end tests only.
2023-03-24 16:24:00 -07:00
Dan Helfman af3431d6ae
fix: docs cli reference create spelling
Merge pull request #61 from diivi/docs/cli-reference
2023-03-24 16:09:50 -07:00
Dan Helfman 9851abc2e1 Add documentation on backing up a database running in a container (#649).
2023-03-24 15:18:49 -07:00
Divyansh Singh 61ce6f0473 fix: docs cli reference create spelling 2023-03-25 02:44:56 +05:30
Divyansh Singh 78e8bb6c8c reformat 2023-03-25 02:08:52 +05:30
Divyansh Singh af95134cd2 add test for complex constant 2023-03-25 02:03:36 +05:30
Divyansh Singh d6dfb8753a reformat 2023-03-25 01:50:47 +05:30
Divyansh Singh 1bc003560a Merge branch 'master' of https://github.com/diivi/borgmatic into feat/tag-repos 2023-03-25 01:39:26 +05:30
Divyansh Singh aeaf69f49e pass all tests 2023-03-25 01:34:03 +05:30
Divyansh Singh e83ad9e1e4 use repository["path"] instead of repository 2023-03-25 01:04:57 +05:30
Dan Helfman f42890430c Add code style plugins to enforce use of Python f-strings and prevent single-letter variables.
2023-03-23 23:11:14 -07:00
Divyansh Singh 6f300b0079 feat: constants support 2023-03-24 02:39:37 +05:30
Dan Helfman 9bec029b4f
Fix: remove extra links from docs css.
Merge pull request #59 from diivi/fix/remove-extra-links-from-css
2023-03-23 12:57:55 -07:00
Divyansh Singh 08afad5d81 end with newline 2023-03-24 01:25:15 +05:30
Divyansh Singh a01dc62468 fix: remove extra links from docs css 2023-03-24 01:23:40 +05:30
Dan Helfman 8b61225b13
Copy to clipboard support in documentation.
Merge pull request #58 from diivi/docs/copy-to-clipboard-support
2023-03-23 12:39:41 -07:00
Divyansh Singh 66d2f49f18 docs: copy to clipboard support 2023-03-23 14:45:23 +05:30
Dan Helfman 0a72c67c6c Add missing source directory error fix to NEWS (#655).
2023-03-22 13:02:22 -07:00
Dan Helfman ab64b7ef67
Fix error when a source directory doesn't exist and databases are configured (#655).
Merge pull request #56 from diivi/fix/no-error-on-database-backup-without-source-dirs
2023-03-22 12:59:01 -07:00
Divyansh Singh 1e3a3bf1e7 review 2023-03-23 01:18:06 +05:30
Divyansh Singh 7a2f287918 reformat base 2023-03-23 01:08:30 +05:30
Divyansh Singh 8a63c49498 feat: tag repos 2023-03-23 01:01:26 +05:30
Divyansh Singh 3b5ede8044 remove extra parameter from function call 2023-03-22 23:11:44 +05:30
Divyansh Singh bd235f0426 use exit_code_indicates_error and modify it to accept a command 2023-03-22 16:23:53 +05:30
Divyansh Singh 09183464cd fix: no error on database backups without source dirs 2023-03-22 09:41:39 +05:30
Dan Helfman ca6fd6b061 Add confusing error message fix to NEWS (#623).
2023-03-21 14:25:20 -07:00
Dan Helfman dd9a64f4b6
Fix confusing message when an error occurs running actions for a configuration file (#623).
Merge pull request #55 from diivi/fix/rephrase-error-message
2023-03-21 14:23:09 -07:00
Divyansh Singh 23e7f27ee4 fix: rephrase error when running from config
to avoid confusion, as the user might think the problem is with their config file
2023-03-22 02:22:43 +05:30
Dan Helfman f9ef52f9a5 Remove unused module and outdated test expectations (#576).
2023-03-21 10:29:17 -07:00
Dan Helfman 3f17c355ca Add "file://" paths to NEWS (#576). 2023-03-21 10:24:51 -07:00
Dan Helfman c83fae5e5b
Support file:// paths for repositories (#576).
Merge pull request #54 from diivi/feat/file-urls-support
2023-03-21 10:22:39 -07:00
Divyansh Singh 39ad8f64c4 add tests and remove magic number 2023-03-21 17:06:03 +05:30
Divyansh Singh e86d223bbf Merge branch 'master' of https://github.com/diivi/borgmatic into feat/file-urls-support 2023-03-21 16:55:05 +05:30
Divyansh Singh 86587ab2dc send repo directly to extract and export_tar 2023-03-20 21:51:45 +05:30
Divyansh Singh 58c95d8015 feat: file:// URLs support 2023-03-20 02:43:23 +05:30
Dan Helfman 6351747da5 Add NixOS package link to installation docs.
2023-03-19 09:02:47 -07:00
Dan Helfman 55c153409e Add "source_directories_must_exist" option to NEWS (#501).
2023-03-18 14:07:38 -07:00
Dan Helfman b115fb2fbe Merge branch 'master' of github.com:borgmatic-collective/borgmatic 2023-03-18 14:01:52 -07:00
Dan Helfman 31d04d9ee3
Optionally error if a source directory does not exist.
feat: add optional check for existence of source directories
2023-03-18 13:59:20 -07:00
Divyansh Singh f803836416 reformat 2023-03-18 17:27:33 +05:30
Divyansh Singh 997f60b3e6 add tests 2023-03-18 17:24:21 +05:30
Dan Helfman c84b26499b Add "borg_files_cache_ttl" option to NEWS.
2023-03-17 19:29:10 -07:00
Dan Helfman 214ae81cbb Add option to set borg_files_cache_ttl in config (#618).
Reviewed-on: #654
2023-03-18 02:24:41 +00:00
Divyansh Singh d17b2c74db feat: add optional check for existence of source directories 2023-03-18 04:35:55 +05:30
Soumik Dutta fb9677230b add test to ensure integers are converted to strings
before being set as environment variable values

Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-03-18 02:57:56 +05:30
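
(The test added above guards a real os.environ constraint: environment variable values must be strings, so integer options need converting first. A minimal illustration:)

```python
import os

borg_files_cache_ttl = 20  # an integer option from configuration

# Buggy: os.environ rejects non-string values with a TypeError.
# os.environ['BORG_FILES_CACHE_TTL'] = borg_files_cache_ttl

# Fixed: convert to a string before exporting.
os.environ['BORG_FILES_CACHE_TTL'] = str(borg_files_cache_ttl)
```
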
Soumik Dutta 0db137efdf add option to set borg_files_cache_ttl in config
Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-03-18 01:48:24 +05:30
Dan Helfman e6605c868d Clarify check frequency default behavior (#653).
2023-03-17 10:09:36 -07:00
Dan Helfman bdfe4b61eb Bump version for release.
2023-03-16 13:42:15 -07:00
Dan Helfman ca4461820d Add support for Python 3.11.
2023-03-16 13:29:37 -07:00
Dan Helfman 7605838bfe Add "--repository" flag to all actions where it makes sense (#564).
2023-03-16 13:27:08 -07:00
Dan Helfman 7a784b8eba Add "--repository" flag to common actions (where it makes sense) (#652).
Reviewed-on: #652
2023-03-16 20:21:40 +00:00
Nain 3e22414613 Update tests
Make them more explicit. Also formatting.
2023-03-16 14:01:29 -04:00
Nain 5f87ea3ec5 Add "--repository" flag to the "create" action 2023-03-16 13:15:49 -04:00
Nain a8aeace5b5 Add "--repository" flag to the "compact" action 2023-03-16 11:13:45 -04:00
Nain 480addd7ce Add "--repository" flag to the "check" action 2023-03-16 10:41:13 -04:00
Nain ce0ce4cd1c Merge mostly repetitive tests 2023-03-16 08:23:21 -04:00
Nain 7de9260b0d Remove test now that --repository isn't expected to error
As discussed #652#issuecomment-5579
2023-03-15 14:59:12 -04:00
Nain cdbe6cdf3a Add "--repository" flag to the "prune" action
part of ticket #564
2023-03-15 14:43:17 -04:00
Dan Helfman 95dcc20d5f Better indicate position of additional docs on page (#651).
Reviewed-on: #651
2023-03-15 18:13:27 +00:00
Dan Helfman 49e0494924 Fix --editable (mode) option given --user as arg (#648).
Reviewed-on: #650
2023-03-15 18:06:46 +00:00
Nain 5fad2bd408 Better indicate position of additional docs on page
On wide screens, the position of the documentation (how-to and reference guide)
is at the same level as #it's-your-data.-keep-it-that-way.

So the jump due to the anchor link makes it seem like we're taken to the top, aka the
main content. Indicate that links are to the left so the reader doesn't recurse.
2023-03-15 07:54:49 -04:00
Nain c6829782a3 Fix --editable (mode) option given --user as arg
The --user option should come either before or after `--editable .`, not in between.
Before seems better.
2023-03-15 06:50:47 -04:00
Dan Helfman 8cec7c74d8 Add "--strip-components all" on the "extract" action to remove leading path components (#647).
2023-03-09 10:09:16 -08:00
Dan Helfman d3086788eb Document how to list database dumps in an archive.
2023-03-08 16:09:41 -08:00
Dan Helfman 8d860ea02c
Enhanced docs with info on fetching mysql database size
Merge pull request #46 from Jelle-SamsonIT/patch-3
2023-03-08 15:52:28 -08:00
Dan Helfman b343363bb8 Change the default action order to: "create", "prune", "compact", "check" (#304).
2023-03-08 14:05:06 -08:00
Dan Helfman 9db31bd1e9 Run any command-line actions in the order specified instead of using a fixed ordering (#304).
continuous-integration/drone/push Build is passing Details
2023-03-08 13:19:41 -08:00
Dan Helfman d88bcc8be9 Add Healthchecks "log" state feature to NEWS.
continuous-integration/drone/push Build is passing Details
2023-03-07 15:45:23 -08:00
Dan Helfman 332f7c4bb6 Add support for healthchecks "log" feature (#628).
continuous-integration/drone/push Build is failing Details
Reviewed-on: #645
2023-03-07 22:21:30 +00:00
Dan Helfman 5d19d86e4a Add flake8-quotes to complain about incorrect quoting so I don't have to!
continuous-integration/drone/push Build is passing Details
2023-03-07 14:08:35 -08:00
Soumik Dutta 044ae7869a fix tests
Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-03-08 03:30:12 +05:30
Dan Helfman 62ae82f2c0 Mention searching for files in the extract a backup guide.
continuous-integration/drone/push Build is passing Details
2023-03-06 22:59:34 -08:00
Dan Helfman 66194b7304 Update dates in documentation examples.
continuous-integration/drone/push Build is passing Details
2023-03-06 22:41:43 -08:00
Soumik Dutta 98e429594e added tests to make sure unsupported log states are detected
Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-03-06 20:31:00 +05:30
Soumik Dutta 4fcfddbe08 return early if unsupported state is passed
Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-03-06 19:58:57 +05:30
Soumik Dutta f442aeae9c fix logs_monitor_start_error()
Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-03-06 05:21:56 +05:30
Soumik Dutta e211863cba update test_borgmatic.py
Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-03-06 05:12:24 +05:30
Soumik Dutta 45256ae33f add test for healthchecks
Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-03-06 03:38:08 +05:30
Soumik Dutta 1573d68fe2 update schema.yaml description
also add monitor.State.LOG to cronitor.

Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-03-05 21:57:13 +05:30
Soumik Dutta 69f6695253 Add support for healthchecks "log" feature #628
Signed-off-by: Soumik Dutta <shalearkane@gmail.com>
2023-03-05 19:27:32 +05:30
Dan Helfman a7c055264d
Fix incorrect documentation TOC background by removing extra dark mode styles.
continuous-integration/drone/push Build is passing Details
Merge pull request #52 from diivi/fix/remove-special-dark-mode-attributes
2023-03-04 16:18:04 -08:00
Divyansh Singh db18364a73 fix: remove extra dark mode styles 2023-03-05 03:16:46 +05:30
Dan Helfman 22498ebd4c In the documentation, mention what version of borgmatic introduced SQLite support.
continuous-integration/drone/push Build is passing Details
2023-03-04 10:50:28 -08:00
Dan Helfman e1f02d9fa5 Add SQLite feature to NEWS and also integrations.
continuous-integration/drone/push Build is passing Details
2023-03-04 09:59:16 -08:00
Dan Helfman 9ec220c600
Add SQLite database dump/restore hook (#295).
feat: add dump-restore support for sqlite databases
2023-03-04 09:47:21 -08:00
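Going by the feature description, a borgmatic configuration for this hook might look like the following sketch (the `sqlite_databases` option and its fields are assumptions, not quoted from this log):

```yaml
hooks:
    sqlite_databases:
        - name: myapp                          # label used for the dump
          path: /var/lib/myapp/data.sqlite3    # database file to dump/restore
```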
Divyansh Singh cf0275a3ed remove test path 2023-03-04 23:00:57 +05:30
Divyansh Singh c71eb60cd2 mock os.remove instead of actually removing a file 2023-03-04 13:08:30 +05:30
Divyansh Singh 675e54ba9f use os.remove and improve tests 2023-03-04 12:43:07 +05:30
Divyansh Singh 1793ad74bd add sqlite for e2e tests 2023-03-04 02:41:14 +05:30
Divyansh Singh 767a7d900b e2e tests schema update 2023-03-04 01:29:01 +05:30
Divyansh Singh 903507bd03 code review 2023-03-04 01:27:07 +05:30
Dan Helfman b6cf7d2adc Bump version for release.
continuous-integration/drone/push Build is passing Details
continuous-integration/drone/tag Build is passing Details
2023-03-02 15:34:22 -08:00
Dan Helfman a071e02d20 With the "create" action and the "--list" ("--files") flag, only show excluded files at verbosity 2 (#620).
continuous-integration/drone/push Build is failing Details
2023-03-02 15:33:42 -08:00
Divyansh Singh 3aa88085ed formatting fix 2023-03-03 00:01:52 +05:30
Divyansh Singh af1cc27988 feat: add dump-restore support for sqlite databases 2023-03-02 23:55:16 +05:30
Dan Helfman dbf8301c19 Add "checkpoint_volume" configuration option to create checkpoints every specified number of bytes.
continuous-integration/drone/push Build is passing Details
2023-02-27 10:47:17 -08:00
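A rough sketch of how that option might be set (its placement under the storage section is an assumption):

```yaml
storage:
    # Ask Borg to write a checkpoint roughly every 1 GiB of backed-up data.
    checkpoint_volume: 1073741824
```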
Dan Helfman 2a306bef12 Fix tests.
continuous-integration/drone/push Build is passing Details
2023-02-26 23:34:17 -08:00
Dan Helfman 2a36a2a312 Add "--repository" flag to the "rcreate" action. Add "--progress" flag to the "transfer" action.
continuous-integration/drone/push Build is failing Details
2023-02-26 23:22:23 -08:00
Dan Helfman d7a07f0428 Support status character changes in Borg 2.0.0b5 when filtering out special files that cause Borg to hang.
continuous-integration/drone/push Build is passing Details
2023-02-26 22:36:13 -08:00
Dan Helfman da321e180d Fix the "create" action with the "--dry-run" flag querying for databases when a PostgreSQL/MySQL "all" database is configured.
continuous-integration/drone/push Build is passing Details
2023-02-26 22:15:12 -08:00
Dan Helfman c6582e1171 Internally support new Borg 2.0.0b5 "--filter" status characters / item flags for the "create" action.
continuous-integration/drone/push Build is passing Details
2023-02-26 17:17:25 -08:00
Dan Helfman 9b83afe491 With the "create" action, only one of "--list" ("--files") and "--progress" flags can be used.
continuous-integration/drone/push Build is passing Details
2023-02-26 17:05:56 -08:00
Dan Helfman 2814ac3642 Update Borg 2.0 documentation links.
continuous-integration/drone/push Build is passing Details
2023-02-26 16:44:43 -08:00
Dan Helfman 8a9d5d93f5 Add ntfy authentication to NEWS.
continuous-integration/drone/push Build is passing Details
2023-02-25 14:23:42 -08:00
Dan Helfman 783a6d3b45 Add authentication to the ntfy hook (#621).
continuous-integration/drone/push Build is passing Details
Reviewed-on: #644
2023-02-25 22:04:37 +00:00
Tom Hubrecht 95575c3450 Add auth test for the ntfy hook 2023-02-25 20:04:39 +01:00
Tom Hubrecht 9b071ff92f Make the auth logic more explicit and emit warnings if necessary 2023-02-25 20:04:39 +01:00
Tom Hubrecht d80e716822 Add authentication to the ntfy hook 2023-02-24 17:35:53 +01:00
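A sketch of an authenticated ntfy hook configuration, combining the hook's existing topic/server options with the new credentials (field names assumed from the commits above):

```yaml
hooks:
    ntfy:
        topic: my-borgmatic-backups
        server: https://ntfy.example.com   # hypothetical self-hosted server
        username: backup-bot               # new with this change
        password: trustno1
```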
Dan Helfman 418ebc8843 Add MySQL database hook "add_drop_database" configuration option to control whether dumped MySQL databases get dropped right before restore (#642).
continuous-integration/drone/push Build is passing Details
continuous-integration/drone/tag Build is passing Details
2023-02-20 15:32:47 -08:00
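A minimal sketch of the new option in context (the surrounding hook layout is assumed):

```yaml
hooks:
    mysql_databases:
        - name: posts
          # When false, don't drop the existing database right before restore.
          add_drop_database: false
```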
Dan Helfman f5a448c7c2 Fix for potential data loss (data not getting backed up) when dumping large "directory" format PostgreSQL/MongoDB databases (#643).
continuous-integration/drone/push Build is passing Details
2023-02-20 15:18:51 -08:00
Dan Helfman 37ac542b31 Merge pull request 'setup: Add link to MacPorts package' (#641) from neverpanic/borgmatic:cal-docs-macports-port into master
continuous-integration/drone/push Build is passing Details
Reviewed-on: #641
2023-02-15 17:31:03 +00:00
Clemens Lang 8c7d7e3e41 setup: Add link to MacPorts package 2023-02-15 10:47:59 +01:00
Dan Helfman b811f125b2 Clarify "checks" configuration documentation for older versions of borgmatic (#639).
continuous-integration/drone/push Build is passing Details
2023-02-12 21:42:43 -08:00
Dan Helfman 061f3e7917 Remove related documentation links.
continuous-integration/drone/push Build is passing Details
2023-01-26 16:12:01 -08:00
Dan Helfman 6055918907 Upgrade documentation image dependencies. 2023-01-26 16:11:41 -08:00
Dan Helfman 4a90e090ad Clarify NEWS on database "all" dump feature applying to MySQL as well.
continuous-integration/drone/push Build is passing Details
2023-01-26 15:28:17 -08:00
Dan Helfman 301b29ee11 Bump version for release.
continuous-integration/drone/tag Build is passing Details
continuous-integration/drone/push Build is passing Details
2023-01-26 15:17:19 -08:00
Dan Helfman c1eb210253 Fix code style flake issue.
continuous-integration/drone/push Build is passing Details
2023-01-26 15:09:35 -08:00
Dan Helfman 30cca62d09 Add configuration options for database command customization (#630).
continuous-integration/drone/push Build is failing Details
2023-01-26 14:59:17 -08:00
Dan Helfman 113c0e7616 Update documentation about changes to "all" database restores (#438, #560).
continuous-integration/drone/push Build is passing Details
2023-01-26 10:53:58 -08:00
Dan Helfman 0e6b2c6773 Optionally dump "all" PostgreSQL databases to separate files instead of one combined dump file (#438, #560).
continuous-integration/drone/push Build is passing Details
2023-01-25 23:31:07 -08:00
Dan Helfman 22c750b949 Mention "before_actions" command hook in soft failure documentation (#631).
continuous-integration/drone/push Build is passing Details
2023-01-25 13:01:52 -08:00
Dan Helfman 504cce39a1 Add NEWS entry for #629.
continuous-integration/drone/push Build is passing Details
2023-01-14 09:17:27 -08:00
Dan Helfman 6c4abb6803 Merge pull request 'Log warning for excluding special files only if list is not empty' (#629) from palto42/borgmatic:special_files_warn into master
continuous-integration/drone/push Build is passing Details
Reviewed-on: #629
2023-01-14 17:15:01 +00:00
palto42 fd7ad86daa
conditional warning for excluding special files 2023-01-03 21:53:51 +01:00
Dan Helfman 6f3b23c79d Lowercase borgmatic in documentation.
continuous-integration/drone/push Build is passing Details
2022-12-23 14:12:48 -08:00
Dan Helfman 4838f5e810 Add borgmatic minimum version to compact docs (#624).
continuous-integration/drone/push Build is passing Details
Reviewed-on: #625
2022-12-23 22:11:45 +00:00
Macguire Rintoul 116f1ab989 add borgmatic minimum version to compact docs 2022-12-23 13:32:01 -08:00
Dan Helfman 5e15c9f2bc Fix traceback when include merging on ARM64 (#622).
continuous-integration/drone/push Build is passing Details
2022-12-23 10:07:53 -08:00
Dan Helfman 442641f9f6 Update borgmatic social links.
continuous-integration/drone/push Build is passing Details
2022-12-16 11:39:05 -08:00
Dan Helfman f67c544be6 Optionally dump "all" PostgreSQL databases to separate files instead of one combined dump file (#438, #560).
continuous-integration/drone/push Build is passing Details
2022-12-15 22:59:42 -08:00
Dan Helfman 437fd4dbae Update developer contributing instructions as well.
continuous-integration/drone/push Build is passing Details
2022-12-13 23:56:32 -08:00
Dan Helfman 36873252d6 Update developer instructions.
continuous-integration/drone/push Build is passing Details
2022-12-13 23:44:27 -08:00
Dan Helfman 1ef82a27fa Clarify data/archives check implicit enabling.
continuous-integration/drone/push Build is passing Details
2022-12-12 16:03:05 -08:00
Dan Helfman 6837dcbf42 Clarify documentation about transferring archives between related repositories.
continuous-integration/drone/push Build is passing Details
2022-12-10 12:59:44 -08:00
Dan Helfman c657764367 Fix logs that interfere with JSON output by making warnings go to stderr instead of stdout (#602).
continuous-integration/drone/push Build is passing Details
2022-12-02 12:12:10 -08:00
Dan Helfman f79286fc91 Bump version for release.
continuous-integration/drone/tag Build is passing Details
continuous-integration/drone/push Build is passing Details
2022-11-27 09:00:40 -08:00
Dan Helfman 694d376d15 Clarify documentation about multiple repositories and separate configuration files (#613).
continuous-integration/drone/push Build is passing Details
2022-11-21 13:33:01 -08:00
Dan Helfman ab4c08019c Upgrade pytest test dependency (security).
continuous-integration/drone/push Build is passing Details
2022-11-18 11:13:51 -08:00
Dan Helfman fd39f54df7 Code formatting.
continuous-integration/drone/push Build is passing Details
2022-11-18 08:35:01 -08:00
Dan Helfman ca7e18bb29
Override PostgreSQL dump/restore commands via configuration options (#311).
Merge pull request #49 from jpaniagualaconich/specify-pg-dump-restore-commands
2022-11-18 08:33:14 -08:00
Dan Helfman 6975a5b155 Fix "data" consistency check to support "check_last" and consistency "prefix" options (#611).
continuous-integration/drone/push Build is passing Details
2022-11-17 10:19:48 -08:00
Dan Helfman b627d00595 More consistency checks documentation edits.
continuous-integration/drone/push Build is passing Details
2022-11-14 15:13:47 -08:00
Dan Helfman 9bd8f1a6df Clarify consistency check configuration.
continuous-integration/drone/push Build is passing Details
2022-11-14 14:58:42 -08:00
Javier Paniagua faf682ca35 specify pg dump/restore commands (#311) 2022-11-06 11:12:53 +01:00
Dan Helfman 6aeb74550d Clarify examples in include merging and deep merging documentation (#607).
continuous-integration/drone/push Build is passing Details
2022-10-28 19:33:19 -07:00
Dan Helfman 89500df429 Fix traceback when a configuration section is present but lacking any options (#604).
continuous-integration/drone/push Build is passing Details
2022-10-23 13:56:03 -07:00
Dan Helfman 82b072d0b7 Update documentation to mention using blake2 with "transfer" action.
continuous-integration/drone/push Build is passing Details
2022-10-17 15:04:30 -07:00
Dan Helfman 018c0296fd Document that special file exclusion also excludes symlinks to special files (#596).
continuous-integration/drone/push Build is passing Details
2022-10-15 10:14:46 -07:00
Dan Helfman 9c42e7e817 Fix regression in which "check" action errored on certain systems (#597, #598).
continuous-integration/drone/push Build is failing Details
continuous-integration/drone/tag Build is passing Details
2022-10-14 16:19:26 -07:00
Dan Helfman 953277a066 Fix special file detection when broken symlinks are encountered (#596).
continuous-integration/drone/push Build is passing Details
2022-10-14 09:41:08 -07:00
Dan Helfman e2002b5488 Bump version for release.
continuous-integration/drone/push Build is passing Details
continuous-integration/drone/tag Build is passing Details
2022-10-12 10:59:54 -07:00
Dan Helfman c9742e1d04 Code formatting.
continuous-integration/drone/push Build is passing Details
2022-10-12 10:52:32 -07:00
Dan Helfman 906da838ef Add missing break-lock action command-line help (#357).
continuous-integration/drone/push Build is failing Details
2022-10-12 10:48:10 -07:00
Dan Helfman d7f1c10c8c To prevent Borg hangs, unconditionally delete stale named pipes before dumping databases (#360).
continuous-integration/drone/push Build is passing Details
2022-10-12 10:26:09 -07:00
Dan Helfman e8e4d17168 Clean up changelog for the current dev release.
continuous-integration/drone/push Build is passing Details
2022-10-06 22:06:03 -07:00
Dan Helfman a31ce337e9 Skip auto-exclusion of special files when user explicitly sets read_special to true (#587).
continuous-integration/drone/push Build is passing Details
2022-10-06 11:07:43 -07:00
Dan Helfman 902730df46 Update sample systemd file to allow system idle (#589).
continuous-integration/drone/push Build is passing Details
2022-10-05 10:20:25 -07:00
Dan Helfman c969c822ee Do not inhibit idle in borgmatic.service (#589).
continuous-integration/drone/push Build is passing Details
Reviewed-on: #589
2022-10-05 17:14:19 +00:00
Dan Helfman c31702d092 Fix for potential data loss with "patterns_from". Also, display excluded files (#590).
continuous-integration/drone/push Build is passing Details
2022-10-04 22:57:18 -07:00
Dan Helfman ba8fbe7a44 Add "break-lock" action for removing any repository and cache locks leftover from Borg aborting (#357).
continuous-integration/drone/push Build is passing Details
2022-10-04 13:42:18 -07:00
Dan Helfman 2774c2e4c0 Add support for Borg 2's "--match-archives" flag (replaces "--glob-archives") (#591).
continuous-integration/drone/push Build is passing Details
2022-10-03 22:50:37 -07:00
Dan Helfman ae036aebd7 When the "read_special" option is true or database hooks are enabled, auto-exclude special files for a "create" action to prevent Borg from hanging (#587).
continuous-integration/drone/push Build is passing Details
2022-10-03 12:58:13 -07:00
LaserEyess 2e9f70d496 Do not inhibit idle in borgmatic.service
When backing up a machine with a monitor using logind to control idle
timeout and things like DPMS, borgmatic can block the screen from
turning on/off with systemd-inhibit. This is because by default
systemd-inhibit will block "idle:sleep:shutdown". Borgmatic does not
need to care about idle, only about suspend and shutdown. So, add an
explicit `--what` flag for what borgmatic should inhibit.

For more information see systemd-inhibit(1).
2022-10-01 09:33:38 -04:00
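The resulting service invocation looks roughly like the excerpt below (a sketch assuming borgmatic's sample borgmatic.service; the exact path and remaining flags may differ):

```ini
# borgmatic.service (excerpt): inhibit only sleep and shutdown, not idle.
ExecStart=systemd-inhibit --who="borgmatic" --what="sleep:shutdown" \
    --why="Prevent interrupting scheduled backup" /root/.local/bin/borgmatic
```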
Dan Helfman 90be5b84b1 Fix changelog development version.
continuous-integration/drone/push Build is passing Details
2022-09-20 14:02:48 -07:00
Dan Helfman 80e95f20a3 Add "borgmatic borg" documentation note about interactive prompts. 2022-09-20 14:01:47 -07:00
Dan Helfman ac7c7d4036 Warn when ignoring a configured "read_special" value of false, as true is needed when database hooks are enabled (#587).
continuous-integration/drone/push Build is failing Details
2022-09-20 13:52:13 -07:00
Dan Helfman 858b0b9fbe Note version of borgmatic needed for "borgmatic borg" action (#586).
continuous-integration/drone/push Build is passing Details
2022-09-13 09:05:18 -07:00
Dan Helfman 9cc043f60e Update "find" command in documentation to work on BSDs and not just Linux (#583).
continuous-integration/drone/push Build is passing Details
2022-09-11 20:02:30 -07:00
Dan Helfman 276a27d485 Bump version for release.
continuous-integration/drone/push Build is passing Details
continuous-integration/drone/tag Build is passing Details
2022-09-08 10:29:44 -07:00
Dan Helfman 679bb839d7 Fix hang when database hooks are enabled and "patterns" contains a parent directory of "~/.borgmatic" (#582).
continuous-integration/drone/push Build is passing Details
2022-09-08 10:16:42 -07:00
Dan Helfman 9e64d847ef Fix regression in which "borgmatic info --archive ..." showed repository info instead of archive info with Borg 1 (#577).
continuous-integration/drone/push Build is passing Details
2022-08-30 20:42:42 -07:00
Dan Helfman 61fb275896 Fix duplicate-appearing log entries for "list" action. 2022-08-30 20:29:26 -07:00
Dan Helfman ca0c79c93c Fix duplicate bind path in sample systemd service.
continuous-integration/drone/push Build is running Details
2022-08-28 14:49:23 -07:00
Dan Helfman 87c97b7568 Fixed spurious, intermittent test failures related to command execution and logging.
continuous-integration/drone/push Build is passing Details
2022-08-28 09:06:06 -07:00
Dan Helfman 80b8c25bba Update docs about "source_directories" being optional.
continuous-integration/drone/push Build is passing Details
2022-08-25 13:24:26 -07:00
Dan Helfman d1837cd1d3 Bump version for release.
continuous-integration/drone/push Build is passing Details
continuous-integration/drone/tag Build is passing Details
2022-08-25 11:58:06 -07:00
Dan Helfman c46f2b8508 Fix conflict between "patterns" and "source_directories" (#574), make "source_directories" optional (#542). 2022-08-25 11:55:34 -07:00
Dan Helfman a274c0dbf7 In generate-borgmatic-config, indicate that the example options are exhaustive.
continuous-integration/drone/push Build is passing Details
2022-08-24 09:53:54 -07:00
Dan Helfman ef7e95e22a Fix end-to-end tests.
continuous-integration/drone/push Build is passing Details
2022-08-21 23:29:13 -07:00
Dan Helfman 3be99de5b1 Update "repositories" examples in configuration to use ssh:// style syntax.
continuous-integration/drone/push Build is failing Details
2022-08-21 22:40:31 -07:00
Dan Helfman e7b7560477 Bump version for release.
continuous-integration/drone/push Build is passing Details
continuous-integration/drone/tag Build is passing Details
2022-08-21 21:54:13 -07:00
Dan Helfman 317dc7fbce Add "before_actions" and "after_actions" command hooks that run before/after all the actions for each repository, update docs to cover per-repository configurations (#463).
continuous-integration/drone/push Build is passing Details
2022-08-21 21:48:37 -07:00
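Per that description, the new hooks might be configured like so (a sketch; the echo commands are placeholders):

```yaml
hooks:
    before_actions:
        - echo "Starting actions for this repository"
    after_actions:
        - echo "Finished all actions for this repository"
```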
Dan Helfman 97fad15009 Switch to more accessible header permalink anchors in documentation. 2022-08-21 21:48:07 -07:00
Dan Helfman 462326406e Drop only-style actions like "--create", rename "prune --files" to "prune --list", and add "--list" alias to "create" and "export-tar" (#571).
continuous-integration/drone/push Build is passing Details
2022-08-21 14:25:16 -07:00
Dan Helfman bbdf4893d1 Clarify --format flag in documentation.
continuous-integration/drone/push Build is passing Details
2022-08-19 15:27:03 -07:00
Dan Helfman ef6617cfe6
Add link to Borg list --format documentation.
continuous-integration/drone/push Build is passing Details
2022-08-19 15:16:56 -07:00
Dan Helfman dbef0a440f
Merge branch 'master' into patch-2 2022-08-19 15:16:17 -07:00
Dan Helfman 22628ba5d4 Update ssh:// examples in documentation to use relative paths on the remote machine (#557).
continuous-integration/drone/push Build is passing Details
2022-08-19 12:00:40 -07:00
Dan Helfman 8576ac86b9 Fix incorrect version in documentation (#557).
continuous-integration/drone/push Build is passing Details
2022-08-19 09:44:31 -07:00
Dan Helfman 540f9f6b72 Add missing test for "transfer" action (#557).
continuous-integration/drone/push Build is passing Details
2022-08-19 09:40:29 -07:00
Dan Helfman f9d7faf884 Fix mount action to work without archive again (#557).
continuous-integration/drone/push Build is passing Details
2022-08-18 23:33:05 -07:00
Dan Helfman 7dee6194a2 Add new "transfer" action for Borg 2 (#557).
continuous-integration/drone/push Build is passing Details
2022-08-18 23:06:51 -07:00
Dan Helfman 68f9c1b950 Add generate-borgmatic-config end-to-end test.
continuous-integration/drone/push Build is passing Details
2022-08-18 14:28:46 -07:00
Dan Helfman 43d711463c Add additional command-line flags to rcreate action (#557). 2022-08-18 14:28:12 -07:00
Dan Helfman 00255a2437 Various documentation edits for Borg 2 (#557).
continuous-integration/drone/push Build is passing Details
2022-08-18 10:19:11 -07:00
Dan Helfman b40e9b7da2 Ignore archive filter parameters passed to list action when --archive is given (#557).
continuous-integration/drone Build is passing Details
2022-08-18 09:59:48 -07:00
Dan Helfman 89d201c8ff Fleshing out NEWS for the Borg 2 changes.
continuous-integration/drone/push Build is passing Details
2022-08-17 21:54:00 -07:00
Dan Helfman f47c98c4a5 Rename several configuration options to match Borg 2 (#557).
continuous-integration/drone/push Build is passing Details
2022-08-17 21:14:58 -07:00
Dan Helfman 3b6ed06686 Add --other-repo flag to rcreate action (#557).
continuous-integration/drone/push Build is passing Details
2022-08-17 17:33:09 -07:00
Dan Helfman 57009e22b5 Use flag-related utility functions in info action (#557).
continuous-integration/drone/push Build is running Details
2022-08-17 17:11:02 -07:00
Dan Helfman 3ab7a3b64a Replace use of --prefix with --glob-archives in info action (#557).
continuous-integration/drone/push Build is passing Details
2022-08-17 15:36:19 -07:00
Dan Helfman 596dd49cf5 Use --glob-archives instead of --prefix on rlist command (#557).
continuous-integration/drone/push Build is running Details
2022-08-17 14:26:35 -07:00
Dan Helfman 28d847b8b1 Warn and transform on non-ssh://-style repositories (#557).
continuous-integration/drone/push Build is passing Details
2022-08-17 10:13:11 -07:00
Dan Helfman 2a1c6b1477 Update documentation with newly required ssh:// repository syntax for Borg 2 (#557).
continuous-integration/drone/push Build is passing Details
2022-08-16 11:41:35 -07:00
Dan Helfman 30abd0e3de Update borg action for Borg 2 support (#557).
continuous-integration/drone/push Build is passing Details
2022-08-16 09:30:00 -07:00
Dan Helfman f36e38ec20 Update mount action for Borg 2 support (#557).
continuous-integration/drone/push Build is passing Details
2022-08-15 19:32:37 -07:00
Dan Helfman d807ce095e Update export-tar action for Borg 2 support (#557).
continuous-integration/drone/push Build is passing Details
2022-08-15 17:34:12 -07:00
Dan Helfman 7626fe1189 Disallow borg list --json with --archive or --find (#557).
continuous-integration/drone/push Build is passing Details
2022-08-15 15:40:28 -07:00
Dan Helfman cc04bf57df Update list action for Borg 2 support, add rinfo action, and update extract consistency check for Borg 2.
continuous-integration/drone/push Build is passing Details
2022-08-15 15:04:40 -07:00
Dan Helfman cce6d56661 Update extract action for Borg 2 support (#557).
continuous-integration/drone/push Build is passing Details
2022-08-13 23:07:29 -07:00
Dan Helfman a05d0f378e Factor out repository/archive flags formatting code from create action (#557).
continuous-integration/drone/push Build is passing Details
2022-08-13 22:50:14 -07:00
Dan Helfman 94321aec7a Update compact action for Borg 2 support (#557).
continuous-integration/drone/push Build is passing Details
2022-08-13 22:07:15 -07:00
Dan Helfman 4a55749bd2 Update prune action for Borg 2 support (#557).
continuous-integration/drone/push Build is passing Details
2022-08-13 17:26:51 -07:00
Dan Helfman 2898e63166 Update create action for Borg 2 support (#557).
continuous-integration/drone/push Build is passing Details
2022-08-12 23:54:13 -07:00
Dan Helfman c7176bd00a Add rinfo action for Borg 2 support (#557).
continuous-integration/drone/push Build is passing Details
2022-08-12 23:06:56 -07:00
Dan Helfman 647ecdac29 Borg 2 support in borgmatic check action (#557).
continuous-integration/drone/push Build is passing Details
2022-08-12 15:46:33 -07:00
Dan Helfman e7a8acfb96 Add missing rinfo action source files (#557).
continuous-integration/drone/push Build is passing Details
2022-08-12 14:59:03 -07:00
Dan Helfman 622caa0c21 Support for Borg 2's rcreate and rinfo sub-commands (#557).
continuous-integration/drone/push Build is failing Details
2022-08-12 14:53:20 -07:00
Dan Helfman 22149c6401 Switch to self-hosted container registry for borgmatic documentation image.
continuous-integration/drone/push Build is passing Details
2022-08-01 21:17:59 -07:00
Dan Helfman 9aece3936a Modify "mount" and "extract" actions to require the "--repository" flag when multiple repositories are configured (#566).
continuous-integration/drone/push Build is passing Details
2022-07-25 11:30:02 -07:00
Dan Helfman c7e4e6f6c9 Add Healthchecks "verify_tls" option to NEWS.
continuous-integration/drone/push Build is passing Details
2022-07-23 23:16:06 -07:00
Dan Helfman bcad0de1a4
Add verify_tls option for Healthchecks to optionally disable TLS verification. 2022-07-23 23:11:41 -07:00
Uli 5c6407047f feat: add verify_tls flag for Healthchecks 2022-07-24 07:37:00 +02:00
Dan Helfman 6ddae20fa1 Fix handling of "repository" and "data" consistency checks to prevent invalid Borg flags (#565).
continuous-integration/drone/push Build is passing Details
2022-07-23 21:02:21 -07:00
Dan Helfman 23feac2f4c Bump version for release.
continuous-integration/drone/tag Build is passing Details
continuous-integration/drone/push Build is passing Details
2022-07-19 20:32:41 -07:00
Dan Helfman 16066942e3 Fix traceback with "create" action and "--json" flag when a database hook is configured (#563).
continuous-integration/drone/push Build is passing Details
2022-07-19 10:25:10 -07:00
Jelle @ Samson-IT 3720f22234
reworded and added 'all' caveat 2022-07-13 22:03:51 +02:00
Jelle @ Samson-IT f7c8e89a9f
update format specifier syntax link to use anchor 2022-07-13 21:52:21 +02:00
Jelle @ Samson-IT ba377952fd
Added link to borgbackup list --format docs
I kept searching for this link, so it's time to add it to official docs.
2022-07-13 13:52:48 +02:00
Jelle @ Samson-IT 1fdec480d6
Added some info about fetching mysql database size 2022-07-13 13:29:45 +02:00
Dan Helfman e85d551eac Fix all database hooks to error when the requested database to restore isn't present in the Borg archive (#560).
continuous-integration/drone/push Build is passing Details
2022-07-06 23:21:24 -07:00
Dan Helfman 2b23a63a08 Add end-to-end test for overrides.
continuous-integration/drone/push Build is passing Details
2022-07-06 18:20:51 -07:00
Dan Helfman c0f48e1071 Fix command-line "--override" flag to continue supporting old configuration file formats (#561).
continuous-integration/drone/push Build is passing Details
2022-07-06 18:14:44 -07:00
Dan Helfman 6005426684 Update documentation about configuring multiple consistency checks or multiple databases (#559).
continuous-integration/drone/push Build is passing Details
2022-07-03 22:24:25 -07:00
Dan Helfman 673ed1a2d3 Clarify check frequency documentation in regards to multiple configuration files.
continuous-integration/drone/push Build is passing Details
2022-07-02 09:40:49 -07:00
Dan Helfman 992f62edd2 Bump version for release.
continuous-integration/drone/tag Build is passing Details
continuous-integration/drone/push Build is passing Details
2022-06-30 22:14:41 -07:00
Dan Helfman f1ffa1da1d Add another recommended flag to the backup documentation (#554).
continuous-integration/drone/push Build is passing Details
continuous-integration/drone/tag Build is passing Details
2022-06-30 16:54:22 -07:00
Dan Helfman 457ed80744 Fix environment variable plumbing so options in one configuration file aren't used for others (#555).
continuous-integration/drone/push Build is passing Details
2022-06-30 13:42:17 -07:00
Dan Helfman 1fc028ffae In documentation, be more explicit about default actions (#554).
continuous-integration/drone/push Build is passing Details
2022-06-29 21:32:00 -07:00
Dan Helfman 10723efc68 Fix all monitoring hooks to warn if the server returns an HTTP 4xx error (#554).
continuous-integration/drone/push Build is passing Details
2022-06-29 21:19:40 -07:00
Dan Helfman 2e0b2a308f Clarify --files flag action in documentation (#554).
continuous-integration/drone/push Build is passing Details
2022-06-29 09:20:13 -07:00
Dan Helfman bd4d109009 Fix logging to include the full traceback when Borg experiences an internal error (#553).
continuous-integration/drone/push Build is passing Details
2022-06-28 13:38:24 -07:00
Dan Helfman ae25386336 Update release script to abort if there are local changes. Prevents accidentally tagging a .dev0 changeset for release.
continuous-integration/drone/push Build is passing Details
2022-06-25 09:42:05 -07:00
Dan Helfman d929313d45 Bump version.
continuous-integration/drone/push Build is passing Details
2022-06-24 10:18:01 -07:00
Dan Helfman d372a86fe6 Code formatting.
continuous-integration/drone/push Build is passing Details
continuous-integration/drone/tag Build encountered an error Details
2022-06-23 10:41:04 -07:00
Dan Helfman e306f03e1d Merge branch 'master' of ssh://projects.torsion.org:3022/borgmatic-collective/borgmatic
continuous-integration/drone/push Build is failing Details
2022-06-23 10:28:09 -07:00
Dan Helfman 8336165f23 Update documentation with environment variable escaping (#546). 2022-06-23 10:25:46 -07:00
Dan Helfman c664c6b17b Fix escaped environment variable in configuration (#546).
continuous-integration/drone/push Build is failing Details
Reviewed-on: #549
2022-06-23 17:16:09 +00:00
Sébastien MB b63c854509 Fix escaped environment variable in configuration
- when an environment variable is escaped in the configuration file, we
  expect it not to be resolved and the escape char `\` to be removed
2022-06-17 09:50:56 +02:00
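In other words (a hypothetical configuration line; `some_option` stands in for any string-valued field):

```yaml
# Parsed as the literal text "${FOO}" with the backslash stripped,
# rather than being resolved from the environment.
some_option: \${FOO}
```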
Dan Helfman aa013af25e Remove some whitespace around "New in version ..." documentation labels.
continuous-integration/drone/push Build is passing Details
2022-06-16 20:49:15 -07:00
Dan Helfman cc32f0018b Start formalizing how new features are flagged by version in documentation.
continuous-integration/drone/push Build is passing Details
2022-06-16 20:23:16 -07:00
Dan Helfman dfc4db1860 Document environment variable interpolation (#546).
continuous-integration/drone/push Build is passing Details
2022-06-16 15:30:53 -07:00
Dan Helfman 35706604ea Upgrade documentation base images. 2022-06-16 15:22:59 -07:00
Dan Helfman 6d76e8e5cb Code formatting.
continuous-integration/drone/push Build is passing Details
2022-06-16 14:21:18 -07:00
Dan Helfman aecb6fcd74 Code style, rename command-line flag, and move new code into its own file (#546)
continuous-integration/drone/push Build is failing Details
2022-06-16 11:35:24 -07:00
Dan Helfman ea45f6c4c8 Environment variable resolution in configuration file (#546).
continuous-integration/drone/push Build is failing Details
Reviewed-on: #548
2022-06-16 18:18:12 +00:00
Sébastien MB 97b5cd089d Allow environment variable resolution in configuration file
- all string fields containing an environment variable like ${FOO} will
  be resolved
- supported formats are ${FOO}, ${FOO:-bar} and ${FOO-bar}, allowing default
  values when the variable is not present in the environment
- add a --no-env argument to the CLI to disable the feature, which is enabled
  by default

Resolves: #546
2022-06-16 18:52:54 +02:00
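A sketch of the feature as described (the `${...}` forms come from the commit message; the surrounding database hook fields are assumptions):

```yaml
hooks:
    postgresql_databases:
        - name: users
          # Resolved from the environment at configuration load time.
          password: ${DB_PASSWORD}
          # Falls back to "localhost" when DB_HOST is unset.
          hostname: ${DB_HOST:-localhost}
```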
Dan Helfman f2c2f3139e Add periods to ntfy config descriptions.
continuous-integration/drone/push Build is passing Details
2022-06-10 09:42:41 -07:00
Dan Helfman dc4e7093e5 Remove link to related software that hasn't seen updates in the past couple years.
continuous-integration/drone/push Build is passing Details
2022-06-09 19:31:50 -07:00
Dan Helfman b6f1025ecb Bump version for release.
continuous-integration/drone/tag Build is passing Details
continuous-integration/drone/push Build is passing Details
2022-06-09 16:38:34 -07:00
Dan Helfman 65b2fe86c6 Fix Bash completion script to no longer alter your shell's settings.
continuous-integration/drone/push Build is passing Details
2022-06-09 16:29:54 -07:00
Dan Helfman 0e90a80680 Add links in documentation for ntfy monitoring hook (#543).
continuous-integration/drone/push Build is passing Details
2022-06-09 13:41:22 -07:00
Dan Helfman 7648bcff39 Add a hook for sending push notifications via ntfy.sh.
continuous-integration/drone/push Build is passing Details
Reviewed-on: #543
2022-06-09 20:26:06 +00:00
Gavin Chappell a8b8d507b6
add a hook for sending push notifications via ntfy.sh 2022-06-09 21:10:38 +01:00
Dan Helfman 3561c93d74 Fix Healthchecks tests that leak global state, breaking downstream tests (discovered in #543).
continuous-integration/drone/push Build is passing Details
2022-06-09 11:05:44 -07:00
Dan Helfman 331a503a25 Document the borgmatic version in which "borgmatic list --find" is available (#541).
continuous-integration/drone/push Build is passing Details
2022-06-03 16:55:54 -07:00
Dan Helfman 9aefb5179f Fix None find paths (#541).
continuous-integration/drone/push Build is passing Details
2022-06-03 15:20:05 -07:00
Dan Helfman d14f22e121 Add "borgmatic list --find" flag for searching for files across multiple archives (#541).
continuous-integration/drone/push Build is failing Details
2022-06-03 15:12:14 -07:00
Dan Helfman b6893f6455 Exclude deprecated "borg list --successful" flag from getting passed to Borg.
continuous-integration/drone/push Build is passing Details
2022-06-02 21:14:25 -07:00
Dan Helfman 80ec3e7d97 Deprecate "borgmatic list --successful" flag, as listing only non-checkpoint (successful) archives is now the default in newer versions of Borg.
continuous-integration/drone/push Build is passing Details
2022-06-02 20:35:39 -07:00
Dan Helfman cd834311eb Clarify completion docs.
continuous-integration/drone/push Build is passing Details
2022-06-01 10:57:23 -07:00
Dan Helfman d751cceeb0 Merge branch 'master' of ssh://projects.torsion.org:3022/borgmatic-collective/borgmatic 2022-06-01 10:38:05 -07:00
Dan Helfman ce78b07e4b Add macOs to install and Bash completion documentation.
continuous-integration/drone/push Build is passing Details
Reviewed-on: #540
2022-06-01 17:37:51 +00:00
adidalal 87f3c50931 setup: add macOS 2022-06-01 15:56:40 +00:00
Dan Helfman 8e9e06afe6 Bump version for release.
continuous-integration/drone/tag Build is passing Details
2022-05-31 09:41:20 -07:00
Dan Helfman 2bc91ac3d2 Add "generate-borgmatic-config --overwrite" flag to replace an existing destination file (#539).
continuous-integration/drone/push Build is passing Details
2022-05-29 16:03:55 -07:00
Dan Helfman 5b615d51a4 Add support for "borgmatic borg debug" command (#538).
continuous-integration/drone/push Build is passing Details
2022-05-29 15:43:03 -07:00
Dan Helfman c7f5d5fd0b Fix broken Bash completion of filenames, as in "-c config.yaml".
continuous-integration/drone/push Build is passing Details
2022-05-29 10:49:33 -07:00
Dan Helfman 6ef7538eb0 Fix typo in Bash completions script.
continuous-integration/drone/push Build is passing Details
2022-05-28 19:34:13 -07:00
Dan Helfman 8fa90053cf Add "borgmatic check --force" flag to ignore configured check frequencies (#523). 2022-05-28 19:29:33 -07:00
Dan Helfman b3682b61d1 Add another note about the consistency checks schema in old versions (#523). 2022-05-28 19:03:45 -07:00
Dan Helfman ad0e2e0d7c Tweak default check frequency to 1 month (#523). 2022-05-28 15:49:50 -07:00
Dan Helfman 6629f40cab In bash completion script, warn when script is out of date using script contents instead of version. (Fewer spurious warnings that way.) 2022-05-28 15:27:11 -07:00
Dan Helfman e76bfa555f Reduce the default consistency check frequency and support configuring the frequency independently for each check (#523). 2022-05-28 14:42:19 -07:00
Dan Helfman 8ddb7268eb Reuse "borg info" function.
continuous-integration/drone/push Build is passing Details
2022-05-27 13:51:11 -07:00
Dan Helfman cb5fe02ebd Fix broken Bash completion end-to-end test.
continuous-integration/drone/push Build is passing Details
2022-05-26 11:18:46 -07:00
Dan Helfman 77b84f8a48 Add Bash completion script so you can tab-complete the borgmatic command-line.
continuous-integration/drone/push Build is failing Details
2022-05-26 10:27:53 -07:00
Dan Helfman 691ec96909 Fix python_requires to support all versions of 3.7 (#537).
continuous-integration/drone/push Build is passing Details
Reviewed-on: #537
2022-05-26 15:51:46 +00:00
Steve Atwell 29b4666205 Fix python_requires to support all versions of 3.7
This is the standard way to support "Python 3.7 and newer" and it also
fixes use of borgmatic with some tools that do custom dependency
resolution.  E.g., using pex with --platform.
2022-05-26 07:05:04 -07:00
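For reference, the conventional spelling of that constraint in setup.py (a sketch with the rest of the metadata elided):

```python
from setuptools import setup

setup(
    name='borgmatic',
    # '>=3.7' matches every 3.7.x release and newer, which is what tools
    # doing custom dependency resolution (such as pex) expect.
    python_requires='>=3.7',
)
```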
Dan Helfman 316a22701f Add documentation note about multiple merge limitation (#380).
continuous-integration/drone/push Build is passing Details
2022-05-25 23:12:42 -07:00
Dan Helfman be59a3e574 Fix generate-borgmatic-config with "--source" flag to support more complex schema changes like the new Healthchecks configuration options (#536).
continuous-integration/drone/push Build is passing Details
2022-05-25 10:26:26 -07:00
Dan Helfman 37327379bc Merge branch 'master' of ssh://projects.torsion.org:3022/borgmatic-collective/borgmatic
continuous-integration/drone/push Build is passing Details
2022-05-24 17:50:57 -07:00
Dan Helfman 22c2f13611 Remove trailing whitespace (#535).
continuous-integration/drone/push Build is passing Details
Reviewed-on: #535
2022-05-25 00:50:12 +00:00
polyzen 8708ca07f4 Remove trailing whitespace 2022-05-25 00:43:40 +00:00
Dan Helfman 634d9e4946 Bump version for release.
continuous-integration/drone/tag Build is passing Details
2022-05-24 16:22:37 -07:00
Dan Helfman 54933ebef5 Change connection failures for monitoring hooks to be warnings instead of errors (#439).
continuous-integration/drone/push Build is passing Details
2022-05-24 15:50:04 -07:00
Dan Helfman 157e59ac88 Add Healthchecks monitoring hook "send_logs" option to enable/disable sending borgmatic logs to the Healthchecks server (#460).
continuous-integration/drone/push Build is passing Details
2022-05-24 14:44:33 -07:00
Dan Helfman 666f0dd751 Add missing Healthchecks "states" option example in configuration schema (#525).
continuous-integration/drone/push Build is passing Details
2022-05-24 14:17:19 -07:00
Dan Helfman 8b179e4647 Reverse logic of Healthchecks "skip_states" option to just "states" (#525).
continuous-integration/drone/push Build is failing Details
2022-05-24 14:09:42 -07:00
Dan Helfman 865eff7d98 Add Healthchecks monitoring hook "skip_states" option to disable pinging for particular monitoring states (#525).
continuous-integration/drone/push Build is failing Details
2022-05-24 13:59:28 -07:00
Dan Helfman b9741f4d0b Add Healthchecks monitoring hook "ping_body_limit" option to configure how many bytes of logs to send to the Healthchecks server (#294).
continuous-integration/drone/push Build is passing Details
2022-05-24 12:23:38 -07:00
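Taken together, these Healthchecks options might be configured along these lines (a sketch; the option names come from the commit messages above, but their exact placement is an assumption):

```yaml
hooks:
    healthchecks:
        ping_url: https://hc-ping.com/your-uuid-here
        send_logs: true          # send borgmatic logs to the server
        ping_body_limit: 100000  # cap how many bytes of logs get sent
        states:                  # only ping for these monitoring states
            - start
            - finish
```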
Dan Helfman 02781662f8 Change monitoring hooks to specify the ping URL / integration key as a named option.
continuous-integration/drone/push Build is passing Details
2022-05-23 20:02:10 -07:00
Dan Helfman 32a1043468 Remove the error when "archive_name_format" is specified but a retention prefix isn't (#402).
continuous-integration/drone/push Build is passing Details
2022-05-23 16:11:24 -07:00
Dan Helfman 3e4aeec649 Warn when an unsupported variable is used in a hook command (#420).
continuous-integration/drone/push Build is passing Details
2022-05-23 15:27:54 -07:00
Dan Helfman b98b827594 Remove stale comment.
continuous-integration/drone/push Build is passing Details
2022-05-23 10:59:56 -07:00
Dan Helfman 255cc6ec23 When deep merging common configuration, merge colliding list values by appending them (#531).
continuous-integration/drone/push Build is passing Details
2022-05-20 15:28:28 -07:00
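A sketch of what appending on collision means in practice (assuming borgmatic's `!include` merge syntax):

```yaml
# file: common.yaml
location:
    source_directories:
        - /etc

# file: config.yaml -- the deep merge appends colliding lists instead of
# replacing them, so the effective source_directories become [/etc, /home].
<<: !include common.yaml
location:
    source_directories:
        - /home
```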
Dan Helfman 51fc37d57a Improve the error message when a configuration override contains an invalid value (#528).
continuous-integration/drone/push Build is passing Details
2022-05-20 13:38:53 -07:00
Dan Helfman 1921f55a9d Add emojis to documentation table of contents to make it easier to find particular how-to and reference guides at a glance.
continuous-integration/drone/push Build is passing Details
2022-05-20 11:11:35 -07:00
Dan Helfman fbd381fcc1 Clarify manual database extraction documentation.
continuous-integration/drone/push Build is passing Details
2022-05-20 10:06:19 -07:00
Dan Helfman cd88f9f2ea Better explain where to find the dump file when doing a manual restore (#510).
continuous-integration/drone/push Build is passing Details
Reviewed-on: #510
2022-05-20 16:33:21 +00:00
Dan Helfman 788281cfb9 When a configuration include is a relative path, load it from either the current working directory or from the directory containing the file doing the including (#532).
continuous-integration/drone/push Build is passing Details
2022-05-19 17:15:05 -07:00
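A sketch, assuming borgmatic's `!include` syntax:

```yaml
# /etc/borgmatic/config.yaml: "retention.yaml" is now looked up relative to
# this file's directory (or the current working directory), so an absolute
# path is no longer required.
retention: !include retention.yaml
```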
Dan Helfman cd234b689d Link to additional borgmatic Docker image.
continuous-integration/drone/push Build is passing Details
2022-05-12 12:00:12 -07:00
Dan Helfman 92354a77ee Mention that database dumps consumed disk space prior to borgmatic 1.5.3.
continuous-integration/drone/push Build is passing Details
2022-05-09 16:08:47 -07:00
Dan Helfman 48ff3e70d1 Clarify documentation about include merging mappings vs. values.
continuous-integration/drone/push Build is passing Details
2022-05-08 14:48:42 -07:00
Dan Helfman 7e9adfb899 Add NEWS entry for randomized systemd timer delay.
continuous-integration/drone/push Build is passing Details
2022-05-07 23:11:26 -07:00
Dan Helfman e238e256f7
Add randomized delay to systemd timer.
Merge pull request from Daniel15/patch-1
2022-05-07 23:08:02 -07:00
Daniel Lo Nigro 3ecb92a8d2
Add randomized delay to systemd timer 2022-05-07 16:42:06 -07:00
Dan Helfman d58d450628 Remove stale borgmatic binary link.
continuous-integration/drone/push Build is passing Details
2022-04-30 09:50:40 -07:00
Dan Helfman dee9c6e293 Remove link to stale borgmatic Docker image.
continuous-integration/drone/push Build is passing Details
2022-04-30 09:46:08 -07:00
Dan Helfman 897c4487de Add mention in documentation about multiple backup scheduling needs (#511).
continuous-integration/drone/push Build is passing Details
2022-04-28 11:16:31 -07:00
Dan Helfman 48b50b5209 Add documentation link to NEWS.
continuous-integration/drone/push Build is passing Details
2022-04-26 10:24:25 -07:00
Dan Helfman 13bae8c23b Typo.
continuous-integration/drone/push Build is passing Details
2022-04-26 10:12:02 -07:00
Dan Helfman 4a48e6aa04 Bump version for release.
continuous-integration/drone/push Build is passing Details
continuous-integration/drone/tag Build is passing Details
2022-04-26 10:07:04 -07:00
Dan Helfman 525266ede6 Deep merging when including common configuration (#381).
continuous-integration/drone/push Build is passing Details
2022-04-25 21:18:37 -07:00
Dan Helfman d045eb55ac Add mention of sudo's "secure_path" option in borgmatic installation documentation (#513).
continuous-integration/drone/push Build is passing Details
2022-04-23 14:29:55 -07:00
Dan Helfman 0e6b425ac5 Fix "borgmatic borg key ..." to pass parameters to Borg in correct order (#515).
continuous-integration/drone/push Build is passing Details
2022-04-23 14:03:15 -07:00
Dan Helfman bdc26f2117 Add note about old, pre-1.6.0 hooks behavior.
continuous-integration/drone/push Build is passing Details
2022-04-22 19:58:28 -07:00
Dan Helfman ed7fe5c6d0 Instead of executing "before" command hooks before all borgmatic actions run (and "after" hooks after), execute these hooks right before/after the corresponding action (#473).
continuous-integration/drone/push Build is passing Details
2022-04-21 22:08:25 -07:00
Dan Helfman cbce6707f4 Clarify one_file_system behavior in schema comment (#520).
continuous-integration/drone/push Build is passing Details
2022-04-12 11:05:22 -07:00
Dan Helfman e40e726687 Change Healthchecks logs truncation size from 10k bytes to 100k bytes, corresponding to that same change on Healthchecks.io.
continuous-integration/drone/push Build is passing Details
2022-04-06 22:00:18 -07:00
Dan Helfman 0c027a3050 Fix handling of TERM signal to exit borgmatic, not just forward the signal to Borg (#516).
continuous-integration/drone/push Build is passing Details
2022-04-03 13:12:48 -07:00
Dan Helfman 9f44bbad65 Fix borgmatic exit code (so it's zero) when initial Borg calls fail but later retries succeed (#517).
continuous-integration/drone/push Build is passing Details
2022-04-02 22:28:41 -07:00
Dan Helfman 413a079f51 Clarify Python version support.
continuous-integration/drone/push Build is passing Details
2022-03-28 21:57:40 -07:00
gerdneuman 6f3accf691 Better explain where to find the dump file
continuous-integration/drone/pr Build is passing Details
I really had trouble finding the dump file with the explanation as given before. I thought that `~/.borgmatic/` referred to my current user, so I looked in `/home/gerd/.borgmatic` (wrong). Then I looked in `<EXTRACTED_DESTINATION_PATH>/.borgmatic` (again wrong). Then finally (an hour later, and after having already prepared a bug ticket) I figured out that the dump file is within `<EXTRACTED_DESTINATION_PATH>/root/.borgmatic`. It's hard to find because of course `<EXTRACTED_DESTINATION_PATH>/` contains not only `root` but also all the other backed-up directories (including /etc/, /home/, and so on...)
2022-03-17 04:51:47 +00:00
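In shell terms (a hypothetical session following the explanation above; the postgresql_databases subdirectory is one example of where a dump can land):

```bash
# After extracting an archive to /tmp/extracted, the dump file mirrors the
# absolute path it had at backup time, e.g. under root's home directory:
ls /tmp/extracted/root/.borgmatic/postgresql_databases/localhost/
```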
Dan Helfman 5b3cfc542d Switch to PyPI API token.
continuous-integration/drone/push Build is passing Details
2022-03-14 14:00:03 -07:00
Dan Helfman c838c1d11b Fix header placement in documentation guide.
continuous-integration/drone/push Build is passing Details
2022-03-14 13:50:22 -07:00
Dan Helfman 4d1d8d7409 Bump version for release.
continuous-integration/drone/push Build is passing Details
continuous-integration/drone/tag Build is passing Details
2022-03-14 13:43:24 -07:00
Dan Helfman db7499db82 Document "repositories" context for "before_*" and "after_*" command action hooks (#469).
continuous-integration/drone/push Build is passing Details
2022-03-14 13:34:14 -07:00
Dan Helfman 6b500c2a8b Add repositories context for command hooks.
continuous-integration/drone/push Build is passing Details
Reviewed-on: #469
2022-03-14 20:13:15 +00:00
Dan Helfman 95c518e59b Documentation tip about dealing with hangs when database hook is enabled.
continuous-integration/drone/push Build is passing Details
2022-03-12 13:17:32 -08:00
Dan Helfman 976516d0e1 When loading a configuration file that is unreadable due to file permissions, warn instead of erroring (#444).
continuous-integration/drone/push Build is passing Details
2022-03-08 10:19:36 -08:00
Dan Helfman 574eb91921 Fix Borg usage error in the "compact" action when running "borgmatic --dry-run". Now, skip "compact" entirely during a dry run (#507).
continuous-integration/drone/push Build is passing Details
2022-03-07 21:46:12 -08:00
Dan Helfman 28fef3264b Fix handling of "patterns_from" and "exclude_from" options to error instead of warning when referencing unreadable files and running "create" action (#486).
continuous-integration/drone/push Build is passing Details
2022-03-07 15:32:07 -08:00
Dan Helfman 9161dbcb7d Removing unnecessary leading underscores from functions.
continuous-integration/drone/push Build is passing Details
2022-03-07 11:58:29 -08:00
Dan Helfman 4b3027e4fc Add test for new working_directory option (#431).
continuous-integration/drone/push Build is passing Details
2022-03-03 11:48:18 -08:00
Dan Helfman 0eb2634f9b Working directory option to support source directories with relative paths (#431).
continuous-integration/drone/push Build is failing Details
Reviewed-on: #477
2022-03-03 19:28:17 +00:00
Dan Helfman 7c5b68c98f Bump version for release.
continuous-integration/drone/tag Build is passing Details
continuous-integration/drone/push Build is passing Details
2022-02-10 10:29:18 -08:00
Dan Helfman 9317cbaaf0 Code formatting.
continuous-integration/drone/push Build is passing Details
2022-02-10 10:23:34 -08:00
Dan Helfman 1b5f04b79f When using the "remote_rate_limit" option, tailor the flags passed to Borg depending on the Borg version (#394).
continuous-integration/drone/push Build is failing Details
2022-02-10 10:16:09 -08:00
Dan Helfman 948c86f62c When using the "numeric_owner" option with the "extract" action, tailor the flags passed to Borg depending on the Borg version (#394).
continuous-integration/drone/push Build is passing Details
2022-02-10 10:09:18 -08:00
Dan Helfman 7e7209322a When using the "numeric_owner" option, tailor the flags passed to Borg depending on the Borg version (#394).
continuous-integration/drone/push Build is passing Details
2022-02-10 09:51:13 -08:00
Dan Helfman 00a57fd947 Code formatting.
continuous-integration/drone/push Build is passing Details
2022-02-09 21:20:28 -08:00
Dan Helfman 6bf6ac310b When using the "bsd_flags" option, tailor the flags passed to Borg depending on the Borg version (#394).
continuous-integration/drone/push Build is failing Details
2022-02-09 21:11:00 -08:00
Dan Helfman 4b5af2770d When the "atime" option is used, tailor the flags passed to Borg depending on version (#394).
continuous-integration/drone/push Build is passing Details
2022-02-09 16:54:35 -08:00
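The pattern behind this run of changes, sketched in Python (the function and version check are hypothetical, not borgmatic's actual internals): the flag emitted for an option depends on the detected Borg version's defaults.

```python
# Newer Borg skips atime by default (opt in with --atime), while older
# Borg stores atime by default (opt out with --noatime).
def make_atime_flags(atime_enabled, borg_version):
    if borg_version >= (1, 2):
        return ('--atime',) if atime_enabled else ()
    return () if atime_enabled else ('--noatime',)
```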
Dan Helfman b525e70e1c Run "compact" action by default when no actions are specified (#394). 2022-02-09 14:33:12 -08:00
Dan Helfman 4498671233 Remove references to removed long-deprecated options (#394).
continuous-integration/drone/push Build is passing Details
2022-02-09 11:08:02 -08:00
Dan Helfman 9997aa9a92 Fix capitalization on compact help.
continuous-integration/drone/push Build is passing Details
2022-02-08 15:58:09 -08:00
Dan Helfman cbf7284f64 Add compact action to command-line reference documentation.
continuous-integration/drone/push Build is passing Details
2022-02-08 15:37:24 -08:00
Dan Helfman ee466f870d Fixing ruamel.yaml.clib breakages harder.
continuous-integration/drone/push Build is passing Details
2022-02-08 13:21:11 -08:00
Dan Helfman e3f4bf0293 Build fix for ruamel.yaml.clib error.
continuous-integration/drone/push Build is failing Details
2022-02-08 12:52:45 -08:00
Dan Helfman 46688f10b1 Merge branch 'master' of ssh://projects.torsion.org:3022/borgmatic-collective/borgmatic
continuous-integration/drone/push Build is failing Details
2022-02-08 12:10:57 -08:00
Dan Helfman 48f44d2f3d Add tests for compact action (#394). 2022-02-08 12:05:02 -08:00
Dan Helfman bff1347ba3 Fix some test failures (#394).
continuous-integration/drone/push Build is failing Details
2022-02-08 09:35:03 -08:00
Dan Helfman 9582324c88 Compact repository segments with new "borgmatic compact" action (#394).
continuous-integration/drone/push Build is failing Details
2022-02-07 23:29:44 -08:00
Dan Helfman bb0716421d Add comment about systemd service setting that may interfere with external commands in hooks (#492).
continuous-integration/drone/push Build is passing Details
2022-01-25 09:26:11 -08:00
Dan Helfman bec73245e9 Fix traceback when a YAML validation error occurs (#480, #482).
continuous-integration/drone/push Build is passing Details
2022-01-19 20:39:03 -08:00
Dan Helfman dcead12e86 Attempt to fix documentation build error introduced by Eleventy upgrade.
continuous-integration/drone/push Build is passing Details
2022-01-09 14:21:27 -08:00
Dan Helfman 0119514c11 Add Python version requirements to setup.py.
continuous-integration/drone/push Build is failing Details
2022-01-09 10:19:53 -08:00
fabianschilling b39f08694d Merge branch 'master' into pr-working-directory
continuous-integration/drone/pr Build is passing Details
2022-01-05 09:30:27 +00:00
Dan Helfman 80bdf1430b Bump version for release.
continuous-integration/drone/tag Build is passing Details
2022-01-04 20:20:13 -08:00
Dan Helfman 2ee75546f5 Add MongoDB database hook documentation.
continuous-integration/drone/push Build is passing Details
2022-01-04 16:26:38 -08:00
Dan Helfman 07d7ae60d5 Add MongoDB database hook (#288).
continuous-integration/drone/push Build is passing Details
Reviewed-on: #483
2022-01-04 23:50:25 +00:00
Andrea Ghensi 87001337b4 Merge master into mongodb_hook
continuous-integration/drone/pr Build is passing Details
2022-01-04 22:20:44 +01:00
Dan Helfman 2e9964c200 Remove references to Lima Labs (shut down their storage business).
continuous-integration/drone/push Build is passing Details
Reviewed-on: #488
2022-01-03 17:34:38 +00:00
Ian Kerins 3ec3d8d045 Remove references to Lima Labs
continuous-integration/drone/pr Build is passing Details
From their homepage:
> Lima Labs is shutting down our storage business. We will try to keep data available as long as possible. No promises but we are targeting 3/1/2022 to bring down Archive and Canada.
2022-01-03 02:29:38 -05:00
Dan Helfman 96384d5ee1 Attempt to fix typed-ast build issue by relaxing version requirements in test.
continuous-integration/drone/push Build is passing Details
2022-01-02 23:22:24 -08:00
Dan Helfman 8ed5467435 Drop support for Python 3.6. Add support for 3.10.
continuous-integration/drone/push Build is failing Details
2022-01-02 23:17:57 -08:00
Andrea Ghensi 7c6ce9399c fix integration tests and mongodb auth
continuous-integration/drone/pr Build is failing Details
2021-12-29 22:18:50 +01:00
Andrea Ghensi 6b7653484b Add mongodb dump hook
continuous-integration/drone/pr Build is failing Details
2021-12-26 01:00:58 +01:00
Fabian Schilling 85e0334826 Add missing working_directory arg to pass tests
continuous-integration/drone/pr Build is passing Details
2021-12-10 18:24:41 +01:00
Fabian Schilling 2a80e48a92 Pass working directory to execute functions 2021-12-10 18:23:44 +01:00
Fabian Schilling 5821c6782e Add defaults to not set in schema 2021-12-10 18:23:08 +01:00
Fabian Schilling f15498f6d9 Add working_directory to borgmatic schema 2021-12-10 17:58:27 +01:00
Dan Helfman a1673d1fa1 Fix unicode error when restoring particular MySQL databases (#476).
continuous-integration/drone/push Build is passing Details
2021-12-08 16:40:25 -08:00
Dan Helfman 2e99a1898c Fix f-string with missing expression.
continuous-integration/drone/push Build is passing Details
2021-11-29 14:05:36 -08:00
Dan Helfman 7a086d8430 Fix import ordering.
continuous-integration/drone/push Build was killed Details
2021-11-29 14:00:14 -08:00
Dan Helfman 0e8e9ced64 When command-line configuration override produces a parse error, error cleanly (#471).
continuous-integration/drone/push Build is failing Details
2021-11-29 12:49:21 -08:00
Dan Helfman f34951c088 Add MySQL dump command adjustment to NEWS.
continuous-integration/drone/push Build is passing Details
2021-11-29 12:10:04 -08:00
Dan Helfman c6f47d4d56 Move mysqldump options to the beginning of the command due to MySQL bug 30994 (#470).
continuous-integration/drone/push Build is failing Details
Reviewed-on: #470
2021-11-29 20:08:59 +00:00
nebulon42 c3e76585fc
move mysqldump options to the beginning of the command due to MySQL bug 30994.
continuous-integration/drone/pr Build is passing Details
2021-11-26 17:16:03 +01:00
Chen Yufei 0014b149f8 remove configuration_filename as it's already set.
continuous-integration/drone/pr Build is passing Details
2021-11-26 11:38:58 +08:00
Chen Yufei 091c07bbe2 Add context for various hooks.
continuous-integration/drone/pr Build is passing Details
2021-11-26 11:35:10 +08:00
Dan Helfman 240547102f Enable auto-play on linked asciicast.
continuous-integration/drone/push Build is passing Details
2021-11-25 13:09:55 -08:00
Dan Helfman 2bbd53e25a
Merge pull request #43 from acsfer/patch-1
GitHub doesn't allow script embedding
2021-11-25 13:06:43 -08:00
acsfer 58f2f63977
Switch to HTML 2021-11-25 22:03:26 +01:00
acsfer 7df6a78c30
GitHub doesn't allow script embedding 2021-11-25 21:36:31 +01:00
Dan Helfman c646edf2c7 Bump version for release.
continuous-integration/drone/push Build is passing Details
continuous-integration/drone/tag Build is passing Details
2021-11-22 13:19:15 -08:00
Dan Helfman bcc820d646 Add list_options setting (#306).
continuous-integration/drone/push Build is passing Details
Reviewed-on: #464
2021-11-22 21:14:02 +00:00
nebulon42 3729ba5ca3
add list_options setting, fixes #306
continuous-integration/drone/pr Build is passing Details
2021-11-20 15:43:58 +01:00
Dan Helfman 9c19591768 Revise hosting provider links.
continuous-integration/drone/push Build is passing Details
2021-11-15 20:06:09 -08:00
Dan Helfman 38ebfd2969 Rename retry_timeout to retry_wait and standardize log formatting (#28).
continuous-integration/drone/push Build is passing Details
2021-11-15 11:51:17 -08:00
Dan Helfman 180018fd81 Retry failing backups (#28, #432).
continuous-integration/drone/push Build is passing Details
Reviewed-on: #432
2021-11-15 19:34:24 +00:00
Dan Helfman 794ae94ac4 Attempt to limit documentation pushing to commits (so, not pull requests).
continuous-integration/drone/push Build is passing Details
2021-11-15 11:08:26 -08:00
Dan Helfman 4eb6359ed3 Remove now-unneeded build image workaround.
continuous-integration/drone/push Build is passing Details
2021-11-15 10:56:12 -08:00
cadamswaite 976a877a25 Formatting
continuous-integration/drone/pr Build is failing Details
2021-11-14 22:37:42 +00:00
cadamswaite b4117916b8 Add timeout and tests 2021-11-14 22:15:22 +00:00
cadamswaite 19cad89978 Add some tests for retry logic 2021-11-14 21:35:23 +00:00
cadamswaite 6b182c9d2d Merge branch 'master' into master
continuous-integration/drone/pr Build is failing Details
2021-11-14 18:24:17 +00:00
Dan Helfman 4d6ed27f73 Add to changelog: Add support for old version (2.x) of jsonschema library.
continuous-integration/drone/push Build is passing Details
2021-10-23 09:49:16 -07:00
Dan Helfman 745a8f9b8a Add support for both jsonschema v3 and old v2 (#459).
continuous-integration/drone/push Build is passing Details
Reviewed-on: #459
2021-10-23 16:47:53 +00:00
Dan Helfman 6299d8115d Limit documentation build to master of main repo, as it pushes a Docker image.
continuous-integration/drone/push Build is passing Details
2021-10-23 09:45:17 -07:00
Kim B. Heino 717cfd2d37 validate: add support for both jsonschema v3 and old v2
continuous-integration/drone/pr Build is failing Details
RHEL8 and RHEL7 have old jsonschema v2. Try v3 (Draft7) first but
fall back to v2 (Draft4) if needed.
2021-10-23 15:04:07 +03:00
Dan Helfman 7881327004 Upgrade CI test dependencies.
continuous-integration/drone/push Build is passing Details
2021-10-22 14:07:14 -07:00
Dan Helfman 549aa9a25f Update editable link.
continuous-integration/drone/push Build is passing Details
2021-10-22 14:06:27 -07:00
Dan Helfman 1c6890492b Bump version for release.
continuous-integration/drone/push Build is passing Details
continuous-integration/drone/tag Build is passing Details
2021-10-11 17:02:32 -07:00
Dan Helfman a7c8e7c823 Bump version for release.
continuous-integration/drone/push Build is passing Details
2021-10-11 11:13:32 -07:00
Dan Helfman c8fcf6b336 Mention changing borgmatic path in cron documentation (#455).
continuous-integration/drone/push Build is passing Details
continuous-integration/drone/tag Build is passing Details
2021-10-11 11:02:08 -07:00
Dan Helfman 449896f661 Fix error when configured source directories are not present on the filesystem at the time of backup (#387).
continuous-integration/drone/push Build is passing Details
2021-10-11 10:40:10 -07:00
Dan Helfman 1004500d65 Update sample systemd service file comments about more granular read-only filesystem settings.
continuous-integration/drone/push Build is passing Details
2021-10-11 09:33:07 -07:00
Dan Helfman 0a8d4e5dfb
Add more strict ProtectHome to systemd sample configuration.
Merge pull request #42 from VTimofeenko/systemd_protecthome
2021-10-11 09:26:28 -07:00
Dan Helfman 38e35bdb12 Skip TLS verify in documentation build clone to work around old drone/git CA certs.
continuous-integration/drone/push Build is passing Details
2021-10-04 14:31:15 -07:00
Dan Helfman 65503e38b6 Sigh.
continuous-integration/drone/push Build is failing Details
2021-10-04 13:14:19 -07:00
Dan Helfman d0c5bf6f6f Another attempt to unbreak build.
continuous-integration/drone/push Build is failing Details
2021-10-04 13:13:35 -07:00
Dan Helfman f129e4c301 Attempt to work-around outdated CA certificates in drone/git Docker image.
continuous-integration/drone/push Build is failing Details
2021-10-04 13:09:44 -07:00
Dan Helfman fbbb096cec Note in documentation that borgmatic requires Python 3.6+.
continuous-integration/drone/push Build is failing Details
2021-10-04 11:15:51 -07:00
Dan Helfman 77980511c6 Add another glob pattern example to exclude patterns.
continuous-integration/drone/push Build is passing Details
2021-09-16 09:51:40 -07:00
Dan Helfman 4ba206f8f4 Update build server URL to new organization namespace.
continuous-integration/drone/push Build is passing Details
2021-09-14 11:35:34 -07:00
Dan Helfman ecc849dd07 Move Gitea hosting from a personal namespace to an organization. 2021-09-14 11:32:01 -07:00
Dan Helfman 7ff6066d47 Move GitHub hosting from a personal namespace to an organization.
continuous-integration/drone/push Build is passing Details
2021-09-14 10:18:10 -07:00
Dan Helfman 2bb1fc9826 Mention Docker Compose under installation options.
continuous-integration/drone/push Build is passing Details
2021-09-12 13:15:34 -07:00
Vladimir Timofeenko 6df6176f3a
Added more strict ProtectHome to systemd unit
This commit changes the comment in the sample systemd service.

Using a combination of 'ProtectHome' and 'BindPaths', it's possible to
hide the irrelevant paths inside /root from the borgmatic service when it
is run.

ReadWritePaths should be used only for paths that contain Borg
repositories, while backup sources can be specified as ReadOnlyPaths.
2021-08-30 11:20:34 -07:00
cadamswaite 89baf757cf Sort imports
continuous-integration/drone/pr Build is failing Details
2021-07-14 23:17:35 +01:00
cadamswaite 4f36fe2b9f Run Black on changed file
continuous-integration/drone/pr Build is failing Details
2021-07-14 22:53:01 +01:00
cadamswaite 510449ce65 Change default retries to 0 2021-07-14 22:49:03 +01:00
cadamswaite 4cc4b8d484 Add queue based retry logic 2021-07-14 22:46:02 +01:00
221 changed files with 25059 additions and 5194 deletions

View File

@@ -1,86 +1,86 @@
---
kind: pipeline
name: python-3-6-alpine-3-9
services:
- name: postgresql
image: postgres:11.9-alpine
environment:
POSTGRES_PASSWORD: test
POSTGRES_DB: test
- name: mysql
image: mariadb:10.3
environment:
MYSQL_ROOT_PASSWORD: test
MYSQL_DATABASE: test
steps:
- name: build
image: alpine:3.9
pull: always
commands:
- scripts/run-full-tests
---
kind: pipeline
name: python-3-7-alpine-3-10
services:
- name: postgresql
image: postgres:11.9-alpine
environment:
POSTGRES_PASSWORD: test
POSTGRES_DB: test
- name: mysql
image: mariadb:10.3
environment:
MYSQL_ROOT_PASSWORD: test
MYSQL_DATABASE: test
steps:
- name: build
image: alpine:3.10
pull: always
commands:
- scripts/run-full-tests
---
kind: pipeline
name: python-3-8-alpine-3-13
services:
  - name: postgresql
-   image: postgres:13.1-alpine
+   image: docker.io/postgres:13.1-alpine
    environment:
      POSTGRES_PASSWORD: test
      POSTGRES_DB: test
+ - name: postgresql2
+   image: docker.io/postgres:13.1-alpine
+   environment:
+     POSTGRES_PASSWORD: test2
+     POSTGRES_DB: test
+     POSTGRES_USER: postgres2
+   commands:
+     - docker-entrypoint.sh -p 5433
  - name: mysql
-   image: mariadb:10.5
+   image: docker.io/mariadb:10.5
    environment:
      MYSQL_ROOT_PASSWORD: test
      MYSQL_DATABASE: test
+ - name: mysql2
+   image: docker.io/mariadb:10.5
+   environment:
+     MYSQL_ROOT_PASSWORD: test2
+     MYSQL_DATABASE: test
+   commands:
+     - docker-entrypoint.sh --port=3307
+ - name: mongodb
+   image: docker.io/mongo:5.0.5
+   environment:
+     MONGO_INITDB_ROOT_USERNAME: root
+     MONGO_INITDB_ROOT_PASSWORD: test
+ - name: mongodb2
+   image: docker.io/mongo:5.0.5
+   environment:
+     MONGO_INITDB_ROOT_USERNAME: root2
+     MONGO_INITDB_ROOT_PASSWORD: test2
+   commands:
+     - docker-entrypoint.sh --port=27018
+ clone:
+   skip_verify: true
steps:
  - name: build
-   image: alpine:3.13
+   image: docker.io/alpine:3.13
+   environment:
+     TEST_CONTAINER: true
    pull: always
    commands:
      - scripts/run-full-tests
---
kind: pipeline
name: documentation
+ type: exec
+ platform:
+   os: linux
+   arch: amd64
+ clone:
+   skip_verify: true
steps:
  - name: build
-   #image: plugins/docker
-   # Temporary work-around for https://github.com/drone-plugins/drone-docker/pull/327
-   image: techknowlogick/drone-docker
-   settings:
-     username:
+   environment:
+     USERNAME:
        from_secret: docker_username
-     password:
+     PASSWORD:
        from_secret: docker_password
-     repo: witten/borgmatic-docs
-     dockerfile: docs/Dockerfile
+     IMAGE_NAME: projects.torsion.org/borgmatic-collective/borgmatic:docs
+   commands:
+     - podman login --username "$USERNAME" --password "$PASSWORD" projects.torsion.org
+     - podman build --tag "$IMAGE_NAME" --file docs/Dockerfile --storage-opt "overlay.mount_program=/usr/bin/fuse-overlayfs" .
+     - podman push "$IMAGE_NAME"
trigger:
+ repo:
+   - borgmatic-collective/borgmatic
  branch:
    - master
+   - main
  event:
    - push

View File

@@ -1,4 +1,5 @@
const pluginSyntaxHighlight = require("@11ty/eleventy-plugin-syntaxhighlight");
+ const codeClipboard = require("eleventy-plugin-code-clipboard");
const inclusiveLangPlugin = require("@11ty/eleventy-plugin-inclusive-language");
const navigationPlugin = require("@11ty/eleventy-navigation");
@@ -6,6 +7,7 @@ module.exports = function(eleventyConfig) {
eleventyConfig.addPlugin(pluginSyntaxHighlight);
eleventyConfig.addPlugin(inclusiveLangPlugin);
eleventyConfig.addPlugin(navigationPlugin);
+ eleventyConfig.addPlugin(codeClipboard);
let markdownIt = require("markdown-it");
let markdownItAnchor = require("markdown-it-anchor");
@@ -23,8 +25,7 @@
}
};
let markdownItAnchorOptions = {
-   permalink: true,
-   permalinkClass: "direct-link"
+   permalink: markdownItAnchor.permalink.headerLink()
};
eleventyConfig.setLibrary(
@@ -32,10 +33,13 @@ module.exports = function(eleventyConfig) {
markdownIt(markdownItOptions)
.use(markdownItAnchor, markdownItAnchorOptions)
.use(markdownItReplaceLink)
+ .use(codeClipboard.markdownItCopyButton)
);
eleventyConfig.addPassthroughCopy({"docs/static": "static"});
eleventyConfig.setLiquidOptions({dynamicPartials: false});
return {
templateFormats: [
"md",

1
.flake8 Normal file
View File

@@ -0,0 +1 @@
select = Q0

413
NEWS
View File

@@ -1,3 +1,406 @@
1.8.0.dev0
* #575: BREAKING: For the "borgmatic borg" action, instead of implicitly injecting
repository/archive into the resulting Borg command-line, pass repository to Borg via an
environment variable and make archive available for explicit use in your commands. See the
documentation for more information:
https://torsion.org/borgmatic/docs/how-to/run-arbitrary-borg-commands/
* #719: Fix an error when running "borg key export" through borgmatic.
* #720: Fix an error when dumping a MySQL database and the "exclude_nodump" option is set.
* When merging two configuration files, error gracefully if the two files do not adhere to the same
format.
* BREAKING: Remove the deprecated (and silently ignored) "--successful" flag on the "list" action,
as newer versions of Borg list successful (non-checkpoint) archives by default.
* All deprecated configuration option values now generate warning logs.
* Remove the deprecated (and non-functional) "--excludes" flag in favor of excludes within
configuration.
1.7.15
* #326: Add configuration options and command-line flags for backing up a database from one
location while restoring it somewhere else (see the sketch at the end of this version's notes).
* #399: Add a documentation troubleshooting note for MySQL/MariaDB authentication errors.
* #529: Remove upgrade-borgmatic-config command for upgrading borgmatic 1.1.0 INI-style
configuration.
* #529: Deprecate generate-borgmatic-config in favor of new "config generate" action.
* #529: Deprecate validate-borgmatic-config in favor of new "config validate" action.
* #697, #712, #716: Extract borgmatic configuration from backup via new "config bootstrap"
action—even when borgmatic has no configuration yet!
* #669: Add sample systemd user service for running borgmatic as a non-root user.
* #711, #713: Fix an error when "data" check time files are accessed without getting upgraded
first.
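For illustration, a minimal sketch of the #326 restore-elsewhere options in a borgmatic
configuration file, assuming the "restore_*" option names this release introduces; the database
name and hosts are hypothetical:

```yaml
hooks:
  postgresql_databases:
    - name: orders                         # hypothetical database name
      hostname: production.example.org     # host the dump is taken from
      # Hypothetical values: restore the dump to a different host/port
      # than the one it was dumped from.
      restore_hostname: staging.example.org
      restore_port: 5433
```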
1.7.14
* #484: Add a new verbosity level (-2) to disable output entirely (for console, syslog, log file,
or monitoring), so not even errors are shown.
* #688: Tweak archive check probing logic to use the newest timestamp found when multiple exist.
* #659: Add Borg 2 date-based matching flags to various actions for archive selection.
* #703: Fix an error when loading the configuration schema on Fedora Linux.
* #704: Fix "check" action error when repository and archive checks are configured but the archive
check gets skipped due to the configured frequency.
* #706: Fix "--archive latest" on "list" and "info" actions that only worked on the first of
multiple configured repositories.
1.7.13
* #375: Restore particular PostgreSQL schemas from a database dump via "borgmatic restore --schema"
flag. See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/backup-your-databases/#restore-particular-schemas
* #678: Fix error from PostgreSQL when dumping a database with a "format" of "plain".
* #678: Fix PostgreSQL hook to support "psql_command" and "pg_restore_command" options containing
commands with arguments.
* #678: Fix calls to psql in PostgreSQL hook to ignore "~/.psqlrc", whose settings can break
database dumping.
* #680: Add support for logging each log line as a JSON object via global "--log-json" flag.
* #682: Fix "source_directories_must_exist" option to expand globs and tildes in source directories.
* #684: Rename "master" development branch to "main" to use more inclusive language. You'll need to
update your development checkouts accordingly.
* #686: Add fish shell completion script so you can tab-complete on the borgmatic command-line. See
the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/set-up-backups/#shell-completion
* #687: Fix borgmatic error when not finding the configuration schema for certain "pip install
--editable" development installs.
* #688: Fix archive checks being skipped even when particular archives haven't been checked
recently. This occurred when using multiple borgmatic configuration files with different
"archive_name_format"s, for instance.
* #691: Fix error in "borgmatic restore" action when the configured repository path is relative
instead of absolute.
* #694: Run "borgmatic borg" action without capturing output so interactive prompts and flags like
"--progress" still work.
1.7.12
* #413: Add "log_file" context to command hooks so your scripts can consume the borgmatic log file.
See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/
* #666, #670: Fix error when running the "info" action with the "--match-archives" or "--archive"
flags. Also fix the "--match-archives"/"--archive" flags to correctly override the
"match_archives" configuration option for the "transfer", "list", "rlist", and "info" actions.
* #668: Fix error when running the "prune" action with both "archive_name_format" and "prefix"
options set.
* #672: Selectively shallow merge certain mappings or sequences when including configuration files.
See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/#shallow-merge
* #672: Selectively omit list values when including configuration files. See the documentation for
more information:
https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/#list-merge
* #673: View the results of configuration file merging via "validate-borgmatic-config --show" flag.
See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/#debugging-includes
* Add optional support for running end-to-end tests and building documentation with rootless Podman
instead of Docker.
1.7.11
* #479, #588: BREAKING: Automatically use the "archive_name_format" option to filter which archives
get used for borgmatic actions that operate on multiple archives. Override this behavior with the
new "match_archives" option in the storage section. This change is "breaking" in that it silently
changes which archives get considered for "rlist", "prune", "check", etc. See the documentation
for more information:
https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/#archive-naming
* #479, #588: The "prefix" options have been deprecated in favor of the new "archive_name_format"
auto-matching behavior and the "match_archives" option.
* #658: Add "--log-file-format" flag for customizing the log message format. See the documentation
for more information:
https://torsion.org/borgmatic/docs/how-to/inspect-your-backups/#logging-to-file
* #662: Fix regression in which the "check_repositories" option failed to match repositories.
* #663: Fix regression in which the "transfer" action produced a traceback.
* Add spellchecking of source code during test runs.
1.7.10
* #396: When a database command errors, display and log the error message instead of swallowing it.
* #501: Optionally error if a source directory does not exist via "source_directories_must_exist"
option in borgmatic's location configuration.
* #576: Add support for "file://" paths within "repositories" option.
* #612: Define and use custom constants in borgmatic configuration files (example sketched below).
See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/#constant-interpolation
* #618: Add support for BORG_FILES_CACHE_TTL environment variable via "borg_files_cache_ttl" option
in borgmatic's storage configuration.
* #623: Fix confusing message when an error occurs running actions for a configuration file.
* #635: Add optional repository labels so you can select a repository via "--repository yourlabel"
at the command-line. See the configuration reference for more information:
https://torsion.org/borgmatic/docs/reference/configuration/
* #649: Add documentation on backing up a database running in a container:
https://torsion.org/borgmatic/docs/how-to/backup-your-databases/#containers
* #655: Fix error when databases are configured and a source directory doesn't exist.
* Add code style plugins to enforce use of Python f-strings and prevent single-letter variables.
To join in the pedantry, refresh your test environment with "tox --recreate".
* Rename scripts/run-full-dev-tests to scripts/run-end-to-end-dev-tests and make it run end-to-end
tests only. Continue using tox to run unit and integration tests.
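For illustration, a minimal sketch of #612 constant interpolation; the constant name and paths are
hypothetical. Each "{name}" placeholder is replaced with the corresponding constant's value when
the configuration file loads:

```yaml
constants:
  user: foo
location:
  source_directories:
    - /home/{user}/.config    # becomes /home/foo/.config
    - /home/{user}/projects   # becomes /home/foo/projects
```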
1.7.9
* #295: Add a SQLite database dump/restore hook (example sketched below).
* #304: Change the default action order when no actions are specified on the command-line to:
"create", "prune", "compact", "check". If you'd like to retain the old ordering ("prune" and
"compact" first), then specify actions explicitly on the command-line.
* #304: Run any command-line actions in the order specified instead of using a fixed ordering.
* #564: Add "--repository" flag to all actions where it makes sense, so you can run borgmatic on
a single configured repository instead of all of them.
* #628: Add a Healthchecks "log" state to send borgmatic logs to Healthchecks without signalling
success or failure.
* #647: Add "--strip-components all" feature on the "extract" action to remove leading path
components of files you extract. Must be used with the "--path" flag.
* Add support for Python 3.11.
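For illustration, a minimal sketch of the #295 SQLite hook; the label and database path are
hypothetical:

```yaml
hooks:
  sqlite_databases:
    - name: myapp                        # hypothetical label for the dump
      path: /var/lib/myapp/data.sqlite3  # hypothetical database file
```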
1.7.8
* #620: With the "create" action and the "--list" ("--files") flag, only show excluded files at
verbosity 2.
* #621: Add optional authentication to the ntfy monitoring hook.
* With the "create" action, only one of "--list" ("--files") and "--progress" flags can be used.
This lines up with the new behavior in Borg 2.0.0b5.
* Internally support new Borg 2.0.0b5 "--filter" status characters / item flags for the "create"
action.
* Fix the "create" action with the "--dry-run" flag querying for databases when a PostgreSQL/MySQL
"all" database is configured. Now, these queries are skipped due to the dry run.
* Add "--repository" flag to the "rcreate" action to optionally select one configured repository to
create.
* Add "--progress" flag to the "transfer" action, new in Borg 2.0.0b5.
* Add "checkpoint_volume" configuration option to creates checkpoints every specified number of
bytes during a long-running backup, new in Borg 2.0.0b5.
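For illustration, a minimal sketch of the "checkpoint_volume" option; the value is hypothetical
and counts bytes written between checkpoints:

```yaml
storage:
  checkpoint_volume: 1073741824   # hypothetical: checkpoint roughly every 1 GB
```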
1.7.7
* #642: Add MySQL database hook "add_drop_database" configuration option to control whether dumped
MySQL databases get dropped right before restore (example sketched below).
* #643: Fix for potential data loss (data not getting backed up) when dumping large "directory"
format PostgreSQL/MongoDB databases. Prior to the fix, these dumps would not finish writing to
disk before Borg consumed them. Now, the dumping process completes before Borg starts. This only
applies to "directory" format databases; other formats still stream to Borg without using
temporary disk space.
* Fix MongoDB "directory" format to work with mongodump/mongorestore without error. Prior to this
fix, only the "archive" format worked.
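For illustration, a minimal sketch of the #642 "add_drop_database" option; the database name is
hypothetical:

```yaml
hooks:
  mysql_databases:
    - name: posts               # hypothetical database name
      add_drop_database: false  # omit the DROP DATABASE statement from the dump
```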
1.7.6
* #393, #438, #560: Optionally dump "all" PostgreSQL/MySQL databases to separate files instead of
one combined dump file, allowing more convenient restores of individual databases. You can enable
this by specifying the database dump "format" option when the database is named "all" (example
sketched below).
* #602: Fix logs that interfere with JSON output by making warnings go to stderr instead of stdout.
* #622: Fix traceback when include merging configuration files on ARM64.
* #629: Skip warning about excluded special files when no special files have been excluded.
* #630: Add configuration options for database command customization: "list_options",
"restore_options", and "analyze_options" for PostgreSQL, "restore_options" for MySQL, and
"restore_options" for MongoDB.
1.7.5
* #311: Override PostgreSQL dump/restore commands via configuration options.
* #604: Fix traceback when a configuration section is present but lacking any options.
* #607: Clarify documentation examples for include merging and deep merging.
* #611: Fix "data" consistency check to support "check_last" and consistency "prefix" options.
* #613: Clarify documentation about multiple repositories and separate configuration files.
1.7.4
* #596: Fix special file detection erroring when broken symlinks are encountered.
* #597, #598: Fix regression in which "check" action errored on certain systems ("Cannot determine
Borg repository ID").
1.7.3
* #357: Add "break-lock" action for removing any repository and cache locks leftover from Borg
aborting.
* #360: To prevent Borg hangs, unconditionally delete stale named pipes before dumping databases.
* #587: When database hooks are enabled, auto-exclude special files from a "create" action to
prevent Borg from hanging. You can override/prevent this behavior by explicitly setting the
"read_special" option to true.
* #587: Warn when ignoring a configured "read_special" value of false, as true is needed when
database hooks are enabled.
* #589: Update sample systemd service file to allow system "idle" (e.g. a video monitor turning
off) while borgmatic is running.
* #590: Fix for potential data loss (data not getting backed up) when the "patterns_from" option
was used with "source_directories" (or the "~/.borgmatic" path existed, which got injected into
"source_directories" implicitly). The fix is for borgmatic to convert "source_directories" into
patterns whenever "patterns_from" is used, working around a Borg bug:
https://github.com/borgbackup/borg/issues/6994
* #590: In "borgmatic create --list" output, display which files get excluded from the backup due
to patterns or excludes.
* #591: Add support for Borg 2's "--match-archives" flag. This replaces "--glob-archives", which
borgmatic now treats as an alias for "--match-archives". But note that the two flags have
slightly different syntax. See the Borg 2 changelog for more information:
https://borgbackup.readthedocs.io/en/2.0.0b3/changes.html#version-2-0-0b3-2022-10-02
* Fix for "borgmatic --archive latest" not finding the latest archive when a verbosity is set.
1.7.2
* #577: Fix regression in which "borgmatic info --archive ..." showed repository info instead of
archive info with Borg 1.
* #582: Fix hang when database hooks are enabled and "patterns" contains a parent directory of
"~/.borgmatic".
1.7.1
* #542: Make the "source_directories" option optional. This is useful for "check"-only setups or
using "patterns" exclusively.
* #574: Fix for potential data loss (data not getting backed up) when the "patterns" option was
used with "source_directories" (or the "~/.borgmatic" path existed, which got injected into
"source_directories" implicitly). The fix is for borgmatic to convert "source_directories" into
patterns whenever "patterns" is used, working around a Borg bug:
https://github.com/borgbackup/borg/issues/6994
1.7.0
* #463: Add "before_actions" and "after_actions" command hooks that run before/after all the
actions for each repository. These new hooks are a good place to run per-repository steps like
mounting/unmounting a remote filesystem.
* #463: Update documentation to cover per-repository configurations:
https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/
* #557: Support for Borg 2 while still working with Borg 1. This includes new borgmatic actions
like "rcreate" (replaces "init"), "rlist" (list archives in repository), "rinfo" (show repository
info), and "transfer" (for upgrading Borg repositories). For the most part, borgmatic tries to
smooth over differences between Borg 1 and 2 to make your upgrade process easier. However, there
are still a few cases where Borg made breaking changes. See the Borg 2.0 changelog for more
information: https://www.borgbackup.org/releases/borg-2.0.html
* #557: If you install Borg 2, you'll need to manually upgrade your existing Borg 1 repositories
before use. Note that Borg 2 stable is not yet released as of this borgmatic release, so don't
use Borg 2 for production until it is! See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/upgrade/#upgrading-borg
* #557: Rename several configuration options to match Borg 2: "remote_rate_limit" is now
"upload_rate_limit", "numeric_owner" is "numeric_ids", and "bsd_flags" is "flags". borgmatic
still works with the old options.
* #557: Remote repository paths without the "ssh://" syntax are deprecated but still supported for
now. Remote repository paths containing "~" are deprecated in borgmatic and no longer work in
Borg 2.
* #557: Omitting the "--archive" flag on the "list" action is deprecated when using Borg 2. Use
the new "rlist" action instead.
* #557: The "--dry-run" flag can now be used with the "rcreate"/"init" action.
* #565: Fix handling of "repository" and "data" consistency checks to prevent invalid Borg flags.
* #566: Modify "mount" and "extract" actions to require the "--repository" flag when multiple
repositories are configured.
* #571: BREAKING: Remove old-style command-line action flags like "--create, "--list", etc. If
you're already using actions like "create" and "list" instead, this change should not affect you.
* #571: BREAKING: Rename "--files" flag on "prune" action to "--list", as it lists archives, not
files.
* #571: Add "--list" as alias for "--files" flag on "create" and "export-tar" actions.
* Add support for disabling TLS verification in Healthchecks monitoring hook with "verify_tls"
option.
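For illustration, a minimal sketch of the #463 per-repository command hooks; the mount point and
commands are hypothetical:

```yaml
hooks:
  before_actions:
    - mount /mnt/backup-drive    # runs before this repository's actions
  after_actions:
    - umount /mnt/backup-drive   # runs after this repository's actions
```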
1.6.6
* #559: Update documentation about configuring multiple consistency checks or multiple databases.
* #560: Fix all database hooks to error when the requested database to restore isn't present in the
Borg archive.
* #561: Fix command-line "--override" flag to continue supporting old configuration file formats.
* #563: Fix traceback with "create" action and "--json" flag when a database hook is configured.
1.6.5
* #553: Fix logging to include the full traceback when Borg experiences an internal error, not just
the first few lines.
* #554: Fix all monitoring hooks to warn if the server returns an HTTP 4xx error. This can happen
with Healthchecks, for instance, when using an invalid ping URL.
* #555: Fix environment variable plumbing so options like "encryption_passphrase" and
"encryption_passcommand" in one configuration file aren't used for other configuration files.
1.6.4
* #546, #382: Keep your repository passphrases and database passwords outside of borgmatic's
configuration file with environment variable interpolation. See the documentation for more
information: https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/
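For illustration, a minimal sketch of environment variable interpolation; the variable names are
hypothetical, and each "${NAME}" is resolved from the environment when the configuration file
loads:

```yaml
storage:
  encryption_passphrase: ${YOUR_PASSPHRASE}
hooks:
  postgresql_databases:
    - name: users              # hypothetical database name
      password: ${PG_PASSWORD}
```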
1.6.3
* #541: Add "borgmatic list --find" flag for searching for files across multiple archives, useful
for hunting down that file you accidentally deleted so you can extract it. See the documentation
for more information:
https://torsion.org/borgmatic/docs/how-to/inspect-your-backups/#searching-for-a-file
* #543: Add a monitoring hook for sending push notifications via ntfy. See the documentation for
more information: https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#ntfy-hook
* Fix Bash completion script to no longer alter your shell's settings (complain about unset
variables or error on pipe failures).
* Deprecate "borgmatic list --successful" flag, as listing only non-checkpoint (successful)
archives is now the default in newer versions of Borg.
1.6.2
* #523: Reduce the default consistency check frequency and support configuring the frequency
independently for each check (example sketched below). Also add "borgmatic check --force" flag to
ignore configured frequencies. See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/deal-with-very-large-backups/#check-frequency
* #536: Fix generate-borgmatic-config to support more complex schema changes like the new
Healthchecks configuration options when the "--source" flag is used.
* #538: Add support for "borgmatic borg debug" command.
* #539: Add "generate-borgmatic-config --overwrite" flag to replace an existing destination file.
* Add Bash completion script so you can tab-complete the borgmatic command-line. See the
documentation for more information:
https://torsion.org/borgmatic/docs/how-to/set-up-backups/#shell-completion
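For illustration, a minimal sketch of #523 per-check frequencies; the frequency values are
hypothetical:

```yaml
consistency:
  checks:
    - name: repository
      frequency: 2 weeks
    - name: archives
      frequency: 1 month
```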
1.6.1
* #294: Add Healthchecks monitoring hook "ping_body_limit" option to configure how many bytes of
logs to send to the Healthchecks server.
* #402: Remove the error when "archive_name_format" is specified but a retention prefix isn't.
* #420: Warn when an unsupported variable is used in a hook command.
* #439: Change connection failures for monitoring hooks (Healthchecks, Cronitor, PagerDuty, and
Cronhub) to be warnings instead of errors. This way, the monitoring system failing does not block
backups.
* #460: Add Healthchecks monitoring hook "send_logs" option to enable/disable sending borgmatic
logs to the Healthchecks server.
* #525: Add Healthchecks monitoring hook "states" option to only enable pinging for particular
monitoring states (start, finish, fail).
* #528: Improve the error message when a configuration override contains an invalid value.
* #531: BREAKING: When deep merging common configuration, merge colliding list values by appending
them. Previously, one list replaced the other.
* #532: When a configuration include is a relative path, load it from either the current working
directory or from the directory containing the file doing the including. Previously, only the
working directory was used.
* Add a randomized delay to the sample systemd timer to spread out the load on a server.
* Change the configuration format for borgmatic monitoring hooks (Healthchecks, Cronitor,
PagerDuty, and Cronhub) to specify the ping URL / integration key as a named option (example
sketched below). The intent is to support additional options (some in this release). This change
is backwards-compatible.
* Add emojis to documentation table of contents to make it easier to find particular how-to and
reference guides at a glance.
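For illustration, a minimal sketch of the named-option hook format alongside this release's
"ping_body_limit" (#294) and "states" (#525) options; the ping URL and values are hypothetical:

```yaml
hooks:
  healthchecks:
    ping_url: https://hc-ping.com/your-uuid  # hypothetical ping URL
    ping_body_limit: 100000
    states:
      - finish
      - fail
```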
1.6.0
* #381: BREAKING: Greatly simplify configuration file reuse by deep merging when including common
configuration. See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/#include-merging
* #473: BREAKING: Instead of executing "before" command hooks before all borgmatic actions run (and
"after" hooks after), execute these hooks right before/after the corresponding action. E.g.,
"before_check" now runs immediately before the "check" action. This better supports running
timing-sensitive tasks like pausing containers. Side effect: before/after command hooks now run
once for each configured repository instead of once per configuration file. Additionally, the
"repositories" interpolated variable has been changed to "repository", containing the path to the
current repository for the hook. See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/
* #513: Add mention of sudo's "secure_path" option to borgmatic installation documentation.
* #515: Fix "borgmatic borg key ..." to pass parameters to Borg in the correct order.
* #516: Fix handling of TERM signal to exit borgmatic, not just forward the signal to Borg.
* #517: Fix borgmatic exit code (so it's zero) when initial Borg calls fail but later retries
succeed.
* Change Healthchecks logs truncation size from 10k bytes to 100k bytes, corresponding to that
same change on Healthchecks.io.
1.5.24
* #431: Add "working_directory" option to support source directories with relative paths.
* #444: When loading a configuration file that is unreadable due to file permissions, warn instead
of erroring. This supports running borgmatic as a non-root user with configuration in ~/.config
even if there is an unreadable global configuration file in /etc.
* #469: Add "repositories" context to "before_*" and "after_*" command action hooks. See the
documentation for more information:
https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/
* #486: Fix handling of "patterns_from" and "exclude_from" options to error instead of warning when
referencing unreadable files and "create" action is run.
* #507: Fix Borg usage error in the "compact" action when running "borgmatic --dry-run". Now, skip
"compact" entirely during a dry run.
1.5.23
* #394: Compact repository segments and free space with new "borgmatic compact" action. Borg 1.2+
only. Also run "compact" by default when no actions are specified, as "prune" in Borg 1.2 no
longer frees up space unless "compact" is run.
* #394: When using the "atime", "bsd_flags", "numeric_owner", or "remote_rate_limit" options,
tailor the flags passed to Borg depending on the Borg version.
* #480, #482: Fix traceback when a YAML validation error occurs.
1.5.22
* #288: Add database dump hook for MongoDB (example sketched below).
* #470: Move mysqldump options to the beginning of the command due to MySQL bug 30994.
* #471: When command-line configuration override produces a parse error, error cleanly instead of
tracebacking.
* #476: Fix unicode error when restoring particular MySQL databases.
* Drop support for Python 3.6, which has been end-of-lifed.
* Add support for Python 3.10.
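For illustration, a minimal sketch of the #288 MongoDB hook; the database name and credentials are
hypothetical:

```yaml
hooks:
  mongodb_databases:
    - name: orders         # hypothetical database name
      hostname: localhost
      username: root       # hypothetical credentials
      password: test
```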
1.5.21
* #28: Optionally retry failing backups via "retries" and "retry_wait" configuration options
(example sketched below).
* #306: Add "list_options" MySQL configuration option for passing additional arguments to MySQL
list command.
* #459: Add support for old version (2.x) of jsonschema library.
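For illustration, a minimal sketch of the #28 retry options together with the #306 "list_options"
option, assuming the retry options live in the storage section; all values are hypothetical:

```yaml
storage:
  retries: 3        # retry a failing repository up to 3 times
  retry_wait: 60    # seconds to wait between retries
hooks:
  mysql_databases:
    - name: all
      list_options: --defaults-extra-file=/root/.my.cnf
```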
1.5.20
* Re-release with correct version without dev0 tag.
1.5.19
* #387: Fix error when configured source directories are not present on the filesystem at the time
of backup. Now, Borg will complain, but the backup will still continue.
* #455: Mention changing borgmatic path in cron documentation.
* Update sample systemd service file with more granular read-only filesystem settings.
* Move Gitea and GitHub hosting from a personal namespace to an organization for better
collaboration with related projects.
* 1k ★s on GitHub!
1.5.18
* #389: Fix "message too long" error when logging to rsyslog.
* #440: Fix traceback that can occur when dumping a database.
@@ -26,7 +429,7 @@
* #398: Clarify canonical home of borgmatic in documentation.
* #406: Clarify in documentation that spaces in path names should not be backslashed.
* #423: Fix error handling to error loudly when Borg gets killed due to running out of memory!
- * Fix build so as not to attempt to build and push documentation for a non-master branch.
+ * Fix build so as not to attempt to build and push documentation for a non-main branch.
* "Fix" build failure with Alpine Edge by switching from Edge to Alpine 3.13.
* Move #borgmatic IRC channel from Freenode to Libera Chat due to Freenode takeover drama.
IRC connection info: https://torsion.org/borgmatic/#issues
@@ -89,7 +492,7 @@
configuration schema descriptions.
1.5.6
- * #292: Allow before_backup and similiar hooks to exit with a soft failure without altering the
+ * #292: Allow before_backup and similar hooks to exit with a soft failure without altering the
monitoring status on Healthchecks or other providers. Support this by waiting to ping monitoring
services with a "start" status until after before_* hooks finish. Failures in before_* hooks
still trigger a monitoring "fail" status.
@@ -158,7 +561,7 @@
* For "list" and "info" actions, show repository names even at verbosity 0.
1.4.22
- * #276, #285: Disable colored output when "--json" flag is used, so as to produce valid JSON ouput.
+ * #276, #285: Disable colored output when "--json" flag is used, so as to produce valid JSON output.
* After a backup of a database dump in directory format, properly remove the dump directory.
* In "borgmatic --help", don't expand $HOME in listing of default "--config" paths.
@@ -530,7 +933,7 @@
* #77: Skip non-"*.yaml" config filenames in /etc/borgmatic.d/ so as not to parse backup files,
editor swap files, etc.
* #81: Document user-defined hooks run before/after backup, or on error.
- * Add code style guidelines to the documention.
+ * Add code style guidelines to the documentation.
1.2.0
* #61: Support for Borg --list option via borgmatic command-line to list all archives.
@@ -568,7 +971,7 @@
* #49: Support for Borg experimental --patterns-from and --patterns options for specifying mixed
includes/excludes.
* Moved issue tracker from Taiga to integrated Gitea tracker at
- https://projects.torsion.org/witten/borgmatic/issues
+ https://projects.torsion.org/borgmatic-collective/borgmatic/issues
1.1.12
* #46: Declare dependency on pykwalify 1.6 or above, as older versions yield "Unknown key: version"

View File

@@ -11,7 +11,7 @@ borgmatic is simple, configuration-driven backup software for servers and
workstations. Protect your files with client-side encryption. Backup your
databases too. Monitor it all with integrated third-party services.
- The canonical home of borgmatic is at <a href="https://torsion.org/borgmatic">https://torsion.org/borgmatic</a>.
+ The canonical home of borgmatic is at <a href="https://torsion.org/borgmatic">https://torsion.org/borgmatic/</a>
Here's an example configuration file:
@@ -24,10 +24,10 @@ location:
# Paths of local or remote repositories to backup to.
repositories:
-   - 1234@usw-s001.rsync.net:backups.borg
-   - k8pDxu32@k8pDxu32.repo.borgbase.com:repo
-   - user1@scp2.cdn.lima-labs.com:repo
-   - /var/lib/backups/local.borg
+   - path: ssh://k8pDxu32@k8pDxu32.repo.borgbase.com/./repo
+     label: borgbase
+   - path: /var/lib/backups/local.borg
+     label: local
retention:
# Retention policy for how many backups to keep.
@@ -38,8 +38,9 @@ retention:
consistency:
# List of checks to run to validate your backups.
checks:
-   - repository
-   - archives
+   - name: repository
+   - name: archives
+     frequency: 2 weeks
hooks:
# Custom preparation scripts to run.
@@ -55,9 +56,9 @@ hooks:
```
Want to see borgmatic in action? Check out the <a
href="https://asciinema.org/a/203761" target="_blank">screencast</a>.
href="https://asciinema.org/a/203761?autoplay=1" target="_blank">screencast</a>.
<script src="https://asciinema.org/a/203761.js" id="asciicast-203761" async></script>
<a href="https://asciinema.org/a/203761?autoplay=1" target="_blank"><img src="https://asciinema.org/a/203761.png" width="480"></a>
borgmatic is powered by [Borg Backup](https://www.borgbackup.org/).
@@ -66,11 +67,13 @@ borgmatic is powered by [Borg Backup](https://www.borgbackup.org/).
<a href="https://www.postgresql.org/"><img src="docs/static/postgresql.png" alt="PostgreSQL" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://www.mysql.com/"><img src="docs/static/mysql.png" alt="MySQL" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://mariadb.com/"><img src="docs/static/mariadb.png" alt="MariaDB" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://www.mongodb.com/"><img src="docs/static/mongodb.png" alt="MongoDB" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://sqlite.org/"><img src="docs/static/sqlite.png" alt="SQLite" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://healthchecks.io/"><img src="docs/static/healthchecks.png" alt="Healthchecks" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://cronitor.io/"><img src="docs/static/cronitor.png" alt="Cronitor" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://cronhub.io/"><img src="docs/static/cronhub.png" alt="Cronhub" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://www.pagerduty.com/"><img src="docs/static/pagerduty.png" alt="PagerDuty" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://www.rsync.net/cgi-bin/borg.cgi?campaign=borg&adgroup=borgmatic"><img src="docs/static/rsyncnet.png" alt="rsync.net" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://ntfy.sh/"><img src="docs/static/ntfy.png" alt="ntfy" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://www.borgbase.com/?utm_source=borgmatic"><img src="docs/static/borgbase.png" alt="BorgBase" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
@@ -79,8 +82,8 @@ borgmatic is powered by [Borg Backup](https://www.borgbackup.org/).
Your first step is to [install and configure
borgmatic](https://torsion.org/borgmatic/docs/how-to/set-up-backups/).
- For additional documentation, check out the links above for <a
- href="https://torsion.org/borgmatic/#documentation">borgmatic how-to and
+ For additional documentation, check out the links above (left panel on wide screens)
+ for <a href="https://torsion.org/borgmatic/#documentation">borgmatic how-to and
reference guides</a>.
@@ -88,40 +91,49 @@ reference guides</a>.
Need somewhere to store your encrypted off-site backups? The following hosting
providers include specific support for Borg/borgmatic—and fund borgmatic
- development and hosting when you use these links to sign up. (These are
- referral links, but without any tracking scripts or cookies.)
+ development and hosting when you use these referral links to sign up:
<ul>
<li class="referral"><a href="https://www.rsync.net/cgi-bin/borg.cgi?campaign=borg&adgroup=borgmatic">rsync.net</a>: Cloud Storage provider with full support for borg and any other SSH/SFTP tool</li>
<li class="referral"><a href="https://www.borgbase.com/?utm_source=borgmatic">BorgBase</a>: Borg hosting service with support for monitoring, 2FA, and append-only repos</li>
<li class="referral"><a href="https://storage.lima-labs.com/special-pricing-offer-for-borgmatic-users/">Lima-Labs</a>: Affordable, reliable cloud data storage accessable via SSH/SCP/FTP for Borg backups or any other bulk storage needs</li>
<li class="referral"><a href="https://hetzner.cloud/?ref=v9dOJ98Ic9I8">Hetzner</a>: A "storage box" that includes support for Borg</li>
</ul>
- Additionally, [Hetzner](https://www.hetzner.com/storage/storage-box) has a
- compatible storage offering, but does not currently fund borgmatic
- development or hosting.
+ Additionally, rsync.net has a compatible storage offering, but does not fund
+ borgmatic development or hosting.
## Support and contributing
### Issues
- You've got issues? Or an idea for a feature enhancement? We've got an [issue
- tracker](https://projects.torsion.org/witten/borgmatic/issues). In order to
- create a new issue or comment on an issue, you'll need to [login
- first](https://projects.torsion.org/user/login). Note that you can login with
- an existing GitHub account if you prefer.
- If you'd like to chat with borgmatic developers or users, head on over to the
- `#borgmatic` IRC channel on Libera Chat, either via <a
- href="https://web.libera.chat/#borgmatic">web chat</a> or a
- native <a href="ircs://irc.libera.chat:6697">IRC client</a>. If you
- don't get a response right away, please hang around a while—or file a ticket
- instead.
+ Are you experiencing an issue with borgmatic? Or do you have an idea for a
+ feature enhancement? Head on over to our [issue
+ tracker](https://projects.torsion.org/borgmatic-collective/borgmatic/issues).
+ In order to create a new issue or add a comment, you'll need to
+ [register](https://projects.torsion.org/user/sign_up?invite_code=borgmatic)
+ first. If you prefer to use an existing GitHub account, you can skip account
+ creation and [login directly](https://projects.torsion.org/user/login).
Also see the [security
policy](https://torsion.org/borgmatic/docs/security-policy/) for any security
issues.
+ ### Social
+ Follow [borgmatic on Mastodon](https://fosstodon.org/@borgmatic).
+ ### Chat
+ To chat with borgmatic developers or users, check out the `#borgmatic`
+ IRC channel on Libera Chat, either via <a
+ href="https://web.libera.chat/#borgmatic">web chat</a> or a native <a
+ href="ircs://irc.libera.chat:6697">IRC client</a>. If you don't get a response
+ right away, please hang around a while—or file a ticket instead.
### Other
Other questions or comments? Contact
[witten@torsion.org](mailto:witten@torsion.org).
@@ -129,21 +141,25 @@ Other questions or comments? Contact
### Contributing
borgmatic [source code is
- available](https://projects.torsion.org/witten/borgmatic) and is also mirrored
- on [GitHub](https://github.com/witten/borgmatic) for convenience.
+ available](https://projects.torsion.org/borgmatic-collective/borgmatic) and is also mirrored
+ on [GitHub](https://github.com/borgmatic-collective/borgmatic) for convenience.
borgmatic is licensed under the GNU General Public License version 3 or any
later version.
If you'd like to contribute to borgmatic development, please feel free to
- submit a [Pull Request](https://projects.torsion.org/witten/borgmatic/pulls)
- or open an [issue](https://projects.torsion.org/witten/borgmatic/issues) first
- to discuss your idea. We also accept Pull Requests on GitHub, if that's more
- your thing. In general, contributions are very welcome. We don't bite!
+ submit a [Pull
+ Request](https://projects.torsion.org/borgmatic-collective/borgmatic/pulls) or
+ open an
+ [issue](https://projects.torsion.org/borgmatic-collective/borgmatic/issues) to
+ discuss your idea. Note that you'll need to
+ [register](https://projects.torsion.org/user/sign_up?invite_code=borgmatic)
+ first. We also accept Pull Requests on GitHub, if that's more your thing. In
+ general, contributions are very welcome. We don't bite!
Also, please check out the [borgmatic development
how-to](https://torsion.org/borgmatic/docs/how-to/develop-on-borgmatic/) for
info on cloning source code, running tests, etc.
<a href="https://build.torsion.org/witten/borgmatic" alt="build status">![Build Status](https://build.torsion.org/api/badges/witten/borgmatic/status.svg?ref=refs/heads/master)</a>
<a href="https://build.torsion.org/borgmatic-collective/borgmatic" alt="build status">![Build Status](https://build.torsion.org/api/badges/borgmatic-collective/borgmatic/status.svg?ref=refs/heads/main)</a>

View File

@@ -7,8 +7,8 @@ permalink: security-policy/index.html
While we want to hear about security vulnerabilities in all versions of
borgmatic, security fixes are only made to the most recently released version.
- It's simply not practical for our small volunteer effort to maintain multiple
- release branches and put out separate security patches for each.
+ It's not practical for our small volunteer effort to maintain multiple release
+ branches and put out separate security patches for each.
## Reporting a vulnerability

View File

@@ -0,0 +1,9 @@
import argparse
def update_arguments(arguments, **updates):
'''
Given an argparse.Namespace instance of command-line arguments and one or more keyword argument
updates to perform, return a copy of the arguments with those updates applied.
'''
return argparse.Namespace(**dict(vars(arguments), **updates))

45
borgmatic/actions/borg.py Normal file
View File

@@ -0,0 +1,45 @@
import logging
import borgmatic.borg.borg
import borgmatic.borg.rlist
import borgmatic.config.validate
logger = logging.getLogger(__name__)
def run_borg(
repository,
storage,
local_borg_version,
borg_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "borg" action for the given repository.
'''
if borg_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, borg_arguments.repository
):
logger.info(
f'{repository.get("label", repository["path"])}: Running arbitrary Borg command'
)
archive_name = borgmatic.borg.rlist.resolve_archive_name(
repository['path'],
borg_arguments.archive,
storage,
local_borg_version,
global_arguments,
local_path,
remote_path,
)
borgmatic.borg.borg.run_arbitrary_borg(
repository['path'],
storage,
local_borg_version,
options=borg_arguments.options,
archive=archive_name,
local_path=local_path,
remote_path=remote_path,
)

View File

@@ -0,0 +1,34 @@
import logging
import borgmatic.borg.break_lock
import borgmatic.config.validate
logger = logging.getLogger(__name__)
def run_break_lock(
repository,
storage,
local_borg_version,
break_lock_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "break-lock" action for the given repository.
'''
if break_lock_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, break_lock_arguments.repository
):
logger.info(
f'{repository.get("label", repository["path"])}: Breaking repository and cache locks'
)
borgmatic.borg.break_lock.break_lock(
repository['path'],
storage,
local_borg_version,
global_arguments,
local_path=local_path,
remote_path=remote_path,
)

View File

@@ -0,0 +1,62 @@
import logging
import borgmatic.borg.check
import borgmatic.config.validate
import borgmatic.hooks.command
logger = logging.getLogger(__name__)
def run_check(
config_filename,
repository,
location,
storage,
consistency,
hooks,
hook_context,
local_borg_version,
check_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "check" action for the given repository.
'''
if check_arguments.repository and not borgmatic.config.validate.repositories_match(
repository, check_arguments.repository
):
return
borgmatic.hooks.command.execute_hook(
hooks.get('before_check'),
hooks.get('umask'),
config_filename,
'pre-check',
global_arguments.dry_run,
**hook_context,
)
logger.info(f'{repository.get("label", repository["path"])}: Running consistency checks')
borgmatic.borg.check.check_archives(
repository['path'],
location,
storage,
consistency,
local_borg_version,
global_arguments,
local_path=local_path,
remote_path=remote_path,
progress=check_arguments.progress,
repair=check_arguments.repair,
only_checks=check_arguments.only,
force=check_arguments.force,
)
borgmatic.hooks.command.execute_hook(
hooks.get('after_check'),
hooks.get('umask'),
config_filename,
'post-check',
global_arguments.dry_run,
**hook_context,
)

View File

@@ -0,0 +1,68 @@
import logging
import borgmatic.borg.compact
import borgmatic.borg.feature
import borgmatic.config.validate
import borgmatic.hooks.command
logger = logging.getLogger(__name__)
def run_compact(
config_filename,
repository,
storage,
retention,
hooks,
hook_context,
local_borg_version,
compact_arguments,
global_arguments,
dry_run_label,
local_path,
remote_path,
):
'''
Run the "compact" action for the given repository.
'''
if compact_arguments.repository and not borgmatic.config.validate.repositories_match(
repository, compact_arguments.repository
):
return
borgmatic.hooks.command.execute_hook(
hooks.get('before_compact'),
hooks.get('umask'),
config_filename,
'pre-compact',
global_arguments.dry_run,
**hook_context,
)
if borgmatic.borg.feature.available(borgmatic.borg.feature.Feature.COMPACT, local_borg_version):
logger.info(
f'{repository.get("label", repository["path"])}: Compacting segments{dry_run_label}'
)
borgmatic.borg.compact.compact_segments(
global_arguments.dry_run,
repository['path'],
storage,
local_borg_version,
global_arguments,
local_path=local_path,
remote_path=remote_path,
progress=compact_arguments.progress,
cleanup_commits=compact_arguments.cleanup_commits,
threshold=compact_arguments.threshold,
)
else: # pragma: nocover
logger.info(
f'{repository.get("label", repository["path"])}: Skipping compact (only available/needed in Borg 1.2+)'
)
borgmatic.hooks.command.execute_hook(
hooks.get('after_compact'),
hooks.get('umask'),
config_filename,
'post-compact',
global_arguments.dry_run,
**hook_context,
)

View File

@@ -0,0 +1,105 @@
import json
import logging
import os
import borgmatic.borg.extract
import borgmatic.borg.rlist
import borgmatic.config.validate
import borgmatic.hooks.command
from borgmatic.borg.state import DEFAULT_BORGMATIC_SOURCE_DIRECTORY
logger = logging.getLogger(__name__)
def get_config_paths(bootstrap_arguments, global_arguments, local_borg_version):
'''
Given the bootstrap arguments (including the repository and archive name, the borgmatic
source directory, the destination directory, and whether to strip components), the global
arguments (including the dry run flag), and the local Borg version, return the config paths
from the manifest.json file in the borgmatic source directory after extracting it from the
repository.

Raise ValueError if the manifest JSON is missing, can't be decoded, or doesn't contain the
expected configuration path data.
'''
borgmatic_source_directory = (
bootstrap_arguments.borgmatic_source_directory or DEFAULT_BORGMATIC_SOURCE_DIRECTORY
)
borgmatic_manifest_path = os.path.expanduser(
os.path.join(borgmatic_source_directory, 'bootstrap', 'manifest.json')
)
extract_process = borgmatic.borg.extract.extract_archive(
global_arguments.dry_run,
bootstrap_arguments.repository,
borgmatic.borg.rlist.resolve_archive_name(
bootstrap_arguments.repository,
bootstrap_arguments.archive,
{},
local_borg_version,
global_arguments,
),
[borgmatic_manifest_path],
{},
{},
local_borg_version,
global_arguments,
extract_to_stdout=True,
)
manifest_json = extract_process.stdout.read()
if not manifest_json:
raise ValueError(
'Cannot read configuration paths from archive due to missing bootstrap manifest'
)
try:
manifest_data = json.loads(manifest_json)
except json.JSONDecodeError as error:
raise ValueError(
f'Cannot read configuration paths from archive due to invalid bootstrap manifest JSON: {error}'
)
try:
return manifest_data['config_paths']
except KeyError:
raise ValueError(
'Cannot read configuration paths from archive due to invalid bootstrap manifest'
)
def run_bootstrap(bootstrap_arguments, global_arguments, local_borg_version):
'''
Run the "bootstrap" action for the given repository.
Raise ValueError if the bootstrap configuration could not be loaded.
Raise CalledProcessError or OSError if Borg could not be run.
'''
manifest_config_paths = get_config_paths(
bootstrap_arguments, global_arguments, local_borg_version
)
logger.info(f"Bootstrapping config paths: {', '.join(manifest_config_paths)}")
borgmatic.borg.extract.extract_archive(
global_arguments.dry_run,
bootstrap_arguments.repository,
borgmatic.borg.rlist.resolve_archive_name(
bootstrap_arguments.repository,
bootstrap_arguments.archive,
{},
local_borg_version,
global_arguments,
),
[config_path.lstrip(os.path.sep) for config_path in manifest_config_paths],
{},
{},
local_borg_version,
global_arguments,
extract_to_stdout=False,
destination_path=bootstrap_arguments.destination,
strip_components=bootstrap_arguments.strip_components,
progress=bootstrap_arguments.progress,
)


@ -0,0 +1,46 @@
import logging
import borgmatic.config.generate
import borgmatic.config.validate
logger = logging.getLogger(__name__)
def run_generate(generate_arguments, global_arguments):
'''
Given the generate arguments and the global arguments, each as an argparse.Namespace instance,
run the "generate" action.
Raise FileExistsError if a file already exists at the destination path and the generate
arguments do not have overwrite set.
'''
dry_run_label = ' (dry run; not actually writing anything)' if global_arguments.dry_run else ''
logger.answer(
f'Generating a configuration file at: {generate_arguments.destination_filename}{dry_run_label}'
)
borgmatic.config.generate.generate_sample_configuration(
global_arguments.dry_run,
generate_arguments.source_filename,
generate_arguments.destination_filename,
borgmatic.config.validate.schema_filename(),
overwrite=generate_arguments.overwrite,
)
if generate_arguments.source_filename:
logger.answer(
f'''
Merged in the contents of configuration file at: {generate_arguments.source_filename}
To review the changes made, run:
diff --unified {generate_arguments.source_filename} {generate_arguments.destination_filename}'''
)
logger.answer(
'''
This includes all available configuration options with example values, the few
required options as indicated. Please edit the file to suit your needs.
If you ever need help: https://torsion.org/borgmatic/#issues'''
)


@ -0,0 +1,22 @@
import logging
import borgmatic.config.generate
logger = logging.getLogger(__name__)
def run_validate(validate_arguments, configs):
'''
Given the validate arguments as an argparse.Namespace instance and a dict of configuration
filename to corresponding parsed configuration, run the "validate" action.
Most of the validation is actually performed implicitly by the standard borgmatic configuration
loading machinery prior to here, so this function mainly exists to support additional validate
flags like "--show".
'''
if validate_arguments.show:
for config_path, config in configs.items():
if len(configs) > 1:
logger.answer('---')
logger.answer(borgmatic.config.generate.render_configuration(config))

borgmatic/actions/create.py

@ -0,0 +1,136 @@
import json
import logging
import os
try:
import importlib_metadata
except ModuleNotFoundError: # pragma: nocover
import importlib.metadata as importlib_metadata
import borgmatic.borg.create
import borgmatic.borg.state
import borgmatic.config.validate
import borgmatic.hooks.command
import borgmatic.hooks.dispatch
import borgmatic.hooks.dump
logger = logging.getLogger(__name__)
def create_borgmatic_manifest(location, config_paths, dry_run):
'''
Create a borgmatic manifest file to store the paths to the configuration files used to create
the archive.
'''
if dry_run:
return
borgmatic_source_directory = location.get(
'borgmatic_source_directory', borgmatic.borg.state.DEFAULT_BORGMATIC_SOURCE_DIRECTORY
)
borgmatic_manifest_path = os.path.expanduser(
os.path.join(borgmatic_source_directory, 'bootstrap', 'manifest.json')
)
if not os.path.exists(borgmatic_manifest_path):
os.makedirs(os.path.dirname(borgmatic_manifest_path), exist_ok=True)
with open(borgmatic_manifest_path, 'w') as config_list_file:
json.dump(
{
'borgmatic_version': importlib_metadata.version('borgmatic'),
'config_paths': config_paths,
},
config_list_file,
)
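The written manifest is small; its contents might look like the following (the version
string and config path are placeholders, not values from this diff):

{
    "borgmatic_version": "1.7.15",
    "config_paths": ["/etc/borgmatic/config.yaml"]
}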
def run_create(
config_filename,
repository,
location,
storage,
hooks,
hook_context,
local_borg_version,
create_arguments,
global_arguments,
dry_run_label,
local_path,
remote_path,
):
'''
Run the "create" action for the given repository.
If create_arguments.json is True, yield the JSON output from creating the archive.
'''
if create_arguments.repository and not borgmatic.config.validate.repositories_match(
repository, create_arguments.repository
):
return
borgmatic.hooks.command.execute_hook(
hooks.get('before_backup'),
hooks.get('umask'),
config_filename,
'pre-backup',
global_arguments.dry_run,
**hook_context,
)
logger.info(f'{repository.get("label", repository["path"])}: Creating archive{dry_run_label}')
borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
'remove_database_dumps',
hooks,
repository['path'],
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
location,
global_arguments.dry_run,
)
active_dumps = borgmatic.hooks.dispatch.call_hooks(
'dump_databases',
hooks,
repository['path'],
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
location,
global_arguments.dry_run,
)
create_borgmatic_manifest(
location, global_arguments.used_config_paths, global_arguments.dry_run
)
stream_processes = [process for processes in active_dumps.values() for process in processes]
json_output = borgmatic.borg.create.create_archive(
global_arguments.dry_run,
repository['path'],
location,
storage,
local_borg_version,
global_arguments,
local_path=local_path,
remote_path=remote_path,
progress=create_arguments.progress,
stats=create_arguments.stats,
json=create_arguments.json,
list_files=create_arguments.list_files,
stream_processes=stream_processes,
)
if json_output: # pragma: nocover
yield json.loads(json_output)
borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
'remove_database_dumps',
hooks,
config_filename,
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
location,
global_arguments.dry_run,
)
borgmatic.hooks.command.execute_hook(
hooks.get('after_backup'),
hooks.get('umask'),
config_filename,
'post-backup',
global_arguments.dry_run,
**hook_context,
)


@ -0,0 +1,50 @@
import logging
import borgmatic.borg.export_tar
import borgmatic.borg.rlist
import borgmatic.config.validate
logger = logging.getLogger(__name__)
def run_export_tar(
repository,
storage,
local_borg_version,
export_tar_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "export-tar" action for the given repository.
'''
if export_tar_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, export_tar_arguments.repository
):
logger.info(
f'{repository["path"]}: Exporting archive {export_tar_arguments.archive} as tar file'
)
borgmatic.borg.export_tar.export_tar_archive(
global_arguments.dry_run,
repository['path'],
borgmatic.borg.rlist.resolve_archive_name(
repository['path'],
export_tar_arguments.archive,
storage,
local_borg_version,
global_arguments,
local_path,
remote_path,
),
export_tar_arguments.paths,
export_tar_arguments.destination,
storage,
local_borg_version,
global_arguments,
local_path=local_path,
remote_path=remote_path,
tar_filter=export_tar_arguments.tar_filter,
list_files=export_tar_arguments.list_files,
strip_components=export_tar_arguments.strip_components,
)


@ -0,0 +1,71 @@
import logging
import borgmatic.borg.extract
import borgmatic.borg.rlist
import borgmatic.config.validate
import borgmatic.hooks.command
logger = logging.getLogger(__name__)
def run_extract(
config_filename,
repository,
location,
storage,
hooks,
hook_context,
local_borg_version,
extract_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "extract" action for the given repository.
'''
borgmatic.hooks.command.execute_hook(
hooks.get('before_extract'),
hooks.get('umask'),
config_filename,
'pre-extract',
global_arguments.dry_run,
**hook_context,
)
if extract_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, extract_arguments.repository
):
logger.info(
f'{repository.get("label", repository["path"])}: Extracting archive {extract_arguments.archive}'
)
borgmatic.borg.extract.extract_archive(
global_arguments.dry_run,
repository['path'],
borgmatic.borg.rlist.resolve_archive_name(
repository['path'],
extract_arguments.archive,
storage,
local_borg_version,
global_arguments,
local_path,
remote_path,
),
extract_arguments.paths,
location,
storage,
local_borg_version,
global_arguments,
local_path=local_path,
remote_path=remote_path,
destination_path=extract_arguments.destination,
strip_components=extract_arguments.strip_components,
progress=extract_arguments.progress,
)
borgmatic.hooks.command.execute_hook(
hooks.get('after_extract'),
hooks.get('umask'),
config_filename,
'post-extract',
global_arguments.dry_run,
**hook_context,
)

borgmatic/actions/info.py

@ -0,0 +1,52 @@
import json
import logging
import borgmatic.actions.arguments
import borgmatic.borg.info
import borgmatic.borg.rlist
import borgmatic.config.validate
logger = logging.getLogger(__name__)
def run_info(
repository,
storage,
local_borg_version,
info_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "info" action for the given repository and archive.
If info_arguments.json is True, yield the JSON output from the info for the archive.
'''
if info_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, info_arguments.repository
):
if not info_arguments.json: # pragma: nocover
logger.answer(
f'{repository.get("label", repository["path"])}: Displaying archive summary information'
)
archive_name = borgmatic.borg.rlist.resolve_archive_name(
repository['path'],
info_arguments.archive,
storage,
local_borg_version,
global_arguments,
local_path,
remote_path,
)
json_output = borgmatic.borg.info.display_archives_info(
repository['path'],
storage,
local_borg_version,
borgmatic.actions.arguments.update_arguments(info_arguments, archive=archive_name),
global_arguments,
local_path,
remote_path,
)
if json_output: # pragma: nocover
yield json.loads(json_output)

borgmatic/actions/list.py

@ -0,0 +1,53 @@
import json
import logging
import borgmatic.actions.arguments
import borgmatic.borg.list
import borgmatic.config.validate
logger = logging.getLogger(__name__)
def run_list(
repository,
storage,
local_borg_version,
list_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "list" action for the given repository and archive.
If list_arguments.json is True, yield the JSON output from listing the archive.
'''
if list_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, list_arguments.repository
):
if not list_arguments.json: # pragma: nocover
if list_arguments.find_paths:
logger.answer(f'{repository.get("label", repository["path"])}: Searching archives')
elif not list_arguments.archive:
logger.answer(f'{repository.get("label", repository["path"])}: Listing archives')
archive_name = borgmatic.borg.rlist.resolve_archive_name(
repository['path'],
list_arguments.archive,
storage,
local_borg_version,
global_arguments,
local_path,
remote_path,
)
json_output = borgmatic.borg.list.list_archive(
repository['path'],
storage,
local_borg_version,
borgmatic.actions.arguments.update_arguments(list_arguments, archive=archive_name),
global_arguments,
local_path,
remote_path,
)
if json_output: # pragma: nocover
yield json.loads(json_output)


@ -0,0 +1,49 @@
import logging
import borgmatic.borg.mount
import borgmatic.borg.rlist
import borgmatic.config.validate
logger = logging.getLogger(__name__)
def run_mount(
repository,
storage,
local_borg_version,
mount_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "mount" action for the given repository.
'''
if mount_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, mount_arguments.repository
):
if mount_arguments.archive:
logger.info(
f'{repository.get("label", repository["path"])}: Mounting archive {mount_arguments.archive}'
)
else: # pragma: nocover
logger.info(f'{repository.get("label", repository["path"])}: Mounting repository')
borgmatic.borg.mount.mount_archive(
repository['path'],
borgmatic.borg.rlist.resolve_archive_name(
repository['path'],
mount_arguments.archive,
storage,
local_borg_version,
global_arguments,
local_path,
remote_path,
),
mount_arguments,
storage,
local_borg_version,
global_arguments,
local_path=local_path,
remote_path=remote_path,
)


@ -0,0 +1,59 @@
import logging
import borgmatic.borg.prune
import borgmatic.config.validate
import borgmatic.hooks.command
logger = logging.getLogger(__name__)
def run_prune(
config_filename,
repository,
storage,
retention,
hooks,
hook_context,
local_borg_version,
prune_arguments,
global_arguments,
dry_run_label,
local_path,
remote_path,
):
'''
Run the "prune" action for the given repository.
'''
if prune_arguments.repository and not borgmatic.config.validate.repositories_match(
repository, prune_arguments.repository
):
return
borgmatic.hooks.command.execute_hook(
hooks.get('before_prune'),
hooks.get('umask'),
config_filename,
'pre-prune',
global_arguments.dry_run,
**hook_context,
)
logger.info(f'{repository.get("label", repository["path"])}: Pruning archives{dry_run_label}')
borgmatic.borg.prune.prune_archives(
global_arguments.dry_run,
repository['path'],
storage,
retention,
local_borg_version,
prune_arguments,
global_arguments,
local_path=local_path,
remote_path=remote_path,
)
borgmatic.hooks.command.execute_hook(
hooks.get('after_prune'),
hooks.get('umask'),
config_filename,
'post-prune',
global_arguments.dry_run,
**hook_context,
)


@ -0,0 +1,41 @@
import logging
import borgmatic.borg.rcreate
import borgmatic.config.validate
logger = logging.getLogger(__name__)
def run_rcreate(
repository,
storage,
local_borg_version,
rcreate_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "rcreate" action for the given repository.
'''
if rcreate_arguments.repository and not borgmatic.config.validate.repositories_match(
repository, rcreate_arguments.repository
):
return
logger.info(f'{repository.get("label", repository["path"])}: Creating repository')
borgmatic.borg.rcreate.create_repository(
global_arguments.dry_run,
repository['path'],
storage,
local_borg_version,
global_arguments,
rcreate_arguments.encryption_mode,
rcreate_arguments.source_repository,
rcreate_arguments.copy_crypt_key,
rcreate_arguments.append_only,
rcreate_arguments.storage_quota,
rcreate_arguments.make_parent_dirs,
local_path=local_path,
remote_path=remote_path,
)


@ -0,0 +1,382 @@
import copy
import logging
import os
import borgmatic.borg.extract
import borgmatic.borg.list
import borgmatic.borg.mount
import borgmatic.borg.rlist
import borgmatic.borg.state
import borgmatic.config.validate
import borgmatic.hooks.dispatch
import borgmatic.hooks.dump
logger = logging.getLogger(__name__)
UNSPECIFIED_HOOK = object()
def get_configured_database(
hooks, archive_database_names, hook_name, database_name, configuration_database_name=None
):
'''
Find the first database with the given hook name and database name in the configured hooks
dict and the given archive database names dict (from hook name to database names contained in
a particular backup archive). If UNSPECIFIED_HOOK is given as the hook name, search all database
hooks for the named database. If a configuration database name is given, use that instead of the
database name to look up the database in the given hooks configuration.
Return the found database as a tuple of (found hook name, database configuration dict).
'''
if not configuration_database_name:
configuration_database_name = database_name
if hook_name == UNSPECIFIED_HOOK:
hooks_to_search = hooks
else:
hooks_to_search = {hook_name: hooks[hook_name]}
return next(
(
(name, hook_database)
for (name, hook) in hooks_to_search.items()
for hook_database in hook
if hook_database['name'] == configuration_database_name
and database_name in archive_database_names.get(name, [])
),
(None, None),
)
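For example, with a hypothetical configuration (not from this diff):

hooks = {'postgresql_databases': [{'name': 'users'}, {'name': 'orders'}]}
archive_database_names = {'postgresql_databases': ['users']}
get_configured_database(hooks, archive_database_names, UNSPECIFIED_HOOK, 'users')
# -> ('postgresql_databases', {'name': 'users'})
get_configured_database(hooks, archive_database_names, UNSPECIFIED_HOOK, 'orders')
# -> (None, None), because no "orders" dump exists in the archive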
def get_configured_hook_name_and_database(hooks, database_name):
'''
Find the hook name and first database dict with the given database name in the configured hooks
dict. This searches across all database hooks.
'''
def restore_single_database(
repository,
location,
storage,
hooks,
local_borg_version,
global_arguments,
local_path,
remote_path,
archive_name,
hook_name,
database,
connection_params,
): # pragma: no cover
'''
Given (among other things) an archive name, a database hook name, the hostname,
port, username and password as connection params, and a configured database
configuration dict, restore that database from the archive.
'''
logger.info(
f'{repository.get("label", repository["path"])}: Restoring database {database["name"]}'
)
dump_pattern = borgmatic.hooks.dispatch.call_hooks(
'make_database_dump_pattern',
hooks,
repository['path'],
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
location,
database['name'],
)[hook_name]
# Kick off a single database extract to stdout.
extract_process = borgmatic.borg.extract.extract_archive(
dry_run=global_arguments.dry_run,
repository=repository['path'],
archive=archive_name,
paths=borgmatic.hooks.dump.convert_glob_patterns_to_borg_patterns([dump_pattern]),
location_config=location,
storage_config=storage,
local_borg_version=local_borg_version,
global_arguments=global_arguments,
local_path=local_path,
remote_path=remote_path,
destination_path='/',
# A directory format dump isn't a single file, and therefore can't extract
# to stdout. In this case, the extract_process return value is None.
extract_to_stdout=bool(database.get('format') != 'directory'),
)
# Run a single database restore, consuming the extract stdout (if any).
borgmatic.hooks.dispatch.call_hooks(
'restore_database_dump',
{hook_name: [database]},
repository['path'],
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
location,
global_arguments.dry_run,
extract_process,
connection_params,
)
def collect_archive_database_names(
repository,
archive,
location,
storage,
local_borg_version,
global_arguments,
local_path,
remote_path,
):
'''
Given a local or remote repository path, a resolved archive name, a location configuration dict,
a storage configuration dict, the local Borg version, the global arguments as an argparse.Namespace,
and local and remote Borg paths, query the archive for the names of databases it contains and
return them as a dict from hook name to a sequence of database names.
'''
borgmatic_source_directory = os.path.expanduser(
location.get(
'borgmatic_source_directory', borgmatic.borg.state.DEFAULT_BORGMATIC_SOURCE_DIRECTORY
)
).lstrip('/')
parent_dump_path = os.path.expanduser(
borgmatic.hooks.dump.make_database_dump_path(borgmatic_source_directory, '*_databases/*/*')
)
dump_paths = borgmatic.borg.list.capture_archive_listing(
repository,
archive,
storage,
local_borg_version,
global_arguments,
list_path=parent_dump_path,
local_path=local_path,
remote_path=remote_path,
)
# Determine the database names corresponding to the dumps found in the archive and
# add them to archive_database_names.
archive_database_names = {}
for dump_path in dump_paths:
try:
(hook_name, _, database_name) = dump_path.split(
borgmatic_source_directory + os.path.sep, 1
)[1].split(os.path.sep)[0:3]
except (ValueError, IndexError):
logger.warning(
f'{repository}: Ignoring invalid database dump path "{dump_path}" in archive {archive}'
)
else:
if database_name not in archive_database_names.get(hook_name, []):
archive_database_names.setdefault(hook_name, []).extend([database_name])
return archive_database_names
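To make the path parsing above concrete, a hypothetical dump path listed from an archive
decomposes like so:

dump_path = '.borgmatic/postgresql_databases/localhost/users'
# With borgmatic_source_directory == '.borgmatic', splitting yields
# hook_name == 'postgresql_databases' and database_name == 'users', so
# archive_database_names ends up as {'postgresql_databases': ['users']}.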
def find_databases_to_restore(requested_database_names, archive_database_names):
'''
Given a sequence of requested database names to restore and a dict of hook name to the names of
databases found in an archive, return an expanded sequence of database names to restore,
replacing "all" with actual database names as appropriate.
Raise ValueError if any of the requested database names cannot be found in the archive.
'''
# A map from database hook name to the database names to restore for that hook.
restore_names = (
{UNSPECIFIED_HOOK: requested_database_names}
if requested_database_names
else {UNSPECIFIED_HOOK: ['all']}
)
# If "all" is in restore_names, then replace it with the names of dumps found within the
# archive.
if 'all' in restore_names[UNSPECIFIED_HOOK]:
restore_names[UNSPECIFIED_HOOK].remove('all')
for hook_name, database_names in archive_database_names.items():
restore_names.setdefault(hook_name, []).extend(database_names)
# If a database is to be restored as part of "all", then remove it from restore names so
# it doesn't get restored twice.
for database_name in database_names:
if database_name in restore_names[UNSPECIFIED_HOOK]:
restore_names[UNSPECIFIED_HOOK].remove(database_name)
if not restore_names[UNSPECIFIED_HOOK]:
restore_names.pop(UNSPECIFIED_HOOK)
combined_restore_names = set(
name for database_names in restore_names.values() for name in database_names
)
combined_archive_database_names = set(
name for database_names in archive_database_names.values() for name in database_names
)
missing_names = sorted(set(combined_restore_names) - combined_archive_database_names)
if missing_names:
joined_names = ', '.join(f'"{name}"' for name in missing_names)
raise ValueError(
f"Cannot restore database{'s' if len(missing_names) > 1 else ''} {joined_names} missing from archive"
)
return restore_names
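For example, with hypothetical database names, requesting "all" expands to every dump
found in the archive:

find_databases_to_restore(['all'], {'postgresql_databases': ['users', 'orders']})
# -> {'postgresql_databases': ['users', 'orders']}
find_databases_to_restore(['missing'], {'postgresql_databases': ['users']})
# -> raises ValueError, because "missing" has no dump in the archive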
def ensure_databases_found(restore_names, remaining_restore_names, found_names):
'''
Given a dict from hook name to database names to restore, a dict from hook name to remaining
database names to restore, and a sequence of found (actually restored) database names, raise
ValueError if requested databases to restore were missing from the archive and/or configuration.
'''
combined_restore_names = set(
name
for database_names in tuple(restore_names.values())
+ tuple(remaining_restore_names.values())
for name in database_names
)
if not combined_restore_names and not found_names:
raise ValueError('No databases were found to restore')
missing_names = sorted(set(combined_restore_names) - set(found_names))
if missing_names:
joined_names = ', '.join(f'"{name}"' for name in missing_names)
raise ValueError(
f"Cannot restore database{'s' if len(missing_names) > 1 else ''} {joined_names} missing from borgmatic's configuration"
)
def run_restore(
repository,
location,
storage,
hooks,
local_borg_version,
restore_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "restore" action for the given repository, but only if the repository matches the
requested repository in restore arguments.
Raise ValueError if a configured database could not be found to restore.
'''
if restore_arguments.repository and not borgmatic.config.validate.repositories_match(
repository, restore_arguments.repository
):
return
logger.info(
f'{repository.get("label", repository["path"])}: Restoring databases from archive {restore_arguments.archive}'
)
borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
'remove_database_dumps',
hooks,
repository['path'],
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
location,
global_arguments.dry_run,
)
archive_name = borgmatic.borg.rlist.resolve_archive_name(
repository['path'],
restore_arguments.archive,
storage,
local_borg_version,
global_arguments,
local_path,
remote_path,
)
archive_database_names = collect_archive_database_names(
repository['path'],
archive_name,
location,
storage,
local_borg_version,
global_arguments,
local_path,
remote_path,
)
restore_names = find_databases_to_restore(restore_arguments.databases, archive_database_names)
found_names = set()
remaining_restore_names = {}
connection_params = {
'hostname': restore_arguments.hostname,
'port': restore_arguments.port,
'username': restore_arguments.username,
'password': restore_arguments.password,
'restore_path': restore_arguments.restore_path,
}
for hook_name, database_names in restore_names.items():
for database_name in database_names:
found_hook_name, found_database = get_configured_database(
hooks, archive_database_names, hook_name, database_name
)
if not found_database:
remaining_restore_names.setdefault(found_hook_name or hook_name, []).append(
database_name
)
continue
found_names.add(database_name)
restore_single_database(
repository,
location,
storage,
hooks,
local_borg_version,
global_arguments,
local_path,
remote_path,
archive_name,
found_hook_name or hook_name,
dict(found_database, **{'schemas': restore_arguments.schemas}),
connection_params,
)
# For any databases that weren't found via exact matches in the hooks configuration, try to
# fall back to "all" entries.
for hook_name, database_names in remaining_restore_names.items():
for database_name in database_names:
found_hook_name, found_database = get_configured_database(
hooks, archive_database_names, hook_name, database_name, 'all'
)
if not found_database:
continue
found_names.add(database_name)
database = copy.copy(found_database)
database['name'] = database_name
restore_single_database(
repository,
location,
storage,
hooks,
local_borg_version,
global_arguments,
local_path,
remote_path,
archive_name,
found_hook_name or hook_name,
dict(database, **{'schemas': restore_arguments.schemas}),
connection_params,
)
borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
'remove_database_dumps',
hooks,
repository['path'],
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
location,
global_arguments.dry_run,
)
ensure_databases_found(restore_names, remaining_restore_names, found_names)


@ -0,0 +1,42 @@
import json
import logging
import borgmatic.borg.rinfo
import borgmatic.config.validate
logger = logging.getLogger(__name__)
def run_rinfo(
repository,
storage,
local_borg_version,
rinfo_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "rinfo" action for the given repository.
If rinfo_arguments.json is True, yield the JSON output from the info for the repository.
'''
if rinfo_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, rinfo_arguments.repository
):
if not rinfo_arguments.json: # pragma: nocover
logger.answer(
f'{repository.get("label", repository["path"])}: Displaying repository summary information'
)
json_output = borgmatic.borg.rinfo.display_repository_info(
repository['path'],
storage,
local_borg_version,
rinfo_arguments=rinfo_arguments,
global_arguments=global_arguments,
local_path=local_path,
remote_path=remote_path,
)
if json_output: # pragma: nocover
yield json.loads(json_output)


@ -0,0 +1,40 @@
import json
import logging
import borgmatic.borg.rlist
import borgmatic.config.validate
logger = logging.getLogger(__name__)
def run_rlist(
repository,
storage,
local_borg_version,
rlist_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "rlist" action for the given repository.
If rlist_arguments.json is True, yield the JSON output from listing the repository.
'''
if rlist_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, rlist_arguments.repository
):
if not rlist_arguments.json: # pragma: nocover
logger.answer(f'{repository.get("label", repository["path"])}: Listing repository')
json_output = borgmatic.borg.rlist.list_repository(
repository['path'],
storage,
local_borg_version,
rlist_arguments=rlist_arguments,
global_arguments=global_arguments,
local_path=local_path,
remote_path=remote_path,
)
if json_output: # pragma: nocover
yield json.loads(json_output)


@ -0,0 +1,32 @@
import logging
import borgmatic.borg.transfer
logger = logging.getLogger(__name__)
def run_transfer(
repository,
storage,
local_borg_version,
transfer_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "transfer" action for the given repository.
'''
logger.info(
f'{repository.get("label", repository["path"])}: Transferring archives to repository'
)
borgmatic.borg.transfer.transfer_archives(
global_arguments.dry_run,
repository['path'],
storage,
local_borg_version,
transfer_arguments,
global_arguments,
local_path=local_path,
remote_path=remote_path,
)


@ -1,45 +1,70 @@
import logging
from borgmatic.borg.flags import make_flags
from borgmatic.execute import execute_command
import borgmatic.commands.arguments
import borgmatic.logger
from borgmatic.borg import environment, flags
from borgmatic.execute import DO_NOT_CAPTURE, execute_command
logger = logging.getLogger(__name__)
REPOSITORYLESS_BORG_COMMANDS = {'serve', None}
BORG_SUBCOMMANDS_WITH_SUBCOMMANDS = {'key', 'debug'}
def run_arbitrary_borg(
repository, storage_config, options, archive=None, local_path='borg', remote_path=None
repository_path,
storage_config,
local_borg_version,
options,
archive=None,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, a storage config dict, a sequence of arbitrary
command-line Borg options, and an optional archive name, run an arbitrary Borg command on the
given repository/archive.
Given a local or remote repository path, a storage config dict, the local Borg version, a
sequence of arbitrary command-line Borg options, and an optional archive name, run an arbitrary
Borg command, passing in REPOSITORY and ARCHIVE environment variables for optional use in the
command.
'''
borgmatic.logger.add_custom_log_levels()
lock_wait = storage_config.get('lock_wait', None)
try:
options = options[1:] if options[0] == '--' else options
borg_command = options[0]
command_options = tuple(options[1:])
except IndexError:
borg_command = None
command_options = ()
repository_archive = '::'.join((repository, archive)) if repository and archive else repository
# Borg commands like "key" have a sub-command ("export", etc.) that must follow it.
command_options_start_index = 2 if options[0] in BORG_SUBCOMMANDS_WITH_SUBCOMMANDS else 1
borg_command = tuple(options[:command_options_start_index])
command_options = tuple(options[command_options_start_index:])
if borg_command and borg_command[0] in borgmatic.commands.arguments.ACTION_ALIASES.keys():
logger.warning(
f"Borg's {borg_command[0]} subcommand is supported natively by borgmatic. Try this instead: borgmatic {borg_command[0]}"
)
except IndexError:
borg_command = ()
command_options = ()
full_command = (
(local_path,)
+ ((borg_command,) if borg_command else ())
+ ((repository_archive,) if borg_command and repository_archive else ())
+ command_options
+ borg_command
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ make_flags('remote-path', remote_path)
+ make_flags('lock-wait', lock_wait)
+ flags.make_flags('remote-path', remote_path)
+ flags.make_flags('lock-wait', lock_wait)
+ command_options
)
return execute_command(
full_command, output_log_level=logging.WARNING, borg_local_path=local_path,
full_command,
output_file=DO_NOT_CAPTURE,
borg_local_path=local_path,
shell=True,
extra_environment=dict(
(environment.make_environment(storage_config) or {}),
**{
'BORG_REPO': repository_path,
'ARCHIVE': archive if archive else '',
},
),
)


@ -0,0 +1,37 @@
import logging
from borgmatic.borg import environment, flags
from borgmatic.execute import execute_command
logger = logging.getLogger(__name__)
def break_lock(
repository_path,
storage_config,
local_borg_version,
global_arguments,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, a storage configuration dict, the local Borg version,
an argparse.Namespace of global arguments, and optional local and remote Borg paths, break any
repository and cache locks leftover from Borg aborting.
'''
umask = storage_config.get('umask', None)
lock_wait = storage_config.get('lock_wait', None)
full_command = (
(local_path, 'break-lock')
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--umask', str(umask)) if umask else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ flags.make_repository_flags(repository_path, local_borg_version)
)
borg_environment = environment.make_environment(storage_config)
execute_command(full_command, borg_local_path=local_path, extra_environment=borg_environment)


@ -1,48 +1,213 @@
import argparse
import datetime
import hashlib
import itertools
import json
import logging
import os
import pathlib
from borgmatic.borg import extract
from borgmatic.borg import environment, extract, feature, flags, rinfo, state
from borgmatic.execute import DO_NOT_CAPTURE, execute_command
DEFAULT_CHECKS = ('repository', 'archives')
DEFAULT_PREFIX = '{hostname}-'
DEFAULT_CHECKS = (
{'name': 'repository', 'frequency': '1 month'},
{'name': 'archives', 'frequency': '1 month'},
)
logger = logging.getLogger(__name__)
def _parse_checks(consistency_config, only_checks=None):
def parse_checks(consistency_config, only_checks=None):
'''
Given a consistency config with a "checks" list, and an optional list of override checks,
transform them a tuple of named checks to run.
Given a consistency config with a "checks" sequence of dicts and an optional list of override
checks, return a tuple of named checks to run.
For example, given a retention config of:
{'checks': ['repository', 'archives']}
{'checks': ({'name': 'repository'}, {'name': 'archives'})}
This will be returned as:
('repository', 'archives')
If no "checks" option is present in the config, return the DEFAULT_CHECKS. If the checks value
is the string "disabled", return an empty tuple, meaning that no checks should be run.
If the "data" option is present, then make sure the "archives" option is included as well.
If no "checks" option is present in the config, return the DEFAULT_CHECKS. If a checks value
has a name of "disabled", return an empty tuple, meaning that no checks should be run.
'''
checks = [
check.lower() for check in (only_checks or consistency_config.get('checks', []) or [])
]
if checks == ['disabled']:
checks = only_checks or tuple(
check_config['name']
for check_config in (consistency_config.get('checks', None) or DEFAULT_CHECKS)
)
checks = tuple(check.lower() for check in checks)
if 'disabled' in checks:
if len(checks) > 1:
logger.warning(
'Multiple checks are configured, but one of them is "disabled"; not running any checks'
)
return ()
if 'data' in checks and 'archives' not in checks:
checks.append('archives')
return tuple(check for check in checks if check not in ('disabled', '')) or DEFAULT_CHECKS
return checks
def _make_check_flags(checks, check_last=None, prefix=None):
def parse_frequency(frequency):
'''
Given a parsed sequence of checks, transform it into tuple of command-line flags.
Given a frequency string with a number and a unit of time, return a corresponding
datetime.timedelta instance or None if the frequency is None or "always".
For instance, given "3 weeks", return datetime.timedelta(weeks=3)
Raise ValueError if the given frequency cannot be parsed.
'''
if not frequency:
return None
frequency = frequency.strip().lower()
if frequency == 'always':
return None
try:
number, time_unit = frequency.split(' ')
number = int(number)
except ValueError:
raise ValueError(f"Could not parse consistency check frequency '{frequency}'")
if not time_unit.endswith('s'):
time_unit += 's'
if time_unit == 'months':
number *= 30
time_unit = 'days'
elif time_unit == 'years':
number *= 365
time_unit = 'days'
try:
return datetime.timedelta(**{time_unit: number})
except TypeError:
raise ValueError(f"Could not parse consistency check frequency '{frequency}'")
def filter_checks_on_frequency(
location_config,
consistency_config,
borg_repository_id,
checks,
force,
archives_check_id=None,
):
'''
Given a location config, a consistency config with a "checks" sequence of dicts, a Borg
repository ID, a sequence of checks, whether to force checks to run, and an ID for the archives
check potentially being run (if any), filter down those checks based on the configured
"frequency" for each check as compared to its check time file.
In other words, a check whose check time file's timestamp is too new (based on the configured
frequency) will get cut from the returned sequence of checks. Example:
consistency_config = {
'checks': [
{
'name': 'archives',
'frequency': '2 weeks',
},
]
}
When this function is called with that consistency_config and "archives" in checks, "archives"
will get filtered out of the returned result if its check time file is newer than 2 weeks old,
indicating that it's not yet time to run that check again.
Raise ValueError if a frequency cannot be parsed.
'''
filtered_checks = list(checks)
if force:
return tuple(filtered_checks)
for check_config in consistency_config.get('checks', DEFAULT_CHECKS):
check = check_config['name']
if checks and check not in checks:
continue
frequency_delta = parse_frequency(check_config.get('frequency'))
if not frequency_delta:
continue
check_time = probe_for_check_time(
location_config, borg_repository_id, check, archives_check_id
)
if not check_time:
continue
# If we've not yet reached the time when the frequency dictates we're ready for another
# check, skip this check.
if datetime.datetime.now() < check_time + frequency_delta:
remaining = check_time + frequency_delta - datetime.datetime.now()
logger.info(
f'Skipping {check} check due to configured frequency; {remaining} until next check'
)
filtered_checks.remove(check)
return tuple(filtered_checks)
def make_archive_filter_flags(
local_borg_version, storage_config, checks, check_last=None, prefix=None
):
'''
Given the local Borg version, a storage configuration dict, a parsed sequence of checks, the
check last value, and a consistency check prefix, transform the checks into tuple of
command-line flags for filtering archives in a check command.
If a check_last value is given and "archives" is in checks, then include a "--last" flag. And if
a prefix value is given and "archives" is in checks, then include a "--match-archives" flag.
'''
if 'archives' in checks or 'data' in checks:
return (('--last', str(check_last)) if check_last else ()) + (
(
('--match-archives', f'sh:{prefix}*')
if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version)
else ('--glob-archives', f'{prefix}*')
)
if prefix
else (
flags.make_match_archives_flags(
storage_config.get('match_archives'),
storage_config.get('archive_name_format'),
local_borg_version,
)
)
)
if check_last:
logger.warning(
'Ignoring check_last option, as "archives" or "data" are not in consistency checks'
)
if prefix:
logger.warning(
'Ignoring consistency prefix option, as "archives" or "data" are not in consistency checks'
)
return ()
def make_archives_check_id(archive_filter_flags):
'''
Given a sequence of flags to filter archives, return a unique hash corresponding to those
particular flags. If there are no flags, return None.
'''
if not archive_filter_flags:
return None
return hashlib.sha256(' '.join(archive_filter_flags).encode()).hexdigest()
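For instance, with hypothetical filter flags:

make_archives_check_id(('--match-archives', 'sh:myhost-*'))
# -> the SHA-256 hex digest of '--match-archives sh:myhost-*'
make_archives_check_id(())  # -> None, since there's nothing to hash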
def make_check_flags(checks, archive_filter_flags):
'''
Given a parsed sequence of checks and a sequence of flags to filter archives, transform the
checks into tuple of command-line check flags.
For example, given parsed checks of:
@ -53,47 +218,156 @@ def _make_check_flags(checks, check_last=None, prefix=None):
('--repository-only',)
However, if both "repository" and "archives" are in checks, then omit them from the returned
flags because Borg does both checks by default.
Additionally, if a check_last value is given and "archives" is in checks, then include a
"--last" flag. And if a prefix value is given and "archives" is in checks, then include a
"--prefix" flag.
flags because Borg does both checks by default. If "data" is in checks, that implies "archives".
'''
if 'archives' in checks:
last_flags = ('--last', str(check_last)) if check_last else ()
prefix_flags = ('--prefix', prefix) if prefix else ()
if 'data' in checks:
data_flags = ('--verify-data',)
checks += ('archives',)
else:
last_flags = ()
prefix_flags = ()
if check_last:
logger.warning(
'Ignoring check_last option, as "archives" is not in consistency checks.'
)
if prefix:
logger.warning(
'Ignoring consistency prefix option, as "archives" is not in consistency checks.'
)
data_flags = ()
common_flags = last_flags + prefix_flags + (('--verify-data',) if 'data' in checks else ())
common_flags = (archive_filter_flags if 'archives' in checks else ()) + data_flags
if set(DEFAULT_CHECKS).issubset(set(checks)):
if {'repository', 'archives'}.issubset(set(checks)):
return common_flags
return (
tuple('--{}-only'.format(check) for check in checks if check in DEFAULT_CHECKS)
tuple(f'--{check}-only' for check in checks if check in ('repository', 'archives'))
+ common_flags
)
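Some illustrative outputs, using hypothetical archive filter flags:

make_check_flags(('repository',), ())
# -> ('--repository-only',)
make_check_flags(('repository', 'archives'), ('--match-archives', 'sh:myhost-*'))
# -> ('--match-archives', 'sh:myhost-*'), since Borg runs both checks by default
make_check_flags(('data',), ())
# -> ('--archives-only', '--verify-data'), because "data" implies "archives"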
def make_check_time_path(location_config, borg_repository_id, check_type, archives_check_id=None):
'''
Given a location configuration dict, a Borg repository ID, the name of a check type
("repository", "archives", etc.), and a unique hash of the archives filter flags, return a
path for recording that check's time (the time of that check last occurring).
'''
borgmatic_source_directory = os.path.expanduser(
location_config.get('borgmatic_source_directory', state.DEFAULT_BORGMATIC_SOURCE_DIRECTORY)
)
if check_type in ('archives', 'data'):
return os.path.join(
borgmatic_source_directory,
'checks',
borg_repository_id,
check_type,
archives_check_id if archives_check_id else 'all',
)
return os.path.join(
borgmatic_source_directory,
'checks',
borg_repository_id,
check_type,
)
def write_check_time(path): # pragma: no cover
'''
Record a check time of now as the modification time of the given path.
'''
logger.debug(f'Writing check time at {path}')
os.makedirs(os.path.dirname(path), mode=0o700, exist_ok=True)
pathlib.Path(path).touch(mode=0o600)
def read_check_time(path):
'''
Return the check time based on the modification time of the given path. Return None if the path
doesn't exist.
'''
logger.debug(f'Reading check time from {path}')
try:
return datetime.datetime.fromtimestamp(os.stat(path).st_mtime)
except FileNotFoundError:
return None
def probe_for_check_time(location_config, borg_repository_id, check, archives_check_id):
'''
Given a location configuration dict, a Borg repository ID, the name of a check type
("repository", "archives", etc.), and a unique hash of the archives filter flags, return a
the corresponding check time or None if such a check time does not exist.
When the check type is "archives" or "data", this function probes two different paths to find
the check time, e.g.:
~/.borgmatic/checks/1234567890/archives/9876543210
~/.borgmatic/checks/1234567890/archives/all
... and returns the maximum modification time of the files found (if any). The first path
represents a more specific archives check time (a check on a subset of archives), and the second
is a fallback to the last "all" archives check.
For other check types, this function reads from a single check time path, e.g.:
~/.borgmatic/checks/1234567890/repository
'''
check_times = (
read_check_time(group[0])
for group in itertools.groupby(
(
make_check_time_path(location_config, borg_repository_id, check, archives_check_id),
make_check_time_path(location_config, borg_repository_id, check),
)
)
)
try:
return max(check_time for check_time in check_times if check_time)
except ValueError:
return None
def upgrade_check_times(location_config, borg_repository_id):
'''
Given a location configuration dict and a Borg repository ID, upgrade any corresponding check
times on disk from old-style paths to new-style paths.
Currently, the only upgrade performed is renaming an archive or data check path that looks like:
~/.borgmatic/checks/1234567890/archives
to:
~/.borgmatic/checks/1234567890/archives/all
'''
for check_type in ('archives', 'data'):
new_path = make_check_time_path(location_config, borg_repository_id, check_type, 'all')
old_path = os.path.dirname(new_path)
temporary_path = f'{old_path}.temp'
if not os.path.isfile(old_path) and not os.path.isfile(temporary_path):
continue
logger.debug(f'Upgrading archives check time from {old_path} to {new_path}')
try:
os.rename(old_path, temporary_path)
except FileNotFoundError:
pass
os.mkdir(old_path)
os.rename(temporary_path, new_path)
def check_archives(
repository,
repository_path,
location_config,
storage_config,
consistency_config,
local_borg_version,
global_arguments,
local_path='borg',
remote_path=None,
progress=None,
repair=None,
only_checks=None,
force=None,
):
'''
Given a local or remote repository path, a storage config dict, a consistency config dict,
@ -102,14 +376,47 @@ def check_archives(
Borg archives for consistency.
If there are no consistency checks to run, skip running them.
Raise ValueError if the Borg repository ID cannot be determined.
'''
checks = _parse_checks(consistency_config, only_checks)
try:
borg_repository_id = json.loads(
rinfo.display_repository_info(
repository_path,
storage_config,
local_borg_version,
argparse.Namespace(json=True),
global_arguments,
local_path,
remote_path,
)
)['repository']['id']
except (json.JSONDecodeError, KeyError):
raise ValueError(f'Cannot determine Borg repository ID for {repository_path}')
upgrade_check_times(location_config, borg_repository_id)
check_last = consistency_config.get('check_last', None)
prefix = consistency_config.get('prefix')
configured_checks = parse_checks(consistency_config, only_checks)
lock_wait = None
extra_borg_options = storage_config.get('extra_borg_options', {}).get('check', '')
archive_filter_flags = make_archive_filter_flags(
local_borg_version, storage_config, configured_checks, check_last, prefix
)
archives_check_id = make_archives_check_id(archive_filter_flags)
if set(checks).intersection(set(DEFAULT_CHECKS + ('data',))):
lock_wait = storage_config.get('lock_wait', None)
checks = filter_checks_on_frequency(
location_config,
consistency_config,
borg_repository_id,
configured_checks,
force,
archives_check_id,
)
if set(checks).intersection({'repository', 'archives', 'data'}):
lock_wait = storage_config.get('lock_wait')
verbosity_flags = ()
if logger.isEnabledFor(logging.INFO):
@ -117,26 +424,43 @@ def check_archives(
if logger.isEnabledFor(logging.DEBUG):
verbosity_flags = ('--debug', '--show-rc')
prefix = consistency_config.get('prefix', DEFAULT_PREFIX)
full_command = (
(local_path, 'check')
+ (('--repair',) if repair else ())
+ _make_check_flags(checks, check_last, prefix)
+ make_check_flags(checks, archive_filter_flags)
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ verbosity_flags
+ (('--progress',) if progress else ())
+ (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
+ (repository,)
+ flags.make_repository_flags(repository_path, local_borg_version)
)
borg_environment = environment.make_environment(storage_config)
# The Borg repair option triggers an interactive prompt, which won't work when output is
# captured. And progress messes with the terminal directly.
if repair or progress:
execute_command(full_command, output_file=DO_NOT_CAPTURE)
execute_command(
full_command, output_file=DO_NOT_CAPTURE, extra_environment=borg_environment
)
else:
execute_command(full_command)
execute_command(full_command, extra_environment=borg_environment)
for check in checks:
write_check_time(
make_check_time_path(location_config, borg_repository_id, check, archives_check_id)
)
if 'extract' in checks:
extract.extract_last_archive_dry_run(repository, lock_wait, local_path, remote_path)
extract.extract_last_archive_dry_run(
storage_config,
local_borg_version,
global_arguments,
repository_path,
lock_wait,
local_path,
remote_path,
)
write_check_time(make_check_time_path(location_config, borg_repository_id, 'extract'))

borgmatic/borg/compact.py

@ -0,0 +1,53 @@
import logging
from borgmatic.borg import environment, flags
from borgmatic.execute import execute_command
logger = logging.getLogger(__name__)
def compact_segments(
dry_run,
repository_path,
storage_config,
local_borg_version,
global_arguments,
local_path='borg',
remote_path=None,
progress=False,
cleanup_commits=False,
threshold=None,
):
'''
Given dry-run flag, a local or remote repository path, a storage config dict, and the local
Borg version, compact the segments in a repository.
'''
umask = storage_config.get('umask', None)
lock_wait = storage_config.get('lock_wait', None)
extra_borg_options = storage_config.get('extra_borg_options', {}).get('compact', '')
full_command = (
(local_path, 'compact')
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--umask', str(umask)) if umask else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (('--progress',) if progress else ())
+ (('--cleanup-commits',) if cleanup_commits else ())
+ (('--threshold', str(threshold)) if threshold else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
+ flags.make_repository_flags(repository_path, local_borg_version)
)
if dry_run:
logger.info(f'{repository_path}: Skipping compact (dry run)')
return
execute_command(
full_command,
output_log_level=logging.INFO,
borg_local_path=local_path,
extra_environment=environment.make_environment(storage_config),
)


@ -3,14 +3,22 @@ import itertools
import logging
import os
import pathlib
import stat
import tempfile
from borgmatic.execute import DO_NOT_CAPTURE, execute_command, execute_command_with_processes
import borgmatic.logger
from borgmatic.borg import environment, feature, flags, state
from borgmatic.execute import (
DO_NOT_CAPTURE,
execute_command,
execute_command_and_capture_output,
execute_command_with_processes,
)
logger = logging.getLogger(__name__)
def _expand_directory(directory):
def expand_directory(directory):
'''
Given a directory path, expand any tilde (representing a user's home directory) and any globs
therein. Return a list of one or more resulting paths.
@ -20,7 +28,7 @@ def _expand_directory(directory):
return glob.glob(expanded_directory) or [expanded_directory]
def _expand_directories(directories):
def expand_directories(directories):
'''
Given a sequence of directory paths, expand tildes and globs in each one. Return all the
resulting directories as a single flattened tuple.
@ -29,11 +37,11 @@ def _expand_directories(directories):
return ()
return tuple(
itertools.chain.from_iterable(_expand_directory(directory) for directory in directories)
itertools.chain.from_iterable(expand_directory(directory) for directory in directories)
)
def _expand_home_directories(directories):
def expand_home_directories(directories):
'''
Given a sequence of directory paths, expand tildes in each one. Do not perform any globbing.
Return the results as a tuple.
@ -44,16 +52,21 @@ def _expand_home_directories(directories):
return tuple(os.path.expanduser(directory) for directory in directories)
def map_directories_to_devices(directories): # pragma: no cover
def map_directories_to_devices(directories):
'''
Given a sequence of directories, return a map from directory to an identifier for the device on
which that directory resides. This is handy for determining whether two different directories
are on the same filesystem (have the same device identifier).
which that directory resides or None if the path doesn't exist.
This is handy for determining whether two different directories are on the same filesystem (have
the same device identifier).
'''
return {directory: os.stat(directory).st_dev for directory in directories}
return {
directory: os.stat(directory).st_dev if os.path.exists(directory) else None
for directory in directories
}
def deduplicate_directories(directory_devices):
def deduplicate_directories(directory_devices, additional_directory_devices):
'''
Given a map from directory to the identifier for the device on which that directory resides,
return the directories as a sorted tuple with all duplicate child directories removed. For
@ -68,21 +81,28 @@ def deduplicate_directories(directory_devices):
there are cases where Borg coming across the same file twice will result in duplicate reads and
even hangs, e.g. when a database hook is using a named pipe for streaming database dumps to
Borg.
If any additional directory devices are given, also deduplicate against them, but don't include
them in the returned directories.
'''
deduplicated = set()
directories = sorted(directory_devices.keys())
additional_directories = sorted(additional_directory_devices.keys())
all_devices = {**directory_devices, **additional_directory_devices}
for directory in directories:
deduplicated.add(directory)
parents = pathlib.PurePath(directory).parents
# If another directory in the given list is a parent of current directory (even n levels
# up) and both are on the same filesystem, then the current directory is a duplicate.
for other_directory in directories:
# If another directory in the given list (or the additional list) is a parent of current
# directory (even n levels up) and both are on the same filesystem, then the current
# directory is a duplicate.
for other_directory in directories + additional_directories:
for parent in parents:
if (
pathlib.PurePath(other_directory) == parent
and directory_devices[other_directory] == directory_devices[directory]
and all_devices[directory] is not None
and all_devices[other_directory] == all_devices[directory]
):
if directory in deduplicated:
deduplicated.remove(directory)
@ -91,22 +111,42 @@ def deduplicate_directories(directory_devices):
return tuple(sorted(deduplicated))
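For example, with hypothetical device identifiers:

deduplicate_directories({'/home': 1, '/home/user': 1}, {})
# -> ('/home',), since /home/user is a child directory on the same filesystem
deduplicate_directories({'/home': 1, '/home/mnt': 2}, {})
# -> ('/home', '/home/mnt'), both kept because they're on different devices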
def _write_pattern_file(patterns=None):
def write_pattern_file(patterns=None, sources=None, pattern_file=None):
'''
Given a sequence of patterns, write them to a named temporary file and return it. Return None
if no patterns are provided.
Given a sequence of patterns and an optional sequence of source directories, write them to a
named temporary file (with the source directories as additional roots) and return the file.
If an optional open pattern file is given, overwrite it instead of making a new temporary file.
Return None if no patterns are provided.
'''
if not patterns:
if not patterns and not sources:
return None
pattern_file = tempfile.NamedTemporaryFile('w')
pattern_file.write('\n'.join(patterns))
if pattern_file is None:
pattern_file = tempfile.NamedTemporaryFile('w')
else:
pattern_file.seek(0)
pattern_file.write(
'\n'.join(tuple(patterns or ()) + tuple(f'R {source}' for source in (sources or [])))
)
pattern_file.flush()
return pattern_file
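As an illustration with hypothetical patterns, source directories become additional "R"
(root) lines at the end of the file:

pattern_file = write_pattern_file(['R /home', '- /home/*/.cache'], sources=['/etc'])
# The resulting file contains:
#   R /home
#   - /home/*/.cache
#   R /etc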
def _make_pattern_flags(location_config, pattern_filename=None):
def ensure_files_readable(*filename_lists):
'''
Given a sequence of filename sequences, ensure that each filename is openable. This prevents
unreadable files from being passed to Borg, which in certain situations only warns instead of
erroring.
'''
for file_object in itertools.chain.from_iterable(
filename_list for filename_list in filename_lists if filename_list
):
open(file_object).close()
def make_pattern_flags(location_config, pattern_filename=None):
'''
Given a location config dict with a potential patterns_from option, and a filename containing
any additional patterns, return the corresponding Borg flags for those files as a tuple.
@ -122,7 +162,7 @@ def _make_pattern_flags(location_config, pattern_filename=None):
)
def _make_exclude_flags(location_config, exclude_filename=None):
def make_exclude_flags(location_config, exclude_filename=None):
'''
Given a location config dict with various exclude options, and a filename containing any exclude
patterns, return the corresponding Borg flags as a tuple.
@ -156,15 +196,36 @@ def _make_exclude_flags(location_config, exclude_filename=None):
)
DEFAULT_BORGMATIC_SOURCE_DIRECTORY = '~/.borgmatic'
def make_list_filter_flags(local_borg_version, dry_run):
'''
Given the local Borg version and whether this is a dry run, return the corresponding flags for
passing to "--list --filter". The general idea is that excludes are shown for a dry run or when
the verbosity is debug.
'''
base_flags = 'AME'
show_excludes = logger.isEnabledFor(logging.DEBUG)
if feature.available(feature.Feature.EXCLUDED_FILES_MINUS, local_borg_version):
if show_excludes or dry_run:
return f'{base_flags}+-'
else:
return base_flags
if show_excludes:
return f'{base_flags}x-'
else:
return f'{base_flags}-'
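Illustrative results for the branches above, for some hypothetical Borg version lacking
the EXCLUDED_FILES_MINUS feature:

make_list_filter_flags(local_borg_version, dry_run=False)
# -> 'AME-' at normal verbosity, or 'AMEx-' with debug logging enabled; when the
# feature is available, the corresponding results are 'AME' and 'AME+-' (the latter
# also used for dry runs).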
def borgmatic_source_directories(borgmatic_source_directory):
DEFAULT_ARCHIVE_NAME_FORMAT = '{hostname}-{now:%Y-%m-%dT%H:%M:%S.%f}' # noqa: FS003
def collect_borgmatic_source_directories(borgmatic_source_directory):
'''
Return a list of borgmatic-specific source directories used for state like database backups.
'''
if not borgmatic_source_directory:
borgmatic_source_directory = DEFAULT_BORGMATIC_SOURCE_DIRECTORY
borgmatic_source_directory = state.DEFAULT_BORGMATIC_SOURCE_DIRECTORY
return (
[borgmatic_source_directory]
@ -173,20 +234,108 @@ def borgmatic_source_directories(borgmatic_source_directory):
)
DEFAULT_ARCHIVE_NAME_FORMAT = '{hostname}-{now:%Y-%m-%dT%H:%M:%S.%f}'
ROOT_PATTERN_PREFIX = 'R '
def pattern_root_directories(patterns=None):
'''
Given a sequence of patterns, parse out and return just the root directories.
'''
if not patterns:
return []
return [
pattern.split(ROOT_PATTERN_PREFIX, maxsplit=1)[1]
for pattern in patterns
if pattern.startswith(ROOT_PATTERN_PREFIX)
]
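For example, only "R "-prefixed patterns are treated as roots; other patterns are ignored here (a minimal sketch, assuming the module path borgmatic.borg.create):

    from borgmatic.borg.create import pattern_root_directories

    assert pattern_root_directories(['R /home', '- home/*/.cache']) == ['/home']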
def special_file(path):
'''
Return whether the given path is a special file (character device, block device, or named pipe
/ FIFO).
'''
try:
mode = os.stat(path).st_mode
except (FileNotFoundError, OSError):
return False
return stat.S_ISCHR(mode) or stat.S_ISBLK(mode) or stat.S_ISFIFO(mode)
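A POSIX-only sketch of the distinction: a named pipe is special (it could hang Borg under --read-special), while a missing or regular file is not.

    import os
    import tempfile

    from borgmatic.borg.create import special_file

    fifo_path = os.path.join(tempfile.mkdtemp(), 'fifo')
    os.mkfifo(fifo_path)

    assert special_file(fifo_path)
    assert not special_file('/no/such/path')  # stat() fails, so it's treated as non-special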
def any_parent_directories(path, candidate_parents):
'''
Return whether any of the given candidate parent directories are an actual parent of the given
path. This includes grandparents, etc.
'''
for parent in candidate_parents:
if pathlib.PurePosixPath(parent) in pathlib.PurePath(path).parents:
return True
return False
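For example, grandparents count but siblings do not:

    from borgmatic.borg.create import any_parent_directories

    assert any_parent_directories('/etc/ssl/certs', candidate_parents=('/etc',))
    assert not any_parent_directories('/etc/ssl/certs', candidate_parents=('/var',))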
def collect_special_file_paths(
create_command, local_path, working_directory, borg_environment, skip_directories
):
'''
Given a Borg create command as a tuple, a local Borg path, a working directory, a dict of
environment variables to pass to Borg, and a sequence of parent directories to skip, collect the
paths for any special files (character devices, block devices, and named pipes / FIFOs) that
Borg would encounter during a create. These are all paths that could cause Borg to hang if its
--read-special flag is used.
'''
# Omit "--exclude-nodump" from the Borg dry run command, because that flag causes Borg to open
# files including any named pipe we've created.
paths_output = execute_command_and_capture_output(
tuple(argument for argument in create_command if argument != '--exclude-nodump')
+ ('--dry-run', '--list'),
capture_stderr=True,
working_directory=working_directory,
extra_environment=borg_environment,
)
paths = tuple(
path_line.split(' ', 1)[1]
for path_line in paths_output.split('\n')
        if path_line and (path_line.startswith('- ') or path_line.startswith('+ '))
)
return tuple(
path
for path in paths
if special_file(path) and not any_parent_directories(path, skip_directories)
)
def check_all_source_directories_exist(source_directories):
'''
Given a sequence of source directories, check that they all exist. If any do not, raise an
exception.
'''
missing_directories = [
source_directory
for source_directory in source_directories
if not all([os.path.exists(directory) for directory in expand_directory(source_directory)])
]
if missing_directories:
raise ValueError(f"Source directories do not exist: {', '.join(missing_directories)}")
def create_archive(
dry_run,
repository_path,
location_config,
storage_config,
local_borg_version,
global_arguments,
local_path='borg',
remote_path=None,
progress=False,
stats=False,
json=False,
list_files=False,
stream_processes=None,
):
'''
@@ -196,73 +345,124 @@ def create_archive(
If a sequence of stream processes is given (instances of subprocess.Popen), then execute the
create command while also triggering the given processes to produce output.
'''
borgmatic.logger.add_custom_log_levels()
borgmatic_source_directories = expand_directories(
collect_borgmatic_source_directories(location_config.get('borgmatic_source_directory'))
)
if location_config.get('source_directories_must_exist', False):
check_all_source_directories_exist(location_config.get('source_directories'))
sources = deduplicate_directories(
map_directories_to_devices(
expand_directories(
tuple(location_config.get('source_directories', ()))
+ borgmatic_source_directories
+ tuple(global_arguments.used_config_paths)
)
)
),
additional_directory_devices=map_directories_to_devices(
expand_directories(pattern_root_directories(location_config.get('patterns')))
),
)
ensure_files_readable(location_config.get('patterns_from'), location_config.get('exclude_from'))
try:
working_directory = os.path.expanduser(location_config.get('working_directory'))
except TypeError:
working_directory = None
pattern_file = (
write_pattern_file(location_config.get('patterns'), sources)
if location_config.get('patterns') or location_config.get('patterns_from')
else None
)
exclude_file = write_pattern_file(
expand_home_directories(location_config.get('exclude_patterns'))
)
checkpoint_interval = storage_config.get('checkpoint_interval', None)
checkpoint_volume = storage_config.get('checkpoint_volume', None)
chunker_params = storage_config.get('chunker_params', None)
compression = storage_config.get('compression', None)
upload_rate_limit = storage_config.get('upload_rate_limit', None)
umask = storage_config.get('umask', None)
lock_wait = storage_config.get('lock_wait', None)
list_filter_flags = make_list_filter_flags(local_borg_version, dry_run)
files_cache = location_config.get('files_cache')
archive_name_format = storage_config.get('archive_name_format', DEFAULT_ARCHIVE_NAME_FORMAT)
extra_borg_options = storage_config.get('extra_borg_options', {}).get('create', '')
if feature.available(feature.Feature.ATIME, local_borg_version):
atime_flags = ('--atime',) if location_config.get('atime') is True else ()
else:
atime_flags = ('--noatime',) if location_config.get('atime') is False else ()
if feature.available(feature.Feature.NOFLAGS, local_borg_version):
noflags_flags = ('--noflags',) if location_config.get('flags') is False else ()
else:
noflags_flags = ('--nobsdflags',) if location_config.get('flags') is False else ()
if feature.available(feature.Feature.NUMERIC_IDS, local_borg_version):
numeric_ids_flags = ('--numeric-ids',) if location_config.get('numeric_ids') else ()
else:
numeric_ids_flags = ('--numeric-owner',) if location_config.get('numeric_ids') else ()
if feature.available(feature.Feature.UPLOAD_RATELIMIT, local_borg_version):
upload_ratelimit_flags = (
('--upload-ratelimit', str(upload_rate_limit)) if upload_rate_limit else ()
)
else:
upload_ratelimit_flags = (
('--remote-ratelimit', str(upload_rate_limit)) if upload_rate_limit else ()
)
if stream_processes and location_config.get('read_special') is False:
logger.warning(
f'{repository_path}: Ignoring configured "read_special" value of false, as true is needed for database hooks.'
)
create_command = (
tuple(local_path.split(' '))
+ ('create',)
+ make_pattern_flags(location_config, pattern_file.name if pattern_file else None)
+ make_exclude_flags(location_config, exclude_file.name if exclude_file else None)
+ (('--checkpoint-interval', str(checkpoint_interval)) if checkpoint_interval else ())
+ (('--checkpoint-volume', str(checkpoint_volume)) if checkpoint_volume else ())
+ (('--chunker-params', chunker_params) if chunker_params else ())
+ (('--compression', compression) if compression else ())
+ upload_ratelimit_flags
+ (
('--one-file-system',)
if location_config.get('one_file_system') or stream_processes
else ()
)
+ numeric_ids_flags
+ atime_flags
+ (('--noctime',) if location_config.get('ctime') is False else ())
+ (('--nobirthtime',) if location_config.get('birthtime') is False else ())
+ (('--read-special',) if location_config.get('read_special') or stream_processes else ())
+ noflags_flags
+ (('--files-cache', files_cache) if files_cache else ())
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--umask', str(umask)) if umask else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
        + (
            ('--list', '--filter', list_filter_flags)
            if list_files and not json and not progress
            else ()
        )
        + (('--dry-run',) if dry_run else ())
        + (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
        + flags.make_repository_archive_flags(
            repository_path, archive_name_format, local_borg_version
        )
        + (sources if not pattern_file else ())
    )
if json:
output_log_level = None
elif list_files or (stats and not dry_run):
output_log_level = logging.ANSWER
else:
output_log_level = logging.INFO
@@ -270,13 +470,62 @@ def create_archive(
# the terminal directly.
output_file = DO_NOT_CAPTURE if progress else None
borg_environment = environment.make_environment(storage_config)
# If database hooks are enabled (as indicated by streaming processes), exclude files that might
    # cause Borg to hang. But skip this if the user has explicitly set "read_special" to true.
if stream_processes and not location_config.get('read_special'):
logger.debug(f'{repository_path}: Collecting special file paths')
special_file_paths = collect_special_file_paths(
create_command,
local_path,
working_directory,
borg_environment,
skip_directories=borgmatic_source_directories,
)
if special_file_paths:
logger.warning(
f'{repository_path}: Excluding special files to prevent Borg from hanging: {", ".join(special_file_paths)}'
)
exclude_file = write_pattern_file(
expand_home_directories(
tuple(location_config.get('exclude_patterns') or ()) + special_file_paths
),
pattern_file=exclude_file,
)
create_command += make_exclude_flags(location_config, exclude_file.name)
create_command += (
(('--info',) if logger.getEffectiveLevel() == logging.INFO and not json else ())
+ (('--stats',) if stats and not json and not dry_run else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) and not json else ())
+ (('--progress',) if progress else ())
+ (('--json',) if json else ())
)
if stream_processes:
return execute_command_with_processes(
create_command,
stream_processes,
output_log_level,
output_file,
borg_local_path=local_path,
working_directory=working_directory,
extra_environment=borg_environment,
)
elif output_log_level is None:
return execute_command_and_capture_output(
create_command,
working_directory=working_directory,
extra_environment=borg_environment,
)
else:
execute_command(
create_command,
output_log_level,
output_file,
borg_local_path=local_path,
working_directory=working_directory,
extra_environment=borg_environment,
)


@@ -1,9 +1,8 @@
OPTION_TO_ENVIRONMENT_VARIABLE = {
'borg_base_directory': 'BORG_BASE_DIR',
'borg_config_directory': 'BORG_CONFIG_DIR',
'borg_cache_directory': 'BORG_CACHE_DIR',
'borg_files_cache_ttl': 'BORG_FILES_CACHE_TTL',
'borg_security_directory': 'BORG_SECURITY_DIR',
'borg_keys_directory': 'BORG_KEYS_DIR',
'encryption_passcommand': 'BORG_PASSCOMMAND',
@@ -19,21 +18,24 @@ DEFAULT_BOOL_OPTION_TO_ENVIRONMENT_VARIABLE = {
}
def make_environment(storage_config):
    '''
    Given a borgmatic storage configuration dict, return its options converted to a Borg
    environment variable dict.
    '''
    environment = {}

    for option_name, environment_variable_name in OPTION_TO_ENVIRONMENT_VARIABLE.items():
        # Options from borgmatic configuration take precedence over already set BORG_* environment
        # variables.
        value = storage_config.get(option_name)

        if value:
            environment[environment_variable_name] = str(value)

    for (
        option_name,
        environment_variable_name,
    ) in DEFAULT_BOOL_OPTION_TO_ENVIRONMENT_VARIABLE.items():
        value = storage_config.get(option_name, False)
        environment[environment_variable_name] = 'YES' if value else 'NO'
return environment
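A minimal sketch with hypothetical config values: string options become their BORG_* variables verbatim, and the boolean defaults are normalized to "YES"/"NO".

    from borgmatic.borg.environment import make_environment

    environment = make_environment({'encryption_passcommand': 'pass show borg'})
    assert environment['BORG_PASSCOMMAND'] == 'pass show borg'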


@@ -1,6 +1,7 @@
import logging
import os
import borgmatic.logger
from borgmatic.borg import environment, flags
from borgmatic.execute import DO_NOT_CAPTURE, execute_command
logger = logging.getLogger(__name__)
@@ -8,26 +9,29 @@ logger = logging.getLogger(__name__)
def export_tar_archive(
dry_run,
repository_path,
archive,
paths,
destination_path,
storage_config,
local_borg_version,
global_arguments,
local_path='borg',
remote_path=None,
tar_filter=None,
list_files=False,
strip_components=None,
):
'''
Given a dry-run flag, a local or remote repository path, an archive name, zero or more paths to
export from the archive, a destination path to export to, a storage configuration dict, the
local Borg version, optional local and remote Borg paths, an optional filter program, whether to
include per-file details, and an optional number of path components to strip, export the archive
into the given destination path as a tar-formatted file.
If the destination path is "-", then stream the output to stdout instead of to a file.
'''
borgmatic.logger.add_custom_log_levels()
umask = storage_config.get('umask', None)
lock_wait = storage_config.get('lock_wait', None)
@@ -35,25 +39,30 @@ def export_tar_archive(
(local_path, 'export-tar')
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--umask', str(umask)) if umask else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--list',) if list_files else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ (('--dry-run',) if dry_run else ())
+ (('--tar-filter', tar_filter) if tar_filter else ())
+ (('--strip-components', str(strip_components)) if strip_components else ())
+ flags.make_repository_archive_flags(
repository_path,
archive,
local_borg_version,
)
+ (destination_path,)
+ (tuple(paths) if paths else ())
)
if list_files:
output_log_level = logging.ANSWER
else:
output_log_level = logging.INFO
if dry_run:
        logger.info(f'{repository_path}: Skipping export to tar file (dry run)')
return
execute_command(
@@ -61,4 +70,5 @@ def export_tar_archive(
output_file=DO_NOT_CAPTURE if destination_path == '-' else None,
output_log_level=output_log_level,
borg_local_path=local_path,
extra_environment=environment.make_environment(storage_config),
)


@@ -2,56 +2,63 @@ import logging
import os
import subprocess
import borgmatic.config.validate
from borgmatic.borg import environment, feature, flags, rlist
from borgmatic.execute import DO_NOT_CAPTURE, execute_command
logger = logging.getLogger(__name__)
def extract_last_archive_dry_run(
storage_config,
local_borg_version,
global_arguments,
repository_path,
lock_wait=None,
local_path='borg',
remote_path=None,
):
'''
Perform an extraction dry-run of the most recent archive. If there are no archives, skip the
dry-run.
'''
    verbosity_flags = ()
    if logger.isEnabledFor(logging.DEBUG):
        verbosity_flags = ('--debug', '--show-rc')
    elif logger.isEnabledFor(logging.INFO):
        verbosity_flags = ('--info',)

    try:
        last_archive_name = rlist.resolve_archive_name(
            repository_path,
            'latest',
            storage_config,
            local_borg_version,
            global_arguments,
            local_path,
            remote_path,
        )
    except ValueError:
        logger.warning('No archives found. Skipping extract consistency check.')
        return
list_flag = ('--list',) if logger.isEnabledFor(logging.DEBUG) else ()
borg_environment = environment.make_environment(storage_config)
full_extract_command = (
(local_path, 'extract', '--dry-run')
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ verbosity_flags
+ list_flag
+ flags.make_repository_archive_flags(
repository_path, last_archive_name, local_borg_version
)
)
execute_command(
full_extract_command, working_directory=None, extra_environment=borg_environment
)
def extract_archive(
@@ -61,6 +68,8 @@ def extract_archive(
paths,
location_config,
storage_config,
local_borg_version,
global_arguments,
local_path='borg',
remote_path=None,
destination_path=None,
@@ -70,9 +79,9 @@ def extract_archive(
):
'''
Given a dry-run flag, a local or remote repository path, an archive name, zero or more paths to
restore from the archive, the local Borg version string, an argparse.Namespace of global
arguments, location/storage configuration dicts, optional local and remote Borg paths, and an
optional destination path to extract to, extract the archive into the current directory.
If extract to stdout is True, then start the extraction streaming to stdout, and return that
extract process as an instance of subprocess.Popen.
@@ -83,11 +92,24 @@ def extract_archive(
if progress and extract_to_stdout:
raise ValueError('progress and extract_to_stdout cannot both be set')
if feature.available(feature.Feature.NUMERIC_IDS, local_borg_version):
numeric_ids_flags = ('--numeric-ids',) if location_config.get('numeric_ids') else ()
else:
numeric_ids_flags = ('--numeric-owner',) if location_config.get('numeric_ids') else ()
if strip_components == 'all':
if not paths:
raise ValueError('The --strip-components flag with "all" requires at least one --path')
# Calculate the maximum number of leading path components of the given paths.
strip_components = max(0, *(len(path.split(os.path.sep)) - 1 for path in paths))
full_command = (
(local_path, 'extract')
+ (('--remote-path', remote_path) if remote_path else ())
+ numeric_ids_flags
+ (('--umask', str(umask)) if umask else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug', '--list', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
@@ -95,15 +117,26 @@ def extract_archive(
+ (('--strip-components', str(strip_components)) if strip_components else ())
+ (('--progress',) if progress else ())
+ (('--stdout',) if extract_to_stdout else ())
+ flags.make_repository_archive_flags(
# Make the repository path absolute so the working directory changes below don't
# prevent Borg from finding the repo.
borgmatic.config.validate.normalize_repository_path(repository),
archive,
local_borg_version,
)
+ (tuple(paths) if paths else ())
)
borg_environment = environment.make_environment(storage_config)
# The progress output isn't compatible with captured and logged output, as progress messes with
# the terminal directly.
if progress:
return execute_command(
full_command,
output_file=DO_NOT_CAPTURE,
working_directory=destination_path,
extra_environment=borg_environment,
)
return None
@@ -113,8 +146,11 @@ def extract_archive(
output_file=subprocess.PIPE,
working_directory=destination_path,
run_to_completion=False,
extra_environment=borg_environment,
)
# Don't give Borg local path so as to error on warnings, as "borg extract" only gives a warning
# if the restore paths don't exist in the archive.
execute_command(
full_command, working_directory=destination_path, extra_environment=borg_environment
)

borgmatic/borg/feature.py Normal file

@@ -0,0 +1,40 @@
from enum import Enum
from packaging.version import parse
class Feature(Enum):
COMPACT = 1
ATIME = 2
NOFLAGS = 3
NUMERIC_IDS = 4
UPLOAD_RATELIMIT = 5
SEPARATE_REPOSITORY_ARCHIVE = 6
RCREATE = 7
RLIST = 8
RINFO = 9
MATCH_ARCHIVES = 10
EXCLUDED_FILES_MINUS = 11
FEATURE_TO_MINIMUM_BORG_VERSION = {
Feature.COMPACT: parse('1.2.0a2'), # borg compact
Feature.ATIME: parse('1.2.0a7'), # borg create --atime
Feature.NOFLAGS: parse('1.2.0a8'), # borg create --noflags
Feature.NUMERIC_IDS: parse('1.2.0b3'), # borg create/extract/mount --numeric-ids
Feature.UPLOAD_RATELIMIT: parse('1.2.0b3'), # borg create --upload-ratelimit
Feature.SEPARATE_REPOSITORY_ARCHIVE: parse('2.0.0a2'), # --repo with separate archive
Feature.RCREATE: parse('2.0.0a2'), # borg rcreate
Feature.RLIST: parse('2.0.0a2'), # borg rlist
Feature.RINFO: parse('2.0.0a2'), # borg rinfo
Feature.MATCH_ARCHIVES: parse('2.0.0b3'), # borg --match-archives
Feature.EXCLUDED_FILES_MINUS: parse('2.0.0b5'), # --list --filter uses "-" for excludes
}
def available(feature, borg_version):
'''
Given a Borg Feature constant and a Borg version string, return whether that feature is
available in that version of Borg.
'''
return FEATURE_TO_MINIMUM_BORG_VERSION[feature] <= parse(borg_version)
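For instance, the repository-creation action is only available on Borg 2.x:

    from borgmatic.borg import feature

    assert feature.available(feature.Feature.RCREATE, '2.0.0b5')
    assert not feature.available(feature.Feature.RCREATE, '1.2.3')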


@@ -1,4 +1,7 @@
import itertools
import re
from borgmatic.borg import feature
def make_flags(name, value):
@@ -8,7 +11,7 @@ def make_flags(name, value):
if not value:
return ()
flag = f"--{name.replace('_', '-')}"
if value is True:
return (flag,)
@@ -29,3 +32,52 @@ def make_flags_from_arguments(arguments, excludes=()):
if name not in excludes and not name.startswith('_')
)
)
def make_repository_flags(repository_path, local_borg_version):
'''
Given the path of a Borg repository and the local Borg version, return Borg-version-appropriate
command-line flags (as a tuple) for selecting that repository.
'''
return (
('--repo',)
if feature.available(feature.Feature.SEPARATE_REPOSITORY_ARCHIVE, local_borg_version)
else ()
) + (repository_path,)
def make_repository_archive_flags(repository_path, archive, local_borg_version):
'''
Given the path of a Borg repository, an archive name or pattern, and the local Borg version,
return Borg-version-appropriate command-line flags (as a tuple) for selecting that repository
and archive.
'''
return (
('--repo', repository_path, archive)
if feature.available(feature.Feature.SEPARATE_REPOSITORY_ARCHIVE, local_borg_version)
else (f'{repository_path}::{archive}',)
)
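In practice this selects between Borg 2.x's separate "--repo PATH ARCHIVE" form and Borg 1.x's combined "PATH::ARCHIVE" syntax (a sketch with hypothetical names):

    from borgmatic.borg import flags

    assert flags.make_repository_archive_flags('repo', 'host-2023', '2.0.0b5') == (
        '--repo',
        'repo',
        'host-2023',
    )
    assert flags.make_repository_archive_flags('repo', 'host-2023', '1.2.3') == ('repo::host-2023',)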
def make_match_archives_flags(match_archives, archive_name_format, local_borg_version):
'''
Return match archives flags based on the given match archives value, if any. If it isn't set,
return match archives flags to match archives created with the given archive name format, if
any. This is done by replacing certain archive name format placeholders for ephemeral data (like
"{now}") with globs.
'''
if match_archives:
if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version):
return ('--match-archives', match_archives)
else:
return ('--glob-archives', re.sub(r'^sh:', '', match_archives))
if not archive_name_format:
return ()
derived_match_archives = re.sub(r'\{(now|utcnow|pid)([:%\w\.-]*)\}', '*', archive_name_format)
if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version):
return ('--match-archives', f'sh:{derived_match_archives}')
else:
return ('--glob-archives', f'{derived_match_archives}')
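A sketch of the placeholder derivation (assuming a Borg 2.x version string): "{now}" is globbed away so the flag matches any archive created with that name format.

    from borgmatic.borg import flags

    match_flags = flags.make_match_archives_flags(None, '{hostname}-{now}', '2.0.0b5')  # noqa: FS003
    assert match_flags == ('--match-archives', 'sh:{hostname}-*')  # noqa: FS003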


@@ -1,19 +1,27 @@
import logging
import borgmatic.logger
from borgmatic.borg import environment, feature, flags
from borgmatic.execute import execute_command, execute_command_and_capture_output
logger = logging.getLogger(__name__)
def display_archives_info(
repository_path,
storage_config,
local_borg_version,
info_arguments,
global_arguments,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, a storage config dict, the local Borg version, global
arguments as an argparse.Namespace, and the arguments to the info action, display summary
information for Borg archives in the repository or return JSON summary information.
'''
borgmatic.logger.add_custom_log_levels()
lock_wait = storage_config.get('lock_wait', None)
full_command = (
@@ -28,18 +36,41 @@ def display_archives_info(
if logger.isEnabledFor(logging.DEBUG) and not info_arguments.json
else ()
)
+ flags.make_flags('remote-path', remote_path)
+ flags.make_flags('log-json', global_arguments.log_json)
+ flags.make_flags('lock-wait', lock_wait)
+ (
(
flags.make_flags('match-archives', f'sh:{info_arguments.prefix}*')
if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version)
else flags.make_flags('glob-archives', f'{info_arguments.prefix}*')
)
if info_arguments.prefix
else (
flags.make_match_archives_flags(
info_arguments.match_archives
or info_arguments.archive
or storage_config.get('match_archives'),
storage_config.get('archive_name_format'),
local_borg_version,
)
)
)
+ flags.make_flags_from_arguments(
info_arguments, excludes=('repository', 'archive', 'prefix', 'match_archives')
)
+ flags.make_repository_flags(repository_path, local_borg_version)
)
if info_arguments.json:
return execute_command_and_capture_output(
full_command,
extra_environment=environment.make_environment(storage_config),
)
else:
execute_command(
full_command,
output_log_level=logging.ANSWER,
borg_local_path=local_path,
extra_environment=environment.make_environment(storage_config),
)


@@ -1,58 +0,0 @@
import logging
import subprocess
from borgmatic.execute import DO_NOT_CAPTURE, execute_command
logger = logging.getLogger(__name__)
INFO_REPOSITORY_NOT_FOUND_EXIT_CODE = 2
def initialize_repository(
repository,
storage_config,
encryption_mode,
append_only=None,
storage_quota=None,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, a storage configuration dict, a Borg encryption mode,
whether the repository should be append-only, and the storage quota to use, initialize the
repository. If the repository already exists, then log and skip initialization.
'''
info_command = (
(local_path, 'info')
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug',) if logger.isEnabledFor(logging.DEBUG) else ())
+ (('--remote-path', remote_path) if remote_path else ())
+ (repository,)
)
logger.debug(' '.join(info_command))
try:
execute_command(info_command, output_log_level=None)
logger.info('Repository already exists. Skipping initialization.')
return
except subprocess.CalledProcessError as error:
if error.returncode != INFO_REPOSITORY_NOT_FOUND_EXIT_CODE:
raise
extra_borg_options = storage_config.get('extra_borg_options', {}).get('init', '')
init_command = (
(local_path, 'init')
+ (('--encryption', encryption_mode) if encryption_mode else ())
+ (('--append-only',) if append_only else ())
+ (('--storage-quota', storage_quota) if storage_quota else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug',) if logger.isEnabledFor(logging.DEBUG) else ())
+ (('--remote-path', remote_path) if remote_path else ())
+ (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
+ (repository,)
)
# Do not capture output here, so as to support interactive prompts.
execute_command(init_command, output_file=DO_NOT_CAPTURE, borg_local_path=local_path)


@@ -1,63 +1,41 @@
import argparse
import copy
import logging
import re
import borgmatic.logger
from borgmatic.borg import environment, feature, flags, rlist
from borgmatic.execute import execute_command, execute_command_and_capture_output
logger = logging.getLogger(__name__)
ARCHIVE_FILTER_FLAGS_MOVED_TO_RLIST = ('prefix', 'match_archives', 'sort_by', 'first', 'last')
MAKE_FLAGS_EXCLUDES = (
'repository',
'archive',
'paths',
'find_paths',
) + ARCHIVE_FILTER_FLAGS_MOVED_TO_RLIST
def make_list_command(
repository_path,
storage_config,
local_borg_version,
list_arguments,
global_arguments,
local_path='borg',
remote_path=None,
):
    '''
    Given a local or remote repository path, a storage config dict, the arguments to the list
    action, and local and remote Borg paths, return a command as a tuple to list archives or paths
    within an archive.
    '''
    lock_wait = storage_config.get('lock_wait', None)

    return (
        (local_path, 'list')
+ (
('--info',)
@@ -69,21 +47,208 @@ def list_archives(repository, storage_config, list_arguments, local_path='borg',
if logger.isEnabledFor(logging.DEBUG) and not list_arguments.json
else ()
)
+ flags.make_flags('remote-path', remote_path)
+ flags.make_flags('log-json', global_arguments.log_json)
+ flags.make_flags('lock-wait', lock_wait)
+ flags.make_flags_from_arguments(list_arguments, excludes=MAKE_FLAGS_EXCLUDES)
        + (
            flags.make_repository_archive_flags(
                repository_path, list_arguments.archive, local_borg_version
            )
            if list_arguments.archive
            else flags.make_repository_flags(repository_path, local_borg_version)
        )
+ (tuple(list_arguments.paths) if list_arguments.paths else ())
)
def make_find_paths(find_paths):
'''
Given a sequence of path fragments or patterns as passed to `--find`, transform all path
fragments into glob patterns. Pass through existing patterns untouched.
For example, given find_paths of:
['foo.txt', 'pp:root/somedir']
... transform that into:
['sh:**/*foo.txt*/**', 'pp:root/somedir']
'''
if not find_paths:
return ()
return tuple(
find_path
if re.compile(r'([-!+RrPp] )|(\w\w:)').match(find_path)
else f'sh:**/*{find_path}*/**'
for find_path in find_paths
)
def capture_archive_listing(
repository_path,
archive,
storage_config,
local_borg_version,
global_arguments,
list_path=None,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, an archive name, a storage config dict, the local Borg
version, global arguments as an argparse.Namespace, the archive path in which to list files, and
local and remote Borg paths, capture the output of listing that archive and return it as a list
of file paths.
'''
borg_environment = environment.make_environment(storage_config)
return tuple(
execute_command_and_capture_output(
make_list_command(
repository_path,
storage_config,
local_borg_version,
argparse.Namespace(
repository=repository_path,
archive=archive,
paths=[f'sh:{list_path}'],
find_paths=None,
json=None,
format='{path}{NL}', # noqa: FS003
),
global_arguments,
local_path,
remote_path,
),
extra_environment=borg_environment,
)
.strip('\n')
.split('\n')
)
def list_archive(
repository_path,
storage_config,
local_borg_version,
list_arguments,
global_arguments,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, a storage config dict, the local Borg version, global
arguments as an argparse.Namespace, the arguments to the list action as an argparse.Namespace,
and local and remote Borg paths, display the output of listing the files of a Borg archive (or
return JSON output). If list_arguments.find_paths are given, list the files by searching across
multiple archives. If neither find_paths nor archive name are given, instead list the archives
in the given repository.
'''
borgmatic.logger.add_custom_log_levels()
if not list_arguments.archive and not list_arguments.find_paths:
if feature.available(feature.Feature.RLIST, local_borg_version):
logger.warning(
'Omitting the --archive flag on the list action is deprecated when using Borg 2.x+. Use the rlist action instead.'
)
rlist_arguments = argparse.Namespace(
repository=repository_path,
short=list_arguments.short,
format=list_arguments.format,
json=list_arguments.json,
prefix=list_arguments.prefix,
match_archives=list_arguments.match_archives,
sort_by=list_arguments.sort_by,
first=list_arguments.first,
last=list_arguments.last,
)
return rlist.list_repository(
repository_path,
storage_config,
local_borg_version,
rlist_arguments,
global_arguments,
local_path,
remote_path,
)
if list_arguments.archive:
for name in ARCHIVE_FILTER_FLAGS_MOVED_TO_RLIST:
if getattr(list_arguments, name, None):
logger.warning(
f"The --{name.replace('_', '-')} flag on the list action is ignored when using the --archive flag."
)
if list_arguments.json:
raise ValueError(
'The --json flag on the list action is not supported when using the --archive/--find flags.'
)
borg_environment = environment.make_environment(storage_config)
# If there are any paths to find (and there's not a single archive already selected), start by
# getting a list of archives to search.
if list_arguments.find_paths and not list_arguments.archive:
rlist_arguments = argparse.Namespace(
repository=repository_path,
short=True,
format=None,
json=None,
prefix=list_arguments.prefix,
match_archives=list_arguments.match_archives,
sort_by=list_arguments.sort_by,
first=list_arguments.first,
last=list_arguments.last,
)
# Ask Borg to list archives. Capture its output for use below.
archive_lines = tuple(
execute_command_and_capture_output(
rlist.make_rlist_command(
repository_path,
storage_config,
local_borg_version,
rlist_arguments,
global_arguments,
local_path,
remote_path,
),
extra_environment=borg_environment,
)
.strip('\n')
.split('\n')
)
else:
archive_lines = (list_arguments.archive,)
# For each archive listed by Borg, run list on the contents of that archive.
for archive in archive_lines:
logger.answer(f'{repository_path}: Listing archive {archive}')
archive_arguments = copy.copy(list_arguments)
archive_arguments.archive = archive
# This list call is to show the files in a single archive, not list multiple archives. So
# blank out any archive filtering flags. They'll break anyway in Borg 2.
for name in ARCHIVE_FILTER_FLAGS_MOVED_TO_RLIST:
setattr(archive_arguments, name, None)
main_command = make_list_command(
repository_path,
storage_config,
local_borg_version,
archive_arguments,
global_arguments,
local_path,
remote_path,
) + make_find_paths(list_arguments.find_paths)
execute_command(
main_command,
output_log_level=logging.ANSWER,
borg_local_path=local_path,
extra_environment=borg_environment,
)


@@ -1,25 +1,26 @@
import logging
from borgmatic.borg import environment, feature, flags
from borgmatic.execute import DO_NOT_CAPTURE, execute_command
logger = logging.getLogger(__name__)
def mount_archive(
repository_path,
archive,
mount_arguments,
storage_config,
local_borg_version,
global_arguments,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, an optional archive name, a filesystem mount point,
zero or more paths to mount from the archive, extra Borg mount options, a storage configuration
dict, the local Borg version, global arguments as an argparse.Namespace instance, and optional
local and remote Borg paths, mount the archive onto the mount point.
'''
umask = storage_config.get('umask', None)
lock_wait = storage_config.get('lock_wait', None)
@@ -28,19 +29,45 @@ def mount_archive(
(local_path, 'mount')
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--umask', str(umask)) if umask else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ (('--foreground',) if foreground else ())
+ (('-o', options) if options else ())
+ flags.make_flags_from_arguments(
mount_arguments,
excludes=('repository', 'archive', 'mount_point', 'paths', 'options'),
)
+ (('-o', mount_arguments.options) if mount_arguments.options else ())
+ (
(
flags.make_repository_flags(repository_path, local_borg_version)
+ (
('--match-archives', archive)
if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version)
else ('--glob-archives', archive)
)
)
if feature.available(feature.Feature.SEPARATE_REPOSITORY_ARCHIVE, local_borg_version)
else (
flags.make_repository_archive_flags(repository_path, archive, local_borg_version)
if archive
else flags.make_repository_flags(repository_path, local_borg_version)
)
)
+ (mount_arguments.mount_point,)
+ (tuple(mount_arguments.paths) if mount_arguments.paths else ())
)
borg_environment = environment.make_environment(storage_config)
# Don't capture the output when foreground mode is used so that ctrl-C can work properly.
if mount_arguments.foreground:
execute_command(
full_command,
output_file=DO_NOT_CAPTURE,
borg_local_path=local_path,
extra_environment=borg_environment,
)
return
execute_command(full_command, borg_local_path=local_path, extra_environment=borg_environment)


@@ -1,14 +1,16 @@
import logging
import borgmatic.logger
from borgmatic.borg import environment, feature, flags
from borgmatic.execute import execute_command
logger = logging.getLogger(__name__)
def make_prune_flags(storage_config, retention_config, local_borg_version):
'''
    Given a retention config dict mapping from option name to value, transform it into a sequence of
    command-line flags.
For example, given a retention config of:
@@ -22,54 +24,78 @@ def _make_prune_flags(retention_config):
)
'''
config = retention_config.copy()
prefix = config.pop('prefix', None)
flag_pairs = (
('--' + option_name.replace('_', '-'), str(value)) for option_name, value in config.items()
)
return tuple(element for pair in flag_pairs for element in pair) + (
(
('--match-archives', f'sh:{prefix}*')
if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version)
else ('--glob-archives', f'{prefix}*')
)
if prefix
else (
flags.make_match_archives_flags(
storage_config.get('match_archives'),
storage_config.get('archive_name_format'),
local_borg_version,
)
)
)
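A sketch with hypothetical retention values: on Borg 1.x, the keep options become positional flag pairs and a configured prefix turns into a "--glob-archives" filter.

    from borgmatic.borg.prune import make_prune_flags

    assert make_prune_flags({}, {'keep_daily': 3, 'prefix': 'app-'}, '1.2.3') == (
        '--keep-daily',
        '3',
        '--glob-archives',
        'app-*',
    )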
def prune_archives(
dry_run,
repository_path,
storage_config,
retention_config,
local_borg_version,
prune_arguments,
global_arguments,
local_path='borg',
remote_path=None,
):
'''
    Given a dry-run flag, a local or remote repository path, storage and retention config dicts,
    the local Borg version, the arguments to the prune action, and global arguments, prune Borg
    archives according to the retention policy specified in that configuration.
'''
borgmatic.logger.add_custom_log_levels()
umask = storage_config.get('umask', None)
lock_wait = storage_config.get('lock_wait', None)
extra_borg_options = storage_config.get('extra_borg_options', {}).get('prune', '')
full_command = (
(local_path, 'prune')
+ make_prune_flags(storage_config, retention_config, local_borg_version)
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--umask', str(umask)) if umask else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (('--stats',) if prune_arguments.stats and not dry_run else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ flags.make_flags_from_arguments(
prune_arguments,
excludes=('repository', 'stats', 'list_archives'),
)
+ (('--list',) if prune_arguments.list_archives else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ (('--dry-run',) if dry_run else ())
+ (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
+ flags.make_repository_flags(repository_path, local_borg_version)
)
if prune_arguments.stats or prune_arguments.list_archives:
output_log_level = logging.ANSWER
else:
output_log_level = logging.INFO
execute_command(
full_command,
output_log_level=output_log_level,
borg_local_path=local_path,
extra_environment=environment.make_environment(storage_config),
)

borgmatic/borg/rcreate.py Normal file

@@ -0,0 +1,86 @@
import argparse
import logging
import subprocess
from borgmatic.borg import environment, feature, flags, rinfo
from borgmatic.execute import DO_NOT_CAPTURE, execute_command
logger = logging.getLogger(__name__)
RINFO_REPOSITORY_NOT_FOUND_EXIT_CODE = 2
def create_repository(
dry_run,
repository_path,
storage_config,
local_borg_version,
global_arguments,
encryption_mode,
source_repository=None,
copy_crypt_key=False,
append_only=None,
storage_quota=None,
make_parent_dirs=False,
local_path='borg',
remote_path=None,
):
'''
Given a dry-run flag, a local or remote repository path, a storage configuration dict, the local
Borg version, a Borg encryption mode, the path to another repo whose key material should be
reused, whether the repository should be append-only, and the storage quota to use, create the
repository. If the repository already exists, then log and skip creation.
'''
try:
rinfo.display_repository_info(
repository_path,
storage_config,
local_borg_version,
argparse.Namespace(json=True),
global_arguments,
local_path,
remote_path,
)
logger.info(f'{repository_path}: Repository already exists. Skipping creation.')
return
except subprocess.CalledProcessError as error:
if error.returncode != RINFO_REPOSITORY_NOT_FOUND_EXIT_CODE:
raise
lock_wait = storage_config.get('lock_wait')
extra_borg_options = storage_config.get('extra_borg_options', {}).get('rcreate', '')
rcreate_command = (
(local_path,)
+ (
('rcreate',)
if feature.available(feature.Feature.RCREATE, local_borg_version)
else ('init',)
)
+ (('--encryption', encryption_mode) if encryption_mode else ())
+ (('--other-repo', source_repository) if source_repository else ())
+ (('--copy-crypt-key',) if copy_crypt_key else ())
+ (('--append-only',) if append_only else ())
+ (('--storage-quota', storage_quota) if storage_quota else ())
+ (('--make-parent-dirs',) if make_parent_dirs else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug',) if logger.isEnabledFor(logging.DEBUG) else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (('--remote-path', remote_path) if remote_path else ())
+ (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
+ flags.make_repository_flags(repository_path, local_borg_version)
)
if dry_run:
        logger.info(f'{repository_path}: Skipping repository creation (dry run)')
return
# Do not capture output here, so as to support interactive prompts.
execute_command(
rcreate_command,
output_file=DO_NOT_CAPTURE,
borg_local_path=local_path,
extra_environment=environment.make_environment(storage_config),
)

borgmatic/borg/rinfo.py Normal file

@@ -0,0 +1,64 @@
import logging
import borgmatic.logger
from borgmatic.borg import environment, feature, flags
from borgmatic.execute import execute_command, execute_command_and_capture_output
logger = logging.getLogger(__name__)
def display_repository_info(
repository_path,
storage_config,
local_borg_version,
rinfo_arguments,
global_arguments,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, a storage config dict, the local Borg version, the
arguments to the rinfo action, and global arguments as an argparse.Namespace, display summary
information for the Borg repository or return JSON summary information.
'''
borgmatic.logger.add_custom_log_levels()
lock_wait = storage_config.get('lock_wait', None)
full_command = (
(local_path,)
+ (
('rinfo',)
if feature.available(feature.Feature.RINFO, local_borg_version)
else ('info',)
)
+ (
('--info',)
if logger.getEffectiveLevel() == logging.INFO and not rinfo_arguments.json
else ()
)
+ (
('--debug', '--show-rc')
if logger.isEnabledFor(logging.DEBUG) and not rinfo_arguments.json
else ()
)
+ flags.make_flags('remote-path', remote_path)
+ flags.make_flags('log-json', global_arguments.log_json)
+ flags.make_flags('lock-wait', lock_wait)
+ (('--json',) if rinfo_arguments.json else ())
+ flags.make_repository_flags(repository_path, local_borg_version)
)
extra_environment = environment.make_environment(storage_config)
if rinfo_arguments.json:
return execute_command_and_capture_output(
full_command,
extra_environment=extra_environment,
)
else:
execute_command(
full_command,
output_log_level=logging.ANSWER,
borg_local_path=local_path,
extra_environment=extra_environment,
)

borgmatic/borg/rlist.py Normal file

@@ -0,0 +1,148 @@
import logging
import borgmatic.logger
from borgmatic.borg import environment, feature, flags
from borgmatic.execute import execute_command, execute_command_and_capture_output
logger = logging.getLogger(__name__)
def resolve_archive_name(
repository_path,
archive,
storage_config,
local_borg_version,
global_arguments,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, an archive name, a storage config dict, the local Borg
version, global arguments as an argparse.Namespace, a local Borg path, and a remote Borg path,
return the archive name. But if the archive name is "latest", then instead introspect the
repository for the latest archive and return its name.
Raise ValueError if "latest" is given but there are no archives in the repository.
'''
if archive != 'latest':
return archive
full_command = (
(
local_path,
'rlist' if feature.available(feature.Feature.RLIST, local_borg_version) else 'list',
)
+ flags.make_flags('remote-path', remote_path)
+ flags.make_flags('log-json', global_arguments.log_json)
+ flags.make_flags('lock-wait', storage_config.get('lock_wait'))
+ flags.make_flags('last', 1)
+ ('--short',)
+ flags.make_repository_flags(repository_path, local_borg_version)
)
output = execute_command_and_capture_output(
full_command,
extra_environment=environment.make_environment(storage_config),
)
try:
latest_archive = output.strip().splitlines()[-1]
except IndexError:
raise ValueError('No archives found in the repository')
logger.debug(f'{repository_path}: Latest archive is {latest_archive}')
return latest_archive
MAKE_FLAGS_EXCLUDES = ('repository', 'prefix', 'match_archives')
def make_rlist_command(
repository_path,
storage_config,
local_borg_version,
rlist_arguments,
global_arguments,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, a storage config dict, the local Borg version, the
arguments to the rlist action, global arguments as an argparse.Namespace instance, and local and
remote Borg paths, return a command as a tuple to list archives with a repository.
'''
return (
(
local_path,
'rlist' if feature.available(feature.Feature.RLIST, local_borg_version) else 'list',
)
+ (
('--info',)
if logger.getEffectiveLevel() == logging.INFO and not rlist_arguments.json
else ()
)
+ (
('--debug', '--show-rc')
if logger.isEnabledFor(logging.DEBUG) and not rlist_arguments.json
else ()
)
+ flags.make_flags('remote-path', remote_path)
+ flags.make_flags('log-json', global_arguments.log_json)
+ flags.make_flags('lock-wait', storage_config.get('lock_wait'))
+ (
(
flags.make_flags('match-archives', f'sh:{rlist_arguments.prefix}*')
if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version)
else flags.make_flags('glob-archives', f'{rlist_arguments.prefix}*')
)
if rlist_arguments.prefix
else (
flags.make_match_archives_flags(
rlist_arguments.match_archives or storage_config.get('match_archives'),
storage_config.get('archive_name_format'),
local_borg_version,
)
)
)
+ flags.make_flags_from_arguments(rlist_arguments, excludes=MAKE_FLAGS_EXCLUDES)
+ flags.make_repository_flags(repository_path, local_borg_version)
)
def list_repository(
repository_path,
storage_config,
local_borg_version,
rlist_arguments,
global_arguments,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, a storage config dict, the local Borg version, the
arguments to the list action, global arguments as an argparse.Namespace instance, and local and
remote Borg paths, display the output of listing Borg archives in the given repository (or
return JSON output).
'''
borgmatic.logger.add_custom_log_levels()
borg_environment = environment.make_environment(storage_config)
main_command = make_rlist_command(
repository_path,
storage_config,
local_borg_version,
rlist_arguments,
global_arguments,
local_path,
remote_path,
)
if rlist_arguments.json:
return execute_command_and_capture_output(main_command, extra_environment=borg_environment)
else:
execute_command(
main_command,
output_log_level=logging.ANSWER,
borg_local_path=local_path,
extra_environment=borg_environment,
)

borgmatic/borg/state.py Normal file

@@ -0,0 +1 @@
DEFAULT_BORGMATIC_SOURCE_DIRECTORY = '~/.borgmatic'


@@ -0,0 +1,60 @@
import logging
import borgmatic.logger
from borgmatic.borg import environment, flags
from borgmatic.execute import DO_NOT_CAPTURE, execute_command
logger = logging.getLogger(__name__)
def transfer_archives(
dry_run,
repository_path,
storage_config,
local_borg_version,
transfer_arguments,
global_arguments,
local_path='borg',
remote_path=None,
):
'''
Given a dry-run flag, a local or remote repository path, a storage config dict, the local Borg
version, the arguments to the transfer action, and global arguments as an argparse.Namespace
instance, transfer archives to the given repository.
'''
borgmatic.logger.add_custom_log_levels()
full_command = (
(local_path, 'transfer')
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ flags.make_flags('remote-path', remote_path)
+ flags.make_flags('log-json', global_arguments.log_json)
+ flags.make_flags('lock-wait', storage_config.get('lock_wait', None))
+ (
flags.make_flags_from_arguments(
transfer_arguments,
excludes=('repository', 'source_repository', 'archive', 'match_archives'),
)
or (
flags.make_match_archives_flags(
transfer_arguments.match_archives
or transfer_arguments.archive
or storage_config.get('match_archives'),
storage_config.get('archive_name_format'),
local_borg_version,
)
)
)
+ flags.make_repository_flags(repository_path, local_borg_version)
+ flags.make_flags('other-repo', transfer_arguments.source_repository)
+ flags.make_flags('dry-run', dry_run)
)
return execute_command(
full_command,
output_log_level=logging.ANSWER,
output_file=DO_NOT_CAPTURE if transfer_arguments.progress else None,
borg_local_path=local_path,
extra_environment=environment.make_environment(storage_config),
)

borgmatic/borg/version.py Normal file

@@ -0,0 +1,29 @@
import logging
from borgmatic.borg import environment
from borgmatic.execute import execute_command_and_capture_output
logger = logging.getLogger(__name__)
def local_borg_version(storage_config, local_path='borg'):
'''
Given a storage configuration dict and a local Borg binary path, return a version string for it.
Raise OSError or CalledProcessError if there is a problem running Borg.
Raise ValueError if the version cannot be parsed.
'''
full_command = (
(local_path, '--version')
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
)
output = execute_command_and_capture_output(
full_command,
extra_environment=environment.make_environment(storage_config),
)
try:
return output.split(' ')[1].strip()
except IndexError:
raise ValueError('Could not parse Borg version string')
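
For instance, assuming `borg --version` prints the program name, a space, and the version:

```
output = 'borg 1.2.4\n'  # assumed shape of `borg --version` output
assert output.split(' ')[1].strip() == '1.2.4'
```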

File diff suppressed because it is too large.

File diff suppressed because it is too large.

borgmatic/commands/completion/actions.py (new file)

@@ -0,0 +1,36 @@
import borgmatic.commands.arguments
def upgrade_message(language: str, upgrade_command: str, completion_file: str):
return f'''
Your {language} completions script is from a different version of borgmatic than is
currently installed. Please upgrade your script so your completions match the
command-line flags in your installed borgmatic! Try this to upgrade:
{upgrade_command}
source {completion_file}
'''
def available_actions(subparsers, current_action=None):
'''
Given subparsers as an argparse._SubParsersAction instance and a current action name (if
any), return the action names that can follow the current action on a command-line.
This takes into account which sub-actions the current action supports. For instance, if
"bootstrap" is a sub-action for "config", then "bootstrap" should be able to follow a current
action of "config" but not "list".
'''
action_to_subactions = borgmatic.commands.arguments.get_subactions_for_actions(
subparsers.choices
)
current_subactions = action_to_subactions.get(current_action)
if current_subactions:
return current_subactions
all_subactions = set(
subaction for subactions in action_to_subactions.values() for subaction in subactions
)
return tuple(action for action in subparsers.choices.keys() if action not in all_subactions)
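
A standalone analogue of that lookup, using the "config" sub-actions named elsewhere in this changeset (bootstrap, generate, validate) as sample data; the real mapping comes from get_subactions_for_actions():

```
action_to_subactions = {'config': ('bootstrap', 'generate', 'validate')}
all_actions = ('config', 'list', 'bootstrap', 'generate', 'validate')

def next_actions(current_action=None):
    subactions = action_to_subactions.get(current_action)
    if subactions:
        return subactions
    all_subactions = {sub for subs in action_to_subactions.values() for sub in subs}
    return tuple(action for action in all_actions if action not in all_subactions)

assert next_actions('config') == ('bootstrap', 'generate', 'validate')
assert next_actions() == ('config', 'list')  # sub-actions can't start a command line
```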

borgmatic/commands/completion/bash.py (new file)

@@ -0,0 +1,66 @@
import borgmatic.commands.arguments
import borgmatic.commands.completion.actions
def parser_flags(parser):
'''
Given an argparse.ArgumentParser instance, return its argument flags in a space-separated
string.
'''
return ' '.join(option for action in parser._actions for option in action.option_strings)
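
A toy illustration of parser_flags() on a throwaway parser (the flag names are arbitrary):

```
import argparse

parser = argparse.ArgumentParser(add_help=False)
parser.add_argument('--verbosity')
parser.add_argument('-c', '--config')
# Mirrors parser_flags(): every option string from every action, space-separated.
print(' '.join(option for action in parser._actions for option in action.option_strings))
# -> --verbosity -c --config
```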
def bash_completion():
'''
Return a bash completion script for the borgmatic command. Produce this by introspecting
borgmatic's command-line argument parsers.
'''
(
unused_global_parser,
action_parsers,
global_plus_action_parser,
) = borgmatic.commands.arguments.make_parsers()
global_flags = parser_flags(global_plus_action_parser)
# Avert your eyes.
return '\n'.join(
(
'check_version() {',
' local this_script="$(cat "$BASH_SOURCE" 2> /dev/null)"',
' local installed_script="$(borgmatic --bash-completion 2> /dev/null)"',
' if [ "$this_script" != "$installed_script" ] && [ "$installed_script" != "" ];'
f''' then cat << EOF\n{borgmatic.commands.completion.actions.upgrade_message(
'bash',
'sudo sh -c "borgmatic --bash-completion > $BASH_SOURCE"',
'$BASH_SOURCE',
)}\nEOF''',
' fi',
'}',
'complete_borgmatic() {',
)
+ tuple(
''' if [[ " ${COMP_WORDS[*]} " =~ " %s " ]]; then
COMPREPLY=($(compgen -W "%s %s %s" -- "${COMP_WORDS[COMP_CWORD]}"))
return 0
fi'''
% (
action,
parser_flags(action_parser),
' '.join(
borgmatic.commands.completion.actions.available_actions(action_parsers, action)
),
global_flags,
)
for action, action_parser in reversed(action_parsers.choices.items())
)
+ (
' COMPREPLY=($(compgen -W "%s %s" -- "${COMP_WORDS[COMP_CWORD]}"))' # noqa: FS003
% (
' '.join(borgmatic.commands.completion.actions.available_actions(action_parsers)),
global_flags,
),
' (check_version &)',
'}',
'\ncomplete -o bashdefault -o default -F complete_borgmatic borgmatic',
)
)

borgmatic/commands/completion/fish.py (new file)

@@ -0,0 +1,176 @@
import shlex
from argparse import Action
from textwrap import dedent
import borgmatic.commands.arguments
import borgmatic.commands.completion.actions
def has_file_options(action: Action):
'''
Given an argparse.Action instance, return True if it takes a file argument.
'''
return action.metavar in (
'FILENAME',
'PATH',
) or action.dest in ('config_paths',)
def has_choice_options(action: Action):
'''
Given an argparse.Action instance, return True if it takes one of a predefined set of arguments.
'''
return action.choices is not None
def has_unknown_required_param_options(action: Action):
'''
A catch-all for options that take a required parameter, but we don't know what the parameter is.
This should be used last. These are actions that take something like a glob, a list of numbers, or a string.
Actions that match this pattern should not show the normal arguments, because those are unlikely to be valid.
'''
return (
action.required is True
or action.nargs
in (
'+',
'*',
)
or action.metavar in ('PATTERN', 'KEYS', 'N')
or (action.type is not None and action.default is None)
)
def has_exact_options(action: Action):
return (
has_file_options(action)
or has_choice_options(action)
or has_unknown_required_param_options(action)
)
def exact_options_completion(action: Action):
'''
Given an argparse.Action instance, return a completion invocation that forces file completion, completion from a
fixed set of choices, or simply requires that some value follow the action, if the action takes such an argument and was the last action on the
command line prior to the cursor.
Otherwise, return an empty string.
'''
if not has_exact_options(action):
return ''
args = ' '.join(action.option_strings)
if has_file_options(action):
return f'''\ncomplete -c borgmatic -Fr -n "__borgmatic_current_arg {args}"'''
if has_choice_options(action):
return f'''\ncomplete -c borgmatic -f -a '{' '.join(map(str, action.choices))}' -n "__borgmatic_current_arg {args}"'''
if has_unknown_required_param_options(action):
return f'''\ncomplete -c borgmatic -x -n "__borgmatic_current_arg {args}"'''
raise ValueError(
f'Unexpected action: {action} passes has_exact_options but has no choices produced'
)
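
Given the helpers above, a flag that takes fixed choices would produce a completion along these lines (the flag name and choices here are hypothetical):

```
import argparse

parser = argparse.ArgumentParser()
action = parser.add_argument('--encryption', choices=('repokey', 'none'))  # hypothetical flag
print(exact_options_completion(action).strip())
# -> complete -c borgmatic -f -a 'repokey none' -n "__borgmatic_current_arg --encryption"
```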
def dedent_strip_as_tuple(string: str):
'''
Dedent a string, then strip its surrounding newlines so the first line doesn't need to carry content, and return
the result as a one-element tuple. This makes it easier to write multiline completion strings joined via tuple concatenation.
'''
return (dedent(string).strip('\n'),)
def fish_completion():
'''
Return a fish completion script for the borgmatic command. Produce this by introspecting
borgmatic's command-line argument parsers.
'''
(
unused_global_parser,
action_parsers,
global_plus_action_parser,
) = borgmatic.commands.arguments.make_parsers()
all_action_parsers = ' '.join(action for action in action_parsers.choices.keys())
exact_option_args = tuple(
' '.join(action.option_strings)
for action_parser in action_parsers.choices.values()
for action in action_parser._actions
if has_exact_options(action)
) + tuple(
' '.join(action.option_strings)
for action in global_plus_action_parser._actions
if len(action.option_strings) > 0
if has_exact_options(action)
)
# Avert your eyes.
return '\n'.join(
dedent_strip_as_tuple(
f'''
function __borgmatic_check_version
set -fx this_filename (status current-filename)
fish -c '
if test -f "$this_filename"
set this_script (cat $this_filename 2> /dev/null)
set installed_script (borgmatic --fish-completion 2> /dev/null)
if [ "$this_script" != "$installed_script" ] && [ "$installed_script" != "" ]
echo "{borgmatic.commands.completion.actions.upgrade_message(
'fish',
'borgmatic --fish-completion | sudo tee $this_filename',
'$this_filename',
)}"
end
end
' &
end
__borgmatic_check_version
function __borgmatic_current_arg --description 'Check if any of the given arguments are the last on the command line before the cursor'
set -l all_args (commandline -poc)
# premature optimization to avoid iterating all args if there aren't enough
# to have a last arg beyond borgmatic
if [ (count $all_args) -lt 2 ]
return 1
end
for arg in $argv
if [ "$arg" = "$all_args[-1]" ]
return 0
end
end
return 1
end
set --local action_parser_condition "not __fish_seen_subcommand_from {all_action_parsers}"
set --local exact_option_condition "not __borgmatic_current_arg {' '.join(exact_option_args)}"
'''
)
+ ('\n# action_parser completions',)
+ tuple(
f'''complete -c borgmatic -f -n "$action_parser_condition" -n "$exact_option_condition" -a '{action_name}' -d {shlex.quote(action_parser.description)}'''
for action_name, action_parser in action_parsers.choices.items()
)
+ ('\n# global flags',)
+ tuple(
# -n is checked in order, so put faster / more likely to be true checks first
f'''complete -c borgmatic -f -n "$exact_option_condition" -a '{' '.join(action.option_strings)}' -d {shlex.quote(action.help)}{exact_options_completion(action)}'''
for action in global_plus_action_parser._actions
# ignore the noargs action, as this is an impossible completion for fish
if len(action.option_strings) > 0
if 'Deprecated' not in action.help
)
+ ('\n# action_parser flags',)
+ tuple(
f'''complete -c borgmatic -f -n "$exact_option_condition" -a '{' '.join(action.option_strings)}' -d {shlex.quote(action.help)} -n "__fish_seen_subcommand_from {action_name}"{exact_options_completion(action)}'''
for action_name, action_parser in action_parsers.choices.items()
for action in action_parser._actions
if 'Deprecated' not in (action.help or ())
)
)

borgmatic/commands/convert_config.py (deleted)

@@ -1,110 +0,0 @@
import os
import sys
import textwrap
from argparse import ArgumentParser
from ruamel import yaml
from borgmatic.config import convert, generate, legacy, validate
DEFAULT_SOURCE_CONFIG_FILENAME = '/etc/borgmatic/config'
DEFAULT_SOURCE_EXCLUDES_FILENAME = '/etc/borgmatic/excludes'
DEFAULT_DESTINATION_CONFIG_FILENAME = '/etc/borgmatic/config.yaml'
def parse_arguments(*arguments):
'''
Given command-line arguments with which this script was invoked, parse the arguments and return
them as an ArgumentParser instance.
'''
parser = ArgumentParser(
description='''
Convert legacy INI-style borgmatic configuration and excludes files to a single YAML
configuration file. Note that this replaces any comments from the source files.
'''
)
parser.add_argument(
'-s',
'--source-config',
dest='source_config_filename',
default=DEFAULT_SOURCE_CONFIG_FILENAME,
help='Source INI-style configuration filename. Default: {}'.format(
DEFAULT_SOURCE_CONFIG_FILENAME
),
)
parser.add_argument(
'-e',
'--source-excludes',
dest='source_excludes_filename',
default=DEFAULT_SOURCE_EXCLUDES_FILENAME
if os.path.exists(DEFAULT_SOURCE_EXCLUDES_FILENAME)
else None,
help='Excludes filename',
)
parser.add_argument(
'-d',
'--destination-config',
dest='destination_config_filename',
default=DEFAULT_DESTINATION_CONFIG_FILENAME,
help='Destination YAML configuration filename. Default: {}'.format(
DEFAULT_DESTINATION_CONFIG_FILENAME
),
)
return parser.parse_args(arguments)
TEXT_WRAP_CHARACTERS = 80
def display_result(args): # pragma: no cover
result_lines = textwrap.wrap(
'Your borgmatic configuration has been upgraded. Please review the result in {}.'.format(
args.destination_config_filename
),
TEXT_WRAP_CHARACTERS,
)
delete_lines = textwrap.wrap(
'Once you are satisfied, you can safely delete {}{}.'.format(
args.source_config_filename,
' and {}'.format(args.source_excludes_filename)
if args.source_excludes_filename
else '',
),
TEXT_WRAP_CHARACTERS,
)
print('\n'.join(result_lines))
print()
print('\n'.join(delete_lines))
def main(): # pragma: no cover
try:
args = parse_arguments(*sys.argv[1:])
schema = yaml.round_trip_load(open(validate.schema_filename()).read())
source_config = legacy.parse_configuration(
args.source_config_filename, legacy.CONFIG_FORMAT
)
source_config_file_mode = os.stat(args.source_config_filename).st_mode
source_excludes = (
open(args.source_excludes_filename).read().splitlines()
if args.source_excludes_filename
else []
)
destination_config = convert.convert_legacy_parsed_config(
source_config, source_excludes, schema
)
generate.write_configuration(
args.destination_config_filename,
generate.render_configuration(destination_config),
mode=source_config_file_mode,
)
display_result(args)
except (ValueError, OSError) as error:
print(error, file=sys.stderr)
sys.exit(1)

borgmatic/commands/generate_config.py

@@ -1,60 +1,17 @@
import logging
import sys
from argparse import ArgumentParser
from borgmatic.config import generate, validate
DEFAULT_DESTINATION_CONFIG_FILENAME = '/etc/borgmatic/config.yaml'
import borgmatic.commands.borgmatic
def parse_arguments(*arguments):
'''
Given command-line arguments with which this script was invoked, parse the arguments and return
them as an ArgumentParser instance.
'''
parser = ArgumentParser(description='Generate a sample borgmatic YAML configuration file.')
parser.add_argument(
'-s',
'--source',
dest='source_filename',
help='Optional YAML configuration file to merge into the generated configuration, useful for upgrading your configuration',
)
parser.add_argument(
'-d',
'--destination',
dest='destination_filename',
default=DEFAULT_DESTINATION_CONFIG_FILENAME,
help='Destination YAML configuration file. Default: {}'.format(
DEFAULT_DESTINATION_CONFIG_FILENAME
),
)
return parser.parse_args(arguments)
def main(): # pragma: no cover
try:
args = parse_arguments(*sys.argv[1:])
generate.generate_sample_configuration(
args.source_filename, args.destination_filename, validate.schema_filename()
def main():
warning_log = logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg='generate-borgmatic-config is deprecated and will be removed from a future release. Please use "borgmatic config generate" instead.',
)
)
print('Generated a sample configuration file at {}.'.format(args.destination_filename))
print()
if args.source_filename:
print(
'Merged in the contents of configuration file at {}.'.format(args.source_filename)
)
print('To review the changes made, run:')
print()
print(
' diff --unified {} {}'.format(args.source_filename, args.destination_filename)
)
print()
print('Please edit the file to suit your needs. The values are representative.')
print('All fields are optional except where indicated.')
print()
print('If you ever need help: https://torsion.org/borgmatic/#issues')
except (ValueError, OSError) as error:
print(error, file=sys.stderr)
sys.exit(1)
sys.argv = ['borgmatic', 'config', 'generate'] + sys.argv[1:]
borgmatic.commands.borgmatic.main([warning_log])

borgmatic/commands/validate_config.py

@@ -1,56 +1,17 @@
import logging
import sys
from argparse import ArgumentParser
from borgmatic.config import collect, validate
logger = logging.getLogger(__name__)
import borgmatic.commands.borgmatic
def parse_arguments(*arguments):
'''
Given command-line arguments with which this script was invoked, parse the arguments and return
them as an ArgumentParser instance.
'''
config_paths = collect.get_default_config_paths()
parser = ArgumentParser(description='Validate borgmatic configuration file(s).')
parser.add_argument(
'-c',
'--config',
nargs='+',
dest='config_paths',
default=config_paths,
help='Configuration filenames or directories, defaults to: {}'.format(
' '.join(config_paths)
),
def main():
warning_log = logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg='validate-borgmatic-config is deprecated and will be removed from a future release. Please use "borgmatic config validate" instead.',
)
)
return parser.parse_args(arguments)
def main(): # pragma: no cover
args = parse_arguments(*sys.argv[1:])
logging.basicConfig(level=logging.INFO, format='%(message)s')
config_filenames = tuple(collect.collect_config_filenames(args.config_paths))
if len(config_filenames) == 0:
logger.critical('No files to validate found')
sys.exit(1)
found_issues = False
for config_filename in config_filenames:
try:
validate.parse_configuration(config_filename, validate.schema_filename())
except (ValueError, OSError, validate.Validation_error) as error:
logging.critical('{}: Error parsing configuration file'.format(config_filename))
logging.critical(error)
found_issues = True
if found_issues:
sys.exit(1)
else:
logger.info(
'All given configuration files are valid: {}'.format(', '.join(config_filenames))
)
sys.argv = ['borgmatic', 'config', 'validate'] + sys.argv[1:]
borgmatic.commands.borgmatic.main([warning_log])

borgmatic/config/collect.py

@@ -16,17 +16,17 @@ def get_default_config_paths(expand_home=True):
return [
'/etc/borgmatic/config.yaml',
'/etc/borgmatic.d',
'%s/borgmatic/config.yaml' % user_config_directory,
'%s/borgmatic.d' % user_config_directory,
os.path.join(user_config_directory, 'borgmatic/config.yaml'),
os.path.join(user_config_directory, 'borgmatic.d'),
]
def collect_config_filenames(config_paths):
'''
Given a sequence of config paths, both filenames and directories, resolve that to an iterable
of files. Accomplish this by listing any given directories looking for contained config files
(ending with the ".yaml" or ".yml" extension). This is non-recursive, so any directories within the given
directories are ignored.
of absolute files. Accomplish this by listing any given directories looking for contained config
files (ending with the ".yaml" or ".yml" extension). This is non-recursive, so any directories
within the given directories are ignored.
Return paths even if they don't exist on disk, so the user can find out about missing
configuration paths. However, skip a default config path if it's missing, so the user doesn't
@@ -41,7 +41,7 @@ def collect_config_filenames(config_paths):
continue
if not os.path.isdir(path) or not exists:
yield path
yield os.path.abspath(path)
continue
if not os.access(path, os.R_OK):
@@ -51,4 +51,4 @@ def collect_config_filenames(config_paths):
full_filename = os.path.join(path, filename)
matching_filetype = full_filename.endswith('.yaml') or full_filename.endswith('.yml')
if matching_filetype and not os.path.isdir(full_filename):
yield full_filename
yield os.path.abspath(full_filename)

borgmatic/config/convert.py (deleted)

@@ -1,95 +0,0 @@
import os
from ruamel import yaml
from borgmatic.config import generate
def _convert_section(source_section_config, section_schema):
'''
Given a legacy Parsed_config instance for a single section, convert it to its corresponding
yaml.comments.CommentedMap representation in preparation for actual serialization to YAML.
Where integer types exist in the given section schema, convert their values to integers.
'''
destination_section_config = yaml.comments.CommentedMap(
[
(
option_name,
int(option_value)
if section_schema['properties'].get(option_name, {}).get('type') == 'integer'
else option_value,
)
for option_name, option_value in source_section_config.items()
]
)
return destination_section_config
def convert_legacy_parsed_config(source_config, source_excludes, schema):
'''
Given a legacy Parsed_config instance loaded from an INI-style config file and a list of exclude
patterns, convert them to a corresponding yaml.comments.CommentedMap representation in
preparation for serialization to a single YAML config file.
Additionally, use the given schema as a source of helpful comments to include within the
returned CommentedMap.
'''
destination_config = yaml.comments.CommentedMap(
[
(section_name, _convert_section(section_config, schema['properties'][section_name]))
for section_name, section_config in source_config._asdict().items()
]
)
# Split space-separated values into actual lists, make "repository" into a list, and merge in
# excludes.
location = destination_config['location']
location['source_directories'] = source_config.location['source_directories'].split(' ')
location['repositories'] = [location.pop('repository')]
location['exclude_patterns'] = source_excludes
if source_config.consistency.get('checks'):
destination_config['consistency']['checks'] = source_config.consistency['checks'].split(' ')
# Add comments to each section, and then add comments to the fields in each section.
generate.add_comments_to_configuration_object(destination_config, schema)
for section_name, section_config in destination_config.items():
generate.add_comments_to_configuration_object(
section_config, schema['properties'][section_name], indent=generate.INDENT
)
return destination_config
class Legacy_configuration_not_upgraded(FileNotFoundError):
def __init__(self):
super(Legacy_configuration_not_upgraded, self).__init__(
'''borgmatic changed its configuration file format in version 1.1.0 from INI-style
to YAML. This better supports validation, and has a more natural way to express
lists of values. To upgrade your existing configuration, run:
sudo upgrade-borgmatic-config
That will generate a new YAML configuration file at /etc/borgmatic/config.yaml
(by default) using the values from both your existing configuration and excludes
files. The new version of borgmatic will consume the YAML configuration file
instead of the old one.'''
)
def guard_configuration_upgraded(source_config_filename, destination_config_filenames):
'''
If legacy source configuration exists but no destination upgraded configs do, raise
Legacy_configuration_not_upgraded.
The idea is that we want to alert the user about upgrading their config if they haven't already.
'''
destination_config_exists = any(
os.path.exists(filename) for filename in destination_config_filenames
)
if os.path.exists(source_config_filename) and not destination_config_exists:
raise Legacy_configuration_not_upgraded()

borgmatic/config/environment.py (new file)

@@ -0,0 +1,45 @@
import os
import re
_VARIABLE_PATTERN = re.compile(
r'(?P<escape>\\)?(?P<variable>\$\{(?P<name>[A-Za-z0-9_]+)((:?-)(?P<default>[^}]+))?\})'
)
def _resolve_string(matcher):
'''
Get the value from the environment given a matcher containing a name and an optional default value.
If the variable is not defined in the environment and no default value is provided, raise ValueError.
'''
if matcher.group('escape') is not None:
# in case of escaped envvar, unescape it
return matcher.group('variable')
# resolve the env var
name, default = matcher.group('name'), matcher.group('default')
out = os.getenv(name, default=default)
if out is None:
raise ValueError(f'Cannot find variable {name} in environment')
return out
def resolve_env_variables(item):
'''
Resolve variables like ${FOO} in the given configuration with values from the process environment.
Supported formats:
- ${FOO} will return the FOO environment variable
- ${FOO-bar} or ${FOO:-bar} will return the FOO environment variable if it exists, else "bar"
If any variable is missing from the environment and no default value is provided, raise ValueError.
'''
if isinstance(item, str):
return _VARIABLE_PATTERN.sub(_resolve_string, item)
if isinstance(item, list):
for i, subitem in enumerate(item):
item[i] = resolve_env_variables(subitem)
if isinstance(item, dict):
for key, value in item.items():
item[key] = resolve_env_variables(value)
return item
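
Putting the pieces together, a hedged example of how a configuration fragment would resolve, given the functions above (the option names are real borgmatic options, but the environment values are illustrative):

```
import os

os.environ['USERNAME'] = 'app'  # illustrative environment
config = {
    'location': {'source_directories': ['/home/${USERNAME}']},
    'storage': {'encryption_passphrase': '${PASSPHRASE:-insecure-default}'},
}
resolve_env_variables(config)
# -> '/home/app', and 'insecure-default' while PASSPHRASE is unset
```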

borgmatic/config/generate.py

@@ -5,7 +5,7 @@ import re
from ruamel import yaml
from borgmatic.config import load
from borgmatic.config import load, normalize
INDENT = 4
SEQUENCE_INDENT = 2
@@ -48,7 +48,7 @@ def _schema_to_sample_configuration(schema, level=0, parent_is_sequence=False):
config, schema, indent=indent, skip_first=parent_is_sequence
)
else:
raise ValueError('Schema at level {} is unsupported: {}'.format(level, schema))
raise ValueError(f'Schema at level {level} is unsupported: {schema}')
return config
@@ -84,7 +84,7 @@ def _comment_out_optional_configuration(rendered_config):
for line in rendered_config.split('\n'):
# Upon encountering an optional configuration option, comment out lines until the next blank
# line.
if line.strip().startswith('# {}'.format(COMMENTED_OUT_SENTINEL)):
if line.strip().startswith(f'# {COMMENTED_OUT_SENTINEL}'):
optional = True
continue
@@ -109,13 +109,16 @@
return rendered.getvalue()
def write_configuration(config_filename, rendered_config, mode=0o600):
def write_configuration(config_filename, rendered_config, mode=0o600, overwrite=False):
'''
Given a target config filename and rendered config YAML, write it out to file. Create any
containing directories as needed.
containing directories as needed. But if the file already exists and overwrite is False,
abort before writing anything.
'''
if os.path.exists(config_filename):
raise FileExistsError('{} already exists. Aborting.'.format(config_filename))
if not overwrite and os.path.exists(config_filename):
raise FileExistsError(
f'{config_filename} already exists. Aborting. Use --overwrite to replace the file.'
)
try:
os.makedirs(os.path.dirname(config_filename), mode=0o700)
@@ -213,7 +216,7 @@ def remove_commented_out_sentinel(config, field_name):
except KeyError:
return
if last_comment_value == '# {}\n'.format(COMMENTED_OUT_SENTINEL):
if last_comment_value == f'# {COMMENTED_OUT_SENTINEL}\n':
config.ca.items[field_name][RUAMEL_YAML_COMMENTS_INDEX].pop()
@@ -257,30 +260,38 @@ def merge_source_configuration_into_destination(destination_config, source_confi
)
continue
# This is some sort of scalar. Simply set it into the destination.
# This is some sort of scalar. Set it into the destination.
destination_config[field_name] = source_config[field_name]
return destination_config
def generate_sample_configuration(source_filename, destination_filename, schema_filename):
def generate_sample_configuration(
dry_run, source_filename, destination_filename, schema_filename, overwrite=False
):
'''
Given an optional source configuration filename, and a required destination configuration
filename, and the path to a schema filename in a YAML rendition of the JSON Schema format,
write out a sample configuration file based on that schema. If a source filename is provided,
merge the parsed contents of that configuration into the generated configuration.
filename, the path to a schema filename in a YAML rendition of the JSON Schema format, and
whether to overwrite a destination file, write out a sample configuration file based on that
schema. If a source filename is provided, merge the parsed contents of that configuration into
the generated configuration.
'''
schema = yaml.round_trip_load(open(schema_filename))
source_config = None
if source_filename:
source_config = load.load_configuration(source_filename)
normalize.normalize(source_filename, source_config)
destination_config = merge_source_configuration_into_destination(
_schema_to_sample_configuration(schema), source_config
)
if dry_run:
return
write_configuration(
destination_filename,
_comment_out_optional_configuration(render_configuration(destination_config)),
overwrite=overwrite,
)

borgmatic/config/legacy.py (deleted)

@@ -1,152 +0,0 @@
from collections import OrderedDict, namedtuple
from configparser import RawConfigParser
Section_format = namedtuple('Section_format', ('name', 'options'))
Config_option = namedtuple('Config_option', ('name', 'value_type', 'required'))
def option(name, value_type=str, required=True):
'''
Given a config file option name, an expected type for its value, and whether it's required,
return a Config_option capturing that information.
'''
return Config_option(name, value_type, required)
CONFIG_FORMAT = (
Section_format(
'location',
(
option('source_directories'),
option('one_file_system', value_type=bool, required=False),
option('remote_path', required=False),
option('repository'),
),
),
Section_format(
'storage',
(
option('encryption_passphrase', required=False),
option('compression', required=False),
option('umask', required=False),
),
),
Section_format(
'retention',
(
option('keep_within', required=False),
option('keep_hourly', int, required=False),
option('keep_daily', int, required=False),
option('keep_weekly', int, required=False),
option('keep_monthly', int, required=False),
option('keep_yearly', int, required=False),
option('prefix', required=False),
),
),
Section_format(
'consistency', (option('checks', required=False), option('check_last', required=False))
),
)
def validate_configuration_format(parser, config_format):
'''
Given an open RawConfigParser and an expected config file format, validate that the parsed
configuration file has the expected sections, that any required options are present in those
sections, and that there aren't any unexpected options.
A section is required if any of its contained options are required.
Raise ValueError if anything is awry.
'''
section_names = set(parser.sections())
required_section_names = tuple(
section.name
for section in config_format
if any(option.required for option in section.options)
)
unknown_section_names = section_names - set(
section_format.name for section_format in config_format
)
if unknown_section_names:
raise ValueError(
'Unknown config sections found: {}'.format(', '.join(unknown_section_names))
)
missing_section_names = set(required_section_names) - section_names
if missing_section_names:
raise ValueError('Missing config sections: {}'.format(', '.join(missing_section_names)))
for section_format in config_format:
if section_format.name not in section_names:
continue
option_names = parser.options(section_format.name)
expected_options = section_format.options
unexpected_option_names = set(option_names) - set(
option.name for option in expected_options
)
if unexpected_option_names:
raise ValueError(
'Unexpected options found in config section {}: {}'.format(
section_format.name, ', '.join(sorted(unexpected_option_names))
)
)
missing_option_names = tuple(
option.name
for option in expected_options
if option.required
if option.name not in option_names
)
if missing_option_names:
raise ValueError(
'Required options missing from config section {}: {}'.format(
section_format.name, ', '.join(missing_option_names)
)
)
def parse_section_options(parser, section_format):
'''
Given an open RawConfigParser and an expected section format, return the option values from that
section as a dict mapping from option name to value. Omit those options that are not present in
the parsed options.
Raise ValueError if any option values cannot be coerced to the expected Python data type.
'''
type_getter = {str: parser.get, int: parser.getint, bool: parser.getboolean}
return OrderedDict(
(option.name, type_getter[option.value_type](section_format.name, option.name))
for option in section_format.options
if parser.has_option(section_format.name, option.name)
)
def parse_configuration(config_filename, config_format):
'''
Given a config filename and an expected config file format, return the parsed configuration
as a namedtuple with one attribute for each parsed section.
Raise IOError if the file cannot be read, or ValueError if the format is not as expected.
'''
parser = RawConfigParser()
if not parser.read(config_filename):
raise ValueError('Configuration file cannot be opened: {}'.format(config_filename))
validate_configuration_format(parser, config_format)
# Describes a parsed configuration, where each attribute is the name of a configuration file
# section and each value is a dict of that section's parsed options.
Parsed_config = namedtuple(
'Parsed_config', (section_format.name for section_format in config_format)
)
return Parsed_config(
*(parse_section_options(parser, section_format) for section_format in config_format)
)

borgmatic/config/load.py

@@ -1,3 +1,5 @@
import functools
import json
import logging
import os
@@ -6,26 +8,65 @@ import ruamel.yaml
logger = logging.getLogger(__name__)
def load_configuration(filename):
def include_configuration(loader, filename_node, include_directory):
'''
Load the given configuration file and return its contents as a data structure of nested dicts
and lists.
Given a ruamel.yaml.loader.Loader, a ruamel.yaml.serializer.ScalarNode containing the included
filename, and an include directory path to search for matching files, load the given YAML
filename (ignoring the given loader so we can use our own) and return its contents as a data
structure of nested dicts and lists. If the filename is relative, probe for it within 1. the
current working directory and 2. the given include directory.
Raise ruamel.yaml.error.YAMLError if something goes wrong parsing the YAML, or RecursionError
if there are too many recursive includes.
Raise FileNotFoundError if an included file was not found.
'''
yaml = ruamel.yaml.YAML(typ='safe')
yaml.Constructor = Include_constructor
include_directories = [os.getcwd(), os.path.abspath(include_directory)]
include_filename = os.path.expanduser(filename_node.value)
return yaml.load(open(filename))
if not os.path.isabs(include_filename):
candidate_filenames = [
os.path.join(directory, include_filename) for directory in include_directories
]
for candidate_filename in candidate_filenames:
if os.path.exists(candidate_filename):
include_filename = candidate_filename
break
else:
raise FileNotFoundError(
f'Could not find include {filename_node.value} at {" or ".join(candidate_filenames)}'
)
return load_configuration(include_filename)
def include_configuration(loader, filename_node):
def raise_retain_node_error(loader, node):
'''
Load the given YAML filename (ignoring the given loader so we can use our own), and return its
contents as a data structure of nested dicts and lists.
Given a ruamel.yaml.loader.Loader and a YAML node, raise an error about "!retain" usage.
Raise ValueError if a mapping or sequence node is given, as that indicates that "!retain" was
used in a configuration file without a merge. In configuration files with a merge, mapping and
sequence nodes with "!retain" tags are handled by deep_merge_nodes() below.
Also raise ValueError if a scalar node is given, as "!retain" is not supported on scalar nodes.
'''
return load_configuration(os.path.expanduser(filename_node.value))
if isinstance(node, (ruamel.yaml.nodes.MappingNode, ruamel.yaml.nodes.SequenceNode)):
raise ValueError(
'The !retain tag may only be used within a configuration file containing a merged !include tag.'
)
raise ValueError('The !retain tag may only be used on a YAML mapping or sequence.')
def raise_omit_node_error(loader, node):
'''
Given a ruamel.yaml.loader.Loader and a YAML node, raise an error about "!omit" usage.
Raise ValueError unconditionally, as an "!omit" node here indicates it was used in a
configuration file without a merge. In configuration files with a merge, nodes with "!omit"
tags are handled by deep_merge_nodes() below.
'''
raise ValueError(
'The !omit tag may only be used on a scalar (e.g., string) list element within a configuration file containing a merged !include tag.'
)
class Include_constructor(ruamel.yaml.SafeConstructor):
@@ -34,20 +75,30 @@ class Include_constructor(ruamel.yaml.SafeConstructor):
separate YAML configuration files. Example syntax: `retention: !include common.yaml`
'''
def __init__(self, preserve_quotes=None, loader=None):
def __init__(self, preserve_quotes=None, loader=None, include_directory=None):
super(Include_constructor, self).__init__(preserve_quotes, loader)
self.add_constructor('!include', include_configuration)
self.add_constructor(
'!include',
functools.partial(include_configuration, include_directory=include_directory),
)
self.add_constructor('!retain', raise_retain_node_error)
self.add_constructor('!omit', raise_omit_node_error)
def flatten_mapping(self, node):
'''
Support the special case of shallow merging included configuration into an existing mapping
Support the special case of deep merging included configuration into an existing mapping
using the YAML '<<' merge key. Example syntax:
```
retention:
keep_daily: 1
<<: !include common.yaml
<<: !include common.yaml
```
These includes are deep merged into the current configuration file. For instance, in this
example, any "retention" options in common.yaml will get merged into the "retention" section
in the example configuration file.
'''
representer = ruamel.yaml.representer.SafeRepresenter()
@@ -57,3 +108,186 @@ class Include_constructor(ruamel.yaml.SafeConstructor):
node.value[index] = (key_node, included_value)
super(Include_constructor, self).flatten_mapping(node)
node.value = deep_merge_nodes(node.value)
def load_configuration(filename):
'''
Load the given configuration file and return its contents as a data structure of nested dicts
and lists. Also, replace any "{constant}" strings with the value of the "constant" key in the
"constants" section of the configuration file.
Raise ruamel.yaml.error.YAMLError if something goes wrong parsing the YAML, or RecursionError
if there are too many recursive includes.
'''
# Use an embedded derived class for the include constructor so as to capture the filename
# value. (functools.partial doesn't work for this use case because yaml.Constructor has to be
# an actual class.)
class Include_constructor_with_include_directory(Include_constructor):
def __init__(self, preserve_quotes=None, loader=None):
super(Include_constructor_with_include_directory, self).__init__(
preserve_quotes, loader, include_directory=os.path.dirname(filename)
)
yaml = ruamel.yaml.YAML(typ='safe')
yaml.Constructor = Include_constructor_with_include_directory
with open(filename) as file:
file_contents = file.read()
config = yaml.load(file_contents)
if config and 'constants' in config:
for key, value in config['constants'].items():
value = json.dumps(value)
file_contents = file_contents.replace(f'{{{key}}}', value.strip('"'))
config = yaml.load(file_contents)
del config['constants']
return config
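
The constants interpolation above is purely textual, applied before the second YAML parse. An illustrative reduction of that loop (the constant name and value are hypothetical):

```
import json

file_contents = 'location:\n    source_directories:\n        - /home/{app_name}\n'
constants = {'app_name': 'myapp'}  # as parsed from a "constants:" section
for key, value in constants.items():
    value = json.dumps(value)
    file_contents = file_contents.replace(f'{{{key}}}', value.strip('"'))
assert '- /home/myapp' in file_contents
```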
def filter_omitted_nodes(nodes):
'''
Given a list of nodes, return a filtered list omitting any nodes with an "!omit" tag or with a
value matching such nodes.
'''
omitted_values = tuple(node.value for node in nodes if node.tag == '!omit')
return [node for node in nodes if node.value not in omitted_values]
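
A toy demonstration of filter_omitted_nodes() semantics using bare ruamel scalar nodes (the values are illustrative):

```
from ruamel.yaml.nodes import ScalarNode

nodes = [
    ScalarNode(tag='!omit', value='/home'),
    ScalarNode(tag='tag:yaml.org,2002:str', value='/home'),
    ScalarNode(tag='tag:yaml.org,2002:str', value='/etc'),
]
omitted_values = tuple(node.value for node in nodes if node.tag == '!omit')
# Both the "!omit" node and the plain node sharing its value are dropped.
assert [node.value for node in nodes if node.value not in omitted_values] == ['/etc']
```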
DELETED_NODE = object()
def deep_merge_nodes(nodes):
'''
Given a nested borgmatic configuration data structure as a list of tuples in the form of:
(
ruamel.yaml.nodes.ScalarNode as a key,
ruamel.yaml.nodes.MappingNode or other Node as a value,
),
... deep merge any node values corresponding to duplicate keys and return the result. If
there are colliding keys with non-MappingNode values (e.g., integers or strings), the last
of the values wins.
For instance, given node values of:
[
(
ScalarNode(tag='tag:yaml.org,2002:str', value='retention'),
MappingNode(tag='tag:yaml.org,2002:map', value=[
(
ScalarNode(tag='tag:yaml.org,2002:str', value='keep_hourly'),
ScalarNode(tag='tag:yaml.org,2002:int', value='24')
),
(
ScalarNode(tag='tag:yaml.org,2002:str', value='keep_daily'),
ScalarNode(tag='tag:yaml.org,2002:int', value='7')
),
]),
),
(
ScalarNode(tag='tag:yaml.org,2002:str', value='retention'),
MappingNode(tag='tag:yaml.org,2002:map', value=[
(
ScalarNode(tag='tag:yaml.org,2002:str', value='keep_daily'),
ScalarNode(tag='tag:yaml.org,2002:int', value='5')
),
]),
),
]
... the returned result would be:
[
(
ScalarNode(tag='tag:yaml.org,2002:str', value='retention'),
MappingNode(tag='tag:yaml.org,2002:map', value=[
(
ScalarNode(tag='tag:yaml.org,2002:str', value='keep_hourly'),
ScalarNode(tag='tag:yaml.org,2002:int', value='24')
),
(
ScalarNode(tag='tag:yaml.org,2002:str', value='keep_daily'),
ScalarNode(tag='tag:yaml.org,2002:int', value='5')
),
]),
),
]
If a mapping or sequence node has a YAML "!retain" tag, then that node is not merged.
The purpose of deep merging like this is to support, for instance, merging one borgmatic
configuration file into another for reuse, such that a configuration section ("retention",
etc.) does not completely replace the corresponding section in a merged file.
Raise ValueError if a merge is implied using two incompatible types.
'''
# Map from original node key/value to the replacement merged node. DELETED_NODE as a replacement
# node indicates deletion.
replaced_nodes = {}
# To find nodes that require merging, compare each node with each other node.
for a_key, a_value in nodes:
for b_key, b_value in nodes:
# If we've already considered one of the nodes for merging, skip it.
if (a_key, a_value) in replaced_nodes or (b_key, b_value) in replaced_nodes:
continue
# If the keys match and the values are different, we need to merge these two A and B nodes.
if a_key.tag == b_key.tag and a_key.value == b_key.value and a_value != b_value:
if type(a_value) is not type(b_value):
raise ValueError(
f'Incompatible types found when trying to merge "{a_key.value}:" values across configuration files: {type(a_value).id} and {type(b_value).id}'
)
# Since we're merging into the B node, consider the A node a duplicate and remove it.
replaced_nodes[(a_key, a_value)] = DELETED_NODE
# If we're dealing with MappingNodes, recurse and merge its values as well.
if isinstance(b_value, ruamel.yaml.nodes.MappingNode):
# A "!retain" tag says to skip deep merging for this node. Replace the tag so
# downstream schema validation doesn't break on our application-specific tag.
if b_value.tag == '!retain':
b_value.tag = 'tag:yaml.org,2002:map'
else:
replaced_nodes[(b_key, b_value)] = (
b_key,
ruamel.yaml.nodes.MappingNode(
tag=b_value.tag,
value=deep_merge_nodes(a_value.value + b_value.value),
start_mark=b_value.start_mark,
end_mark=b_value.end_mark,
flow_style=b_value.flow_style,
comment=b_value.comment,
anchor=b_value.anchor,
),
)
# If we're dealing with SequenceNodes, merge by appending one sequence to the other.
elif isinstance(b_value, ruamel.yaml.nodes.SequenceNode):
# A "!retain" tag says to skip deep merging for this node. Replace the tag so
# downstream schema validation doesn't break on our application-specific tag.
if b_value.tag == '!retain':
b_value.tag = 'tag:yaml.org,2002:seq'
else:
replaced_nodes[(b_key, b_value)] = (
b_key,
ruamel.yaml.nodes.SequenceNode(
tag=b_value.tag,
value=filter_omitted_nodes(a_value.value + b_value.value),
start_mark=b_value.start_mark,
end_mark=b_value.end_mark,
flow_style=b_value.flow_style,
comment=b_value.comment,
anchor=b_value.anchor,
),
)
return [
replaced_nodes.get(node, node) for node in nodes if replaced_nodes.get(node) != DELETED_NODE
]

borgmatic/config/normalize.py

@@ -1,10 +1,213 @@
def normalize(config):
'''
Given a configuration dict, apply particular hard-coded rules to normalize its contents to
adhere to the configuration schema.
'''
exclude_if_present = config.get('location', {}).get('exclude_if_present')
import logging
import os
# "Upgrade" exclude_if_present from a string to a list.
def normalize(config_filename, config):
'''
Given a configuration filename and a configuration dict of its loaded contents, apply particular
hard-coded rules to normalize the configuration to adhere to the current schema. Return any log
message warnings produced based on the normalization performed.
'''
logs = []
location = config.get('location') or {}
storage = config.get('storage') or {}
consistency = config.get('consistency') or {}
retention = config.get('retention') or {}
hooks = config.get('hooks') or {}
# Upgrade exclude_if_present from a string to a list.
exclude_if_present = location.get('exclude_if_present')
if isinstance(exclude_if_present, str):
logs.append(
logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: The exclude_if_present option now expects a list value. String values for this option are deprecated and support will be removed from a future release.',
)
)
)
config['location']['exclude_if_present'] = [exclude_if_present]
# Upgrade various monitoring hooks from a string to a dict.
healthchecks = hooks.get('healthchecks')
if isinstance(healthchecks, str):
logs.append(
logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: The healthchecks hook now expects a mapping value. String values for this option are deprecated and support will be removed from a future release.',
)
)
)
config['hooks']['healthchecks'] = {'ping_url': healthchecks}
cronitor = hooks.get('cronitor')
if isinstance(cronitor, str):
logs.append(
logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: The cronitor hook now expects key/value pairs. String values for this option are deprecated and support will be removed from a future release.',
)
)
)
config['hooks']['cronitor'] = {'ping_url': cronitor}
pagerduty = hooks.get('pagerduty')
if isinstance(pagerduty, str):
logs.append(
logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: The pagerduty hook now expects key/value pairs. String values for this option are deprecated and support will be removed from a future release.',
)
)
)
config['hooks']['pagerduty'] = {'integration_key': pagerduty}
cronhub = hooks.get('cronhub')
if isinstance(cronhub, str):
logs.append(
logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: The cronhub hook now expects key/value pairs. String values for this option are deprecated and support will be removed from a future release.',
)
)
)
config['hooks']['cronhub'] = {'ping_url': cronhub}
# Upgrade consistency checks from a list of strings to a list of dicts.
checks = consistency.get('checks')
if isinstance(checks, list) and len(checks) and isinstance(checks[0], str):
logs.append(
logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: The checks option now expects a list of key/value pairs. Lists of strings for this option are deprecated and support will be removed from a future release.',
)
)
)
config['consistency']['checks'] = [{'name': check_type} for check_type in checks]
# Rename various configuration options.
numeric_owner = location.pop('numeric_owner', None)
if numeric_owner is not None:
logs.append(
logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: The numeric_owner option has been renamed to numeric_ids. numeric_owner is deprecated and support will be removed from a future release.',
)
)
)
config['location']['numeric_ids'] = numeric_owner
bsd_flags = location.pop('bsd_flags', None)
if bsd_flags is not None:
logs.append(
logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: The bsd_flags option has been renamed to flags. bsd_flags is deprecated and support will be removed from a future release.',
)
)
)
config['location']['flags'] = bsd_flags
remote_rate_limit = storage.pop('remote_rate_limit', None)
if remote_rate_limit is not None:
logs.append(
logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: The remote_rate_limit option has been renamed to upload_rate_limit. remote_rate_limit is deprecated and support will be removed from a future release.',
)
)
)
config['storage']['upload_rate_limit'] = remote_rate_limit
# Upgrade remote repositories to ssh:// syntax, required in Borg 2.
repositories = location.get('repositories')
if repositories:
if isinstance(repositories[0], str):
logs.append(
logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: The repositories option now expects a list of key/value pairs. Lists of strings for this option are deprecated and support will be removed from a future release.',
)
)
)
config['location']['repositories'] = [
{'path': repository} for repository in repositories
]
repositories = config['location']['repositories']
config['location']['repositories'] = []
for repository_dict in repositories:
repository_path = repository_dict['path']
if '~' in repository_path:
logs.append(
logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: Repository paths containing "~" are deprecated in borgmatic and support will be removed from a future release.',
)
)
)
if ':' in repository_path:
if repository_path.startswith('file://'):
updated_repository_path = os.path.abspath(
repository_path.partition('file://')[-1]
)
config['location']['repositories'].append(
dict(
repository_dict,
path=updated_repository_path,
)
)
elif repository_path.startswith('ssh://'):
config['location']['repositories'].append(repository_dict)
else:
rewritten_repository_path = f"ssh://{repository_path.replace(':~', '/~').replace(':/', '/').replace(':', '/./')}"
logs.append(
logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: Remote repository paths without ssh:// syntax are deprecated and support will be removed from a future release. Interpreting "{repository_path}" as "{rewritten_repository_path}"',
)
)
)
config['location']['repositories'].append(
dict(
repository_dict,
path=rewritten_repository_path,
)
)
else:
config['location']['repositories'].append(repository_dict)
if consistency.get('prefix') or retention.get('prefix'):
logs.append(
logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: The prefix option is deprecated and support will be removed from a future release. Use archive_name_format or match_archives instead.',
)
)
)
return logs
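
The ssh:// rewrite above can be checked in isolation (the repository path is hypothetical):

```
repository_path = 'user@backupserver:backups.borg'  # hypothetical pre-Borg-2 remote path
rewritten = f"ssh://{repository_path.replace(':~', '/~').replace(':/', '/').replace(':', '/./')}"
assert rewritten == 'ssh://user@backupserver/./backups.borg'
```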

borgmatic/config/override.py

@@ -26,6 +26,8 @@ def convert_value_type(value):
'''
Given a string value, determine its logical type (string, boolean, integer, etc.), and return it
converted to that type.
Raise ruamel.yaml.error.YAMLError if there's a parse issue with the YAML.
'''
return ruamel.yaml.YAML(typ='safe').load(io.StringIO(value))
@@ -50,22 +52,33 @@ def parse_overrides(raw_overrides):
if not raw_overrides:
return ()
try:
return tuple(
(tuple(raw_keys.split('.')), convert_value_type(value))
for raw_override in raw_overrides
for raw_keys, value in (raw_override.split('=', 1),)
)
except ValueError:
raise ValueError('Invalid override. Make sure you use the form: SECTION.OPTION=VALUE')
parsed_overrides = []
for raw_override in raw_overrides:
try:
raw_keys, value = raw_override.split('=', 1)
parsed_overrides.append(
(
tuple(raw_keys.split('.')),
convert_value_type(value),
)
)
except ValueError:
raise ValueError(
f"Invalid override '{raw_override}'. Make sure you use the form: SECTION.OPTION=VALUE"
)
except ruamel.yaml.error.YAMLError as error:
raise ValueError(f"Invalid override '{raw_override}': {error.problem}")
return tuple(parsed_overrides)
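
For example, assuming the helpers above, override strings parse like this (values go through a YAML load, so lists and booleans work):

```
parse_overrides(['location.remote_path=/usr/local/bin/borg', 'consistency.checks=[repository]'])
# -> ((('location', 'remote_path'), '/usr/local/bin/borg'),
#     (('consistency', 'checks'), ['repository']))
```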
def apply_overrides(config, raw_overrides):
'''
Given a sequence of configuration file override strings in the form of "section.option=value"
and a configuration dict, parse each override and set it the configuration dict.
Given a configuration dict and a sequence of configuration file override strings in the form of
"section.option=value", parse each override and set it the configuration dict.
'''
overrides = parse_overrides(raw_overrides)
for (keys, value) in overrides:
for keys, value in overrides:
set_values(config, keys, value)

File diff suppressed because it is too large.

borgmatic/config/validate.py

@@ -1,39 +1,44 @@
import os
import jsonschema
import pkg_resources
import ruamel.yaml
from borgmatic.config import load, normalize, override
import borgmatic.config
from borgmatic.config import environment, load, normalize, override
def schema_filename():
'''
Path to the installed YAML configuration schema file, used to validate and parse the
configuration.
Raise FileNotFoundError when the schema path does not exist.
'''
return pkg_resources.resource_filename('borgmatic', 'config/schema.yaml')
schema_path = os.path.join(os.path.dirname(borgmatic.config.__file__), 'schema.yaml')
with open(schema_path):
return schema_path
def format_error_path_element(path_element):
def format_json_error_path_element(path_element):
'''
Given a path element into a JSON data structure, format it for display as a string.
'''
if isinstance(path_element, int):
return str('[{}]'.format(path_element))
return str(f'[{path_element}]')
return str('.{}'.format(path_element))
return str(f'.{path_element}')
def format_error(error):
def format_json_error(error):
'''
Given an instance of jsonschema.exceptions.ValidationError, format it for display as a string.
'''
if not error.path:
return 'At the top level: {}'.format(error.message)
return f'At the top level: {error.message}'
formatted_path = ''.join(format_error_path_element(element) for element in error.path)
return "At '{}': {}".format(formatted_path.lstrip('.'), error.message)
formatted_path = ''.join(format_json_error_path_element(element) for element in error.path)
return f"At '{formatted_path.lstrip('.')}': {error.message}"
class Validation_error(ValueError):
@@ -44,8 +49,8 @@ class Validation_error(ValueError):
def __init__(self, config_filename, errors):
'''
Given a configuration filename path and a sequence of
jsonschema.exceptions.ValidationError instances, create a Validation_error.
Given a configuration filename path and a sequence of string error messages, create a
Validation_error.
'''
self.config_filename = config_filename
self.errors = errors
@@ -54,9 +59,10 @@ class Validation_error(ValueError):
'''
Render a validation error as a user-facing string.
'''
return 'An error occurred while parsing a configuration file at {}:\n'.format(
self.config_filename
) + '\n'.join(format_error(error) for error in self.errors)
return (
f'An error occurred while parsing a configuration file at {self.config_filename}:\n'
+ '\n'.join(error for error in self.errors)
)
def apply_logical_validation(config_filename, parsed_configuration):
@@ -65,30 +71,22 @@ def apply_logical_validation(config_filename, parsed_configuration):
below), run through any additional logical validation checks. If there are any such validation
problems, raise a Validation_error.
'''
archive_name_format = parsed_configuration.get('storage', {}).get('archive_name_format')
prefix = parsed_configuration.get('retention', {}).get('prefix')
if archive_name_format and not prefix:
raise Validation_error(
config_filename,
('If you provide an archive_name_format, you must also specify a retention prefix.',),
)
location_repositories = parsed_configuration.get('location', {}).get('repositories')
check_repositories = parsed_configuration.get('consistency', {}).get('check_repositories', [])
for repository in check_repositories:
if repository not in location_repositories:
if not any(
repositories_match(repository, config_repository)
for config_repository in location_repositories
):
raise Validation_error(
config_filename,
(
'Unknown repository in the consistency section\'s check_repositories: {}'.format(
repository
),
f'Unknown repository in the "consistency" section\'s "check_repositories": {repository}',
),
)
def parse_configuration(config_filename, schema_filename, overrides=None):
def parse_configuration(config_filename, schema_filename, overrides=None, resolve_env=True):
'''
Given the path to a config filename in YAML format, the path to a schema filename in a YAML
rendition of JSON Schema format, a sequence of configuration file override strings in the form
@@ -98,6 +96,9 @@ def parse_configuration(config_filename, schema_filename, overrides=None):
{'location': {'source_directories': ['/home', '/etc'], 'repository': 'hostname.borg'},
'retention': {'keep_daily': 7}, 'consistency': {'checks': ['repository', 'archives']}}
Also return a sequence of logging.LogRecord instances containing any warnings about the
configuration.
Raise FileNotFoundError if the file does not exist, PermissionError if the user does not
have permissions to read the file, or Validation_error if the config does not match the schema.
'''
@@ -108,61 +109,65 @@ def parse_configuration(config_filename, schema_filename, overrides=None):
raise Validation_error(config_filename, (str(error),))
override.apply_overrides(config, overrides)
normalize.normalize(config)
logs = normalize.normalize(config_filename, config)
if resolve_env:
environment.resolve_env_variables(config)
validator = jsonschema.Draft7Validator(schema)
try:
validator = jsonschema.Draft7Validator(schema)
except AttributeError: # pragma: no cover
validator = jsonschema.Draft4Validator(schema)
validation_errors = tuple(validator.iter_errors(config))
if validation_errors:
raise Validation_error(config_filename, validation_errors)
raise Validation_error(
config_filename, tuple(format_json_error(error) for error in validation_errors)
)
apply_logical_validation(config_filename, config)
return config
return config, logs
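
A hedged usage sketch of the new (config, logs) return value, using the parse_configuration() and schema_filename() shown above (the config path and override string are illustrative):

```
import logging

config, logs = parse_configuration(
    '/etc/borgmatic/config.yaml', schema_filename(), overrides=('storage.compression=zstd',)
)
for log in logs:
    logging.getLogger(__name__).handle(log)  # surface any normalization warnings
```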
def normalize_repository_path(repository):
'''
Given a repository path, return the absolute path of it (for local repositories).
'''
# A colon in the repository indicates it's a remote repository. Bail.
if ':' in repository:
# A colon in the repository could mean that it's either a file:// URL or a remote repository.
# If it's a remote repository, we don't want to normalize it. If it's a file:// URL, we do.
if ':' not in repository:
return os.path.abspath(repository)
elif repository.startswith('file://'):
return os.path.abspath(repository.partition('file://')[-1])
else:
return repository
return os.path.abspath(repository)
def repositories_match(first, second):
'''
Given two repository paths (relative and/or absolute), return whether they match.
Given two repository dicts with keys 'path' (relative and/or absolute),
and 'label', or two repository paths, return whether they match.
'''
return normalize_repository_path(first) == normalize_repository_path(second)
if isinstance(first, str):
first = {'path': first, 'label': first}
if isinstance(second, str):
second = {'path': second, 'label': second}
return (first.get('label') == second.get('label')) or (
normalize_repository_path(first.get('path'))
== normalize_repository_path(second.get('path'))
)
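
Illustrative checks of those matching rules, assuming the functions above (the paths and labels are hypothetical):

```
# A bare string is promoted to a dict, then labels and normalized paths are compared.
assert repositories_match('/tmp/../repo.borg', {'path': '/repo.borg', 'label': 'local'})
assert repositories_match({'path': 'a.borg', 'label': 'main'}, {'path': 'b.borg', 'label': 'main'})
```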
def guard_configuration_contains_repository(repository, configurations):
'''
Given a repository path and a dict mapping from config filename to corresponding parsed config
dict, ensure that the repository is declared exactly once in all of the configurations.
If no repository is given, then error if there are multiple configured repositories.
dict, ensure that the repository is declared exactly once in all of the configurations. If no
repository is given, skip this check.
Raise ValueError if the repository is not found in a configuration, or is declared multiple
times.
'''
if not repository:
count = len(
tuple(
config_repository
for config in configurations.values()
for config_repository in config['location']['repositories']
)
)
if count > 1:
raise ValueError(
'Can\'t determine which repository to use. Use --repository option to disambiguate'
)
return
count = len(
@ -170,11 +175,34 @@ def guard_configuration_contains_repository(repository, configurations):
config_repository
for config in configurations.values()
for config_repository in config['location']['repositories']
if repositories_match(repository, config_repository)
if repositories_match(config_repository, repository)
)
)
if count == 0:
raise ValueError('Repository {} not found in configuration files'.format(repository))
raise ValueError(f'Repository {repository} not found in configuration files')
if count > 1:
raise ValueError('Repository {} found in multiple configuration files'.format(repository))
raise ValueError(f'Repository {repository} found in multiple configuration files')
def guard_single_repository_selected(repository, configurations):
'''
Given a repository path and a dict mapping from config filename to corresponding parsed config
dict, ensure either a single repository exists across all configuration files or a repository
path was given.
'''
if repository:
return
count = len(
tuple(
config_repository
for config in configurations.values()
for config_repository in config['location']['repositories']
)
)
if count != 1:
raise ValueError(
"Can't determine which repository to use. Use --repository to disambiguate"
)
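
The two guards compose as follows in a hedged sketch (the configuration shape mirrors the repositories-as-dicts format used above):

    configurations = {
        'config.yaml': {
            'location': {'repositories': [{'path': '/mnt/backup.borg', 'label': 'local'}]},
        },
    }

    guard_configuration_contains_repository(None, configurations)  # no-op: no repository given
    guard_single_repository_selected(None, configurations)         # passes: exactly one repository
    guard_configuration_contains_repository('/mnt/backup.borg', configurations)  # passes: declared once
    # guard_configuration_contains_repository('/mnt/other.borg', configurations) raises ValueError
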



@ -11,7 +11,7 @@ ERROR_OUTPUT_MAX_LINE_COUNT = 25
BORG_ERROR_EXIT_CODE = 2
def exit_code_indicates_error(process, exit_code, borg_local_path=None):
def exit_code_indicates_error(command, exit_code, borg_local_path=None):
'''
Return True if the given exit code from running a command corresponds to an error. If a Borg
local path is given and matches the process' command, then treat exit code 1 as a warning
@ -20,8 +20,6 @@ def exit_code_indicates_error(process, exit_code, borg_local_path=None):
if exit_code is None:
return False
command = process.args.split(' ') if isinstance(process.args, str) else process.args
if borg_local_path and command[0] == borg_local_path:
return bool(exit_code < 0 or exit_code >= BORG_ERROR_EXIT_CODE)
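
Borg reserves exit code 1 for warnings and 2 and above for errors, while any nonzero exit from a non-Borg command is treated as an error. A small truth table under the new list-based signature (the non-Borg branch is assumed to follow the usual exit_code != 0 rule):

    exit_code_indicates_error(['borg', 'create'], 1, borg_local_path='borg')  # False: Borg warning
    exit_code_indicates_error(['borg', 'create'], 2, borg_local_path='borg')  # True: Borg error
    exit_code_indicates_error(['mysqldump', '--databases', 'mydb'], 1)        # True: non-Borg, nonzero
    exit_code_indicates_error(['borg', 'create'], None)                       # False: still running
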
@ -45,11 +43,32 @@ def output_buffer_for_process(process, exclude_stdouts):
return process.stderr if process.stdout in exclude_stdouts else process.stdout
def append_last_lines(last_lines, captured_output, line, output_log_level):
'''
Given a rolling list of last lines, a list of captured output, a line to append, and an output
log level, append the line to the last lines and (if necessary) the captured output. Then log
the line at the requested output log level.
'''
last_lines.append(line)
if len(last_lines) > ERROR_OUTPUT_MAX_LINE_COUNT:
last_lines.pop(0)
if output_log_level is None:
captured_output.append(line)
else:
logger.log(output_log_level, line)
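
A short demonstration of the rolling buffer (capture mode is selected by passing output_log_level=None, exactly as the capture path below does):

    last_lines, captured_output = [], []

    for line in ('one', 'two', 'three'):
        append_last_lines(last_lines, captured_output, line, output_log_level=None)

    # last_lines == captured_output == ['one', 'two', 'three']; once more than
    # ERROR_OUTPUT_MAX_LINE_COUNT (25) lines arrive, the oldest are dropped from
    # last_lines, and with a real log level the lines are logged instead of captured.
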
def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path):
'''
Given a sequence of subprocess.Popen() instances for multiple processes, log the output for each
process with the requested log level. Additionally, raise a CalledProcessError if a process
exits with an error (or a warning for exit code 1, if that process matches the Borg local path).
exits with an error (or a warning for exit code 1, if that process does not match the Borg local
path).
If output log level is None, then instead of logging, capture output for each process and return
it as a dict from the process to its output.
For simplicity, it's assumed that the output buffer for each process is its stdout. But if any
stdouts are given to exclude, then for any matching processes, log from their stderr instead.
@ -65,6 +84,8 @@ def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path):
if process.stdout or process.stderr
}
output_buffers = list(process_for_output_buffer.keys())
captured_outputs = collections.defaultdict(list)
still_running = True
# Log output for each process until they all exit.
while True:
@ -87,18 +108,22 @@ def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path):
# Add the process's output to output_buffers to ensure it'll get read.
output_buffers.append(other_process.stdout)
line = ready_buffer.readline().rstrip().decode()
if not line or not ready_process:
continue
while True:
line = ready_buffer.readline().rstrip().decode()
if not line or not ready_process:
break
# Keep the last few lines of output in case the process errors, and we need the output for
# the exception below.
last_lines = buffer_last_lines[ready_buffer]
last_lines.append(line)
if len(last_lines) > ERROR_OUTPUT_MAX_LINE_COUNT:
last_lines.pop(0)
# Keep the last few lines of output in case the process errors, and we need the output for
# the exception below.
append_last_lines(
buffer_last_lines[ready_buffer],
captured_outputs[ready_process],
line,
output_log_level,
)
logger.log(output_log_level, line)
if not still_running:
break
still_running = False
@ -108,13 +133,24 @@ def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path):
if exit_code is None:
still_running = True
command = process.args.split(' ') if isinstance(process.args, str) else process.args
# If any process errors, then raise accordingly.
if exit_code_indicates_error(process, exit_code, borg_local_path):
if exit_code_indicates_error(command, exit_code, borg_local_path):
# If an error occurs, include its output in the raised exception so that we don't
# inadvertently hide error output.
output_buffer = output_buffer_for_process(process, exclude_stdouts)
last_lines = buffer_last_lines[output_buffer] if output_buffer else []
# Collect any straggling output lines that came in since we last gathered output.
while output_buffer: # pragma: no cover
line = output_buffer.readline().rstrip().decode()
if not line:
break
append_last_lines(
last_lines, captured_outputs[process], line, output_log_level=logging.ERROR
)
if len(last_lines) == ERROR_OUTPUT_MAX_LINE_COUNT:
last_lines.insert(0, '...')
@ -129,34 +165,21 @@ def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path):
exit_code, command_for_process(process), '\n'.join(last_lines)
)
if not still_running:
break
# Consume any remaining output that we missed (if any).
for process in processes:
output_buffer = output_buffer_for_process(process, exclude_stdouts)
if not output_buffer:
continue
while True: # pragma: no cover
remaining_output = output_buffer.readline().rstrip().decode()
if not remaining_output:
break
logger.log(output_log_level, remaining_output)
if captured_outputs:
return {
process: '\n'.join(output_lines) for process, output_lines in captured_outputs.items()
}
def log_command(full_command, input_file, output_file):
def log_command(full_command, input_file=None, output_file=None):
'''
Log the given command (a sequence of command/argument strings), along with its input/output file
paths.
'''
logger.debug(
' '.join(full_command)
+ (' < {}'.format(getattr(input_file, 'name', '')) if input_file else '')
+ (' > {}'.format(getattr(output_file, 'name', '')) if output_file else '')
+ (f" < {getattr(input_file, 'name', '')}" if input_file else '')
+ (f" > {getattr(output_file, 'name', '')}" if output_file else '')
)
@ -179,15 +202,14 @@ def execute_command(
):
'''
Execute the given command (a sequence of command/argument strings) and log its output at the
given log level. If output log level is None, instead capture and return the output. (Implies
run_to_completion.) If an open output file object is given, then write stdout to the file and
only log stderr (but only if an output log level is set). If an open input file object is given,
then read stdin from the file. If shell is True, execute the command within a shell. If an extra
environment dict is given, then use it to augment the current environment, and pass the result
into the command. If a working directory is given, use that as the present working directory
when running the command. If a Borg local path is given, and the command matches it (regardless
of arguments), treat exit code 1 as a warning instead of an error. If run to completion is
False, then return the process for the command without executing it to completion.
given log level. If an open output file object is given, then write stdout to the file and only
log stderr. If an open input file object is given, then read stdin from the file. If shell is
True, execute the command within a shell. If an extra environment dict is given, then use it to
augment the current environment, and pass the result into the command. If a working directory is
given, use that as the present working directory when running the command. If a Borg local path
is given, and the command matches it (regardless of arguments), treat exit code 1 as a warning
instead of an error. If run to completion is False, then return the process for the command
without executing it to completion.
Raise subprocess.CalledProcessError if an error occurs while running the command.
'''
@ -196,12 +218,6 @@ def execute_command(
do_not_capture = bool(output_file is DO_NOT_CAPTURE)
command = ' '.join(full_command) if shell else full_command
if output_log_level is None:
output = subprocess.check_output(
command, shell=shell, env=environment, cwd=working_directory
)
return output.decode() if output is not None else None
process = subprocess.Popen(
command,
stdin=input_file,
@ -219,6 +235,42 @@ def execute_command(
)
def execute_command_and_capture_output(
full_command,
capture_stderr=False,
shell=False,
extra_environment=None,
working_directory=None,
):
'''
Execute the given command (a sequence of command/argument strings), capturing and returning its
output (stdout). If capture stderr is True, then capture and return stderr in addition to
stdout. If shell is True, execute the command within a shell. If an extra environment dict is
given, then use it to augment the current environment, and pass the result into the command. If
a working directory is given, use that as the present working directory when running the command.
Raise subprocess.CalledProcessError if an error occurs while running the command.
'''
log_command(full_command)
environment = {**os.environ, **extra_environment} if extra_environment else None
command = ' '.join(full_command) if shell else full_command
try:
output = subprocess.check_output(
command,
stderr=subprocess.STDOUT if capture_stderr else None,
shell=shell,
env=environment,
cwd=working_directory,
)
except subprocess.CalledProcessError as error:
if exit_code_indicates_error(command, error.returncode):
raise
output = error.output
return output.decode() if output is not None else None
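
A hedged usage sketch, mirroring how the MySQL hook below queries for database names (the credentials and SQL are illustrative only):

    show_output = execute_command_and_capture_output(
        ('mysql', '--skip-column-names', '--batch', '--execute', 'show schemas'),
        extra_environment={'MYSQL_PWD': 'example-password'},  # hypothetical credentials
    )
    database_names = show_output.splitlines()
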
def execute_command_with_processes(
full_command,
processes,
@ -236,13 +288,14 @@ def execute_command_with_processes(
run as well. This is useful, for instance, for processes that are streaming output to a named
pipe that the given command is consuming from.
If an open output file object is given, then write stdout to the file and only log stderr (but
only if an output log level is set). If an open input file object is given, then read stdin from
the file. If shell is True, execute the command within a shell. If an extra environment dict is
given, then use it to augment the current environment, and pass the result into the command. If
a working directory is given, use that as the present working directory when running the
command. If a Borg local path is given, then for any matching command or process (regardless of
arguments), treat exit code 1 as a warning instead of an error.
If an open output file object is given, then write stdout to the file and only log stderr. But
if output log level is None, instead suppress logging and return the captured output for (only)
the given command. If an open input file object is given, then read stdin from the file. If
shell is True, execute the command within a shell. If an extra environment dict is given, then
use it to augment the current environment, and pass the result into the command. If a working
directory is given, use that as the present working directory when running the command. If a
Borg local path is given, then for any matching command or process (regardless of arguments),
treat exit code 1 as a warning instead of an error.
Raise subprocess.CalledProcessError if an error occurs while running the command or in the
upstream process.
@ -273,9 +326,12 @@ def execute_command_with_processes(
process.kill()
raise
log_outputs(
captured_outputs = log_outputs(
tuple(processes) + (command_process,),
(input_file, output_file),
output_log_level,
borg_local_path=borg_local_path,
)
if output_log_level is None:
return captured_outputs.get(command_process)


@ -1,5 +1,6 @@
import logging
import os
import re
from borgmatic import execute
@ -9,13 +10,18 @@ logger = logging.getLogger(__name__)
SOFT_FAIL_EXIT_CODE = 75
def interpolate_context(command, context):
def interpolate_context(config_filename, hook_description, command, context):
'''
Given a single hook command and a dict of context names/values, interpolate the values by
"{name}" into the command and return the result.
Given a config filename, a hook description, a single hook command, and a dict of context
names/values, interpolate the values by "{name}" into the command and return the result.
'''
for name, value in context.items():
command = command.replace('{%s}' % name, str(value))
command = command.replace(f'{{{name}}}', str(value))
for unsupported_variable in re.findall(r'{\w+}', command):
logger.warning(
f"{config_filename}: Variable '{unsupported_variable}' is not supported in {hook_description} hook"
)
return command
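
For instance (a hypothetical on_error hook command; '{error}' comes from the caller-supplied context while '{undefined}' does not):

    interpolate_context(
        'config.yaml',
        'on_error',
        'notify-send "borgmatic: {error}" {undefined}',
        {'configuration_filename': 'config.yaml', 'error': 'repository locked'},
    )
    # Returns 'notify-send "borgmatic: repository locked" {undefined}' and warns:
    # "config.yaml: Variable '{undefined}' is not supported in on_error hook"
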
@ -26,35 +32,32 @@ def execute_hook(commands, umask, config_filename, description, dry_run, **conte
a hook description, and whether this is a dry run, run the given commands. Or, don't run them
if this is a dry run.
The context contains optional values interpolated by name into the hook commands. Currently,
this only applies to the on_error hook.
The context contains optional values interpolated by name into the hook commands.
Raise ValueError if the umask cannot be parsed.
Raise subprocess.CalledProcessError if an error occurs in a hook.
'''
if not commands:
logger.debug('{}: No commands to run for {} hook'.format(config_filename, description))
logger.debug(f'{config_filename}: No commands to run for {description} hook')
return
dry_run_label = ' (dry run; not actually running hooks)' if dry_run else ''
context['configuration_filename'] = config_filename
commands = [interpolate_context(command, context) for command in commands]
commands = [
interpolate_context(config_filename, description, command, context) for command in commands
]
if len(commands) == 1:
logger.info(
'{}: Running command for {} hook{}'.format(config_filename, description, dry_run_label)
)
logger.info(f'{config_filename}: Running command for {description} hook{dry_run_label}')
else:
logger.info(
'{}: Running {} commands for {} hook{}'.format(
config_filename, len(commands), description, dry_run_label
)
f'{config_filename}: Running {len(commands)} commands for {description} hook{dry_run_label}',
)
if umask:
parsed_umask = int(str(umask), 8)
logger.debug('{}: Set hook umask to {}'.format(config_filename, oct(parsed_umask)))
logger.debug(f'{config_filename}: Set hook umask to {oct(parsed_umask)}')
original_umask = os.umask(parsed_umask)
else:
original_umask = None
@ -86,9 +89,7 @@ def considered_soft_failure(config_filename, error):
if exit_code == SOFT_FAIL_EXIT_CODE:
logger.info(
'{}: Command hook exited with soft failure exit code ({}); skipping remaining actions'.format(
config_filename, SOFT_FAIL_EXIT_CODE
)
f'{config_filename}: Command hook exited with soft failure exit code ({SOFT_FAIL_EXIT_CODE}); skipping remaining actions',
)
return True


@ -22,23 +22,36 @@ def initialize_monitor(
pass
def ping_monitor(ping_url, config_filename, state, monitoring_log_level, dry_run):
def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run):
'''
Ping the given Cronhub URL, modified with the monitor.State. Use the given configuration
Ping the configured Cronhub URL, modified with the monitor.State. Use the given configuration
filename in any log entries. If this is a dry run, then don't actually ping anything.
'''
dry_run_label = ' (dry run; not actually pinging)' if dry_run else ''
formatted_state = '/{}/'.format(MONITOR_STATE_TO_CRONHUB[state])
ping_url = ping_url.replace('/start/', formatted_state).replace('/ping/', formatted_state)
if state not in MONITOR_STATE_TO_CRONHUB:
logger.debug(
f'{config_filename}: Ignoring unsupported monitoring {state.name.lower()} in Cronhub hook'
)
return
logger.info(
'{}: Pinging Cronhub {}{}'.format(config_filename, state.name.lower(), dry_run_label)
dry_run_label = ' (dry run; not actually pinging)' if dry_run else ''
formatted_state = f'/{MONITOR_STATE_TO_CRONHUB[state]}/'
ping_url = (
hook_config['ping_url']
.replace('/start/', formatted_state)
.replace('/ping/', formatted_state)
)
logger.debug('{}: Using Cronhub ping URL {}'.format(config_filename, ping_url))
logger.info(f'{config_filename}: Pinging Cronhub {state.name.lower()}{dry_run_label}')
logger.debug(f'{config_filename}: Using Cronhub ping URL {ping_url}')
if not dry_run:
logging.getLogger('urllib3').setLevel(logging.ERROR)
requests.get(ping_url)
try:
response = requests.get(ping_url)
if not response.ok:
response.raise_for_status()
except requests.exceptions.RequestException as error:
logger.warning(f'{config_filename}: Cronhub error: {error}')
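
The state substitution is purely textual; for example (UUID hypothetical), assuming MONITOR_STATE_TO_CRONHUB maps monitor.State.FINISH to 'finish':

    formatted_state = '/finish/'
    ('https://cronhub.io/start/1f5e3410'
        .replace('/start/', formatted_state)
        .replace('/ping/', formatted_state))
    # -> 'https://cronhub.io/finish/1f5e3410'
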
def destroy_monitor(


@ -22,22 +22,31 @@ def initialize_monitor(
pass
def ping_monitor(ping_url, config_filename, state, monitoring_log_level, dry_run):
def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run):
'''
Ping the given Cronitor URL, modified with the monitor.State. Use the given configuration
Ping the configured Cronitor URL, modified with the monitor.State. Use the given configuration
filename in any log entries. If this is a dry run, then don't actually ping anything.
'''
dry_run_label = ' (dry run; not actually pinging)' if dry_run else ''
ping_url = '{}/{}'.format(ping_url, MONITOR_STATE_TO_CRONITOR[state])
if state not in MONITOR_STATE_TO_CRONITOR:
logger.debug(
f'{config_filename}: Ignoring unsupported monitoring {state.name.lower()} in Cronitor hook'
)
return
logger.info(
'{}: Pinging Cronitor {}{}'.format(config_filename, state.name.lower(), dry_run_label)
)
logger.debug('{}: Using Cronitor ping URL {}'.format(config_filename, ping_url))
dry_run_label = ' (dry run; not actually pinging)' if dry_run else ''
ping_url = f"{hook_config['ping_url']}/{MONITOR_STATE_TO_CRONITOR[state]}"
logger.info(f'{config_filename}: Pinging Cronitor {state.name.lower()}{dry_run_label}')
logger.debug(f'{config_filename}: Using Cronitor ping URL {ping_url}')
if not dry_run:
logging.getLogger('urllib3').setLevel(logging.ERROR)
requests.get(ping_url)
try:
response = requests.get(ping_url)
if not response.ok:
response.raise_for_status()
except requests.exceptions.RequestException as error:
logger.warning(f'{config_filename}: Cronitor error: {error}')
def destroy_monitor(


@ -1,16 +1,29 @@
import logging
from borgmatic.hooks import cronhub, cronitor, healthchecks, mysql, pagerduty, postgresql
from borgmatic.hooks import (
cronhub,
cronitor,
healthchecks,
mongodb,
mysql,
ntfy,
pagerduty,
postgresql,
sqlite,
)
logger = logging.getLogger(__name__)
HOOK_NAME_TO_MODULE = {
'healthchecks': healthchecks,
'cronitor': cronitor,
'cronhub': cronhub,
'cronitor': cronitor,
'healthchecks': healthchecks,
'mongodb_databases': mongodb,
'mysql_databases': mysql,
'ntfy': ntfy,
'pagerduty': pagerduty,
'postgresql_databases': postgresql,
'mysql_databases': mysql,
'sqlite_databases': sqlite,
}
@ -18,26 +31,21 @@ def call_hook(function_name, hooks, log_prefix, hook_name, *args, **kwargs):
'''
Given the hooks configuration dict and a prefix to use in log entries, call the requested
function of the Python module corresponding to the given hook name. Supply that call with the
configuration for this hook, the log prefix, and any given args and kwargs. Return any return
value.
If the hook name is not present in the hooks configuration, then bail without calling anything.
configuration for this hook (if any), the log prefix, and any given args and kwargs. Return any
return value.
Raise ValueError if the hook name is unknown.
Raise AttributeError if the function name is not found in the module.
Raise anything else that the called function raises.
'''
config = hooks.get(hook_name)
if not config:
logger.debug('{}: No {} hook configured.'.format(log_prefix, hook_name))
return
config = hooks.get(hook_name, {})
try:
module = HOOK_NAME_TO_MODULE[hook_name]
except KeyError:
raise ValueError('Unknown hook name: {}'.format(hook_name))
raise ValueError(f'Unknown hook name: {hook_name}')
logger.debug('{}: Calling {} hook function {}'.format(log_prefix, hook_name, function_name))
logger.debug(f'{log_prefix}: Calling {hook_name} hook function {function_name}')
return getattr(module, function_name)(config, log_prefix, *args, **kwargs)
@ -48,7 +56,7 @@ def call_hooks(function_name, hooks, log_prefix, hook_names, *args, **kwargs):
configuration for that hook, the log prefix, and any given args and kwargs. Collect any return
values into a dict from hook name to return value.
If the hook name is not present in the hooks configuration, then don't call the function for it,
If the hook name is not present in the hooks configuration, then don't call the function for it
and omit it from the return values.
Raise ValueError if the hook name is unknown.
@ -60,3 +68,19 @@ def call_hooks(function_name, hooks, log_prefix, hook_names, *args, **kwargs):
for hook_name in hook_names
if hooks.get(hook_name)
}
def call_hooks_even_if_unconfigured(function_name, hooks, log_prefix, hook_names, *args, **kwargs):
'''
Given the hooks configuration dict and a prefix to use in log entries, call the requested
function of the Python module corresponding to each given hook name. Supply each call with the
configuration for that hook, the log prefix, and any given args and kwargs. Collect any return
values into a dict from hook name to return value.
Raise AttributeError if the function name is not found in the module.
Raise anything else that a called function raises. An error stops calls to subsequent functions.
'''
return {
hook_name: call_hook(function_name, hooks, log_prefix, hook_name, *args, **kwargs)
for hook_name in hook_names
}
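
A hedged sketch of dispatch in action, pinging every monitor hook for the start state (the positional argument order is inferred from the ping_monitor signatures shown elsewhere in this diff):

    import logging

    from borgmatic.hooks import dispatch, monitor

    hooks = {'healthchecks': {'ping_url': 'https://hc-ping.com/example-uuid'}}  # hypothetical

    dispatch.call_hooks(
        'ping_monitor',
        hooks,
        'config.yaml',               # log prefix
        monitor.MONITOR_HOOK_NAMES,  # ('healthchecks', 'cronitor', 'cronhub', 'pagerduty', 'ntfy')
        monitor.State.START,
        logging.INFO,                # monitoring log level
        False,                       # dry run
    )
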


@ -2,11 +2,16 @@ import logging
import os
import shutil
from borgmatic.borg.create import DEFAULT_BORGMATIC_SOURCE_DIRECTORY
from borgmatic.borg.state import DEFAULT_BORGMATIC_SOURCE_DIRECTORY
logger = logging.getLogger(__name__)
DATABASE_HOOK_NAMES = ('postgresql_databases', 'mysql_databases')
DATABASE_HOOK_NAMES = (
'postgresql_databases',
'mysql_databases',
'mongodb_databases',
'sqlite_databases',
)
def make_database_dump_path(borgmatic_source_directory, database_hook_name):
@ -28,7 +33,7 @@ def make_database_dump_filename(dump_path, name, hostname=None):
Raise ValueError if the database name is invalid.
'''
if os.path.sep in name:
raise ValueError('Invalid database name {}'.format(name))
raise ValueError(f'Invalid database name {name}')
return os.path.join(os.path.expanduser(dump_path), hostname or 'localhost', name)
@ -55,9 +60,7 @@ def remove_database_dumps(dump_path, database_type_name, log_prefix, dry_run):
'''
dry_run_label = ' (dry run; not actually removing anything)' if dry_run else ''
logger.info(
'{}: Removing {} database dumps{}'.format(log_prefix, database_type_name, dry_run_label)
)
logger.debug(f'{log_prefix}: Removing {database_type_name} database dumps{dry_run_label}')
expanded_path = os.path.expanduser(dump_path)
@ -73,4 +76,4 @@ def convert_glob_patterns_to_borg_patterns(patterns):
Convert a sequence of shell glob patterns like "/etc/*" to the corresponding Borg archive
patterns like "sh:etc/*".
'''
return ['sh:{}'.format(pattern.lstrip(os.path.sep)) for pattern in patterns]
return [f'sh:{pattern.lstrip(os.path.sep)}' for pattern in patterns]


@ -10,16 +10,18 @@ MONITOR_STATE_TO_HEALTHCHECKS = {
monitor.State.START: 'start',
monitor.State.FINISH: None, # Healthchecks doesn't append to the URL for the finished state.
monitor.State.FAIL: 'fail',
monitor.State.LOG: 'log',
}
PAYLOAD_TRUNCATION_INDICATOR = '...\n'
PAYLOAD_LIMIT_BYTES = 10 * 1024 - len(PAYLOAD_TRUNCATION_INDICATOR)
DEFAULT_PING_BODY_LIMIT_BYTES = 100000
class Forgetful_buffering_handler(logging.Handler):
'''
A buffering log handler that stores log messages in memory, and throws away messages (oldest
first) once a particular capacity in bytes is reached.
first) once a particular capacity in bytes is reached. But if the given byte capacity is zero,
don't throw away any messages.
'''
def __init__(self, byte_capacity, log_level):
@ -36,6 +38,9 @@ class Forgetful_buffering_handler(logging.Handler):
self.byte_count += len(message)
self.buffer.append(message)
if not self.byte_capacity:
return
while self.byte_count > self.byte_capacity and self.buffer:
self.byte_count -= len(self.buffer[0])
self.buffer.pop(0)
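
A quick demonstration of the eviction behavior (byte counts depend on the formatted message sizes, so the threshold here is illustrative):

    import logging

    handler = Forgetful_buffering_handler(byte_capacity=16, log_level=logging.INFO)
    logging.getLogger('demo').addHandler(handler)

    for message in ('first', 'second', 'third'):
        logging.getLogger('demo').warning(message)

    # Once the buffered messages exceed 16 bytes, the oldest are evicted first;
    # with byte_capacity=0 (the new escape hatch), nothing is ever evicted.
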
@ -65,51 +70,70 @@ def format_buffered_logs_for_payload():
return payload
def initialize_monitor(
ping_url_or_uuid, config_filename, monitoring_log_level, dry_run
): # pragma: no cover
def initialize_monitor(hook_config, config_filename, monitoring_log_level, dry_run):
'''
Add a handler to the root logger that stores in memory the most recent logs emitted. That
way, we can send them all to Healthchecks upon a finish or failure state.
Add a handler to the root logger that stores in memory the most recent logs emitted. That way,
we can send them all to Healthchecks upon a finish or failure state. But skip this if the
"send_logs" option is false.
'''
if hook_config.get('send_logs') is False:
return
ping_body_limit = max(
hook_config.get('ping_body_limit', DEFAULT_PING_BODY_LIMIT_BYTES)
- len(PAYLOAD_TRUNCATION_INDICATOR),
0,
)
logging.getLogger().addHandler(
Forgetful_buffering_handler(PAYLOAD_LIMIT_BYTES, monitoring_log_level)
Forgetful_buffering_handler(ping_body_limit, monitoring_log_level)
)
def ping_monitor(ping_url_or_uuid, config_filename, state, monitoring_log_level, dry_run):
def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run):
'''
Ping the given Healthchecks URL or UUID, modified with the monitor.State. Use the given
Ping the configured Healthchecks URL or UUID, modified with the monitor.State. Use the given
configuration filename in any log entries, and log to Healthchecks with the given log level.
If this is a dry run, then don't actually ping anything.
'''
ping_url = (
ping_url_or_uuid
if ping_url_or_uuid.startswith('http')
else 'https://hc-ping.com/{}'.format(ping_url_or_uuid)
hook_config['ping_url']
if hook_config['ping_url'].startswith('http')
else f"https://hc-ping.com/{hook_config['ping_url']}"
)
dry_run_label = ' (dry run; not actually pinging)' if dry_run else ''
if 'states' in hook_config and state.name.lower() not in hook_config['states']:
logger.info(
f'{config_filename}: Skipping Healthchecks {state.name.lower()} ping due to configured states'
)
return
healthchecks_state = MONITOR_STATE_TO_HEALTHCHECKS.get(state)
if healthchecks_state:
ping_url = '{}/{}'.format(ping_url, healthchecks_state)
ping_url = f'{ping_url}/{healthchecks_state}'
logger.info(
'{}: Pinging Healthchecks {}{}'.format(config_filename, state.name.lower(), dry_run_label)
)
logger.debug('{}: Using Healthchecks ping URL {}'.format(config_filename, ping_url))
logger.info(f'{config_filename}: Pinging Healthchecks {state.name.lower()}{dry_run_label}')
logger.debug(f'{config_filename}: Using Healthchecks ping URL {ping_url}')
if state in (monitor.State.FINISH, monitor.State.FAIL):
if state in (monitor.State.FINISH, monitor.State.FAIL, monitor.State.LOG):
payload = format_buffered_logs_for_payload()
else:
payload = ''
if not dry_run:
logging.getLogger('urllib3').setLevel(logging.ERROR)
requests.post(ping_url, data=payload.encode('utf-8'))
try:
response = requests.post(
ping_url, data=payload.encode('utf-8'), verify=hook_config.get('verify_tls', True)
)
if not response.ok:
response.raise_for_status()
except requests.exceptions.RequestException as error:
logger.warning(f'{config_filename}: Healthchecks error: {error}')
def destroy_monitor(ping_url_or_uuid, config_filename, monitoring_log_level, dry_run):
def destroy_monitor(hook_config, config_filename, monitoring_log_level, dry_run):
'''
Remove the monitor handler that was added to the root logger. This prevents the handler from
getting reused by other instances of this monitor.

borgmatic/hooks/mongodb.py (new file, 182 lines)

@ -0,0 +1,182 @@
import logging
from borgmatic.execute import execute_command, execute_command_with_processes
from borgmatic.hooks import dump
logger = logging.getLogger(__name__)
def make_dump_path(location_config): # pragma: no cover
'''
Make the dump path from the given location configuration and the name of this hook.
'''
return dump.make_database_dump_path(
location_config.get('borgmatic_source_directory'), 'mongodb_databases'
)
def dump_databases(databases, log_prefix, location_config, dry_run):
'''
Dump the given MongoDB databases to a named pipe. The databases are supplied as a sequence of
dicts, one dict describing each database as per the configuration schema. Use the given log
prefix in any log entries. Use the given location configuration dict to construct the
destination path.
Return a sequence of subprocess.Popen instances for the dump processes ready to spew to a named
pipe. But if this is a dry run, then don't actually dump anything and return an empty sequence.
'''
dry_run_label = ' (dry run; not actually dumping anything)' if dry_run else ''
logger.info(f'{log_prefix}: Dumping MongoDB databases{dry_run_label}')
processes = []
for database in databases:
name = database['name']
dump_filename = dump.make_database_dump_filename(
make_dump_path(location_config), name, database.get('hostname')
)
dump_format = database.get('format', 'archive')
logger.debug(
f'{log_prefix}: Dumping MongoDB database {name} to {dump_filename}{dry_run_label}',
)
if dry_run:
continue
command = build_dump_command(database, dump_filename, dump_format)
if dump_format == 'directory':
dump.create_parent_directory_for_dump(dump_filename)
execute_command(command, shell=True)
else:
dump.create_named_pipe_for_dump(dump_filename)
processes.append(execute_command(command, shell=True, run_to_completion=False))
return processes
def build_dump_command(database, dump_filename, dump_format):
'''
Return the mongodump command from a single database configuration.
'''
all_databases = database['name'] == 'all'
command = ['mongodump']
if dump_format == 'directory':
command.extend(('--out', dump_filename))
if 'hostname' in database:
command.extend(('--host', database['hostname']))
if 'port' in database:
command.extend(('--port', str(database['port'])))
if 'username' in database:
command.extend(('--username', database['username']))
if 'password' in database:
command.extend(('--password', database['password']))
if 'authentication_database' in database:
command.extend(('--authenticationDatabase', database['authentication_database']))
if not all_databases:
command.extend(('--db', database['name']))
if 'options' in database:
command.extend(database['options'].split(' '))
if dump_format != 'directory':
command.extend(('--archive', '>', dump_filename))
return command
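
For a single named database dumped in the default archive format, the resulting command looks like this (paths hypothetical); the trailing '>' token works because the command is run with shell=True above:

    build_dump_command(
        {'name': 'orders', 'hostname': 'db.example.org', 'port': 27017},
        '/root/.borgmatic/mongodb_databases/db.example.org/orders',
        'archive',
    )
    # -> ['mongodump', '--host', 'db.example.org', '--port', '27017',
    #     '--db', 'orders', '--archive', '>',
    #     '/root/.borgmatic/mongodb_databases/db.example.org/orders']
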
def remove_database_dumps(databases, log_prefix, location_config, dry_run): # pragma: no cover
'''
Remove all database dump files for this hook regardless of the given databases. Use the log
prefix in any log entries. Use the given location configuration dict to construct the
destination path. If this is a dry run, then don't actually remove anything.
'''
dump.remove_database_dumps(make_dump_path(location_config), 'MongoDB', log_prefix, dry_run)
def make_database_dump_pattern(
databases, log_prefix, location_config, name=None
): # pragma: no cover
'''
Given a sequence of configuration dicts, a prefix to log with, a location configuration dict,
and a database name to match, return the corresponding glob patterns to match the database dump
in an archive.
'''
return dump.make_database_dump_filename(make_dump_path(location_config), name, hostname='*')
def restore_database_dump(
database_config, log_prefix, location_config, dry_run, extract_process, connection_params
):
'''
Restore the given MongoDB database from an extract stream. The database is supplied as a
one-element sequence containing a dict describing the database, as per the configuration schema.
Use the given log prefix in any log entries. If this is a dry run, then don't actually restore
anything. Trigger the given active extract process (an instance of subprocess.Popen) to produce
output to consume.
If the extract process is None, then restore the dump from the filesystem rather than from an
extract stream.
'''
dry_run_label = ' (dry run; not actually restoring anything)' if dry_run else ''
if len(database_config) != 1:
raise ValueError('The database configuration value is invalid')
database = database_config[0]
dump_filename = dump.make_database_dump_filename(
make_dump_path(location_config), database['name'], database.get('hostname')
)
restore_command = build_restore_command(
extract_process, database, dump_filename, connection_params
)
logger.debug(f"{log_prefix}: Restoring MongoDB database {database['name']}{dry_run_label}")
if dry_run:
return
# Don't give Borg local path so as to error on warnings, as "borg extract" only gives a warning
# if the restore paths don't exist in the archive.
execute_command_with_processes(
restore_command,
[extract_process] if extract_process else [],
output_log_level=logging.DEBUG,
input_file=extract_process.stdout if extract_process else None,
)
def build_restore_command(extract_process, database, dump_filename, connection_params):
'''
Return the mongorestore command from a single database configuration.
'''
hostname = connection_params['hostname'] or database.get(
'restore_hostname', database.get('hostname')
)
port = str(connection_params['port'] or database.get('restore_port', database.get('port', '')))
username = connection_params['username'] or database.get(
'restore_username', database.get('username')
)
password = connection_params['password'] or database.get(
'restore_password', database.get('password')
)
command = ['mongorestore']
if extract_process:
command.append('--archive')
else:
command.extend(('--dir', dump_filename))
if database['name'] != 'all':
command.extend(('--drop', '--db', database['name']))
if hostname:
command.extend(('--host', hostname))
if port:
command.extend(('--port', str(port)))
if username:
command.extend(('--username', username))
if password:
command.extend(('--password', password))
if 'authentication_database' in database:
command.extend(('--authenticationDatabase', database['authentication_database']))
if 'restore_options' in database:
command.extend(database['restore_options'].split(' '))
if database['schemas']:
for schema in database['schemas']:
command.extend(('--nsInclude', schema))
return command
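
The restore_* fallbacks mean an explicit connection parameter wins, then any restore_-prefixed option, then the dump-time option; for example (paths hypothetical):

    build_restore_command(
        extract_process=None,
        database={
            'name': 'orders',
            'hostname': 'db.example.org',
            'restore_hostname': 'replica.example.org',
            'schemas': None,
        },
        dump_filename='/root/.borgmatic/mongodb_databases/db.example.org/orders',
        connection_params={'hostname': None, 'port': None, 'username': None, 'password': None},
    )
    # -> ['mongorestore', '--dir', '/root/.borgmatic/mongodb_databases/db.example.org/orders',
    #     '--drop', '--db', 'orders', '--host', 'replica.example.org']
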


@ -1,9 +1,10 @@
from enum import Enum
MONITOR_HOOK_NAMES = ('healthchecks', 'cronitor', 'cronhub', 'pagerduty')
MONITOR_HOOK_NAMES = ('healthchecks', 'cronitor', 'cronhub', 'pagerduty', 'ntfy')
class State(Enum):
START = 1
FINISH = 2
FAIL = 3
LOG = 4


@ -1,6 +1,12 @@
import copy
import logging
import os
from borgmatic.execute import execute_command, execute_command_with_processes
from borgmatic.execute import (
execute_command,
execute_command_and_capture_output,
execute_command_with_processes,
)
from borgmatic.hooks import dump
logger = logging.getLogger(__name__)
@ -18,19 +24,20 @@ def make_dump_path(location_config): # pragma: no cover
SYSTEM_DATABASE_NAMES = ('information_schema', 'mysql', 'performance_schema', 'sys')
def database_names_to_dump(database, extra_environment, log_prefix, dry_run_label):
def database_names_to_dump(database, extra_environment, log_prefix, dry_run):
'''
Given a requested database name, return the corresponding sequence of database names to dump.
Given a requested database config, return the corresponding sequence of database names to dump.
In the case of "all", query for the names of databases on the configured host and return them,
excluding any system databases that will cause problems during restore.
'''
requested_name = database['name']
if requested_name != 'all':
return (requested_name,)
if database['name'] != 'all':
return (database['name'],)
if dry_run:
return ()
show_command = (
('mysql',)
+ (tuple(database['list_options'].split(' ')) if 'list_options' in database else ())
+ (('--host', database['hostname']) if 'hostname' in database else ())
+ (('--port', str(database['port'])) if 'port' in database else ())
+ (('--protocol', 'tcp') if 'hostname' in database or 'port' in database else ())
@ -38,11 +45,9 @@ def database_names_to_dump(database, extra_environment, log_prefix, dry_run_labe
+ ('--skip-column-names', '--batch')
+ ('--execute', 'show schemas')
)
logger.debug(
'{}: Querying for "all" MySQL databases to dump{}'.format(log_prefix, dry_run_label)
)
show_output = execute_command(
show_command, output_log_level=None, extra_environment=extra_environment
logger.debug(f'{log_prefix}: Querying for "all" MySQL databases to dump')
show_output = execute_command_and_capture_output(
show_command, extra_environment=extra_environment
)
return tuple(
@ -52,6 +57,55 @@ def database_names_to_dump(database, extra_environment, log_prefix, dry_run_labe
)
def execute_dump_command(
database, log_prefix, dump_path, database_names, extra_environment, dry_run, dry_run_label
):
'''
Kick off a dump for the given MySQL/MariaDB database (provided as a configuration dict) to a
named pipe constructed from the given dump path and database names. Use the given log prefix in
any log entries.
Return a subprocess.Popen instance for the dump process ready to spew to a named pipe. But if
this is a dry run, then don't actually dump anything and return None.
'''
database_name = database['name']
dump_filename = dump.make_database_dump_filename(
dump_path, database['name'], database.get('hostname')
)
if os.path.exists(dump_filename):
logger.warning(
f'{log_prefix}: Skipping duplicate dump of MySQL database "{database_name}" to {dump_filename}'
)
return None
dump_command = (
('mysqldump',)
+ (tuple(database['options'].split(' ')) if 'options' in database else ())
+ (('--add-drop-database',) if database.get('add_drop_database', True) else ())
+ (('--host', database['hostname']) if 'hostname' in database else ())
+ (('--port', str(database['port'])) if 'port' in database else ())
+ (('--protocol', 'tcp') if 'hostname' in database or 'port' in database else ())
+ (('--user', database['username']) if 'username' in database else ())
+ ('--databases',)
+ database_names
+ ('--result-file', dump_filename)
)
logger.debug(
f'{log_prefix}: Dumping MySQL database "{database_name}" to {dump_filename}{dry_run_label}'
)
if dry_run:
return None
dump.create_named_pipe_for_dump(dump_filename)
return execute_command(
dump_command,
extra_environment=extra_environment,
run_to_completion=False,
)
def dump_databases(databases, log_prefix, location_config, dry_run):
'''
Dump the given MySQL/MariaDB databases to a named pipe. The databases are supplied as a sequence
@ -65,55 +119,50 @@ def dump_databases(databases, log_prefix, location_config, dry_run):
dry_run_label = ' (dry run; not actually dumping anything)' if dry_run else ''
processes = []
logger.info('{}: Dumping MySQL databases{}'.format(log_prefix, dry_run_label))
logger.info(f'{log_prefix}: Dumping MySQL databases{dry_run_label}')
for database in databases:
requested_name = database['name']
dump_filename = dump.make_database_dump_filename(
make_dump_path(location_config), requested_name, database.get('hostname')
)
dump_path = make_dump_path(location_config)
extra_environment = {'MYSQL_PWD': database['password']} if 'password' in database else None
dump_database_names = database_names_to_dump(
database, extra_environment, log_prefix, dry_run_label
database, extra_environment, log_prefix, dry_run
)
if not dump_database_names:
if dry_run:
continue
raise ValueError('Cannot find any MySQL databases to dump.')
dump_command = (
('mysqldump',)
+ ('--add-drop-database',)
+ (('--host', database['hostname']) if 'hostname' in database else ())
+ (('--port', str(database['port'])) if 'port' in database else ())
+ (('--protocol', 'tcp') if 'hostname' in database or 'port' in database else ())
+ (('--user', database['username']) if 'username' in database else ())
+ (tuple(database['options'].split(' ')) if 'options' in database else ())
+ ('--databases',)
+ dump_database_names
# Use shell redirection rather than execute_command(output_file=open(...)) to prevent
# the open() call on a named pipe from hanging the main borgmatic process.
+ ('>', dump_filename)
)
logger.debug(
'{}: Dumping MySQL database {} to {}{}'.format(
log_prefix, requested_name, dump_filename, dry_run_label
if database['name'] == 'all' and database.get('format'):
for dump_name in dump_database_names:
renamed_database = copy.copy(database)
renamed_database['name'] = dump_name
processes.append(
execute_dump_command(
renamed_database,
log_prefix,
dump_path,
(dump_name,),
extra_environment,
dry_run,
dry_run_label,
)
)
else:
processes.append(
execute_dump_command(
database,
log_prefix,
dump_path,
dump_database_names,
extra_environment,
dry_run,
dry_run_label,
)
)
)
if dry_run:
continue
dump.create_named_pipe_for_dump(dump_filename)
processes.append(
execute_command(
dump_command,
shell=True,
extra_environment=extra_environment,
run_to_completion=False,
)
)
return processes
return [process for process in processes if process]
def remove_database_dumps(databases, log_prefix, location_config, dry_run): # pragma: no cover
@ -136,7 +185,9 @@ def make_database_dump_pattern(
return dump.make_database_dump_filename(make_dump_path(location_config), name, hostname='*')
def restore_database_dump(database_config, log_prefix, location_config, dry_run, extract_process):
def restore_database_dump(
database_config, log_prefix, location_config, dry_run, extract_process, connection_params
):
'''
Restore the given MySQL/MariaDB database from an extract stream. The database is supplied as a
one-element sequence containing a dict describing the database, as per the configuration schema.
@ -150,26 +201,38 @@ def restore_database_dump(database_config, log_prefix, location_config, dry_run,
raise ValueError('The database configuration value is invalid')
database = database_config[0]
restore_command = (
('mysql', '--batch', '--verbose')
+ (('--host', database['hostname']) if 'hostname' in database else ())
+ (('--port', str(database['port'])) if 'port' in database else ())
+ (('--protocol', 'tcp') if 'hostname' in database or 'port' in database else ())
+ (('--user', database['username']) if 'username' in database else ())
)
extra_environment = {'MYSQL_PWD': database['password']} if 'password' in database else None
logger.debug(
'{}: Restoring MySQL database {}{}'.format(log_prefix, database['name'], dry_run_label)
hostname = connection_params['hostname'] or database.get(
'restore_hostname', database.get('hostname')
)
port = str(connection_params['port'] or database.get('restore_port', database.get('port', '')))
username = connection_params['username'] or database.get(
'restore_username', database.get('username')
)
password = connection_params['password'] or database.get(
'restore_password', database.get('password')
)
restore_command = (
('mysql', '--batch')
+ (tuple(database['restore_options'].split(' ')) if 'restore_options' in database else ())
+ (('--host', hostname) if hostname else ())
+ (('--port', str(port)) if port else ())
+ (('--protocol', 'tcp') if hostname or port else ())
+ (('--user', username) if username else ())
)
extra_environment = {'MYSQL_PWD': password} if password else None
logger.debug(f"{log_prefix}: Restoring MySQL database {database['name']}{dry_run_label}")
if dry_run:
return
# Don't give Borg local path so as to error on warnings, as "borg extract" only gives a warning
# if the restore paths don't exist in the archive.
execute_command_with_processes(
restore_command,
[extract_process],
output_log_level=logging.DEBUG,
input_file=extract_process.stdout,
extra_environment=extra_environment,
borg_local_path=location_config.get('local_path', 'borg'),
)

borgmatic/hooks/ntfy.py (new file, 83 lines)

@ -0,0 +1,83 @@
import logging
import requests
logger = logging.getLogger(__name__)
def initialize_monitor(
ping_url, config_filename, monitoring_log_level, dry_run
): # pragma: no cover
'''
No initialization is necessary for this monitor.
'''
pass
def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run):
'''
Ping the configured Ntfy topic. Use the given configuration filename in any log entries.
If this is a dry run, then don't actually ping anything.
'''
run_states = hook_config.get('states', ['fail'])
if state.name.lower() in run_states:
dry_run_label = ' (dry run; not actually pinging)' if dry_run else ''
state_config = hook_config.get(
state.name.lower(),
{
'title': f'A Borgmatic {state.name} event happened',
'message': f'A Borgmatic {state.name} event happened',
'priority': 'default',
'tags': 'borgmatic',
},
)
base_url = hook_config.get('server', 'https://ntfy.sh')
topic = hook_config.get('topic')
logger.info(f'{config_filename}: Pinging ntfy topic {topic}{dry_run_label}')
logger.debug(f'{config_filename}: Using Ntfy ping URL {base_url}/{topic}')
headers = {
'X-Title': state_config.get('title'),
'X-Message': state_config.get('message'),
'X-Priority': state_config.get('priority'),
'X-Tags': state_config.get('tags'),
}
username = hook_config.get('username')
password = hook_config.get('password')
auth = None
if username and password:
auth = requests.auth.HTTPBasicAuth(username, password)
logger.info(f'{config_filename}: Using basic auth with user {username} for ntfy')
elif username is not None:
logger.warning(
f'{config_filename}: Password missing for ntfy authentication, defaulting to no auth'
)
elif password is not None:
logger.warning(
f'{config_filename}: Username missing for ntfy authentication, defaulting to no auth'
)
if not dry_run:
logging.getLogger('urllib3').setLevel(logging.ERROR)
try:
response = requests.post(f'{base_url}/{topic}', headers=headers, auth=auth)
if not response.ok:
response.raise_for_status()
except requests.exceptions.RequestException as error:
logger.warning(f'{config_filename}: ntfy error: {error}')
def destroy_monitor(
ping_url_or_uuid, config_filename, monitoring_log_level, dry_run
): # pragma: no cover
'''
No destruction is necessary for this monitor.
'''
pass


@ -21,22 +21,20 @@ def initialize_monitor(
pass
def ping_monitor(integration_key, config_filename, state, monitoring_log_level, dry_run):
def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run):
'''
If this is an error state, create a PagerDuty event with the given integration key. Use the
given configuration filename in any log entries. If this is a dry run, then don't actually
If this is an error state, create a PagerDuty event with the configured integration key. Use
the given configuration filename in any log entries. If this is a dry run, then don't actually
create an event.
'''
if state != monitor.State.FAIL:
logger.debug(
'{}: Ignoring unsupported monitoring {} in PagerDuty hook'.format(
config_filename, state.name.lower()
)
f'{config_filename}: Ignoring unsupported monitoring {state.name.lower()} in PagerDuty hook',
)
return
dry_run_label = ' (dry run; not actually sending)' if dry_run else ''
logger.info('{}: Sending failure event to PagerDuty {}'.format(config_filename, dry_run_label))
logger.info(f'{config_filename}: Sending failure event to PagerDuty {dry_run_label}')
if dry_run:
return
@ -47,10 +45,10 @@ def ping_monitor(integration_key, config_filename, state, monitoring_log_level,
)
payload = json.dumps(
{
'routing_key': integration_key,
'routing_key': hook_config['integration_key'],
'event_action': 'trigger',
'payload': {
'summary': 'backup failed on {}'.format(hostname),
'summary': f'backup failed on {hostname}',
'severity': 'error',
'source': hostname,
'timestamp': local_timestamp,
@ -65,10 +63,15 @@ def ping_monitor(integration_key, config_filename, state, monitoring_log_level,
},
}
)
logger.debug('{}: Using PagerDuty payload: {}'.format(config_filename, payload))
logger.debug(f'{config_filename}: Using PagerDuty payload: {payload}')
logging.getLogger('urllib3').setLevel(logging.ERROR)
requests.post(EVENTS_API_URL, data=payload.encode('utf-8'))
try:
response = requests.post(EVENTS_API_URL, data=payload.encode('utf-8'))
if not response.ok:
response.raise_for_status()
except requests.exceptions.RequestException as error:
logger.warning(f'{config_filename}: PagerDuty error: {error}')
def destroy_monitor(


@ -1,6 +1,14 @@
import csv
import itertools
import logging
import os
import shlex
from borgmatic.execute import execute_command, execute_command_with_processes
from borgmatic.execute import (
execute_command,
execute_command_and_capture_output,
execute_command_with_processes,
)
from borgmatic.hooks import dump
logger = logging.getLogger(__name__)
@ -15,13 +23,23 @@ def make_dump_path(location_config): # pragma: no cover
)
def make_extra_environment(database):
def make_extra_environment(database, restore_connection_params=None):
'''
Make the extra_environment dict from the given database configuration.
If restore connection params are given, this is for a restore operation.
'''
extra = dict()
if 'password' in database:
extra['PGPASSWORD'] = database['password']
try:
if restore_connection_params:
extra['PGPASSWORD'] = restore_connection_params.get('password') or database.get(
'restore_password', database['password']
)
else:
extra['PGPASSWORD'] = database['password']
except (AttributeError, KeyError):
pass
extra['PGSSLMODE'] = database.get('ssl_mode', 'disable')
if 'ssl_cert' in database:
extra['PGSSLCERT'] = database['ssl_cert']
@ -34,6 +52,46 @@ def make_extra_environment(database):
return extra
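
Two illustrative calls (passwords hypothetical): a dump-time environment, and a restore-time one where restore_password takes precedence:

    make_extra_environment({'password': 'trustsome1', 'ssl_mode': 'require'})
    # -> {'PGPASSWORD': 'trustsome1', 'PGSSLMODE': 'require'}

    make_extra_environment(
        {'password': 'trustsome1', 'restore_password': 'restore-pw'},
        restore_connection_params={'hostname': None, 'port': None,
                                   'username': None, 'password': None},
    )
    # -> {'PGPASSWORD': 'restore-pw', 'PGSSLMODE': 'disable'}
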
EXCLUDED_DATABASE_NAMES = ('template0', 'template1')
def database_names_to_dump(database, extra_environment, log_prefix, dry_run):
'''
Given a requested database config, return the corresponding sequence of database names to dump.
In the case of "all" when a database format is given, query for the names of databases on the
configured host and return them. For "all" without a database format, just return a sequence
containing "all".
'''
requested_name = database['name']
if requested_name != 'all':
return (requested_name,)
if not database.get('format'):
return ('all',)
if dry_run:
return ()
psql_command = shlex.split(database.get('psql_command') or 'psql')
list_command = (
tuple(psql_command)
+ ('--list', '--no-password', '--no-psqlrc', '--csv', '--tuples-only')
+ (('--host', database['hostname']) if 'hostname' in database else ())
+ (('--port', str(database['port'])) if 'port' in database else ())
+ (('--username', database['username']) if 'username' in database else ())
+ (tuple(database['list_options'].split(' ')) if 'list_options' in database else ())
)
logger.debug(f'{log_prefix}: Querying for "all" PostgreSQL databases to dump')
list_output = execute_command_and_capture_output(
list_command, extra_environment=extra_environment
)
return tuple(
row[0]
for row in csv.reader(list_output.splitlines(), delimiter=',', quotechar='"')
if row[0] not in EXCLUDED_DATABASE_NAMES
)
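
The CSV parsing at the end is straightforward; a self-contained sketch with hypothetical psql --csv output:

    import csv

    list_output = 'postgres,alice,UTF8\ntemplate0,alice,UTF8\nmydb,alice,UTF8'
    names = tuple(
        row[0]
        for row in csv.reader(list_output.splitlines(), delimiter=',', quotechar='"')
        if row[0] not in ('template0', 'template1')  # EXCLUDED_DATABASE_NAMES
    )
    # names == ('postgres', 'mydb')
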
def dump_databases(databases, log_prefix, location_config, dry_run):
'''
Dump the given PostgreSQL databases to a named pipe. The databases are supplied as a sequence of
@ -43,58 +101,84 @@ def dump_databases(databases, log_prefix, location_config, dry_run):
Return a sequence of subprocess.Popen instances for the dump processes ready to spew to a named
pipe. But if this is a dry run, then don't actually dump anything and return an empty sequence.
Raise ValueError if the databases to dump cannot be determined.
'''
dry_run_label = ' (dry run; not actually dumping anything)' if dry_run else ''
processes = []
logger.info('{}: Dumping PostgreSQL databases{}'.format(log_prefix, dry_run_label))
logger.info(f'{log_prefix}: Dumping PostgreSQL databases{dry_run_label}')
for database in databases:
name = database['name']
dump_filename = dump.make_database_dump_filename(
make_dump_path(location_config), name, database.get('hostname')
)
all_databases = bool(name == 'all')
dump_format = database.get('format', 'custom')
command = (
(
'pg_dumpall' if all_databases else 'pg_dump',
'--no-password',
'--clean',
'--if-exists',
)
+ (('--host', database['hostname']) if 'hostname' in database else ())
+ (('--port', str(database['port'])) if 'port' in database else ())
+ (('--username', database['username']) if 'username' in database else ())
+ (() if all_databases else ('--format', dump_format))
+ (('--file', dump_filename) if dump_format == 'directory' else ())
+ (tuple(database['options'].split(' ')) if 'options' in database else ())
+ (() if all_databases else (name,))
# Use shell redirection rather than the --file flag to sidestep synchronization issues
# when pg_dump/pg_dumpall tries to write to a named pipe. But for the directory dump
# format in particular, a named destination is required, and redirection doesn't work.
+ (('>', dump_filename) if dump_format != 'directory' else ())
)
extra_environment = make_extra_environment(database)
logger.debug(
'{}: Dumping PostgreSQL database {} to {}{}'.format(
log_prefix, name, dump_filename, dry_run_label
)
dump_path = make_dump_path(location_config)
dump_database_names = database_names_to_dump(
database, extra_environment, log_prefix, dry_run
)
if dry_run:
continue
if dump_format == 'directory':
dump.create_parent_directory_for_dump(dump_filename)
else:
dump.create_named_pipe_for_dump(dump_filename)
if not dump_database_names:
if dry_run:
continue
processes.append(
execute_command(
command, shell=True, extra_environment=extra_environment, run_to_completion=False
raise ValueError('Cannot find any PostgreSQL databases to dump.')
for database_name in dump_database_names:
dump_format = database.get('format', None if database_name == 'all' else 'custom')
default_dump_command = 'pg_dumpall' if database_name == 'all' else 'pg_dump'
dump_command = database.get('pg_dump_command') or default_dump_command
dump_filename = dump.make_database_dump_filename(
dump_path, database_name, database.get('hostname')
)
)
if os.path.exists(dump_filename):
logger.warning(
f'{log_prefix}: Skipping duplicate dump of PostgreSQL database "{database_name}" to {dump_filename}'
)
continue
command = (
(
dump_command,
'--no-password',
'--clean',
'--if-exists',
)
+ (('--host', database['hostname']) if 'hostname' in database else ())
+ (('--port', str(database['port'])) if 'port' in database else ())
+ (('--username', database['username']) if 'username' in database else ())
+ (('--no-owner',) if database.get('no_owner', False) else ())
+ (('--format', dump_format) if dump_format else ())
+ (('--file', dump_filename) if dump_format == 'directory' else ())
+ (tuple(database['options'].split(' ')) if 'options' in database else ())
+ (() if database_name == 'all' else (database_name,))
# Use shell redirection rather than the --file flag to sidestep synchronization issues
# when pg_dump/pg_dumpall tries to write to a named pipe. But for the directory dump
# format in particular, a named destination is required, and redirection doesn't work.
+ (('>', dump_filename) if dump_format != 'directory' else ())
)
logger.debug(
f'{log_prefix}: Dumping PostgreSQL database "{database_name}" to {dump_filename}{dry_run_label}'
)
if dry_run:
continue
if dump_format == 'directory':
dump.create_parent_directory_for_dump(dump_filename)
execute_command(
command,
shell=True,
extra_environment=extra_environment,
)
else:
dump.create_named_pipe_for_dump(dump_filename)
processes.append(
execute_command(
command,
shell=True,
extra_environment=extra_environment,
run_to_completion=False,
)
)
return processes
@ -119,7 +203,9 @@ def make_database_dump_pattern(
return dump.make_database_dump_filename(make_dump_path(location_config), name, hostname='*')
def restore_database_dump(database_config, log_prefix, location_config, dry_run, extract_process):
def restore_database_dump(
database_config, log_prefix, location_config, dry_run, extract_process, connection_params
):
'''
Restore the given PostgreSQL database from an extract stream. The database is supplied as a
one-element sequence containing a dict describing the database, as per the configuration schema.
@ -129,6 +215,9 @@ def restore_database_dump(database_config, log_prefix, location_config, dry_run,
If the extract process is None, then restore the dump from the filesystem rather than from an
extract stream.
Use the given connection parameters to connect to the database. The connection parameters are
hostname, port, username, and password.
'''
dry_run_label = ' (dry run; not actually restoring anything)' if dry_run else ''
@ -136,44 +225,65 @@ def restore_database_dump(database_config, log_prefix, location_config, dry_run,
raise ValueError('The database configuration value is invalid')
database = database_config[0]
hostname = connection_params['hostname'] or database.get(
'restore_hostname', database.get('hostname')
)
port = str(connection_params['port'] or database.get('restore_port', database.get('port', '')))
username = connection_params['username'] or database.get(
'restore_username', database.get('username')
)
all_databases = bool(database['name'] == 'all')
dump_filename = dump.make_database_dump_filename(
make_dump_path(location_config), database['name'], database.get('hostname')
)
psql_command = shlex.split(database.get('psql_command') or 'psql')
analyze_command = (
tuple(psql_command)
+ ('--no-password', '--no-psqlrc', '--quiet')
+ (('--host', hostname) if hostname else ())
+ (('--port', port) if port else ())
+ (('--username', username) if username else ())
+ (('--dbname', database['name']) if not all_databases else ())
+ (tuple(database['analyze_options'].split(' ')) if 'analyze_options' in database else ())
+ ('--command', 'ANALYZE')
)
use_psql_command = all_databases or database.get('format') == 'plain'
pg_restore_command = shlex.split(database.get('pg_restore_command') or 'pg_restore')
restore_command = (
tuple(psql_command if use_psql_command else pg_restore_command)
+ ('--no-password',)
+ (('--no-psqlrc',) if use_psql_command else ('--if-exists', '--exit-on-error', '--clean'))
+ (('--dbname', database['name']) if not all_databases else ())
+ (('--host', hostname) if hostname else ())
+ (('--port', port) if port else ())
+ (('--username', username) if username else ())
+ (('--no-owner',) if database.get('no_owner', False) else ())
+ (tuple(database['restore_options'].split(' ')) if 'restore_options' in database else ())
+ (() if extract_process else (dump_filename,))
+ tuple(
itertools.chain.from_iterable(('--schema', schema) for schema in database['schemas'])
if database['schemas']
else ()
)
)
extra_environment = make_extra_environment(
database, restore_connection_params=connection_params
)
logger.debug(f"{log_prefix}: Restoring PostgreSQL database {database['name']}{dry_run_label}")
if dry_run:
return
# Don't give Borg local path so as to error on warnings, as "borg extract" only gives a warning
# if the restore paths don't exist in the archive.
execute_command_with_processes(
restore_command,
[extract_process] if extract_process else [],
output_log_level=logging.DEBUG,
input_file=extract_process.stdout if extract_process else None,
extra_environment=extra_environment,
borg_local_path=location_config.get('local_path', 'borg'),
)
execute_command(analyze_command, extra_environment=extra_environment)
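# A minimal sketch of the connection-parameter precedence implemented above,
# using hypothetical values: an explicit command-line connection parameter
# wins over restore_hostname, which in turn wins over the backup-time hostname.
database = {'name': 'users', 'hostname': 'db1.example.org', 'restore_hostname': 'db2.example.org'}
connection_params = {'hostname': None, 'port': None, 'username': None, 'password': None}
hostname = connection_params['hostname'] or database.get(
    'restore_hostname', database.get('hostname')
)
assert hostname == 'db2.example.org'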

129
borgmatic/hooks/sqlite.py Normal file
View File

@ -0,0 +1,129 @@
import logging
import os
from borgmatic.execute import execute_command, execute_command_with_processes
from borgmatic.hooks import dump
logger = logging.getLogger(__name__)
def make_dump_path(location_config): # pragma: no cover
'''
Make the dump path from the given location configuration and the name of this hook.
'''
return dump.make_database_dump_path(
location_config.get('borgmatic_source_directory'), 'sqlite_databases'
)
def dump_databases(databases, log_prefix, location_config, dry_run):
'''
Dump the given SQLite3 databases to a file. The databases are supplied as a sequence of
configuration dicts, as per the configuration schema. Use the given log prefix in any log
entries. Use the given location configuration dict to construct the destination path. If this
is a dry run, then don't actually dump anything.
'''
dry_run_label = ' (dry run; not actually dumping anything)' if dry_run else ''
processes = []
logger.info(f'{log_prefix}: Dumping SQLite databases{dry_run_label}')
for database in databases:
database_path = database['path']
if database['name'] == 'all':
logger.warning('The "all" database name has no meaning for SQLite3 databases')
if not os.path.exists(database_path):
logger.warning(
f'{log_prefix}: No SQLite database at {database_path}; an empty database will be created and dumped'
)
dump_path = make_dump_path(location_config)
dump_filename = dump.make_database_dump_filename(dump_path, database['name'])
if os.path.exists(dump_filename):
logger.warning(
f'{log_prefix}: Skipping duplicate dump of SQLite database at {database_path} to {dump_filename}'
)
continue
command = (
'sqlite3',
database_path,
'.dump',
'>',
dump_filename,
)
logger.debug(
f'{log_prefix}: Dumping SQLite database at {database_path} to {dump_filename}{dry_run_label}'
)
if dry_run:
continue
dump.create_parent_directory_for_dump(dump_filename)
processes.append(execute_command(command, shell=True, run_to_completion=False))
return processes
def remove_database_dumps(databases, log_prefix, location_config, dry_run): # pragma: no cover
'''
Remove the given SQLite3 database dumps from the filesystem. The databases are supplied as a
sequence of configuration dicts, as per the configuration schema. Use the given log prefix in
any log entries. Use the given location configuration dict to construct the destination path.
If this is a dry run, then don't actually remove anything.
'''
dump.remove_database_dumps(make_dump_path(location_config), 'SQLite', log_prefix, dry_run)
def make_database_dump_pattern(
databases, log_prefix, location_config, name=None
): # pragma: no cover
'''
Make a pattern that matches the given SQLite3 databases. The databases are supplied as a
sequence of configuration dicts, as per the configuration schema.
'''
return dump.make_database_dump_filename(make_dump_path(location_config), name)
def restore_database_dump(
database_config, log_prefix, location_config, dry_run, extract_process, connection_params
):
'''
Restore the given SQLite3 database from an extract stream. The database is supplied as a
one-element sequence containing a dict describing the database, as per the configuration schema.
Use the given log prefix in any log entries. If this is a dry run, then don't actually restore
anything. Trigger the given active extract process (an instance of subprocess.Popen) to produce
output to consume.
'''
dry_run_label = ' (dry run; not actually restoring anything)' if dry_run else ''
if len(database_config) != 1:
raise ValueError('The database configuration value is invalid')
database_path = connection_params['restore_path'] or database_config[0].get(
'restore_path', database_config[0].get('path')
)
logger.debug(f'{log_prefix}: Restoring SQLite database at {database_path}{dry_run_label}')
if dry_run:
return
try:
os.remove(database_path)
logger.warning(f'{log_prefix}: Removed existing SQLite database at {database_path}')
except FileNotFoundError: # pragma: no cover
pass
restore_command = (
'sqlite3',
database_path,
)
# Don't give Borg local path so as to error on warnings, as "borg extract" only gives a warning
# if the restore paths don't exist in the archive.
execute_command_with_processes(
restore_command,
[extract_process],
output_log_level=logging.DEBUG,
input_file=extract_process.stdout,
)

View File

@ -68,7 +68,7 @@ class Multi_stream_handler(logging.Handler):
def emit(self, record):
'''
Dispatch the log record to the approriate stream handler for the record's log level.
Dispatch the log record to the appropriate stream handler for the record's log level.
'''
self.log_level_to_handler[record.levelno].emit(record)
@ -85,18 +85,19 @@ class Multi_stream_handler(logging.Handler):
handler.setLevel(level)
LOG_LEVEL_TO_COLOR = {
logging.CRITICAL: colorama.Fore.RED,
logging.ERROR: colorama.Fore.RED,
logging.WARN: colorama.Fore.YELLOW,
logging.INFO: colorama.Fore.GREEN,
logging.DEBUG: colorama.Fore.CYAN,
}
class Console_color_formatter(logging.Formatter):
def format(self, record):
color = LOG_LEVEL_TO_COLOR.get(record.levelno)
add_custom_log_levels()
color = {
logging.CRITICAL: colorama.Fore.RED,
logging.ERROR: colorama.Fore.RED,
logging.WARN: colorama.Fore.YELLOW,
logging.ANSWER: colorama.Fore.MAGENTA,
logging.INFO: colorama.Fore.GREEN,
logging.DEBUG: colorama.Fore.CYAN,
}.get(record.levelno)
return color_text(color, record.msg)
@ -107,7 +108,48 @@ def color_text(color, message):
if not color:
return message
return '{}{}{}'.format(color, message, colorama.Style.RESET_ALL)
return f'{color}{message}{colorama.Style.RESET_ALL}'
def add_logging_level(level_name, level_number):
'''
Globally add a custom logging level based on the given (all uppercase) level name and number.
Do this idempotently.
Inspired by https://stackoverflow.com/questions/2183233/how-to-add-a-custom-loglevel-to-pythons-logging-facility/35804945#35804945
'''
method_name = level_name.lower()
if not hasattr(logging, level_name):
logging.addLevelName(level_number, level_name)
setattr(logging, level_name, level_number)
if not hasattr(logging, method_name):
def log_for_level(self, message, *args, **kwargs): # pragma: no cover
if self.isEnabledFor(level_number):
self._log(level_number, message, args, **kwargs)
setattr(logging.getLoggerClass(), method_name, log_for_level)
if not hasattr(logging.getLoggerClass(), method_name):
def log_to_root(message, *args, **kwargs): # pragma: no cover
logging.log(level_number, message, *args, **kwargs)
setattr(logging, method_name, log_to_root)
ANSWER = logging.WARN - 5
DISABLED = logging.CRITICAL + 10
def add_custom_log_levels(): # pragma: no cover
'''
Add a custom log level between WARN and INFO for user-requested answers.
'''
add_logging_level('ANSWER', ANSWER)
add_logging_level('DISABLED', DISABLED)
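# A small usage sketch, assuming this module's helpers: once the custom levels
# are registered, logging.ANSWER exists and every logger gains an answer()
# method (added idempotently above).
add_custom_log_levels()
logger = logging.getLogger('example')
logger.setLevel(logging.ANSWER)
logger.answer('archive listing line')  # a record at WARN - 5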
def configure_logging(
@ -116,6 +158,7 @@ def configure_logging(
log_file_log_level=None,
monitoring_log_level=None,
log_file=None,
log_file_format=None,
):
'''
Configure logging to go to both the console and (syslog or log file). Use the given log levels,
@ -130,15 +173,20 @@ def configure_logging(
if monitoring_log_level is None:
monitoring_log_level = console_log_level
add_custom_log_levels()
# Log certain log levels to console stderr and others to stdout. This supports use cases like
# grepping (non-error) output.
console_disabled = logging.NullHandler()
console_error_handler = logging.StreamHandler(sys.stderr)
console_standard_handler = logging.StreamHandler(sys.stdout)
console_handler = Multi_stream_handler(
{
logging.DISABLED: console_disabled,
logging.CRITICAL: console_error_handler,
logging.ERROR: console_error_handler,
logging.WARN: console_standard_handler,
logging.WARN: console_error_handler,
logging.ANSWER: console_standard_handler,
logging.INFO: console_standard_handler,
logging.DEBUG: console_standard_handler,
}
@ -147,7 +195,7 @@ def configure_logging(
console_handler.setLevel(console_log_level)
syslog_path = None
if log_file is None:
if log_file is None and syslog_log_level != logging.DISABLED:
if os.path.exists('/dev/log'):
syslog_path = '/dev/log'
elif os.path.exists('/var/run/syslog'):
@ -157,12 +205,18 @@ def configure_logging(
if syslog_path and not interactive_console():
syslog_handler = logging.handlers.SysLogHandler(address=syslog_path)
syslog_handler.setFormatter(logging.Formatter('borgmatic: %(levelname)s %(message)s'))
syslog_handler.setFormatter(
logging.Formatter('borgmatic: {levelname} {message}', style='{') # noqa: FS003
)
syslog_handler.setLevel(syslog_log_level)
handlers = (console_handler, syslog_handler)
elif log_file:
elif log_file and log_file_log_level != logging.DISABLED:
file_handler = logging.handlers.WatchedFileHandler(log_file)
file_handler.setFormatter(logging.Formatter('[%(asctime)s] %(levelname)s: %(message)s'))
file_handler.setFormatter(
logging.Formatter(
log_file_format or '[{asctime}] {levelname}: {message}', style='{' # noqa: FS003
)
)
file_handler.setLevel(log_file_log_level)
handlers = (console_handler, file_handler)
else:

View File

@ -1,23 +1,34 @@
import logging
import os
import signal
import sys
logger = logging.getLogger(__name__)
def _handle_signal(signal_number, frame): # pragma: no cover
EXIT_CODE_FROM_SIGNAL = 128
def handle_signal(signal_number, frame):
'''
Send the signal to all processes in borgmatic's process group, which includes child processes.
'''
# Prevent infinite signal handler recursion. If the parent frame is this very same handler
# function, we know we're recursing.
if frame.f_back.f_code.co_name == _handle_signal.__name__:
if frame.f_back.f_code.co_name == handle_signal.__name__:
return
os.killpg(os.getpgrp(), signal_number)
if signal_number == signal.SIGTERM:
logger.critical('Exiting due to TERM signal')
sys.exit(EXIT_CODE_FROM_SIGNAL + signal.SIGTERM)
def configure_signals(): # pragma: no cover
def configure_signals():
'''
Configure borgmatic's signal handlers to pass relevant signals through to any child processes
like Borg. Note that SIGINT gets passed through even without these changes.
'''
for signal_number in (signal.SIGHUP, signal.SIGTERM, signal.SIGUSR1, signal.SIGUSR2):
signal.signal(signal_number, _handle_signal)
signal.signal(signal_number, handle_signal)

View File

@ -1,7 +1,10 @@
import logging
import borgmatic.logger
VERBOSITY_DISABLED = -2
VERBOSITY_ERROR = -1
VERBOSITY_WARNING = 0
VERBOSITY_ANSWER = 0
VERBOSITY_SOME = 1
VERBOSITY_LOTS = 2
@ -10,9 +13,12 @@ def verbosity_to_log_level(verbosity):
'''
Given a borgmatic verbosity value, return the corresponding Python log level.
'''
borgmatic.logger.add_custom_log_levels()
return {
VERBOSITY_DISABLED: logging.DISABLED,
VERBOSITY_ERROR: logging.ERROR,
VERBOSITY_WARNING: logging.WARNING,
VERBOSITY_ANSWER: logging.ANSWER,
VERBOSITY_SOME: logging.INFO,
VERBOSITY_LOTS: logging.DEBUG,
}.get(verbosity, logging.WARNING)
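# A quick sanity check of the mapping above: the sentinel verbosities sit at
# the extremes, and unknown values fall back to WARNING.
assert verbosity_to_log_level(VERBOSITY_LOTS) == logging.DEBUG
assert verbosity_to_log_level(VERBOSITY_DISABLED) == logging.DISABLED
assert verbosity_to_log_level(99) == logging.WARNING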

View File

@ -1,13 +1,14 @@
FROM python:3.8-alpine3.12 as borgmatic
FROM docker.io/alpine:3.17.1 as borgmatic
COPY . /app
RUN pip install --no-cache ruamel.yaml.clib==0.2.2 /app && generate-borgmatic-config && chmod +r /etc/borgmatic/config.yaml
RUN apk add --no-cache py3-pip py3-ruamel.yaml py3-ruamel.yaml.clib
RUN pip install --no-cache /app && generate-borgmatic-config && chmod +r /etc/borgmatic/config.yaml
RUN borgmatic --help > /command-line.txt \
&& for action in init prune create check extract export-tar mount umount restore list info borg; do \
&& for action in rcreate transfer create prune compact check extract config "config bootstrap" "config generate" "config validate" export-tar mount umount restore rlist list rinfo info break-lock borg; do \
echo -e "\n--------------------------------------------------------------------------------\n" >> /command-line.txt \
&& borgmatic "$action" --help >> /command-line.txt; done
&& borgmatic $action --help >> /command-line.txt; done
FROM node:15.2.1-alpine as html
FROM docker.io/node:19.5.0-alpine as html
ARG ENVIRONMENT=production
@ -17,6 +18,7 @@ RUN npm install @11ty/eleventy \
@11ty/eleventy-plugin-syntaxhighlight \
@11ty/eleventy-plugin-inclusive-language \
@11ty/eleventy-navigation \
eleventy-plugin-code-clipboard \
markdown-it \
markdown-it-anchor \
markdown-it-replace-link
@ -26,7 +28,7 @@ COPY . /source
RUN NODE_ENV=${ENVIRONMENT} npx eleventy --input=/source/docs --output=/output/docs \
&& mv /output/docs/index.html /output/index.html
FROM nginx:1.19.4-alpine
FROM docker.io/nginx:1.22.1-alpine
COPY --from=html /output /usr/share/nginx/html
COPY --from=borgmatic /etc/borgmatic/config.yaml /usr/share/nginx/html/docs/reference/config.yaml

View File

@ -63,11 +63,6 @@
top: -2px;
bottom: 2px;
}
@media (prefers-color-scheme: dark) {
.inlinelist .inlinelist-item code:before {
border-left-color: rgba(0,0,0,.8);
}
}
}
a.buzzword {
text-decoration: underline;
@ -91,26 +86,9 @@ a.buzzword {
.buzzword {
background-color: #f7f7f7;
}
@media (prefers-color-scheme: dark) {
.buzzword-list li,
.buzzword {
background-color: #080808;
}
}
.inlinelist .inlinelist-item {
background-color: #e9e9e9;
}
@media (prefers-color-scheme: dark) {
.inlinelist .inlinelist-item {
background-color: #000;
}
.inlinelist .inlinelist-item a {
color: #fff;
}
.inlinelist .inlinelist-item code {
color: inherit;
}
}
.inlinelist .inlinelist-item:hover,
.inlinelist .inlinelist-item:focus,
.buzzword-list li:hover,
@ -217,12 +195,6 @@ main p a.buzzword {
height: 1.75em;
font-weight: 600;
}
@media (prefers-color-scheme: dark) {
.numberflag {
background-color: #00bcd4;
color: #222;
}
}
h1 .numberflag,
h2 .numberflag,
h3 .numberflag,
@ -244,11 +216,6 @@ h2 .numberflag:after {
background-color: #fff;
width: calc(100% + 0.4em); /* 16px /40 */
}
@media (prefers-color-scheme: dark) {
h2 .numberflag:after {
background-color: #222;
}
}
/* Super featured list on home page */
.list-superfeatured .avatar {

View File

@ -12,16 +12,6 @@
line-height: 1.285714285714; /* 18px /14 */
font-family: system-ui, -apple-system, sans-serif;
}
@media (prefers-color-scheme: dark) {
.minilink {
background-color: #222;
/*
!important to override .elv-callout a
see _includes/components/callout.css
*/
color: #fff !important;
}
}
table .minilink {
margin-top: 6px;
}
@ -32,12 +22,6 @@ table .minilink {
.minilink[href]:focus {
background-color: #bbb;
}
@media (prefers-color-scheme: dark) {
.minilink[href]:hover,
.minilink[href]:focus {
background-color: #444;
}
}
pre + .minilink {
color: #fff;
border-radius: 0 0 0.2857142857143em 0.2857142857143em; /* 4px /14 */
@ -74,11 +58,6 @@ h4 .minilink {
text-transform: none;
box-shadow: 0 0 0 1px rgba(0,0,0,0.3);
}
@media (prefers-color-scheme: dark) {
.minilink-addedin {
box-shadow: 0 0 0 1px rgba(255,255,255,0.3);
}
}
.minilink-addedin:not(:first-child) {
margin-left: .5em;
}

View File

@ -1,17 +1,5 @@
<h2>Improve this documentation</h2>
<p>Have an idea on how to make this documentation even better? Use our <a
href="https://projects.torsion.org/witten/borgmatic/issues">issue tracker</a> to send your
href="https://projects.torsion.org/borgmatic-collective/borgmatic/issues">issue tracker</a> to send your
feedback!</p>
<script>
document.getElementById('_page').value = window.location.href;
window.sk=window.sk||function(){(sk.q=sk.q||[]).push(arguments)};
sk('form', 'init', {
id: '1d536680ab96',
element: '#suggestion-form'
});
</script>
<script defer src="https://js.statickit.com/statickit.js"></script>

View File

@ -79,22 +79,11 @@
border-bottom: 1px solid #ddd;
margin-bottom: 0.25em; /* 4px /16 */
}
@media (prefers-color-scheme: dark) {
.elv-toc-list > li > a {
color: #fff;
border-color: #444;
}
}
/* Active links */
.elv-toc-list li.elv-toc-active > a {
background-color: #dff7ff;
}
@media (prefers-color-scheme: dark) {
.elv-toc-list li.elv-toc-active > a {
background-color: #353535;
}
}
.elv-toc-list ul .elv-toc-active > a:after {
content: "";
}
@ -105,7 +94,7 @@
display: block;
}
/* Footer catgory navigation */
/* Footer category navigation */
.elv-cat-list-active {
font-weight: 600;
}

View File

@ -258,6 +258,7 @@ footer.elv-layout {
/* Header */
.elv-header {
position: relative;
text-align: center;
}
.elv-header-default {
display: flex;
@ -284,11 +285,6 @@ footer.elv-layout {
.elv-hero {
background-color: #222;
}
@media (prefers-color-scheme: dark) {
.elv-hero {
background-color: #292929;
}
}
.elv-hero img,
.elv-hero svg {
width: 42.95774646vh;
@ -529,3 +525,26 @@ main .elv-toc + h1 .direct-link {
display: none ;
}
}
.header-anchor {
text-decoration: none;
}
.header-anchor:hover::after {
content: " 🔗";
}
.mdi {
display: inline-block;
width: 1em;
height: 1em;
background-color: currentColor;
-webkit-mask: no-repeat center / 100%;
mask: no-repeat center / 100%;
-webkit-mask-image: var(--svg);
mask-image: var(--svg);
}
.mdi.mdi-content-copy {
--svg: url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 24 24' width='24' height='24'%3E%3Cpath fill='black' d='M19 21H8V7h11m0-2H8a2 2 0 0 0-2 2v14a2 2 0 0 0 2 2h11a2 2 0 0 0 2-2V7a2 2 0 0 0-2-2m-3-4H4a2 2 0 0 0-2 2v14h2V3h12V1Z'/%3E%3C/svg%3E");
}

View File

@ -3,6 +3,7 @@
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="icon" href="docs/static/borgmatic.png" type="image/x-icon">
<title>{{ subtitle + ' - ' if subtitle}}{{ title }}</title>
{%- set css %}
{% include 'index.css' %}
@ -22,6 +23,6 @@
<body>
{{ content | safe }}
{% initClipboardJS %}
</body>
</html>

22
docs/docker-compose.yaml Normal file
View File

@ -0,0 +1,22 @@
version: '3'
services:
docs:
image: borgmatic-docs
container_name: docs
ports:
- 8080:80
build:
dockerfile: docs/Dockerfile
context: ..
args:
ENVIRONMENT: dev
message:
image: alpine
container_name: message
command:
- sh
- -c
- |
echo; echo "You can view dev docs at http://localhost:8080"; echo
depends_on:
- docs

View File

@ -1,17 +1,18 @@
---
title: How to add preparation and cleanup steps to backups
eleventyNavigation:
key: Add preparation and cleanup steps
key: 🧹 Add preparation and cleanup steps
parent: How-to guides
order: 8
order: 9
---
## Preparation and cleanup hooks
If you find yourself performing prepraration tasks before your backup runs, or
If you find yourself performing preparation tasks before your backup runs, or
cleanup work afterwards, borgmatic hooks may be of interest. Hooks are shell
commands that borgmatic executes for you at various points, and they're
configured in the `hooks` section of your configuration file. But if you're
looking to backup a database, it's probably easier to use the [database backup
commands that borgmatic executes for you at various points as it runs, and
they're configured in the `hooks` section of your configuration file. But if
you're looking to backup a database, it's probably easier to use the [database
backup
feature](https://torsion.org/borgmatic/docs/how-to/backup-your-databases/)
instead.
@ -27,15 +28,65 @@ hooks:
- umount /some/filesystem
```
If your command contains a special YAML character such as a colon, you may
need to quote the entire string (or use a [multiline
string](https://yaml-multiline.info/)) to avoid an error:

```yaml
hooks:
    before_backup:
        - "echo Backup: start"
```
<span class="minilink minilink-addedin">New in version 1.6.0</span> The
`before_backup` and `after_backup` hooks each run once per repository in a
configuration file. `before_backup` hooks run right before the `create`
action for a particular repository, and `after_backup` hooks run afterwards,
but not if an error occurs in a previous hook or in the backups themselves.
(Prior to borgmatic 1.6.0, these hooks instead ran once per configuration file
rather than once per repository.)
There are additional hooks that run before/after other actions as well. For
instance, `before_prune` runs before a `prune` action for a repository, while
`after_prune` runs after it.
<span class="minilink minilink-addedin">New in version 1.7.0</span> The
`before_actions` and `after_actions` hooks run before/after all the actions
(like `create`, `prune`, etc.) for each repository. These hooks are a good
place to run per-repository steps like mounting/unmounting a remote
filesystem.
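For instance, a minimal sketch of that pattern (the mount commands here are
placeholders):

```yaml
hooks:
    before_actions:
        - mount /some/filesystem
    after_actions:
        - umount /some/filesystem
```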
## Variable interpolation
The before and after action hooks support interpolating particular runtime
variables into the hook command. Here's an example that assumes you provide a
separate shell script:
```yaml
hooks:
after_prune:
- record-prune.sh "{configuration_filename}" "{repository}"
```
In this example, when the hook is triggered, borgmatic interpolates runtime
values into the hook command: the borgmatic configuration filename and the
paths of the current Borg repository. Here's the full set of supported
variables you can use here:
* `configuration_filename`: borgmatic configuration filename in which the
hook was defined
* `log_file`
<span class="minilink minilink-addedin">New in version 1.7.12</span>:
path of the borgmatic log file, only set when the `--log-file` flag is used
* `repository`: path of the current repository as configured in the current
borgmatic configuration file
Note that you can also interpolate in [arbitrary environment
variables](https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/).
## Global hooks
You can also use `before_everything` and `after_everything` hooks to perform
global setup or cleanup:
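For instance, with placeholder command names:

```yaml
hooks:
    before_everything:
        - set-up-stuff-globally
    after_everything:
        - clean-up-stuff-globally
```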
@ -58,6 +109,8 @@ but only if there is a `create` action. It runs even if an error occurs during
a backup or a backup hook, but not if an error occurs during a
`before_everything` hook.
## Error hooks
borgmatic also runs `on_error` hooks if an error occurs, either when creating
a backup or running a backup hook. See the [monitoring and alerting
documentation](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/)

View File

@ -1,9 +1,9 @@
---
title: How to backup to a removable drive or an intermittent server
eleventyNavigation:
key: Backup to a removable drive or server
key: 💾 Backup to a removable drive/server
parent: How-to guides
order: 9
order: 10
---
## Occasional backups
@ -49,9 +49,12 @@ location:
- /home
repositories:
- /mnt/removable/backup.borg
- path: /mnt/removable/backup.borg
```
<span class="minilink minilink-addedin">Prior to version 1.7.10</span> Omit
the `path:` portion of the `repositories` list.
Then, write a `before_backup` hook in that same configuration file that uses
the external `findmnt` utility to see whether the drive is mounted before
proceeding.
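Such a hook might look like this, a minimal sketch using exit code 75 (the
soft failure code described below):

```yaml
hooks:
    before_backup:
        - findmnt /mnt/removable > /dev/null || exit 75
```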
@ -68,6 +71,9 @@ borgmatic. borgmatic logs the soft failure, skips all further actions in that
configurable file, and proceeds onward to any other borgmatic configuration
files you may have.
Note that `before_backup` only runs on the `create` action. See below about
optionally using `before_actions` instead.
You can imagine a similar check for the sometimes-online server case:
```yaml
@ -76,13 +82,16 @@ location:
- /home
repositories:
- me@buddys-server.org:backup.borg
- path: ssh://me@buddys-server.org/./backup.borg
hooks:
before_backup:
- ping -q -c 1 buddys-server.org > /dev/null || exit 75
```
<span class="minilink minilink-addedin">Prior to version 1.7.10</span> Omit
the `path:` portion of the `repositories` list.
Or to only run backups if the battery level is high enough:
```yaml
@ -93,6 +102,12 @@ hooks:
(Writing the battery script is left as an exercise to the reader.)
<span class="minilink minilink-addedin">New in version 1.7.0</span> The
`before_actions` and `after_actions` hooks run before/after all the actions
(like `create`, `prune`, etc.) for each repository. So if you'd like your soft
failure command hook to run regardless of action, consider using
`before_actions` instead of `before_backup`.
## Caveats and details
@ -101,8 +116,8 @@ There are some caveats you should be aware of with this feature.
* You'll generally want to put a soft failure command in the `before_backup`
hook, so as to gate whether the backup action occurs. While a soft failure is
also supported in the `after_backup` hook, returning a soft failure there
won't prevent any actions from occuring, because they've already occurred!
Similiarly, you can return a soft failure from an `on_error` hook, but at
won't prevent any actions from occurring, because they've already occurred!
Similarly, you can return a soft failure from an `on_error` hook, but at
that point it's too late to prevent the error.
* Returning a soft failure does prevent further commands in the same hook from
executing. So, like a standard error, it is an "early out". Unlike a standard
@ -115,6 +130,6 @@ There are some caveats you should be aware of with this feature.
* The soft failure doesn't have to apply to a repository. You can even perform
a test to make sure that individual source directories are mounted and
available. Use your imagination!
* The soft failure feature also works for `before_prune`, `after_prune`,
`before_check`, and `after_check` hooks. But it is not implemented for
`before_everything` or `after_everything`.
* The soft failure feature also works for before/after hooks for other
actions as well. But it is not implemented for `before_everything` or
`after_everything`.

View File

@ -1,9 +1,9 @@
---
title: How to backup your databases
eleventyNavigation:
key: Backup your databases
key: 🗄️ Backup your databases
parent: How-to guides
order: 7
order: 8
---
## Database dump hooks
@ -15,7 +15,7 @@ consistent snapshot that is more suited for backups.
Fortunately, borgmatic includes built-in support for creating database dumps
prior to running backups. For example, here is everything you need to dump and
backup a couple of local PostgreSQL databases and a MySQL/MariaDB database:
backup a couple of local PostgreSQL databases and a MySQL/MariaDB database.
```yaml
hooks:
@ -26,10 +26,31 @@ hooks:
- name: posts
```
<span class="minilink minilink-addedin">New in version 1.5.22</span> You can
also dump MongoDB databases. For example:
```yaml
hooks:
mongodb_databases:
- name: messages
```
<span class="minilink minilink-addedin">New in version 1.7.9</span>
Additionally, you can dump SQLite databases. For example:
```yaml
hooks:
sqlite_databases:
- name: mydb
path: /var/lib/sqlite3/mydb.sqlite
```
As part of each backup, borgmatic streams a database dump for each configured
database directly to Borg, so it's included in the backup without consuming
additional disk space. (The one exception is PostgreSQL's "directory" dump
format, which can't stream and therefore does consume temporary disk space.)
additional disk space. (The exceptions are the PostgreSQL/MongoDB "directory"
dump formats, which can't stream and therefore do consume temporary disk
space. Additionally, prior to borgmatic 1.5.3, all database dumps consumed
temporary disk space.)
To support this, borgmatic creates temporary named pipes in `~/.borgmatic` by
default. To customize this path, set the `borgmatic_source_directory` option
@ -47,6 +68,8 @@ hooks:
postgresql_databases:
- name: users
hostname: database1.example.org
- name: orders
hostname: database2.example.org
port: 5433
username: postgres
password: trustsome1
@ -54,13 +77,32 @@ hooks:
options: "--role=someone"
mysql_databases:
- name: posts
hostname: database2.example.org
hostname: database3.example.org
port: 3307
username: root
password: trustsome1
options: "--skip-comments"
mongodb_databases:
- name: messages
hostname: database4.example.org
port: 27018
username: dbuser
password: trustsome1
authentication_database: mongousers
options: "--ssl"
sqlite_databases:
- name: mydb
path: /var/lib/sqlite3/mydb.sqlite
```
See your [borgmatic configuration
file](https://torsion.org/borgmatic/docs/reference/configuration/) for
additional customization of the options passed to database commands (when
listing databases, restoring databases, etc.).
### All databases
If you want to dump all databases on a host, use `all` for the database name:
```yaml
@ -69,13 +111,87 @@ hooks:
- name: all
mysql_databases:
- name: all
mongodb_databases:
- name: all
```
Note that you may need to use a `username` of the `postgres` superuser for
this to work with PostgreSQL.
The SQLite hook in particular does not consider "all" a special database name.
<span class="minilink minilink-addedin">New in version 1.7.6</span> With
PostgreSQL and MySQL, you can optionally dump "all" databases to separate
files instead of one combined dump file, allowing more convenient restores of
individual databases. Enable this by specifying your desired database dump
`format`:
```yaml
hooks:
postgresql_databases:
- name: all
format: custom
mysql_databases:
- name: all
format: sql
```
### Containers
If your database is running within a container and borgmatic is too, no
problem—configure borgmatic to connect to the container's name on its exposed
port. For instance:
```yaml
hooks:
postgresql_databases:
- name: users
hostname: your-database-container-name
port: 5433
username: postgres
password: trustsome1
```
But what if borgmatic is running on the host? You can still connect to a
database container if its ports are properly exposed to the host. For
instance, when running the database container, you can specify `--publish
127.0.0.1:5433:5432` so that it exposes the container's port 5432 to port 5433
on the host (only reachable on localhost, in this case). Or the same thing
with Docker Compose:
```yaml
services:
your-database-container-name:
image: postgres
ports:
- 127.0.0.1:5433:5432
```
And then you can connect to the database from borgmatic running on the host:
```yaml
hooks:
postgresql_databases:
- name: users
hostname: 127.0.0.1
port: 5433
username: postgres
password: trustsome1
```
You can alter the ports in these examples to suit your particular database
system.
### No source directories
<span class="minilink minilink-addedin">New in version 1.7.1</span> If you
would like to backup databases only and not source directories, you can omit
`source_directories` entirely.
<span class="minilink minilink-addedin">Prior to version 1.7.1</span> In older
versions of borgmatic, instead specify an empty `source_directories` value, as
it is a mandatory option there:
```yaml
location:
    source_directories: []
```
@ -86,6 +202,15 @@ hooks:
```
### External passwords
If you don't want to keep your database passwords in your borgmatic
configuration file, you can instead pass them in via [environment
variables](https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/)
or command-line [configuration
overrides](https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/#configuration-overrides).
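For instance, a minimal sketch using borgmatic's environment variable
interpolation (the variable name here is hypothetical):

```yaml
hooks:
    postgresql_databases:
        - name: users
          password: ${YOUR_DATABASE_PASSWORD}
```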
### Configuration backups
An important note about this database configuration: You'll need the
@ -97,38 +222,37 @@ bring back any missing configuration files in order to restore a database.
## Supported databases
As of now, borgmatic supports PostgreSQL and MySQL/MariaDB databases
directly. But see below about general-purpose preparation and cleanup hooks as
a work-around with other database systems. Also, please [file a
ticket](https://torsion.org/borgmatic/#issues) for additional database systems
that you'd like supported.
As of now, borgmatic supports PostgreSQL, MySQL/MariaDB, MongoDB, and SQLite
databases directly. But see below about general-purpose preparation and
cleanup hooks as a work-around with other database systems. Also, please [file
a ticket](https://torsion.org/borgmatic/#issues) for additional database
systems that you'd like supported.
## Database restoration
To restore a database dump from an archive, use the `borgmatic restore`
action. But the first step is to figure out which archive to restore from. A
good way to do that is to use the `list` action:
good way to do that is to use the `rlist` action:
```bash
borgmatic list
borgmatic rlist
```
(No borgmatic `list` action? Try the old-style `--list`, or upgrade
borgmatic!)
(No borgmatic `rlist` action? Try `list` instead or upgrade borgmatic!)
That should yield output looking something like:
```text
host-2019-01-01T04:05:06.070809 Tue, 2019-01-01 04:05:06 [...]
host-2019-01-02T04:06:07.080910 Wed, 2019-01-02 04:06:07 [...]
host-2023-01-01T04:05:06.070809 Tue, 2023-01-01 04:05:06 [...]
host-2023-01-02T04:06:07.080910 Wed, 2023-01-02 04:06:07 [...]
```
Assuming that you want to restore all database dumps from the archive with the
most up-to-date files and therefore the latest timestamp, run a command like:
```bash
borgmatic restore --archive host-2019-01-02T04:06:07.080910
borgmatic restore --archive host-2023-01-02T04:06:07.080910
```
(No borgmatic `restore` action? Upgrade borgmatic!)
@ -154,10 +278,11 @@ If you have a single repository in your borgmatic configuration file(s), no
problem: the `restore` action figures out which repository to use.
But if you have multiple repositories configured, then you'll need to specify
the repository path containing the archive to restore. Here's an example:
the repository to use via the `--repository` flag. This can be done either
with the repository's path or its label as configured in your borgmatic configuration file.
```bash
borgmatic restore --repository repo.borg --archive host-2019-...
borgmatic restore --repository repo.borg --archive host-2023-...
```
### Restore particular databases
@ -167,9 +292,50 @@ restore one of them, use the `--database` flag to select one or more
databases. For instance:
```bash
borgmatic restore --archive host-2019-... --database users
borgmatic restore --archive host-2023-... --database users
```
<span class="minilink minilink-addedin">New in version 1.7.6</span> You can
also restore individual databases even if you dumped them as "all"—as long as
you dumped them into separate files via use of the "format" option. See above
for more information.
### Restore all databases
To restore all databases:
```bash
borgmatic restore --archive host-2023-... --database all
```
Or omit the `--database` flag entirely:
```bash
borgmatic restore --archive host-2023-...
```
Prior to borgmatic version 1.7.6, this restores a combined "all" database
dump from the archive.
<span class="minilink minilink-addedin">New in version 1.7.6</span> Restoring
"all" databases restores each database found in the selected archive. That
includes any combined dump file named "all" and any other individual database
dumps found in the archive.
### Restore particular schemas
<span class="minilink minilink-addedin">New in version 1.7.13</span> With
PostgreSQL and MongoDB, you can limit the restore to a single schema found
within the database dump:
```bash
borgmatic restore --archive latest --database users --schema tenant1
```
### Limitations
There are a few important limitations with borgmatic's current database
@ -183,21 +349,38 @@ borgmatic's own configuration file. So include your configuration file in
backups to avoid getting caught without a way to restore a database.
3. borgmatic does not currently support backing up or restoring multiple
databases that share the exact same name on different hosts.
4. Because database hooks implicitly enable the `read_special` configuration
setting to support dump and restore streaming, you'll need to ensure that any
special files are excluded from backups (named pipes, block devices, and
character devices) to prevent hanging. Try a command like `find / -type c,b,p`
to find such files. Common directories to exclude are `/dev` and `/run`, but
that may not be exhaustive.
4. Because database hooks implicitly enable the `read_special` configuration,
any special files are excluded from backups (named pipes, block devices,
character devices, and sockets) to prevent hanging. Try a command like `find
/your/source/path -type b -or -type c -or -type p -or -type s` to find such
files. Common directories to exclude are `/dev` and `/run`, but that may not
be exhaustive. <span class="minilink minilink-addedin">New in version
1.7.3</span> When database hooks are enabled, borgmatic automatically excludes
special files (and symlinks to special files) that may cause Borg to hang, so
generally you no longer need to manually exclude them. There are potential
edge cases though in which applications on your system create new special files
*after* borgmatic constructs its exclude list, resulting in Borg hangs. If that
occurs, you can resort to the manual excludes described above. And to opt out
of the auto-exclude feature entirely, explicitly set `read_special` to true.
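A minimal sketch of that opt-out, assuming the `location:` section layout used
elsewhere in this guide:

```yaml
location:
    read_special: true
```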
### Manual restoration
If you prefer to restore a database without the help of borgmatic, first
[extract](https://torsion.org/borgmatic/docs/how-to/extract-a-backup/) an
archive containing a database dump, and then manually restore the dump file
found within the extracted `~/.borgmatic/` path (e.g. with `pg_restore` or
`mysql` commands).
archive containing a database dump.
borgmatic extracts the dump file into the *`username`*`/.borgmatic/` directory
within the extraction destination path, where *`username`* is the user that
created the backup. For example, if you created the backup with the `root`
user and you're extracting to `/tmp`, then the dump will be in
`/tmp/root/.borgmatic`.
After extraction, you can manually restore the dump file using native database
commands like `pg_restore`, `mysql`, `mongorestore`, `sqlite3`, or similar.
Also see the documentation on [listing database
dumps](https://torsion.org/borgmatic/docs/how-to/inspect-your-backups/#listing-database-dumps).
## Preparation and cleanup hooks
@ -212,6 +395,29 @@ dumps with any database system.
## Troubleshooting
### PostgreSQL/MySQL authentication errors
With PostgreSQL and MySQL/MariaDB, if you're getting authentication errors
when borgmatic tries to connect to your database, a natural reaction is to
increase your borgmatic verbosity with `--verbosity 2` and go looking in the
logs. You'll notice though that your database password does not show up in the
logs. This is likely not the cause of the authentication problem unless you
mistyped your password, however; borgmatic passes your password to the
database via an environment variable that does not appear in the logs.
The cause of an authentication error is often on the database side—in the
configuration of which users are allowed to connect and how they are
authenticated. For instance, with PostgreSQL, check your
[pg_hba.conf](https://www.postgresql.org/docs/current/auth-pg-hba-conf.html)
file for that configuration.
Additionally, MySQL/MariaDB may be picking up some of your credentials from a
defaults file like `~/.my.cnf`. If that's the case, then it's possible
MySQL/MariaDB ends up using, say, a username from borgmatic's configuration
and a password from `~/.my.cnf`. This may result in authentication errors if
this combination of credentials is not what you intend.
### MySQL table lock errors
If you encounter table lock errors during a database dump with MySQL/MariaDB,
@ -230,5 +436,14 @@ hooks:
### borgmatic hangs during backup
See Limitations above about `read_special`. You may need to exclude certain
paths with named pipes, block devices, or character devices on which borgmatic
is hanging.
paths with named pipes, block devices, character devices, or sockets on which
borgmatic is hanging.
Alternatively, if excluding special files is too onerous, you can create two
separate borgmatic configuration files—one for your source files and a
separate one for backing up databases. That way, the database `read_special`
option will not be active when backing up special files.
<span class="minilink minilink-addedin">New in version 1.7.3</span> See
Limitations above about borgmatic's automatic exclusion of special files to
prevent Borg hangs.

View File

@ -1,86 +1,182 @@
---
title: How to deal with very large backups
eleventyNavigation:
key: Deal with very large backups
key: 📏 Deal with very large backups
parent: How-to guides
order: 3
order: 4
---
## Biggish data
Borg itself is great for efficiently de-duplicating data across successive
backup archives, even when dealing with very large repositories. But you may
find that while borgmatic's default mode of "prune, create, and check" works
well on small repositories, it's not so great on larger ones. That's because
running the default pruning and consistency checks take a long time on large
repositories.
find that while borgmatic's default actions of `create`, `prune`, `compact`,
and `check` work well on small repositories, they're not so great on larger
ones. That's because running the default pruning, compaction, and consistency
checks takes a long time on large repositories.
<span class="minilink minilink-addedin">Prior to version 1.7.9</span> The
default action ordering was `prune`, `compact`, `create`, and `check`.
### A la carte actions
If you find yourself in this situation, you have some options. First, you can
run borgmatic's pruning, creating, or checking actions separately. For
instance, the following optional actions are available:
If you find yourself wanting to customize the actions, you have some options.
First, you can run borgmatic's `prune`, `compact`, `create`, or `check`
actions separately. For instance, the following optional actions are
available (among others):
```bash
borgmatic prune
borgmatic create
borgmatic prune
borgmatic compact
borgmatic check
```
(No borgmatic `prune`, `create`, or `check` actions? Try the old-style
`--prune`, `--create`, or `--check`. Or upgrade borgmatic!)
You can run with only one of these actions provided, or you can mix and match
any number of them in a single borgmatic run. This supports approaches like
skipping certain actions while running others. For instance, this skips
`prune` and only runs `create` and `check`:
You can run borgmatic with only one of these actions provided, or you can mix
and match any number of them in a single borgmatic run. This supports
approaches like skipping certain actions while running others. For instance,
this skips `prune` and `compact` and only runs `create` and `check`:
```bash
borgmatic create check
```
Or, you can make backups with `create` on a frequent schedule (e.g. with
`borgmatic create` called from one cron job), while only running expensive
consistency checks with `check` on a much less frequent basis (e.g. with
`borgmatic check` called from a separate cron job).
<span class="minilink minilink-addedin">New in version 1.7.9</span> borgmatic
now respects your specified command-line action order, running actions in the
order you specify. In previous versions, borgmatic ran your specified actions
in a fixed ordering regardless of the order they appeared on the command-line.
But instead of running actions together, another option is to run backups with
`create` on a frequent schedule (e.g. with `borgmatic create` called from one
cron job), while only running expensive consistency checks with `check` on a
much less frequent basis (e.g. with `borgmatic check` called from a separate
cron job).
### Consistency check configuration
Another option is to customize your consistency checks. The default
consistency checks run both full-repository checks and per-archive checks
within each repository.
Another option is to customize your consistency checks. By default, if you
omit consistency checks from configuration, borgmatic runs full-repository
checks (`repository`) and per-archive checks (`archives`) within each
repository. (Although see below about check frequency.) This is equivalent to
what `borg check` does if run without options.
But if you find that archive checks are too slow, for example, you can
configure borgmatic to run repository checks only. Configure this in the
`consistency` section of borgmatic configuration:
```yaml
consistency:
checks:
- name: repository
```
<span class="minilink minilink-addedin">Prior to version 1.6.2</span> The
`checks` option was a plain list of strings without the `name:` part, and
borgmatic ran each configured check every time checks were run. For example:
```yaml
consistency:
checks:
- repository
```
Here are the available checks from fastest to slowest:
* `repository`: Checks the consistency of the repository itself.
* `archives`: Checks all of the archives in the repository.
* `extract`: Performs an extraction dry-run of the most recent archive.
* `data`: Verifies the data integrity of all archives contents, decrypting and decompressing all data (implies `archives` as well).
* `data`: Verifies the data integrity of all archives contents, decrypting and decompressing all data.
See [Borg's check documentation](https://borgbackup.readthedocs.io/en/stable/usage/check.html) for more information.
Note that the `data` check is a more thorough version of the `archives` check,
so enabling the `data` check implicitly enables the `archives` check as well.
See [Borg's check
documentation](https://borgbackup.readthedocs.io/en/stable/usage/check.html)
for more information.
### Check frequency
<span class="minilink minilink-addedin">New in version 1.6.2</span> You can
optionally configure checks to run on a periodic basis rather than every time
borgmatic runs checks. For instance:
```yaml
consistency:
checks:
- name: repository
frequency: 2 weeks
- name: archives
frequency: 1 month
```
This tells borgmatic to run the `repository` consistency check at most once
every two weeks for a given repository and the `archives` check at most once a
month. The `frequency` value is a number followed by a unit of time, e.g. "3
days", "1 week", "2 months", etc.
The `frequency` defaults to `always` for a check configured without a
`frequency`, which means run this check every time checks run. But if you omit
consistency checks from configuration entirely, borgmatic runs full-repository
checks (`repository`) and per-archive checks (`archives`) within each
repository, at most once a month.
Unlike a real scheduler like cron, borgmatic only makes a best effort to run
checks on the configured frequency. It compares that frequency with how long
it's been since the last check for a given repository (as recorded in a file
within `~/.borgmatic/checks`). If it hasn't been long enough, the check is
skipped. And you still have to run `borgmatic check` (or `borgmatic` without
actions) in order for checks to run, even when a `frequency` is configured!
This also applies *across* configuration files that have the same repository
configured. Make sure you have the same check frequency configured in each
though—or the most frequently configured check will apply.
If you want to temporarily ignore your configured frequencies, you can invoke
`borgmatic check --force` to run checks unconditionally.
### Running only checks
<span class="minilink minilink-addedin">New in version 1.7.1</span> If you
would like to only run consistency checks without creating backups (for
instance with the `check` action on the command-line), you can omit
the `source_directories` option entirely.
<span class="minilink minilink-addedin">Prior to version 1.7.1</span> In older
versions of borgmatic, instead specify an empty `source_directories` value, as
it is a mandatory option there:
```yaml
location:
source_directories: []
```
### Disabling checks
If that's still too slow, you can disable consistency checks entirely,
either for a single repository or for all repositories.
Disabling all consistency checks looks like this:
```yaml
consistency:
checks:
- name: disabled
```
<span class="minilink minilink-addedin">Prior to version 1.6.2</span> `checks`
was a plain list of strings without the `name:` part. For instance:
```yaml
consistency:
checks:
- disabled
```
Or, if you have multiple repositories in your borgmatic configuration file,
If you have multiple repositories in your borgmatic configuration file,
you can keep running consistency checks, but only against a subset of the
repositories:
@ -98,7 +194,8 @@ borgmatic check --only data --only extract
```
This is useful for running slow consistency checks on an infrequent basis,
separate from your regular checks.
separate from your regular checks. It is still subject to any configured
check frequencies unless the `--force` flag is used.
## Troubleshooting

View File

@ -1,32 +1,32 @@
---
title: How to develop on borgmatic
eleventyNavigation:
key: Develop on borgmatic
key: 🏗️ Develop on borgmatic
parent: How-to guides
order: 12
order: 13
---
## Source code
To get set up to hack on borgmatic, first clone master via HTTPS or SSH:
To get set up to hack on borgmatic, first clone it via HTTPS or SSH:
```bash
git clone https://projects.torsion.org/witten/borgmatic.git
git clone https://projects.torsion.org/borgmatic-collective/borgmatic.git
```
Or:
```bash
git clone ssh://git@projects.torsion.org:3022/witten/borgmatic.git
git clone ssh://git@projects.torsion.org:3022/borgmatic-collective/borgmatic.git
```
Then, install borgmatic
"[editable](https://pip.pypa.io/en/stable/reference/pip_install/#editable-installs)"
"[editable](https://pip.pypa.io/en/stable/cli/pip_install/#editable-installs)"
so that you can run borgmatic commands while you're hacking on them to
make sure your changes work.
```bash
cd borgmatic/
pip3 install --editable --user .
cd borgmatic
pip3 install --user --editable .
```
Note that this will typically install the borgmatic commands into
@ -51,7 +51,6 @@ pip3 install --user tox
Finally, to actually run tests, run:
```bash
cd borgmatic
tox
```
@ -66,8 +65,6 @@ following:
tox -e black
```
Note that Black requires at minimum Python 3.6.
And if you get a complaint from the
[isort](https://github.com/timothycrosley/isort) Python import orderer, you
can ask isort to order your imports for you:
@ -76,24 +73,58 @@ can ask isort to order your imports for you:
tox -e isort
```
Similarly, if you get errors about spelling mistakes in source code, you can
ask [codespell](https://github.com/codespell-project/codespell) to correct
them:
```bash
tox -e codespell
```
### End-to-end tests
borgmatic additionally includes some end-to-end tests that integration test
with Borg and supported databases for a few representative scenarios. These
tests don't run by default when running `tox`, because they're relatively slow
and depend on Docker containers for runtime dependencies. These tests tests do
run on the continuous integration (CI) server, and running them on your
developer machine is the closest thing to CI test parity.
and depend on containers for runtime dependencies. These tests do run on the
continuous integration (CI) server, and running them on your developer machine
is the closest thing to CI-test parity.
If you would like to run the full test suite, first install Docker and [Docker
Compose](https://docs.docker.com/compose/install/). Then run:
If you would like to run the full test suite, first install Docker (or Podman;
see below) and [Docker Compose](https://docs.docker.com/compose/install/).
Then run:
```bash
scripts/run-full-dev-tests
scripts/run-end-to-end-dev-tests
```
Note that this scripts assumes you have permission to run Docker. If you
don't, then you may need to run with `sudo`.
This script assumes you have permission to run `docker`. If you don't, then
you may need to run with `sudo`.
#### Podman
<span class="minilink minilink-addedin">New in version 1.7.12</span>
borgmatic's end-to-end tests optionally support using
[rootless](https://github.com/containers/podman/blob/main/docs/tutorials/rootless_tutorial.md)
[Podman](https://podman.io/) instead of Docker.
Setting up Podman is outside the scope of this documentation, but here are
some key points to double-check:
* Install Podman and your desired networking support.
* Configure `/etc/subuid` and `/etc/subgid` to map users/groups for the
non-root user who will run tests.
* Create a non-root Podman socket for that user:
```bash
systemctl --user enable --now podman.socket
systemctl --user start --now podman.socket
```
Then you'll be able to run end-to-end tests as per normal, and the test script
will automatically use your non-root Podman socket instead of a Docker socket.
## Code style
@ -103,10 +134,10 @@ the following deviations from it:
* For strings, prefer single quotes over double quotes.
* Limit all lines to a maximum of 100 characters.
* Use trailing commas within multiline values or argument lists.
* For multiline constructs, put opening and closing delimeters on lines
* For multiline constructs, put opening and closing delimiters on lines
separate from their contents.
* Within multiline constructs, use standard four-space indentation. Don't align
indentation with an opening delimiter.
borgmatic code uses the [Black](https://black.readthedocs.io/en/stable/) code
formatter, the [Flake8](http://flake8.pycqa.org/en/latest/) code checker, and
@ -118,7 +149,7 @@ See the Black, Flake8, and isort documentation for more information.
Each pull request triggers a continuous integration build which runs the test
suite. You can view these builds on
[build.torsion.org](https://build.torsion.org/borgmatic-collective/borgmatic), and they're
also linked from the commits list on each pull request.
## Documentation development
@ -131,11 +162,12 @@ To build and view a copy of the documentation with your local changes, run the
following from the root of borgmatic's source code:
```bash
scripts/dev-docs
```
This requires Docker (or Podman; see below) to be installed on your system.
This script assumes you have permission to run `docker`. If you don't, then
you may need to run with `sudo`.
After you run the script, you can point your web browser at
http://localhost:8080 to view the documentation with your changes.
@ -143,3 +175,15 @@ http://localhost:8080 to view the documentation with your changes.
To close the documentation server, ctrl-C the script. Note that it does not
currently auto-reload, so you'll need to stop it and re-run it for any
additional documentation changes to take effect.
#### Podman
<span class="minilink minilink-addedin">New in version 1.7.12</span>
borgmatic's developer build for documentation optionally supports using
[rootless](https://github.com/containers/podman/blob/main/docs/tutorials/rootless_tutorial.md)
[Podman](https://podman.io/) instead of Docker.
Setting up Podman is outside the scope of this documentation. But once you
install and configure Podman, `scripts/dev-docs` should automatically use
Podman instead of Docker.
@ -1,41 +1,39 @@
---
title: How to extract a backup
eleventyNavigation:
  key: 📤 Extract a backup
  parent: How-to guides
  order: 7
---
## Extract
When the worst happens—or you want to test your backups—the first step is
to figure out which archive to extract. A good way to do that is to use the
`rlist` action:
```bash
borgmatic rlist
```
(No borgmatic `rlist` action? Try `list` instead or upgrade borgmatic!)
That should yield output looking something like:
```text
host-2023-01-01T04:05:06.070809 Tue, 2023-01-01 04:05:06 [...]
host-2023-01-02T04:06:07.080910 Wed, 2023-01-02 04:06:07 [...]
```
Assuming that you want to extract the archive with the most up-to-date files
and therefore the latest timestamp, run a command like:
```bash
borgmatic extract --archive host-2023-01-02T04:06:07.080910
```
(No borgmatic `extract` action? Upgrade borgmatic!)
Or simplify this to:
```bash
borgmatic extract --archive latest
```
@ -43,7 +41,8 @@ borgmatic extract --archive latest
The `--archive` value is the name of the archive to extract. This extracts the
entire contents of the archive to the current directory, so make sure you're
in the right place before running the command—or see below about the
`--destination` flag.
## Repository selection
@ -52,10 +51,11 @@ If you have a single repository in your borgmatic configuration file(s), no
problem: the `extract` action figures out which repository to use.
But if you have multiple repositories configured, then you'll need to specify
the repository to use via the `--repository` flag. This can be done either
with the repository's path or its label as configured in your borgmatic configuration file.
```bash
borgmatic extract --repository repo.borg --archive host-2023-...
```
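Or, if you've labeled the repository in your borgmatic configuration, you can
pass the label instead of the path (the `remote` label here is hypothetical):
```bash
borgmatic extract --repository remote --archive host-2023-...
```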
## Extract particular files
@ -65,13 +65,22 @@ everything from an archive. To do that, tack on one or more `--path` values.
For instance:
```bash
borgmatic extract --archive latest --path path/1 path/2
```
Note that the specified restore paths should not have a leading slash. Like a
whole-archive extract, this also extracts into the current directory by
default. So for example, if you happen to be in the directory `/var` and you
run the `extract` command above, borgmatic will extract `/var/path/1` and
`/var/path/2`.
### Searching for files
If you're not sure which archive contains the files you're looking for, you
can [search across
archives](https://torsion.org/borgmatic/docs/how-to/inspect-your-backups/#searching-for-a-file).
## Extract to a particular destination
@ -80,7 +89,7 @@ extract files to a particular destination directory, use the `--destination`
flag:
```bash
borgmatic extract --archive latest --destination /tmp
```
When using the `--destination` flag, be careful not to overwrite your system's
@ -104,7 +113,7 @@ archive as a [FUSE](https://en.wikipedia.org/wiki/Filesystem_in_Userspace)
filesystem, you can use the `borgmatic mount` action. Here's an example:
```bash
borgmatic mount --archive latest --mount-point /mnt
```
This mounts the entire archive on the given mount point `/mnt`, so that you
@ -116,7 +125,7 @@ Omit the `--archive` flag to mount all archives (lazy-loaded):
```bash
borgmatic mount --mount-point /mnt
```
Or use the "latest" value for the archive to mount the latest successful archive:
Or use the "latest" value for the archive to mount the latest archive:
```bash
borgmatic mount --archive latest --mount-point /mnt
```
@ -127,7 +136,7 @@ your archive, use the `--path` flag, similar to the `extract` action above.
For instance:
```bash
borgmatic mount --archive latest --mount-point /mnt --path var/lib
```
When you're all done exploring your files, unmount your mount point. Assuming
a recent borgmatic, the `umount` action handles this (no `--archive` flag
needed):
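```bash
borgmatic umount --mount-point /mnt
```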
@ -1,9 +1,9 @@
---
title: How to inspect your backups
eleventyNavigation:
  key: 🔎 Inspect your backups
  parent: How-to guides
  order: 5
---
## Backup progress
@ -24,6 +24,15 @@ Or, for even more progress and debug spew:
```bash
borgmatic --verbosity 2
```
The full set of verbosity levels is:
* `-2`: disable output entirely <span class="minilink minilink-addedin">New in borgmatic 1.7.14</span>
* `-1`: only show errors
* `0`: default output
* `1`: some additional output (informational level)
* `2`: lots of additional output (debug level)
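For instance, to run borgmatic with errors as the only console output (per the levels above):
```bash
borgmatic --verbosity -1
```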
## Backup summary
If you're less concerned with progress during a backup, and you only want to
@ -37,18 +46,72 @@ borgmatic --stats
## Existing backups
borgmatic provides convenient actions for Borg's
[`list`](https://borgbackup.readthedocs.io/en/stable/usage/list.html) and
[`info`](https://borgbackup.readthedocs.io/en/stable/usage/info.html)
functionality:
```bash
borgmatic list
borgmatic info
```
You can change the output format of `borgmatic list` by specifying your own
with `--format`. Refer to the [borg list --format
documentation](https://borgbackup.readthedocs.io/en/stable/usage/list.html#the-format-specifier-syntax)
for available values.
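For instance, to list just each archive's name and time, something like this
should work (the `{archive}`, `{time}`, and `{NL}` placeholders come from the
Borg documentation linked above):
```bash
borgmatic list --format "{archive} {time}{NL}"
```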
*(No borgmatic `list` or `info` actions? Upgrade borgmatic!)*
<span class="minilink minilink-addedin">New in borgmatic version 1.7.0</span>
There are also `rlist` and `rinfo` actions for displaying repository
information with Borg 2.x:
```bash
borgmatic rlist
borgmatic rinfo
```
See the [borgmatic command-line
reference](https://torsion.org/borgmatic/docs/reference/command-line/) for
more information.
### Searching for a file
<span class="minilink minilink-addedin">New in version 1.6.3</span> Let's say
you've accidentally deleted a file and want to find the backup archive(s)
containing it. `borgmatic list` provides a `--find` flag for exactly this
purpose. For instance, if you're looking for a `foo.txt`:
```bash
borgmatic list --find foo.txt
```
This will list your archives and indicate those with files matching
`*foo.txt*` anywhere in the archive. The `--find` parameter can alternatively
be a [Borg
pattern](https://borgbackup.readthedocs.io/en/stable/usage/help.html#borg-patterns).
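For example, assuming a shell-style Borg pattern with the `sh:` prefix, you
could search for any `.bashrc` one directory deep under `home`:
```bash
borgmatic list --find "sh:home/*/.bashrc"
```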
To limit the archives searched, use the standard `list` parameters for
filtering archives such as `--last`, `--archive`, `--match-archives`, etc. For
example, to search only the last five archives:
```bash
borgmatic list --find foo.txt --last 5
```
## Listing database dumps
If you have enabled borgmatic's [database
hooks](https://torsion.org/borgmatic/docs/how-to/backup-your-databases/), you
can list backed up database dumps via borgmatic. For example:
```bash
borgmatic list --archive latest --find .borgmatic/*_databases
```
This gives you a listing of all database dump files contained in the latest
archive, complete with file sizes.
## Logging
@ -57,7 +120,7 @@ By default, borgmatic logs to a local syslog-compatible daemon if one is
present and borgmatic is running in a non-interactive console. Where those
logs show up depends on your particular system. If you're using systemd, try
running `journalctl -xe`. Otherwise, try viewing `/var/log/syslog` or
similar.
You can customize the log level used for syslog logging with the
`--syslog-verbosity` flag, and this is independent from the console logging
@ -100,5 +163,39 @@ borgmatic --log-file /path/to/file.log
Note that if you use the `--log-file` flag, you are responsible for rotating
the log file so it doesn't grow too large, for example with
[logrotate](https://wiki.archlinux.org/index.php/Logrotate).
You can use the `--log-file-verbosity` flag to customize the log file's log level:
```bash
borgmatic --log-file /path/to/file.log --log-file-verbosity 2
```
<span class="minilink minilink-addedin">New in version 1.7.11</span> Use the
`--log-file-format` flag to override the default log message format. This
format string can contain a series of named placeholders wrapped in curly
brackets. For instance, the default log format is: `[{asctime}] {levelname}:
{message}`. This means each log message is recorded as the log time (in square
brackets), a logging level name, a colon, and the actual log message.
So if you only want each log message to get logged *without* a timestamp or a
logging level name:
```bash
borgmatic --log-file /path/to/file.log --log-file-format "{message}"
```
Here is a list of available placeholders:
* `{asctime}`: time the log message was created
* `{levelname}`: level of the log message (`INFO`, `DEBUG`, etc.)
* `{lineno}`: line number in the source file where the log message originated
* `{message}`: actual log message
* `{pathname}`: path of the source file where the log message originated
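To combine several of these placeholders, for instance recording each
message's source location along with the message itself:
```bash
borgmatic --log-file /path/to/file.log --log-file-format "{asctime} {pathname}:{lineno} {message}"
```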
See the [Python logging
documentation](https://docs.python.org/3/library/logging.html#logrecord-attributes)
for additional placeholders.
Note that this `--log-file-format` flag only applies to the specified
`--log-file` and not to syslog or other logging.