Compare commits
1.9.12...929d343214 (215 commits)
Commits (SHA1 only; author and date columns are empty in this view):

929d343214, 9ea55d9aa3, 3eabda45f2, 09212961a4, 3f25f3f0ff, e8542f3613, 9407f24674, 1c9d25b892,
248999c23e, d0a5aa63be, d2c3ed26a9, bbf6f27715, 9301ab13cc, d5d04b89dc, 364200c65a, 4e55547235,
96ec66de79, 7a0c56878b, 4065c5d0f7, affe7cdc1b, 017cbae4f9, e96db2e100, af97b95e2b, 6a61259f1a,
5490a83d77, 8c907bb5a3, f166111b9b, 10fb02c40a, cf477bdc1c, 6f07402407, ab01e97a5e, 92ebc77597,
863c954144, f7e4d38762, de4d7af507, 5cea1e1b72, fd8c11eb0a, 92de539bf9, 5716e61f8f, 3e05eeb4de,
65d1b9235d, cffb8e88da, a8362f2618, 36265eea7d, 8101e5c56f, c7feb16ab5, da324ebeb7, 59f9d56aae,
dbf2e78f62, f6929f8891, 2716d9d0b0, 668f767bfc, 0182dbd914, 1c27e0dadc, 8b3a682edf, 975a6e4540,
7020f0530a, 5bf2f546b9, b4c558d013, 79bf641668, 50beb334dc, 26fd41da92, 088da19012, 4c6674e0ad,
486bec698d, 7a766c717e, 520fb78a00, acc2814f11, 996b037946, 9356924418, 79e4e089ee, d2714cb706,
5a0430b9c8, 23efbb8df3, 9e694e4df9, 76f7c53a1c, 203e84b91f, ea5a2d8a46, a8726c408a, 3542673446,
532a97623c, e1fdfe4c2f, 83a56a3fef, b60cf2449a, e7f14bca87, 4bca7bb198, fa3b140590, a1d2f7f221,
6a470be924, d651813601, 65b1d8e8b2, 16a1121649, 423627e67b, 9f7c71265e, ba75958a2f, 57721937a3,
f222bf2c1a, dc9da3832d, f8eda92379, cc14421460, a750d58a2d, 2045706faa, 976fb8f343, 5246a10b99,
524ec6b3cb, 6f1c77bc7d, 7904ffb641, cd5ba81748, 5c11052b8c, 514ade6609, 201469e2c2, 9ac2a2e286,
a16d138afc, 81a3a99578, f3cc3b1b65, 587d31de7c, cbfc0bead1, 8aaa5ba8a6, 7d989f727d, 5525b467ef,
89c98de122, c2409d9968, 624a7de622, 3119c924b4, ed6022d4a9, 3e21cdb579, d02d31f445, 1097a6576f,
63b0c69794, 4e2805918d, 711f5fa6cb, 93e7da823c, 903308864c, d75c8609c5, c926f0bd5d, 7b14e8c7f2,
87b9ad5aea, eca78fbc2c, 6adb0fd44c, 05900c188f, 1d5713c4c5, f9612cc685, 5742a1a2d9, c84815bfb0,
e1ff51ff1e, 1c92d84e09, 1d94fb501f, 92279d3c71, 1b4c94ad1e, 901e668c76, bcb224a243, 6b6e1e0336,
f5c9bc4fa9, cdd0e6f052, 7bdbadbac2, d3413e0907, 8a20ee7304, 325f53c286, b4d24798bf, 7965eb9de3,
8817364e6d, 965740c778, 2a0319f02f, fbdb09b87d, bec5a0c0ca, 4ee7f72696, 9941d7dc57, ec88bb2e9c,
68b6d01071, b52339652f, 4fd22b2df0, 86b138e73b, 5ab766b51c, 45c114973c, 6a96a78cf1, e06c6740f2,
10bd1c7b41, d4f48a3a9e, c76a108422, eb5dc128bf, 1d486d024b, 5a8f27d75c, a926b413bc, 18ffd96d62,
c0135864c2, ddfd3c6ca1, dbe82ff11e, 55c0ab1610, 1f86100f26, 2a16ffab1b, 4b2f7e03af, 024006f4c0,
4c71e600ca, 114f5702b2, 54afe87a9f, 25b6a49df7, b97372adf2, 6bc9a592d9, 839862cff0, 06b065cb09,
1e5c256d54, baf5fec78d, 48a4fbaa89, 1e274d7153, c41b743819, 36d0073375, 0bd418836e, 923fa7d82f,
dce0528057, 8a6c6c84d2, 1e21c8f97b, 2eab74a521, 3bca686707, 8854b9ad20, bcc463688a
NEWS (65 lines changed)
@@ -1,3 +1,68 @@
+2.0.0.dev0
+ * TL;DR: More flexible, completely revamped command hooks. All config options settable on the
+   command-line. Config option defaults for many command-line flags. New "key import" and "recreate"
+   actions. Almost everything is backwards compatible.
+ * #262: Add a "default_actions" option that supports disabling default actions when borgmatic is
+   run without any command-line arguments.
+ * #303: Deprecate the "--override" flag in favor of direct command-line flags for every borgmatic
+   configuration option. See the documentation for more information:
+   https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/#configuration-overrides
+ * #303: Add configuration options that serve as defaults for some (but not all) command-line
+   action flags. For example, each entry in "repositories:" now has an "encryption" option that
+   applies to the "repo-create" action, serving as a default for the "--encryption" flag. See the
+   documentation for more information: https://torsion.org/borgmatic/docs/reference/configuration/
+ * #345: Add a "key import" action to import a repository key from backup.
+ * #422: Add home directory expansion to file-based and KeePassXC credential hooks.
+ * #610: Add a "recreate" action for recreating archives, for instance for retroactively excluding
+   particular files from existing archives.
+ * #790, #821: Deprecate all "before_*", "after_*" and "on_error" command hooks in favor of more
+   flexible "commands:". See the documentation for more information:
+   https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/
+ * #790: BREAKING: For both new and deprecated command hooks, run a configured "after" hook even if
+   an error occurs first. This allows you to perform cleanup steps that correspond to "before"
+   preparation commands—even when something goes wrong.
+ * #790: BREAKING: Run all command hooks (both new and deprecated) respecting the
+   "working_directory" option if configured, meaning that hook commands are run in that directory.
+ * #836: Add a custom command option for the SQLite hook.
+ * #837: Add custom command options for the MongoDB hook.
+ * #1010: When using Borg 2, don't pass the "--stats" flag to "borg prune".
+ * #1020: Document a database use case involving a temporary database client container:
+   https://torsion.org/borgmatic/docs/how-to/backup-your-databases/#containers
+ * #1037: Fix an error with the "extract" action when both a remote repository and a
+   "working_directory" are used.
+ * #1044: Fix an error in the systemd credential hook when the credential name contains a "."
+   character.
+ * #1047: Add "key-file" and "yubikey" options to the KeePassXC credential hook.
+ * #1048: Fix a "no such file or directory" error in ZFS, Btrfs, and LVM hooks with nested
+   directories that reside on separate devices/filesystems.
+ * #1050: Fix a failure in the "spot" check when the archive contains a symlink.
+ * #1051: Add configuration filename to the "Successfully ran configuration file" log message.
+
+1.9.14
+ * #409: With the PagerDuty monitoring hook, send borgmatic logs to PagerDuty so they show up in the
+   incident UI. See the documentation for more information:
+   https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#pagerduty-hook
+ * #936: Clarify Zabbix monitoring hook documentation about creating items:
+   https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#zabbix-hook
+ * #1017: Fix a regression in which some MariaDB/MySQL passwords were not escaped correctly.
+ * #1021: Fix a regression in which the "exclude_patterns" option didn't expand "~" (the user's
+   home directory). This fix means that all "patterns" and "patterns_from" also now expand "~".
+ * #1023: Fix an error in the Btrfs hook when attempting to snapshot a read-only subvolume. Now,
+   read-only subvolumes are ignored since Btrfs can't actually snapshot them.
+
+1.9.13
+ * #975: Add a "compression" option to the PostgreSQL database hook.
+ * #1001: Fix a ZFS error during snapshot cleanup.
+ * #1003: In the Zabbix monitoring hook, support Zabbix 7.2's authentication changes.
+ * #1009: Send database passwords to MariaDB and MySQL via anonymous pipe, which is more secure than
+   using an environment variable.
+ * #1013: Send database passwords to MongoDB via anonymous pipe, which is more secure than using
+   "--password" on the command-line!
+ * #1015: When ctrl-C is pressed, more strongly encourage Borg to actually exit.
+ * Add a "verify_tls" option to the Uptime Kuma monitoring hook for disabling TLS verification.
+ * Add "tls" options to the MariaDB and MySQL database hooks to enable or disable TLS encryption
+   between client and server.
+
 1.9.12
  * #1005: Fix the credential hooks to avoid using Python 3.12+ string features. Now borgmatic will
    work with Python 3.9, 3.10, and 3.11 again.
@@ -170,7 +170,7 @@ def filter_checks_on_frequency(
 
         if calendar.day_name[datetime_now().weekday()] not in days:
             logger.info(
-                f"Skipping {check} check due to day of the week; check only runs on {'/'.join(days)} (use --force to check anyway)"
+                f"Skipping {check} check due to day of the week; check only runs on {'/'.join(day.title() for day in days)} (use --force to check anyway)"
            )
            filtered_checks.remove(check)
            continue
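The only behavioral change in this hunk is cosmetic: configured day names are now title-cased in the log message. A quick illustration:

```python
# Minimal demo of the new log formatting; the day list here is made up.
days = ['monday', 'wednesday', 'friday']
print('/'.join(day.title() for day in days))  # Monday/Wednesday/Friday
```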
@@ -372,7 +372,7 @@ def collect_spot_check_source_paths(
         borgmatic.borg.create.make_base_create_command(
             dry_run=True,
             repository_path=repository['path'],
-            config=config,
+            config=dict(config, list_details=True),
             patterns=borgmatic.actions.create.process_patterns(
                 borgmatic.actions.create.collect_patterns(config),
                 working_directory,
@@ -382,7 +382,6 @@ def collect_spot_check_source_paths(
             borgmatic_runtime_directory=borgmatic_runtime_directory,
             local_path=local_path,
             remote_path=remote_path,
-            list_files=True,
             stream_processes=stream_processes,
         )
     )
@@ -483,10 +482,12 @@ def compare_spot_check_hashes(
     )
     source_sample_paths = tuple(random.sample(source_paths, sample_count))
     working_directory = borgmatic.config.paths.get_working_directory(config)
-    existing_source_sample_paths = {
+    hashable_source_sample_path = {
         source_path
         for source_path in source_sample_paths
-        if os.path.exists(os.path.join(working_directory or '', source_path))
+        for full_source_path in (os.path.join(working_directory or '', source_path),)
+        if os.path.exists(full_source_path)
+        if not os.path.islink(full_source_path)
     }
     logger.debug(
         f'Sampling {sample_count} source paths (~{spot_check_config["data_sample_percentage"]}%) for spot check'
@@ -509,7 +510,7 @@ def compare_spot_check_hashes(
         hash_output = borgmatic.execute.execute_command_and_capture_output(
             (spot_check_config.get('xxh64sum_command', 'xxh64sum'),)
             + tuple(
-                path for path in source_sample_paths_subset if path in existing_source_sample_paths
+                path for path in source_sample_paths_subset if path in hashable_source_sample_path
             ),
             working_directory=working_directory,
         )
@@ -517,11 +518,13 @@ def compare_spot_check_hashes(
         source_hashes.update(
             **dict(
                 (reversed(line.split(' ', 1)) for line in hash_output.splitlines()),
-                # Represent non-existent files as having empty hashes so the comparison below still works.
+                # Represent non-existent files as having empty hashes so the comparison below still
+                # works. Same thing for filesystem links, since Borg produces empty archive hashes
+                # for them.
                 **{
                     path: ''
                     for path in source_sample_paths_subset
-                    if path not in existing_source_sample_paths
+                    if path not in hashable_source_sample_path
                 },
             )
         )
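The rewritten set comprehension above uses a one-element tuple in a `for` clause to bind an intermediate value, since comprehensions have no assignment statement. A self-contained sketch of the same idiom (the function name and paths here are illustrative, not borgmatic's API):

```python
import os

def hashable_paths(paths, working_directory=''):
    # Bind full_path once per path via a single-element tuple, then filter on it
    # twice: the path must exist and must not be a symlink.
    return {
        path
        for path in paths
        for full_path in (os.path.join(working_directory, path),)
        if os.path.exists(full_path)
        if not os.path.islink(full_path)
    }

print(hashable_paths(['etc/hostname', 'definitely-missing.txt'], '/'))
```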
@@ -682,7 +685,6 @@ def run_check(
     config_filename,
     repository,
     config,
-    hook_context,
     local_borg_version,
     check_arguments,
     global_arguments,
@@ -699,15 +701,6 @@ def run_check(
     ):
         return
 
-    borgmatic.hooks.command.execute_hook(
-        config.get('before_check'),
-        config.get('umask'),
-        config_filename,
-        'pre-check',
-        global_arguments.dry_run,
-        **hook_context,
-    )
-
     logger.info('Running consistency checks')
 
     repository_id = borgmatic.borg.check.get_repository_id(
@@ -772,12 +765,3 @@ def run_check(
         borgmatic_runtime_directory,
     )
     write_check_time(make_check_time_path(config, repository_id, 'spot'))
-
-    borgmatic.hooks.command.execute_hook(
-        config.get('after_check'),
-        config.get('umask'),
-        config_filename,
-        'post-check',
-        global_arguments.dry_run,
-        **hook_context,
-    )
@@ -12,7 +12,6 @@ def run_compact(
     config_filename,
     repository,
     config,
-    hook_context,
     local_borg_version,
     compact_arguments,
     global_arguments,
@@ -28,14 +27,6 @@ def run_compact(
     ):
         return
 
-    borgmatic.hooks.command.execute_hook(
-        config.get('before_compact'),
-        config.get('umask'),
-        config_filename,
-        'pre-compact',
-        global_arguments.dry_run,
-        **hook_context,
-    )
     if borgmatic.borg.feature.available(borgmatic.borg.feature.Feature.COMPACT, local_borg_version):
         logger.info(f'Compacting segments{dry_run_label}')
         borgmatic.borg.compact.compact_segments(
@@ -46,18 +37,7 @@ def run_compact(
             global_arguments,
             local_path=local_path,
             remote_path=remote_path,
-            progress=compact_arguments.progress,
             cleanup_commits=compact_arguments.cleanup_commits,
-            threshold=compact_arguments.threshold,
         )
     else: # pragma: nocover
         logger.info('Skipping compact (only available/needed in Borg 1.2+)')
 
-    borgmatic.hooks.command.execute_hook(
-        config.get('after_compact'),
-        config.get('umask'),
-        config_filename,
-        'post-compact',
-        global_arguments.dry_run,
-        **hook_context,
-    )
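These hunks delete the per-action "before"/"after" execute_hook calls; per the NEWS above, hook dispatch is centralized and a configured "after" hook now runs even when an error occurs first. A rough sketch of that ordering guarantee as a context manager (this illustrates the idea only; it is not borgmatic's actual hook machinery):

```python
import contextlib
import subprocess

@contextlib.contextmanager
def command_hooks(before, after):
    # Run the "before" commands, then always run the "after" commands,
    # even if the wrapped action raises.
    for command in before:
        subprocess.run(command, shell=True, check=True)
    try:
        yield
    finally:
        for command in after:
            subprocess.run(command, shell=True, check=True)

with command_hooks(['echo pre-compact'], ['echo post-compact']):
    print('compacting...')
```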
@@ -119,7 +119,9 @@ def run_bootstrap(bootstrap_arguments, global_arguments, local_borg_version):
         bootstrap_arguments.repository,
         archive_name,
         [config_path.lstrip(os.path.sep) for config_path in manifest_config_paths],
-        config,
+        # Only add progress here and not the extract_archive() call above, because progress
+        # conflicts with extract_to_stdout.
+        dict(config, progress=bootstrap_arguments.progress or False),
         local_borg_version,
         global_arguments,
         local_path=bootstrap_arguments.local_path,
@@ -127,5 +129,4 @@ def run_bootstrap(bootstrap_arguments, global_arguments, local_borg_version):
         extract_to_stdout=False,
         destination_path=bootstrap_arguments.destination,
         strip_components=bootstrap_arguments.strip_components,
-        progress=bootstrap_arguments.progress,
     )
@@ -130,8 +130,11 @@ def expand_directory(directory, working_directory):
 def expand_patterns(patterns, working_directory=None, skip_paths=None):
     '''
     Given a sequence of borgmatic.borg.pattern.Pattern instances and an optional working directory,
-    expand tildes and globs in each root pattern. Return all the resulting patterns (not just the
-    root patterns) as a tuple.
+    expand tildes and globs in each root pattern and expand just tildes in each non-root pattern.
+    The idea is that non-root patterns may be regular expressions or other pattern styles containing
+    "*" that borgmatic should not expand as a shell glob.
+
+    Return all the resulting patterns as a tuple.
 
     If a set of paths are given to skip, then don't expand any patterns matching them.
     '''
@@ -153,7 +156,15 @@ def expand_patterns(patterns, working_directory=None, skip_paths=None):
             )
             if pattern.type == borgmatic.borg.pattern.Pattern_type.ROOT
             and pattern.path not in (skip_paths or ())
-            else (pattern,)
+            else (
+                borgmatic.borg.pattern.Pattern(
+                    os.path.expanduser(pattern.path),
+                    pattern.type,
+                    pattern.style,
+                    pattern.device,
+                    pattern.source,
+                ),
+            )
         )
         for pattern in patterns
     )
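The distinction matters because os.path.expanduser() only rewrites a leading "~", while glob expansion would also interpret "*" and "?". A quick standalone demonstration (paths here are made up):

```python
import glob
import os.path

# Tilde expansion is safe for any pattern style: only the leading "~" changes,
# so a regex or fnmatch pattern passes through otherwise untouched.
print(os.path.expanduser('~/projects/*.py'))  # e.g. /home/user/projects/*.py

# Glob expansion actually consults the filesystem, so it's reserved for root patterns.
print(glob.glob(os.path.expanduser('~/projects/*.py')))
```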
@@ -261,7 +272,6 @@ def run_create(
     repository,
     config,
     config_paths,
-    hook_context,
     local_borg_version,
     create_arguments,
     global_arguments,
@@ -279,14 +289,15 @@ def run_create(
     ):
         return
 
-    borgmatic.hooks.command.execute_hook(
-        config.get('before_backup'),
-        config.get('umask'),
-        config_filename,
-        'pre-backup',
-        global_arguments.dry_run,
-        **hook_context,
-    )
+    if config.get('list_details') and config.get('progress'):
+        raise ValueError(
+            'With the create action, only one of --list/--files/list_details and --progress/progress can be used.'
+        )
+
+    if config.get('list_details') and create_arguments.json:
+        raise ValueError(
+            'With the create action, only one of --list/--files/list_details and --json can be used.'
+        )
 
     logger.info(f'Creating archive{dry_run_label}')
     working_directory = borgmatic.config.paths.get_working_directory(config)
@@ -326,10 +337,7 @@ def run_create(
             borgmatic_runtime_directory,
             local_path=local_path,
             remote_path=remote_path,
-            progress=create_arguments.progress,
-            stats=create_arguments.stats,
             json=create_arguments.json,
-            list_files=create_arguments.list_files,
             stream_processes=stream_processes,
         )
 
@@ -343,12 +351,3 @@ def run_create(
             borgmatic_runtime_directory,
             global_arguments.dry_run,
         )
-
-    borgmatic.hooks.command.execute_hook(
-        config.get('after_backup'),
-        config.get('umask'),
-        config_filename,
-        'post-backup',
-        global_arguments.dry_run,
-        **hook_context,
-    )
@@ -43,6 +43,5 @@ def run_export_tar(
         local_path=local_path,
         remote_path=remote_path,
         tar_filter=export_tar_arguments.tar_filter,
-        list_files=export_tar_arguments.list_files,
         strip_components=export_tar_arguments.strip_components,
     )
@@ -12,7 +12,6 @@ def run_extract(
     config_filename,
     repository,
     config,
-    hook_context,
     local_borg_version,
     extract_arguments,
     global_arguments,
@@ -22,14 +21,6 @@ def run_extract(
     '''
     Run the "extract" action for the given repository.
     '''
-    borgmatic.hooks.command.execute_hook(
-        config.get('before_extract'),
-        config.get('umask'),
-        config_filename,
-        'pre-extract',
-        global_arguments.dry_run,
-        **hook_context,
-    )
     if extract_arguments.repository is None or borgmatic.config.validate.repositories_match(
         repository, extract_arguments.repository
     ):
@@ -54,13 +45,4 @@ def run_extract(
             remote_path=remote_path,
             destination_path=extract_arguments.destination,
             strip_components=extract_arguments.strip_components,
-            progress=extract_arguments.progress,
         )
-    borgmatic.hooks.command.execute_hook(
-        config.get('after_extract'),
-        config.get('umask'),
-        config_filename,
-        'post-extract',
-        global_arguments.dry_run,
-        **hook_context,
-    )
borgmatic/actions/import_key.py (new file, 33 lines)
@@ -0,0 +1,33 @@
import logging

import borgmatic.borg.import_key
import borgmatic.config.validate

logger = logging.getLogger(__name__)


def run_import_key(
    repository,
    config,
    local_borg_version,
    import_arguments,
    global_arguments,
    local_path,
    remote_path,
):
    '''
    Run the "key import" action for the given repository.
    '''
    if import_arguments.repository is None or borgmatic.config.validate.repositories_match(
        repository, import_arguments.repository
    ):
        logger.info('Importing repository key')
        borgmatic.borg.import_key.import_key(
            repository['path'],
            config,
            local_borg_version,
            import_arguments,
            global_arguments,
            local_path=local_path,
            remote_path=remote_path,
        )
@@ -11,7 +11,6 @@ def run_prune(
     config_filename,
     repository,
     config,
-    hook_context,
     local_borg_version,
     prune_arguments,
     global_arguments,
@@ -27,14 +26,6 @@ def run_prune(
     ):
         return
 
-    borgmatic.hooks.command.execute_hook(
-        config.get('before_prune'),
-        config.get('umask'),
-        config_filename,
-        'pre-prune',
-        global_arguments.dry_run,
-        **hook_context,
-    )
     logger.info(f'Pruning archives{dry_run_label}')
     borgmatic.borg.prune.prune_archives(
         global_arguments.dry_run,
@@ -46,11 +37,3 @@ def run_prune(
         local_path=local_path,
         remote_path=remote_path,
     )
-    borgmatic.hooks.command.execute_hook(
-        config.get('after_prune'),
-        config.get('umask'),
-        config_filename,
-        'post-prune',
-        global_arguments.dry_run,
-        **hook_context,
-    )
borgmatic/actions/recreate.py (new file, 53 lines)
@@ -0,0 +1,53 @@
import logging

import borgmatic.borg.recreate
import borgmatic.config.validate
from borgmatic.actions.create import collect_patterns, process_patterns

logger = logging.getLogger(__name__)


def run_recreate(
    repository,
    config,
    local_borg_version,
    recreate_arguments,
    global_arguments,
    local_path,
    remote_path,
):
    '''
    Run the "recreate" action for the given repository.
    '''
    if recreate_arguments.repository is None or borgmatic.config.validate.repositories_match(
        repository, recreate_arguments.repository
    ):
        if recreate_arguments.archive:
            logger.answer(f'Recreating archive {recreate_arguments.archive}')
        else:
            logger.answer('Recreating repository')

        # Collect and process patterns.
        processed_patterns = process_patterns(
            collect_patterns(config), borgmatic.config.paths.get_working_directory(config)
        )

        borgmatic.borg.recreate.recreate_archive(
            repository['path'],
            borgmatic.borg.repo_list.resolve_archive_name(
                repository['path'],
                recreate_arguments.archive,
                config,
                local_borg_version,
                global_arguments,
                local_path,
                remote_path,
            ),
            config,
            local_borg_version,
            recreate_arguments,
            global_arguments,
            local_path=local_path,
            remote_path=remote_path,
            patterns=processed_patterns,
        )
@@ -24,18 +24,38 @@ def run_repo_create(
         return
 
     logger.info('Creating repository')
+
+    encryption_mode = repo_create_arguments.encryption_mode or repository.get('encryption')
+
+    if not encryption_mode:
+        raise ValueError(
+            'With the repo-create action, either the --encryption flag or the repository encryption option is required.'
+        )
+
     borgmatic.borg.repo_create.create_repository(
         global_arguments.dry_run,
         repository['path'],
         config,
         local_borg_version,
         global_arguments,
-        repo_create_arguments.encryption_mode,
+        encryption_mode,
         repo_create_arguments.source_repository,
         repo_create_arguments.copy_crypt_key,
-        repo_create_arguments.append_only,
-        repo_create_arguments.storage_quota,
-        repo_create_arguments.make_parent_dirs,
+        (
+            repository.get('append_only')
+            if repo_create_arguments.append_only is None
+            else repo_create_arguments.append_only
+        ),
+        (
+            repository.get('storage_quota')
+            if repo_create_arguments.storage_quota is None
+            else repo_create_arguments.storage_quota
+        ),
+        (
+            repository.get('make_parent_directories')
+            if repo_create_arguments.make_parent_directories is None
+            else repo_create_arguments.make_parent_directories
+        ),
         local_path=local_path,
         remote_path=remote_path,
     )
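The pattern above (CLI flag wins, the configured option is the fallback) is the general shape of the new config-option defaults described in the NEWS. A hedged sketch of the precedence rule, with hypothetical names rather than borgmatic's actual helpers:

```python
def resolve_option(cli_value, config, option_name, default=None):
    # Command-line flags take precedence; a configured option is the fallback.
    # Note the "is None" test: an explicit False from the CLI must still win.
    if cli_value is not None:
        return cli_value
    return config.get(option_name, default)

config = {'append_only': True}
print(resolve_option(None, config, 'append_only'))   # True (from config)
print(resolve_option(False, config, 'append_only'))  # False (CLI override)
```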
@@ -17,7 +17,13 @@ def run_transfer(
     '''
     Run the "transfer" action for the given repository.
     '''
+    if transfer_arguments.archive and config.get('match_archives'):
+        raise ValueError(
+            'With the transfer action, only one of --archive and --match-archives/match_archives can be used.'
+        )
+
     logger.info('Transferring archives to repository')
 
     borgmatic.borg.transfer.transfer_archives(
         global_arguments.dry_run,
         repository['path'],
@@ -32,7 +32,7 @@ def make_archive_filter_flags(local_borg_version, config, checks, check_arguments):
         if prefix
         else (
             flags.make_match_archives_flags(
-                check_arguments.match_archives or config.get('match_archives'),
+                config.get('match_archives'),
                 config.get('archive_name_format'),
                 local_borg_version,
             )
@@ -170,7 +170,7 @@ def check_archives(
         + (('--log-json',) if global_arguments.log_json else ())
         + (('--lock-wait', str(lock_wait)) if lock_wait else ())
         + verbosity_flags
-        + (('--progress',) if check_arguments.progress else ())
+        + (('--progress',) if config.get('progress') else ())
         + (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
         + flags.make_repository_flags(repository_path, local_borg_version)
     )
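Throughout these hunks, Borg command lines are assembled by concatenating tuples that are empty when a flag doesn't apply. A self-contained sketch of the idiom (the function name here is illustrative, not borgmatic's API):

```python
def make_check_flags(progress=False, lock_wait=None, log_json=False):
    # Each conditional contributes either a tuple of flags or nothing at all,
    # so the final command line contains only the flags that apply.
    return (
        (('--log-json',) if log_json else ())
        + (('--lock-wait', str(lock_wait)) if lock_wait else ())
        + (('--progress',) if progress else ())
    )

print(make_check_flags(progress=True, lock_wait=5))
# ('--lock-wait', '5', '--progress')
```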
@@ -180,7 +180,7 @@ def check_archives(
         # The Borg repair option triggers an interactive prompt, which won't work when output is
         # captured. And progress messes with the terminal directly.
         output_file=(
-            DO_NOT_CAPTURE if check_arguments.repair or check_arguments.progress else None
+            DO_NOT_CAPTURE if check_arguments.repair or config.get('progress') else None
         ),
         environment=environment.make_environment(config),
         working_directory=working_directory,
@@ -15,9 +15,7 @@ def compact_segments(
     global_arguments,
     local_path='borg',
     remote_path=None,
-    progress=False,
     cleanup_commits=False,
-    threshold=None,
 ):
     '''
     Given dry-run flag, a local or remote repository path, a configuration dict, and the local Borg
@@ -26,6 +24,7 @@ def compact_segments(
     umask = config.get('umask', None)
     lock_wait = config.get('lock_wait', None)
     extra_borg_options = config.get('extra_borg_options', {}).get('compact', '')
+    threshold = config.get('compact_threshold')
 
     full_command = (
         (local_path, 'compact')
@@ -33,7 +32,7 @@ def compact_segments(
         + (('--umask', str(umask)) if umask else ())
         + (('--log-json',) if global_arguments.log_json else ())
         + (('--lock-wait', str(lock_wait)) if lock_wait else ())
-        + (('--progress',) if progress else ())
+        + (('--progress',) if config.get('progress') else ())
         + (('--cleanup-commits',) if cleanup_commits else ())
         + (('--threshold', str(threshold)) if threshold else ())
         + (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
@@ -196,7 +196,7 @@ def check_all_root_patterns_exist(patterns):
 
     if missing_paths:
         raise ValueError(
-            f"Source directories / root pattern paths do not exist: {', '.join(missing_paths)}"
+            f"Source directories or root pattern paths do not exist: {', '.join(missing_paths)}"
         )
 
 
@@ -213,9 +213,7 @@ def make_base_create_command(
     borgmatic_runtime_directory,
     local_path='borg',
     remote_path=None,
-    progress=False,
     json=False,
-    list_files=False,
     stream_processes=None,
 ):
     '''
@@ -293,7 +291,7 @@ def make_base_create_command(
         + (('--lock-wait', str(lock_wait)) if lock_wait else ())
         + (
             ('--list', '--filter', list_filter_flags)
-            if list_files and not json and not progress
+            if config.get('list_details') and not json and not config.get('progress')
             else ()
         )
         + (('--dry-run',) if dry_run else ())
@@ -361,10 +359,7 @@ def create_archive(
     borgmatic_runtime_directory,
     local_path='borg',
     remote_path=None,
-    progress=False,
-    stats=False,
     json=False,
-    list_files=False,
     stream_processes=None,
 ):
     '''
@@ -389,28 +384,26 @@ def create_archive(
         borgmatic_runtime_directory,
         local_path,
         remote_path,
-        progress,
         json,
-        list_files,
         stream_processes,
     )
 
     if json:
         output_log_level = None
-    elif list_files or (stats and not dry_run):
+    elif config.get('list_details') or (config.get('statistics') and not dry_run):
         output_log_level = logging.ANSWER
     else:
         output_log_level = logging.INFO
 
     # The progress output isn't compatible with captured and logged output, as progress messes with
     # the terminal directly.
-    output_file = DO_NOT_CAPTURE if progress else None
+    output_file = DO_NOT_CAPTURE if config.get('progress') else None
 
     create_flags += (
         (('--info',) if logger.getEffectiveLevel() == logging.INFO and not json else ())
-        + (('--stats',) if stats and not json and not dry_run else ())
+        + (('--stats',) if config.get('statistics') and not json and not dry_run else ())
         + (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) and not json else ())
-        + (('--progress',) if progress else ())
+        + (('--progress',) if config.get('progress') else ())
         + (('--json',) if json else ())
     )
     borg_exit_codes = config.get('borg_exit_codes')
@@ -34,7 +34,7 @@ def make_delete_command(
         + borgmatic.borg.flags.make_flags('umask', config.get('umask'))
         + borgmatic.borg.flags.make_flags('log-json', global_arguments.log_json)
         + borgmatic.borg.flags.make_flags('lock-wait', config.get('lock_wait'))
-        + borgmatic.borg.flags.make_flags('list', delete_arguments.list_archives)
+        + borgmatic.borg.flags.make_flags('list', config.get('list_details'))
         + (
             (('--force',) + (('--force',) if delete_arguments.force >= 2 else ()))
             if delete_arguments.force
@@ -48,9 +48,17 @@ def make_delete_command(
             local_borg_version=local_borg_version,
             default_archive_name_format='*',
         )
+        + (('--stats',) if config.get('statistics') else ())
         + borgmatic.borg.flags.make_flags_from_arguments(
             delete_arguments,
-            excludes=('list_archives', 'force', 'match_archives', 'archive', 'repository'),
+            excludes=(
+                'list_details',
+                'statistics',
+                'force',
+                'match_archives',
+                'archive',
+                'repository',
+            ),
         )
         + borgmatic.borg.flags.make_repository_flags(repository['path'], local_borg_version)
     )
@@ -98,7 +106,7 @@ def delete_archives(
 
     repo_delete_arguments = argparse.Namespace(
         repository=repository['path'],
-        list_archives=delete_arguments.list_archives,
+        list_details=delete_arguments.list_details,
         force=delete_arguments.force,
         cache_only=delete_arguments.cache_only,
         keep_security_info=delete_arguments.keep_security_info,
@@ -74,7 +74,7 @@ def make_environment(config):
     os.write(write_file_descriptor, passphrase.encode('utf-8'))
     os.close(write_file_descriptor)
 
-    # This, plus subprocess.Popen(..., close_fds=False) in execute.py, is necessary for the Borg
+    # This plus subprocess.Popen(..., close_fds=False) in execute.py is necessary for the Borg
     # child process to inherit the file descriptor.
     os.set_inheritable(read_file_descriptor, True)
     environment['BORG_PASSPHRASE_FD'] = str(read_file_descriptor)
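This is the anonymous-pipe technique the NEWS entries (#1009, #1013) describe: the secret never touches the environment or the command line, only an inherited file descriptor. A minimal standalone sketch (the secret here is obviously fake):

```python
import os
import subprocess
import sys

read_fd, write_fd = os.pipe()
os.write(write_fd, b'hunter2')
os.close(write_fd)

# Without this, Python marks the descriptor close-on-exec and the child
# couldn't read from it; close_fds=False keeps it open across exec.
os.set_inheritable(read_fd, True)

subprocess.run(
    [sys.executable, '-c', f'import os; print(os.read({read_fd}, 1024))'],
    close_fds=False,
    check=True,
)
```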
@@ -20,7 +20,6 @@ def export_tar_archive(
     local_path='borg',
     remote_path=None,
     tar_filter=None,
-    list_files=False,
     strip_components=None,
 ):
     '''
@@ -43,7 +42,7 @@ def export_tar_archive(
         + (('--log-json',) if global_arguments.log_json else ())
         + (('--lock-wait', str(lock_wait)) if lock_wait else ())
         + (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
-        + (('--list',) if list_files else ())
+        + (('--list',) if config.get('list_details') else ())
         + (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
         + (('--dry-run',) if dry_run else ())
         + (('--tar-filter', tar_filter) if tar_filter else ())
@@ -57,7 +56,7 @@ def export_tar_archive(
         + (tuple(paths) if paths else ())
     )
 
-    if list_files:
+    if config.get('list_details'):
         output_log_level = logging.ANSWER
     else:
         output_log_level = logging.INFO
@@ -77,7 +77,6 @@ def extract_archive(
     remote_path=None,
     destination_path=None,
     strip_components=None,
-    progress=False,
     extract_to_stdout=False,
 ):
     '''
@@ -92,8 +91,8 @@ def extract_archive(
     umask = config.get('umask', None)
     lock_wait = config.get('lock_wait', None)
 
-    if progress and extract_to_stdout:
-        raise ValueError('progress and extract_to_stdout cannot both be set')
+    if config.get('progress') and extract_to_stdout:
+        raise ValueError('progress and extract to stdout cannot both be set')
 
     if feature.available(feature.Feature.NUMERIC_IDS, local_borg_version):
         numeric_ids_flags = ('--numeric-ids',) if config.get('numeric_ids') else ()
@@ -128,15 +127,13 @@ def extract_archive(
         + (('--debug', '--list', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
         + (('--dry-run',) if dry_run else ())
         + (('--strip-components', str(strip_components)) if strip_components else ())
-        + (('--progress',) if progress else ())
+        + (('--progress',) if config.get('progress') else ())
         + (('--stdout',) if extract_to_stdout else ())
         + flags.make_repository_archive_flags(
             # Make the repository path absolute so the destination directory used below via changing
             # the working directory doesn't prevent Borg from finding the repo. But also apply the
             # user's configured working directory (if any) to the repo path.
-            borgmatic.config.validate.normalize_repository_path(
-                os.path.join(working_directory or '', repository)
-            ),
+            borgmatic.config.validate.normalize_repository_path(repository, working_directory),
             archive,
             local_borg_version,
         )
@@ -150,7 +147,7 @@ def extract_archive(
 
     # The progress output isn't compatible with captured and logged output, as progress messes with
     # the terminal directly.
-    if progress:
+    if config.get('progress'):
         return execute_command(
             full_command,
             output_file=DO_NOT_CAPTURE,
@@ -17,6 +17,7 @@ class Feature(Enum):
     MATCH_ARCHIVES = 11
     EXCLUDED_FILES_MINUS = 12
     ARCHIVE_SERIES = 13
+    NO_PRUNE_STATS = 14
 
 
 FEATURE_TO_MINIMUM_BORG_VERSION = {
@@ -33,6 +34,7 @@ FEATURE_TO_MINIMUM_BORG_VERSION = {
     Feature.MATCH_ARCHIVES: parse('2.0.0b3'),  # borg --match-archives
     Feature.EXCLUDED_FILES_MINUS: parse('2.0.0b5'),  # --list --filter uses "-" for excludes
     Feature.ARCHIVE_SERIES: parse('2.0.0b11'),  # identically named archives form a series
+    Feature.NO_PRUNE_STATS: parse('2.0.0b10'),  # prune --stats is not available
 }
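Feature gating like this keys borgmatic's behavior off the installed Borg version; the new NO_PRUNE_STATS entry is what lets NEWS item #1010 skip "--stats" for "borg prune" on Borg 2. A small sketch of the same idea, assuming the "packaging" library for version comparison:

```python
from packaging.version import parse

# Hypothetical one-entry feature table in the style of the one above.
NO_PRUNE_STATS = parse('2.0.0b10')  # "borg prune --stats" removed as of this version

def prune_supports_stats(local_borg_version):
    return parse(local_borg_version) < NO_PRUNE_STATS

print(prune_supports_stats('1.2.8'))     # True: --stats is fine
print(prune_supports_stats('2.0.0b12'))  # False: don't pass --stats
```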
borgmatic/borg/import_key.py (new file, 70 lines)
@@ -0,0 +1,70 @@
import logging
import os

import borgmatic.config.paths
import borgmatic.logger
from borgmatic.borg import environment, flags
from borgmatic.execute import DO_NOT_CAPTURE, execute_command

logger = logging.getLogger(__name__)


def import_key(
    repository_path,
    config,
    local_borg_version,
    import_arguments,
    global_arguments,
    local_path='borg',
    remote_path=None,
):
    '''
    Given a local or remote repository path, a configuration dict, the local Borg version, import
    arguments, and optional local and remote Borg paths, import the repository key from the
    path indicated in the import arguments.

    If the path is empty or "-", then read the key from stdin.

    Raise ValueError if the path is given and it does not exist.
    '''
    umask = config.get('umask', None)
    lock_wait = config.get('lock_wait', None)
    working_directory = borgmatic.config.paths.get_working_directory(config)

    if import_arguments.path and import_arguments.path != '-':
        if not os.path.exists(os.path.join(working_directory or '', import_arguments.path)):
            raise ValueError(f'Path {import_arguments.path} does not exist. Aborting.')

        input_file = None
    else:
        input_file = DO_NOT_CAPTURE

    full_command = (
        (local_path, 'key', 'import')
        + (('--remote-path', remote_path) if remote_path else ())
        + (('--umask', str(umask)) if umask else ())
        + (('--log-json',) if global_arguments.log_json else ())
        + (('--lock-wait', str(lock_wait)) if lock_wait else ())
        + (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
        + (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
        + flags.make_flags('paper', import_arguments.paper)
        + flags.make_repository_flags(
            repository_path,
            local_borg_version,
        )
        + ((import_arguments.path,) if input_file is None else ())
    )

    if global_arguments.dry_run:
        logger.info('Skipping key import (dry run)')
        return

    execute_command(
        full_command,
        input_file=input_file,
        output_log_level=logging.INFO,
        environment=environment.make_environment(config),
        working_directory=working_directory,
        borg_local_path=local_path,
        borg_exit_codes=config.get('borg_exit_codes'),
    )
@@ -48,9 +48,7 @@ def make_info_command(
         if info_arguments.prefix
         else (
             flags.make_match_archives_flags(
-                info_arguments.match_archives
-                or info_arguments.archive
-                or config.get('match_archives'),
+                info_arguments.archive or config.get('match_archives'),
                 config.get('archive_name_format'),
                 local_borg_version,
             )
@@ -41,7 +41,7 @@ def make_prune_flags(config, prune_arguments, local_borg_version):
         if prefix
         else (
             flags.make_match_archives_flags(
-                prune_arguments.match_archives or config.get('match_archives'),
+                config.get('match_archives'),
                 config.get('archive_name_format'),
                 local_borg_version,
             )
@@ -75,20 +75,26 @@ def prune_archives(
         + (('--umask', str(umask)) if umask else ())
         + (('--log-json',) if global_arguments.log_json else ())
         + (('--lock-wait', str(lock_wait)) if lock_wait else ())
-        + (('--stats',) if prune_arguments.stats and not dry_run else ())
+        + (
+            ('--stats',)
+            if config.get('statistics')
+            and not dry_run
+            and not feature.available(feature.Feature.NO_PRUNE_STATS, local_borg_version)
+            else ()
+        )
         + (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
         + flags.make_flags_from_arguments(
             prune_arguments,
-            excludes=('repository', 'match_archives', 'stats', 'list_archives'),
+            excludes=('repository', 'match_archives', 'statistics', 'list_details'),
         )
-        + (('--list',) if prune_arguments.list_archives else ())
+        + (('--list',) if config.get('list_details') else ())
         + (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
         + (('--dry-run',) if dry_run else ())
         + (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
         + flags.make_repository_flags(repository_path, local_borg_version)
     )
 
-    if prune_arguments.stats or prune_arguments.list_archives:
+    if config.get('statistics') or config.get('list_details'):
         output_log_level = logging.ANSWER
     else:
         output_log_level = logging.INFO
borgmatic/borg/recreate.py (new file, 103 lines)
@@ -0,0 +1,103 @@
import logging
import shlex

import borgmatic.borg.environment
import borgmatic.borg.feature
import borgmatic.config.paths
import borgmatic.execute
from borgmatic.borg import flags
from borgmatic.borg.create import make_exclude_flags, make_list_filter_flags, write_patterns_file

logger = logging.getLogger(__name__)


def recreate_archive(
    repository,
    archive,
    config,
    local_borg_version,
    recreate_arguments,
    global_arguments,
    local_path,
    remote_path=None,
    patterns=None,
):
    '''
    Given a local or remote repository path, an archive name, a configuration dict, the local Borg
    version string, an argparse.Namespace of recreate arguments, an argparse.Namespace of global
    arguments, optional local and remote Borg paths, executes the recreate command with the given
    arguments.
    '''
    lock_wait = config.get('lock_wait', None)
    exclude_flags = make_exclude_flags(config)
    compression = config.get('compression', None)
    chunker_params = config.get('chunker_params', None)
    # Available recompress MODES: "if-different", "always", "never" (default)
    recompress = config.get('recompress', None)

    # Write patterns to a temporary file and use that file with --patterns-from.
    patterns_file = write_patterns_file(
        patterns, borgmatic.config.paths.get_working_directory(config)
    )

    recreate_command = (
        (local_path, 'recreate')
        + (('--remote-path', remote_path) if remote_path else ())
        + (('--log-json',) if global_arguments.log_json else ())
        + (('--lock-wait', str(lock_wait)) if lock_wait is not None else ())
        + (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
        + (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
        + (('--patterns-from', patterns_file.name) if patterns_file else ())
        + (
            (
                '--list',
                '--filter',
                make_list_filter_flags(local_borg_version, global_arguments.dry_run),
            )
            if config.get('list_details')
            else ()
        )
        # Flag --target works only for a single archive.
        + (('--target', recreate_arguments.target) if recreate_arguments.target and archive else ())
        + (
            ('--comment', shlex.quote(recreate_arguments.comment))
            if recreate_arguments.comment
            else ()
        )
        + (('--timestamp', recreate_arguments.timestamp) if recreate_arguments.timestamp else ())
        + (('--compression', compression) if compression else ())
        + (('--chunker-params', chunker_params) if chunker_params else ())
        + (('--recompress', recompress) if recompress else ())
        + exclude_flags
        + (
            (
                flags.make_repository_flags(repository, local_borg_version)
                + flags.make_match_archives_flags(
                    archive or config.get('match_archives'),
                    config.get('archive_name_format'),
                    local_borg_version,
                )
            )
            if borgmatic.borg.feature.available(
                borgmatic.borg.feature.Feature.SEPARATE_REPOSITORY_ARCHIVE, local_borg_version
            )
            else (
                flags.make_repository_archive_flags(repository, archive, local_borg_version)
                if archive
                else flags.make_repository_flags(repository, local_borg_version)
            )
        )
    )

    if global_arguments.dry_run:
        logger.info('Skipping the archive recreation (dry run)')
        return

    borgmatic.execute.execute_command(
        full_command=recreate_command,
        output_log_level=logging.INFO,
        environment=borgmatic.borg.environment.make_environment(config),
        working_directory=borgmatic.config.paths.get_working_directory(config),
        borg_local_path=local_path,
        borg_exit_codes=config.get('borg_exit_codes'),
    )
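One detail worth calling out in the new file: the user-supplied archive comment is passed through shlex.quote() so shell metacharacters can't alter the assembled command. For example:

```python
import shlex

comment = "retroactively excluded logs; see ticket #610"
print(shlex.quote(comment))
# 'retroactively excluded logs; see ticket #610'  (single-quoted; ";" is neutralized)
```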
@@ -24,7 +24,7 @@ def create_repository(
     copy_crypt_key=False,
     append_only=None,
     storage_quota=None,
-    make_parent_dirs=False,
+    make_parent_directories=False,
     local_path='borg',
     remote_path=None,
 ):
@@ -79,7 +79,7 @@ def create_repository(
         + (('--copy-crypt-key',) if copy_crypt_key else ())
         + (('--append-only',) if append_only else ())
         + (('--storage-quota', storage_quota) if storage_quota else ())
-        + (('--make-parent-dirs',) if make_parent_dirs else ())
+        + (('--make-parent-dirs',) if make_parent_directories else ())
         + (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
         + (('--debug',) if logger.isEnabledFor(logging.DEBUG) else ())
         + (('--log-json',) if global_arguments.log_json else ())
@@ -39,14 +39,14 @@ def make_repo_delete_command(
         + borgmatic.borg.flags.make_flags('umask', config.get('umask'))
         + borgmatic.borg.flags.make_flags('log-json', global_arguments.log_json)
         + borgmatic.borg.flags.make_flags('lock-wait', config.get('lock_wait'))
-        + borgmatic.borg.flags.make_flags('list', repo_delete_arguments.list_archives)
+        + borgmatic.borg.flags.make_flags('list', config.get('list_details'))
         + (
             (('--force',) + (('--force',) if repo_delete_arguments.force >= 2 else ()))
             if repo_delete_arguments.force
             else ()
         )
         + borgmatic.borg.flags.make_flags_from_arguments(
-            repo_delete_arguments, excludes=('list_archives', 'force', 'repository')
+            repo_delete_arguments, excludes=('list_details', 'force', 'repository')
         )
         + borgmatic.borg.flags.make_repository_flags(repository['path'], local_borg_version)
     )
@@ -113,7 +113,7 @@ def make_repo_list_command(
         if repo_list_arguments.prefix
         else (
             flags.make_match_archives_flags(
-                repo_list_arguments.match_archives or config.get('match_archives'),
+                config.get('match_archives'),
                 config.get('archive_name_format'),
                 local_borg_version,
             )
@@ -32,17 +32,22 @@ def transfer_archives(
         + flags.make_flags('remote-path', remote_path)
         + flags.make_flags('umask', config.get('umask'))
         + flags.make_flags('log-json', global_arguments.log_json)
-        + flags.make_flags('lock-wait', config.get('lock_wait', None))
+        + flags.make_flags('lock-wait', config.get('lock_wait'))
+        + flags.make_flags('progress', config.get('progress'))
         + (
             flags.make_flags_from_arguments(
                 transfer_arguments,
-                excludes=('repository', 'source_repository', 'archive', 'match_archives'),
+                excludes=(
+                    'repository',
+                    'source_repository',
+                    'archive',
+                    'match_archives',
+                    'progress',
+                ),
             )
             or (
                 flags.make_match_archives_flags(
-                    transfer_arguments.match_archives
-                    or transfer_arguments.archive
-                    or config.get('match_archives'),
+                    transfer_arguments.archive or config.get('match_archives'),
                     config.get('archive_name_format'),
                     local_borg_version,
                 )
@@ -56,7 +61,7 @@ def transfer_archives(
     return execute_command(
         full_command,
         output_log_level=logging.ANSWER,
-        output_file=DO_NOT_CAPTURE if transfer_arguments.progress else None,
+        output_file=DO_NOT_CAPTURE if config.get('progress') else None,
         environment=environment.make_environment(config),
         working_directory=borgmatic.config.paths.get_working_directory(config),
         borg_local_path=local_path,
@@ -1,8 +1,13 @@
 import collections
+import io
 import itertools
+import re
 import sys
 from argparse import ArgumentParser
 
+import ruamel.yaml
+
+import borgmatic.config.schema
 from borgmatic.config import collect
 
 ACTION_ALIASES = {
@@ -27,6 +32,7 @@ ACTION_ALIASES = {
     'break-lock': [],
     'key': [],
     'borg': [],
+    'recreate': [],
 }
 
 
@@ -63,9 +69,9 @@ def get_subactions_for_actions(action_parsers):
 
 def omit_values_colliding_with_action_names(unparsed_arguments, parsed_arguments):
     '''
-    Given a sequence of string arguments and a dict from action name to parsed argparse.Namespace
-    arguments, return the string arguments with any values omitted that happen to be the same as
-    the name of a borgmatic action.
+    Given unparsed arguments as a sequence of strings and a dict from action name to parsed
+    argparse.Namespace arguments, return the string arguments with any values omitted that happen to
+    be the same as the name of a borgmatic action.
 
     This prevents, for instance, "check --only extract" from triggering the "extract" action.
     '''
@@ -282,17 +288,270 @@ def parse_arguments_for_actions(unparsed_arguments, action_parsers, global_parser):
     )
 
 
-def make_parsers():
+OMITTED_FLAG_NAMES = {'match-archives', 'progress', 'statistics', 'list-details'}
+
+
+def make_argument_description(schema, flag_name):
     '''
-    Build a global arguments parser, individual action parsers, and a combined parser containing
-    both. Return them as a tuple. The global parser is useful for parsing just global arguments
-    while ignoring actions, and the combined parser is handy for displaying help that includes
-    everything: global flags, a list of actions, etc.
+    Given a configuration schema dict and a flag name for it, extend the schema's description with
+    an example or additional information as appropriate based on its type. Return the updated
+    description for use in a command-line argument.
     '''
+    description = schema.get('description')
+    schema_type = schema.get('type')
+    example = schema.get('example')
+    pieces = [description] if description else []
+
+    if '[0]' in flag_name:
+        pieces.append(
+            ' To specify a different list element, replace the "[0]" with another array index ("[1]", "[2]", etc.).'
+        )
+
+    if example and schema_type in ('array', 'object'):
+        example_buffer = io.StringIO()
+        yaml = ruamel.yaml.YAML(typ='safe')
+        yaml.default_flow_style = True
+        yaml.dump(example, example_buffer)
+
+        pieces.append(f'Example value: "{example_buffer.getvalue().strip()}"')
+
+    return ' '.join(pieces).replace('%', '%%')
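The flow-style YAML dump above is what turns a structured example value into a one-line string for --help output. A standalone sketch (the example dict is made up):

```python
import io

import ruamel.yaml

example_buffer = io.StringIO()
yaml = ruamel.yaml.YAML(typ='safe')
yaml.default_flow_style = True  # keep the whole example on a single line
yaml.dump({'keep_daily': 7, 'keep_weekly': 4}, example_buffer)

print(example_buffer.getvalue().strip())  # {keep_daily: 7, keep_weekly: 4}
```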
+
+
+def add_array_element_arguments(arguments_group, unparsed_arguments, flag_name):
+    r'''
+    Given an argparse._ArgumentGroup instance, a sequence of unparsed argument strings, and a dotted
+    flag name, add command-line array element flags that correspond to the given unparsed arguments.
+
+    Here's the background. We want to support flags that can have arbitrary indices like:
+
+        --foo.bar[1].baz
+
+    But argparse doesn't support that natively because the index can be an arbitrary number. We
+    won't let that stop us though, will we?
+
+    If the current flag name has an array component in it (e.g. a name with "[0]"), then make a
+    pattern that would match the flag name regardless of the number that's in it. The idea is that
+    we want to look for unparsed arguments that appear like the flag name, but instead of "[0]" they
+    have, say, "[1]" or "[123]".
+
+    Next, we check each unparsed argument against that pattern. If one of them matches, add an
+    argument flag for it to the argument parser group. Example:
+
+    Let's say flag_name is:
+
+        --foo.bar[0].baz
+
+    ... then the regular expression pattern will be:
+
+        ^--foo\.bar\[\d+\]\.baz
+
+    ... and, if that matches an unparsed argument of:
+
+        --foo.bar[1].baz
+
+    ... then an argument flag will get added equal to that unparsed argument. And so the unparsed
+    argument will match it when parsing is performed! In this manner, we're using the actual user
+    CLI input to inform what exact flags we support.
+    '''
+    if '[0]' not in flag_name or not unparsed_arguments or '--help' in unparsed_arguments:
+        return
+
+    pattern = re.compile(fr'^--{flag_name.replace("[0]", r"\[\d+\]").replace(".", r"\.")}$')
+
+    try:
+        # Find an existing list index flag (and its action) corresponding to the given flag name.
+        (argument_action, existing_flag_name) = next(
+            (action, action_flag_name)
+            for action in arguments_group._group_actions
+            for action_flag_name in action.option_strings
+            if pattern.match(action_flag_name)
+            if f'--{flag_name}'.startswith(action_flag_name)
+        )
+
+        # Based on the type of the action (e.g. argparse._StoreTrueAction), look up the corresponding
+        # action registry name (e.g., "store_true") to pass to add_argument(action=...) below.
+        action_registry_name = next(
+            registry_name
+            for registry_name, action_type in arguments_group._registries['action'].items()
+            # Not using isinstance() here because we only want an exact match—no parent classes.
+            if type(argument_action) is action_type
+        )
+    except StopIteration:
+        return
+
+    for unparsed in unparsed_arguments:
+        unparsed_flag_name = unparsed.split('=', 1)[0]
+        destination_name = unparsed_flag_name.lstrip('-').replace('-', '_')
+
+        if not pattern.match(unparsed_flag_name) or unparsed_flag_name == existing_flag_name:
+            continue
+
+        if action_registry_name in ('store_true', 'store_false'):
+            arguments_group.add_argument(
+                unparsed_flag_name,
+                action=action_registry_name,
+                default=argument_action.default,
+                dest=destination_name,
+                required=argument_action.nargs,
+            )
+        else:
+            arguments_group.add_argument(
+                unparsed_flag_name,
+                action=action_registry_name,
+                choices=argument_action.choices,
+                default=argument_action.default,
+                dest=destination_name,
+                nargs=argument_action.nargs,
+                required=argument_action.nargs,
+                type=argument_action.type,
+            )
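The pattern construction is the heart of it: the literal "[0]" in the canonical flag name is swapped for a digit wildcard and the dots are escaped. A quick check of the behavior:

```python
import re

flag_name = 'foo.bar[0].baz'
pattern = re.compile(fr'^--{flag_name.replace("[0]", r"\[\d+\]").replace(".", r"\.")}$')

print(bool(pattern.match('--foo.bar[1].baz')))    # True
print(bool(pattern.match('--foo.bar[123].baz')))  # True: any index matches
print(bool(pattern.match('--fooXbar[1].baz')))    # False: dots are escaped, not wildcards
```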
+
+
+def add_arguments_from_schema(arguments_group, schema, unparsed_arguments, names=None):
+    '''
+    Given an argparse._ArgumentGroup instance, a configuration schema dict, and a sequence of
+    unparsed argument strings, convert the entire schema into corresponding command-line flags and
+    add them to the arguments group.
+
+    For instance, given a schema of:
+
+        {
+            'type': 'object',
+            'properties': {
+                'foo': {
+                    'type': 'object',
+                    'properties': {
+                        'bar': {'type': 'integer'}
+                    }
+                }
+            }
+        }
+
+    ... the following flag will be added to the arguments group:
+
+        --foo.bar
+
+    If "foo" is instead an array of objects, both of the following will get added:
+
+        --foo
+        --foo[0].bar
+
+    And if names are also passed in, they are considered to be the name components of an option
+    (e.g. "foo" and "bar") and are used to construct a resulting flag.
+
+    Bail if the schema is not a dict.
+    '''
+    if names is None:
+        names = ()
+
+    if not isinstance(schema, dict):
+        return
+
+    schema_type = schema.get('type')
+
+    # If this option has multiple types, just use the first one (that isn't "null").
+    if isinstance(schema_type, list):
+        try:
+            schema_type = next(single_type for single_type in schema_type if single_type != 'null')
+        except StopIteration:
+            raise ValueError(f'Unknown type in configuration schema: {schema_type}')
+
+    # If this is an "object" type, recurse for each child option ("property").
+    if schema_type == 'object':
+        properties = schema.get('properties')
+
+        # If there are child properties, recurse for each one. But if there are no child properties,
+        # fall through so that a flag gets added below for the (empty) object.
+        if properties:
+            for name, child in properties.items():
+                add_arguments_from_schema(
+                    arguments_group, child, unparsed_arguments, names + (name,)
+                )
+
+            return
+
+    # If this is an "array" type, recurse for each items type child option. Don't return yet so that
+    # a flag also gets added below for the array itself.
+    if schema_type == 'array':
+        items = schema.get('items', {})
+        properties = borgmatic.config.schema.get_properties(items)
+
+        if properties:
+            for name, child in properties.items():
+                add_arguments_from_schema(
+                    arguments_group,
+                    child,
+                    unparsed_arguments,
+                    names[:-1] + (f'{names[-1]}[0]',) + (name,),
+                )
+        # If there aren't any children, then this is an array of scalars. Recurse accordingly.
+        else:
+            add_arguments_from_schema(
+                arguments_group, items, unparsed_arguments, names[:-1] + (f'{names[-1]}[0]',)
+            )
+
+    flag_name = '.'.join(names).replace('_', '-')
+
+    # Certain options already have corresponding flags on individual actions (like "create
+    # --progress"), so don't bother adding them to the global flags.
+    if not flag_name or flag_name in OMITTED_FLAG_NAMES:
+        return
+
+    metavar = names[-1].upper()
+    description = make_argument_description(schema, flag_name)
+
+    # The object=str and array=str given here is to support specifying an object or an array as a
+    # YAML string on the command-line.
+    argument_type = borgmatic.config.schema.parse_type(schema_type, object=str, array=str)
+
+    # As a UX nicety, add separate true and false flags for boolean options.
+    if schema_type == 'boolean':
+        arguments_group.add_argument(
+            f'--{flag_name}',
+            action='store_true',
+            default=None,
+            help=description,
+        )
+
+        if names[-1].startswith('no_'):
+            no_flag_name = '.'.join(names[:-1] + (names[-1][len('no_') :],)).replace('_', '-')
+        else:
+            no_flag_name = '.'.join(names[:-1] + ('no-' + names[-1],)).replace('_', '-')
+
+        arguments_group.add_argument(
+            f'--{no_flag_name}',
+            dest=flag_name.replace('-', '_'),
|
||||
action='store_false',
|
||||
default=None,
|
||||
help=f'Set the --{flag_name} value to false.',
|
||||
)
|
||||
else:
|
||||
arguments_group.add_argument(
|
||||
f'--{flag_name}',
|
||||
type=argument_type,
|
||||
metavar=metavar,
|
||||
help=description,
|
||||
)
|
||||
|
||||
add_array_element_arguments(arguments_group, unparsed_arguments, flag_name)
|
||||
|
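
# A hypothetical usage sketch (not part of the diff), assuming this module's helpers
# (make_argument_description, OMITTED_FLAG_NAMES) are in scope and that "foo.bar"
# isn't an omitted flag name: a tiny schema becomes a dotted --foo.bar flag whose
# parsed value lands on the "foo.bar" attribute of the namespace.
import argparse

parser = argparse.ArgumentParser(add_help=False)
group = parser.add_argument_group('global arguments')
add_arguments_from_schema(
    group,
    {
        'type': 'object',
        'properties': {'foo': {'type': 'object', 'properties': {'bar': {'type': 'integer'}}}},
    },
    unparsed_arguments=(),
)

assert getattr(parser.parse_args(['--foo.bar', '3']), 'foo.bar') == 3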


def make_parsers(schema, unparsed_arguments):
    '''
    Given a configuration schema dict and unparsed arguments as a sequence of strings, build a
    global arguments parser, individual action parsers, and a combined parser containing both.
    Return them as a tuple. The global parser is useful for parsing just global arguments while
    ignoring actions, and the combined parser is handy for displaying help that includes everything:
    global flags, a list of actions, etc.
    '''
    config_paths = collect.get_default_config_paths(expand_home=True)
    unexpanded_config_paths = collect.get_default_config_paths(expand_home=False)

-    global_parser = ArgumentParser(add_help=False)
+    # Using allow_abbrev=False here prevents the global parser from erroring about "ambiguous"
+    # options like --encryption. Such options are intended for an action parser rather than the
+    # global parser, and so we don't want to error on them here.
+    global_parser = ArgumentParser(allow_abbrev=False, add_help=False)
    global_group = global_parser.add_argument_group('global arguments')

    global_group.add_argument(
@@ -309,9 +568,6 @@ def make_parsers():
         action='store_true',
         help='Go through the motions, but do not actually write to any repositories',
     )
-    global_group.add_argument(
-        '-nc', '--no-color', dest='no_color', action='store_true', help='Disable colored output'
-    )
     global_group.add_argument(
         '-v',
         '--verbosity',
@@ -388,6 +644,7 @@ def make_parsers():
         action='store_true',
         help='Display installed version number of borgmatic and exit',
     )
+    add_arguments_from_schema(global_group, schema, unparsed_arguments)
 
     global_plus_action_parser = ArgumentParser(
         description='''
@@ -415,7 +672,6 @@ def make_parsers():
         '--encryption',
         dest='encryption_mode',
         help='Borg repository encryption mode',
-        required=True,
     )
     repo_create_group.add_argument(
         '--source-repository',
@@ -434,6 +690,7 @@ def make_parsers():
     )
     repo_create_group.add_argument(
         '--append-only',
+        default=None,
         action='store_true',
         help='Create an append-only repository',
     )
@@ -443,6 +700,8 @@ def make_parsers():
     )
     repo_create_group.add_argument(
         '--make-parent-dirs',
+        dest='make_parent_directories',
+        default=None,
         action='store_true',
         help='Create any missing parent directories of the repository directory',
     )
@@ -477,7 +736,7 @@ def make_parsers():
     )
     transfer_group.add_argument(
         '--progress',
-        default=False,
+        default=None,
         action='store_true',
         help='Display progress as each archive is transferred',
     )
@@ -544,13 +803,17 @@ def make_parsers():
     )
     prune_group.add_argument(
         '--stats',
-        dest='stats',
-        default=False,
+        dest='statistics',
+        default=None,
         action='store_true',
-        help='Display statistics of the pruned archive',
+        help='Display statistics of the pruned archive [Borg 1 only]',
     )
     prune_group.add_argument(
-        '--list', dest='list_archives', action='store_true', help='List archives kept/pruned'
+        '--list',
+        dest='list_details',
+        default=None,
+        action='store_true',
+        help='List archives kept/pruned',
     )
     prune_group.add_argument(
         '--oldest',
@@ -588,8 +851,7 @@ def make_parsers():
     )
     compact_group.add_argument(
         '--progress',
-        dest='progress',
-        default=False,
+        default=None,
         action='store_true',
         help='Display progress as each segment is compacted',
     )
@@ -603,7 +865,7 @@ def make_parsers():
     compact_group.add_argument(
         '--threshold',
         type=int,
-        dest='threshold',
+        dest='compact_threshold',
         help='Minimum saved space percentage threshold for compacting a segment, defaults to 10',
     )
     compact_group.add_argument(
@@ -624,20 +886,24 @@ def make_parsers():
     )
     create_group.add_argument(
         '--progress',
-        dest='progress',
-        default=False,
+        default=None,
         action='store_true',
         help='Display progress for each file as it is backed up',
     )
     create_group.add_argument(
         '--stats',
-        dest='stats',
-        default=False,
+        dest='statistics',
+        default=None,
         action='store_true',
         help='Display statistics of archive',
     )
     create_group.add_argument(
-        '--list', '--files', dest='list_files', action='store_true', help='Show per-file details'
+        '--list',
+        '--files',
+        dest='list_details',
+        default=None,
+        action='store_true',
+        help='Show per-file details',
     )
     create_group.add_argument(
         '--json', dest='json', default=False, action='store_true', help='Output results as JSON'
@@ -658,8 +924,7 @@ def make_parsers():
     )
     check_group.add_argument(
         '--progress',
-        dest='progress',
-        default=False,
+        default=None,
         action='store_true',
         help='Display progress for each file as it is checked',
     )
@@ -716,12 +981,15 @@ def make_parsers():
     )
     delete_group.add_argument(
         '--list',
-        dest='list_archives',
+        dest='list_details',
+        default=None,
         action='store_true',
         help='Show details for the deleted archives',
     )
     delete_group.add_argument(
         '--stats',
+        dest='statistics',
+        default=None,
         action='store_true',
         help='Display statistics for the deleted archives',
     )
@@ -826,8 +1094,7 @@ def make_parsers():
     )
     extract_group.add_argument(
         '--progress',
-        dest='progress',
-        default=False,
+        default=None,
         action='store_true',
         help='Display progress for each file as it is extracted',
     )
@@ -902,8 +1169,7 @@ def make_parsers():
     )
     config_bootstrap_group.add_argument(
         '--progress',
-        dest='progress',
-        default=False,
+        default=None,
         action='store_true',
         help='Display progress for each file as it is extracted',
     )
@@ -996,7 +1262,12 @@ def make_parsers():
         '--tar-filter', help='Name of filter program to pipe data through'
     )
     export_tar_group.add_argument(
-        '--list', '--files', dest='list_files', action='store_true', help='Show per-file details'
+        '--list',
+        '--files',
+        dest='list_details',
+        default=None,
+        action='store_true',
+        help='Show per-file details',
     )
     export_tar_group.add_argument(
         '--strip-components',
@@ -1107,7 +1378,8 @@ def make_parsers():
     )
     repo_delete_group.add_argument(
         '--list',
-        dest='list_archives',
+        dest='list_details',
+        default=None,
         action='store_true',
         help='Show details for the archives in the given repository',
     )
@@ -1479,6 +1751,31 @@ def make_parsers():
         '-h', '--help', action='help', help='Show this help message and exit'
     )
 
+    key_import_parser = key_parsers.add_parser(
+        'import',
+        help='Import a copy of the repository key from backup',
+        description='Import a copy of the repository key from backup',
+        add_help=False,
+    )
+    key_import_group = key_import_parser.add_argument_group('key import arguments')
+    key_import_group.add_argument(
+        '--paper',
+        action='store_true',
+        help='Import interactively from a backup done with --paper',
+    )
+    key_import_group.add_argument(
+        '--repository',
+        help='Path of repository to import the key from, defaults to the configured repository if there is only one, quoted globs supported',
+    )
+    key_import_group.add_argument(
+        '--path',
+        metavar='PATH',
+        help='Path to import the key from backup, defaults to stdin',
+    )
+    key_import_group.add_argument(
+        '-h', '--help', action='help', help='Show this help message and exit'
+    )
+
     key_change_passphrase_parser = key_parsers.add_parser(
         'change-passphrase',
         help='Change the passphrase protecting the repository key',
@@ -1496,6 +1793,56 @@ def make_parsers():
         '-h', '--help', action='help', help='Show this help message and exit'
     )
 
+    recreate_parser = action_parsers.add_parser(
+        'recreate',
+        aliases=ACTION_ALIASES['recreate'],
+        help='Recreate an archive in a repository (with Borg 1.2+, you must run compact afterwards to actually free space)',
+        description='Recreate an archive in a repository (with Borg 1.2+, you must run compact afterwards to actually free space)',
+        add_help=False,
+    )
+    recreate_group = recreate_parser.add_argument_group('recreate arguments')
+    recreate_group.add_argument(
+        '--repository',
+        help='Path of repository containing archive to recreate, defaults to the configured repository if there is only one, quoted globs supported',
+    )
+    recreate_group.add_argument(
+        '--archive',
+        help='Archive name, hash, or series to recreate',
+    )
+    recreate_group.add_argument(
+        '--list',
+        dest='list_details',
+        default=None,
+        action='store_true',
+        help='Show per-file details',
+    )
+    recreate_group.add_argument(
+        '--target',
+        metavar='TARGET',
+        help='Create a new archive from the specified archive (via --archive), without replacing it',
+    )
+    recreate_group.add_argument(
+        '--comment',
+        metavar='COMMENT',
+        help='Add a comment text to the archive or, if an archive is not provided, to all matching archives',
+    )
+    recreate_group.add_argument(
+        '--timestamp',
+        metavar='TIMESTAMP',
+        help='Manually override the archive creation date/time (UTC)',
+    )
+    recreate_group.add_argument(
+        '-a',
+        '--match-archives',
+        '--glob-archives',
+        dest='match_archives',
+        metavar='PATTERN',
+        help='Only consider archive names, hashes, or series matching this pattern [Borg 2.x+ only]',
+    )
+    recreate_group.add_argument(
+        '-h', '--help', action='help', help='Show this help message and exit'
+    )
+
     borg_parser = action_parsers.add_parser(
         'borg',
         aliases=ACTION_ALIASES['borg'],
@@ -1523,15 +1870,18 @@ def make_parsers():
     return global_parser, action_parsers, global_plus_action_parser
 
 
-def parse_arguments(*unparsed_arguments):
+def parse_arguments(schema, *unparsed_arguments):
     '''
-    Given command-line arguments with which this script was invoked, parse the arguments and return
-    them as a dict mapping from action name (or "global") to an argparse.Namespace instance.
+    Given a configuration schema dict and the command-line arguments with which this script was
+    invoked and unparsed arguments as a sequence of strings, parse the arguments and return them as
+    a dict mapping from action name (or "global") to an argparse.Namespace instance.
 
     Raise ValueError if the arguments cannot be parsed.
     Raise SystemExit with an error code of 0 if "--help" was requested.
     '''
-    global_parser, action_parsers, global_plus_action_parser = make_parsers()
+    global_parser, action_parsers, global_plus_action_parser = make_parsers(
+        schema, unparsed_arguments
+    )
     arguments, remaining_action_arguments = parse_arguments_for_actions(
         unparsed_arguments, action_parsers.choices, global_parser
     )
@@ -1559,15 +1909,6 @@ def parse_arguments(*unparsed_arguments):
             f"Unrecognized argument{'s' if len(unknown_arguments) > 1 else ''}: {' '.join(unknown_arguments)}"
         )
 
-    if 'create' in arguments and arguments['create'].list_files and arguments['create'].progress:
-        raise ValueError(
-            'With the create action, only one of --list (--files) and --progress flags can be used.'
-        )
-    if 'create' in arguments and arguments['create'].list_files and arguments['create'].json:
-        raise ValueError(
-            'With the create action, only one of --list (--files) and --json flags can be used.'
-        )
-
     if (
         ('list' in arguments and 'repo-info' in arguments and arguments['list'].json)
         or ('list' in arguments and 'info' in arguments and arguments['list'].json)
@@ -1575,15 +1916,6 @@ def parse_arguments(*unparsed_arguments):
     ):
         raise ValueError('With the --json flag, multiple actions cannot be used together.')
 
-    if (
-        'transfer' in arguments
-        and arguments['transfer'].archive
-        and arguments['transfer'].match_archives
-    ):
-        raise ValueError(
-            'With the transfer action, only one of --archive and --match-archives flags can be used.'
-        )
-
     if 'list' in arguments and (arguments['list'].prefix and arguments['list'].match_archives):
         raise ValueError(
             'With the list action, only one of --prefix or --match-archives flags can be used.'

File diff suppressed because it is too large
@@ -1,5 +1,7 @@
 import borgmatic.commands.arguments
 import borgmatic.commands.completion.actions
+import borgmatic.commands.completion.flag
+import borgmatic.config.validate
 
 
 def parser_flags(parser):
@@ -7,7 +9,12 @@ def parser_flags(parser):
     Given an argparse.ArgumentParser instance, return its argument flags in a space-separated
     string.
     '''
-    return ' '.join(option for action in parser._actions for option in action.option_strings)
+    return ' '.join(
+        flag_variant
+        for action in parser._actions
+        for flag_name in action.option_strings
+        for flag_variant in borgmatic.commands.completion.flag.variants(flag_name)
+    )
 
 
 def bash_completion():
@@ -19,7 +26,10 @@ def bash_completion():
         unused_global_parser,
         action_parsers,
         global_plus_action_parser,
-    ) = borgmatic.commands.arguments.make_parsers()
+    ) = borgmatic.commands.arguments.make_parsers(
+        schema=borgmatic.config.validate.load_schema(borgmatic.config.validate.schema_filename()),
+        unparsed_arguments=(),
+    )
     global_flags = parser_flags(global_plus_action_parser)
 
     # Avert your eyes.
@@ -4,6 +4,7 @@ from textwrap import dedent
 
 import borgmatic.commands.arguments
 import borgmatic.commands.completion.actions
+import borgmatic.config.validate
 
 
 def has_file_options(action: Action):
@@ -26,9 +27,11 @@ def has_choice_options(action: Action):
 def has_unknown_required_param_options(action: Action):
     '''
     A catch-all for options that take a required parameter, but we don't know what the parameter is.
-    This should be used last. These are actions that take something like a glob, a list of numbers, or a string.
+    This should be used last. These are actions that take something like a glob, a list of numbers,
+    or a string.
 
-    Actions that match this pattern should not show the normal arguments, because those are unlikely to be valid.
+    Actions that match this pattern should not show the normal arguments, because those are unlikely
+    to be valid.
     '''
     return (
         action.required is True
@@ -52,9 +55,9 @@ def has_exact_options(action: Action):
 
 def exact_options_completion(action: Action):
     '''
-    Given an argparse.Action instance, return a completion invocation that forces file completions, options completion,
-    or just that some value follow the action, if the action takes such an argument and was the last action on the
-    command line prior to the cursor.
+    Given an argparse.Action instance, return a completion invocation that forces file completions,
+    options completion, or just that some value follow the action, if the action takes such an
+    argument and was the last action on the command line prior to the cursor.
 
     Otherwise, return an empty string.
     '''
@@ -80,8 +83,9 @@ def exact_options_completion(action: Action):
 
 def dedent_strip_as_tuple(string: str):
     '''
-    Dedent a string, then strip it to avoid requiring your first line to have content, then return a tuple of the string.
-    Makes it easier to write multiline strings for completions when you join them with a tuple.
+    Dedent a string, then strip it to avoid requiring your first line to have content, then return a
+    tuple of the string. Makes it easier to write multiline strings for completions when you join
+    them with a tuple.
     '''
     return (dedent(string).strip('\n'),)
@@ -95,7 +99,10 @@ def fish_completion():
         unused_global_parser,
         action_parsers,
         global_plus_action_parser,
-    ) = borgmatic.commands.arguments.make_parsers()
+    ) = borgmatic.commands.arguments.make_parsers(
+        schema=borgmatic.config.validate.load_schema(borgmatic.config.validate.schema_filename()),
+        unparsed_arguments=(),
+    )
 
     all_action_parsers = ' '.join(action for action in action_parsers.choices.keys())

borgmatic/commands/completion/flag.py (new file)
@@ -0,0 +1,13 @@
def variants(flag_name):
    '''
    Given a flag name as a string, yield it and any variations that should be complete-able as well.
    For instance, for a string like "--foo[0].bar", yield "--foo[0].bar", "--foo[1].bar", ...,
    "--foo[9].bar".
    '''
    if '[0]' in flag_name:
        for index in range(0, 10):
            yield flag_name.replace('[0]', f'[{index}]')

        return

    yield flag_name
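
# A small sketch (not part of the diff) of what variants() yields:
assert list(variants('--foo[0].bar')) == [f'--foo[{index}].bar' for index in range(10)]
assert list(variants('--verbosity')) == ['--verbosity']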

borgmatic/config/arguments.py (new file)
@@ -0,0 +1,176 @@
import io
import re

import ruamel.yaml

import borgmatic.config.schema

LIST_INDEX_KEY_PATTERN = re.compile(r'^(?P<list_name>[a-zA-Z_-]+)\[(?P<index>\d+)\]$')
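
# Sketch (not part of the diff): how the list index pattern parses a key.
match = LIST_INDEX_KEY_PATTERN.match('checks[3]')
assert match.group('list_name') == 'checks'
assert match.group('index') == '3'
assert LIST_INDEX_KEY_PATTERN.match('checks') is None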


def set_values(config, keys, value):
    '''
    Given a configuration dict, a sequence of parsed key strings, and a string value, descend into
    the configuration hierarchy based on the given keys and set the value into the right place.
    For example, consider these keys:

        ('foo', 'bar', 'baz')

    This looks up "foo" in the given configuration dict. And within that, it looks up "bar". And
    then within that, it looks up "baz" and sets it to the given value. Another example:

        ('mylist[0]', 'foo')

    This looks for the zeroth element of "mylist" in the given configuration. And within that, it
    looks up "foo" and sets it to the given value.
    '''
    if not keys:
        return

    first_key = keys[0]

    # Support "mylist[0]" list index syntax.
    match = LIST_INDEX_KEY_PATTERN.match(first_key)

    if match:
        list_key = match.group('list_name')
        list_index = int(match.group('index'))

        try:
            if len(keys) == 1:
                config[list_key][list_index] = value

                return

            if list_key not in config:
                config[list_key] = []

            set_values(config[list_key][list_index], keys[1:], value)
        except (IndexError, KeyError):
            raise ValueError(f'Argument list index {first_key} is out of range')

        return

    if len(keys) == 1:
        config[first_key] = value

        return

    if first_key not in config:
        config[first_key] = {}

    set_values(config[first_key], keys[1:], value)
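
# Usage sketch (not part of the diff):
config = {'mylist': [{'foo': 1}]}
set_values(config, ('mylist[0]', 'foo'), 5)
assert config == {'mylist': [{'foo': 5}]}

set_values(config, ('a', 'b'), 'c')
assert config['a'] == {'b': 'c'}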


def type_for_option(schema, option_keys):
    '''
    Given a configuration schema dict and a sequence of keys identifying a potentially nested
    option, e.g. ('extra_borg_options', 'create'), return the schema type of that option as a
    string.

    Return None if the option or its type cannot be found in the schema.
    '''
    option_schema = schema

    for key in option_keys:
        # Support "name[0]"-style list index syntax.
        match = LIST_INDEX_KEY_PATTERN.match(key)
        properties = borgmatic.config.schema.get_properties(option_schema)

        try:
            if match:
                option_schema = properties[match.group('list_name')]['items']
            else:
                option_schema = properties[key]
        except KeyError:
            return None

    try:
        return option_schema['type']
    except KeyError:
        return None


def convert_value_type(value, option_type):
    '''
    Given a string value and its schema type as a string, determine its logical type (string,
    boolean, integer, etc.), and return it converted to that type.

    If the destination option type is a string, then leave the value as-is so that special
    characters in it don't get interpreted as YAML during conversion.

    And if the source value isn't a string, return it as-is.

    Raise ValueError if there's a parse issue with the value's YAML or if the parsed value doesn't
    match the option type.
    '''
    if not isinstance(value, str):
        return value

    if option_type == 'string':
        return value

    try:
        parsed_value = ruamel.yaml.YAML(typ='safe').load(io.StringIO(value))
    except ruamel.yaml.error.YAMLError as error:
        raise ValueError(f'Argument value "{value}" is invalid: {error.problem}')

    if not isinstance(parsed_value, borgmatic.config.schema.parse_type(option_type)):
        raise ValueError(f'Argument value "{value}" is not of the expected type: {option_type}')

    return parsed_value
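
# Usage sketch (not part of the diff): strings get parsed as YAML unless the target
# option type is itself a string.
assert convert_value_type('5', 'integer') == 5
assert convert_value_type('[a, b]', 'array') == ['a', 'b']
assert convert_value_type('{a: 1}', 'string') == '{a: 1}'  # Left as-is.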


def prepare_arguments_for_config(global_arguments, schema):
    '''
    Given global arguments as an argparse.Namespace and a configuration schema dict, parse each
    argument that corresponds to an option in the schema and return a sequence of tuples (keys,
    values) for that option, where keys is a sequence of strings. For instance, given the following
    arguments:

        argparse.Namespace(**{'my_option.sub_option': 'value1', 'other_option': 'value2'})

    ... return this:

        (
            (('my_option', 'sub_option'), 'value1'),
            (('other_option',), 'value2'),
        )
    '''
    prepared_values = []

    for argument_name, value in global_arguments.__dict__.items():
        if value is None:
            continue

        keys = tuple(argument_name.split('.'))
        option_type = type_for_option(schema, keys)

        # The argument doesn't correspond to any option in the schema, so ignore it. It's
        # probably a flag that borgmatic has on the command-line but not in configuration.
        if option_type is None:
            continue

        prepared_values.append(
            (
                keys,
                convert_value_type(value, option_type),
            )
        )

    return tuple(prepared_values)


def apply_arguments_to_config(config, schema, arguments):
    '''
    Given a configuration dict, a corresponding configuration schema dict, and arguments as a dict
    from action name to argparse.Namespace, set those given argument values into their corresponding
    configuration options in the configuration dict.

    This supports argument flags of the form "--foo.bar.baz" where each dotted component is a nested
    configuration object. Additionally, flags like "--foo.bar[0].baz" are supported to update a list
    element in the configuration.
    '''
    for action_arguments in arguments.values():
        for keys, value in prepare_arguments_for_config(action_arguments, schema):
            set_values(config, keys, value)
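
# An end-to-end sketch (not part of the diff); the configuration and flag names here
# are hypothetical examples.
import argparse

config = {'repositories': [{'path': '/original'}]}
schema = {
    'type': 'object',
    'properties': {
        'repositories': {
            'type': 'array',
            'items': {'type': 'object', 'properties': {'path': {'type': 'string'}}},
        },
    },
}
arguments = {'global': argparse.Namespace(**{'repositories[0].path': '/backups/repo.borg'})}

apply_arguments_to_config(config, schema, arguments)

assert config['repositories'][0]['path'] == '/backups/repo.borg'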
@@ -5,6 +5,7 @@ import re
 
 import ruamel.yaml
 
+import borgmatic.config.schema
 from borgmatic.config import load, normalize
 
 INDENT = 4
@@ -21,45 +22,59 @@ def insert_newline_before_comment(config, field_name):
     )
 
 
-def get_properties(schema):
-    '''
-    Given a schema dict, return its properties. But if it's got sub-schemas with multiple different
-    potential properties, returned their merged properties instead.
-    '''
-    if 'oneOf' in schema:
-        return dict(
-            collections.ChainMap(*[sub_schema['properties'] for sub_schema in schema['oneOf']])
-        )
-
-    return schema['properties']
+SCALAR_SCHEMA_TYPES = {'string', 'boolean', 'integer', 'number'}
 
 
-def schema_to_sample_configuration(schema, level=0, parent_is_sequence=False):
+def schema_to_sample_configuration(schema, source_config=None, level=0, parent_is_sequence=False):
     '''
-    Given a loaded configuration schema, generate and return sample config for it. Include comments
-    for each option based on the schema "description".
+    Given a loaded configuration schema and a source configuration, generate and return sample
+    config for the schema. Include comments for each option based on the schema "description".
+
+    If a source config is given, walk it alongside the given schema so that both can be taken into
+    account when commenting out particular options in add_comments_to_configuration_object().
     '''
     schema_type = schema.get('type')
     example = schema.get('example')
     if example is not None:
         return example
 
-    if schema_type == 'array' or (isinstance(schema_type, list) and 'array' in schema_type):
+    if borgmatic.config.schema.compare_types(schema_type, {'array'}):
         config = ruamel.yaml.comments.CommentedSeq(
-            [schema_to_sample_configuration(schema['items'], level, parent_is_sequence=True)]
+            example
+            if borgmatic.config.schema.compare_types(
+                schema['items'].get('type'), SCALAR_SCHEMA_TYPES
+            )
+            else [
+                schema_to_sample_configuration(
+                    schema['items'], source_config, level, parent_is_sequence=True
+                )
+            ]
         )
         add_comments_to_configuration_sequence(config, schema, indent=(level * INDENT))
-    elif schema_type == 'object' or (isinstance(schema_type, list) and 'object' in schema_type):
-        config = ruamel.yaml.comments.CommentedMap(
-            [
-                (field_name, schema_to_sample_configuration(sub_schema, level + 1))
-                for field_name, sub_schema in get_properties(schema).items()
-            ]
-        )
+    elif borgmatic.config.schema.compare_types(schema_type, {'object'}):
+        if source_config and isinstance(source_config, list) and isinstance(source_config[0], dict):
+            source_config = dict(collections.ChainMap(*source_config))
+
+        config = (
+            ruamel.yaml.comments.CommentedMap(
+                [
+                    (
+                        field_name,
+                        schema_to_sample_configuration(
+                            sub_schema, (source_config or {}).get(field_name, {}), level + 1
+                        ),
+                    )
+                    for field_name, sub_schema in borgmatic.config.schema.get_properties(
+                        schema
+                    ).items()
+                ]
+            )
+            or example
+        )
         indent = (level * INDENT) + (SEQUENCE_INDENT if parent_is_sequence else 0)
         add_comments_to_configuration_object(
-            config, schema, indent=indent, skip_first=parent_is_sequence
+            config, schema, source_config, indent=indent, skip_first=parent_is_sequence
         )
+    elif borgmatic.config.schema.compare_types(schema_type, SCALAR_SCHEMA_TYPES, match=all):
         return example
     else:
         raise ValueError(f'Schema at level {level} is unsupported: {schema}')
@@ -164,7 +179,7 @@ def add_comments_to_configuration_sequence(config, schema, indent=0):
         return
 
     for field_name in config[0].keys():
-        field_schema = get_properties(schema['items']).get(field_name, {})
+        field_schema = borgmatic.config.schema.get_properties(schema['items']).get(field_name, {})
         description = field_schema.get('description')
 
         # No description to use? Skip it.
@@ -178,26 +193,35 @@ def add_comments_to_configuration_sequence(config, schema, indent=0):
     return
 
 
-REQUIRED_KEYS = {'source_directories', 'repositories', 'keep_daily'}
+DEFAULT_KEYS = {'source_directories', 'repositories', 'keep_daily'}
 COMMENTED_OUT_SENTINEL = 'COMMENT_OUT'
 
 
-def add_comments_to_configuration_object(config, schema, indent=0, skip_first=False):
+def add_comments_to_configuration_object(
+    config, schema, source_config=None, indent=0, skip_first=False
+):
     '''
     Using descriptions from a schema as a source, add those descriptions as comments to the given
-    config mapping, before each field. Indent the comment the given number of characters.
+    configuration dict, putting them before each field. Indent the comment the given number of
+    characters.
+
+    Add a sentinel for commenting out options that are neither in DEFAULT_KEYS nor in the given
+    source configuration dict. The idea is that any options used in the source configuration should
+    stay active in the generated configuration.
     '''
     for index, field_name in enumerate(config.keys()):
         if skip_first and index == 0:
             continue
 
-        field_schema = get_properties(schema).get(field_name, {})
+        field_schema = borgmatic.config.schema.get_properties(schema).get(field_name, {})
         description = field_schema.get('description', '').strip()
 
-        # If this is an optional key, add an indicator to the comment flagging it to be commented
+        # If this isn't a default key, add an indicator to the comment flagging it to be commented
         # out from the sample configuration. This sentinel is consumed by downstream processing that
        # does the actual commenting out.
-        if field_name not in REQUIRED_KEYS:
+        if field_name not in DEFAULT_KEYS and (
+            source_config is None or field_name not in source_config
+        ):
             description = (
                 '\n'.join((description, COMMENTED_OUT_SENTINEL))
                 if description
@@ -217,21 +241,6 @@ def add_comments_to_configuration_object(config, schema, indent=0, skip_first=Fa
 RUAMEL_YAML_COMMENTS_INDEX = 1
 
 
-def remove_commented_out_sentinel(config, field_name):
-    '''
-    Given a configuration CommentedMap and a top-level field name in it, remove any "commented out"
-    sentinel found at the end of its YAML comments. This prevents the given field name from getting
-    commented out by downstream processing that consumes the sentinel.
-    '''
-    try:
-        last_comment_value = config.ca.items[field_name][RUAMEL_YAML_COMMENTS_INDEX][-1].value
-    except KeyError:
-        return
-
-    if last_comment_value == f'# {COMMENTED_OUT_SENTINEL}\n':
-        config.ca.items[field_name][RUAMEL_YAML_COMMENTS_INDEX].pop()
-
-
 def merge_source_configuration_into_destination(destination_config, source_config):
     '''
     Deep merge the given source configuration dict into the destination configuration CommentedMap,
@@ -246,12 +255,6 @@ def merge_source_configuration_into_destination(destination_config, source_confi
         return source_config
 
     for field_name, source_value in source_config.items():
-        # Since this key/value is from the source configuration, leave it uncommented and remove any
-        # sentinel that would cause it to get commented out.
-        remove_commented_out_sentinel(
-            ruamel.yaml.comments.CommentedMap(destination_config), field_name
-        )
-
         # This is a mapping. Recurse for this key/value.
         if isinstance(source_value, collections.abc.Mapping):
             destination_config[field_name] = merge_source_configuration_into_destination(
@@ -297,7 +300,7 @@ def generate_sample_configuration(
         normalize.normalize(source_filename, source_config)
 
     destination_config = merge_source_configuration_into_destination(
-        schema_to_sample_configuration(schema), source_config
+        schema_to_sample_configuration(schema, source_config), source_config
    )
 
     if dry_run:
@@ -58,6 +58,90 @@ def normalize_sections(config_filename, config):
     return []
 
 
+def make_command_hook_deprecation_log(config_filename, option_name):  # pragma: no cover
+    '''
+    Given a configuration filename and the name of a configuration option, return a deprecation
+    warning log for it.
+    '''
+    return logging.makeLogRecord(
+        dict(
+            levelno=logging.WARNING,
+            levelname='WARNING',
+            msg=f'{config_filename}: {option_name} is deprecated and support will be removed from a future release. Use commands: instead.',
+        )
+    )
+
+
+def normalize_commands(config_filename, config):
+    '''
+    Given a configuration filename and a configuration dict, transform any "before_*"- and
+    "after_*"-style command hooks into "commands:".
+    '''
+    logs = []
+
+    # Normalize "before_actions" and "after_actions".
+    for preposition in ('before', 'after'):
+        option_name = f'{preposition}_actions'
+        commands = config.pop(option_name, None)
+
+        if commands:
+            logs.append(make_command_hook_deprecation_log(config_filename, option_name))
+            config.setdefault('commands', []).append(
+                {
+                    preposition: 'repository',
+                    'run': commands,
+                }
+            )
+
+    # Normalize "before_backup", "before_prune", "after_backup", "after_prune", etc.
+    for action_name in ('create', 'prune', 'compact', 'check', 'extract'):
+        for preposition in ('before', 'after'):
+            option_name = f'{preposition}_{"backup" if action_name == "create" else action_name}'
+            commands = config.pop(option_name, None)
+
+            if not commands:
+                continue
+
+            logs.append(make_command_hook_deprecation_log(config_filename, option_name))
+            config.setdefault('commands', []).append(
+                {
+                    preposition: 'action',
+                    'when': [action_name],
+                    'run': commands,
+                }
+            )
+
+    # Normalize "on_error".
+    commands = config.pop('on_error', None)
+
+    if commands:
+        logs.append(make_command_hook_deprecation_log(config_filename, 'on_error'))
+        config.setdefault('commands', []).append(
+            {
+                'after': 'error',
+                'when': ['create', 'prune', 'compact', 'check'],
+                'run': commands,
+            }
+        )
+
+    # Normalize "before_everything" and "after_everything".
+    for preposition in ('before', 'after'):
+        option_name = f'{preposition}_everything'
+        commands = config.pop(option_name, None)
+
+        if commands:
+            logs.append(make_command_hook_deprecation_log(config_filename, option_name))
+            config.setdefault('commands', []).append(
+                {
+                    preposition: 'everything',
+                    'when': ['create'],
+                    'run': commands,
+                }
+            )
+
+    return logs
+
+
 def normalize(config_filename, config):
     '''
     Given a configuration filename and a configuration dict of its loaded contents, apply particular
@@ -67,6 +151,7 @@ def normalize(config_filename, config):
     Raise ValueError if the configuration cannot be normalized.
     '''
     logs = normalize_sections(config_filename, config)
+    logs += normalize_commands(config_filename, config)
 
     if config.get('borgmatic_source_directory'):
         logs.append(
@@ -241,7 +326,11 @@ def normalize(config_filename, config):
         config['repositories'] = []
 
     for repository_dict in repositories:
-        repository_path = repository_dict['path']
+        repository_path = repository_dict.get('path')
+
+        if repository_path is None:
+            continue
 
         if '~' in repository_path:
             logs.append(
                 logging.makeLogRecord(
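
# A sketch (not part of the diff) of the normalization performed by normalize_commands()
# above: a deprecated "before_backup" hook becomes an equivalent "commands:" entry.
config = {'before_backup': ['echo Starting a backup.']}
logs = normalize_commands('config.yaml', config)

assert config == {
    'commands': [{'before': 'action', 'when': ['create'], 'run': ['echo Starting a backup.']}]
}
assert logs  # One deprecation warning log record.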
@@ -1,7 +1,10 @@
+import io
 import logging
 
 import ruamel.yaml
 
+logger = logging.getLogger(__name__)
+
 
 def set_values(config, keys, value):
     '''
@@ -134,6 +137,11 @@ def apply_overrides(config, schema, raw_overrides):
     '''
     overrides = parse_overrides(raw_overrides, schema)
 
+    if overrides:
+        logger.warning(
+            "The --override flag is deprecated and will be removed from a future release. Instead, use a command-line flag corresponding to the configuration option you'd like to set."
+        )
+
     for keys, value in overrides:
-        set_values(config, keys, value)
+        set_values(config, strip_section_names(keys), value)
@@ -134,7 +134,7 @@ class Runtime_directory:
         '''
         return self.runtime_path
 
-    def __exit__(self, exception, value, traceback):
+    def __exit__(self, exception_type, exception, traceback):
         '''
         Delete any temporary directory that was created as part of initialization.
         '''

borgmatic/config/schema.py (new file)
@@ -0,0 +1,72 @@
import decimal
import itertools


def get_properties(schema):
    '''
    Given a schema dict, return its properties. But if it's got sub-schemas with multiple different
    potential properties, return their merged properties instead (interleaved so the first
    properties of each sub-schema come first). The idea is that the user should see all possible
    options even if they're not all possible together.
    '''
    if 'oneOf' in schema:
        return dict(
            item
            for item in itertools.chain(
                *itertools.zip_longest(
                    *[sub_schema['properties'].items() for sub_schema in schema['oneOf']]
                )
            )
            if item is not None
        )

    return schema.get('properties', {})
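
# Sketch (not part of the diff): "oneOf" sub-schema properties get interleaved.
merged = get_properties(
    {
        'oneOf': [
            {'properties': {'before': {}, 'run': {}}},
            {'properties': {'after': {}, 'when': {}}},
        ]
    }
)
assert list(merged.keys()) == ['before', 'after', 'run', 'when']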


SCHEMA_TYPE_TO_PYTHON_TYPE = {
    'array': list,
    'boolean': bool,
    'integer': int,
    'number': decimal.Decimal,
    'object': dict,
    'string': str,
}


def parse_type(schema_type, **overrides):
    '''
    Given a schema type as a string, return the corresponding Python type.

    If any overrides are given in the form of a schema type string to a Python type, then override
    the default type mapping with them.

    Raise ValueError if the schema type is unknown.
    '''
    try:
        return dict(
            SCHEMA_TYPE_TO_PYTHON_TYPE,
            **overrides,
        )[schema_type]
    except KeyError:
        raise ValueError(f'Unknown type in configuration schema: {schema_type}')
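
# Sketch (not part of the diff): schema types map to Python types, with overrides.
assert parse_type('integer') is int
assert parse_type('object', object=str) is str  # The override used for CLI flags.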


def compare_types(schema_type, target_types, match=any):
    '''
    Given a schema type as a string or a list of strings (representing multiple types) and a set of
    target type strings, return whether the schema type is in the set of target types.

    If the schema type is a list of strings, use the given match function (such as any or all) to
    compare elements. For instance, if match is given as all, then every element of the schema_type
    list must be in the target types.
    '''
    if isinstance(schema_type, list):
        if match(element_schema_type in target_types for element_schema_type in schema_type):
            return True

        return False

    if schema_type in target_types:
        return True

    return False
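
# Sketch (not part of the diff): single types and multi-type lists.
assert compare_types('array', {'array'})
assert compare_types(['string', 'null'], {'string'})  # any: one match suffices.
assert not compare_types(['string', 'object'], {'string'}, match=all)
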
@@ -33,13 +33,47 @@ properties:
             type: object
             required:
                 - path
             additionalProperties: false
             properties:
                 path:
                     type: string
-                    example: ssh://user@backupserver/./{fqdn}
+                    description: The local path or Borg URL of the repository.
+                    example: ssh://user@backupserver/./sourcehostname.borg
                 label:
                     type: string
+                    description: |
+                        An optional label for the repository, used in logging
+                        and to make selecting the repository easier on the
+                        command-line.
                     example: backupserver
+                encryption:
+                    type: string
+                    description: |
+                        The encryption mode with which to create the repository,
+                        only used for the repo-create action. To see the
+                        available encryption modes, run "borg init --help" with
+                        Borg 1 or "borg repo-create --help" with Borg 2.
+                    example: repokey-blake2
+                append_only:
+                    type: boolean
+                    description: |
+                        Whether the repository should be created append-only,
+                        only used for the repo-create action. Defaults to false.
+                    example: true
+                storage_quota:
+                    type: string
+                    description: |
+                        The storage quota with which to create the repository,
+                        only used for the repo-create action. Defaults to no
+                        quota.
+                    example: 5G
+                make_parent_directories:
+                    type: boolean
+                    description: |
+                        Whether any missing parent directories of the repository
+                        path should be created, only used for the repo-create
+                        action. Defaults to false.
+                    example: true
         description: |
             A required list of local or remote repositories with paths and
             optional labels (which can be used with the --repository flag to
@@ -48,8 +82,7 @@ properties:
             output of "borg help placeholders" for details. See ssh_command for
             SSH options like identity file or port. If systemd service is used,
             then add local repository paths in the systemd service file to the
-            ReadWritePaths list. Prior to borgmatic 1.7.10, repositories was a
-            list of plain path strings.
+            ReadWritePaths list.
         example:
             - path: ssh://user@backupserver/./sourcehostname.borg
               label: backupserver
@@ -99,13 +132,13 @@ properties:
             used when backing up special devices such as /dev/zero. Defaults to
             false. But when a database hook is used, the setting here is ignored
             and read_special is considered true.
-        example: false
+        example: true
     flags:
         type: boolean
         description: |
             Record filesystem flags (e.g. NODUMP, IMMUTABLE) in archive.
             Defaults to true.
-        example: true
+        example: false
     files_cache:
         type: string
         description: |
@@ -284,6 +317,22 @@ properties:
             http://borgbackup.readthedocs.io/en/stable/usage/create.html for
             details. Defaults to "lz4".
         example: lz4
+    recompress:
+        type: string
+        enum: ['if-different', 'always', 'never']
+        description: |
+            Mode for recompressing data chunks according to MODE.
+            Possible modes are:
+            * "if-different": Recompress if the current compression
+              is with a different compression algorithm.
+            * "always": Recompress even if the current compression
+              is with the same compression algorithm. Use this to change
+              the compression level.
+            * "never": Do not recompress. Use this option to explicitly
+              prevent recompression.
+            See https://borgbackup.readthedocs.io/en/stable/usage/recreate.html
+            for details. Defaults to "never".
+        example: if-different
     upload_rate_limit:
         type: integer
         description: |
@@ -426,19 +475,19 @@ properties:
         type: boolean
         description: |
             Bypass Borg error about a repository that has been moved. Defaults
-            to not bypassing.
+            to false.
         example: true
     unknown_unencrypted_repo_access_is_ok:
         type: boolean
         description: |
             Bypass Borg error about a previously unknown unencrypted repository.
-            Defaults to not bypassing.
+            Defaults to false.
         example: true
     check_i_know_what_i_am_doing:
         type: boolean
         description: |
             Bypass Borg confirmation about check with repair option. Defaults to
-            an interactive prompt from Borg.
+            false and an interactive prompt from Borg.
         example: true
     extra_borg_options:
         type: object
@@ -518,6 +567,12 @@ properties:
             not specified, borgmatic defaults to matching archives based on the
             archive_name_format (see above).
         example: sourcehostname
+    compact_threshold:
+        type: integer
+        description: |
+            Minimum saved space percentage threshold for compacting a segment,
+            defaults to 10.
+        example: 20
     checks:
         type: array
         items:
@@ -733,6 +788,10 @@ properties:
             List of one or more consistency checks to run on a periodic basis
             (if "frequency" is set) or every time borgmatic runs checks (if
             "frequency" is omitted).
+        example:
+            - name: archives
+              frequency: 2 weeks
+            - name: repository
     check_repositories:
         type: array
         items:
@@ -754,9 +813,29 @@ properties:
     color:
         type: boolean
         description: |
-            Apply color to console output. Can be overridden with --no-color
-            command-line flag. Defaults to true.
+            Apply color to console output. Defaults to true.
         example: false
+    progress:
+        type: boolean
+        description: |
+            Display progress as each file or archive is processed when running
+            supported actions. Corresponds to the "--progress" flag on those
+            actions. Defaults to false.
+        example: true
+    statistics:
+        type: boolean
+        description: |
+            Display statistics for an archive when running supported actions.
+            Corresponds to the "--stats" flag on those actions. Defaults to
+            false.
+        example: true
+    list_details:
+        type: boolean
+        description: |
+            Display details for each file or archive as it is processed when
+            running supported actions. Corresponds to the "--list" flag on those
+            actions. Defaults to false.
+        example: true
     skip_actions:
         type: array
         items:
@@ -767,6 +846,7 @@ properties:
             - prune
             - compact
             - create
+            - recreate
             - check
             - delete
             - extract
@@ -796,8 +876,9 @@ properties:
         items:
             type: string
         description: |
-            List of one or more shell commands or scripts to execute before all
-            the actions for each repository.
+            Deprecated. Use "commands:" instead. List of one or more shell
+            commands or scripts to execute before all the actions for each
+            repository.
         example:
             - "echo Starting actions."
     before_backup:
@@ -805,8 +886,9 @@ properties:
         items:
             type: string
         description: |
-            List of one or more shell commands or scripts to execute before
-            creating a backup, run once per repository.
+            Deprecated. Use "commands:" instead. List of one or more shell
+            commands or scripts to execute before creating a backup, run once
+            per repository.
         example:
             - "echo Starting a backup."
     before_prune:
@@ -814,8 +896,9 @@ properties:
         items:
             type: string
         description: |
-            List of one or more shell commands or scripts to execute before
-            pruning, run once per repository.
+            Deprecated. Use "commands:" instead. List of one or more shell
+            commands or scripts to execute before pruning, run once per
+            repository.
         example:
             - "echo Starting pruning."
     before_compact:
@@ -823,8 +906,9 @@ properties:
         items:
             type: string
         description: |
-            List of one or more shell commands or scripts to execute before
-            compaction, run once per repository.
+            Deprecated. Use "commands:" instead. List of one or more shell
+            commands or scripts to execute before compaction, run once per
+            repository.
         example:
             - "echo Starting compaction."
     before_check:
@@ -832,8 +916,9 @@ properties:
         items:
             type: string
         description: |
-            List of one or more shell commands or scripts to execute before
-            consistency checks, run once per repository.
+            Deprecated. Use "commands:" instead. List of one or more shell
+            commands or scripts to execute before consistency checks, run once
+            per repository.
         example:
             - "echo Starting checks."
     before_extract:
@@ -841,8 +926,9 @@ properties:
         items:
             type: string
         description: |
-            List of one or more shell commands or scripts to execute before
-            extracting a backup, run once per repository.
+            Deprecated. Use "commands:" instead. List of one or more shell
+            commands or scripts to execute before extracting a backup, run once
+            per repository.
         example:
             - "echo Starting extracting."
     after_backup:
@@ -850,8 +936,9 @@ properties:
         items:
             type: string
         description: |
-            List of one or more shell commands or scripts to execute after
-            creating a backup, run once per repository.
+            Deprecated. Use "commands:" instead. List of one or more shell
+            commands or scripts to execute after creating a backup, run once per
+            repository.
         example:
             - "echo Finished a backup."
     after_compact:
@@ -859,8 +946,9 @@ properties:
         items:
             type: string
         description: |
-            List of one or more shell commands or scripts to execute after
-            compaction, run once per repository.
+            Deprecated. Use "commands:" instead. List of one or more shell
+            commands or scripts to execute after compaction, run once per
+            repository.
         example:
             - "echo Finished compaction."
     after_prune:
@@ -868,8 +956,9 @@ properties:
         items:
             type: string
         description: |
-            List of one or more shell commands or scripts to execute after
-            pruning, run once per repository.
+            Deprecated. Use "commands:" instead. List of one or more shell
+            commands or scripts to execute after pruning, run once per
+            repository.
         example:
             - "echo Finished pruning."
     after_check:
@@ -877,8 +966,9 @@ properties:
         items:
             type: string
         description: |
-            List of one or more shell commands or scripts to execute after
-            consistency checks, run once per repository.
+            Deprecated. Use "commands:" instead. List of one or more shell
+            commands or scripts to execute after consistency checks, run once
+            per repository.
         example:
             - "echo Finished checks."
     after_extract:
@@ -886,8 +976,9 @@ properties:
         items:
             type: string
         description: |
-            List of one or more shell commands or scripts to execute after
-            extracting a backup, run once per repository.
+            Deprecated. Use "commands:" instead. List of one or more shell
+            commands or scripts to execute after extracting a backup, run once
+            per repository.
         example:
             - "echo Finished extracting."
     after_actions:
@@ -895,8 +986,9 @@ properties:
         items:
             type: string
         description: |
-            List of one or more shell commands or scripts to execute after all
-            actions for each repository.
+            Deprecated. Use "commands:" instead. List of one or more shell
+            commands or scripts to execute after all actions for each
+            repository.
         example:
             - "echo Finished actions."
     on_error:
@@ -904,9 +996,10 @@ properties:
         items:
             type: string
         description: |
-            List of one or more shell commands or scripts to execute when an
-            exception occurs during a "create", "prune", "compact", or "check"
-            action or an associated before/after hook.
+            Deprecated. Use "commands:" instead. List of one or more shell
+            commands or scripts to execute when an exception occurs during a
+            "create", "prune", "compact", or "check" action or an associated
+            before/after hook.
         example:
             - "echo Error during create/prune/compact/check."
     before_everything:
@@ -914,10 +1007,10 @@ properties:
|
||||
items:
|
||||
type: string
|
||||
description: |
|
||||
List of one or more shell commands or scripts to execute before
|
||||
running all actions (if one of them is "create"). These are
|
||||
collected from all configuration files and then run once before all
|
||||
of them (prior to all actions).
|
||||
Deprecated. Use "commands:" instead. List of one or more shell
|
||||
commands or scripts to execute before running all actions (if one of
|
||||
them is "create"). These are collected from all configuration files
|
||||
and then run once before all of them (prior to all actions).
|
||||
example:
|
||||
- "echo Starting actions."
|
||||
after_everything:
|
||||
@@ -925,14 +1018,157 @@ properties:
|
||||
items:
|
||||
type: string
|
||||
description: |
|
||||
List of one or more shell commands or scripts to execute after
|
||||
running all actions (if one of them is "create"). These are
|
||||
collected from all configuration files and then run once after all
|
||||
of them (after any action).
|
||||
Deprecated. Use "commands:" instead. List of one or more shell
|
||||
commands or scripts to execute after running all actions (if one of
|
||||
them is "create"). These are collected from all configuration files
|
||||
and then run once after all of them (after any action).
|
||||
example:
|
||||
- "echo Completed actions."
|
||||
commands:
|
||||
type: array
|
||||
items:
|
||||
type: object
|
||||
oneOf:
|
||||
- required: [before, run]
|
||||
additionalProperties: false
|
||||
properties:
|
||||
before:
|
||||
type: string
|
||||
enum:
|
||||
- action
|
||||
- repository
|
||||
- configuration
|
||||
- everything
|
||||
description: |
|
||||
Name for the point in borgmatic's execution that
|
||||
the commands should be run before (required if
|
||||
"after" isn't set):
|
||||
* "action" runs before each action for each
|
||||
repository.
|
||||
* "repository" runs before all actions for each
|
||||
repository.
|
||||
* "configuration" runs before all actions and
|
||||
repositories in the current configuration file.
|
||||
* "everything" runs before all configuration
|
||||
files.
|
||||
example: action
|
||||
when:
|
||||
type: array
|
||||
items:
|
||||
type: string
|
||||
enum:
|
||||
- repo-create
|
||||
- transfer
|
||||
- prune
|
||||
- compact
|
||||
- create
|
||||
- recreate
|
||||
- check
|
||||
- delete
|
||||
- extract
|
||||
- config
|
||||
- export-tar
|
||||
- mount
|
||||
- umount
|
||||
- repo-delete
|
||||
- restore
|
||||
- repo-list
|
||||
- list
|
||||
- repo-info
|
||||
- info
|
||||
- break-lock
|
||||
- key
|
||||
- borg
|
||||
description: |
|
||||
List of actions for which the commands will be
|
||||
run. Defaults to running for all actions.
|
||||
example: [create, prune, compact, check]
|
||||
run:
|
||||
type: array
|
||||
items:
|
||||
type: string
|
||||
description: |
|
||||
List of one or more shell commands or scripts to
|
||||
run when this command hook is triggered. Required.
|
||||
example:
|
||||
- "echo Doing stuff."
|
||||
- required: [after, run]
|
||||
additionalProperties: false
|
||||
properties:
|
||||
after:
|
||||
type: string
|
||||
enum:
|
||||
- action
|
||||
- repository
|
||||
- configuration
|
||||
- everything
|
||||
- error
|
||||
description: |
|
||||
Name for the point in borgmatic's execution that
|
||||
the commands should be run after (required if
|
||||
"before" isn't set):
|
||||
* "action" runs after each action for each
|
||||
repository.
|
||||
* "repository" runs after all actions for each
|
||||
repository.
|
||||
* "configuration" runs after all actions and
|
||||
repositories in the current configuration file.
|
||||
* "everything" runs after all configuration
|
||||
files.
|
||||
* "error" runs after an error occurs.
|
||||
example: action
|
||||
when:
|
||||
type: array
|
||||
items:
|
||||
type: string
|
||||
enum:
|
||||
- repo-create
|
||||
- transfer
|
||||
- prune
|
||||
- compact
|
||||
- create
|
||||
- recreate
|
||||
- check
|
||||
- delete
|
||||
- extract
|
||||
- config
|
||||
- export-tar
|
||||
- mount
|
||||
- umount
|
||||
- repo-delete
|
||||
- restore
|
||||
- repo-list
|
||||
- list
|
||||
- repo-info
|
||||
- info
|
||||
- break-lock
|
||||
- key
|
||||
- borg
|
||||
description: |
|
||||
Only trigger the hook when borgmatic is run with
|
||||
particular actions listed here. Defaults to
|
||||
running for all actions.
|
||||
example: [create, prune, compact, check]
|
||||
run:
|
||||
type: array
|
||||
items:
|
||||
type: string
|
||||
description: |
|
||||
List of one or more shell commands or scripts to
|
||||
run when this command hook is triggered. Required.
|
||||
example:
|
||||
- "echo Doing stuff."
|
||||
description: |
|
||||
List of one or more command hooks to execute, triggered at
|
||||
particular points during borgmatic's execution. For each command
|
||||
hook, specify one of "before" or "after", not both.
|
||||
example:
|
||||
- before: action
|
||||
when: [create]
|
||||
run: [echo Backing up.]
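To illustrate how the "oneOf" constraint above behaves: a minimal sketch using the jsonschema library (which the validator below already imports). The trimmed-down schema and sample hooks here are illustrative, not copied from the repository; the enum lists are abbreviated.

```python
import jsonschema

# Trimmed stand-in for one "commands" item: a hook must match exactly one
# branch, so giving both "before" and "after" (or neither) fails validation.
HOOK_SCHEMA = {
    'type': 'object',
    'oneOf': [
        {
            'required': ['before', 'run'],
            'properties': {
                'before': {'enum': ['action', 'repository', 'configuration', 'everything']},
                'when': {'type': 'array', 'items': {'type': 'string'}},
                'run': {'type': 'array', 'items': {'type': 'string'}},
            },
            'additionalProperties': False,
        },
        {
            'required': ['after', 'run'],
            'properties': {
                'after': {'enum': ['action', 'repository', 'configuration', 'everything', 'error']},
                'when': {'type': 'array', 'items': {'type': 'string'}},
                'run': {'type': 'array', 'items': {'type': 'string'}},
            },
            'additionalProperties': False,
        },
    ],
}

# Passes: matches the "before" branch only.
jsonschema.validate({'before': 'action', 'when': ['create'], 'run': ['echo Backing up.']}, HOOK_SCHEMA)

try:
    # Fails: "before" and "after" together match neither branch.
    jsonschema.validate({'before': 'action', 'after': 'action', 'run': ['true']}, HOOK_SCHEMA)
except jsonschema.ValidationError as error:
    print(error.message)
```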
bootstrap:
type: object
additionalProperties: false
properties:
store_config_files:
type: boolean
@@ -1040,6 +1276,18 @@ properties:
individual databases. See the pg_dump documentation for
more about formats.
example: directory
compression:
type: ["string", "integer"]
description: |
Database dump compression level (integer) or method
("gzip", "lz4", "zstd", or "none") and optional
colon-separated detail. Defaults to moderate "gzip" for
"custom" and "directory" formats and no compression for
the "plain" format. Compression is not supported for the
"tar" format. Be aware that Borg does its own
compression as well, so you may not need it in both
places.
example: none
ssl_mode:
type: string
enum: ['disable', 'allow', 'prefer',
@@ -1076,11 +1324,11 @@ properties:
Command to use instead of "pg_dump" or "pg_dumpall".
This can be used to run a specific pg_dump version
(e.g., one inside a running container). If you run it
from within a container, make sure to mount your
host's ".borgmatic" folder into the container using
the same directory structure. Defaults to "pg_dump"
for single database dump or "pg_dumpall" to dump all
databases.
from within a container, make sure to mount the path in
the "user_runtime_directory" option from the host into
the container at the same location. Defaults to
"pg_dump" for single database dump or "pg_dumpall" to
dump all databases.
example: docker exec my_pg_container pg_dump
pg_restore_command:
type: string
@@ -1133,6 +1381,9 @@ properties:
https://www.postgresql.org/docs/current/app-pgdump.html and
https://www.postgresql.org/docs/current/libpq-ssl.html for
details.
example:
- name: users
hostname: database.example.org
mariadb_databases:
type: array
items:
@@ -1198,15 +1449,30 @@ properties:
Defaults to the "password" option. Supports the
"{credential ...}" syntax.
example: trustsome1
tls:
type: boolean
description: |
Whether to TLS-encrypt data transmitted between the
client and server. The default varies based on the
MariaDB version.
example: false
restore_tls:
type: boolean
description: |
Whether to TLS-encrypt data transmitted between the
client and restore server. The default varies based on
the MariaDB version.
example: false
mariadb_dump_command:
type: string
description: |
Command to use instead of "mariadb-dump". This can be
used to run a specific mariadb_dump version (e.g., one
inside a running container). If you run it from within
a container, make sure to mount your host's
".borgmatic" folder into the container using the same
directory structure. Defaults to "mariadb-dump".
inside a running container). If you run it from within a
container, make sure to mount the path in the
"user_runtime_directory" option from the host into the
container at the same location. Defaults to
"mariadb-dump".
example: docker exec mariadb_container mariadb-dump
mariadb_command:
type: string
@@ -1263,6 +1529,9 @@ properties:
added to your source directories at runtime and streamed directly
to Borg. Requires mariadb-dump/mariadb commands. See
https://mariadb.com/kb/en/library/mysqldump/ for details.
example:
- name: users
hostname: database.example.org
mysql_databases:
type: array
items:
@@ -1328,15 +1597,29 @@ properties:
Defaults to the "password" option. Supports the
"{credential ...}" syntax.
example: trustsome1
tls:
type: boolean
description: |
Whether to TLS-encrypt data transmitted between the
client and server. The default varies based on the
MySQL installation.
example: false
restore_tls:
type: boolean
description: |
Whether to TLS-encrypt data transmitted between the
client and restore server. The default varies based on
the MySQL installation.
example: false
mysql_dump_command:
type: string
description: |
Command to use instead of "mysqldump". This can be
used to run a specific mysql_dump version (e.g., one
inside a running container). If you run it from within
a container, make sure to mount your host's
".borgmatic" folder into the container using the same
directory structure. Defaults to "mysqldump".
Command to use instead of "mysqldump". This can be used
to run a specific mysql_dump version (e.g., one inside a
running container). If you run it from within a
container, make sure to mount the path in the
"user_runtime_directory" option from the host into the
container at the same location. Defaults to "mysqldump".
example: docker exec mysql_container mysqldump
mysql_command:
type: string
@@ -1394,6 +1677,9 @@ properties:
to Borg. Requires mysqldump/mysql commands. See
https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html for
details.
example:
- name: users
hostname: database.example.org
sqlite_databases:
type: array
items:
@@ -1423,6 +1709,33 @@ properties:
Path to the SQLite database file to restore to. Defaults
to the "path" option.
example: /var/lib/sqlite/users.db
sqlite_command:
type: string
description: |
Command to use instead of "sqlite3". This can be used to
run a specific sqlite3 version (e.g., one inside a
running container). If you run it from within a
container, make sure to mount the path in the
"user_runtime_directory" option from the host into the
container at the same location. Defaults to "sqlite3".
example: docker exec sqlite_container sqlite3
sqlite_restore_command:
type: string
description: |
Command to run when restoring a database instead
of "sqlite3". This can be used to run a specific
sqlite3 version (e.g., one inside a running container).
Defaults to "sqlite3".
example: docker exec sqlite_container sqlite3
description: |
List of one or more SQLite databases to dump before creating a
backup, run once per configuration file. The database dumps are
added to your source directories at runtime and streamed directly to
Borg. Requires the sqlite3 command. See https://sqlite.org/cli.html
for details.
example:
- name: users
path: /var/lib/db.sqlite
mongodb_databases:
type: array
items:
@@ -1518,6 +1831,25 @@ properties:
dump command, without performing any validation on them.
See mongorestore documentation for details.
example: --restoreDbUsersAndRoles
mongodump_command:
type: string
description: |
Command to use instead of "mongodump". This can be used
to run a specific mongodump version (e.g., one inside a
running container). If you run it from within a
container, make sure to mount the path in the
"user_runtime_directory" option from the host into the
container at the same location. Defaults to
"mongodump".
example: docker exec mongodb_container mongodump
mongorestore_command:
type: string
description: |
Command to run when restoring a database instead of
"mongorestore". This can be used to run a specific
mongorestore version (e.g., one inside a running
container). Defaults to "mongorestore".
example: docker exec mongodb_container mongorestore
description: |
List of one or more MongoDB databases to dump before creating a
backup, run once per configuration file. The database dumps are
@@ -1525,6 +1857,9 @@ properties:
to Borg. Requires mongodump/mongorestore commands. See
https://docs.mongodb.com/database-tools/mongodump/ and
https://docs.mongodb.com/database-tools/mongorestore/ for details.
example:
- name: users
hostname: database.example.org
ntfy:
type: object
required: ['topic']
@@ -1561,6 +1896,7 @@ properties:
example: tk_AgQdq7mVBoFD37zQVN29RhuMzNIz2
start:
type: object
additionalProperties: false
properties:
title:
type: string
@@ -1584,6 +1920,7 @@ properties:
example: incoming_envelope
finish:
type: object
additionalProperties: false
properties:
title:
type: string
@@ -1607,6 +1944,7 @@ properties:
example: incoming_envelope
fail:
type: object
additionalProperties: false
properties:
title:
type: string
@@ -1665,6 +2003,7 @@ properties:
example: hwRwoWsXMBWwgrSecfa9EfPey55WSN
start:
type: object
additionalProperties: false
properties:
message:
type: string
@@ -1704,8 +2043,8 @@ properties:
type: boolean
description: |
Set to True to enable HTML parsing of the message.
Set to False for plain text.
example: True
Set to false for plain text.
example: true
sound:
type: string
description: |
@@ -1740,6 +2079,7 @@ properties:
example: Pushover Link
finish:
type: object
additionalProperties: false
properties:
message:
type: string
@@ -1779,8 +2119,8 @@ properties:
type: boolean
description: |
Set to True to enable HTML parsing of the message.
Set to False for plain text.
example: True
Set to false for plain text.
example: true
sound:
type: string
description: |
@@ -1815,6 +2155,7 @@ properties:
example: Pushover Link
fail:
type: object
additionalProperties: false
properties:
message:
type: string
@@ -1854,8 +2195,8 @@ properties:
type: boolean
description: |
Set to True to enable HTML parsing of the message.
Set to False for plain text.
example: True
Set to false for plain text.
example: true
sound:
type: string
description: |
@@ -1907,6 +2248,8 @@ properties:
zabbix:
type: object
additionalProperties: false
required:
- server
properties:
itemid:
type: integer
@@ -1929,7 +2272,8 @@ properties:
server:
type: string
description: |
The address of your Zabbix instance.
The API endpoint URL of your Zabbix instance, usually ending
with "/api_jsonrpc.php". Required.
example: https://zabbix.your-domain.com
username:
type: string
@@ -1951,6 +2295,7 @@ properties:
example: fakekey
start:
type: object
additionalProperties: false
properties:
value:
type: ["integer", "string"]
@@ -1959,6 +2304,7 @@ properties:
example: STARTED
finish:
type: object
additionalProperties: false
properties:
value:
type: ["integer", "string"]
@@ -1967,6 +2313,7 @@ properties:
example: FINISH
fail:
type: object
additionalProperties: false
properties:
value:
type: ["integer", "string"]
@@ -1998,15 +2345,20 @@ properties:
type: array
items:
type: object
additionalProperties: false
required:
- url
- label
properties:
url:
type: string
description: URL of this Apprise service.
example: "gotify://hostname/token"
label:
type: string
description: |
Label used in borgmatic logs for this Apprise
service.
example: gotify
description: |
A list of Apprise services to publish to with URLs and
@@ -2021,7 +2373,7 @@ properties:
send_logs:
type: boolean
description: |
Send borgmatic logs to Apprise services as part the
Send borgmatic logs to Apprise services as part of the
"finish", "fail", and "log" states. Defaults to true.
example: false
logs_size_limit:
@@ -2034,6 +2386,7 @@ properties:
start:
type: object
required: ['body']
additionalProperties: false
properties:
title:
type: string
@@ -2049,6 +2402,7 @@ properties:
finish:
type: object
required: ['body']
additionalProperties: false
properties:
title:
type: string
@@ -2064,6 +2418,7 @@ properties:
fail:
type: object
required: ['body']
additionalProperties: false
properties:
title:
type: string
@@ -2079,6 +2434,7 @@ properties:
log:
type: object
required: ['body']
additionalProperties: false
properties:
title:
type: string
@@ -2132,7 +2488,7 @@ properties:
send_logs:
type: boolean
description: |
Send borgmatic logs to Healthchecks as part the "finish",
Send borgmatic logs to Healthchecks as part of the "finish",
"fail", and "log" states. Defaults to true.
example: false
ping_body_limit:
@@ -2200,6 +2556,12 @@ properties:
- start
- finish
- fail
verify_tls:
type: boolean
description: |
Verify the TLS certificate of the push URL host. Defaults to
true.
example: false
description: |
Configuration for a monitoring integration with Uptime Kuma using
the Push monitor type.
@@ -2230,6 +2592,12 @@ properties:
PagerDuty integration key used to notify PagerDuty when a
backup errors. Supports the "{credential ...}" syntax.
example: a177cad45bd374409f78906a810a3074
send_logs:
type: boolean
description: |
Send borgmatic logs to PagerDuty when a backup errors.
Defaults to true.
example: false
description: |
Configuration for a monitoring integration with PagerDuty. Create an
account at https://www.pagerduty.com if you'd like to use this
@@ -2422,5 +2790,27 @@ properties:
description: |
Command to use instead of "keepassxc-cli".
example: /usr/local/bin/keepassxc-cli
key_file:
type: string
description: |
Path to a key file for unlocking the KeePassXC database.
example: /path/to/keyfile
yubikey:
type: string
description: |
YubiKey slot and optional serial number used to access the
KeePassXC database. The format is "<slot[:serial]>", where:
* <slot> is the YubiKey slot number (e.g., `1` or `2`).
* <serial> (optional) is the YubiKey's serial number (e.g.,
`7370001`).
example: "1:7370001"
description: |
Configuration for integration with the KeePassXC password manager.
default_actions:
type: boolean
description: |
Whether to apply default actions (e.g., backup) when no arguments
are supplied to the borgmatic command. If set to true, borgmatic
triggers the default actions (create, prune, compact and check). If
set to false, borgmatic displays the help message instead.
example: true

@@ -4,7 +4,7 @@ import os
import jsonschema
import ruamel.yaml

import borgmatic.config
import borgmatic.config.arguments
from borgmatic.config import constants, environment, load, normalize, override


@@ -21,6 +21,18 @@ def schema_filename():
return schema_path


def load_schema(schema_path): # pragma: no cover
'''
Given a schema filename path, load the schema and return it as a dict.

Raise Validation_error if the schema could not be parsed.
'''
try:
return load.load_configuration(schema_path)
except (ruamel.yaml.error.YAMLError, RecursionError) as error:
raise Validation_error(schema_path, (str(error),))


def format_json_error_path_element(path_element):
'''
Given a path element into a JSON data structure, format it for display as a string.
@@ -84,13 +96,17 @@ def apply_logical_validation(config_filename, parsed_configuration):
)


def parse_configuration(config_filename, schema_filename, overrides=None, resolve_env=True):
def parse_configuration(
config_filename, schema_filename, arguments, overrides=None, resolve_env=True
):
'''
Given the path to a config filename in YAML format, the path to a schema filename in a YAML
rendition of JSON Schema format, a sequence of configuration file override strings in the form
of "option.suboption=value", and whether to resolve environment variables, return the parsed
configuration as a data structure of nested dicts and lists corresponding to the schema. Example
return value:
rendition of JSON Schema format, arguments as dict from action name to argparse.Namespace, a
sequence of configuration file override strings in the form of "option.suboption=value", and
whether to resolve environment variables, return the parsed configuration as a data structure of
nested dicts and lists corresponding to the schema. Example return value.

Example return value:

{
'source_directories': ['/home', '/etc'],
@@ -113,6 +129,7 @@ def parse_configuration(config_filename, schema_filename, overrides=None, resolv
except (ruamel.yaml.error.YAMLError, RecursionError) as error:
raise Validation_error(config_filename, (str(error),))

borgmatic.config.arguments.apply_arguments_to_config(config, schema, arguments)
override.apply_overrides(config, schema, overrides)
constants.apply_constants(config, config.get('constants') if config else {})

@@ -138,16 +155,22 @@ def parse_configuration(config_filename, schema_filename, overrides=None, resolv
return config, config_paths, logs


def normalize_repository_path(repository):
def normalize_repository_path(repository, base=None):
'''
Given a repository path, return the absolute path of it (for local repositories).
Optionally, use a base path for resolving relative paths, e.g. to the configured working directory.
'''
# A colon in the repository could mean that it's either a file:// URL or a remote repository.
# If it's a remote repository, we don't want to normalize it. If it's a file:// URL, we do.
if ':' not in repository:
return os.path.abspath(repository)
return (
os.path.abspath(os.path.join(base, repository)) if base else os.path.abspath(repository)
)
elif repository.startswith('file://'):
return os.path.abspath(repository.partition('file://')[-1])
local_path = repository.partition('file://')[-1]
return (
os.path.abspath(os.path.join(base, local_path)) if base else os.path.abspath(local_path)
)
else:
return repository
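A quick sketch of what the new "base" parameter changes, using a copy of the definition above (the example paths are illustrative):

```python
import os

def normalize_repository_path(repository, base=None):
    # Copy of the new definition above: local paths and file:// URLs become
    # absolute, optionally resolved against a base; anything else is untouched.
    if ':' not in repository:
        return os.path.abspath(os.path.join(base, repository)) if base else os.path.abspath(repository)
    elif repository.startswith('file://'):
        local_path = repository.partition('file://')[-1]
        return os.path.abspath(os.path.join(base, local_path)) if base else os.path.abspath(local_path)
    return repository

print(normalize_repository_path('repo', base='/mnt/backups'))         # /mnt/backups/repo
print(normalize_repository_path('file://repo', base='/mnt/backups'))  # /mnt/backups/repo
print(normalize_repository_path('ssh://user@host/./repo'))            # unchanged (remote)
```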


@@ -266,8 +266,8 @@ def log_command(full_command, input_file=None, output_file=None, environment=Non
width=MAX_LOGGED_COMMAND_LENGTH,
placeholder=' ...',
)
+ (f" < {getattr(input_file, 'name', '')}" if input_file else '')
+ (f" > {getattr(output_file, 'name', '')}" if output_file else '')
+ (f" < {getattr(input_file, 'name', input_file)}" if input_file else '')
+ (f" > {getattr(output_file, 'name', output_file)}" if output_file else '')
)


@@ -315,8 +315,8 @@ def execute_command(
shell=shell,
env=environment,
cwd=working_directory,
# Necessary for the passcommand credential hook to work.
close_fds=not bool((environment or {}).get('BORG_PASSPHRASE_FD')),
# Necessary for passing credentials via anonymous pipe.
close_fds=False,
)
if not run_to_completion:
return process
@@ -333,6 +333,7 @@ def execute_command(

def execute_command_and_capture_output(
full_command,
input_file=None,
capture_stderr=False,
shell=False,
environment=None,
@@ -342,28 +343,30 @@ def execute_command_and_capture_output(
):
'''
Execute the given command (a sequence of command/argument strings), capturing and returning its
output (stdout). If capture stderr is True, then capture and return stderr in addition to
stdout. If shell is True, execute the command within a shell. If an environment variables dict
is given, then pass it into the command. If a working directory is given, use that as the
present working directory when running the command. If a Borg local path is given, and the
command matches it (regardless of arguments), treat exit code 1 as a warning instead of an
error. But if Borg exit codes are given as a sequence of exit code configuration dicts, then use
that configuration to decide what's an error and what's a warning.
output (stdout). If an input file descriptor is given, then pipe it to the command's stdin. If
capture stderr is True, then capture and return stderr in addition to stdout. If shell is True,
execute the command within a shell. If an environment variables dict is given, then pass it into
the command. If a working directory is given, use that as the present working directory when
running the command. If a Borg local path is given, and the command matches it (regardless of
arguments), treat exit code 1 as a warning instead of an error. But if Borg exit codes are given
as a sequence of exit code configuration dicts, then use that configuration to decide what's an
error and what's a warning.

Raise subprocesses.CalledProcessError if an error occurs while running the command.
'''
log_command(full_command, environment=environment)
log_command(full_command, input_file, environment=environment)
command = ' '.join(full_command) if shell else full_command

try:
output = subprocess.check_output(
command,
stdin=input_file,
stderr=subprocess.STDOUT if capture_stderr else None,
shell=shell,
env=environment,
cwd=working_directory,
# Necessary for the passcommand credential hook to work.
close_fds=not bool((environment or {}).get('BORG_PASSPHRASE_FD')),
# Necessary for passing credentials via anonymous pipe.
close_fds=False,
)
except subprocess.CalledProcessError as error:
if (
@@ -422,8 +425,8 @@ def execute_command_with_processes(
shell=shell,
env=environment,
cwd=working_directory,
# Necessary for the passcommand credential hook to work.
close_fds=not bool((environment or {}).get('BORG_PASSPHRASE_FD')),
# Necessary for passing credentials via anonymous pipe.
close_fds=False,
)
except (subprocess.CalledProcessError, OSError):
# Something has gone wrong. So vent each process' output buffer to prevent it from hanging.
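The close_fds=False change above is what lets anonymous-pipe file descriptors reach child processes. A minimal sketch of the mechanism, assuming a POSIX system with /dev/fd (the secret value is a placeholder):

```python
import os
import subprocess

# Write a secret to an anonymous pipe and let a child process read it back
# through /dev/fd instead of argv or the environment.
read_fd, write_fd = os.pipe()
os.write(write_fd, b'hunter2')
os.close(write_fd)

# Pipe descriptors are non-inheritable by default in Python 3.4+, so both of
# these steps are needed for the child to see the descriptor.
os.set_inheritable(read_fd, True)
output = subprocess.check_output(['cat', f'/dev/fd/{read_fd}'], close_fds=False)
print(output)  # b'hunter2'
```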

@@ -2,9 +2,11 @@ import logging
import os
import re
import shlex
import subprocess
import sys

import borgmatic.execute
import borgmatic.logger

logger = logging.getLogger(__name__)

@@ -44,54 +46,184 @@ def make_environment(current_environment, sys_module=sys):
return environment


def execute_hook(commands, umask, config_filename, description, dry_run, **context):
def filter_hooks(command_hooks, before=None, after=None, hook_name=None, action_names=None):
'''
Given a list of hook commands to execute, a umask to execute with (or None), a config filename,
a hook description, and whether this is a dry run, run the given commands. Or, don't run them
if this is a dry run.
Given a sequence of command hook dicts from configuration and one or more filters (before name,
after name, calling hook name, or a sequence of action names), filter down the command hooks to
just the ones that match the given filters.
'''
return tuple(
hook_config
for hook_config in command_hooks or ()
for config_action_names in (hook_config.get('when'),)
if before is None or hook_config.get('before') == before
if after is None or hook_config.get('after') == after
if action_names is None
or config_action_names is None
or set(config_action_names or ()).intersection(set(action_names))
)
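A short sketch of filter_hooks in action, using a copy of the definition above and made-up hook dicts (a missing "when" means the hook applies to every action):

```python
def filter_hooks(command_hooks, before=None, after=None, hook_name=None, action_names=None):
    # Copy of the definition above, for illustration.
    return tuple(
        hook_config
        for hook_config in command_hooks or ()
        for config_action_names in (hook_config.get('when'),)
        if before is None or hook_config.get('before') == before
        if after is None or hook_config.get('after') == after
        if action_names is None
        or config_action_names is None
        or set(config_action_names or ()).intersection(set(action_names))
    )

hooks = (
    {'before': 'action', 'when': ['create'], 'run': ['echo Backing up.']},
    {'after': 'action', 'run': ['echo Done.']},  # No "when": matches all actions.
    {'before': 'everything', 'run': ['echo Starting.']},
)

assert filter_hooks(hooks, before='action', action_names=['create']) == (hooks[0],)
assert filter_hooks(hooks, after='action', action_names=['prune']) == (hooks[1],)
assert filter_hooks(hooks, before='action', action_names=['prune']) == ()
```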


def execute_hooks(command_hooks, umask, working_directory, dry_run, **context):
'''
Given a sequence of command hook dicts from configuration, a umask to execute with (or None), a
working directory to execute with, and whether this is a dry run, run the commands for each
hook. Or don't run them if this is a dry run.

The context contains optional values interpolated by name into the hook commands.

Raise ValueError if the umask cannot be parsed.
Raise ValueError if the umask cannot be parsed or a hook is invalid.
Raise subprocesses.CalledProcessError if an error occurs in a hook.
'''
if not commands:
logger.debug(f'No commands to run for {description} hook')
return
borgmatic.logger.add_custom_log_levels()

dry_run_label = ' (dry run; not actually running hooks)' if dry_run else ''

context['configuration_filename'] = config_filename
commands = [interpolate_context(description, command, context) for command in commands]
for hook_config in command_hooks:
commands = hook_config.get('run')

if len(commands) == 1:
logger.info(f'Running command for {description} hook{dry_run_label}')
else:
logger.info(
f'Running {len(commands)} commands for {description} hook{dry_run_label}',
)
if 'before' in hook_config:
description = f'before {hook_config.get("before")}'
elif 'after' in hook_config:
description = f'after {hook_config.get("after")}'
else:
raise ValueError(f'Invalid hook configuration: {hook_config}')

if umask:
parsed_umask = int(str(umask), 8)
logger.debug(f'Set hook umask to {oct(parsed_umask)}')
original_umask = os.umask(parsed_umask)
else:
original_umask = None
if not commands:
logger.debug(f'No commands to run for {description} hook')
continue

try:
for command in commands:
if dry_run:
continue
commands = [interpolate_context(description, command, context) for command in commands]

borgmatic.execute.execute_command(
[command],
output_log_level=(logging.ERROR if description == 'on-error' else logging.WARNING),
shell=True,
environment=make_environment(os.environ),
if len(commands) == 1:
logger.info(f'Running {description} command hook{dry_run_label}')
else:
logger.info(
f'Running {len(commands)} commands for {description} hook{dry_run_label}',
)
finally:
if original_umask:
os.umask(original_umask)

if umask:
parsed_umask = int(str(umask), 8)
logger.debug(f'Setting hook umask to {oct(parsed_umask)}')
original_umask = os.umask(parsed_umask)
else:
original_umask = None

try:
for command in commands:
if dry_run:
continue

borgmatic.execute.execute_command(
[command],
output_log_level=(
logging.ERROR if hook_config.get('after') == 'error' else logging.ANSWER
),
shell=True,
environment=make_environment(os.environ),
working_directory=working_directory,
)
finally:
if original_umask:
os.umask(original_umask)


class Before_after_hooks:
'''
A Python context manager for executing command hooks both before and after the wrapped code.

Example use as a context manager:

with borgmatic.hooks.command.Before_after_hooks(
command_hooks=config.get('commands'),
before_after='do_stuff',
umask=config.get('umask'),
dry_run=dry_run,
hook_name='myhook',
):
do()
some()
stuff()

With that context manager in place, "before" command hooks execute before the wrapped code runs,
and "after" command hooks execute after the wrapped code completes.
'''

def __init__(
self,
command_hooks,
before_after,
umask,
working_directory,
dry_run,
hook_name=None,
action_names=None,
**context,
):
'''
Given a sequence of command hook configuration dicts, the before/after name, a umask to run
commands with, a working directory to run commands with, a dry run flag, the name of the
calling hook, a sequence of action names, and any context for the executed commands, save
those data points for use below.
'''
self.command_hooks = command_hooks
self.before_after = before_after
self.umask = umask
self.working_directory = working_directory
self.dry_run = dry_run
self.hook_name = hook_name
self.action_names = action_names
self.context = context

def __enter__(self):
'''
Run the configured "before" command hooks that match the initialized data points.
'''
try:
execute_hooks(
borgmatic.hooks.command.filter_hooks(
self.command_hooks,
before=self.before_after,
hook_name=self.hook_name,
action_names=self.action_names,
),
self.umask,
self.working_directory,
self.dry_run,
**self.context,
)
except (OSError, subprocess.CalledProcessError) as error:
if considered_soft_failure(error):
return

# Trigger the after hook manually, since raising here will prevent it from being run
# otherwise.
self.__exit__(None, None, None)

raise ValueError(f'Error running before {self.before_after} hook: {error}')

def __exit__(self, exception_type, exception, traceback):
'''
Run the configured "after" command hooks that match the initialized data points.
'''
try:
execute_hooks(
borgmatic.hooks.command.filter_hooks(
self.command_hooks,
after=self.before_after,
hook_name=self.hook_name,
action_names=self.action_names,
),
self.umask,
self.working_directory,
self.dry_run,
**self.context,
)
except (OSError, subprocess.CalledProcessError) as error:
if considered_soft_failure(error):
return

raise ValueError(f'Error running after {self.before_after} hook: {error}')


def considered_soft_failure(error):

@@ -19,9 +19,11 @@ def load_credential(hook_config, config, credential_parameters):

raise ValueError(f'Cannot load invalid credential: "{name}"')

expanded_credential_path = os.path.expanduser(credential_path)

try:
with open(
os.path.join(config.get('working_directory', ''), credential_path)
os.path.join(config.get('working_directory', ''), expanded_credential_path)
) as credential_file:
return credential_file.read().rstrip(os.linesep)
except (FileNotFoundError, OSError) as error:
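The os.path.join call above has a useful property worth noting: if the (possibly ~-expanded) credential path is absolute, it wins over the configured working directory. A small demonstration with placeholder paths:

```python
import os

# os.path.join discards earlier components when a later one is absolute, so an
# absolute or ~-expanded credential path overrides the working directory.
print(os.path.join('/working/dir', 'credentials/token'))  # /working/dir/credentials/token
print(os.path.join('/working/dir', '/etc/token'))         # /etc/token
print(os.path.join('', 'credentials/token'))              # credentials/token (no working_directory set)
```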

@@ -11,32 +11,35 @@ def load_credential(hook_config, config, credential_parameters):
'''
Given the hook configuration dict, the configuration dict, and a credential parameters tuple
containing a KeePassXC database path and an attribute name to load, run keepassxc-cli to fetch
the corresponidng KeePassXC credential and return it.
the corresponding KeePassXC credential and return it.

Raise ValueError if keepassxc-cli can't retrieve the credential.
'''
try:
(database_path, attribute_name) = credential_parameters
except ValueError:
path_and_name = ' '.join(credential_parameters)
raise ValueError(f'Invalid KeePassXC credential parameters: {credential_parameters}')

raise ValueError(
f'Cannot load credential with invalid KeePassXC database path and attribute name: "{path_and_name}"'
)
expanded_database_path = os.path.expanduser(database_path)

if not os.path.exists(database_path):
raise ValueError(
f'Cannot load credential because KeePassXC database path does not exist: {database_path}'
)
if not os.path.exists(expanded_database_path):
raise ValueError(f'KeePassXC database path does not exist: {database_path}')

return borgmatic.execute.execute_command_and_capture_output(
# Build the keepassxc-cli command.
command = (
tuple(shlex.split((hook_config or {}).get('keepassxc_cli_command', 'keepassxc-cli')))
+ ('show', '--show-protected', '--attributes', 'Password')
+ (
'show',
'--show-protected',
'--attributes',
'Password',
database_path,
attribute_name,
('--key-file', hook_config['key_file'])
if hook_config and hook_config.get('key_file')
else ()
)
).rstrip(os.linesep)
+ (
('--yubikey', hook_config['yubikey'])
if hook_config and hook_config.get('yubikey')
else ()
)
+ (expanded_database_path, attribute_name) # Ensure database and entry are last.
)

return borgmatic.execute.execute_command_and_capture_output(command).rstrip(os.linesep)
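To make the command assembly above concrete: a sketch of the resulting keepassxc-cli invocation for a hypothetical hook configuration. The database path and entry name are illustrative placeholders, not values from the repository.

```python
import shlex

hook_config = {'key_file': '/path/to/keyfile', 'yubikey': '1:7370001'}

command = (
    tuple(shlex.split(hook_config.get('keepassxc_cli_command', 'keepassxc-cli')))
    + ('show', '--show-protected', '--attributes', 'Password')
    + (('--key-file', hook_config['key_file']) if hook_config.get('key_file') else ())
    + (('--yubikey', hook_config['yubikey']) if hook_config.get('yubikey') else ())
    + ('/etc/borgmatic/credentials.kdbx', 'borg passphrase')  # Database and entry go last.
)
print(shlex.join(command))
# keepassxc-cli show --show-protected --attributes Password --key-file /path/to/keyfile
#   --yubikey 1:7370001 /etc/borgmatic/credentials.kdbx 'borg passphrase'
```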

@@ -5,7 +5,7 @@ import re
logger = logging.getLogger(__name__)


CREDENTIAL_NAME_PATTERN = re.compile(r'^\w+$')
CREDENTIAL_NAME_PATTERN = re.compile(r'^[\w.-]+$')
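The widened pattern accepts dots and hyphens in addition to word characters. A quick comparison of the two patterns on some illustrative credential names:

```python
import re

OLD_PATTERN = re.compile(r'^\w+$')      # Letters, digits, and underscores only.
NEW_PATTERN = re.compile(r'^[\w.-]+$')  # Also allows dots and hyphens.

for name in ('borg_passphrase', 'borg-passphrase', 'passphrase.txt'):
    print(name, bool(OLD_PATTERN.match(name)), bool(NEW_PATTERN.match(name)))
# borg_passphrase True True
# borg-passphrase False True
# passphrase.txt False True
```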


def load_credential(hook_config, config, credential_parameters):

@@ -48,6 +48,47 @@ def get_subvolume_mount_points(findmnt_command):
Subvolume = collections.namedtuple('Subvolume', ('path', 'contained_patterns'), defaults=((),))


def get_subvolume_property(btrfs_command, subvolume_path, property_name):
output = borgmatic.execute.execute_command_and_capture_output(
tuple(btrfs_command.split(' '))
+ (
'property',
'get',
'-t', # Type.
'subvol',
subvolume_path,
property_name,
),
)

try:
value = output.strip().split('=')[1]
except IndexError:
raise ValueError(f'Invalid {btrfs_command} property output')

return {
'true': True,
'false': False,
}.get(value, value)
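The parsing above assumes "btrfs property get" output of the form "name=value". It takes the value after "=" and maps the strings "true"/"false" to booleans, passing any other value through unchanged:

```python
# Illustrative outputs; "ro=true" is the shape the function expects.
for output in ('ro=true', 'ro=false', 'label=backups'):
    value = output.strip().split('=')[1]
    print({'true': True, 'false': False}.get(value, value))
# True
# False
# backups
```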


def omit_read_only_subvolume_mount_points(btrfs_command, subvolume_paths):
'''
Given a Btrfs command to run and a sequence of Btrfs subvolume mount points, filter them down to
just those that are read-write. The idea is that Btrfs can't actually snapshot a read-only
subvolume, so we should just ignore them.
'''
retained_subvolume_paths = []

for subvolume_path in subvolume_paths:
if get_subvolume_property(btrfs_command, subvolume_path, 'ro'):
logger.debug(f'Ignoring Btrfs subvolume {subvolume_path} because it is read-only')
else:
retained_subvolume_paths.append(subvolume_path)

return tuple(retained_subvolume_paths)


def get_subvolumes(btrfs_command, findmnt_command, patterns=None):
'''
Given a Btrfs command to run and a sequence of configured patterns, find the intersection
@@ -67,7 +108,11 @@ def get_subvolumes(btrfs_command, findmnt_command, patterns=None):
# backup. Sort the subvolumes from longest to shortest mount points, so longer mount points get
# a whack at the candidate pattern piñata before their parents do. (Patterns are consumed during
# this process, so no two subvolumes end up with the same contained patterns.)
for mount_point in reversed(get_subvolume_mount_points(findmnt_command)):
for mount_point in reversed(
omit_read_only_subvolume_mount_points(
btrfs_command, get_subvolume_mount_points(findmnt_command)
)
):
subvolumes.extend(
Subvolume(mount_point, contained_patterns)
for contained_patterns in (

@@ -1,6 +1,7 @@
import copy
import logging
import os
import re
import shlex

import borgmatic.borg.pattern
@@ -23,14 +24,92 @@ def make_dump_path(base_directory): # pragma: no cover
return dump.make_data_source_dump_path(base_directory, 'mariadb_databases')


SYSTEM_DATABASE_NAMES = ('information_schema', 'mysql', 'performance_schema', 'sys')
DEFAULTS_EXTRA_FILE_FLAG_PATTERN = re.compile('^--defaults-extra-file=(?P<filename>.*)$')


def database_names_to_dump(database, config, environment, dry_run):
def parse_extra_options(extra_options):
'''
Given a requested database config and a configuration dict, return the corresponding sequence of
database names to dump. In the case of "all", query for the names of databases on the configured
host and return them, excluding any system databases that will cause problems during restore.
Given an extra options string, split the options into a tuple and return it. Additionally, if
the first option is "--defaults-extra-file=...", then remove it from the options and return the
filename.

So the return value is a tuple of: (parsed options, defaults extra filename).

The intent is to support downstream merging of multiple "--defaults-extra-file"s, as
MariaDB/MySQL only allows one at a time.
'''
split_extra_options = tuple(shlex.split(extra_options)) if extra_options else ()

if not split_extra_options:
return ((), None)

match = DEFAULTS_EXTRA_FILE_FLAG_PATTERN.match(split_extra_options[0])

if not match:
return (split_extra_options, None)

return (split_extra_options[1:], match.group('filename'))
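A short sketch of parse_extra_options on some illustrative option strings, using a copy of the definition above:

```python
import re
import shlex

DEFAULTS_EXTRA_FILE_FLAG_PATTERN = re.compile('^--defaults-extra-file=(?P<filename>.*)$')

def parse_extra_options(extra_options):
    # Copy of the definition above, for illustration.
    split_extra_options = tuple(shlex.split(extra_options)) if extra_options else ()
    if not split_extra_options:
        return ((), None)
    match = DEFAULTS_EXTRA_FILE_FLAG_PATTERN.match(split_extra_options[0])
    if not match:
        return (split_extra_options, None)
    return (split_extra_options[1:], match.group('filename'))

print(parse_extra_options('--defaults-extra-file=/etc/extra.cnf --skip-comments'))
# (('--skip-comments',), '/etc/extra.cnf')
print(parse_extra_options('--skip-comments'))  # (('--skip-comments',), None)
print(parse_extra_options(None))               # ((), None)
```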


def make_defaults_file_options(username=None, password=None, defaults_extra_filename=None):
'''
Given a database username and/or password, write it to an anonymous pipe and return the flags
for passing that file descriptor to an executed command. The idea is that this is a more secure
way to transmit credentials to a database client than using an environment variable.

If no username or password are given, then return the options for the given defaults extra
filename (if any). But if there is a username and/or password and a defaults extra filename is
given, then "!include" it from the generated file, effectively allowing multiple defaults extra
files.

Do not use the returned value for multiple different command invocations. That will not work
because each pipe is "used up" once read.
'''
escaped_password = None if password is None else password.replace('\\', '\\\\')

values = '\n'.join(
(
(f'user={username}' if username is not None else ''),
(f'password="{escaped_password}"' if escaped_password is not None else ''),
)
).strip()

if not values:
if defaults_extra_filename:
return (f'--defaults-extra-file={defaults_extra_filename}',)

return ()

fields_message = ' and '.join(
field_name
for field_name in (
(f'username ({username})' if username is not None else None),
('password' if password is not None else None),
)
if field_name is not None
)
include_message = f' (including {defaults_extra_filename})' if defaults_extra_filename else ''
logger.debug(f'Writing database {fields_message} to defaults extra file pipe{include_message}')

include = f'!include {defaults_extra_filename}\n' if defaults_extra_filename else ''

read_file_descriptor, write_file_descriptor = os.pipe()
os.write(write_file_descriptor, f'{include}[client]\n{values}'.encode('utf-8'))
os.close(write_file_descriptor)

# This plus subprocess.Popen(..., close_fds=False) in execute.py is necessary for the database
# client child process to inherit the file descriptor.
os.set_inheritable(read_file_descriptor, True)

return (f'--defaults-extra-file=/dev/fd/{read_file_descriptor}',)
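What the generated defaults extra file looks like when a username/password and a pre-existing defaults extra file are combined ("!include" chains them). A sketch assuming a POSIX system with /dev/fd; the filename and credentials are illustrative placeholders:

```python
import os

include = '!include /etc/extra.cnf\n'
values = 'user=backups\npassword="trustsome1"'

read_fd, write_fd = os.pipe()
os.write(write_fd, f'{include}[client]\n{values}'.encode('utf-8'))
os.close(write_fd)

# The database client reads the pipe the same way, exactly once.
with open(f'/dev/fd/{read_fd}') as defaults_file:
    print(defaults_file.read())
# !include /etc/extra.cnf
# [client]
# user=backups
# password="trustsome1"
```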


def database_names_to_dump(database, config, username, password, environment, dry_run):
'''
Given a requested database config, a configuration dict, a database username and password, an
environment dict, and whether this is a dry run, return the corresponding sequence of database
names to dump. In the case of "all", query for the names of databases on the configured host and
return them, excluding any system databases that will cause problems during restore.
'''
if database['name'] != 'all':
return (database['name'],)
@@ -40,24 +119,22 @@ def database_names_to_dump(database, config, environment, dry_run):
mariadb_show_command = tuple(
shlex.quote(part) for part in shlex.split(database.get('mariadb_command') or 'mariadb')
)
extra_options, defaults_extra_filename = parse_extra_options(database.get('list_options'))
show_command = (
mariadb_show_command
+ (tuple(database['list_options'].split(' ')) if 'list_options' in database else ())
+ make_defaults_file_options(username, password, defaults_extra_filename)
+ extra_options
+ (('--host', database['hostname']) if 'hostname' in database else ())
+ (('--port', str(database['port'])) if 'port' in database else ())
+ (('--protocol', 'tcp') if 'hostname' in database or 'port' in database else ())
+ (
(
'--user',
borgmatic.hooks.credential.parse.resolve_credential(database['username'], config),
)
if 'username' in database
else ()
)
+ (('--ssl',) if database.get('tls') is True else ())
+ (('--skip-ssl',) if database.get('tls') is False else ())
+ ('--skip-column-names', '--batch')
+ ('--execute', 'show schemas')
)

logger.debug('Querying for "all" MariaDB databases to dump')

show_output = execute_command_and_capture_output(show_command, environment=environment)

return tuple(
@@ -67,8 +144,19 @@ def database_names_to_dump(database, config, environment, dry_run):
)


SYSTEM_DATABASE_NAMES = ('information_schema', 'mysql', 'performance_schema', 'sys')


def execute_dump_command(
database, config, dump_path, database_names, environment, dry_run, dry_run_label
database,
config,
username,
password,
dump_path,
database_names,
environment,
dry_run,
dry_run_label,
):
'''
Kick off a dump for the given MariaDB database (provided as a configuration dict) to a named
@@ -95,21 +183,17 @@ def execute_dump_command(
shlex.quote(part)
for part in shlex.split(database.get('mariadb_dump_command') or 'mariadb-dump')
)
extra_options, defaults_extra_filename = parse_extra_options(database.get('options'))
dump_command = (
mariadb_dump_command
+ (tuple(database['options'].split(' ')) if 'options' in database else ())
+ make_defaults_file_options(username, password, defaults_extra_filename)
+ extra_options
+ (('--add-drop-database',) if database.get('add_drop_database', True) else ())
+ (('--host', database['hostname']) if 'hostname' in database else ())
+ (('--port', str(database['port'])) if 'port' in database else ())
+ (('--protocol', 'tcp') if 'hostname' in database or 'port' in database else ())
+ (
(
'--user',
borgmatic.hooks.credential.parse.resolve_credential(database['username'], config),
)
if 'username' in database
else ()
)
+ (('--ssl',) if database.get('tls') is True else ())
+ (('--skip-ssl',) if database.get('tls') is False else ())
+ ('--databases',)
+ database_names
+ ('--result-file', dump_filename)
@@ -165,19 +249,16 @@ def dump_data_sources(

for database in databases:
dump_path = make_dump_path(borgmatic_runtime_directory)
environment = dict(
os.environ,
**(
{
'MYSQL_PWD': borgmatic.hooks.credential.parse.resolve_credential(
database['password'], config
)
}
if 'password' in database
else {}
),
username = borgmatic.hooks.credential.parse.resolve_credential(
database.get('username'), config
)
password = borgmatic.hooks.credential.parse.resolve_credential(
database.get('password'), config
)
environment = dict(os.environ)
dump_database_names = database_names_to_dump(
database, config, username, password, environment, dry_run
)
dump_database_names = database_names_to_dump(database, config, environment, dry_run)

if not dump_database_names:
if dry_run:
@@ -193,6 +274,8 @@ def dump_data_sources(
execute_dump_command(
renamed_database,
config,
username,
password,
dump_path,
(dump_name,),
environment,
@@ -205,6 +288,8 @@ def dump_data_sources(
execute_dump_command(
database,
config,
username,
password,
dump_path,
dump_database_names,
environment,
@@ -278,6 +363,7 @@ def restore_data_source_dump(
port = str(
connection_params['port'] or data_source.get('restore_port', data_source.get('port', ''))
)
tls = data_source.get('restore_tls', data_source.get('tls'))
username = borgmatic.hooks.credential.parse.resolve_credential(
(
connection_params['username']
@@ -296,20 +382,19 @@ def restore_data_source_dump(
mariadb_restore_command = tuple(
shlex.quote(part) for part in shlex.split(data_source.get('mariadb_command') or 'mariadb')
)
extra_options, defaults_extra_filename = parse_extra_options(data_source.get('restore_options'))
restore_command = (
mariadb_restore_command
+ make_defaults_file_options(username, password, defaults_extra_filename)
+ extra_options
+ ('--batch',)
+ (
tuple(data_source['restore_options'].split(' '))
if 'restore_options' in data_source
else ()
)
+ (('--host', hostname) if hostname else ())
+ (('--port', str(port)) if port else ())
+ (('--protocol', 'tcp') if hostname or port else ())
+ (('--user', username) if username else ())
+ (('--ssl',) if tls is True else ())
+ (('--skip-ssl',) if tls is False else ())
)
environment = dict(os.environ, **({'MYSQL_PWD': password} if password else {}))
environment = dict(os.environ)

logger.debug(f"Restoring MariaDB database {data_source['name']}{dry_run_label}")
if dry_run:

@@ -53,6 +53,7 @@ def dump_data_sources(
logger.info(f'Dumping MongoDB databases{dry_run_label}')

processes = []

for database in databases:
name = database['name']
dump_filename = dump.make_data_source_dump_filename(
@@ -89,14 +90,41 @@ def dump_data_sources(
return processes


def make_password_config_file(password):
'''
Given a database password, write it as a MongoDB configuration file to an anonymous pipe and
return its filename. The idea is that this is a more secure way to transmit a password to
MongoDB than providing it directly on the command-line.

Do not use the returned value for multiple different command invocations. That will not work
because each pipe is "used up" once read.
'''
logger.debug('Writing MongoDB password to configuration file pipe')

read_file_descriptor, write_file_descriptor = os.pipe()
os.write(write_file_descriptor, f'password: {password}'.encode('utf-8'))
os.close(write_file_descriptor)

# This plus subprocess.Popen(..., close_fds=False) in execute.py is necessary for the database
# client child process to inherit the file descriptor.
os.set_inheritable(read_file_descriptor, True)

return f'/dev/fd/{read_file_descriptor}'
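This is the same anonymous-pipe pattern as the MariaDB defaults file above, here producing a one-shot MongoDB YAML config so the password stays out of argv. A sketch using a copy of the definition, assuming a POSIX system with /dev/fd (the password is a placeholder):

```python
import os

def make_password_config_file(password):
    # Copy of the definition above, for illustration.
    read_file_descriptor, write_file_descriptor = os.pipe()
    os.write(write_file_descriptor, f'password: {password}'.encode('utf-8'))
    os.close(write_file_descriptor)
    os.set_inheritable(read_file_descriptor, True)
    return f'/dev/fd/{read_file_descriptor}'

config_path = make_password_config_file('trustsome1')
print(config_path)  # e.g. /dev/fd/3
with open(config_path) as config_file:  # mongodump/mongorestore read it via --config, once.
    print(config_file.read())           # password: trustsome1
```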
|
||||
|
||||
|
||||
def build_dump_command(database, config, dump_filename, dump_format):
|
||||
'''
|
||||
Return the mongodump command from a single database configuration.
|
||||
Return the custom mongodump_command from a single database configuration.
|
||||
'''
|
||||
all_databases = database['name'] == 'all'
|
||||
|
||||
password = borgmatic.hooks.credential.parse.resolve_credential(database.get('password'), config)
|
||||
|
||||
dump_command = tuple(
|
||||
shlex.quote(part) for part in shlex.split(database.get('mongodump_command') or 'mongodump')
|
||||
)
|
||||
return (
|
||||
('mongodump',)
|
||||
dump_command
|
||||
+ (('--out', shlex.quote(dump_filename)) if dump_format == 'directory' else ())
|
||||
+ (('--host', shlex.quote(database['hostname'])) if 'hostname' in database else ())
|
||||
+ (('--port', shlex.quote(str(database['port']))) if 'port' in database else ())
|
||||
@@ -112,18 +140,7 @@ def build_dump_command(database, config, dump_filename, dump_format):
|
||||
if 'username' in database
|
||||
else ()
|
||||
)
|
||||
+ (
|
||||
(
|
||||
'--password',
|
||||
shlex.quote(
|
||||
borgmatic.hooks.credential.parse.resolve_credential(
|
||||
database['password'], config
|
||||
)
|
||||
),
|
||||
)
|
||||
if 'password' in database
|
||||
else ()
|
||||
)
|
||||
+ (('--config', make_password_config_file(password)) if password else ())
|
||||
+ (
|
||||
('--authenticationDatabase', shlex.quote(database['authentication_database']))
|
||||
if 'authentication_database' in database
|
||||
@@ -216,7 +233,7 @@ def restore_data_source_dump(

 def build_restore_command(extract_process, database, config, dump_filename, connection_params):
     '''
-    Return the mongorestore command from a single database configuration.
+    Return the custom mongorestore_command from a single database configuration.
     '''
     hostname = connection_params['hostname'] or database.get(
         'restore_hostname', database.get('hostname')
@@ -237,7 +254,10 @@ def build_restore_command(extract_process, database, config, dump_filename, connection_params):
         config,
     )

-    command = ['mongorestore']
+    command = list(
+        shlex.quote(part)
+        for part in shlex.split(database.get('mongorestore_command') or 'mongorestore')
+    )
     if extract_process:
         command.append('--archive')
     else:
@@ -251,7 +271,7 @@ def build_restore_command(extract_process, database, config, dump_filename, connection_params):
     if username:
         command.extend(('--username', username))
     if password:
-        command.extend(('--password', password))
+        command.extend(('--config', make_password_config_file(password)))
     if 'authentication_database' in database:
         command.extend(('--authenticationDatabase', database['authentication_database']))
     if 'restore_options' in database:
@@ -6,6 +6,7 @@ import shlex
 import borgmatic.borg.pattern
 import borgmatic.config.paths
 import borgmatic.hooks.credential.parse
+import borgmatic.hooks.data_source.mariadb
 from borgmatic.execute import (
     execute_command,
     execute_command_and_capture_output,
@@ -26,11 +27,12 @@ def make_dump_path(base_directory):  # pragma: no cover
 SYSTEM_DATABASE_NAMES = ('information_schema', 'mysql', 'performance_schema', 'sys')


-def database_names_to_dump(database, config, environment, dry_run):
+def database_names_to_dump(database, config, username, password, environment, dry_run):
     '''
-    Given a requested database config and a configuration dict, return the corresponding sequence of
-    database names to dump. In the case of "all", query for the names of databases on the configured
-    host and return them, excluding any system databases that will cause problems during restore.
+    Given a requested database config, a configuration dict, a database username and password, an
+    environment dict, and whether this is a dry run, return the corresponding sequence of database
+    names to dump. In the case of "all", query for the names of databases on the configured host and
+    return them, excluding any system databases that will cause problems during restore.
     '''
     if database['name'] != 'all':
         return (database['name'],)
@@ -40,24 +42,26 @@ def database_names_to_dump(database, config, environment, dry_run):
     mysql_show_command = tuple(
         shlex.quote(part) for part in shlex.split(database.get('mysql_command') or 'mysql')
     )
+    extra_options, defaults_extra_filename = (
+        borgmatic.hooks.data_source.mariadb.parse_extra_options(database.get('list_options'))
+    )
     show_command = (
         mysql_show_command
-        + (tuple(database['list_options'].split(' ')) if 'list_options' in database else ())
+        + borgmatic.hooks.data_source.mariadb.make_defaults_file_options(
+            username, password, defaults_extra_filename
+        )
+        + extra_options
         + (('--host', database['hostname']) if 'hostname' in database else ())
         + (('--port', str(database['port'])) if 'port' in database else ())
         + (('--protocol', 'tcp') if 'hostname' in database or 'port' in database else ())
-        + (
-            (
-                '--user',
-                borgmatic.hooks.credential.parse.resolve_credential(database['username'], config),
-            )
-            if 'username' in database
-            else ()
-        )
         + (('--ssl',) if database.get('tls') is True else ())
         + (('--skip-ssl',) if database.get('tls') is False else ())
         + ('--skip-column-names', '--batch')
         + ('--execute', 'show schemas')
     )

     logger.debug('Querying for "all" MySQL databases to dump')

     show_output = execute_command_and_capture_output(show_command, environment=environment)

     return tuple(
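As an aside, the filtering the docstring describes, excluding system databases from a `show schemas` listing, boils down to something like this sketch (the helper name is made up for illustration):

```python
SYSTEM_DATABASE_NAMES = ('information_schema', 'mysql', 'performance_schema', 'sys')

def names_from_show_output(show_output):
    # Hypothetical illustration: keep one database name per line, minus system databases.
    return tuple(
        name
        for name in (line.strip() for line in show_output.splitlines())
        if name and name not in SYSTEM_DATABASE_NAMES
    )

assert names_from_show_output('mysql\nappdata\nsys\n') == ('appdata',)
```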
@@ -68,7 +72,15 @@ def database_names_to_dump(database, config, environment, dry_run):


 def execute_dump_command(
-    database, config, dump_path, database_names, environment, dry_run, dry_run_label
+    database,
+    config,
+    username,
+    password,
+    dump_path,
+    database_names,
+    environment,
+    dry_run,
+    dry_run_label,
 ):
     '''
     Kick off a dump for the given MySQL/MariaDB database (provided as a configuration dict) to a
@@ -94,21 +106,21 @@ def execute_dump_command(
     mysql_dump_command = tuple(
         shlex.quote(part) for part in shlex.split(database.get('mysql_dump_command') or 'mysqldump')
     )
+    extra_options, defaults_extra_filename = (
+        borgmatic.hooks.data_source.mariadb.parse_extra_options(database.get('options'))
+    )
     dump_command = (
         mysql_dump_command
-        + (tuple(database['options'].split(' ')) if 'options' in database else ())
+        + borgmatic.hooks.data_source.mariadb.make_defaults_file_options(
+            username, password, defaults_extra_filename
+        )
+        + extra_options
         + (('--add-drop-database',) if database.get('add_drop_database', True) else ())
         + (('--host', database['hostname']) if 'hostname' in database else ())
         + (('--port', str(database['port'])) if 'port' in database else ())
         + (('--protocol', 'tcp') if 'hostname' in database or 'port' in database else ())
-        + (
-            (
-                '--user',
-                borgmatic.hooks.credential.parse.resolve_credential(database['username'], config),
-            )
-            if 'username' in database
-            else ()
-        )
         + (('--ssl',) if database.get('tls') is True else ())
         + (('--skip-ssl',) if database.get('tls') is False else ())
         + ('--databases',)
         + database_names
         + ('--result-file', dump_filename)
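`make_defaults_file_options()` itself isn't shown in this diff, but the idea appears to be the same pipe trick as the MongoDB hook above: write a `[client]` section carrying the credentials to an anonymous pipe and hand the client `--defaults-extra-file=/dev/fd/N`, instead of exporting `MYSQL_PWD`. A rough, purely illustrative sketch:

```python
import os

def defaults_file_options_sketch(username=None, password=None):
    # Illustrative only: the real make_defaults_file_options() lives in
    # borgmatic.hooks.data_source.mariadb and may differ in detail.
    if username is None and password is None:
        return ()

    read_fd, write_fd = os.pipe()
    lines = ['[client]']
    if username is not None:
        lines.append(f'user={username}')
    if password is not None:
        lines.append(f'password={password}')
    os.write(write_fd, '\n'.join(lines).encode('utf-8'))
    os.close(write_fd)
    os.set_inheritable(read_fd, True)

    return (f'--defaults-extra-file=/dev/fd/{read_fd}',)
```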
@@ -164,19 +176,16 @@ def dump_data_sources(

     for database in databases:
         dump_path = make_dump_path(borgmatic_runtime_directory)
-        environment = dict(
-            os.environ,
-            **(
-                {
-                    'MYSQL_PWD': borgmatic.hooks.credential.parse.resolve_credential(
-                        database['password'], config
-                    )
-                }
-                if 'password' in database
-                else {}
-            ),
-        )
-        dump_database_names = database_names_to_dump(database, config, environment, dry_run)
+        username = borgmatic.hooks.credential.parse.resolve_credential(
+            database.get('username'), config
+        )
+        password = borgmatic.hooks.credential.parse.resolve_credential(
+            database.get('password'), config
+        )
+        environment = dict(os.environ)
+        dump_database_names = database_names_to_dump(
+            database, config, username, password, environment, dry_run
+        )

         if not dump_database_names:
             if dry_run:
@@ -192,6 +201,8 @@ def dump_data_sources(
                 execute_dump_command(
                     renamed_database,
                     config,
+                    username,
+                    password,
                     dump_path,
                     (dump_name,),
                     environment,
@@ -204,6 +215,8 @@ def dump_data_sources(
             execute_dump_command(
                 database,
                 config,
+                username,
+                password,
                 dump_path,
                 dump_database_names,
                 environment,
@@ -277,6 +290,7 @@ def restore_data_source_dump(
     port = str(
         connection_params['port'] or data_source.get('restore_port', data_source.get('port', ''))
     )
     tls = data_source.get('restore_tls', data_source.get('tls'))
     username = borgmatic.hooks.credential.parse.resolve_credential(
         (
             connection_params['username']
@@ -295,20 +309,23 @@ def restore_data_source_dump(
     mysql_restore_command = tuple(
         shlex.quote(part) for part in shlex.split(data_source.get('mysql_command') or 'mysql')
     )
+    extra_options, defaults_extra_filename = (
+        borgmatic.hooks.data_source.mariadb.parse_extra_options(data_source.get('restore_options'))
+    )
     restore_command = (
         mysql_restore_command
-        + ('--batch',)
-        + (
-            tuple(data_source['restore_options'].split(' '))
-            if 'restore_options' in data_source
-            else ()
-        )
+        + borgmatic.hooks.data_source.mariadb.make_defaults_file_options(
+            username, password, defaults_extra_filename
+        )
+        + extra_options
+        + ('--batch',)
         + (('--host', hostname) if hostname else ())
         + (('--port', str(port)) if port else ())
         + (('--protocol', 'tcp') if hostname or port else ())
         + (('--user', username) if username else ())
         + (('--ssl',) if tls is True else ())
         + (('--skip-ssl',) if tls is False else ())
     )
-    environment = dict(os.environ, **({'MYSQL_PWD': password} if password else {}))
+    environment = dict(os.environ)

     logger.debug(f"Restoring MySQL database {data_source['name']}{dry_run_label}")
     if dry_run:
@@ -159,6 +159,7 @@ def dump_data_sources(

         for database_name in dump_database_names:
             dump_format = database.get('format', None if database_name == 'all' else 'custom')
+            compression = database.get('compression')
             default_dump_command = 'pg_dumpall' if database_name == 'all' else 'pg_dump'
             dump_command = tuple(
                 shlex.quote(part)
@@ -199,6 +200,7 @@ def dump_data_sources(
                 )
                 + (('--no-owner',) if database.get('no_owner', False) else ())
                 + (('--format', shlex.quote(dump_format)) if dump_format else ())
+                + (('--compress', shlex.quote(str(compression))) if compression is not None else ())
                 + (('--file', shlex.quote(dump_filename)) if dump_format == 'directory' else ())
                 + (
                     tuple(shlex.quote(option) for option in database['options'].split(' '))
@@ -1,3 +1,4 @@
+import os
 import pathlib

 IS_A_HOOK = False
@@ -11,6 +12,10 @@ def get_contained_patterns(parent_directory, candidate_patterns):
     paths, but there's a parent directory (logical volume, dataset, subvolume, etc.) at /var, then
     /var is what we want to snapshot.

+    If a parent directory and a candidate pattern are on different devices, skip the pattern. That's
+    because any snapshot of a parent directory won't actually include "contained" directories if
+    they reside on separate devices.
+
     For this function to work, a candidate pattern path can't have any globs or other non-literal
     characters in the initial portion of the path that matches the parent directory. For instance, a
     parent directory of /var would match a candidate pattern path of /var/log/*/data, but not a
@@ -27,6 +32,8 @@ def get_contained_patterns(parent_directory, candidate_patterns):
     if not candidate_patterns:
         return ()

+    parent_device = os.stat(parent_directory).st_dev if os.path.exists(parent_directory) else None
+
     contained_patterns = tuple(
         candidate
         for candidate in candidate_patterns
@@ -35,6 +42,7 @@ def get_contained_patterns(parent_directory, candidate_patterns):
                 pathlib.PurePath(parent_directory) == candidate_path
                 or pathlib.PurePath(parent_directory) in candidate_path.parents
             )
+            if candidate.device == parent_device
     )
     candidate_patterns -= set(contained_patterns)
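The device comparison in this hunk relies on `st_dev`, which identifies the filesystem a path lives on. A quick illustration:

```python
import os

# Paths on the same filesystem report the same st_dev; a mount point boundary
# (say, a separately mounted /var/log volume) would make these differ.
print(os.stat('/var').st_dev == os.stat('/var/log').st_dev)
```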
@@ -71,13 +71,16 @@ def dump_data_sources(
             )
             continue

-        command = (
-            'sqlite3',
+        sqlite_command = tuple(
+            shlex.quote(part) for part in shlex.split(database.get('sqlite_command') or 'sqlite3')
+        )
+        command = sqlite_command + (
             shlex.quote(database_path),
             '.dump',
             '>',
             shlex.quote(dump_filename),
         )

         logger.debug(
             f'Dumping SQLite database at {database_path} to {dump_filename}{dry_run_label}'
         )
@@ -160,11 +163,11 @@ def restore_data_source_dump(
     except FileNotFoundError:  # pragma: no cover
         pass

-    restore_command = (
-        'sqlite3',
-        database_path,
-    )
+    sqlite_restore_command = tuple(
+        shlex.quote(part)
+        for part in shlex.split(data_source.get('sqlite_restore_command') or 'sqlite3')
+    )
+
+    restore_command = sqlite_restore_command + (shlex.quote(database_path),)
     # Don't give Borg local path so as to error on warnings, as "borg extract" only gives a warning
     # if the restore paths don't exist in the archive.
     execute_command_with_processes(
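The `shlex.split()`-then-`shlex.quote()` dance above is how a configured command string becomes a safe argument tuple. For example, with a hypothetical custom command setting:

```python
import shlex

# Tokenize a configured command string, then re-quote each token, per the diff above.
sqlite_command = tuple(
    shlex.quote(part) for part in shlex.split('sqlite3 -readonly')
)
assert sqlite_command == ('sqlite3', '-readonly')
```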
@@ -134,7 +134,16 @@ def get_all_dataset_mount_points(zfs_command):
         )
     )

-    return tuple(sorted(line.rstrip() for line in list_output.splitlines()))
+    return tuple(
+        sorted(
+            {
+                mount_point
+                for line in list_output.splitlines()
+                for mount_point in (line.rstrip(),)
+                if mount_point != 'none'
+            }
+        )
+    )


 def snapshot_dataset(zfs_command, full_snapshot_name):  # pragma: no cover
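The rewritten comprehension deduplicates mount points and drops datasets whose mount point is `none`. For instance:

```python
list_output = 'none\n/data\n/data\n/srv\n'

mount_points = tuple(
    sorted(
        {
            mount_point
            for line in list_output.splitlines()
            for mount_point in (line.rstrip(),)
            if mount_point != 'none'
        }
    )
)
assert mount_points == ('/data', '/srv')
```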
@@ -411,7 +420,7 @@ def remove_data_source_dumps(hook_config, config, borgmatic_runtime_directory, dry_run):
             continue

         if not dry_run:
-            shutil.rmtree(snapshots_directory)
+            shutil.rmtree(snapshot_mount_path, ignore_errors=True)

     # Destroy snapshots.
     full_snapshot_names = get_all_snapshots(zfs_command)
@@ -3,6 +3,7 @@ import importlib
 import logging
 import pkgutil

+import borgmatic.hooks.command
 import borgmatic.hooks.credential
 import borgmatic.hooks.data_source
 import borgmatic.hooks.monitoring
@@ -28,7 +28,7 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run):
     filename in any log entries. If this is a dry run, then don't actually ping anything.
     '''
     if state not in MONITOR_STATE_TO_CRONHUB:
-        logger.debug(f'Ignoring unsupported monitoring {state.name.lower()} in Cronhub hook')
+        logger.debug(f'Ignoring unsupported monitoring state {state.name.lower()} in Cronhub hook')
         return

     dry_run_label = ' (dry run; not actually pinging)' if dry_run else ''
@@ -28,7 +28,7 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run):
     filename in any log entries. If this is a dry run, then don't actually ping anything.
     '''
     if state not in MONITOR_STATE_TO_CRONITOR:
-        logger.debug(f'Ignoring unsupported monitoring {state.name.lower()} in Cronitor hook')
+        logger.debug(f'Ignoring unsupported monitoring state {state.name.lower()} in Cronitor hook')
         return

     dry_run_label = ' (dry run; not actually pinging)' if dry_run else ''
@@ -64,7 +64,7 @@ def get_handler(identifier):
 def format_buffered_logs_for_payload(identifier):
     '''
     Get the handler previously added to the root logger, and slurp buffered logs out of it to
-    send to Healthchecks.
+    send to the monitoring service.
     '''
     try:
         buffering_handler = get_handler(identifier)
@@ -6,20 +6,36 @@ import platform

 import requests

 import borgmatic.hooks.credential.parse
+import borgmatic.hooks.monitoring.logs
 from borgmatic.hooks.monitoring import monitor

 logger = logging.getLogger(__name__)

 EVENTS_API_URL = 'https://events.pagerduty.com/v2/enqueue'
+DEFAULT_LOGS_PAYLOAD_LIMIT_BYTES = 10000
+HANDLER_IDENTIFIER = 'pagerduty'


-def initialize_monitor(
-    integration_key, config, config_filename, monitoring_log_level, dry_run
-):  # pragma: no cover
+def initialize_monitor(hook_config, config, config_filename, monitoring_log_level, dry_run):
     '''
-    No initialization is necessary for this monitor.
+    Add a handler to the root logger that stores in memory the most recent logs emitted. That way,
+    we can send them all to PagerDuty upon a failure state. But skip this if the "send_logs" option
+    is false.
     '''
-    pass
+    if hook_config.get('send_logs') is False:
+        return
+
+    ping_body_limit = max(
+        DEFAULT_LOGS_PAYLOAD_LIMIT_BYTES
+        - len(borgmatic.hooks.monitoring.logs.PAYLOAD_TRUNCATION_INDICATOR),
+        0,
+    )
+
+    borgmatic.hooks.monitoring.logs.add_handler(
+        borgmatic.hooks.monitoring.logs.Forgetful_buffering_handler(
+            HANDLER_IDENTIFIER, ping_body_limit, monitoring_log_level
+        )
+    )


 def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run):
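`Forgetful_buffering_handler` is defined elsewhere in `borgmatic.hooks.monitoring.logs`, but the concept is roughly this sketch: a logging handler that buffers recent records and evicts the oldest once a byte budget is exceeded:

```python
import logging

class ForgetfulBufferSketch(logging.Handler):
    # Rough illustration only; the real handler's behavior may differ.
    def __init__(self, byte_limit):
        super().__init__()
        self.byte_limit = byte_limit
        self.buffer = []

    def emit(self, record):
        self.buffer.append(self.format(record) + '\n')

        # Forget the oldest entries once the buffer exceeds its byte budget.
        while sum(len(entry) for entry in self.buffer) > self.byte_limit:
            self.buffer.pop(0)
```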
@@ -30,16 +46,13 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run):
     '''
     if state != monitor.State.FAIL:
         logger.debug(
-            f'Ignoring unsupported monitoring {state.name.lower()} in PagerDuty hook',
+            f'Ignoring unsupported monitoring state {state.name.lower()} in PagerDuty hook',
         )
         return

     dry_run_label = ' (dry run; not actually sending)' if dry_run else ''
     logger.info(f'Sending failure event to PagerDuty {dry_run_label}')

-    if dry_run:
-        return
-
     try:
         integration_key = borgmatic.hooks.credential.parse.resolve_credential(
             hook_config.get('integration_key'), config
@@ -48,6 +61,10 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run):
         logger.warning(f'PagerDuty credential error: {error}')
         return

+    logs_payload = borgmatic.hooks.monitoring.logs.format_buffered_logs_for_payload(
+        HANDLER_IDENTIFIER
+    )
+
     hostname = platform.node()
     local_timestamp = datetime.datetime.now(datetime.timezone.utc).astimezone().isoformat()
     payload = json.dumps(
@@ -66,11 +83,14 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run):
                     'hostname': hostname,
                     'configuration filename': config_filename,
                     'server time': local_timestamp,
+                    'logs': logs_payload,
                 },
             },
         }
     )
     logger.debug(f'Using PagerDuty payload: {payload}')

+    if dry_run:
+        return
+
     logging.getLogger('urllib3').setLevel(logging.ERROR)
     try:
@@ -83,6 +103,7 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run):

 def destroy_monitor(ping_url_or_uuid, config, monitoring_log_level, dry_run):  # pragma: no cover
     '''
-    No destruction is necessary for this monitor.
+    Remove the monitor handler that was added to the root logger. This prevents the handler from
+    getting reused by other instances of this monitor.
     '''
-    pass
+    borgmatic.hooks.monitoring.logs.remove_handler(HANDLER_IDENTIFIER)
@@ -37,7 +37,7 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run):
     logging.getLogger('urllib3').setLevel(logging.ERROR)

     try:
-        response = requests.get(f'{push_url}?{query}')
+        response = requests.get(f'{push_url}?{query}', verify=hook_config.get('verify_tls', True))
         if not response.ok:
             response.raise_for_status()
     except requests.exceptions.RequestException as error:
@@ -16,6 +16,42 @@ def initialize_monitor(
     pass


+def send_zabbix_request(server, headers, data):
+    '''
+    Given a Zabbix server URL, HTTP headers as a dict, and valid Zabbix JSON payload data as a dict,
+    send a request to the Zabbix server via API.
+
+    Return the response "result" value or None.
+    '''
+    logging.getLogger('urllib3').setLevel(logging.ERROR)
+
+    logger.debug(f'Sending a "{data["method"]}" request to the Zabbix server')
+
+    try:
+        response = requests.post(server, headers=headers, json=data)
+
+        if not response.ok:
+            response.raise_for_status()
+    except requests.exceptions.RequestException as error:
+        logger.warning(f'Zabbix error: {error}')
+
+        return None
+
+    try:
+        result = response.json().get('result')
+        error_message = result['data'][0]['error']
+    except requests.exceptions.JSONDecodeError:
+        logger.warning('Zabbix error: Cannot parse API response')
+
+        return None
+    except (TypeError, KeyError, IndexError):
+        return result
+    else:
+        logger.warning(f'Zabbix error: {error_message}')
+
+        return None
+
+
 def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run):
     '''
     Update the configured Zabbix item using either the itemid, or a host and key.
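Hypothetical usage of the new helper, showing the login-then-push flow it enables (the server URL and credentials here are placeholders):

```python
headers = {'Content-Type': 'application/json-rpc'}
server = 'https://zabbix.example.com/api_jsonrpc.php'  # placeholder

login_data = {
    'jsonrpc': '2.0',
    'method': 'user.login',
    'params': {'username': 'monitor', 'password': 'secret'},
    'id': 1,
}
token = send_zabbix_request(server, headers, login_data)

if token:
    # Subsequent requests authenticate with the session token.
    headers['Authorization'] = f'Bearer {token}'
```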
@@ -48,6 +84,7 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run):
         )
     except ValueError as error:
         logger.warning(f'Zabbix credential error: {error}')
+
         return

     server = hook_config.get('server')
@@ -57,13 +94,9 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run):
     value = state_config.get('value')
     headers = {'Content-Type': 'application/json-rpc'}

-    logger.info(f'Updating Zabbix{dry_run_label}')
+    logger.info(f'Pinging Zabbix{dry_run_label}')
     logger.debug(f'Using Zabbix URL: {server}')

     if server is None:
         logger.warning('Server missing for Zabbix')
         return

     # Determine the Zabbix method used to store the value: itemid or host/key
     if itemid is not None:
         logger.info(f'Updating {itemid} on Zabbix')
@@ -74,8 +107,8 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run):
             'id': 1,
         }

-    elif (host and key) is not None:
-        logger.info(f'Updating Host:{host} and Key:{key} on Zabbix')
+    elif host is not None and key is not None:
+        logger.info(f'Updating Host: "{host}" and Key: "{key}" on Zabbix')
         data = {
             'jsonrpc': '2.0',
             'method': 'history.push',
@@ -85,58 +118,63 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run):

     elif host is not None:
         logger.warning('Key missing for Zabbix')
+
         return

     elif key is not None:
         logger.warning('Host missing for Zabbix')
+
         return

     else:
         logger.warning('No Zabbix itemid or host/key provided')
+
         return

     # Determine the authentication method: API key or username/password
     if api_key is not None:
         logger.info('Using API key auth for Zabbix')
-        headers['Authorization'] = 'Bearer ' + api_key
-
-    elif (username and password) is not None:
-        logger.info('Using user/pass auth with user {username} for Zabbix')
-        auth_data = {
+        headers['Authorization'] = f'Bearer {api_key}'
+    elif username is not None and password is not None:
+        logger.info(f'Using user/pass auth with user {username} for Zabbix')
+        login_data = {
             'jsonrpc': '2.0',
             'method': 'user.login',
             'params': {'username': username, 'password': password},
             'id': 1,
         }

-        if not dry_run:
-            logging.getLogger('urllib3').setLevel(logging.ERROR)
-            try:
-                response = requests.post(server, headers=headers, json=auth_data)
-                data['auth'] = response.json().get('result')
-                if not response.ok:
-                    response.raise_for_status()
-            except requests.exceptions.RequestException as error:
-                logger.warning(f'Zabbix error: {error}')
+        result = send_zabbix_request(server, headers, login_data)
+
+        if not result:
+            return
+
+        headers['Authorization'] = f'Bearer {result}'
     elif username is not None:
         logger.warning('Password missing for Zabbix authentication')
+
         return
     elif password is not None:
         logger.warning('Username missing for Zabbix authentication')
+
         return
     else:
         logger.warning('Authentication data missing for Zabbix')
+
         return

     if not dry_run:
-        logging.getLogger('urllib3').setLevel(logging.ERROR)
-        try:
-            response = requests.post(server, headers=headers, json=data)
-            if not response.ok:
-                response.raise_for_status()
-        except requests.exceptions.RequestException as error:
-            logger.warning(f'Zabbix error: {error}')
+        send_zabbix_request(server, headers, data)
+
+    if username is not None and password is not None:
+        logout_data = {
+            'jsonrpc': '2.0',
+            'method': 'user.logout',
+            'params': [],
+            'id': 1,
+        }
+
+        if not dry_run:
+            send_zabbix_request(server, headers, logout_data)


 def destroy_monitor(ping_url_or_uuid, config, monitoring_log_level, dry_run):  # pragma: no cover
@@ -29,12 +29,13 @@ def interactive_console():
     return sys.stderr.isatty() and os.environ.get('TERM') != 'dumb'


-def should_do_markup(no_color, configs):
+def should_do_markup(configs, json_enabled):
     '''
-    Given the value of the command-line no-color argument, and a dict of configuration filename to
-    corresponding parsed configuration, determine if we should enable color marking up.
+    Given a dict of configuration filename to corresponding parsed configuration (which already have
+    any command-line overrides applied) and whether json is enabled, determine if we should enable
+    color marking up.
     '''
-    if no_color:
+    if json_enabled:
         return False

     if any(config.get('color', True) is False for config in configs.values()):
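In effect, JSON output now disables colored markup unconditionally, before any per-configuration `color` option is consulted. A hypothetical call:

```python
# Even with color enabled in configuration, JSON output wins.
configs = {'/etc/borgmatic/config.yaml': {'color': True}}
assert should_do_markup(configs, json_enabled=True) is False
```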
@@ -256,7 +257,7 @@ class Log_prefix:
         self.original_prefix = get_log_prefix()
         set_log_prefix(self.prefix)

-    def __exit__(self, exception, value, traceback):
+    def __exit__(self, exception_type, exception, traceback):
         '''
         Restore any original prefix.
         '''
@@ -24,6 +24,9 @@ def handle_signal(signal_number, frame):
         logger.critical('Exiting due to TERM signal')
         sys.exit(EXIT_CODE_FROM_SIGNAL + signal.SIGTERM)
     elif signal_number == signal.SIGINT:
+        # Borg doesn't always exit on a SIGINT, so give it a little encouragement.
+        os.killpg(os.getpgrp(), signal.SIGTERM)
+
         raise KeyboardInterrupt()
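For context, `os.killpg(os.getpgrp(), ...)` signals borgmatic's entire process group, so child processes such as Borg receive the TERM as well. A minimal sketch of the pattern (handler registration assumed to happen elsewhere in borgmatic):

```python
import os
import signal

def handle_sigint_sketch(signal_number, frame):
    # Nudge the whole process group, including any child processes, then bail.
    os.killpg(os.getpgrp(), signal.SIGTERM)
    raise KeyboardInterrupt()

signal.signal(signal.SIGINT, handle_sigint_sketch)
```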
|
||||
@@ -4,7 +4,7 @@ COPY . /app
|
||||
RUN apk add --no-cache py3-pip py3-ruamel.yaml py3-ruamel.yaml.clib
|
||||
RUN pip install --break-system-packages --no-cache /app && borgmatic config generate && chmod +r /etc/borgmatic/config.yaml
|
||||
RUN borgmatic --help > /command-line.txt \
|
||||
&& for action in repo-create transfer create prune compact check delete extract config "config bootstrap" "config generate" "config validate" export-tar mount umount repo-delete restore repo-list list repo-info info break-lock "key export" "key change-passphrase" borg; do \
|
||||
&& for action in repo-create transfer create prune compact check delete extract config "config bootstrap" "config generate" "config validate" export-tar mount umount repo-delete restore repo-list list repo-info info break-lock "key export" "key import" "key change-passphrase" recreate borg; do \
|
||||
echo -e "\n--------------------------------------------------------------------------------\n" >> /command-line.txt \
|
||||
&& borgmatic $action --help >> /command-line.txt; done
|
||||
RUN /app/docs/fetch-contributors >> /contributors.html
|
||||
|
||||
@@ -165,6 +165,7 @@ ul {
 }
 li {
     padding: .25em 0;
+    line-height: 1.5;
 }
 li ul {
     list-style-type: disc;
@@ -26,8 +26,7 @@ def list_merged_pulls(url):


 def list_contributing_issues(url):
-    # labels = bug, design finalized, etc.
-    response = requests.get(f'{url}?labels=19,20,22,23,32,52,53,54', headers={'Accept': 'application/json', 'Content-Type': 'application/json'})
+    response = requests.get(url, headers={'Accept': 'application/json', 'Content-Type': 'application/json'})

     if not response.ok:
         response.raise_for_status()
@@ -39,7 +38,7 @@ PULLS_API_ENDPOINT_URLS = (
     'https://projects.torsion.org/api/v1/repos/borgmatic-collective/borgmatic/pulls',
     'https://api.github.com/repos/borgmatic-collective/borgmatic/pulls',
 )
-ISSUES_API_ENDPOINT_URL = 'https://projects.torsion.org/api/v1/repos/borgmatic-collective/borgmatic/issues'
+ISSUES_API_ENDPOINT_URL = 'https://projects.torsion.org/api/v1/repos/borgmatic-collective/borgmatic/issues?state=all'
 RECENT_CONTRIBUTORS_CUTOFF_DAYS = 365
@@ -7,18 +7,112 @@ eleventyNavigation:
 ---
 ## Preparation and cleanup hooks

-If you find yourself performing preparation tasks before your backup runs, or
-cleanup work afterwards, borgmatic hooks may be of interest. Hooks are shell
-commands that borgmatic executes for you at various points as it runs, and
-they're configured in the `hooks` section of your configuration file. But if
-you're looking to backup a database, it's probably easier to use the [database
-backup
-feature](https://torsion.org/borgmatic/docs/how-to/backup-your-databases/)
-instead.
-
-You can specify `before_backup` hooks to perform preparation steps before
+If you find yourself performing preparation tasks before your backup runs or
+doing cleanup work afterwards, borgmatic command hooks may be of interest. These
+are custom shell commands you can configure borgmatic to execute at various
+points as it runs.
+
+(But if you're looking to backup a database, it's probably easier to use the
+[database backup
+feature](https://torsion.org/borgmatic/docs/how-to/backup-your-databases/)
+instead.)
+
+<span class="minilink minilink-addedin">New in version 2.0.0 (**not yet
+released**)</span> Command hooks are now configured via a list of `commands:` in
+your borgmatic configuration file. For example:
+
+```yaml
+commands:
+    - before: action
+      when: [create]
+      run:
+          - echo "Before create!"
+    - after: action
+      when:
+          - create
+          - prune
+      run:
+          - echo "After create or prune!"
+    - after: error
+      run:
+          - echo "Something went wrong!"
+```
+
+If you're coming from an older version of borgmatic, there is tooling to help
+you [upgrade your
+configuration](https://torsion.org/borgmatic/docs/how-to/upgrade/#upgrading-your-configuration)
+to this new command hook format.
+
+Note that if a `run:` command contains a special YAML character such as a colon,
+you may need to quote the entire string (or use a [multiline
+string](https://yaml-multiline.info/)) to avoid an error:
+
+```yaml
+commands:
+    - before: action
+      when: [create]
+      run:
+          - "echo Backup: start"
+```
+
+Each command in the `commands:` list has the following options:
+
+ * `before` or `after`: Name for the point in borgmatic's execution that the commands should be run before or after, one of:
+    * `action` runs before each action for each repository. This replaces the deprecated `before_create`, `after_prune`, etc.
+    * `repository` runs before or after all actions for each repository. This replaces the deprecated `before_actions` and `after_actions`.
+    * `configuration` runs before or after all actions and repositories in the current configuration file.
+    * `everything` runs before or after all configuration files. Errors here do not trigger `error` hooks or the `fail` state in monitoring hooks. This replaces the deprecated `before_everything` and `after_everything`.
+    * `error` runs after an error occurs—and it's only available for `after`. This replaces the deprecated `on_error` hook.
+ * `when`: Only trigger the hook when borgmatic is run with particular actions (`create`, `prune`, etc.) listed here. Defaults to running for all actions.
+ * `run`: List of one or more shell commands or scripts to run when this command hook is triggered.
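For orientation (not part of the diff itself), here's how those options might combine in a minimal configuration; the script path is hypothetical:

```yaml
commands:
    - before: everything
      run:
          - /usr/local/bin/notify-backup-start.sh
```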
+An `after` command hook runs even if an error occurs in the corresponding
+`before` hook or between those two hooks. This allows you to perform cleanup
+steps that correspond to `before` preparation commands—even when something goes
+wrong. This is a departure from the way that the deprecated `after_*` hooks
+worked in borgmatic prior to version 2.0.0.
+
+Additionally, when command hooks run, they respect the `working_directory`
+option if it is configured, meaning that the hook commands are run in that
+directory.
+
+
+### Order of execution
+
+Here's a way of visualizing how all of these command hooks slot into borgmatic's
+execution.
+
+Let's say you've got a borgmatic configuration file with a configured
+repository. And suppose you configure several command hooks and then run
+borgmatic for the `create` and `prune` actions. Here's the order of execution:
+
+ * Run `before: everything` hooks (from all configuration files).
+ * Run `before: configuration` hooks (from the first configuration file).
+ * Run `before: repository` hooks (for the first repository).
+ * Run `before: action` hooks for `create`.
+ * Actually run the `create` action (e.g. `borg create`).
+ * Run `after: action` hooks for `create`.
+ * Run `before: action` hooks for `prune`.
+ * Actually run the `prune` action (e.g. `borg prune`).
+ * Run `after: action` hooks for `prune`.
+ * Run `after: repository` hooks (for the first repository).
+ * Run `after: configuration` hooks (from the first configuration file).
+ * Run `after: everything` hooks (from all configuration files).
+
+This same order of execution extends to multiple repositories and/or
+configuration files.
+
+
+### Deprecated command hooks
+
+<span class="minilink minilink-addedin">Prior to version 2.0.0</span> The
+command hooks worked a little differently. In these older versions of borgmatic,
+you can specify `before_backup` hooks to perform preparation steps before
 running backups and specify `after_backup` hooks to perform cleanup steps
-afterwards. Here's an example:
+afterwards. These deprecated command hooks still work in version 2.0.0+,
+although see below about a few semantic differences starting in that version.
+
+Here's an example of these deprecated hooks:

 ```yaml
 before_backup:
@@ -43,6 +137,15 @@ instance, `before_prune` runs before a `prune` action for a repository, while
 <span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
 these options in the `hooks:` section of your configuration.

+<span class="minilink minilink-addedin">New in version 2.0.0</span> An `after_*`
+command hook runs even if an error occurs in the corresponding `before_*` hook
+or between those two hooks. This allows you to perform cleanup steps that
+correspond to `before_*` preparation commands—even when something goes wrong.
+
+<span class="minilink minilink-addedin">New in version 2.0.0</span> When command
+hooks run, they respect the `working_directory` option if it is configured,
+meaning that the hook commands are run in that directory.
+
 <span class="minilink minilink-addedin">New in version 1.7.0</span> The
 `before_actions` and `after_actions` hooks run before/after all the actions
 (like `create`, `prune`, etc.) for each repository. These hooks are a good
@@ -57,49 +160,13 @@ but not if an error occurs in a previous hook or in the backups themselves.
 (Prior to borgmatic 1.6.0, these hooks instead ran once per configuration file
 rather than once per repository.)


 ## Variable interpolation

 The before and after action hooks support interpolating particular runtime
 variables into the hook command. Here's an example that assumes you provide a
 separate shell script:
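The diffed page ends here, before that example. For context, such an interpolated hook might look like the following sketch; the script name is hypothetical, while `{repository}` is one of the runtime variables borgmatic documents:

```yaml
before_backup:
    - record-backup-start.sh {repository}
```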